Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability

Research output: Contribution to journal › Article

Standard

Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability. / Celiktutan, Oya; Gunes, Hatice.

In: IEEE Transactions on Affective Computing, 01.2017.

Research output: Contribution to journal › Article

Harvard

Celiktutan, O & Gunes, H 2017, 'Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability', IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2015.2513401

APA

Celiktutan, O., & Gunes, H. (2017). Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability. IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2015.2513401

Vancouver

Celiktutan O, Gunes H. Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability. IEEE Transactions on Affective Computing. 2017 Jan. https://doi.org/10.1109/TAFFC.2015.2513401

Author

Celiktutan, Oya ; Gunes, Hatice. / Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability. In: IEEE Transactions on Affective Computing. 2017.

BibTeX

@article{b10febf501554f1ab5bbb27cceeeec64,
title = "Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability",
abstract = "In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism , openness, attractiveness and likeability continuously in time and across varying situational contexts. Differently from the existing works, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into better understanding which impressions can be formed and predicted more dynamically, varying with situational context, and which ones appear to be more static and stable over time.",
author = "Oya Celiktutan and Hatice Gunes",
year = "2017",
month = "1",
doi = "10.1109/TAFFC.2015.2513401",
language = "English",
journal = "IEEE Transactions on Affective Computing",

}

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability

AU - Celiktutan, Oya

AU - Gunes, Hatice

PY - 2017/1

Y1 - 2017/1

N2 - In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Differently from the existing works, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into better understanding which impressions can be formed and predicted more dynamically, varying with situational context, and which ones appear to be more static and stable over time.

AB - In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Differently from the existing works, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into better understanding which impressions can be formed and predicted more dynamically, varying with situational context, and which ones appear to be more static and stable over time.

U2 - 10.1109/TAFFC.2015.2513401

DO - 10.1109/TAFFC.2015.2513401

M3 - Article

JO - IEEE Transactions on Affective Computing

JF - IEEE Transactions on Affective Computing

ER -
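
The abstract above reports that the best results come from regression models trained on visual cues and combined with audio cues at the decision level. Purely as an illustration of what decision-level fusion of two modality-specific regressors looks like, here is a minimal, hypothetical Python sketch; the ridge regressors, synthetic per-frame features, and equal fusion weights are assumptions for illustration and are not taken from the paper, which additionally models temporal relationships across frames rather than treating each instant independently.

# Hypothetical sketch: decision-level fusion of a visual-only and an
# audio-only regressor producing a per-frame continuous impression score.
# Model choice (ridge regression), synthetic data, and the equal-weight
# average are illustrative assumptions, not the paper's configuration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder per-frame features and continuous annotations (e.g., likeability).
X_visual = rng.normal(size=(500, 40))   # 500 frames, 40 visual features
X_audio = rng.normal(size=(500, 20))    # 500 frames, 20 audio features
y = rng.normal(size=500)                # continuous impression annotation

# Train one regressor per modality.
visual_model = Ridge(alpha=1.0).fit(X_visual, y)
audio_model = Ridge(alpha=1.0).fit(X_audio, y)

# Decision-level fusion: combine the two prediction streams frame by frame.
y_visual = visual_model.predict(X_visual)
y_audio = audio_model.predict(X_audio)
y_fused = 0.5 * y_visual + 0.5 * y_audio  # assumed equal weights

print(y_fused[:5])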
