
Facial expression recognition in dynamic sequences: An integrated approach

Research output: Contribution to journal › Article › peer-review

Standard

Facial expression recognition in dynamic sequences: An integrated approach. / Borgo, Rita; Fang, Hui; Mac Parthaláin, Neil; Aubrey, Andrew; Tam, Gary K.L.; Rosin, Paul; Grant, Philip W.; Marshall, David; Chen, Min.

In: PATTERN RECOGNITION, Vol. 47, No. 3, 03.2014, pp. 1271-1281.

Research output: Contribution to journal › Article › peer-review

Harvard

Borgo, R, Fang, H, Mac Parthaláin, N, Aubrey, A, Tam, GKL, Rosin, P, Grant, PW, Marshall, D & Chen, M 2014, 'Facial expression recognition in dynamic sequences: An integrated approach', PATTERN RECOGNITION, vol. 47, no. 3, pp. 1271-1281. https://doi.org/10.1016/j.patcog.2013.09.023

APA

Borgo, R., Fang, H., Mac Parthaláin, N., Aubrey, A., Tam, G. K. L., Rosin, P., Grant, P. W., Marshall, D., & Chen, M. (2014). Facial expression recognition in dynamic sequences: An integrated approach. PATTERN RECOGNITION, 47(3), 1271-1281. https://doi.org/10.1016/j.patcog.2013.09.023

Vancouver

Borgo R, Fang H, Mac Parthaláin N, Aubrey A, Tam GKL, Rosin P et al. Facial expression recognition in dynamic sequences: An integrated approach. PATTERN RECOGNITION. 2014 Mar;47(3):1271-1281. https://doi.org/10.1016/j.patcog.2013.09.023

Author

Borgo, Rita ; Fang, Hui ; Mac Parthaláin, Neil ; Aubrey, Andrew ; Tam, Gary K.L. ; Rosin, Paul ; Grant, Philip W. ; Marshall, David ; Chen, Min. / Facial expression recognition in dynamic sequences: An integrated approach. In: PATTERN RECOGNITION. 2014 ; Vol. 47, No. 3. pp. 1271-1281.

BibTeX

@article{16d628f6c9794abb98f5424e31cd6c8a,
title = "Facial expression recognition in dynamic sequences:: An integrated approach",
abstract = "Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Methods based on existing work are reliant on extracting information from video sequences and employ either some form of subjective thresholding of dynamic information or attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient as they require either additional subjective information, tedious manual work or fail to take advantage of the information contained in the dynamic signature from facial movements for the task of expression recognition. In this paper, a novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions. The experimental framework demonstrates that the proposed method outperforms static expression recognition systems in terms of recognition rate. The approach does not rely on action units (AUs), and therefore, eliminates errors which are otherwise propagated to the final result due to incorrect initial identification of AUs. The proposed framework explores a parametric space of over 300 dimensions and is tested with six state-of-the-art machine learning techniques. Such robust and extensive experimentation provides an important foundation for the assessment of the performance for future work. A further contribution of the paper is offered in the form of a user study. This was conducted in order to investigate the correlation between human cognitive systems and the proposed framework for the understanding of human emotion classification and the reliability of public databases.",
author = "Rita Borgo and Hui Fang and {Mac Parthal{\'a}in}, Neil and Aubrey Andrew and Tam, {Gary K.L.} and Paul Rosin and Grant, {Philip W.} and David Marshall and Min Chen",
year = "2014",
month = mar,
doi = "10.1016/j.patcog.2013.09.023",
language = "English",
volume = "47",
pages = "1271--1281",
journal = "PATTERN RECOGNITION",
issn = "0031-3203",
publisher = "Elsevier Limited",
number = "3",

}

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Facial expression recognition in dynamic sequences:

T2 - An integrated approach

AU - Borgo, Rita

AU - Fang, Hui

AU - Mac Parthaláin, Neil

AU - Aubrey, Andrew

AU - Tam, Gary K.L.

AU - Rosin, Paul

AU - Grant, Philip W.

AU - Marshall, David

AU - Chen, Min

PY - 2014/3

Y1 - 2014/3

N2 - Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Methods based on existing work are reliant on extracting information from video sequences and employ either some form of subjective thresholding of dynamic information or attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient as they require either additional subjective information, tedious manual work or fail to take advantage of the information contained in the dynamic signature from facial movements for the task of expression recognition. In this paper, a novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions. The experimental framework demonstrates that the proposed method outperforms static expression recognition systems in terms of recognition rate. The approach does not rely on action units (AUs), and therefore, eliminates errors which are otherwise propagated to the final result due to incorrect initial identification of AUs. The proposed framework explores a parametric space of over 300 dimensions and is tested with six state-of-the-art machine learning techniques. Such robust and extensive experimentation provides an important foundation for the assessment of the performance for future work. A further contribution of the paper is offered in the form of a user study. This was conducted in order to investigate the correlation between human cognitive systems and the proposed framework for the understanding of human emotion classification and the reliability of public databases.

AB - Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Methods based on existing work are reliant on extracting information from video sequences and employ either some form of subjective thresholding of dynamic information or attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient as they require either additional subjective information, tedious manual work or fail to take advantage of the information contained in the dynamic signature from facial movements for the task of expression recognition. In this paper, a novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions. The experimental framework demonstrates that the proposed method outperforms static expression recognition systems in terms of recognition rate. The approach does not rely on action units (AUs), and therefore, eliminates errors which are otherwise propagated to the final result due to incorrect initial identification of AUs. The proposed framework explores a parametric space of over 300 dimensions and is tested with six state-of-the-art machine learning techniques. Such robust and extensive experimentation provides an important foundation for the assessment of the performance for future work. A further contribution of the paper is offered in the form of a user study. This was conducted in order to investigate the correlation between human cognitive systems and the proposed framework for the understanding of human emotion classification and the reliability of public databases.

U2 - 10.1016/j.patcog.2013.09.023

DO - 10.1016/j.patcog.2013.09.023

M3 - Article

VL - 47

SP - 1271

EP - 1281

JO - PATTERN RECOGNITION

JF - PATTERN RECOGNITION

SN - 0031-3203

IS - 3

ER -
