Appearance-Based Gaze Estimation for ASD Diagnosis

Jing Li, Zejin Chen, Yihao Zhong, Hak-Keung Lam, Junxia Han, Xiaoli Li, Honghai Liu

Research output: Contribution to journal › Article › peer-review



Biomarkers, such as magnetic resonance imaging (MRI) and electroencephalography (EEG), have been used to help diagnose autism spectrum disorder (ASD). However, such diagnosis requires specialized medical equipment in a hospital or laboratory. To diagnose ASD in a more effective and convenient way, in this article, we propose an appearance-based gaze estimation algorithm, AttentionGazeNet, to accurately estimate the subject's 3-D gaze from a raw video. The experimental results show its competitive performance on the MPIIGaze dataset and improvements of 14.7% for static head pose and 46.7% for moving head pose on the EYEDIAP dataset compared with state-of-the-art gaze estimation algorithms. After projecting the obtained gaze vector onto the screen coordinate system, we apply an accumulated histogram to take into account both the spatial and temporal information of the estimated gaze-point and head-pose sequences. Finally, classification is conducted on our self-collected autistic children video dataset (ACVD), which contains 405 videos from 135 different ASD children, 135 typically developing (TD) children in a primary school, and 135 TD children in a kindergarten. The classification results on ACVD show the effectiveness and efficiency of our proposed method, with 94.8% accuracy, 91.1% sensitivity, and 96.7% specificity for ASD.
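The screen-projection and accumulated-histogram steps described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the plane geometry (screen at z = 0 in camera coordinates), the bin count, and all function names are assumptions.

```python
# Hypothetical sketch of the pipeline's feature step: intersect a 3-D gaze
# ray with a screen plane, then accumulate the 2-D gaze points of a video
# sequence into a normalized spatial histogram. Geometry and parameters are
# illustrative assumptions, not the published method.

def project_to_screen(eye_pos, gaze_dir):
    """Intersect the ray eye_pos + t * gaze_dir with the screen plane z = 0
    (the screen is assumed to lie in the camera's x-y plane)."""
    ex, ey, ez = eye_pos
    gx, gy, gz = gaze_dir
    if gz == 0:
        return None  # ray parallel to the screen plane
    t = -ez / gz
    if t <= 0:
        return None  # gaze points away from the screen
    return (ex + t * gx, ey + t * gy)

def accumulated_histogram(points, width, height, bins=8):
    """Bin a sequence of screen-space gaze points into a bins x bins grid
    and normalize so the feature sums to 1 (invariant to sequence length)."""
    hist = [0.0] * (bins * bins)
    n = 0
    for p in points:
        if p is None:
            continue
        x, y = p
        if not (0 <= x < width and 0 <= y < height):
            continue  # off-screen fixation; skip
        col = int(x * bins / width)
        row = int(y * bins / height)
        hist[row * bins + col] += 1.0
        n += 1
    if n:
        hist = [v / n for v in hist]
    return hist

# Example: a short synthetic gaze sequence on a 400 x 300 mm screen,
# with the eye 600 mm in front of it.
rays = [((0.0, 0.0, 600.0), (0.1, 0.05, -1.0)),
        ((0.0, 0.0, 600.0), (0.1, 0.05, -1.0)),
        ((0.0, 0.0, 600.0), (0.4, 0.3, -1.0))]
points = [project_to_screen(e, g) for e, g in rays]
feature = accumulated_histogram(points, width=400, height=300)
```

The normalized histogram serves as a fixed-length feature vector per video, so sequences of different lengths can feed the same downstream classifier.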
Original language: English
Pages (from-to): 6504-6517
Number of pages: 14
Journal: IEEE Transactions on Cybernetics
Issue number: 7
Publication status: Published - 1 Jul 2022


