Omni-Directional Basis Function Network For Sensory-Sensory And Sensory-Motor Transformations

Student thesis: Doctoral Thesis › Doctor of Philosophy


The sensory (e.g., vision) and motor (e.g., head or arm) systems, through their "cause-effect" relationships, allow biological systems to adopt specific behaviours (e.g., eye-head gaze shifts, visually guided arm reaches), which are equally important for humanoid robots. Both sensory information and the motor/action space are non-linear in nature, so realising sensory-motor transformations is a difficult and complex task. The purpose of the research presented in this thesis was to realise such complex, non-linear sensory-motor transformations in a biologically plausible manner for robotics. An omni-directional basis function network is proposed in this thesis for sensory-sensory and sensory-motor transformations. This non-linear sensory-motor transformation from one frame of reference to another was achieved without using any hard-coded mathematical transformations. The proposed basis function model also addresses the common problems of basis-function-type networks: scalability, direction of transformation, and handling multiple stimuli. Visual sensory information about the target, coupled with proprioceptive information about eye position, was transformed into an intrinsic representation with reference to the head (i.e., a head-centred representation). This head-centred representation was then used to perform eye saccade and vergence movements. The same network was used to perform sensory-sensory transformations in one direction and sensory-motor transformations in the reverse direction. The network also showed the ability to perform double-step saccades using the head-centred map. The learnt head-centred representation of visual space was further transformed into an intrinsic representation with reference to the body (i.e., a body-centred representation) by incorporating proprioceptive information about head movement. This learnt body-centred representation enabled the network to perform coordinated eyes-head gaze shifts.
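The core idea of such a transformation can be illustrated with a minimal sketch that is not the thesis's model: a Gaussian radial basis function network that learns, from examples alone, the sensory-sensory mapping "head-centred position = retinal target position + eye position" in one dimension, with no hard-coded coordinate transformation. All names, ranges, and parameters below are illustrative assumptions; the readout is fit by least squares as a stand-in for whatever learning rule the thesis uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 1-D retinal target positions and eye-in-head positions (degrees).
r = rng.uniform(-30, 30, 500)   # retinal (visual) target position
e = rng.uniform(-30, 30, 500)   # eye position (proprioception)
h = r + e                       # head-centred target position (teaching signal)

# Basis layer: Gaussian units tiling the joint (retinal, eye) input space.
centres = np.array([(cr, ce)
                    for cr in np.linspace(-30, 30, 9)
                    for ce in np.linspace(-30, 30, 9)])
sigma = 8.0

def basis(r, e):
    """Activation of each Gaussian basis unit for inputs (r, e)."""
    d2 = ((r[:, None] - centres[None, :, 0]) ** 2
          + (e[:, None] - centres[None, :, 1]) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Linear readout trained by least squares on the basis activations.
Phi = basis(r, e)
w, *_ = np.linalg.lstsq(Phi, h, rcond=None)

# Test on unseen inputs drawn from the interior of the trained range.
r_t = rng.uniform(-25, 25, 100)
e_t = rng.uniform(-25, 25, 100)
h_pred = basis(r_t, e_t) @ w
print(np.max(np.abs(h_pred - (r_t + e_t))))  # small approximation error
```

Because the hidden layer represents the joint input space, the same basis activations can in principle support readouts in either direction, which is the property the abstract's "direction of transformation" claim relies on.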
The eye-head system is inherently redundant for gaze shifts. The proposed eyes-head coordination network resolved this redundancy online, without imposing any constraints or using any kinematic analysis. The learnt body-centred representation of visual space was then used to learn the correspondence between the body-centred representation and the arm joint angles, in order to perform coordinated eyes-head-arm movements. The proposed eyes-head-arm coordination network had the ability to perform the direct visuo-motor transformation, executing a coordinated eyes-head gaze shift together with a ballistic arm movement to reach the target of interest. Furthermore, the eyes-head-arm coordination network also showed the ability to perform the inverse visuo-motor transformation, shifting the gaze to view the hand from a random initial eye, head and arm pose. The proposed model also showed the ability to simultaneously execute a gaze shift towards one target of interest and a memory-based reach to a second. The trained basis function network was validated for all of these sensory-sensory and sensory-motor transformations by testing on a simulated humanoid robot (iCub).
Date of Award: 2016
Original language: English
Awarding Institution
  • King's College London
Supervisors: Hak-Keung Lam (Supervisor) & Michael Spratling (Supervisor)