Learning viewpoint invariant perceptual representations from cluttered images

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)
142 Downloads (Pure)

Abstract

In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
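The "standard method" the abstract refers to is temporal (trace-based) Hebbian learning, in which a unit's weight update depends on a running average of its recent activity rather than its instantaneous response, so that successive views of the same transforming object become associated with the same unit. The sketch below illustrates only that standard trace rule, not the modification proposed in the paper (which the abstract does not spell out); all function names, parameters, and the normalization step are illustrative assumptions.

```python
import numpy as np

def trace_learning(image_sequence, n_outputs, eta=0.01, decay=0.8, seed=0):
    """Minimal sketch of a trace-based Hebbian rule for view invariance.

    image_sequence : array of shape (n_frames, n_inputs), successive views
                     of a single transforming object (assumed isolated).
    decay          : trace decay; larger values average activity over more frames.
    """
    rng = np.random.default_rng(seed)
    n_inputs = image_sequence.shape[1]
    W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))
    trace = np.zeros(n_outputs)

    for x in image_sequence:
        y = W @ x                                  # feedforward response to this view
        trace = decay * trace + (1 - decay) * y    # temporally smoothed activity
        W += eta * np.outer(trace, x)              # Hebbian update driven by the trace
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weight vectors bounded
    return W
```

Because the update is driven by the activity trace, the same output units are reinforced across consecutive frames of one object's transformation; as the abstract notes, this only works reliably when each object is presented in isolation, which is the limitation the paper addresses.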
Original language: English
Article number: N/A
Pages (from-to): 753-761
Number of pages: 9
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 27
Issue number: 5
Publication status: Published - May 2005
