New technique to examine how the brain categorizes images

Researchers have pioneered a new image modulation technique known as semantic wavelet-induced frequency-tagging (SWIFT) to further test how images are processed in the brain.


Despite the obvious differences between a Chihuahua and a Doberman, the human brain effortlessly categorizes both as dogs, a feat that is thus far beyond the abilities of artificial intelligence.


Previous research has established that the brain can recognize and categorize objects extremely rapidly; however, the way this process occurs is still largely unknown. Researchers from Monash University have pioneered a new image modulation technique, semantic wavelet-induced frequency-tagging (SWIFT), to further test how images are processed.


This work, by Associate Professor Naotsugu Tsuchiya and Dr Roger Koenig-Robert from the School of Psychological Sciences (affiliates of the Australian Research Council Centre for Integrative Brain Functions), identifies a way to visually stimulate the brain so as to isolate the neural activity responsible for categorizing objects. This is no easy task, because the areas of the visual cortex supporting these category representations are active at the same time as areas representing low-level visual features such as lines, forms and contrast.


Categorization of objects is believed to emerge gradually, in a hierarchical manner. Simple visual features such as lines, contrast and color are thought to be represented at early stages; these features are then combined to give rise to abstract categories (such as cars, faces and animals), for which neurons in high-level areas are believed to be responsible.


By isolating the brain areas that represent abstract object categories, the research has enabled a greater understanding of how humans effortlessly categorize objects despite their massively different appearances.


Associate Professor Tsuchiya said the discovery could prove useful for manipulating images so that they communicate information in a subliminal, non-conscious manner.


"SWIFT can degrade arbitrary natural stimuli in a subtle manner. This technique may find its own application as a way to reveal and/or hide specific aspects of images. Potentially, SWIFT could be used to probe robustness and flexibility of artificial visual systems, such as those used in security," Associate Professor Tsuchiya said.


Using functional magnetic resonance imaging (fMRI), the study found that faces and scenes modulated with SWIFT evoked a sustained, constant response in early visual areas and periodic responses in the higher-level (category-selective) areas.
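The logic of frequency-tagging can be illustrated with a simple sketch: semantic content is modulated at a known "tag" frequency, and a periodic response at that frequency is then sought in the recorded signal via a Fourier transform. The sketch below is purely illustrative (it is not the authors' analysis code, and the sampling rate, tag frequency and noise level are assumed values); it simulates a category-selective response oscillating at the tag frequency buried in noise, and recovers the tag from the spectrum.

```python
import numpy as np

# Hypothetical illustration of a frequency-tagging analysis (not the study's code).
fs = 10.0          # sampling rate in Hz (assumed value)
tag_freq = 0.5     # semantic modulation "tag" frequency in Hz (assumed value)
t = np.arange(0, 120, 1 / fs)   # 120 s of simulated recording

rng = np.random.default_rng(0)
# Simulated category-selective response: periodic at the tag frequency, plus noise.
signal = np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1, t.size)

# Amplitude spectrum of the recording.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The spectral peak (ignoring the DC bin) falls at the tag frequency,
# revealing the periodic, "tagged" component of the response.
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]
print(peak_freq)  # → 0.5
```

In the study itself the same idea plays out across brain areas: early visual areas respond continuously (no peak at the tag frequency), while category-selective areas respond periodically, so their activity can be separated by where the tagged frequency shows up.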


Note: Material may have been edited for length and content. For further information, please contact the cited source.

Monash University


Publication

Koenig-Robert R, VanRullen R, Tsuchiya N. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas. PLoS ONE, published December 21, 2015. doi: 10.1371/journal.pone.0144858

