Why Aren't Our Brains Overloaded by Visual Stimuli?
Russian researchers from HSE University have tested a hypothesis regarding the capability of the visual system to automatically categorize objects (i.e., without requiring attention). The results of a simple and elegant experiment confirmed this assumption. The paper was published in the journal Scientific Reports. The study was supported by a Russian Science Foundation grant.
Humans receive a great deal of information from the environment through vision. Every day, we face a flow of varied visual stimuli. At the same time, information processing requires cognitive resources. Like a computer processor, the human brain has limited capacity in terms of the data it is able to process and store in memory. One hypothesis states that the visual system reduces the 'resolution' of incoming information in order to avoid overload. As a result of such 'compression', instead of a detailed analysis of the observed objects, the visual system categorizes them by simple general attributes, such as size. Later, this 'primary data' can be used for a more thorough analysis.
The researchers sought to answer the following question: is the visual system capable of automatic object categorization (i.e., without attention)? In their study, they tried to determine the conditions under which such automatic categorization would work. They used the visual mismatch negativity (vMMN) component, measured by electroencephalography (EEG), as a marker of automatic sensory discrimination. vMMN reflects the difference between the brain's reactions to a standard (frequent) and a deviant (rare) stimulus. It demonstrates that the visual system has noticed a difference between stimuli and, importantly, that it did so without requiring attention.
'We are fascinated by the human visual system's ability to categorize large numbers of objects. For example, when humans look at an apple tree, they immediately differentiate apples from leaves. This study shows that the process of quick categorization can be performed automatically, based on information about the differences between objects', says Vladislav Khvostov, Junior Research Fellow at the HSE Laboratory for Cognitive Research, School of Psychology, one of the paper's authors.
To study the automatic distribution of objects into groups using vMMN, the researchers conducted a simple experiment with a filler task. Study participants were asked to look at a small asymmetrical cross in the centre of the field and press a button each time the cross changed its orientation. This way, the participants' attention was focused on the cross at the centre of the field. The cross was surrounded by rows of lines of varied lengths and orientations. In each experimental block, the combination of these parameters was different. While the participants' attention was focused on the central figure, the researchers used EEG to record brain activity in response to the background visual stimulation. In each block of the experiment, the participants were shown 700 visual stimuli, each of which was presented on the screen for 200 ms, followed by 400 ms of empty screen. Most of the stimuli featured a fixed combination of line length and orientation (for example, long lines were steep, and short ones were flat), but in 10% of cases, the combination of parameters was reversed.
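The oddball design described above can be sketched in a few lines of code. This is an illustrative reconstruction only, not the authors' actual experiment script; the parameter names and dictionary layout are assumptions, while the numbers (700 stimuli per block, 10% deviants, 200 ms on / 400 ms off) come from the article.

```python
import random

def make_block(n_stimuli=700, deviant_rate=0.10, seed=0):
    """Sketch of one experimental block: mostly 'standard' stimuli
    with a fixed length-orientation pairing (e.g., long-steep,
    short-flat), and ~10% 'deviants' with the pairing reversed."""
    rng = random.Random(seed)
    block = []
    for _ in range(n_stimuli):
        if rng.random() < deviant_rate:
            # deviant: the length-orientation pairing is reversed
            stim = {"type": "deviant",
                    "pairing": {"long": "flat", "short": "steep"},
                    "on_ms": 200, "off_ms": 400}
        else:
            # standard: the frequent, fixed pairing
            stim = {"type": "standard",
                    "pairing": {"long": "steep", "short": "flat"},
                    "on_ms": 200, "off_ms": 400}
        block.append(stim)
    return block

block = make_block()
```

Because deviants are drawn randomly at a 10% rate, each block contains roughly 70 rare stimuli scattered unpredictably among the standards, which is what lets the vMMN response to them be interpreted as automatic change detection.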
According to Vladislav Khvostov, the participants' only task was to press a button when the central cross rotated (third image from the left). In the image above, the central cross is magnified for illustrative purposes. Alongside the cross, the participants saw background visual stimulation consisting of lines of different lengths and orientations. In most cases (standard stimuli), the combination of length and orientation was the same: long lines were flat, and short ones were steep; but in rare cases (deviant stimuli, seventh image), this combination was reversed: long lines became steep, while short ones became flat. The participants paid no attention to the change of stimuli, but analysis of the EEG recordings showed that the visual system tracked these changes nonetheless.
The researchers were interested in the brain's reaction to the replacement of a standard stimulus with a deviant one. If a feature took only two distinct values (short/long in the case of length; steep/flat in the case of orientation), it was called 'segmentable'. If the feature also took intermediate values, it was defined as 'non-segmentable'.
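The segmentable/non-segmentable distinction can be made concrete with a small sampling sketch. The specific pixel values below are illustrative assumptions, not the paper's actual stimulus parameters; the point is the shape of the distribution.

```python
import random

def sample_lengths(segmentable, n=36, seed=1):
    """Draw line lengths in arbitrary pixel units (values are
    illustrative). A 'segmentable' feature takes only two distinct
    values, so the lines split cleanly into short and long groups;
    a 'non-segmentable' feature also takes intermediate values,
    blurring the boundary between the two categories."""
    rng = random.Random(seed)
    if segmentable:
        # two clear clusters: short (20 px) and long (80 px)
        return [rng.choice([20, 80]) for _ in range(n)]
    # non-segmentable: lengths spread uniformly across the range
    return [rng.uniform(20, 80) for _ in range(n)]
```

With only two values, the visual system can, in principle, split the display into two unambiguous groups; a continuum of intermediate lengths removes that clean boundary, which is why the researchers varied segmentability across blocks.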
The researchers found a considerable visual mismatch negativity in response to a deviant stimulus when either both features were segmentable or only length was. Since the distribution of lengths and orientations remained constant across all stimuli within each block, the researchers concluded that categorization was not based on a single simple feature. This means that the visual system categorized the lines by combinations of features. The experiment thus contradicted the assumption that the visual system categorizes objects only by simple features: it can solve a less trivial version of the task and use feature conjunctions.
Khvostov VA, Lukashevich AO, Utochkin IS. Spatially intermixed objects of different categories are parsed automatically. Scientific Reports. 2021;11(1):377. doi:10.1038/s41598-020-79828-4
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.