Do Neural Networks Get Fooled by Visual Illusions?
News

In all three cases, the Sagrada Familia is the same colour but looks different due to the surrounding colours. This is a visual illusion. Credit: UPF
A convolutional neural network is a type of artificial neural network in which the neurons are organized into receptive fields in a very similar way to neurons in the visual cortex of a biological brain. Today, convolutional neural networks (CNNs) are found in a variety of autonomous systems (for example, face detection and recognition, autonomous vehicles, etc.). This type of network is highly effective in many artificial vision tasks, such as in image segmentation and classification, along with many other applications.

Convolutional networks were inspired by the behaviour of the human visual system, particularly its basic structure: a concatenation of modules, each comprising a linear operation followed by a non-linear one. A study published in the advance online edition of the journal Vision Research examines how visual illusions affect convolutional networks compared with human vision. It was carried out by Alexander Gómez Villa, Adrian Martín, Javier Vázquez-Corral and Marcelo Bertalmío, members of the Department of Information and Communication Technologies (DTIC) at UPF, with the participation of the researcher Jesús Malo of the University of Valencia.
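The basic building block described above can be sketched in a few lines of Python. This is an illustrative toy example only, not the authors' implementation: one "module" consisting of a linear filtering step followed by a pointwise non-linearity (here a ReLU).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D filtering: the linear operation of one CNN module."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Pointwise non-linearity applied after the linear filtering."""
    return np.maximum(x, 0.0)

# One module: linear operation followed by a non-linear operation.
image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
edge_kernel = np.array([[-1.0, 1.0]])              # crude horizontal edge detector
response = relu(conv2d(image, edge_kernel))
```

A full CNN simply stacks many such modules, with the kernel weights learned from data rather than fixed by hand.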

"Because of this connection of CNNs with our visual system, in this paper we wanted to see if convolutional networks suffer from similar problems to our visual system. Hence, we focused on visual illusions. Visual illusions are images that our brain perceives differently from how they actually are", explains Gómez Villa, first author of the study.

In their study, the authors trained CNNs on simple tasks that human vision also performs, such as denoising and deblurring. They observed that networks trained under these conditions are also "deceived" by brightness and colour illusions, in the same way that those illusions deceive humans.
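The paper's stimuli are more elaborate, but the kind of test image involved can be illustrated with a classic simultaneous-contrast pair like the one in the Sagrada Familia figure: the same grey patch placed on a dark and on a light background. A minimal sketch (assumed, not the authors' code):

```python
import numpy as np

def contrast_pair(size=64, patch=16, target_grey=0.5):
    """Build two images whose central patches are physically identical
    but sit on different backgrounds (simultaneous-contrast stimulus)."""
    dark_bg = np.full((size, size), 0.1)
    light_bg = np.full((size, size), 0.9)
    lo = size // 2 - patch // 2
    hi = lo + patch
    for img in (dark_bg, light_bg):
        img[lo:hi, lo:hi] = target_grey  # identical central patch
    return dark_bg, light_bg

dark, light = contrast_pair()
```

Feeding both images through a trained denoising network and comparing its responses at the central patch is the kind of probe that reveals whether the network, like a human observer, treats physically identical greys differently depending on their surround.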

Furthermore, as Gómez Villa explains, "for our work we also analyse when such illusions cause responses in the network that are not as physically expected, but neither do they match with human perception", that is to say, cases in which a CNN perceives a different illusion from the one a human would.

The results of this study are consistent with the long-standing hypothesis that low-level visual illusions are a by-product of the visual system's optimization to natural environments (the scenes a human encounters in everyday life). At the same time, these results highlight the limitations of, and differences between, the human visual system and CNNs.

Reference:

Gomez-Villa A, Martín A, Vazquez-Corral J, Bertalmío M, Malo J. Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications. Vision Research. 2020;176:156-174. doi:10.1016/j.visres.2020.07.010

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.
