How Do We Make Sense of a Blurry Scene?

Credit: Pixabay

Read time: Less than a minute

Blurry and clear versions of an image are represented similarly in the brain, according to a neuroimaging study published in eNeuro. The research shows how the visual system fills in missing information to maintain perception when visibility is low.



Vision arises from a “bottom-up” process that transduces light into neural signals and a “top-down” process in which the brain assembles that information into a coherent visual representation of the environment. The interaction between these two pathways is not fully understood.


Mohamed Abdelhack and Yukiyasu Kamitani investigated top-down processing of degraded visual information using an artificial neural network that translates the brain activity of participants viewing blurry and progressively sharper images into visual feature representations. They found that the representations decoded from the blurrier images were skewed toward those of the original, unaltered images. This effect was enhanced when participants were told in advance that they would be viewing images drawn from a specific set of categories. These findings offer a more comprehensive account of how people perceive the world by combining visual input with prior knowledge. A better understanding of the top-down and bottom-up pathways could also help explain how their disruption might give rise to hallucinations, for example.
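As a rough illustration of the comparison at the heart of this analysis, the sketch below checks whether features decoded from brain activity during a blurred presentation correlate more strongly with the features of the sharp original than with those of the blurred stimulus itself. This is not the authors' code: the feature vectors, the Pearson-correlation measure, and the "sharpening index" are all hypothetical placeholders standing in for the study's decoded and true network features.

    # Hypothetical sketch: does a decoded feature vector look more like the
    # sharp original's features than like the blurred stimulus's features?
    import numpy as np

    def pearson(a: np.ndarray, b: np.ndarray) -> float:
        """Pearson correlation between two feature vectors."""
        return float(np.corrcoef(a, b)[0, 1])

    def sharpening_index(decoded: np.ndarray,
                         features_sharp: np.ndarray,
                         features_blurred: np.ndarray) -> float:
        """Positive values mean the decoded features sit closer to the sharp
        original than to the blurred stimulus (a 'sharpening' effect)."""
        return pearson(decoded, features_sharp) - pearson(decoded, features_blurred)

    # Toy data: random vectors standing in for network-layer features.
    rng = np.random.default_rng(0)
    features_sharp = rng.normal(size=1000)
    features_blurred = 0.5 * features_sharp + rng.normal(size=1000)  # degraded version
    decoded = 0.8 * features_sharp + 0.2 * rng.normal(size=1000)     # decoded from brain activity

    print(f"sharpening index: {sharpening_index(decoded, features_sharp, features_blurred):+.3f}")

In this toy setup, a positive index corresponds to the "skew toward the original" reported in the study.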

This article has been republished from materials provided by eNeuro. Note: material may have been edited for length and content. For further information, please contact the cited source.

Reference:
Abdelhack, M., & Kamitani, Y. (2017). Sharpening of hierarchical visual feature representations of blurred images. bioRxiv. https://doi.org/10.1101/230078