Blurry and clear versions of an image are represented similarly in the brain, according to a neuroimaging study published in eNeuro. The research shows how the visual system fills in missing information to maintain perception when visibility is low.
Vision arises from a “bottom-up” process that transduces light into neural signals and a “top-down” process in which the brain assembles that information into a coherent visual representation of the environment. The interaction between these two pathways is not fully understood.
Mohamed Abdelhack and Yukiyasu Kamitani investigated top-down processing of degraded visual information using an artificial neural network to decode human brain activity recorded as participants viewed blurry and progressively sharper images. They found that representations of the blurrier images were skewed toward those of the original, unaltered images. This effect was enhanced when participants were informed in advance that they would be viewing images from a distinct set of categories. These findings provide a more comprehensive account of how the visual system combines sensory input with prior knowledge to shape perception. A better understanding of the top-down and bottom-up pathways could help explain how their disruption may, for example, generate hallucinations.
This article has been republished from materials provided by eNeuro. Note: material may have been edited for length and content. For further information, please contact the cited source.
Abdelhack, M., & Kamitani, Y. (2017). Sharpening of hierarchical visual feature representations of blurred images. bioRxiv. doi:10.1101/230078