Study eyes how human brain ‘sees’ world

It is plain to see that the world moves pretty fast—but the human brain moves even faster to see.

A new Brain and Mind Institute study is offering insights into how our brains process a world in which the images of people, places and things are constantly shrinking, expanding and changing on the retina at the back of our eyes. These findings may hold further keys to perfecting technology in everything from robots to self-driving cars.

The Western-led research team discovered that once the image of an object falls on the retina, it takes just over a tenth of a second for the brain to understand the real-world size of that object.

Juan Chen, Melvyn Goodale and their collaborators at Western’s Brain and Mind Institute, South China Normal University and the University of East Anglia (U.K.) also found that the representations of the real size of objects in the world emerge in the earliest stages of visual processing in the cerebral cortex of the brain.

The study, “Changing the Real Viewing Distance Reveals the Temporal Evolution of Size Constancy in Visual Cortex,” was published in the June 27 edition of Current Biology.

Our innate ability to see the real-world size of objects, despite dramatic changes in the images captured by our eyes, is called size constancy, Goodale explained.

https://youtube.com/watch?v=64QLsnzjx_4

“Remarkably, we see a world that is stable. Things are perceived to be the size they really are,” said Goodale, Founding Director of the Brain and Mind Institute and senior author of the study. “This is a good thing—otherwise our perception of the world would be chaotic and impossible to interpret.”

It is understood that human brains create size constancy by calculating the distance of objects we see—the further away the object, the smaller the retinal image. As a result, even though the image of a car driving away from us becomes smaller and smaller on our retina, we continue to see the car as being the same size.
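The geometry behind that calculation is the classic size-distance relation: the visual angle an object subtends shrinks with distance, so retinal image size combined with distance is enough to recover real size. Here is a minimal Python sketch of that arithmetic; the car length and viewing distances are illustrative numbers, not values from the study.

    import math

    def retinal_angle_deg(real_size_m, distance_m):
        # Visual angle subtended by an object of a given size at a given distance
        return math.degrees(2 * math.atan(real_size_m / (2 * distance_m)))

    def inferred_size_m(angle_deg, distance_m):
        # Invert the relation: combine retinal angle with distance to get real size
        return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

    # A 4.5 m car viewed at 10 m and then at 40 m: the retinal angle shrinks
    # by a factor of ~4, but angle plus distance yields the same real size.
    for d in (10, 40):
        angle = retinal_angle_deg(4.5, d)
        print(f"at {d} m: angle {angle:.1f} deg -> inferred size {inferred_size_m(angle, d):.1f} m")

Note that distance itself must be estimated from cues such as vergence and stereo vision, which is why keeping those cues congruent mattered for the experiment described next.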

For the study, Chen, Goodale and their collaborators used electroencephalography (EEG) to measure the tiny electrical signals in the brain that occur when people are presented with objects of different sizes at different distances. Unlike previous experiments, in which investigators manipulated the apparent distance of objects by changing their appearance on a computer screen, the Brain and Mind investigators moved the entire display closer or further away from the observers while their brain activity was being measured with EEG.

Conducting the experiment this way ensured that all of the cues to distance, such as stereo vision, pictorial cues and the vergence of the eyes, were available and completely congruent with one another. Using this technique, the team was also able to pinpoint exactly when size constancy emerges in the visual areas of the brain.

“In the first 100 milliseconds after the presentation of an object on the screen, the EEG signal reflects the size of the image on the retina of the eye, but by 150 milliseconds, the signal represents the real size of the object,” Goodale said.

This change from retinal to real-size coding in the EEG signals reflects the merging of information about the size of the retinal image and information about the distance of the object from the observer.
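The paper's analysis pipeline is not described here, but the shape of the timing claim can be illustrated with a toy simulation (a sketch with made-up data, not the authors' code or method): generate trials in which early activity tracks retinal size and later activity tracks real size, then correlate the signal with each size regressor at every time point.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_times = 200, 60                # 60 samples at 200 Hz ~ 0-300 ms
    times_ms = np.arange(n_times) * 5
    retinal = rng.choice([1.0, 2.0, 4.0], n_trials)   # image size on the retina
    real = rng.choice([1.0, 2.0, 4.0], n_trials)      # physical object size

    # Simulated single-channel EEG: early samples track retinal size,
    # later samples track real size, mimicking the reported transition.
    eeg = rng.normal(0.0, 0.5, (n_trials, n_times))
    eeg[:, times_ms < 120] += retinal[:, None]
    eeg[:, times_ms >= 150] += real[:, None]

    # At each time point, correlate the signal with each regressor and
    # report where each correlation peaks.
    for label, reg in (("retinal", retinal), ("real", real)):
        r = [abs(np.corrcoef(eeg[:, t], reg)[0, 1]) for t in range(n_times)]
        print(f"{label}-size coding peaks near {times_ms[int(np.argmax(r))]} ms")

In this toy version, retinal-size coding dominates the early window and real-size coding the later one, mirroring the 100 ms to 150 ms shift the researchers report.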
