Memory and perception seem like entirely distinct experiences, and neuroscientists used to be confident that the brain produced them differently, too. But in the 1990s, neuroimaging studies revealed that parts of the brain that were thought to be active only during sensory perception are also active during the recall of memories.

“It started to raise the question of whether a memory representation is actually different from a perceptual representation at all,” said Sam Ling, an associate professor of neuroscience and director of the Visual Neuroscience Lab at Boston University. Could our memory of a beautiful forest glade, for example, be just a re-creation of the neural activity that previously enabled us to see it?

“The argument has swung from being this debate over whether there’s even any involvement of sensory cortices to saying ‘Oh, wait a minute, is there any difference?’” said Christopher Baker, an investigator at the National Institute of Mental Health who runs the learning and plasticity unit. “The pendulum has swung from one side to the other, but it’s swung too far.”

Even if there is a very strong neurological similarity between memories and experiences, we know that they can’t be exactly the same. “People don’t get confused between them,” said Serra Favila, a postdoctoral scientist at Columbia University and the lead author of a recent Nature Communications study. Her team’s work has identified at least one of the ways in which memories and perceptions of images are assembled differently at the neurological level.

Blurry Spots

When we look at the world, visual information about it streams through the photoreceptors of the retina and into the visual cortex, where it is processed sequentially in different groups of neurons. Each group adds new levels of complexity to the image: Simple dots of light turn into lines and edges, then contours, then shapes, then complete scenes that embody what we’re seeing.

In the new study, the researchers focused on a feature of vision processing that’s very important in the early groups of neurons: where things are located in space. The pixels and contours making up an image need to be in the correct places or else the brain will create a shuffled, unrecognizable distortion of what we’re seeing.

The researchers trained participants to memorize the positions of four different patterns on a backdrop that resembled a dartboard. Each pattern was placed in a very specific location on the board and associated with a color at the center of the board. Each participant was tested to make sure that they had memorized this information correctly—that if they saw a green dot, for example, they knew the star shape was at the far left position. Then, as the participants perceived and remembered the locations of the patterns, the researchers recorded their brain activity.

The brain scans allowed the researchers to map out how neurons registered where something was located, as well as how they later remembered it. Each neuron attends to one region, or “receptive field,” within the expanse of your vision, such as the lower left corner. A neuron is “only going to fire when you put something in that little spot,” Favila said. Neurons that are tuned to the same spot in space tend to cluster together, making their activity easy to detect in brain scans.
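The idea of a receptive field can be sketched in a few lines of code. This is a toy model with made-up coordinates, not anything from the study: a simulated neuron responds only when a stimulus lands inside its small patch of the visual field.

```python
# Toy model of a visual neuron's receptive field: the neuron "fires"
# only when a stimulus falls within its small patch of the visual field.

class Neuron:
    def __init__(self, center, radius):
        self.center = center  # (x, y) location the neuron is tuned to
        self.radius = radius  # size of its receptive field

    def fires(self, stimulus):
        x, y = stimulus
        cx, cy = self.center
        # Respond only if the stimulus lies inside the receptive field
        return (x - cx) ** 2 + (y - cy) ** 2 <= self.radius ** 2

# A neuron tuned to the lower left corner of a 100x100 visual field
neuron = Neuron(center=(10, 90), radius=5)
print(neuron.fires((11, 89)))  # stimulus in its spot -> True
print(neuron.fires((80, 20)))  # stimulus elsewhere   -> False
```

Because neighboring neurons share similar tuning, a cluster of such units lighting up together is what shows up as a detectable signal in a brain scan.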

Previous studies of visual perception established that neurons in the early, lower levels of processing have small receptive fields, and neurons in later, higher levels have larger ones. This makes sense because the higher-tier neurons are compiling signals from many lower-tier neurons, drawing in information across a wider patch of the visual field. But the bigger receptive field also means lower spatial precision, producing an effect like putting a large blob of ink over North America on a map to indicate New Jersey. In effect, visual processing during perception is a matter of small, crisp dots evolving into larger, blurrier, but more meaningful blobs.
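This widening of receptive fields up the hierarchy can be illustrated with a short sketch. The numbers and pooling factor are assumptions for illustration, not measurements from the study: if each higher-tier unit pools a few adjacent lower-tier units, its effective receptive field grows geometrically with each level, covering more of the visual field while pinpointing position less precisely.

```python
# Sketch: receptive fields widen as each processing level pools its inputs.
# The base width and pooling factor are illustrative, not measured values.

def receptive_field_width(level, base_width=1.0, pool_factor=3):
    """Effective receptive-field width at a given level, assuming each
    unit pools `pool_factor` adjacent units from the level below."""
    return base_width * pool_factor ** level

# Rough labels for successive stages of the visual hierarchy
for level, name in enumerate(["early (dots, edges)", "mid (contours)",
                              "higher (shapes)", "highest (scenes)"]):
    width = receptive_field_width(level)
    print(f"{name}: covers ~{width:g} units of visual field")
```

Running this prints widths of 1, 3, 9, and 27 units across the four levels: each stage sees a broader patch of the world, at the cost of knowing exactly where within that patch a stimulus fell.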
