Friday, September 23, 2011

The Impossible Is Getting Closer

This is an idea that most of us would find more plausible between the covers of science fiction than in science fact.

From Yahoo.com news and the UC Berkeley News Center:

BERKELEY — Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach.
Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.
As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.
“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”
Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases.
It may also lay the groundwork for brain-machine interface so that people with cerebral palsy or paralysis, for example, can guide computers with their minds.
However, researchers point out that the technology is decades from allowing users to read others’ thoughts and intentions, as portrayed in such sci-fi classics as “Brainstorm,” in which scientists recorded a person’s sensations so that others could experience them.

(Image caption: Mind-reading through brain imaging technology is a common sci-fi theme.)
Previously, Gallant and fellow researchers recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to predict with overwhelming accuracy which picture the subject was looking at.
In their latest experiment, researchers say they have solved a much more difficult problem by actually decoding brain signals generated by moving pictures.
“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”  
Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still inside the MRI scanner for hours at a time.
They watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or “voxels.”
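To make the voxel idea concrete, here is a rough Python sketch (all array shapes are invented for illustration) of how a 4-D fMRI recording is flattened into a matrix of per-voxel activity time courses:

```python
import numpy as np

# An fMRI recording is a 4-D array of (x, y, z, time); each (x, y, z)
# cell is one voxel. The shapes below are made up for this sketch.
scan = np.random.rand(32, 32, 16, 200)  # 32x32x16 voxels, 200 time points

# Flatten the three spatial axes so each row is one voxel's activity
# time course -- the unit the per-voxel models are fit on.
voxel_timecourses = scan.reshape(-1, scan.shape[-1])
print(voxel_timecourses.shape)  # (16384, 200)
```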
“We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said.
The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
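In code, that learning stage might look something like the following hedged sketch: one regularized linear regression per voxel, mapping a feature vector for each second of film to that voxel's measured activity. (The study used motion-energy features of the movie; the random placeholders and sizes here are assumptions for illustration.)

```python
import numpy as np
from sklearn.linear_model import Ridge

# Invented sizes: 600 seconds of training film, 400 stimulus features
# per second, 200 voxels.
n_seconds, n_features, n_voxels = 600, 400, 200
movie_features = np.random.rand(n_seconds, n_features)  # one row per second
brain_activity = np.random.rand(n_seconds, n_voxels)    # measured fMRI responses

# Fit one ridge regression per voxel (sklearn fits all 200 targets at
# once), learning to associate movie features with the activity they evoke.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(movie_features, brain_activity)

# Given the features of a new clip, predict the activity it should evoke.
predicted_activity = encoding_model.predict(movie_features[:10])
print(predicted_activity.shape)  # (10, 200)
```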
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
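That ranking-and-averaging step might look roughly like the sketch below. Everything here (the shapes, the simple correlation score, the function name) is an assumption for illustration; the study's actual decoder is more sophisticated.

```python
import numpy as np

def reconstruct(measured, predicted_library, clip_library, k=100):
    # Score each candidate clip by how well the activity it is predicted
    # to evoke correlates with the activity actually measured.
    m = (measured - measured.mean()) / measured.std()
    scores = np.array([
        np.mean(m * (p - p.mean()) / p.std())  # Pearson correlation
        for p in predicted_library
    ])
    top = np.argsort(scores)[-k:]  # indices of the k best-matching clips
    # Averaging the best matches frame by frame yields the blurry but
    # continuous composite described in the article.
    return clip_library[top].mean(axis=0)

# Toy demo with invented shapes: 500 candidate clips, each 30 frames
# of 16x16 grayscale video, and activity patterns over 200 voxels.
measured = np.random.rand(200)
predicted_library = np.random.rand(500, 200)
clip_library = np.random.rand(500, 30, 16, 16)
reconstruction = reconstruct(measured, predicted_library, clip_library)
print(reconstruction.shape)  # (30, 16, 16)
```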
Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, researchers said. For this reason, most previous attempts to decode brain activity have focused on static images.
“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto said.
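A hedged sketch of that two-stage idea: model a fast neural response first, then convolve it with a hemodynamic response function (HRF) to get the slow blood-flow signal that fMRI actually measures. The simple double-gamma HRF below is a common approximation, not the study's exact model.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=5.0, undershoot=15.0):
    # Crude double-gamma hemodynamic response function (an assumption;
    # the study's hemodynamic model is more elaborate).
    return gamma.pdf(t, peak) - 0.35 * gamma.pdf(t, undershoot)

# Stage 1: a placeholder "fast" neural response, one value per second.
neural = np.zeros(120)
neural[10:13] = 1.0  # brief burst of neural activity at t = 10-12 s

# Stage 2: the sluggish BOLD signal that fMRI measures, modeled as the
# neural signal convolved with the hemodynamic response.
bold = np.convolve(neural, hrf(np.arange(30)))[:len(neural)]
print(bold.argmax())  # the BOLD peak lags the neural burst by several seconds
```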
Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life.
“We need to know how the brain works in naturalistic conditions,” he said. “For that, we need to first understand how the brain works while we are watching movies.”
Other coauthors of the study are Thomas Naselaris with UC Berkeley’s Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley’s Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.

and from Mashable:

Scientists Turn Brain’s Visual Memories into a Mind-Blowing Video

Stan Schroeder


Ever dreamed of recording your dreams and turning them into a video clip? The technology that enables you to do that is near: UC Berkeley scientists figured out a way to turn the way our brains interpret visual stimuli into a video, and the result is amazing.
To do this, the researchers used functional Magnetic Resonance Imaging (fMRI) to measure blood flow through the brain's visual cortex. The brain was then divided into volumetric pixels, or voxels (a term that may be familiar to those who remember early 3D games, which were built on voxels instead of the polygons more commonly used today). Finally, the scientists built a computational model describing how visual information is mapped onto brain activity.
In practice, test subjects viewed some video clips, and their brain activity was recorded by a computer program, which learned how to associate the visual patterns in the movie with the corresponding brain activity.
Then, test subjects viewed a second set of clips. The movie reconstruction algorithm was fed 18 million seconds of random YouTube videos, which were used to teach the program to predict the brain activity that film clips would evoke. Finally, the program chose the 100 clips most similar to the movie the subject had seen, and these were merged to create a reconstruction of the original movie.
The result is a video that shows how our brain sees things, and at moments it's eerily similar to the original imagery.
“This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published in the journal Current Biology.
Recording our dreams and "reading" the minds of coma patients still requires a lot of work, as the current technology only enables scientists to interpret brain activity while the test subject is watching a movie. Ultimately, though, it could be used to decode how our brains process the visual events of everyday life or, perhaps, our dreams.
Check out another video, which shows the movie reconstruction algorithm at work, below. More details about the study can be found here.