Video Decoding and Reconstruction
In 2011, Jack Gallant’s paper “Reconstructing visual experiences from brain activity evoked by natural movies” left a profound impact on our lab. Using a straightforward Bayesian encoder/decoder model, the movies subjects watched inside the fMRI scanner were reconstructed by averaging the best-matching videos from a YouTube library. Soon after reading this paper, our lab grew very interested in pursuing a more sophisticated encoder/decoder that could reconstruct videos piecemeal, feature by feature, without needing a library of videos. To this end, we had two very willing subjects (…myself and a fellow graduate student) each watch two hours of varied movies, from natural scenes to sports to animations, not just movie trailers.
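To make the averaging idea concrete, here is a minimal sketch of that style of decoder on synthetic data. Everything here is hypothetical (the library, the linear encoding model, the correlation score standing in for the Bayesian posterior); it is meant only to illustrate the rank-candidates-then-average logic, not the actual pipeline from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a "library" of candidate clips, each summarized
# by a feature vector, and a linear encoding model mapping clip features
# to predicted voxel responses.
n_library, n_features, n_voxels = 500, 32, 100
library_features = rng.normal(size=(n_library, n_features))
encoding_weights = rng.normal(size=(n_features, n_voxels))

# Predicted fMRI response for every library clip under the encoding model.
predicted = library_features @ encoding_weights

# Observed response: one library clip plus noise, so the answer is known.
true_idx = 42
observed = predicted[true_idx] + rng.normal(scale=0.5, size=n_voxels)

def topk_average(observed, predicted, features, k=10):
    """Score candidates by correlation between predicted and observed
    responses (a simple stand-in for the posterior), then reconstruct
    by averaging the features of the top-k candidates."""
    z_obs = (observed - observed.mean()) / observed.std()
    z_pred = (predicted - predicted.mean(axis=1, keepdims=True)) / \
             predicted.std(axis=1, keepdims=True)
    scores = z_pred @ z_obs / len(observed)   # Pearson r per candidate
    top = np.argsort(scores)[-k:]             # k best-matching clips
    return top, features[top].mean(axis=0)    # averaged reconstruction

top, recon = topk_average(observed, predicted, library_features)
print(true_idx in top)  # the true clip should rank among the best matches
```

In the real setting the averaged quantity would be the pixels (or features) of the best-matching YouTube clips rather than synthetic vectors, but the structure of the computation is the same.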
I would like to propose hacking on this dataset at the BrainHack 2013 conference. I can bring the two hours of video clips we watched as well as the fMRI data for both subjects. Both subjects were scanned on a 3.0 T Siemens Trio scanner. One subject was scanned with a whole-brain EPI sequence, allowing interpretation of more than just the visual system, whereas the other was scanned with 18 coronal slices covering only the posterior third of the brain at TR = 1 s, giving a higher sampling rate of visual cortex. Could we achieve better or alternative reconstructions of this dataset in three days at BrainHack?
We welcome any ideas for building an encoding/decoding model or for reconstructing the videos we watched while in the scanner. Together, we can develop novel ways to understand visual processing in the brain, as well as how the brain constructs representations of videos. We will need a good team of hackers, including machine learning experts, fMRI processing experts, image/video processing engineers, and more! Hope to discuss more soon!
Please comment and email firstname.lastname@example.org.
Ricky Savjani is bringing the data and looking for members to join hacking on this data set!