Chapter 4: Decoding early visual representations from fMRI ensemble responses
Yukiyasu Kamitani
Despite the
widespread use of human neuroimaging, its potential to read out perceptual
contents has not been fully explored. Mounting evidence from animal neurophysiology
has revealed the roles of the early visual cortex in representing visual
features such as orientation and motion direction. However, non-invasive
neuroimaging methods have been thought to lack the resolution to probe into
these putative feature representations in the human brain. In this chapter, we
present methods for fMRI decoding of early visual representations, which find
the mapping from fMRI ensemble responses to visual features using machine
learning algorithms. First, we show how early visual features represented in
'sub-voxel' neural structures could be predicted, or decoded, from ensemble
fMRI responses. Second, we discuss how multi-voxel patterns could represent
more information than the sum of the individual voxels, and how an effective
subset of voxels that supports robust decoding can be selected from all
available voxels.
Third, we demonstrate a modular decoding approach in which a novel stimulus,
not used for the training of the decoding algorithm, can be predicted by
combining the outputs of multiple modular decoders. Finally, we discuss a
method for neural mind-reading, which attempts to predict a person's subjective
state using a decoder trained with unambiguous stimulus presentation.
Key words: neural
decoding, multi-voxel pattern, machine learning, ensemble feature selectivity,
sparse representation, voxel correlation, modular decoding, visual image
reconstruction, neural mind-reading
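The basic decoding step summarized above, learning a mapping from multi-voxel fMRI responses to a visual feature such as orientation, can be sketched on synthetic data. Everything below is an illustrative assumption rather than the chapter's actual method: the voxel and trial counts, the noise model, and the simple mean-difference linear classifier standing in for the machine-learning algorithms the chapter discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (not real fMRI data): each simulated voxel pools many
# orientation-tuned neurons and so carries only a weak bias toward one of
# two stimulus orientations, mimicking a "sub-voxel" feature signal.
n_voxels, n_trials = 50, 200
bias = rng.normal(0.0, 0.5, n_voxels)            # per-voxel orientation bias
labels = rng.integers(0, 2, n_trials)            # 0 / 1 = two orientations
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))   # trial-by-voxel noise
X += np.outer(2 * labels - 1, bias)              # add weak ensemble signal

# Linear decoder: weight each voxel by the difference of its class means
# (a minimal stand-in for a trained machine-learning classifier).
X_train, y_train = X[:100], labels[:100]
X_test, y_test = X[100:], labels[100:]
m0 = X_train[y_train == 0].mean(axis=0)
m1 = X_train[y_train == 1].mean(axis=0)
w = m1 - m0
threshold = w @ (m0 + m1) / 2
pred = (X_test @ w > threshold).astype(int)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.2f}")
```

Although no single simulated voxel discriminates the two orientations reliably, the ensemble of weak biases supports accurate decoding, which is the intuition behind reading out sub-voxel feature representations from coarse fMRI measurements.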