Review. 2016 May 5;371(1693):20150367. doi: 10.1098/rstb.2015.0367.

Data-driven approaches in the investigation of social perception


Ralph Adolphs et al. Philos Trans R Soc Lond B Biol Sci.

Abstract

The complexity of social perception poses a challenge to traditional approaches to understanding its psychological and neurobiological underpinnings. Data-driven methods are particularly well suited to tackling the often high-dimensional nature of stimulus spaces and of neural representations that characterize social perception. Such methods are more exploratory, capitalize on rich and large datasets, and attempt to discover patterns, often without strict hypothesis testing. We present four case studies here: behavioural studies on face judgements, two neuroimaging studies of movies, and eyetracking studies in autism. We conclude with suggestions for particular topics that seem ripe for data-driven approaches, as well as caveats and limitations.

Keywords: ecological validity; face space; intersubject brain correlation; social neuroscience; social perception.


Figures

Figure 1.
Statistical models of faces. An individual face stimulus can be represented as a vector in a multidimensional space. With synthetic face stimuli, one can omit dimensions that are psychologically irrelevant, such as the type of camera taking the picture, and begin with a more restricted set of dimensions. (a) Illustration of a statistical face space with two dimensions representing face shape. (b) Illustration of a statistical face space with two dimensions representing face reflectance.
Figure 2.
Faces generated by a data-driven computational model of judgements of (a) competence; (b) dominance; (c) extroversion; and (d) trustworthiness. The middlemost face on each row is the average face in the statistical model. The face to the right is 3 SD above the average face on the respective trait dimension; the face to the left is 3 SD below the average face.
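The construction behind figures 1 and 2 can be sketched in a few lines: treat each face as a coefficient vector, take the sample mean as the average face, and move along a trait axis in units of the sample's standard deviation on that axis. This is an illustrative sketch only; the dimensionality, the random sample, and the trait axis here are all hypothetical stand-ins for a fitted statistical face model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical statistical face space: each face is a vector of shape/
# reflectance coefficients (50 dimensions here, purely illustrative).
n_dims = 50
faces = rng.normal(size=(200, n_dims))   # a sample of synthetic faces
average_face = faces.mean(axis=0)        # the model's average face

# A trait dimension (e.g. trustworthiness) is a unit vector in face space,
# in practice learned by regressing behavioural judgements onto coefficients.
trait_axis = rng.normal(size=n_dims)
trait_axis /= np.linalg.norm(trait_axis)

def face_at(sd: float) -> np.ndarray:
    """Face `sd` standard deviations from the average along the trait axis."""
    proj = faces @ trait_axis            # sample projections onto the axis
    return average_face + sd * proj.std() * trait_axis

# The three faces in each row of figure 2: -3 SD, average, +3 SD.
low, mid, high = face_at(-3.0), face_at(0.0), face_at(+3.0)
```

The key design point is that the trait manipulation is a pure translation in face space, so every other dimension of the face stays at its average value.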
Figure 3.
(a) Stimulus-blind analysis using intersubject correlation (ISC) is based on the temporal similarity of voxelwise time courses across subjects. When computed in a sliding window across the time course, it also allows linking moment-to-moment ISC with a stimulus model, quantifying the relationship between the external stimulus and response reliability across subjects. (b) Independent component analysis divides the BOLD signal into statistically independent components. As with ISC, the extracted components can subsequently be linked with stimulus events. (a) Courtesy of Juha Lahnakoski; (b) adapted with permission from Malinen et al. [32].
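The sliding-window ISC described in (a) reduces to averaging pairwise Pearson correlations between subjects' time courses within each window. The sketch below uses a single simulated voxel with a shared stimulus-driven signal plus subject-specific noise; the subject count, window length, and noise level are illustrative assumptions, not values from the studies discussed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one voxel's BOLD time course for several subjects (hypothetical).
n_subjects, n_trs = 5, 120
shared = np.sin(np.linspace(0, 8 * np.pi, n_trs))   # stimulus-driven component
bold = shared + 0.5 * rng.normal(size=(n_subjects, n_trs))

def sliding_isc(data: np.ndarray, window: int) -> np.ndarray:
    """Mean pairwise Pearson correlation across subjects in each time window."""
    n_sub, n_t = data.shape
    iu = np.triu_indices(n_sub, k=1)                # all subject pairs
    out = np.empty(n_t - window + 1)
    for t in range(n_t - window + 1):
        r = np.corrcoef(data[:, t:t + window])      # subject-by-subject matrix
        out[t] = r[iu].mean()                       # average over pairs
    return out

isc = sliding_isc(bold, window=20)
```

The resulting ISC time course can then be correlated against a stimulus model, which is how moment-to-moment response reliability is linked back to stimulus events.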
Figure 4.
bsMVPC of single time points (TRs) from a movie, Raiders of the Lost Ark, and from a face and object perception study. (a) A seven-way bsMVPC analysis matched the probability of correct classification across the two experiments. Response patterns from one TR in each stimulus block of a run in the face and object experiment were extracted from all subjects; this was repeated for all eight runs. Response patterns for movie TRs at the same acquisition times were extracted from all subjects to perform a similar seven-way bsMVPC analysis. (b) Results showed that bsMVPC accuracy for movie time points was more than twice that for time points in the face and object perception experiment. Dashed lines indicate chance classification (one out of seven). Adapted from Haxby et al. [47].
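A minimal sketch of between-subject multivariate pattern classification, assuming responses are already in a common space: hold out one subject, average the remaining subjects' condition patterns, and assign each held-out pattern to its most-correlated training pattern. The subject count, voxel count, and nearest-correlation classifier here are illustrative assumptions, not the exact pipeline of Haxby et al.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: response patterns for 7 conditions in each of 4 subjects,
# assumed to be aligned in a common voxel space (hypothetical setup).
n_subjects, n_conds, n_voxels = 4, 7, 100
templates = rng.normal(size=(n_conds, n_voxels))   # shared condition patterns
data = templates[None] + 0.8 * rng.normal(size=(n_subjects, n_conds, n_voxels))

def bsmvpc_accuracy(data: np.ndarray) -> float:
    """Leave-one-subject-out 7-way classification by pattern correlation."""
    n_sub, n_conds, _ = data.shape
    correct = 0
    for test in range(n_sub):
        # average patterns of all other subjects form the training set
        train_mean = data[np.arange(n_sub) != test].mean(axis=0)
        for cond in range(n_conds):
            r = [np.corrcoef(data[test, cond], train_mean[c])[0, 1]
                 for c in range(n_conds)]
            correct += int(np.argmax(r) == cond)   # nearest-correlation match
    return correct / (n_sub * n_conds)

acc = bsmvpc_accuracy(data)   # chance level is 1/7
```

Because classification crosses subjects, accuracy measures how much of the response pattern is shared across brains rather than idiosyncratic to one subject.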
Figure 5.
Comparison of the general validity of common models based on responses to the movie and on responses to still images. Common models were built based on responses to a movie, Raiders of the Lost Ark, and responses to single images in a face and object category perception experiment [47], performed at Princeton, and an animal species perception experiment (Connolly et al. [49]), performed at Dartmouth. Results on the left show bsMVPC accuracies for the responses to single faces, objects, and animal species. Results on the right show bsMVPC accuracies for 18 s time segments in the movie. Note that common models based on responses to the category images afford good bsMVPC for those experiments but do not generalize to bsMVPC of responses to movie time segments. Only the common model based on movie viewing generalizes to high levels of bsMVPC for stimuli from all three experiments. Dashed lines indicate chance performance. From Haxby et al. [47].

References

    1. Touryan J, Felsen G, Dan Y. 2005. Spatial structure of complex cell receptive fields measured with natural images. Neuron 45, 781–791. (10.1016/j.neuron.2005.01.029)
    2. Yao HS, Shi L, Han F, Gao HF, Dan Y. 2007. Rapid learning in cortical coding of visual scenes. Nat. Neurosci. 10, 772–778. (10.1038/nn1895)
    3. Fox CJ, Iaria G, Barton JJS. 2009. Defining the face processing network: optimization of the functional localizer in fMRI. Hum. Brain Mapp. 30, 1637–1651. (10.1002/hbm.20630)
    4. Schultz J, Brockhaus M, Bülthoff HH, Pilz KS. 2013. What the human brain likes about facial motion. Cereb. Cortex 23, 1167–1178. (10.1093/cercor/bhs106)
    5. Felsen G, Dan Y. 2005. A natural approach to studying vision. Nat. Neurosci. 8, 1643–1646. (10.1038/nn1608)
