[Preprint]. 2025 Apr 24:2024.09.23.614521.
doi: 10.1101/2024.09.23.614521.

The Neural Underpinnings of Aphantasia: A Case Study of Identical Twins

Emma Megla et al. bioRxiv.

Abstract

Aphantasia is a condition characterized by reduced voluntary mental imagery. As this lack of mental imagery disrupts visual memory, understanding the nature of this condition can provide important insight into memory, perception, and imagery. Here, we leveraged the power of case studies to better characterize this condition by running a pair of identical twins, one with aphantasia and one without, through mental imagery tasks in an fMRI scanner. We identified objective, neural measures of aphantasia, finding less visual information in the aphantasic twin's memories, which may be due to lower connectivity between the frontoparietal and occipitotemporal lobes of the brain. However, despite this difference, we found more visual information in the aphantasic twin's memory than anticipated, suggesting that aphantasia is a spectrum rather than a discrete condition.

Keywords: Visual imagery; fMRI; functional connectivity; long-term memory; perception.

Figures

Figure 1.
Figure 1.. Methods for mental imagery tasks.
(a) Methods for the Novel Imagery task. Participants first encoded a novel scene or object image for 6 sec. Then, there was a 4 sec distractor period in which participants indicated an intact image amongst a stream of scrambled images. After a 1–4 sec randomized jitter, participants recalled the original image using mental imagery for 6 sec. Lastly, they rated the vividness of their mental image on a three-point scale. There were 96 trials in total. (b) Methods for the Familiar Imagery task. Participants were first given a prompt consisting of the text label of a personally familiar person or place. After a 1 sec mask of scrambled alphanumeric characters, participants mentally imagined the person or place named by the prompt for 10 sec before rating the vividness of their mental imagery on a three-point scale. There was a 5–7 sec randomized jittered fixation between trials and 144 trials in total.
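As a reading aid, the trial structure described above can be laid out as a minimal Python sketch. The durations come from the caption; the function names, epoch labels, and the way the jitters are drawn are illustrative assumptions, not the authors' experiment code.

```python
import random

# Sketch of the trial structure in Figure 1. Durations (in seconds) are taken
# from the caption; everything else is an illustrative assumption.

def novel_imagery_trial():
    """One trial of the Novel Imagery task (96 trials total)."""
    return [
        ("encode_image",     6.0),                       # novel scene or object image
        ("distractor",       4.0),                       # detect intact image among scrambled ones
        ("jitter",           random.uniform(1.0, 4.0)),  # randomized fixation
        ("recall_imagery",   6.0),                       # mentally imagine the original image
        ("vividness_rating", None),                      # three-point scale; duration not given
    ]

def familiar_imagery_trial():
    """One trial of the Familiar Imagery task (144 trials total)."""
    return [
        ("text_prompt",      None),                      # familiar person/place label; duration not given
        ("mask",             1.0),                       # scrambled alphanumeric characters
        ("imagery",          10.0),                      # imagine the named person or place
        ("vividness_rating", None),                      # three-point scale
        ("iti_fixation",     random.uniform(5.0, 7.0)),  # jittered inter-trial fixation
    ]
```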
Figure 2.
Figure 2.. Behavioral results.
(a) Drawings produced from memory and perception. Whereas both twins drew many objects from a scene in detail during perception, the aphantasic twin drew markedly less from memory than the imager twin. (b) Vividness responses during the imagery tasks. The aphantasic twin also reported significantly lower vividness during mental imagery in both the Novel Imagery and Familiar Imagery tasks.
Figure 3.
Figure 3.. Univariate brain activity during the imagery tasks for both twins.
(a) The location of the parahippocampal place area (PPA) during perception and memory in the Novel Imagery task. The vertical green line indicates the location of the peak voxel activity in each condition. We observed an anterior shift in the peak voxel activity of PPA between perception and memory in both twins, with an equal (or even smaller) shift in the aphantasic twin compared to the imager twin. (b) A people > places contrast during the Familiar Imagery task. Using this contrast, we identified the recently discovered “familiar memory regions” in the medial parietal cortex in both twins, with their characteristic alternating pattern of selectivity for familiar people and places. Each image is shown at a threshold of p < 0.001 unless otherwise noted, and all images are from the sagittal view. See also Fig. S3 and Table S3.
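The anterior shift in panel (a) can be quantified in a few lines. The sketch below is an illustration under stated assumptions (3D statistical maps and a binary PPA mask on the same voxel grid, with the second array axis running posterior to anterior); it is not the authors' analysis pipeline, and all names are hypothetical.

```python
import numpy as np

# Sketch: locate the peak voxel within a PPA mask for each condition and
# measure how far the memory peak sits anterior to the perception peak.

def peak_y(stat_map: np.ndarray, mask: np.ndarray) -> int:
    """y-coordinate (in voxels) of the peak voxel inside the mask."""
    masked = np.where(mask > 0, stat_map, -np.inf)
    return np.unravel_index(np.argmax(masked), masked.shape)[1]

def anterior_shift(perception_map, memory_map, ppa_mask):
    """Positive values mean the memory peak is anterior to the perception peak."""
    return peak_y(memory_map, ppa_mask) - peak_y(perception_map, ppa_mask)
```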
Figure 4.
Figure 4.. SVM searchlight methods and results.
(a) Methods for cross-decoding between conditions. Using the brain patterns within each searchlight region, we trained an SVM to distinguish between objects and scenes in one condition and tested it on the other condition. Conditions were either between-participants (e.g., training on imager perception, testing on aphantasic perception) or within-participants (e.g., training on imager perception, testing on imager recall). To determine whether the voxels within a searchlight region were able to cross-decode above chance, we randomly swapped the image class labels for half of the training and test trials. We did this 100 times to build a null distribution against which to compare the true decoding accuracy. (b) Voxels with significant decoding accuracy. Between participants, many voxels were able to cross-decode between the twins’ representations during perception, whereas far fewer could do so during recall. The decoding accuracy between the twins’ perceptual representations was also significantly higher than between their recall representations. Within participants, decoding accuracy was significantly higher in the imager twin; however, the aphantasic twin showed a surprisingly similar number of significant voxels, and similar decoding accuracy, to the imager twin. (c) Voxels with significantly higher decoding accuracy in one condition versus another. Whereas visual areas, including the parahippocampal cortex (PHC), were significantly more similar between the twins’ perception than between their recall, few areas emerged with higher similarity between their recall. Surprisingly, visual areas, including the PHC, showed significantly greater similarity between perceptual and mnemonic representations in the aphantasic twin than in the imager twin. Each image is shown at a threshold of p < 0.001, but all key regions reported survive cluster-threshold correction (see Supplemental Results 1 and Fig. S1). See also Supplemental Results 2 and Fig. S2 for an ROI-based approach.
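For concreteness, the cross-decoding and label-swap permutation test described in panel (a) can be sketched for a single searchlight sphere as below. This is a minimal illustration using scikit-learn with hypothetical variable names, not the authors' code, and it omits the loop over searchlight spheres.

```python
import numpy as np
from sklearn.svm import SVC

# X_train / X_test: trial x voxel patterns from one condition each
# (e.g., imager perception and aphantasic perception) for a single sphere.
# y_train / y_test: numpy arrays of binary labels (0 = object, 1 = scene).

def cross_decode(X_train, y_train, X_test, y_test):
    """Train a linear SVM on one condition and test it on the other."""
    clf = SVC(kernel="linear")
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

def permutation_null(X_train, y_train, X_test, y_test, n_perm=100, seed=0):
    """Null distribution built by randomly swapping the class labels for
    half of the training and test trials on each of n_perm iterations."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for i in range(n_perm):
        ytr, yte = y_train.copy(), y_test.copy()
        flip_tr = rng.choice(len(ytr), size=len(ytr) // 2, replace=False)
        flip_te = rng.choice(len(yte), size=len(yte) // 2, replace=False)
        ytr[flip_tr] = 1 - ytr[flip_tr]
        yte[flip_te] = 1 - yte[flip_te]
        null[i] = cross_decode(X_train, ytr, X_test, yte)
    return null

# A sphere counts as decoding above chance when the true accuracy exceeds
# most of the null distribution, e.g.:
# p = (1 + np.sum(null >= true_acc)) / (1 + len(null))
```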
Figure 5.
Figure 5.. Representational similarity during familiar imagery.
To determine whether there is coarse-level (person vs. place) visual information in aphantasic memory during familiar imagery, we correlated brain activity from the PHC region between every pair of stimuli. We quantified the amount of coarse-level information by calculating a discrimination index (D) for each twin, which subtracts the mean between-category neural similarity from the mean within-category similarity. Although we found evidence of coarse-level visual information in the imager twin, we found next to no discrimination between people and places in the aphantasic twin. Indeed, D was significantly higher in the imager twin than in the aphantasic twin. Pearson correlation values are shown here for visualization purposes, but all analyses were performed after these correlation values were Fisher Z-transformed.
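A minimal sketch of the discrimination index as described, assuming a stimuli-by-voxels matrix of PHC patterns and a per-stimulus person/place label; the names and helper are hypothetical and this is not the authors' analysis code.

```python
import numpy as np

# patterns:  stimuli x voxels array of PHC activity patterns
# is_person: boolean array, one entry per stimulus (True = person, False = place)

def discrimination_index(patterns, is_person):
    """D = mean within-category similarity minus mean between-category similarity,
    computed on Fisher Z-transformed Pearson correlations."""
    r = np.corrcoef(patterns)                             # stimulus x stimulus correlations
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))       # Fisher Z-transform
    same = np.equal.outer(is_person, is_person)           # within-category pairs
    off_diag = ~np.eye(len(patterns), dtype=bool)         # drop self-correlations
    within = z[same & off_diag].mean()
    between = z[~same].mean()
    return within - between
```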
Figure 6.
Figure 6.. Differences in functional connectivity between the lobes of the brain.
Red indicates a higher correlation between two lobes in the imager twin, whereas blue indicates a higher correlation in the aphantasic twin. Interestingly, we generally found lower connectivity in the aphantasic twin between lobes housing immediate memory processes (temporal and occipital) and lobes housing consolidated memory processes (parietal and prefrontal), which could account for the differences we found between the imagery tasks. These connections of interest are outlined in black. See Table S2 for all correlation values.
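The lobe-by-lobe comparison in Figure 6 can be sketched as follows. It assumes each lobe is summarized by a mean BOLD timecourse and that the twins' correlation matrices are compared after a Fisher Z-transform (the transform is an assumption here, carried over from the earlier analyses); it is not the authors' pipeline.

```python
import numpy as np

# lobe_timecourses: dict mapping a lobe name to that lobe's mean BOLD
# timecourse (1D arrays of equal length) for one participant.

def lobe_connectivity(lobe_timecourses):
    """Lobe x lobe correlation matrix for one participant."""
    names = sorted(lobe_timecourses)
    data = np.vstack([lobe_timecourses[n] for n in names])
    return names, np.corrcoef(data)

def connectivity_difference(imager_tcs, aphantasic_tcs):
    """Imager minus aphantasic connectivity; positive entries correspond to
    the red cells in Figure 6. The Fisher Z-transform is an assumption."""
    names, r_imager = lobe_connectivity(imager_tcs)
    _, r_aphant = lobe_connectivity(aphantasic_tcs)
    z_imager = np.arctanh(np.clip(r_imager, -0.999999, 0.999999))
    z_aphant = np.arctanh(np.clip(r_aphant, -0.999999, 0.999999))
    return names, z_imager - z_aphant
```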

