Behav Brain Res. 2024 Aug 5;471:115110. doi: 10.1016/j.bbr.2024.115110. Epub 2024 Jun 11.

Combined representation of visual features in the scene-selective cortex

Jisu Kang et al. Behav Brain Res. 2024.

Abstract

Visual features of separable dimensions conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene. Specifically, we focused on features important for visually guided navigation: direction and distance. Previous work has separately shown that directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how separate features are concurrently represented in the OPA. Participants saw eight types of scenes: four had one path and the other four had two paths. In single-path scenes, the path direction was either to the left or to the right. In double-path scenes, both directions were present. A glass wall was placed in some paths to restrict navigational distance. To test how the OPA represents path directions and distances, we took three approaches. First, the independent-features approach examined whether the OPA codes each direction and distance. Second, the integrated-features approach explored how directions and distances are integrated into path units, as compared to pooled features, using double-path scenes. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA's representations of single-path scenes were similar to those of other single-path scenes with either the same direction or the same distance. Representations of double-path scenes were similar to the combination of their two constituent single-path scenes, coded as combined units of direction and distance rather than as a pooled representation of all features. These results show that the OPA combines the two features to form path units, which are then used to build multiple-path scenes. Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple navigation-relevant features to represent a navigation file.
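To make the analysis logic concrete, the following Python sketch shows the core computation behind multi-voxel pattern similarity analysis: correlating patterns of voxel responses across conditions. The condition labels, voxel count, and random data are illustrative assumptions for exposition, not the authors' pipeline.

    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels = 200  # hypothetical ROI size

    # Eight conditions: four single-path scenes (direction x distance)
    # and four double-path scenes (left path x right path distances).
    conditions = ["L-near", "L-far", "R-near", "R-far",
                  "Lnear-Rnear", "Lnear-Rfar", "Lfar-Rnear", "Lfar-Rfar"]
    patterns = {c: rng.standard_normal(n_voxels) for c in conditions}

    def pattern_similarity(a, b):
        """Pearson correlation between two multi-voxel patterns."""
        return np.corrcoef(a, b)[0, 1]

    # e.g., similarity between two single-path scenes sharing direction only
    print(pattern_similarity(patterns["L-near"], patterns["L-far"]))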

Keywords: Multi-voxel Pattern Analysis; Navigation File; Occipital Place Area; Scene Perception; Visually Guided Navigation.


Figures

Figure 1.
Examples of images in the eight stimulus conditions (see Figure S1 for examples of different textures). (a) Four single-path scenes, each with a straight corridor directed towards the left or towards the right. A glass wall is placed in near paths. (b) Four double-path scenes, each with two corridors, one directed towards the left and the other towards the right. A glass wall is placed in near paths.
Figure 2.
(a) Independent-features approach. Voxel-wise neural similarities between single-path scene pairs that share the direction but not the distance (cells marked with 2) and pairs that share the distance but not the direction (cells marked with 1) were contrasted with similarities between pairs that share neither feature (cells marked with 0) to test for the representation of path direction and distance. (b) An illustration of the shared direction, shared distance, and no shared feature conditions. The left and right arrows represent left and right path directions; the short and long rails represent near and far distances.
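Continuing the hypothetical sketch after the Abstract, a minimal version of the independent-features contrast groups scene pairs as in Figure 2 and compares mean similarities; the pair groupings below follow the caption, while the helper names are assumptions carried over from that sketch.

    # Pair groupings from Figure 2 (single-path scenes only).
    shared_direction = [("L-near", "L-far"), ("R-near", "R-far")]
    shared_distance  = [("L-near", "R-near"), ("L-far", "R-far")]
    no_shared        = [("L-near", "R-far"), ("L-far", "R-near")]

    def mean_similarity(pairs):
        """Mean pattern similarity across a list of condition pairs."""
        return np.mean([pattern_similarity(patterns[a], patterns[b])
                        for a, b in pairs])

    # Direction coding: shared-direction pairs more similar than no-shared.
    direction_effect = mean_similarity(shared_direction) - mean_similarity(no_shared)
    # Distance coding: shared-distance pairs more similar than no-shared.
    distance_effect = mean_similarity(shared_distance) - mean_similarity(no_shared)
    print(direction_effect, distance_effect)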
Figure 3.
An illustration of the correctly integrated and incorrectly integrated conditions. Brackets represent the conjunction of features.
Figure 4.
(a) A schematic visualization of the linear combination (average) of the multi-voxel patterns of two single-path scenes with different directions and different distances. (b) Examples of neural similarities between the average representation of two single-path scenes and a double-path scene, one from the correct integration condition and one from the incorrect integration condition.
Figure 5.
Integrated-features approach contrast. Conditions for assessing the representation of direction and distance in multiple-path scenes. Neural similarities in the correct integration condition (cells marked with 1) were contrasted with neural similarities in the incorrect integration condition (cells marked with 0). Critically, all scene pairs share the same independent features (left, right, near, and far) regardless of condition, allowing feature integration to be tested over and above pooled features.
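Continuing the same hypothetical sketch, the integrated-features contrast of Figures 4 and 5 can be illustrated by averaging two single-path patterns and correlating the average with a double-path pattern; condition labels are the assumed ones defined earlier.

    # Average of two single-path patterns with different directions and
    # different distances (near-left and far-right).
    avg = (patterns["L-near"] + patterns["R-far"]) / 2

    # Correct integration: double-path scene with the same conjunctions
    # (a near-left path and a far-right path).
    correct = pattern_similarity(avg, patterns["Lnear-Rfar"])

    # Incorrect integration: the same four features (left, right, near,
    # far), but conjoined the other way round (far-left, near-right).
    incorrect = pattern_similarity(avg, patterns["Lfar-Rnear"])

    # Integration of features, rather than pooling, predicts correct > incorrect.
    print(correct, incorrect)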
Figure 6.
Integrated-paths approach contrast. (a) Conditions assessing the representation of path units within an entire scene. The two shared paths condition includes neural similarities between a double-path scene and the average of two single-path scenes that combine to form the same paths (same distance in the same direction; cells marked with 2). The one shared path condition includes neural similarities between a double-path scene and the average of two single-path scenes that share one path unit with it (cells marked with 1). The no shared path condition includes neural similarities between a double-path scene and the average of two single-path scenes that share no path unit with it (cells marked with 0). (b) An illustration of the two shared paths, one shared path, and no shared path conditions. Each rail represents a path unit, a combined form of the direction and distance features.
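The integrated-paths conditions of Figure 6 reduce to counting shared direction-distance conjunctions between a double-path scene and an averaged pair of single-path scenes. A self-contained sketch, with scenes represented as sets of hypothetical path units:

    # Scenes as sets of (direction, distance) path units; labels are illustrative.
    def shared_paths(scene_a, scene_b):
        """Number of path units (direction-distance conjunctions) in common."""
        return len(set(scene_a) & set(scene_b))

    double_path = {("L", "near"), ("R", "far")}
    avg_two  = {("L", "near"), ("R", "far")}   # two shared paths -> 2
    avg_one  = {("L", "near"), ("R", "near")}  # one shared path  -> 1
    avg_none = {("L", "far"), ("R", "near")}   # no shared path   -> 0

    for pair in (avg_two, avg_one, avg_none):
        print(shared_paths(double_path, pair))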
Figure 7.
A schematic visualization of the procedure within a condition block. A series of 12 images from one of the eight conditions was presented. Participants pressed a button when a red frame appeared along the border of an image, while paying attention to the path direction and navigational distance. There were 16 condition blocks in a run, and each participant completed ten experimental runs.
Figure 8.
Results of the univariate response analyses in the OPA and the PPA. Each letter of the scene-type labels (x-axis) denotes either a direction or a distance (L: left, R: right, N: near, F: far). Single-path scenes are labeled with two letters and double-path scenes with four letters.
Figure 9.
Results of the independent-features approach. Analysis comparing the neural similarities of the shared direction, shared distance, and no shared feature conditions. Similarities in both the shared direction and the shared distance conditions were significantly higher than in the no shared feature condition. A direct comparison of the shared direction and shared distance conditions in the OPA was not significant, showing no evidence of a bias toward one feature over the other. *q < 0.05
Figure 10.
Regression coefficient estimates from regressing data from the OPA and the EVC onto the RMSD model. The model predicts the EVC's similarity pattern better than the OPA's. *p < 0.05
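The RMSD model is not spelled out in the captions; one plausible reading, assumed here, is a pixel-wise root-mean-square difference between stimulus images used as a low-level predictor of pairwise neural similarity. A self-contained sketch with random stand-in data:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)

    def rmsd(img_a, img_b):
        """Pixel-wise root-mean-square difference between two images."""
        return np.sqrt(np.mean((img_a - img_b) ** 2))

    # Hypothetical stimulus images; 8 conditions -> 28 pairwise values.
    images = [rng.standard_normal((64, 64)) for _ in range(8)]
    model = np.array([rmsd(a, b) for a, b in combinations(images, 2)])
    # Fake neural similarities (higher RMSD -> lower similarity).
    neural = -0.3 * model + rng.standard_normal(model.size)

    # Ordinary least-squares regression of neural similarity onto the model.
    X = np.column_stack([np.ones_like(model), model])
    beta, *_ = np.linalg.lstsq(X, neural, rcond=None)
    print(beta[1])  # slope: how strongly the RMSD model predicts similarity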
Figure 11.
Results of the integrated-features approach. Analysis comparing the neural similarities between the average of two single-path scenes and a double-path scene in the correct integration and incorrect integration conditions (see Figure 5 for an illustration of the comparisons). The difference between conditions was marginally significant only in the OPA. *p < 0.05
Figure 12.
Results of the integrated-paths approach. Analysis comparing similarities between the average representation of two single-path scenes and a double-path scene in the two shared paths, one shared path, and no shared path conditions. The OPA showed the highest neural similarity in the two shared paths condition and the lowest in the no shared path condition, with the one shared path condition in between. All differences among the three conditions were significant in the OPA. *p, q < 0.05
