Conference Agenda
Overview and details of the sessions of this conference.
Session
104: Beyond the Plane: Dissecting the Influence of Increasingly Naturalistic Stimuli on Neurocognitive Processes
Session Abstract
Immersive technologies provide advanced opportunities to investigate human cognition under conditions that approximate the complexities of the real world. This symposium examines the progression from two-dimensional, static to three-dimensional, dynamic stimuli and discusses how varying levels of naturalism influence psychophysiological responses. The primary emphasis is on elucidating the impact of immersion on both early sensory processing and higher cognitive functions, contingent upon stimulus complexity and social significance. The contributions follow this progression across five studies. Joanna Kisker and Victoria Nicholls will establish a foundation by comparing electrophysiological responses to planar, virtual 3D, and real objects. Whereas Joanna Kisker focuses on the early stages of visual perception, Victoria Nicholls examines the impact of contextual congruence on object recognition. Felix Klotzsche, Leon Kroczek, and Merle Sagehorn will build on this foundation by exploring social cues of increasing relevance. Specifically, Felix Klotzsche investigates the impact of stereoscopic depth cues on face processing using multivariate decoding of EEG and eye-tracking data. Leon Kroczek will examine interpersonal synchrony, employing force-plate data to evaluate how dynamic, emotional agents and social relevance influence physical body sway. Finally, Merle Sagehorn will discuss the transition from static 2D to dynamic 3D avatars, focusing on the electrophysiological correlates of their encoding and retrieval. In synthesis, this symposium will demonstrate that the impact of immersion is not uniform; rather than suggesting an inherent superiority of immersive technologies, we show that its advantages are contingent upon the stimulus's complexity and relevance.
Presentations
Progressing from 2D to Real-World Perception: Electrophysiological Signatures of Early Visual Processing Across Planar, Virtual, and Real-World Objects
1Osnabrück University, Germany; 2Norwegian University of Science and Technology, Norway

Although Virtual Reality (VR) is increasingly used to simulate real-life conditions in experiments, this approach fundamentally depends on whether different modalities are perceived and processed in a similar way. In particular, the early visual response is highly sensitive to stimulus characteristics such as size and complexity. Whereas respective real-life characteristics, e.g., binocular depth, may elicit visual processes beyond those elicited by planar stimuli, VR might adequately simulate these features and, as a result, elicit similar visual processing. Therefore, we compared the event-related responses to abstract objects presented on a 2D desktop (PC), in VR, and as physical 3D prints (RL) to establish a foundational understanding of how early visual processing depends on modality-specific features. Indicating overall similar early visual processing, the canonical P1–N1–P2 complex was elicited across conditions. However, ERP amplitudes to objects presented in RL were more similar to those presented in VR than to those presented on PC, suggesting that stereoscopic depth cues could influence the magnitude of the processes at work. While the P1 response only distinguished PC from RL, the N1 response differentiated PC from both VR and RL, indicating sensitivity to depth information. Conversely, the P2 response uniquely distinguished VR, possibly reflecting stereoscopic visual fatigue. Although early visual processing appears qualitatively comparable across modalities, our findings demonstrate that immersive stimulus characteristics modulate the magnitude of the process and advocate for using VR when these characteristics are of primary interest.

Real-World Contextual Expectations Influence Neural Processing of Objects
1LMU, Germany; 2MRC Cognition and Brain Sciences Unit, University of Cambridge, UK; 3SilicoLabs, Montreal, Canada; 4Department of Biological Psychology and Neuroergonomics, TU Berlin, Germany; 5Department of Psychology, University of Warwick, UK

Objects in our day-to-day lives are typically experienced in certain locations, e.g., a toothbrush next to the bathroom sink, knowledge about which we acquire through experience in the form of schemas (e.g., scene grammar). This knowledge can help us in daily tasks such as object recognition or visual search, where objects are both recognised and found faster in expected compared to unexpected environments. However, most studies investigating this congruency effect have been conducted in well-controlled laboratory environments, where objects and scenes are presented together on a screen before being replaced by an unrelated object and scene, breaking the spatiotemporal coherence that structures real-life experiences. Studies have increasingly shown differences between neural processes in realistic environments and tasks and neural processes in the laboratory. Here, we aimed to push the boundaries of traditional cognitive neuroscience by tracking the congruency effect for objects in real-world environments, outside the laboratory. We investigated how neural activity is modulated when objects are placed in real environments using augmented reality while recording mobile EEG. Participants approached, viewed, and rated how congruent they found the objects with the environment. We found significant differences in ERPs and higher theta-band power for objects in incongruent contexts than for objects in congruent contexts. This demonstrates that real-world contexts affect how objects are processed, and that mobile brain imaging and augmented reality are effective tools for studying cognition in the wild.
EEG-Decodability of Facial Expressions in Immersive Virtual Reality
1Max Planck Institute for Human Cognitive and Brain Sciences, Germany; 2Humboldt-Universität zu Berlin, Department of Psychology, Berlin, Germany; 3Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; 4Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI), Department of Artificial Intelligence, Berlin, Germany; 5Department of Physics and Life Science Imaging Center, Hong Kong Baptist University, Hong Kong, China; 6Faculty of Education, National University of Malaysia, Bangi, Malaysia

What happens to neural face processing when we move from flat screens to immersive 3D setups? Here, we combine immersive virtual reality (VR), EEG, and eye tracking to test whether facial emotional expressions can be robustly decoded from neural signals in a VR setting, and whether stereoscopic depth alters the underlying neural activity patterns. In an emotion recognition task, thirty-four participants viewed computer-generated faces in a VR headset under monoscopic (2D) and stereoscopic (3D) conditions. Using time-resolved multivariate decoding of broadband ERP data, we classified facial expressions alongside task-irrelevant features (identity and depth condition). Concurrent eye tracking allowed us to assess potential oculomotor confounds. Facial expressions were reliably decodable, with sustained above-chance performance throughout the stimulus epoch. Decoding performance did not differ between 2D and 3D conditions, and classifiers generalized across them. At the same time, the depth condition itself was well decodable from EEG, demonstrating that depth information was represented although it did not modulate expression representation. Source localization revealed occipito-parietal activation patterns across viewing conditions. Eye tracking enabled decoding of expression and identity, but not depth condition, suggesting complementary feature representations in EEG and eye-tracking data.
Together, these findings demonstrate the use of EEG in immersive VR as a sensitive tool for decoding visually complex social signals in increasingly naturalistic conditions. Moreover, they suggest that stereoscopic depth is processed as an independent feature, leaving the neural representation of (static, clearly distinguishable) facial expressions largely invariant.

Emotional Modulation of Body-Sway Synchronization
1Clinical Psychology and Psychotherapy, University of Regensburg; 2Institute of Sport Science, University of Regensburg

Behavioral synchronization is frequently observed in social interactions, yet the conditions under which it occurs are still not fully understood. The present study investigated the influence of facial emotional expressions on the synchronization of body sway using a virtual reality paradigm. Seventy-two participants encountered a virtual agent face-to-face while immersed in a CAVE-based virtual reality setup. The agent alternated between periods of immobility and periods in which it displayed body-sway movements at 0.25 Hz. The agent's facial expression varied between happy, angry, and neutral. In addition, a non-human control condition was included, consisting of a column moving with the same sway pattern. During the experiment, participants' center of mass was continuously recorded using a force plate, and subjective experience was assessed through valence and arousal ratings. Angry expressions were experienced as less pleasant and more arousing than both neutral and happy expressions. Frequency analysis of participants' body movements revealed an increase in power at the frequency of the agent's movement, demonstrating body-sway synchronization. Importantly, this effect interacted with emotional expression, such that synchronization was stronger for happy than for angry expressions. Interestingly, synchronization was also observed in the non-human control condition.
Overall, these findings suggest that behavioral synchronization can be elicited automatically but may be attenuated in aversive social contexts. Interpersonal synchronization may thus provide a flexible mechanism of social coordination that is sensitive to emotional expressions.

Facing Reality: Electrophysiological Insights into the Impact of Perceptual and Contextual Realism on Face Processing and Recognition in VR
1Experimental Psychology, Institute of Psychology, Osnabrück University, Germany; 2Experimental Virtual Reality Psychology, Department of Psychology, Norwegian University of Science and Technology, Norway

Human faces play a central role in social settings, surpassing other visual cues in their importance for interpersonal interactions. Using Virtual Reality (VR) for stimulus presentation allows the investigation of how cognitive processing adapts to varying levels of realism in such interactions, considering both visual and social features. Static faces presented in 3D via VR have been shown to engage distinct early perceptual and later conceptual processing compared with 2D faces on conventional computer screens, resulting overall in increased involvement of semantic information processing and social cognition. Expanding these findings requires a closer simulation of real-life personal encounters in terms of dynamics and probability, as well as an investigation of longer-term effects on face recognition. The combined use of mobile EEG and virtual avatars enables participants to encounter individuals either as dynamic 3D stimuli or static 2D images, all within the same virtual environment. It also allows the study of stimulus processing during encoding and of mnemonic mechanisms during retrieval, using event-related potentials (ERPs) and induced oscillatory responses.
Ultimately, reactivating 3D face representations involves both identity-related and semantic recognition processes, along with increased allocation of attentional resources. The realistic features of the 3D faces, such as visual complexity and social relevance, promote the formation of modality-specific engrams, resulting in better recognition compared with their 2D counterparts. Therefore, the complexity of face stimuli and their contextual embedding significantly influence the cognitive mechanisms involved in processing, from initial perception to the recognition of faces.
