Conference Agenda

Session
Concurrent Session 19 - Visual Perception 2
Time:
Wednesday, 09/July/2025:
9:00am - 10:00am

Session Chair: Daniel Bor
Location: EXPERIMENTAL THEATRE HALL


Presentations
9:00am - 9:10am

Conscious Intention to Predict Modulates Neural Prediction Error Processing but not Visual Representations

Chen Frenkel, Leon Y. Deouell

The Hebrew University of Jerusalem, Israel

Predictive processing theory proposes that the brain generates predictions that are updated following prediction errors: the difference between predicted and actual sensory input. In this framework, high-level top-down predictions form the perceptual experience. The theory casts the brain as an active agent; however, it remains unclear how conscious intentions to predict upcoming sensory input modulate prediction and prediction error.

To investigate this question, we examined neural dynamics during mostly passive viewing tasks. Over two sessions, participants (N=30) viewed stimuli varying in category and color while undergoing EEG recording. Infrequent, unexpected response trials, excluded from EEG analysis, ensured minimal motor interference. In each session, either color or category was sequentially predictable while the other varied randomly. We manipulated conscious intention to predict by instructing participants to either explicitly predict upcoming stimuli, judge current stimuli, or maintain previous stimuli in memory (discouraging prediction).

Behaviorally, participants were highly accurate on all tasks. Event-related potentials revealed sensitivity to sequence deviations, which varied significantly between tasks. Using machine learning, we decoded which task was performed and found persistent task representations during passive viewing, intensifying around stimulus onset. Whereas stimulus color and category were significantly decodable during early perceptual processing, decoding accuracy was not modulated by task, predictability, or attention.

Taken together, we find that conscious intention to predict is decodable from neural activity and modulates neural prediction errors, yet visual representations remain unaffected by it. This dissociation between intentional predictions and visual perceptual processing raises questions about the relationship between predictions and visual experience.



9:10am - 9:20am

Synergistic Broadband Dynamics Versus Redundant Gamma Oscillations During Visual Perception

Louis Roberts1, Juho Aijala1, Florian Burger1, Cem Uran2, Robin A.A. Ince3, Martin Vinck2, Dora Hermes4, Andres Canales-Johnson1,5,6

1Department of Psychology, University of Cambridge, United Kingdom; 2Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, The Netherlands; 3Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom; 4Department of Physiology and Biomedical Engineering, Mayo Clinic, USA; 5Neuroscience Center, Helsinki Institute of Life Science, University of Helsinki, Finland; 6Facultad de Ciencias de la Salud, Universidad Catolica del Maule, Talca, Chile

An unsolved problem of large-scale communication in the visual cortex is whether oscillatory (i.e. narrowband) or non-oscillatory (i.e. broadband) signals encode and communicate statistical image properties. Information theory provides a mathematical framework to decompose distinct types of information processing within and between neural signals. Here, we used information-theoretic measures to dissociate neural signals sharing common information (i.e. redundancy) from signals encoding complementary information (i.e. synergy) about images with higher or lower levels of spatial homogeneity. We analyzed electrocorticography (ECoG) signals in the visual cortex of human and non-human primates (macaque) and investigated to what extent narrowband gamma oscillations and broadband signals conveyed redundant or synergistic information about image homogeneity. In both species, the information conveyed by broadband signals was highly synergistic within and between visual areas. In contrast, the information carried by narrowband gamma oscillations was primarily redundant within and between the same areas. These results indicate that non-oscillatory signals, rather than narrowband gamma oscillations, integrate information about image properties across the visual cortex.
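The redundancy/synergy contrast described in this abstract can be illustrated with co-information, the simplest information-theoretic measure of this kind: I(X;S) + I(Y;S) - I(X,Y;S) is positive when two signals carry overlapping (redundant) information about a target S, and negative when they carry complementary (synergistic) information. The sketch below is a minimal plug-in estimator on toy discrete data; the abstract does not specify the authors' exact measure, and their analysis operates on continuous ECoG signals, so this is only a conceptual illustration:

```python
import math
from collections import Counter

def mutual_info(pairs):
    """Plug-in mutual information estimate (in bits) from paired samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def co_information(x, y, s):
    """I(X;S) + I(Y;S) - I(X,Y;S): positive -> net redundancy, negative -> net synergy."""
    return (mutual_info(list(zip(x, s)))
            + mutual_info(list(zip(y, s)))
            - mutual_info(list(zip(zip(x, y), s))))

# XOR target: neither signal alone is informative, but together they are -> synergy
x, y = [0, 0, 1, 1], [0, 1, 0, 1]
s = [a ^ b for a, b in zip(x, y)]
print(co_information(x, y, s))   # -1.0 (one fully synergistic bit)

# Copy target: both signals carry the same bit -> redundancy
print(co_information(x, x, x))   # 1.0 (one fully redundant bit)
```

Note that co-information conflates redundancy and synergy into a single net value; partial information decomposition methods separate the two, at the cost of additional assumptions.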



9:20am - 9:30am

Neurons in the Human Brain Encode Rapidly Learned Visual Information to Reshape Perception

Marcelo Armendariz1,2, Julie Blumberg3, Jed Singer1, Franz Aiple3, Jiye Kim1, Peter Reinacher3, Andreas Schulze-Bonhage3, Gabriel Kreiman1,2

1Boston Children’s Hospital, Harvard Medical School, MA, USA; 2Center for Brains, Minds and Machines, MA, USA; 3University Medical Center Freiburg, Freiburg, Germany

Humans can swiftly learn to recognize visual objects after only a few exposures. A striking example of rapid learning is the sudden recognition of a degraded black-and-white image of an object (Mooney image). These images are initially unrecognizable but become easily interpretable after a brief exposure to the original intact version of the image. Integrating recently learned information with existing knowledge necessitates forming enduring neuronal representations to enable future recognition. However, how learning-induced neuronal dynamics in the human brain give rise to such rapid perceptual changes remains poorly understood. Here, we recorded the spiking activity of neurons in medial occipital and temporal regions of the human brain in patients performing an image recognition task that involved rapid learning of Mooney images. Neurons in the human occipital cortex (OC) and medial temporal lobe (MTL) modulated their firing patterns to encode rapidly learned visual information and reshape perception. Population decoding revealed that occipital neurons resolved the identity of learned images at the cost of additional processing time, with delayed responses observed in MTL neurons. Our findings suggest that OC may not rely on feedback from MTL to support recognition following rapid perceptual learning. Instead, learning-induced dynamics observed in OC may reflect extensive recurrent processing, potentially involving top-down feedback from higher-order cortical areas, before signals reach the MTL. These results highlight the need for further computation beyond bottom-up visual input representations to facilitate recognition after learning and provide spatiotemporal constraints for computational models incorporating such recurrent mechanisms.



9:30am - 9:40am

Neural Signatures of Blindsight: The Role of Motion Coherence, Confidence, and Subcortical-Cortical Connectivity

Vanessa Hadid1,2,3, Annalisa Pascarella3, Dang Khoa Nguyen4, Karim Jerbi2,3, Franco Lepore2

1McGill University, Canada; 2Psychology Department, Université de Montréal, Canada; 3Computational and Cognitive Neuroscience Lab (CoCo Lab); 4Neurology Service, Centre Hospitalier de l’Université de Montréal

Background

Blindsight refers to the preserved ability of cortically blind (CB) patients to discriminate visual stimuli despite the absence of conscious vision. Although subcortical-to-cortical pathways can mediate motion processing, it remains unclear how stimulus difficulty (e.g., motion coherence) and subjective confidence contribute to varying levels of awareness. This study aimed to identify neural signatures distinguishing correct from incorrect detection and different degrees of subjective awareness in a motion detection task.

Methods

Five CB patients with post-chiasmatic lesions restricted to one hemifield performed a forced-choice random-dot motion discrimination task while undergoing magnetoencephalography (MEG). Task difficulty was modulated by varying motion coherence, with 75% of trials containing motion and 25% blank trials to assess d-prime (a sensitivity index measuring detection performance). Patients were cued to attend to either their intact or blind hemifield and reported motion direction and confidence. MEG source reconstruction extracted evoked responses, time-frequency features, and inter-regional connectivity via Granger causality. Machine learning (ML) models classified correct vs. incorrect responses and awareness states.
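The sensitivity index mentioned above, d-prime, contrasts the z-transformed hit rate on motion trials with the z-transformed false-alarm rate on blank trials. A minimal sketch follows; the trial counts are made up for illustration, and the log-linear correction shown is one common convention for avoiding infinite z-scores, not necessarily the one used in this study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hr) - z(far)

# Hypothetical session: 60 hits / 15 misses on motion trials,
# 5 false alarms / 20 correct rejections on blank trials
print(f"d' = {d_prime(60, 15, 5, 20):.2f}")
```

A d' of 0 corresponds to chance-level detection (hit rate equal to false-alarm rate); above-chance blind-field discrimination shows up as d' reliably greater than 0.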

Results

Patients discriminated motion above chance in their blind hemifield, with performance improving as motion coherence increased. ML analyses revealed distinct neural correlates of confidence: stronger subcortical-to-occipitotemporal connectivity predicted above-chance performance, while enhanced frontoparietal engagement correlated with higher confidence and was predominant when processing stimuli in the intact hemifield.

Conclusions

These findings indicate that stimulus coherence and confidence modulate residual motion processing in blindsight. Reorganized subcortical-cortical pathways support unconscious motion discrimination in CB. However, impaired higher-order integration limits full awareness, suggesting a graded continuum of perceptual awareness.



9:40am - 9:50am

Visual Conscious Awareness and Neural Processing Are Linked to Eye Metrics in Cerebral Blindness

Sharif I. Kronemer1, Victoria E. Gobo1,2, Shruti Japee1, Eli Merriam1, Benjamin Osborne3, Peter A. Bandettini1,4, Tina T. Liu1,5

1Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA; 2Baylor College of Medicine, Houston, TX, USA; 3Department of Neurology and Ophthalmology, Medstar Georgetown University Hospital, Washington, DC, USA; 4Functional MRI Facility, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA; 5Department of Neurology, Georgetown University Medical Center, Washington, DC, USA

Cerebral blindness is vision loss caused by damage to the primary visual pathway. A key area of study in cerebral blindness is the retention of blind field conscious awareness and residual neural activity to visual stimulation. These questions are motivated by cases where cerebrally blind people report degraded vision or non-visual sensations and achieve above-chance performance on visually guided tasks without conscious vision, known as blindsight. However, there is concern that common subjective methods of measuring visual conscious awareness are insensitive to degraded vision or non-visual sensations in cerebral blindness. An objective marker of conscious awareness could improve on the limits of subjective report. Previous research has shown that eye metrics can serve as a covert measure of consciousness. Correspondingly, we studied pupil size, blinking, and microsaccades as markers of conscious awareness and residual brain activity in cerebral blindness. Adult participants with homonymous hemianopia (N = 8) and matched healthy participants (N = 8) completed a visual perception task with simultaneous pupillometry and eye tracking. Applying standard pupillometry analysis and machine learning approaches, our results revealed two groups of cerebrally blind participants – blind aware and blind unaware – inferred from their eye metrics, even when behavioral performance and verbal report indicated otherwise. Furthermore, magnetoencephalography recordings revealed visual stimulus-evoked occipital cortical field potentials linked to blind field conscious awareness and eye metrics. These findings highlight the value of recording eye metrics in cerebral blindness to predict conscious awareness and residual neural activity and suggest possible clinical applications.



9:50am - 10:00am

Characterising Pre-activation of Expected Stimulus Representations in the Visual System

Morgan Kikkawa1, Carla den Ouden1, Maire Kashyap1, Giuliano Ferla1, Elizabeth Chang1, Mia Nightingale1, Jasmin Bruna Stariolo1, Marta Garrido1,2, Daniel Feuerriegel1

1Melbourne School of Psychological Sciences, The University of Melbourne, Australia; 2Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Australia

Statistical regularities in our environment can be used to generate predictions about future sensory events. Kok et al. (2017) reported that the brain generates multivariate patterns of magnetoencephalographic signals resembling stimuli expected to appear, even before they are presented. Further characterising this proposed phenomenon, known as pre-activation, can help us develop predictive processing models of conscious visual experience. However, pre-activation effects have not been replicated in comparable predictive cueing designs. We conducted two experiments with larger sample sizes (n=48, n=60, cf. n=23 in Kok et al., 2017) to replicate pre-activation effects and examine their properties. Both experiments used coloured rings to cue the most likely orientation of a subsequently presented grating while electroencephalographic data were collected. A second grating was then shown, and participants reported whether it matched the orientation of the first grating. Participants were also presented with blocks of randomly oriented gratings. In experiment one, we assessed pre-activation strength as a function of stimulus appearance probability. Experiment two introduced noise to the cued gratings to investigate whether pre-activation signals are more detectable when participants must rely more heavily on the cue. Pre-activation was assessed by training forward encoding models on neural responses to randomly oriented gratings and testing them on responses to cued gratings. In both experiments, we found significant decodable orientation information in post-stimulus responses. However, we found no evidence for pre-activation, failing to replicate the original effects. These findings suggest that pre-activation may arise only under specific conditions or be harder to detect than previously thought.
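The forward encoding approach described in this abstract can be sketched in its standard inverted-encoding form: orientation-tuned channel responses are regressed onto sensor data by least squares, and the fitted weights are then inverted to reconstruct channel responses on held-out trials. The simulation below uses made-up sensor weights, tuning parameters, and noise levels purely to illustrate the machinery; it is not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_channels = 32, 8
centers = np.arange(n_channels) * 180.0 / n_channels  # channel preferred orientations (deg)

def channel_responses(ori_deg):
    # Hypothetical tuning basis: rectified cosine raised to a power,
    # in doubled-angle space so tuning is 180-degree periodic
    d = np.deg2rad(ori_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0.0) ** 6  # trials x channels

# --- Simulate training data (stand-in for responses to randomly oriented gratings) ---
W_true = rng.normal(size=(n_sensors, n_channels))          # assumed sensor weights
ori_train = rng.uniform(0, 180, 200)
C_train = channel_responses(ori_train).T                   # channels x trials
B_train = W_true @ C_train + 0.1 * rng.normal(size=(n_sensors, 200))

# --- Fit the encoding model: least-squares sensor weights per channel ---
W_hat = B_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

# --- Invert on held-out (here: "cued") trials to reconstruct channel responses ---
ori_test = rng.uniform(0, 180, 50)
B_test = W_true @ channel_responses(ori_test).T + 0.1 * rng.normal(size=(n_sensors, 50))
C_hat = np.linalg.solve(W_hat.T @ W_hat, W_hat.T @ B_test)  # channels x trials

# --- Read out orientation from the reconstructed channel profile (population vector) ---
phase = np.exp(2j * np.deg2rad(centers))
decoded = np.mod(np.rad2deg(np.angle(phase @ C_hat)) / 2, 180)
err = np.abs(decoded - ori_test)
err = np.minimum(err, 180 - err)  # circular error in orientation space
print(f"mean absolute decoding error: {err.mean():.1f} deg")
```

Testing for pre-activation amounts to applying the inversion step to the pre-stimulus time window of cued trials and asking whether the reconstructed channel profile already peaks at the cued orientation.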