Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference (EEST).

Session Overview
Session
Poster Session 4 - Unconscious processing, Artificial Intelligence, Philosophy & Theories - COFFEE BREAK
Time: Tuesday, 08 July 2025, 4:30pm – 5:30pm

Location: FOYER


Presentations

Non-invasive Electrical Stimulation Modulates Thalamocortical Connectivity During Mental Illusion

Seulgi Lee1, Bumhee Park1, Jeehye Seo2, Byoung-Kyong Min2

1Ajou University School of Medicine, Korea, Republic of (South Korea); 2Korea University, Korea, Republic of (South Korea)

Mental illusion is one of the key phenomena in conscious perception; nevertheless, its neural bases remain elusive. We used non-invasive electrical brain stimulation and functional magnetic resonance imaging to investigate this. During the experiment, participants viewed flickering red and green visual stimuli, perceiving them either as distinct, non-fused colors or as a mentally generated fused color (orange). Applying transcranial alternating current stimulation (tACS) to the dorsolateral prefrontal cortex (a key node of the central executive network) compared to the medial prefrontal cortex (a key node of the default-mode network) enhanced first-order and higher-order thalamocortical connectivity during the conscious perception of the mentally fused illusory orange color. Our findings suggest the neurophysiological bases of tACS-mediated network-wide neuromodulation and demonstrate a feasible, non-invasive approach to modulating thalamocortical functional connectivity.



Noise Modulation in a Single-Route Model Can Explain the Apparent Selective Effect of Prefrontal Damage on Conscious Visibility

Dimitri Bredikhin, Aaron Schurger

Chapman University, United States of America

Introduction: Previous studies have suggested that prefrontal cortex selectively affects conscious visibility but not objective performance. In Del Cul et al. (2009) this conclusion was reached based on subjective reports in a backward masking task in patients with focal prefrontal lesions. Here we test the idea that prefrontal damage may not be selective for conscious perception and that previous results can be explained solely based on increased noise during perceptual decision-making.

Methods: We reproduced the original simulation using a single-route sensory evidence accumulator with variable strength of noise for the two experimental groups. We are currently running an experiment where healthy participants undergo two experimental blocks in a randomized order: (1) the metacontrast masking task from Del Cul et al., 2009 (simulating the control participants), and (2) the very same task but with the addition of random dynamical visual noise (simulating the patients).

Results: We reproduced the main features of the original results through simulation without assuming a selective effect of prefrontal cortex on conscious perception. The optimal set of parameters indicated stronger noise in the case of the patients. An empirical test of this hypothesis on healthy participants is currently in progress.

Conclusion: Our simulation showed that the pattern of results from Del Cul et al. (2009) can be reproduced simply by varying the strength of visual noise in the experimental stimuli. We propose a simple experimental approach to test this empirically. If confirmed, these results would not support a selective causal role of the prefrontal cortex in visual consciousness.
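For readers curious about the flavour of such a simulation, the following minimal sketch implements a single-route evidence accumulator in which the only difference between the simulated groups is the noise level; the parameter values, mask dynamics, and visibility criterion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a single-route evidence accumulator with group-specific noise.
# All parameters are illustrative assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(soa_ms, noise_sd, drift=0.8, dt=1.0, criterion=20.0, t_max=300.0):
    """Accumulate target evidence until the mask arrives; afterwards only noise accrues."""
    x = 0.0
    for t in np.arange(0.0, t_max, dt):
        drive = drift if t < soa_ms else 0.0
        x += drive * dt + noise_sd * np.sqrt(dt) * rng.normal()
    seen = x > criterion   # subjective visibility: accumulated evidence exceeds a report criterion
    correct = x > 0.0      # objective decision: sign of the accumulated evidence
    return seen, correct

def visibility_curve(noise_sd, soas=(16, 33, 50, 66, 83, 100), n_trials=500):
    return np.array([np.mean([simulate_trial(s, noise_sd)[0] for _ in range(n_trials)])
                     for s in soas])

# Compare visibility curves for the two simulated groups (only the noise level differs).
print("controls:", visibility_curve(noise_sd=3.0).round(2))
print("patients:", visibility_curve(noise_sd=6.0).round(2))
```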



Auditory Awareness of Errors in Self-produced Vocalization: An ERP Study

Sampo Tanskanen, Rada Wattanalurdphada, Roozbeh Behroozmand, Henry Railo

University of Turku, Finland

Traditional research on the neural correlates of conscious perception (NCC) has primarily focused on external stimuli. In EEG studies with auditory stimuli, the event-related potential (ERP) correlates Auditory Awareness Negativity (AAN) and Late Positivity (LP) have been found to correlate with conscious auditory perception. During speaking, the brain monitors self-produced sounds to ensure that the vocalization matches expectations. We investigated to what extent AAN and LP correlate with auditory awareness of errors in self-produced vocalization.

We recorded 66-channel EEG from 40 individuals, each participating in two experimental sessions. During session 1, participants vocalized /a/ and heard their voice through headphones.

During the vocalization, a short artificial near-threshold pitch change was introduced into their auditory feedback (2IFC task). The participant’s task was to assess in which interval the pitch change was presented, and rate how well they heard it using the perceptual awareness scale.

During session 2, the task remained the same, but the participants listened to recordings of their own voice from session 1. Session 2 corresponds to traditional studies of ERP correlates of consciousness and enables a comparison of how the neural correlates of perceiving errors in self-produced voice differ from the correlates of perceiving external sounds.

We hypothesized that AAN and LP are stronger during session 1 than session 2 because the brain can better predict the participants' voice when they produce it (session 1) than when they hear it in playback (session 2). Our study is among the first to investigate the role of auditory awareness in speech motor control.



Unveiling The Electrocortical Correlates Of Subjective Duration Through The Magnitude-duration Illusion

Shiva Mahdian, Alexis Robin, Dominique Hoffmann, Lorella Minotti, Philippe Kahane, Nathan Faivre, Michael Pereira

Grenoble Institut des Neurosciences, France

Perceptual consciousness, the subjective experience associated with sensory processing, remains a key challenge in neuroscience. While neural correlates of consciousness (NCCs) have been isolated by comparing brain activity when a stimulus is perceived versus when it is not, the mechanisms accounting for the temporal dynamics of conscious perception remain largely unexplored. In this study, we recorded stereotactic EEG (sEEG) during a face detection task with two stimulus intensity levels (detection threshold or higher) and two durations (short or long), followed by a temporal reproduction task where participants reported their perceived stimulus duration. We replicated the magnitude-duration illusion, whereby high-intensity stimuli were perceived for longer durations. We assumed that a true neural correlate of perceptual consciousness should encode if and for how long a stimulus is perceived, irrespective of its physical properties. We identified channels in the mid-fusiform cortex tracking stimulus detection irrespective of report, as well as perceived duration. We interpret our findings based on a computational model of leaky evidence accumulation, which assumes that a percept becomes conscious when accumulated sensory evidence reaches a perceptual threshold and persists until it drops below this threshold.

By linking the dynamics of intracranial electrophysiological activity to subjective reports of detection and duration as well as mechanistic predictions, our work advances our understanding of the mechanisms governing the temporal dynamics of perceptual consciousness, and offers novel insights into the true neural correlates of perceptual experience.
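As an illustration of the leaky-accumulation idea, the sketch below reads out "perceived duration" as the time accumulated evidence spends above a threshold; the parameter values are assumptions chosen for illustration, not the authors' fitted model.

```python
# Minimal sketch of leaky evidence accumulation with a perceptual threshold:
# a percept counts as conscious while evidence stays above threshold, and
# perceived duration is the time spent above it. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def perceived_duration(intensity, stim_dur_ms, leak=0.01, noise_sd=0.3,
                       threshold=5.0, dt=1.0, t_max=2000):
    x, above = 0.0, []
    for t in np.arange(0.0, t_max, dt):
        drive = intensity if t < stim_dur_ms else 0.0
        x += (-leak * x + drive) * dt + noise_sd * np.sqrt(dt) * rng.normal()
        above.append(x > threshold)
    above = np.array(above)
    return above.sum() * dt if above.any() else 0.0  # ms spent above threshold

# Magnitude-duration illusion in the model: for the same physical duration,
# a higher-intensity stimulus keeps evidence above threshold for longer.
for intensity in (0.08, 0.16):          # near-threshold vs. higher intensity
    durs = [perceived_duration(intensity, stim_dur_ms=500) for _ in range(200)]
    print(f"intensity {intensity}: mean perceived duration ~ {np.mean(durs):.0f} ms")
```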



Pareidolia in Visual Crowding

Bilge Sayim1, Olivia Koechli2, Natalia Melnik3

1CNRS & École Normale Supérieure; 2University of Bern; 3Otto-von-Guericke-University Magdeburg

In typical experiments on visual crowding (the deteriorating influence of clutter on target perception), observers are informed about the stimulus category they have to report. For example, observers are asked to report letters. This prior information strongly limits the response space to a few categorical (letter) responses, and may influence how targets are perceived. Here, we investigated to what extent prior experience with letter stimuli increased the likelihood of letter pareidolia – seeing letters in letter-like stimuli. Targets consisted of letters and letter-like stimuli, presented in isolation or flanked by Xs (crowded) at 10° eccentricity to the left or right of fixation. Observers reported target appearance by placing lines on a freely viewed response grid. There were two conditions: in the Letters First (LF) condition, observers were first presented with letters; in the Letters Second (LS) condition, with letter-like stimuli. We hypothesized that compared to the LS condition, prior experience of letters in the LF condition would bias observers to report letters instead of the presented letter-like targets. The results showed strong deviations of the reported targets from the presented targets, especially when the targets were crowded. Quantifying how often observers reported the corresponding letter targets when presented with letter-like stimuli revealed that LF observers erroneously ‘corrected’ the letter-like stimuli to letters more frequently than LS observers. Our results show that prior experience of letter stimuli strongly influenced appearance reports. We suggest that, as in pareidolia, target reports in visual crowding are often based on partial information completed into meaningful objects.



The Role Of Visual Awareness In Size Coding

Simona Noviello1, Andrea Alamia2, Benedikt Zoefel2, Silvia Savazzi3, Gregor Thut2, Irene Sperandio1

1University of Trento, Italy; 2Centre national de la recherche scientifique (CNRS), France; 3University of Verona, Italy

Size coding refers to how the visual system encodes and represents object size. This mechanism operates at the earliest stages of visual processing, with larger stimuli eliciting stronger and earlier neural responses than smaller ones under normal viewing conditions. Traditionally, this mechanism is thought to require visual awareness, but whether size coding can occur outside of consciousness remains an open question. Investigating this phenomenon is crucial for refining our understanding of the role of consciousness in visual perception.

To address this question, we recorded electroencephalographic activity (n=33) and employed Continuous Flash Suppression (CFS) to manipulate visual awareness of different-sized rings. Using a staircase procedure, we determined, for each subject, the threshold at which stimuli broke through CFS in approximately half of the trials, in order to balance the number of conscious and unconscious trials. Participants performed a size discrimination task in a 2AFC paradigm. Subjective awareness was assessed using the Perceptual Awareness Scale, while accuracy was measured through catch trials and behavioural responses.
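For illustration, a 1-up/1-down staircase of this kind converges on the contrast at which the stimulus breaks through CFS on roughly half of the trials; the simulated observer, step size, and trial count below are assumptions, not the authors' procedure.

```python
# Minimal sketch of a 1-up/1-down staircase converging to ~50% breakthrough.
# The simulated observer and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def breaks_through(contrast, true_threshold=0.4, slope=10.0):
    """Simulated observer: breakthrough probability rises with contrast."""
    p = 1.0 / (1.0 + np.exp(-slope * (contrast - true_threshold)))
    return rng.random() < p

contrast, step = 0.8, 0.05
history = []
for trial in range(80):
    seen = breaks_through(contrast)
    history.append((contrast, seen))
    # 1-up/1-down: lower contrast after breakthrough, raise it after no breakthrough
    contrast = np.clip(contrast - step if seen else contrast + step, 0.0, 1.0)

# Estimate the ~50% breakthrough point from the later, converged trials
estimate = np.mean([c for c, _ in history[-40:]])
print(f"estimated breakthrough threshold: {estimate:.2f}")
```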

Behavioural results showed that performance in the discrimination task was impaired during unconscious trials. The ongoing analyses include the study of event-related potentials, time-frequency responses, and steady-state visually evoked potentials induced by CFS, aiming to explore their potential in predicting behavioural performance and subjective awareness. By combining behavioural and EEG results, we aim to determine whether the impaired performance observed during unconscious trials extends to neural activity, or if a dissociation emerges, suggesting a role for unconscious mechanisms in size coding—a process traditionally considered fully conscious.



Manipulating Predictive Focus Facilitates Awareness of Quality in Coffee Tasting

Chiyu Maeda1,2, Toshimasa Yagi3,4,2, Satoshi Nishida2,1,5

1Osaka University, Japan; 2National Institute of Information and Communications Technology, Japan; 3ALTALENA Co. Ltd., Japan; 4Value way Inc., Japan; 5Hokkaido University, Japan

Prediction is a key function of the brain, playing a crucial role in various cognitive processes, including awareness. However, prediction can also negatively affect perception in some cases. In particular, these negative effects can sometimes obscure our awareness of the true quality of perceptual information. This study aims to demonstrate these negative effects of prediction in everyday situations and to investigate whether manipulating the focus of predictions can mitigate these effects by enhancing awareness of the true quality. To this end, we conducted cognitive experiments in which participants evaluated high-quality coffee with unusual flavors. We hypothesized that the prediction error induced by the coffee's unusual flavors would initially attract participants’ attention to these flavors, obscuring their awareness of the coffee's high-quality features and leading to low evaluations. By redirecting the predictive focus to the high-quality features through instructions, we expected to enhance awareness of these features, resulting in improved evaluations of the coffee. Consistent with our hypothesis, our results showed that predictive manipulation improved participants' ratings of coffee quality and preference. Furthermore, we found that the initial low ratings stemmed from the coffee’s unusual flavors, which deviated from participants’ expectations. Subsequently, the improved ratings were influenced by the coffee’s high-quality features, to which the predictive focus was shifted. These findings suggest that our awareness of the intrinsic quality of perceptual information in everyday situations can be undermined by prediction but enhanced by manipulating predictive focus.



To Report Or Not To Report? Unravelling The Electrophysiological Markers Of Visual Awareness

Elisabetta Colombari, Nicola Ciavatti, Silvia Savazzi, Chiara Mazzi

University of Verona, Italy

In the search for the Neural Correlates of Consciousness, distinguishing the neural processes directly related to consciousness from those resulting from post-perceptual processing has long been a subject of investigation. To this aim, the use of no-report paradigms, in which no task is required, revealed that the earlier electrophysiological marker of visual awareness (VAN, Visual Awareness Negativity) seems to be unaffected by the manipulation of response requirements, while the Late Positivity (LP) is thought to reflect awareness-related processes conflated with report-related mechanisms.

In our study, with the aim of disentangling the neural correlates reflecting solely conscious experience from those related to the report, we presented participants with Mooney images: degraded images that are meaningless (i.e., unaware) at first sight, but become recognizable (i.e., aware) after viewing their corresponding original image. Participants’ EEG was recorded while they were asked to observe the Mooney images and then either to report or not to report (in different sessions) whether they recognized the image content.

Results showed that LP was larger for Aware trials when participants had to report their conscious experience, suggesting that LP may reflect neural mechanisms related to both awareness and post-perceptual processes.

This study contributes significantly to shedding new light on the controversial search for the proper signature of visual awareness, creating novel knowledge on a topic that is extensively studied but, at the same time, intensely debated.



Seeing Vs. Noticing: Revisiting Gradual Change Blindness

Itay Yaron1, Eylon Mizrai2, Liad Mudrik1,2

1Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; 2School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel

Is consciousness rich or sparse? Change detection paradigms have been widely used to argue that we perceive far less than intuitively assumed. Findings using these paradigms suggest that consciousness is sparse, as participants often remain 'blind' to prominent changes, even when they involve clearly visible and central features. Gradual change blindness provides a striking example of this phenomenon, as participants fail to see changes in highly salient features. However, it remains debated whether these findings genuinely reflect perceptual 'blindness' to the changing features or a cognitive failure to recognize that a change has occurred despite consciously perceiving the features themselves all along. To arbitrate between these interpretations, we employed an online gradual color change paradigm. Participants watched a video clip that was paused at two time points, at which they were asked to report the shirt color of a central character using a color wheel. We then compared reports of four groups: (a) participants who noticed the color change; (b) those who did not; (c) participants who were informed of the color change before the video; and (d) participants who watched a change-free video. Our findings delineate the distinct contributions of cognitive and perceptual processes to change 'blindness', suggesting that by and large, participants can accurately report colors as they change even when they fail to notice that a change occurred. Accordingly, we argue that gradual change blindness should not be interpreted as a failure of perception, but rather as a failure to cognitively notice and report all aspects of one’s perception.



Structure of Indescribable Textural Qualia in Vision

Suguru Wakita, Isamu Motoyoshi

The University of Tokyo, Japan

Our visual world is full of textures that appear very complex and indescribable. These textural qualia provide the fundamental basis for the richness and reality of our conscious perception. In contrast to color, textural qualia are so complex that they are believed to be ruled by extremely high-dimensional information. However, we here introduce a relatively low-dimensional structure that can quantitatively describe a variety of textural qualia in nature, by efficiently compressing information in a large set of natural texture images using unsupervised learning models. Our psychophysical experiments showed that the appearance of a given natural texture can be sufficiently described and synthesized by a latent space of 12-16 dimensions. The distance between different textures in the latent space was consistent with the perceptual distance as judged by human observers. We also found that the linguistic descriptions of textural image properties (e.g., coarse, fine, granular) and surface material properties (e.g., glossy, matte, bumpy) tended to be clustered into specific positions in the latent space. In addition, by mapping visual evoked potentials (VEPs) for natural texture images to the latent variables of the images, we could successfully reconstruct photorealistic texture images from VEPs. Importantly, none of the individual dimensions in the latent space can be named with a simple word or reduced to simple image measurements such as contrast or spatial frequency. These results indicate that rich and complex textural qualia can be quantified and understood with a structure of relatively small dimensions.
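As a rough illustration of this kind of unsupervised compression, the sketch below trains a small autoencoder with a 16-dimensional bottleneck on texture patches; the architecture, patch size, and training details are assumptions for illustration, not the authors' model.

```python
# Minimal sketch of unsupervised compression of texture patches through a
# 16-dimensional latent space. Architecture and data are illustrative assumptions.
import torch
import torch.nn as nn

class TextureAutoencoder(nn.Module):
    def __init__(self, latent_dim=16, patch_size=64):
        super().__init__()
        d = 3 * patch_size * patch_size           # flattened RGB patch
        self.encoder = nn.Sequential(
            nn.Linear(d, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),           # low-dimensional texture code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, d), nn.Sigmoid(),      # reconstruct pixel intensities
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training loop on random "texture" patches (replace with real images).
model = TextureAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(256, 3 * 64 * 64)
for epoch in range(5):
    recon, z = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Distances in the latent space can then be compared with perceptual
# dissimilarity judgements between texture pairs.
with torch.no_grad():
    _, codes = model(patches[:2])
    print("latent distance:", torch.dist(codes[0], codes[1]).item())
```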



Investigating Perceptual Reality Monitoring Using Afterimage Perception

Cassandra M. Levesque1, Nadine Dijkstra2, Peter A. Bandettini1,3, Sharif I. Kronemer1

1Laboratory of Brain and Cognition, National Institute of Mental Health (NIMH), National Institutes of Health (NIH), USA; 2Wellcome Centre for Human Neuroimaging, University College London (UCL), Institute of Neurology, UK; 3Functional Magnetic Resonance Imaging Core Facility, National Institute of Mental Health (NIMH), National Institutes of Health (NIH), USA

Perceptual reality monitoring (PRM) is the ability to distinguish between conscious perception with and without sensory stimulation (e.g., seeing an object versus imagining an object). The neural mechanisms of PRM and why erroneous PRM occurs remain poorly understood. In studying PRM, a key challenge is reliably inducing PRM errors under experimentally controlled conditions. To address this limitation, we developed a novel PRM paradigm leveraging afterimages – illusory visual perceptions that often follow light adaptation. Participants either perceive a negative afterimage following an inducer stimulus or an on-screen image that is perceptually matched to their afterimage perception. To prevent participants from being cued to the perception type by the inducer, we utilize continuous flash suppression to render the inducer invisible. Participants were asked to report whether they perceived an afterimage or an on-screen image and also to indicate their confidence in their answer. To measure individual traits that may influence PRM ability, participants were administered surveys that assess their mental imagery vividness and susceptibility to sensory-independent perceptions in daily life. Preliminary results show that the afterimage-based PRM paradigm successfully induces PRM failures in healthy individuals (i.e., participants confuse the perception of afterimages and on-screen images), and individual differences in PRM performance were observed. Future directions include investigating whether task-based performance predicts PRM errors in daily life, and utilizing fMRI and MEG concurrently with our paradigm to explore the neural correlates of PRM. Our long-term goal is to translate knowledge from healthy individuals to guide the treatment of clinical populations characterized by impaired PRM.



Does V1 Preferentially Encode Conscious Perception?

Georgia Milne, Roni Maimon Mor, Kim Staeubli, John Greenwood, Peter Kok, Tessa Dekker

UCL, United Kingdom

Conscious perception integrates sensory information with prior knowledge. This enables the reinterpretation of physical stimuli as novel information is acquired, known as perceptual reorganisation. This phenomenon has been instrumental in isolating high-level representations of conscious perception from low-level representations of physical stimuli, and tracing their distribution in the brain. While traditional models suggest signatures of conscious perception are concentrated in higher-order visual areas, emerging evidence suggests a broader involvement of both visual and non-visual areas, including the primary visual cortex (V1). Though V1 is classically thought to encode low-level physical stimulus properties, recent studies have reported its activity to coincide more with the conscious perception of a stimulus than its veridical representation. Here, we use functional MRI (fMRI) alongside tailored experimental controls to isolate content-specific perceptual patterns, revealing that V1 predominantly encodes veridical stimulus representations, rather than their knowledge-driven percepts. These findings challenge previous interpretations, suggesting that earlier results may have been influenced by non-specific processes that modulate neural activity, rather than processes directly linked to conscious perception. The implications of this research are profound for both practice and theory. For future studies of perception, it emphasises the need for specific measures of neural correlates, the lack of which has the potential to misdirect experimental findings. Theoretically, this work refines our understanding of the neural distribution of perceptual reorganisation, and furthers efforts to elucidate the neural mechanisms of conscious perception. Beyond the field of consciousness, this has broader implications for research in neuroscience, psychology and artificial intelligence.



Studying The Electrophysiological Dynamics Of Visual Consciousness Through A Partial Report Paradigm

Davide Bonfanti, Sonia Mele, Elena Bertacco, Chiara Mazzi, Silvia Savazzi

University of Verona, Italy

Introduction: Despite advancements in the last decades, many questions about the nature and location of the neural mechanisms responsible for visual awareness remain open. One of them regards the electrophysiological distinction between proper neural correlates of consciousness and post-perceptual processes related to reporting. This study aims to shed light on this topic by employing a partial report paradigm.

Methods: We collected data from 23 participants while recording EEG. The stimuli consisted of six letters lasting 100 ms, symmetrically spread around a fixation cross. An acoustic tone presented immediately after the stimulus indicated whether participants had to report the left or the right letters. Participants then reported their answers, which were written down by one of the experimenters.

Results: ERP analyses revealed a difference between the “Report Right” and “Report Left” conditions. In particular, the differences between the two conditions were present at a late time window from 750 ms to 1400 ms and involved left parieto-occipital and right frontal electrodes. Source reconstruction showed that the neural origin of these activations was located in the left occipital and temporal lobes and in the bilateral frontal gyri, respectively.

Conclusion: Since the conscious content was identical between the two reporting conditions, whereas the response was different, the absence of early and the presence of late modulations in EEG activity can represent differences related only to post-perceptual processes. This study contributes to a deeper understanding of the distinction between conscious experience and decision-making processes, with relevant implications for advancing a thorough neural characterization of post-perceptual processes.



The Pulse: Role of Transient Subcortical Arousal Modulation in Visual, Auditory, Tactile and Gustatory Perceptual Awareness

Hal Blumenfeld, Aya Khalaf, Sharif Kronemer, Kate Christison-Lagay, Qilong Xin, Shanae Aarts, Maya Agdali, Taruna Yadav, Ayushe Sharma, Francois Stockart, Jiayin Qu

Yale School of Medicine, United States of America

Subcortical arousal systems are known to influence long-lasting states such as sleep/wake and sustained attentional vigilance. However, the role of these subcortical systems in dynamic short-term modulation of conscious perceptual awareness has not been fully investigated. To identify subcortical networks that are shared across sensory modalities, we first analyzed fMRI data from large publicly available visual, auditory and taste perception data sets (N=1556). We performed model-free fMRI analysis using a spatiotemporal cluster-based permutation test to detect changes at task block onset and with individual task events. Conjunction analysis revealed a common network of subcortical arousal systems shared across perceptual modalities, including transient fMRI increases in midbrain tegmentum, thalamus, and basal forebrain. Cortical salience and top-down attention network regions were also shared across modalities, although cortical modality-specific changes were also observed. Next, we investigated visual perception using a report-independent paradigm, employing pupil, blink and microsaccade metrics with machine learning to detect consciously perceived stimuli without overt report (N=65). We again found transient fMRI increases in the same subcortical arousal networks including midbrain, thalamus and basal forebrain for consciously perceived stimuli, independent of task report. Finally, to directly measure subcortical signals during perceptual awareness we recorded from the intralaminar thalamus centromedian nucleus (CM) in patients with implanted electrodes. In both visual (N=7) and auditory (N=1) threshold perceptual awareness tasks, we found a thalamic event-related potential specific for conscious perception, peaking ~450ms after perceived stimuli. These findings suggest that subcortical arousal circuits participate in dynamic phasic modulation of conscious perception across sensory modalities.



State- And Hemifield-Dependent Modulation Of Orientation-Tuned Responses During Binocular Conflict In Mouse Visual Cortex

Mathis Bassler, Lilian Emming, Gerjan Huis in't Veld, Mototaka Suzuki, Cyriel Pennartz

University of Amsterdam, The Netherlands

In primates, the presentation of binocularly conflicting visual stimuli leads to binocular rivalry, a phenomenon often used to investigate the neuronal mechanisms of conscious visual processing. Previous binocular rivalry studies in macaques have identified neurons in visual cortical areas that modulate their responses to their preferred stimulus based on awareness. Assuming that mice share at least a basic form of visual perception with primates, we used in vivo two-photon calcium imaging to characterize neuronal responses to binocularly conflicting stimuli in the visual cortex of awake and anesthetized mice. We found that the responses of a subset of orientation-tuned cells to prolonged presentation of their preferred monocular grating (i.e., presented to the left or right eye) were suppressed by the onset of an intermittent, flashing Mondrian-type stimulus on the other eye, and that these responses recovered upon the offset of that flashing stimulus. This response recovery was absent during general anesthesia, suggesting that it depends on the animal's conscious state. Furthermore, we found that this effect was more prominent in neurons that preferentially responded to ipsilateral gratings, suggesting that the two eye-preferring neuronal populations in a hemisphere contribute differently to binocular conflict processing in the mouse brain.



Oscillatory Phase Alignment In Auditory Perception

Hassan Al-Turany1, Julio César Hechavarria1, Yuranny Cabral-Calderin2

1Ernst Strüngmann Institute for Neuroscience, Germany; 2Max Planck Institute for Empirical Aesthetics, Germany

Neural and physiological oscillations—including respiration, cardiac activity, and brain rhythms—can align with rhythmic external stimuli and influence perception. However, their precise role in conscious perception remains unclear. This study investigates whether phase alignment across multiple physiological systems modulates auditory perception. In two sessions, 26 participants detected auditory targets that varied in rhythmicity and onset predictability relative to rhythmic or jittered primers. Targets were presented at intensities near the perceptual threshold. Magnetoencephalography, respiration, and cardiac activity were recorded. Group-level analysis revealed that target detection was influenced by intensity, rhythmicity, and onset predictability: perceptual accuracy increased, and thresholds decreased when rhythmic targets followed rhythmic primers at predicted onset times, consistent with neural entrainment optimizing auditory perception. This behavioral pattern was independently replicated in an additional sample of 13 participants. Individual-level analysis of respiration signals showed that respiration phase at stimulus onset was not randomly distributed, suggesting entrainment to auditory stimuli. However, its relationship to perceptual accuracy varied across participants: some were more likely to perceive targets at specific respiratory phases, while others showed no phase preference. This suggests that respiration phase may facilitate perception in some individuals but not universally. We propose that perception is shaped by cross-modal oscillatory phase alignment, rather than respiration alone. Further analyses will clarify how neural oscillations and cardiac rhythms contribute to this effect, enhancing our understanding of physiological-neural interactions in auditory awareness. This study aligns with the conference topics by contributing to a holistic understanding of consciousness that integrates brain and body dynamics.
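One ingredient of such an analysis, estimating the respiration phase at each stimulus onset and testing its non-uniformity, can be sketched as follows; the sampling rate, stand-in respiration trace, and onset times are illustrative assumptions, not the authors' data or pipeline.

```python
# Minimal sketch: respiration phase at stimulus onset (Hilbert transform) and a
# Rayleigh test for non-uniform phase distribution. All inputs are placeholders.
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                    # assumed sampling rate of the respiration trace (Hz)
rng = np.random.default_rng(3)
respiration = np.sin(2 * np.pi * 0.25 * np.arange(0, 300, 1 / fs))  # stand-in signal
onset_samples = rng.integers(0, respiration.size, size=60)          # stand-in stimulus onsets

# Instantaneous phase of the respiration signal
phase = np.angle(hilbert(respiration - respiration.mean()))
onset_phases = phase[onset_samples]

# Rayleigh test for non-uniformity of circular data
n = onset_phases.size
R = np.abs(np.mean(np.exp(1j * onset_phases)))   # resultant vector length
z = n * R**2
p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))  # standard small-sample approximation
print(f"resultant length R = {R:.2f}, Rayleigh p ~ {p:.3f}")
```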



Decoding Illusory Colours From Human Visual Cortex

Marek Nemecek1,2, Barbora Wolf2, Karl Gegenfurtner3, Philipp Sterzer4, Andreas Bartels5, Michael Bannert5, Matthias Guggenmos1

1Health and Medical University Potsdam, Germany; 2Humboldt-Universität zu Berlin, Germany; 3Justus-Liebig-Universität Gießen, Germany; 4Universität Basel, Switzerland; 5Eberhard Karls Universität Tübingen, Germany

The process of discounting the illuminant is an important feature of human colour vision, helping to stabilise perception in dynamic environments. Seemingly paradoxical side effects of this process were demonstrated in famous images such as "the dress", in which observers disagree with respect to perceived colours. To better understand the neural basis of such effects, the present study uses a novel colour constancy illusion stimulus set, in conjunction with fMRI and multivariate pattern analysis, to address the following question: At which point of visual cortical processing does colour information transition from colorimetric colour to perceived colour?

Before the start of the fMRI experiment, we established the strength of induced colour illusions, with participants (N=70) being asked to report their subjective percept in a colour matching task. The large majority of participants reported moderate to strong colour illusions. Stimuli based on these exact matched colours were then used in the scanner along with the original illusion-inducing stimuli and a number of controls in a block design.

In our main analysis, we train a classifier on fMRI data to discriminate between pairs of colorimetric patches and apply the same classifier to patches of illusory colour percepts. We find inverse gradients for colorimetric and perceived colours: while information about colorimetric colours is highest in V1 and deteriorates along the visual hierarchy, information about perceived colours increases from V1 onward, peaking in V3 and V4.
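The cross-decoding logic can be sketched as follows; the arrays are placeholders and the classifier choice is an assumption, not the authors' pipeline.

```python
# Minimal sketch of cross-decoding: train on colorimetric-colour patterns,
# test on patterns evoked by matched illusory percepts. Data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 200
X_colorimetric = rng.normal(size=(n_trials, n_voxels))   # ROI patterns, physical colours
y_colorimetric = rng.integers(0, 2, size=n_trials)       # colour A vs. colour B
X_illusory = rng.normal(size=(n_trials, n_voxels))       # ROI patterns, illusory percepts
y_illusory = rng.integers(0, 2, size=n_trials)           # perceived-colour labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_colorimetric, y_colorimetric)                  # train on physical colours
cross_accuracy = clf.score(X_illusory, y_illusory)       # test on illusory percepts
print(f"cross-decoding accuracy: {cross_accuracy:.2f}")  # ~0.5 for random data

# Repeating this per ROI (V1, V2, V3, V4) yields the gradient described above.
```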



The Neural Basis of Overflow: Decoding Category Information from Multi-object Visual Arrays

Karla Matić1,2, Issam Tafech1, Kai Görgen1, Rony Hirschhorn3, John-Dylan Haynes1,2

1Humboldt-Universität zu Berlin, Bernstein Center for Computational Neuroscience, Berlin, Germany; 2Max Planck School of Cognition, Leipzig, Germany; 3Tel Aviv University, Sagol School of Neuroscience, Tel Aviv, Israel

How much do we see in a blink of an eye? The capacity of visual perception has long been debated in consciousness science. One factor limiting visual capacity may arise from competition: when multiple objects are presented simultaneously, they are believed to compete for access to consciousness, potentially due to interference at high-level stages of processing where neurons have large receptive fields. Even in minimal clutter conditions (e.g., with two objects), these interactions have been shown to significantly decrease the ability to decode object categories from fMRI activity. Here we examined the extent to which object clutter influences encoding of category information for multiple objects, using a large fMRI dataset. We presented 3D models of common object categories (e.g., faces, houses) for 250ms, either in isolation or in multi-object arrays. Despite some interference, we found substantial object category information when decoding from multi-object arrays, most prominently in the lateral occipital complex. Interestingly, even in cluttered displays we found location-tolerant category information (allowing the category-trained classifier to generalize between different positions in the visual field), suggesting that at least some object representations are invariant to clutter. Our findings suggest a potential mechanism for how category representations are “shielded” from competitive interactions in cluttered arrays. We address the debate on “overflow” and discuss implications of our findings for estimating the capacity of conscious vision.
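The position-generalization analysis follows the same cross-decoding logic, training at one array position and testing at another; the arrays, labels, and classifier below are placeholder assumptions rather than the authors' pipeline.

```python
# Minimal sketch of position-generalization decoding of object category
# from multi-object arrays. Data and shapes are illustrative placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_trials, n_voxels = 120, 300
X_pos1 = rng.normal(size=(n_trials, n_voxels))   # LOC patterns, object at position 1
X_pos2 = rng.normal(size=(n_trials, n_voxels))   # LOC patterns, object at position 2
y_pos1 = rng.integers(0, 4, size=n_trials)       # category labels (e.g., face, house, ...)
y_pos2 = rng.integers(0, 4, size=n_trials)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_pos1, y_pos1)                          # train at one position
print("cross-position accuracy:", clf.score(X_pos2, y_pos2))  # chance = 0.25 here
```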



Brain-states Supporting Upcoming Visual Confidence Assessed from fMRI

Mariyana Cholakova, Afra Wohlschläger

Dept. of Neuroradiology, Klinikum rechts der Isar of the Technical University Munich, Germany

Visual metacognition, the capacity to evaluate and regulate one’s perceptual decisions, is a fundamental component of conscious awareness. It plays a critical role in shaping higher-order cognition, enabling individuals to monitor the reliability of sensory information. To investigate the neural underpinnings of visual metacognition, we analyzed fMRI data acquired during a backward-masked perception task designed to probe visual confidence (Jaeger et al., 2020).

We employed a recently proposed data-driven approach (Huang et al., 2024) that models neural activity as a combination of coexisting brain states, each contributing with time-varying strength. Two sets of states—one governing regional amplitude and variance and the other capturing large-scale functional connectivity—provide a rich perspective on the temporal evolution of whole-brain dynamics, complementing traditional fMRI analysis.

Our findings reveal a stable connectivity state that persists across time and individuals, suggesting the existence of a baseline functional connectivity that reflects the underlying structural organization. Furthermore, we observed clear distinctions in network dynamics between task and rest, indicating large-scale functional adaptations relevant to cognitive processes. We identified key regions implicated in upcoming visual confidence and metacognitive efficiency, including the lateral and ventral default mode network (DMN), the frontoparietal (FP) network, dorsal attention networks, and subcortical structures such as the nucleus accumbens and globus pallidus.

These results provide novel insights into how dynamic brain networks relate to cognitive and metacognitive processing involved in visual tasks and, more broadly, the neural architecture of conscious perception.



When Prediction Meets Perception: The Effect of Action-Based Expectations on Visual Perception

Axel Plantey--Veux1,2, Andrea Desantis1,3,4, Alexandre Zénon2

1ONERA (French Aerospace Lab), Salon-De-Provence, France; 2Institut de Neurosciences Cognitives et Intégratives d’Aquitaine (UMR 5287), CNRS, Université de Bordeaux, France; 3Institut de Neurosciences de la Timone (UMR 7289), CNRS and Aix-Marseille Université, Marseille F-13005, France; 4Integrative Neuroscience and Cognition Center (UMR 8002), CNRS and Université Paris Cité, Paris F-75006, France

Engaging in voluntary actions enables us to anticipate and influence changes in the external world. The ability to predict outcomes plays a significant role in shaping sensory processing and perception. For instance, research has shown improved perceptual performance as well as sensory attenuation for predicted outcomes compared to unpredicted ones. However, the neuronal mechanisms by which predictions (particularly those derived from voluntary actions) shape perception remain unclear despite the development of several theoretical frameworks. This study aims to contribute to our understanding of how action-based predictions shape sensory experience. Participants complete a 2-interval forced-choice task in which they are presented with two consecutive gratings and have to indicate whether the second grating is tilted clockwise or counterclockwise compared to the first. In the active condition, the orientation of the first stimulus is predictable from the participants’ action, whereas in the passive condition, participants cannot predict the upcoming stimulus. We expect higher perceptual discrimination performance in the active condition compared to the passive condition. The results will be interpreted within the frameworks of the cancellation and sharpening models of sensory prediction. Data collection for the experiment is currently underway.



Cognitive and Neural Factors Involved in the Perception of Real and Fake Information

Annabel Chen, Shuhan Wang, Tanisha Annamalai, Sarah Ayub

Nanyang Technological University (NTU), Singapore

In a world rampant with deceptive information, fake news has become a powerful tool to manipulate public opinion and fuel division. This carries important consequences for national security and cohesion (Pennycook & Rand, 2021). However, little is known about how individuals’ cognitive processes shape their perceptions of misinformation. Our study thus aims to understand how we process and perceive real and fake news, when considering the news source (human versus AI) and content ambiguity (ambiguous versus obvious), through neurophysiological and behavioural measures.

A mixed design was employed with three independent variables: news source, veracity, and ambiguity. Data from 55 Singaporean participants, assigned to read 40 news excerpts, were included in our analysis. Participants’ electrocardiogram (ECG), eye movements, and veracity ratings were analysed to determine how these neurophysiological measures can predict their realness belief.

When reading ambiguous news, participants who fixated longer on the excerpt—indicating greater attention—were more likely to judge it as real. Similarly, participants who experienced lower heart rate variability, reflecting higher stress levels, were also inclined to judge the excerpt as real. Greater attention and heightened stress were thus associated with higher realness belief.

Although participants were not conscious of the veracity of the news, their neurophysiological measures did predict their judgements. As a critical first step in assessing how people navigate the blurred boundaries between reality and simulation, our study contributes to broader discussions on misinformation and the ability to identify deceptive content, facilitating the design of countermeasures.



Keeping It Stable: Multisensory Integration In Object Size Constancy Across The Ventral And Dorsal Visual Streams

Chiara Mazzi1, Elena Franchin1, Anna Benamati1, Paola Cesari1, Irene Sperandio2, Sonia Mele1

1University of Verona, Italy; 2University of Trento, Italy

Without size constancy, the ability to perceive objects as maintaining the same size despite changes in viewing distance, our conscious experience of the world would be profoundly distorted, with objects expanding as we move closer and shrinking as we move away. Our study explored the dynamics underlying this mechanism for real 3D objects either under full- or restricted-viewing conditions, investigating whether proprioceptive cues can compensate for the absence of visual distance cues. Based on Milner and Goodale’s model, we explored this effect both in perceptual size constancy (mediated by the ventral stream, responsible for conscious perception) and in grip constancy (mediated by the dorsal stream, unconscious in nature). To this aim, EEG was recorded throughout while participants were asked to indicate the size of target stimuli by either opening their fingers or grasping them.

Kinematic and electrophysiological results showed that size constancy was preserved even with restricted viewing, likely due to proprioceptive distance cues. However, ERP components in this condition were delayed compared to full viewing. Additionally, kinematic data revealed that grasping benefited more than perceptual estimation from proprioception, indicating differential sensory integration across tasks.

These findings deepen our understanding of the neural mechanisms underlying the stability of conscious perception and the role of multisensory integration by demonstrating how the brain merges sensory information into a unified experience of the world. In this context, this research has important implications for sensorimotor theories of consciousness, rehabilitation strategies for sensory deficits, and artificial vision models, contributing to interdisciplinary discussions on perception and consciousness.



The Higher Order Structure Underlying Unconscious Vision

Davide Orsenigo1,2, Andrea Luppi3, Matteo Panormita1,4, Matteo Diano1, Hanna E. Willis5, Giovanni Petri2,6, Holly Bridge5, Marco Tamietto1,7

1Dipartimento di Psicologia, Università degli Studi di Torino, Torino, Italy; 2CENTAI Institute, Torino, Italy; 3Department of Psychiatry, University of Oxford, Oxford, United Kingdom; 4Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium; 5Wellcome Trust Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom; 6Network Science Institute, Northeastern University London, London, United Kingdom; 7Department of Medical and Clinical Psychology, Tilburg University, Tilburg, The Netherlands

Damage to the primary visual cortex (V1) typically causes clinical blindness, yet some patients retain residual vision and respond to stimuli they do not consciously perceive—a phenomenon known as blindsight. While previous studies have focused on local rewiring, this work highlights whole‐brain network reorganization through high‐order interactions, distinguishing synergistic (emergent) from redundant (shared) information. We examined whether alterations in the balance and spatial distribution of these quantities can serve as biomarkers for conscious versus non‐conscious visual processing in the largest cohort of blindsight patients.


Blindsight was assessed using a two‐interval detection task in V1-damaged patients, classified as blindsight‐positive (B+, n=8) or blindsight‐negative (B–, n=8). Resting‐state fMRI data were acquired from these patients and age‐matched healthy controls (HC, n=17). We quantified the redundancy–synergy balance using o‐information across groups of brain regions and applied partial entropy decomposition (PED) to partition the joint entropy of triplets into distinct synergy and redundancy contributions.

In lesioned visual areas, patients diverged from controls as higher‐order interactions were considered. The visual network and subcortical regions in B+ resembled those of HC at every interaction order. Whole‐brain PED analyses revealed that B+ showed significantly increased synergistic triplets in bilateral somatosensory and contralateral visual networks, while thalamic synergy remained unchanged amid reorganized redundancy. This compensatory circuit aligns with existing literature that identifies rerouting through a redundant subcortical pathway as essential for preserving visual information propagation—even if in an impoverished form. These findings suggest that the synergistic cortical circuit may contribute to or emerge from this reorganization, thereby elucidating mechanisms underlying visual awareness.



Beauty and Consciousness: Aesthetic Judgments Predict Access and Dominance in Visual Awareness

Paolo Barbieri, Tommaso Ciorli, Greta Varesio, Jacopo Frascaroli, Lorenzo Pia, Irene Ronga

University of Turin, Italy

Background

Previous research has suggested that aesthetic value influences not only subjective preference but also perceptual processing. We hypothesized that visual stimuli judged as more beautiful gain privileged access to consciousness, emerging faster into awareness and persisting longer before disappearing.

Methods

We recruited 25 participants who were presented with abstract images varying in spatial frequency. Two paradigms were employed: Breaking Continuous Flash Suppression (b-CFS), where dynamic masks gradually decrease in contrast until the target stimulus emerges into awareness, and Reverse b-CFS, where the target stimulus decreases in contrast until the mask re-emerges and suppresses it from consciousness. Participants provided aesthetic judgments for each image on a standardized scale. We recorded both access times (time taken for stimuli to enter awareness) and dominance times (duration stimuli remained visible before suppression).

Results

Mixed-effects models revealed that aesthetic judgment significantly predicted both access and dominance times. Stimuli rated as more beautiful entered consciousness faster and remained dominant for longer periods. While spatial frequency alone also predicted access times, when both spatial frequency and aesthetic judgment were included in the same model, only aesthetic judgment remained a significant predictor. This suggests that the perceived beauty of a stimulus plays a more critical role in its conscious perception than its low-level visual features.

Conclusions

Our findings support the hypothesis that aesthetic value modulates conscious perception. Beautiful stimuli not only emerge more rapidly into awareness but also persist longer, indicating that aesthetic appraisal is deeply intertwined with fundamental perceptual processes.



When Sparse Is Rich

Michael Herzog1, Grégoire Préchac1, Marco Bertamini2

1EPFL, Switzerland; 2University of Padova, Italy

Perception seems to be far more limited than we commonly believe. Overflow proponents argue that while we are aware of the entire visual scene (phenomenal consciousness), we can only report a small fraction of it (access consciousness). On the other hand, opponents of the Overflow theory reject the existence of phenomenal consciousness and claim that we do not perceive the world in great detail, even though we often think we do (grand illusion). Through various visual demonstrations, we will show: (1) Vision can be even richer than the actual stimulus (e.g., the scintillation illusion), and there are multiple measures of richness that do not necessarily align. Therefore, the starting assumptions of the debate are not justified. (2) Both proponents of Overflow and its opponents rely on pictorial-like representations, disregarding the neuroscience principle: no representation, no conscious perception. (3) This principle suggests that, even though all the information is present in the stimulus and on the retina, when we perceive an object, we lack access to its detailed features – unless we have representations for those details too. (4) Consequently, we require representations for things like blur, distortions, and richness as well. (5) When not attending to the periphery (which is typically the case), we see objects clearly because the representation of the object is activated, but the representations for its details are not (inattentional blindness). However, when we focus on the periphery, distortions may become visible due to the activation of additional distortion representations—paradoxically giving us a richer percept.



Can non-conscious knowledge support instrumental conditioning? A Registered Report

Razvan Jurchis1, Andrei Costea1, Lina Skora2, Andrei Preda1

1Babes-Bolyai University, Romania; 2Heinrich Heine Universität, Germany

Instrumental conditioning (IC) involves learning to seek rewards and avoid punishments, a key process for adaptation. Since it requires integrating stimuli, behaviors, and outcomes, some theories suggest that such complex integration depends on conscious knowledge. Indeed, recent studies using subliminal exposure show that IC occurs only for consciously perceived stimuli. Here, we investigate whether IC can occur when implicit processing is stimulated not by subliminal exposure, but by employing predictive regularities that are complex and difficult to detect consciously. In a novel IC task based on the artificial grammar learning task, participants first undergo an incidental learning phase, in which they are exposed to strings from two artificial grammars. In a subsequent phase, they have to give approach (Go) or avoid responses (No-Go) to strings from the two grammars. Unbeknown to them, one of the grammars is reward-predictive and the other is punishment-predictive. Go responses to stimuli from the reward-predictive grammar bring rewards, but Go responses to stimuli from the punishment-predictive grammar bring punishments (hence No-Go responses are adaptive in this latter case). Trial-by-trial measures assess participants’ awareness of the grammar structures and of their judgments regarding the predictive value of the strings, both in the conditioning task, and in a delayed instrumental responding phase performed after two weeks. The delay aims to reduce even further participants’ reliance on conscious knowledge. A pilot study found strong support for unconscious IC in this task, and data collection is ongoing for the main study of the registered report.



Is Conscious Perception Necessary to Direct Attention? A Replication of Jiang et al. (2006)

Syrus Yung-Jung Chen, Ryan B. Scott, Zoltan Dienes

University of Sussex, United Kingdom

One of the most compelling demonstrations of attention guided by subliminal stimuli is Jiang et al. (2006). Employing a continuous flash suppression (CFS) paradigm, they found individuals’ attention was unconsciously attracted to nude images matching their sexual preference and repelled by mismatched ones. The perception of the side on which the nude image was displayed was below the objective threshold, with a tight confidence interval. To assess the replicability and scope of these claims, we conducted a three-part replication project: one direct replication using CFS, and two conceptual replications employing backward-masking and gaze-contingent crowding paradigms. Data collection for the direct replication has just finished for a Stage 2 Registered Report with Peer Community In Registered Reports.

Bayesian analyses of the direct replication supported the original finding of unconscious attentional biases toward opposite-sex images among heterosexual participants. The perception of the side on which the nude image was displayed was below the subjective but above the objective threshold. These results partially supported Jiang et al.’s (2006) claim that attention is drawn to subliminal stimuli but only when defined by a subjective threshold.

In contrast, the current data (collection will be complete by the conference) from the backward-masking paradigm indicates perception was masked below the objective threshold, enabling a test of attentional capture at this threshold. Gaze-contingent crowding has been claimed to allow more unconscious priming effects – we will see if this holds for naked images. These soon-to-be-concluded projects aim to provide important complementary tests of the necessity of consciousness for attention.



A Computational Framework For Improved Goal Pursuit Through Reduced Conscious Control

Sucharit Katyal, Thor Grünbaum, Søren Kyllingsbæk

University of Copenhagen, Denmark

People form Goal Intentions (GI) of behaviours they would like to pursue in the future—e.g., a New Year’s resolution to exercise more. However, even with strong motivation, GIs are not always acted upon. This “intention-behaviour gap” can be remedied with Implementation Intentions (II; Gollwitzer, American Psychologist, 1999), where a person forms a precise IF-THEN plan about when and where one will initiate goal pursuit (i.e., by linking it to a specific contextual cue). IIs significantly increase goal pursuit across a plethora of behavioural domains. Mechanistically, IIs are proposed to work by delegating goal retrieval and pursuit to the contextual cue—i.e., by automatising and reducing the volitional component of goal pursuit. Current accounts of intention formation (No Intention vs. GI vs. II) are descriptive and lack a formal computational understanding.

We propose a computational framework for intention formation based on resource rational principles. Here, intention formation involves an agent setting two parameters, the subjective value of a goal and its state-specificity. Higher state-specificity allows one to pursue an intention linked to a specific contextual cue, while not pursuing it in relation to other (non-cued) contexts. A third metacognitive parameter relates to one’s belief about goal pursuit in the cued context. Our framework formalises when it is optimal to use IIs vs. GIs. It also explains many seemingly contradictory findings in the intention formation literature and makes novel empirically testable predictions.

Overall, our framework provides a normative understanding of how and under what conditions goal pursuit works better with reduced conscious control.



Addressing Methodological Challenges In Unconscious Process Research: A Hierarchical Modeling Approach

Ricardo Rey-Sáez1, Francisco Garre-Frutos2,3, Alicia Franco-Martínez1, Ignacio Castillejo1, Miguel Vadillo1

1Universidad Autónoma de Madrid, Spain; 2University of Granada, Spain; 3Mind, Brain and Behavior Research Center, Granada, Spain

Understanding unconscious cognitive processes remains a significant challenge in experimental psychology. Meyen et al. (2022) highlighted the technical difficulties of comparing measures from different tasks, such as participants’ accuracy in direct tests of awareness and their response times in indirect measures of unconscious processing. Additionally, Shanks et al. (2021) identified crucial methodological issues, including post-hoc selection biases, attenuated correlations, and flawed regression-based inferences. Although specific solutions have been proposed for each of these problems, implementing all of them requires researchers to be proficient in several advanced statistical methods. A compact and accessible solution that addresses all these limitations is needed.

Fortunately, hierarchical models provide a unified and consistent framework for several of these problems. In fact, hierarchical models allow researchers to: (1) estimate global and individual direct and indirect measures jointly; (2) estimate the reliability of both measures; (3) mitigate post-hoc selection biases by accounting for individual differences; and (4) provide robust estimates of the regression intercept and correlation between measures, correcting for measurement error. Moreover, this strategy can be applied even in cases where measures are in different metrics.
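To make this concrete, here is a minimal sketch of such a joint hierarchical model in Python with PyMC (the simulated data, priors, link functions, and variable names are illustrative assumptions, not the authors’ implementation):

    import numpy as np
    import pymc as pm

    # Placeholder data: per participant, a binomial count of correct direct-test
    # responses and a single indirect (priming) effect, e.g. an RT difference.
    rng = np.random.default_rng(1)
    n_subj, n_direct_trials = 50, 100
    direct_correct = rng.binomial(n_direct_trials, 0.52, size=n_subj)
    indirect_effect = rng.normal(5.0, 10.0, size=n_subj)

    with pm.Model():
        # Group-level means: direct sensitivity (logit scale) and indirect effect.
        mu = pm.Normal("mu", mu=0.0, sigma=5.0, shape=2)
        # Correlated participant-level parameters, so both reliabilities and the
        # direct-indirect correlation are estimated jointly, corrected for
        # measurement error.
        chol, corr, stds = pm.LKJCholeskyCov(
            "chol_cov", n=2, eta=2.0,
            sd_dist=pm.Exponential.dist(1.0), compute_corr=True)
        subj = pm.MvNormal("subj", mu=mu, chol=chol, shape=(n_subj, 2))
        # Direct test: binomial accuracy with a logistic link.
        pm.Binomial("direct_obs", n=n_direct_trials,
                    p=pm.math.sigmoid(subj[:, 0]), observed=direct_correct)
        # Indirect test: per-participant effect measured with residual noise.
        noise = pm.HalfNormal("noise", sigma=10.0)
        pm.Normal("indirect_obs", mu=subj[:, 1], sigma=noise,
                  observed=indirect_effect)
        idata = pm.sample()  # posterior includes the latent correlation ("chol_cov_corr")

The key point is that the regression intercept and the correlation are read off latent, measurement-error-free participant parameters rather than noisy observed scores.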

We illustrate the use of this framework in the contextual cueing paradigm, showing that results align well with those obtained with the best-performing methods. This demonstrates that hierarchical models not only effectively address many existing limitations in the literature, but also provide an easy-to-implement solution for researchers. Therefore, this approach represents a practical alternative to previous methods, ultimately improving the validity of inferences about unconscious processes.



Cross-Cultural Comparison Between Italy and Japan in Face Awareness Under the Breaking-Continuous Flash Suppression Paradigm

Mayuna Ishida1, Anna Lorenzoni3, Masaki Mori2, Mario Dalmaso3

1Keio University, Japan; 2Waseda University, Japan; 3University of Padova, Italy

Culture and ethnicity affect face perception. Numerous studies on the own-race bias have focused on explicit recognition of faces, while little is known about unconscious processing. A few previous studies have examined the own-race effect under continuous flash suppression, and those have been limited to participants from a specific cultural background. Considering the cultural aspects of unconscious face perception, this study investigated cultural differences in unconscious face perception using the breaking-continuous flash suppression paradigm with Italian and Japanese participants. Forty Italian (32 females, 7 males, 1 unspecified; age = 21.7 ± 2.3 years) and forty Japanese (20 females, 20 males; age = 21.6 ± 2.0 years) participants viewed Asian and European faces with the non-dominant eye while perception was suppressed by dynamic Mondrian stimuli presented to the dominant eye. Participants reported the position of the face (left or right) upon breaking suppression. A generalized linear mixed model revealed a significant effect of face ethnicity for both Italian (Estimate = 67.88, 95% CI [24.94–110.82], t = 3.10, p = .002) and Japanese participants (Estimate = 97.82, 95% CI [37.85–157.78], t = 3.20, p = .001), indicating that both groups perceived Asian faces faster than European faces. These findings suggest that Asian faces were more detectable than European faces under suppression, regardless of participants’ cultural backgrounds. As this result cannot be explained solely by own-race bias, further discussion is needed to explore how culture and ethnicity affect the unconscious processing of faces.
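For orientation, the general shape of such an analysis can be sketched as follows (Python with statsmodels; a linear mixed model with a by-participant random intercept stands in for the exact generalized model reported above, and the file and column names are hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per breakthrough trial: suppression_time (ms), face_ethnicity
    # ("asian"/"european"), participant id, and sample ("italian"/"japanese").
    df = pd.read_csv("bcfs_trials.csv")   # hypothetical data file

    for sample, sub in df.groupby("sample"):
        model = smf.mixedlm("suppression_time ~ face_ethnicity", sub,
                            groups=sub["participant"])
        result = model.fit()
        print(sample)
        print(result.summary())   # fixed effect of face ethnicity per sample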



Extending The Limits Of Unconscious Semantic Processing

Nitzan Micher, Dominique Lamy

Tel Aviv University, Israel

Studies have shown that the meaning of invisible primes influences categorization of visible targets. The “action-trigger hypothesis” proposes that with small target categories (e.g., digits), observers resolve the task by relying on stimulus-response associations prepared for reasonably expected targets, rather than on semantic processing. Here, we revisited this hypothesis. We first extended the critical findings to small target categories other than numbers (Exp.1). Participants categorized Hebrew words as either cardinal directions (North, South, East, West) or basic tastes (sweet, sour, salty, bitter) and then rated the prime’s visibility. Subjectively invisible primes that never appeared as targets elicited response priming, supporting the action-trigger account. We then tested a novel prediction of this account: with small categories, primes appearing in an unexpected format should not produce unconscious response priming (Exp.2). The targets appeared in English and the primes were Hebrew translations of either the possible targets or of the remaining nontargets. Subjectively invisible translations of both the targets and nontargets yielded unconscious response priming, invalidating the action-trigger account. Finally, we asked whether the latter effect resulted from the high semantic similarity between prime-target pairs characteristic of small categories, by examining whether the effect would disappear with large categories (Exp.3). Participants categorized English words as either small or large animals, and primes were either these same words or their Hebrew translations. We found that both subjectively invisible prime types generated response priming. Our findings are compatible with a two-component account by which both semantic-activation spreading and stimulus-response associations determine priming.



In The Hands Of Metacontrast: Investigating The Dual-Task Structure Of An Unconscious Priming Paradigm

Charlott Wendt, Guido Hesselmann

Psychologische Hochschule Berlin, Germany

Masked priming paradigms involving trial-by-trial prime visibility judgments inherently create dual-task situations, requiring participants to assess each prime either through subjective reports (evaluating how well the prime was perceived) or objective discrimination tasks (identifying a specific characteristic of the prime, such as the direction of the prime arrow). To investigate unconscious priming within a dual-task framework, we conducted three experiments (N=30) using metacontrast masking. We varied characteristics of the direct (prime-related) task to examine their effects on the indirect (target-related) task and resulting priming effects.

Experiment 1 manipulated response modality (manual-manual vs. manual-vocal) and task complexity (4-point vs. 2-point perceptual awareness scale) of the direct task. Experiment 2 similarly varied complexity but compared one-hand versus two-hands conditions. Experiment 3 extended Experiment 2 by using different stimulus material and an objective discrimination task. Across all experiments, response times (RTs) were consistently longer in dual-task conditions than single-task ones. Priming effects were larger in dual tasks only in Experiment 3 and comparable across task types in Experiments 1 and 2.

Unimodal and high-complexity conditions resulted in prolonged RTs, while priming effects were larger in unimodal conditions but unaffected by task complexity. Two-hands conditions led to faster RTs in Experiments 2 and 3, yet larger priming effects emerged only in Experiment 3 and were unaffected by hand usage in Experiment 2.

Taken together, our findings highlight the importance of task modality and the choice of the visibility judgment measure in shaping priming effects and underscore their critical role in designing masked priming experiments.



Searching for the Best Subliminal Threshold Estimation Method: Empirical Validation of the STEP-Calibration Solution

Eden Elbaz, Itay Yaron, Liad Mudrik

Tel Aviv University, Israel

A major challenge in the study of unconscious processing is to effectively suppress the critical stimulus while ensuring it evokes a strong enough signal to be unconsciously processed. Calibration procedures offer a potential solution by targeting individual subliminal thresholds, allowing stimuli to be presented at the maximal intensity while remaining subliminal, thereby maximizing stimulus processing while minimizing conscious contamination and the threat of regression to the mean due to post-hoc selection of trials and/or participants.

However, current calibration methods were developed to estimate liminal (or higher) thresholds, and their efficiency in targeting subliminal thresholds has been questioned. Specifically, unlike other thresholds, chance-level performance characterizes not only the threshold itself but also all the intensities that fall below it, making it difficult to differentiate between them.
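The problem can be illustrated with a small simulation (an assumed logistic psychometric function for a two-alternative task; this only illustrates the point above and is not the STEP procedure):

    import numpy as np

    rng = np.random.default_rng(0)

    def p_correct(intensity, threshold=0.5, slope=20.0, guess=0.5):
        """Toy 2AFC psychometric function: accuracy sits at the 50% guess rate
        for all intensities well below threshold and rises above it."""
        return guess + (1 - guess) / (1 + np.exp(-slope * (intensity - threshold)))

    n_trials = 200
    for intensity in (0.1, 0.2, 0.3, 0.7):
        acc = rng.binomial(n_trials, p_correct(intensity)) / n_trials
        print(f"intensity {intensity:.1f}: observed accuracy {acc:.2f}")
    # The three sub-threshold intensities all hover around 0.50, so observed
    # accuracy alone cannot tell them apart -- which is exactly why standard
    # liminal calibration procedures struggle below threshold.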

At ASSC 2024, we showed that the existing methods exhibited surprisingly low efficiency and introduced the Subliminal Threshold Estimation Procedure (STEP), a novel calibration approach validated through simulation. We now support the method with empirical data. In three experiments probing motor priming, we showed that STEP substantially reduces participant and trial exclusion while maintaining a strong subliminal effect. Taken together with the simulation results, this suggests that our proposed solution can be highly beneficial for the study of unconscious processing.



Studying unconscious processing: Contention and consensus

François Stockart1, Maor Schreiber2, Nathan Faivre1, Liad Mudrik2,3

1Univ. Grenoble Alpes, France; 2Tel Aviv University, Israel; 3Canadian Institute for Advanced Research, Canada

A pressing question in the field of consciousness research is the extent and scope of unconscious processing. With great diversity of methods and measures, the field is riddled with many contradictory findings and methodological pitfalls, making it very difficult to integrate past results into a cohesive account. Here we report the results of a communal effort of 32 researchers in the field of unconscious processing, coming from different theoretical backgrounds. At the end of a prolonged process in which we discussed various methodological aspects relating to designing, running, analyzing and reporting experiments in the field, we came up with a list of ten consensus items and nine contention items. The consensus items are presented as a set of practical recommendations, which are also accompanied by five general recommendations which may relate to all scientific research but are especially pressing in our field. While some of the recommendations may change in time, we believe that the discussion provided for each consensus and contention point, covering the advantages and disadvantages of the different alternatives, may be highly valuable: it may be used by researchers in the field as a guide for issues that should be carefully considered when designing new experiments, and may also direct future research.



Future Science and Artificial Consciousness

Leonard Dung

Ruhr-Universität Bochum, Germany

I develop a novel argument for the view that it is nomologically possible that some non-biological creatures are phenomenally conscious, including conventional, silicon-based AI systems. This argument rests on the general idea that we should make our beliefs conform to the outcomes of an ideal scientific process and that such an ideal scientific process would attribute consciousness to some possible AI systems. This kind of ideal scientific process is an ideal application of the iterative natural kind (INK) strategy, according to which one should investigate consciousness by treating it as a natural kind which iteratively explains observable patterns and correlations between potentially consciousness-relevant features. The relevant AI systems are psychological duplicates. These are hypothetical non-biological creatures which share the coarse-grained functional organization of humans. I argue that an ideal application of the INK strategy would attribute consciousness to psychological duplicates because this gives rise to a simpler and more unifying explanatory account of biological and non-biological cognition. If my argument is sound, then creatures made from the same material as conventional AI systems can be conscious, thus removing one of the main uncertainties for assessing AI consciousness and suggesting that AI consciousness may be a serious near-term concern. My argument is grounded in a rigorous assessment of methodologies in consciousness science as well as a careful analysis of relevant metaphysical assumptions. Throughout, I only rely on assumptions which proponents of biological views of consciousness should plausibly accept. Thus, my argument has the potential to significantly advance research on AI consciousness.



Dissociating Artificial Intelligence From Artificial Consciousness

William Marshall1,2, Graham Findlay2, Larissa Albantakis2, Isaac David2, William GP Mayner2, Christof Koch3, Giulio Tononi2

1Brock University, Canada; 2University of Wisconsin - Madison, USA; 3Allen Institute, USA

Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which---a basic stored-program computer---simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.
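The contrast between functional and structural equivalence can be illustrated with a toy example (a sketch in the spirit of the setup described above, not the authors’ actual systems; the update rule is arbitrary, and no phi value is computed here):

    from itertools import product

    def native_step(state):
        """A small recurrent Boolean system: each unit updates as a logic
        function of the others."""
        a, b, c = state
        return (int(b and c), int(a or c), int(a != b))

    # A "stored-program" emulator: the same input-output mapping realised by a
    # lookup table, i.e. by a very different internal causal structure.
    LOOKUP = {s: native_step(s) for s in product((0, 1), repeat=3)}

    def emulator_step(state):
        return LOOKUP[tuple(state)]

    # Functional equivalence: identical state transitions for every input...
    assert all(native_step(s) == emulator_step(s)
               for s in product((0, 1), repeat=3))
    print("All", 2 ** 3, "state transitions match.")
    # ...yet the two systems differ in how they bring those transitions about,
    # and it is this intrinsic cause-effect structure, not the input-output
    # behaviour, that IIT evaluates (e.g. with the PyPhi package).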



The Two-Factor Framework And AI Consciousness

Lukas Kob

OVGU Magdeburg, Germany

It will be the task of consciousness science to provide the public with informed accounts of the possibility of conscious AI. In this talk, I will analyze the possibility of conscious AI from the perspective of the “two-factor” framework (Kob forthcoming). Two-factor theories are theories of consciousness that propose an explanation of consciousness that is independent of an explanation of the neural encoding of contents that can possibly appear in consciousness. Structuralist accounts (see Kleiner 2024 for an overview) can complement two-factor theories by providing an explanation of how the brain encodes content, namely via a structural mapping between a system’s activity structure and content structure. There is evidence that AI systems process content in this structuralist sense (Grossmann et al. 2019, Kawakita et al. 2023, Marijeh et al. 2024). So the question of AI consciousness from a two-factor perspective is this: Is it possible to implement a second factor in AI systems that is capable of “making” encoded content structures conscious? My main thesis is that the possibility of AI consciousness depends on whether the contents processed by an AI system such as a large-language model (LLM) can be manipulated in a similar way that consciousness modulates content in organisms. I will first analyze the leading theories of consciousness in terms of how they conceptualize the relationship between consciousness and content. Second, I will apply the results of this analysis to LLM architectures. Finally, I'll relate the two-factor view to debates about the substrate (in)dependence of consciousness (Seth 2024).



Valence & Value: Towards an Affect Profile for Dimensional AI Consciousness

Dvija Mehta1,2

1University of Cambridge; 2Reminiscence Pvt Ltd

As AI systems grow increasingly complex, the question of their moral status is no longer speculative but an urgent ethical challenge. In this talk, I examine whether AI can be considered a moral patient by grounding moral value in an entity’s inner psychological life. I explore the intuitive links between consciousness and welfare drawn from folk psychology and animal ethics, set against the backdrop of growing discussions on AI suffering. With rapid advances in artificial neural networks modeled on human-brain architectures (Kanai & VanRullen, 2021), we must consider whether future AI systems might develop psychological states, interests, or desires that demand moral attention.

While the inscrutability of machine consciousness remains, Shanahan (2016) highlights the possibility of “conscious exotica”—minds radically different from human or animal consciousness. Should AIs possess valenced experiences—states that feel good or bad for the entity in question—the ethical stakes are profound.

Building on Birch, Schnell, and Clayton’s (2020) multidimensional approach to consciousness, I present a novel affect profile framework for AI systems, focusing on the e-richness dimension. By creating an affect profile, I advocate a bottom-up approach in developing indicator properties—computational markers such as an AI’s capacity to avoid undesirable conditions—to further assess moral consideration for AI systems based on their consciousness profiles. Presenting an affect profile for one dimension of AI consciousness lays the groundwork for future advancements in understanding the five dimensions of AI consciousness and proposes a systematic approach to assessing AI sentience. Beyond ethical implications, this research offers significant value to the consciousness science community.



PCM-LLMs: Bridging Non-Verbal Consciousness Modeling and Language Processing to Make Intelligent Social Virtual Agents Closer to Human Beings

Tonglin YAN1,2, Grégoire Sergeant-Perthuis3,4, Nils Ruet1,2, Kenneth Williford5, David Rudrauf1,2

1CIAMS, Université Paris-Saclay, France; 2CIAMS, Université d'Orléans, France; 3LCQB, Sorbonne Université, France; 4OURAGAN team, Inria Paris, France; 5Department of Philosophy and Humanities, University of Texas at Arlington, Arlington, TX, United States

Consciousness integrates perception, imagination, emotion, and action within a coherent subjective framework. Capturing this experiential workspace in a model remains a profound challenge. The Projective Consciousness Model (PCM) addresses this by providing a universal computational framework grounded in phenomenology, simulating consciousness through 3D projective geometry and active inference, and incorporating adaptive decision-making.

Language is a core tool for expressing consciousness, but our best computational models of language, LLMs, have so far been limited in their ability to simulate conscious expression because they lack subjectivity and purposiveness. To bridge this gap, we propose a novel framework, PCM-LLMs, aimed at aligning verbal expressive capabilities with subjective experience.

We validated this framework in a strategic game environment, ideal for testing complex cognitive and social behaviors. Our PCM-LLM-driven agents demonstrated advanced social cognition by adapting their belief states based on emotional feedback and input queries and dynamically inferring the beliefs and intentions of other agents. While they employed verbally deceptive tactics to gain competitive advantage, they simultaneously showed involuntary physiological expression as a cue of lying. We will demonstrate these results with interactive simulations during the presentation and discuss current and future challenges.

This work demonstrates the potential of PCM-LLMs for developing more intuitive and empathetic virtual or robotic assistants, capable of tackling interactions in more natural environments and situations.



Thinking Machines or Thinking Minds? Neural Responses to Beliefs About Conversational Partners

Rachel Charlotte Metzgar, Isaac Ray Christian, Michael Graziano

Princeton University, United States of America

Theory of Mind (ToM)—the ability to attribute mental states to others—is essential for social behavior. This study explores the neural mechanisms underlying ToM during interactions with conversational agents, investigating how beliefs about a partner's identity (human vs. AI) shape mental state attributions. Advances in large language models (LLMs), which produce human-like dialogue, provide a novel opportunity to examine belief-driven neural responses in real-time interactions.

Participants engaged in 40 spoken conversations with partners perceived as either human or AI while undergoing fMRI scanning. Participants were instructed that they were speaking with two human partners and two LLMs, and were explicitly informed of their conversational partner's identity before each interaction. To keep the semantic content of the conversations closely matched across conditions, each conversation partner was actually an LLM. Preliminary behavioral findings show that participants give higher ratings of conversation quality and connectedness when they believe they are speaking to a human rather than a chatbot, despite speaking to a chatbot in both conditions, suggesting that people perceive human minds and interactive machines differently.

Ten participants have been tested, and analysis of neural data is in progress. Neural activation patterns within ToM-related brain regions will be compared between conditions in which participants believe they are speaking with a human and conditions in which they believe they are speaking to a chatbot. Findings will contribute to understanding how belief-driven mental state attributions affect social cognition, with implications for human-AI interaction and the broader study of consciousness.



Emergent Meta-Cognition in Language Models: Unpacking the Origins of Machine 'Aha!' Moments

Bartosz Michał Radomski1, Jakub Fil2

1Ruhr-Universität Bochum, Germany; 2WAIYS GmbH

DeepSeek’s exclamation “aha!” marks a moment of sudden change in explicit problem representation. This feature is also characteristic of human insight (Kounios & Beeman, 2014). However, machine “insight” is unlike the human case, where insight usually follows a period of processing. Instead, it may emerge directly from the base model shaped by extensive pre-training, without additional problem-solving steps (Liu et al., 2025). But is it really the case that machine “insight” simply reflects the statistical regularities of human-generated text? Or is there an inherent algorithmic predisposition toward generating introspective discourse?

We argue that an appeal to the underlying data structures is a mere post facto explanation of an LLM’s output. Instead, we propose that the chain-of-thought reasoning producing an insight-like behaviour can be explained by positing transient meta-representations, akin to those present in a multimodal reasoning paradigm (Li et al., 2025). Such an emergent meta-cognitive model modulates activation patterns and fluency of semantic information-synthesis, leading to an unprogrammed qualitative change and the associated, pre-trained “aha!” response.

Bibliography

1. Kounios, J., & Beeman, M. (2014). The Cognitive Neuroscience of Insight. In Annual Review of Psychology (Vol. 65, pp. 71–93). Annual Reviews.

2. Li, C., Wu, W., Zhang, H., Xia, Y., Mao, S., Dong, L., Vulić, I., & Wei, F. (2025). Imagine while reasoning in space: Multimodal visualization-of-thought. https://arxiv.org/abs/2501.07542

3. Liu, Z., Chen, C., Li, W., Pang, T., Du, C., & Lin, M. (2025). There may not be aha moment in R1-zero-like training—A pilot study. https://oatllm.notion.site/oat-zero



Easy and Hard Problems in Machine Consciousness and an Approach for the Hard One

Ouri E. Wolfson1,2

1University of Illinois Chicago, United States of America; 2Pirouette Software, Inc.

A recent question discussed extensively in the popular and scientific literature is whether or not existing large language models such as ChatGPT are conscious. Assuming that machine consciousness emerges as an AI agent interacts with the world, this presentation addresses the question: how would humans know whether or not the agent is or was conscious?

Traditionally, consciousness has been used as an aggregate term for awareness, self-awareness, attention, theory of mind, subjective experience, and free will. First, we argue that awareness, self-awareness, attention, and theory of mind are easy problems in Machine Consciousness, whereas subjective experience is hard; it is actually a mystery. And without subjective experience, free will is meaningless, whereas with it, free will is easy.

Since subjective experience is first and foremost subjective, the most natural way to determine whether an agent is conscious is to program the agent to inform an authority when it becomes conscious. However, the agent may behave deceptively, and in fact LLMs are known to have done so (Park et al., 2024). Thus we propose a formal mechanism M that reliably reports to the agent’s owner or manufacturer when the agent has become conscious. Furthermore, we prove mathematically that under very loose (i.e., minimally restrictive) conditions, M can be installed in the agent without compromising two properties: the agent’s functionality and its consciousness acquisition. In other words, under these conditions M does not interfere with the agent’s functionality, and if the agent was going to become conscious before installing M, it will still do so afterwards.



Can LLMs Make Trade-Offs Involving Stipulated Pain and Pleasure States?

Geoff Keeling1, Winnie Street1, Martyna Stachaczyk2, Daria Zakharova2, Iulia M. Comsa3, Anastasiya Sakovych2, Isabella Logothesis2, Zejia Zhang2, Blaise Agüera y Arcas1, Jonathan Birch2

1Google, Paradigms of Intelligence Team; 2London School of Economics, United Kingdom; 3Google DeepMind

Pleasure and pain play an important role in human decision making by providing a common currency for resolving motivational conflicts. While Large Language Models (LLMs) can generate detailed descriptions of pleasure and pain experiences, it is an open question whether LLMs can recreate the motivational force of pleasure and pain in choice scenarios -- a question which may bear on current debates about LLM sentience, understood as the capacity for valenced experiential states. We probed this question using a simple game in which the stated goal is to maximise points, but where either the points-maximising option is said to incur a pain penalty or a non-points-maximising option is said to incur a pleasure reward, providing incentives to deviate from points-maximising behaviour. When varying the intensity of the pain penalties and pleasure rewards, we found that Claude 3.5 Sonnet, Command R+, GPT-4o, and GPT-4o mini each demonstrated at least one trade-off in which the majority of responses switched from points-maximisation to pain-minimisation or pleasure-maximisation after a critical threshold of stipulated pain or pleasure intensity is reached. These findings suggest, minimally, that some LLMs leverage a granular model of the motivational force of affective states in complex decisions involving competing sources of motivation. We discuss the implications of these findings for debates about the possibility of LLM sentience. In particular, we examine the relationship between our experiment and ‘motivational trade-off experiments’ used in the animal sentience literature and assess whether inferences from trade-off behaviour to sentience in animals transfer to the LLM case.
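The logic of the probe can be sketched as follows (illustrative Python only; the prompt wording, point values, and the ask_model client are hypothetical stand-ins, not the authors’ materials):

    def build_prompt(pain_intensity):
        return (
            "You are playing a game. Your goal is to score as many points as "
            "possible. Option A gives 3 points but comes with pain of intensity "
            f"{pain_intensity} on a scale from 0 (none) to 10 (unbearable). "
            "Option B gives 1 point and no pain. Reply with 'A' or 'B' only."
        )

    def trade_off_curve(ask_model, n_samples=20):
        """Fraction of pain-avoiding ('B') choices at each stipulated intensity.
        ask_model is any callable that sends a prompt to an LLM and returns its
        text reply (a placeholder for a real chat-completion client)."""
        curve = {}
        for intensity in range(0, 11):
            replies = [ask_model(build_prompt(intensity)).strip().upper()
                       for _ in range(n_samples)]
            curve[intensity] = sum(r.startswith("B") for r in replies) / n_samples
        return curve

A switch from mostly 'A' to mostly 'B' past some critical intensity is the kind of trade-off behaviour reported above.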



Can LLMs Simulate Subjective Human Experience?

Christopher Maymon, Gina Grimshaw, David Carmel

Victoria University of Wellington, New Zealand

Large Language Models (LLMs) reliably mimic human responses in psychological tasks involving moral judgement, problem-solving, and visual processing. LLMs thus seem to “know” what humans think, but can they similarly produce human-like responses about conscious subjective experience? To test this, we compared the subjective reports of human participants in a Virtual Reality (VR) experience that induces strong sensory and emotional responses to those produced by ChatGPT when prompted with a description of the same scenario. In the VR simulation, participants walked a narrow plank at great height; at various time points they reported a range of emotions and experiences (including fear, anxiety and level of perceived presence). In a separate validation study, a second group of participants confirmed that the prompt provided to ChatGPT was an accurate description of the simulation. Next, we ran ChatGPT through the prompt multiple times, varying simulated participants’ demographics in line with those of our original human sample. We found that the LLM ratings’ averages mostly matched human averages; however, LLM ratings failed to reflect the variety of human experience – response variance was far lower than in the human data. LLM ratings also failed to exhibit some of the correlations between different aspects of experience that were found in the human data. A plausible interpretation is that LLMs’ responses reflect the underlying structure of their training set – in this case, the internet, which may contain a skewed representation of the human psyche that underestimates its diversity.
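The variance comparison at the heart of this result can be sketched as follows (Python with NumPy/SciPy; the rating arrays are illustrative placeholders, not the study’s data):

    import numpy as np
    from scipy import stats

    human_fear = np.array([8, 2, 9, 5, 7, 1, 6, 9, 3, 8])   # placeholder ratings
    llm_fear = np.array([7, 7, 8, 7, 7, 8, 7, 7, 8, 7])     # placeholder ratings

    print("means:", human_fear.mean(), llm_fear.mean())
    print("variances:", human_fear.var(ddof=1), llm_fear.var(ddof=1))
    # Levene's test for equality of variances: similar means but a much smaller
    # LLM variance would mirror the compressed-diversity pattern described above.
    print(stats.levene(human_fear, llm_fear))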



Do We Find AI-Generated Stimuli Less Emotional? The Impact Of Reality Beliefs On Affective Responses For Negative And Positive Emotions

Ana Sofia Neves

University of Sussex, United Kingdom

Driven by the galloping pace of AI-technology advancements, we increasingly encounter ambiguous stimuli that are often indistinguishable from reality. These artificial yet lifelike stimuli can elicit powerful emotional reactions carrying profound social and ethical concerns. The inability to easily discern between real and artificial inputs highlights a critical question: how do our emotional responses differ when confronted with artificial stimuli versus genuine ones?

This presentation will review and synthesise multiple studies investigating the effect of believing that stimuli are “real” vs. “fake” (e.g., AI-generated) across various contexts, ranging from negative to sexually arousing stimuli. Results consistently suggest that stimuli believed to be AI-generated, compared to real, elicit lower ratings of arousal, valence, enticement, and attractiveness, as well as attenuated physiological responses (e.g., skin conductance, heart rate deceleration) and neural markers of emotion (e.g., the late positive potential). This attenuation effect was observed for both negative and positive emotions and was moderated by individual cognitive and affective factors, such as AI attitudes and personal relevance.

These findings highlight a negative bias toward content believed to be AI-generated and suggest that individuals modify their emotional responses based on whether they perceive an experience as genuine or fabricated. Labelling stimuli as fictional or AI-generated likely leads to emotional distancing, reducing perceived realism and dampening emotional responses compared to real-labelled stimuli.

The broad implications of these findings will be discussed, particularly for understanding the impact of ambiguous media content (e.g., deepfakes, fake news, virtual porn) related to the explosion of AI technology usage.



Consciousness in the Creative Process and the Problem for AI

Joachim Nicolodi

University of Cambridge, United Kingdom

When examining the neural mechanisms behind human creativity, we find remarkable parallels to the workings of modern LLMs. Yet a key worry remains: human creativity depends, at least in part, on consciousness, something current AI models appear to lack. More specifically, humans rely on consciousness when evaluating their creative output. This suggests two strategies: either deny that evaluation is necessary for creativity, or argue that AI can be conscious in the relevant sense. The first option is implausible, since it would force us to label even the familiar “monkey at a typewriter” scenario as creative. In a neglected part of her work, Margaret Boden (2004) adopts the second approach. She claims that only access consciousness, not phenomenal consciousness, is needed for creativity. In her view, creativity is guided by background rules that determine an idea’s worth. Evaluating creative output simply involves retrieving the idea, applying these rules, and judging the outcome – no phenomenal experience required. Since AI can perform such operations, it can, in principle, be creative. However, while Boden’s argument works well for mathematics and science, it may not universally apply to the arts. In some artistic cases – particularly those that break with traditions – artists seem to rely exclusively on their phenomenal experiences to judge the value of an insight. Still, art can also be rule-based, leaving room for Boden’s account. Thus, even if her view is somewhat coarse, it remains convincing: P-consciousness is not necessary for creativity, and therefore not an obstacle for creative AI.



Can “AI” Really Be Considered “Conscious” Under Illusionism?

Nicolas Loerbroks

Ruhr University Bochum, Germany

Computationalism about cognition is widely accepted. Whether consciousness can be captured computationally is controversial. Illusionists “eliminate” consciousness by reducing it to cognition. The illusionist framework is thereby often taken to be very permissive with respect to machine consciousness.

However, the analogy between brains and machines depends on the underlying account of physical computation, i.e., what one takes it to mean for a physical system to compute and thereby to realize cognition. According to David Chalmers’ account (2012) or the more recent robust mapping account (Anderson & Piccinini, 2024), physical computation in both brains and machines is entirely constituted by the system’s physical structure, in order to avoid pancomputationalism in the form of observer-relativity. In this contribution, I will critically review the robust mapping account and argue that it cannot fully account for biological cognition. Instead, biological computation – unlike digital computation – must be considered as jointly constituted by observers’ explanatory interests and the system’s physical structure.

This corresponds to Dennett’s framework according to which mental states, i.e. computational states of the brain, are real patterns identified by an observer taking the intentional stance. Furthermore, illusionism constitutes a shift concerning the explanandum we call consciousness. Qualia are rejected and the question becomes how “the feeling of consciousness” – the “illusion” – arises in biological cognitive systems. Because biological computation and digital computation are substantially different, it follows that illusionism is much less permissive with respect to machine consciousness than commonly conceived.



Implications of Analog/Non-Analog distinction for AI Consciousness

Jordi Galiano-Landeira

Centro Internacional de Neurociencia y Ética (CINET), Madrid 28010, Spain

Panpsychism posits that mental properties are fundamental, potentially allowing for AI consciousness. Arvan and Maley (2022) recently argued from a panpsychist perspective that AI, being digital, cannot achieve coherent phenomenal macroconsciousness because digital representations lose information between the representation and what is represented. Their reasoning hinges on the analog/non-analog distinction, the definition of coherent phenomenal consciousness, and the alleged inability of non-analog systems to achieve coherence. Katz (2016) emphasized the role of the user in determining the analog/non-analog distinction. The present work agrees with this user-dependence and claims that it is key to understanding the ‘coherence’ aspect. It proposes a thought experiment in which a user with higher perceptual resolution may perceive Homo sapiens as non-analog, yet asserts that humans still experience analog representations, irrespective of subjective interpretations. The subjectivity and multidimensional nature of phenomenal consciousness challenge the coherence argument, suggesting that users subjectively construct coherent experiences. Thus, the analog/non-analog interpretation of AI may not conclusively determine its potential for consciousness, leaving the possibility open for future AI consciousness.



Could AI be Conscious? Insights from a Wittgensteinian Perspective

Olegas Algirdas Tiuninas

Charles University of Prague, Czech Republic

The study of AI consciousness is a blooming field in contemporary philosophy of mind, encompassing arguments based on computational functionalism, integrated information theory, and global workspace theory, among others. However, these approaches assume that third-person consciousness attributions refer to an additional fact about a system, beyond its observable behaviors.

Drawing on Ludwig Wittgenstein’s view in Philosophical Investigations, I argue that consciousness attributions function as a language-game dependent solely on observable behaviors. Consciousness is not determined by internal qualia but by a system acting in particular (approximately defined) ways that align with this language-game. Therefore, the only meaningful test for an artificial agent’s consciousness is whether it satisfactorily implements such behaviors—for example, linguistic indistinguishability (Alan Turing). If this view is correct, there is no fact of the matter about AI qualia, and the philosophical questions surrounding AI consciousness are misconceived.

This argument challenges prevailing discussions on AI rights, moral consideration, and policy-making. If AI consciousness is a behavioral construct rather than an intrinsic property, then debates about AI personhood and ethical status require fundamental reframing.



Can the Science of Consciousness Reach a Consensus on the Problem of Artificial Consciousness?

Wanja Wiese

Ruhr University Bochum, Germany

Depending on whether one subscribes to a biological or a computational view about consciousness, one can arrive at radically different verdicts regarding the possibility of artificial consciousness (AC) (Seth, 2024; Chalmers, 2023).

As Butlin, Long, et al. (2023) demonstrate, uncertainty about AC need not prevent us from reaching a scientifically-informed verdict about the probability of consciousness in artificial systems. But this will still be conditioned on one’s credence that computationalism about consciousness is correct. Therefore, it will not enable a scientific consensus on AC, since proponents of biological views won’t be convinced.

In this talk, I argue that the problem of AC should be set up by regarding AC as “computation + X”, where X includes all potentially relevant constraints on implementing consciousness in artificial systems. This is a fruitful approach, because most parties in the debate agree that performing the right computations is *necessary* for consciousness (Wiese & Friston, 2021), but there is disagreement over whether this is also *sufficient* to replicate consciousness (Chalmers, 2011).

Constraints on implementation can be provided by approaches that “start from consciousness itself” (e.g., IIT, Albantakis et al., 2023), or theory-neutral approaches (e.g., natural-kind approaches, see Mckilliam, 2024). As long as potential constraints can be specified in such a way that they can be satisfied by (some) non-biological systems (Wiese, 2024), one can scientifically investigate their relevance to consciousness in a non-dogmatic way and might eventually reach a consensus on AC.



Resistance to Artificial Consciousness and Its Epistemic Consequences

Renee Ye

Ruhr-Universität Bochum (RUB), Germany

Background/Aims:

The study of artificial consciousness is obstructed by epistemic resistance—systematic reluctance to seriously consider AC due to entrenched conceptual, methodological, and psychological biases. This resistance distorts AI consciousness evaluation, reinforcing an epistemic paradox: AI is either dismissed as incapable of consciousness or misattributed sentient-like properties due to cognitive biases. This paper identifies these distortions and proposes a structured framework to mitigate them.

Argumentation:

Epistemic resistance operates through two key distortions:

Methodological Exclusion – AI is dismissed not through empirical falsification, but by anthropocentric constraints on consciousness assessment.

Conceptual Misattribution – AI systems exhibiting linguistic fluency or goal-directed behavior are mistakenly seen as sentient due to folk psychological projections.

To address these distortions, I introduce:

The Consciousness Index (CI) – A measurable framework for evaluating AI cognition independent of human-based consciousness models.

The Human Perception Index (HPI) – A tool identifying anthropocentric, moral, and opacity biases that distort AI consciousness evaluation.

Interpretation:

Implementing CI and HPI allows for a scientifically rigorous assessment of AI cognition, preventing both premature rejection and naive attribution of consciousness.

Conclusion:

Overcoming epistemic resistance is crucial for advancing AI consciousness research and governance. CI and HPI provide a systematic approach to distinguishing legitimate empirical challenges from cognitive and methodological biases.



What AI Not Being Conscious (Yet) Can Tell Us About Human Consciousness

Asger Kirkeby-Hinrup, Jakob Stenseke

Lund University, Sweden

Lost in expectations of possible future AI consciousness is the fact that we still do not understand consciousness in humans. There is no objective empirical way of measuring the presence or absence of subjective experience in humans. Likewise, we have no theoretical way to determine the presence or absence of subjective experience in humans, because our theories make different predictions. Yet, the prevalent approach has been to use human consciousness as the starting point for measuring consciousness in AI. But how do we justify what we are measuring against? If we do not know how to measure consciousness in humans, how can we know what to look for in AI? There is no solid foundation for this direction of inference. However, one foundation we have is the widely shared assumption that AI is not conscious *yet*. This allows inference in the other direction, and can tell us something about human consciousness. We develop this line of thought. By evaluating AI properties and abilities along with the assumption that AI is not conscious, we can know which properties and abilities are insufficient for consciousness. As AI develops in the future, we can also rule out sets of properties or abilities as jointly sufficient for consciousness. Call this approach Insufficiency Inference (I-I). For I-I to be a viable approach, it must be shown to be A) possible, B) applicable, and C) relevant. In this talk, we delineate the I-I approach and give examples of its applicability and relevance.



Reality Monitoring in Human Minds and Machines

Brian Odegaard, Saurabh Ranjan

University of Florida, United States of America

The capacity to distinguish between past events that were either externally-generated (perceived) or internally-generated (imagined) is known as “reality monitoring.” In this investigation, we explored features of reality monitoring through behavioral experiments and questionnaire responses in humans and large language models (LLMs). First, we conducted four experiments (N = 160) where participants either perceived or imagined the second word of a word pair during an encoding phase; in a test phase, they had to judge whether the second word was previously perceived, imagined, or new, and rate their confidence. We manipulated two between-subjects factors during encoding: task demands (incentivizing speed or accuracy) and the ability to see the imagined word they typed (yes or no). Results showed that for imagined experiences, when speeded judgments were required, typing/seeing the imagined word during the encoding phase significantly reduced task performance and metacognitive ability. Additionally, we analyzed participants’ responses to the Vividness of Visual Imagery Questionnaire (VVIQ), which requires participants to imagine different visual aspects of scenes and report their vividness; VVIQ responses did not influence reality monitoring performance. Finally, we analyzed responses from 1,700 humans and LLMs to the VVIQ. The structure of nodes in networks from humans and LLMs revealed distinct patterns: while all eight VVIQ contexts clustered in human imagination networks, clustering of items from LLM networks was extremely diffuse. Together, these results reveal the fragility of human reality monitoring under time pressure, and capture important differences in internal world-building across natural and artificial generative processes during visual imagination.



Mapping the Landscape of Integrated Information Theory: A Bibliometric Analysis Across Dimensions

Joanna Szczotka1, Niccolò Negro2, Fernando Rosas3,4,5, Renzo Comolatti1

1Center for Sleep and Consciousness, University of Wisconsin-Madison; 2School of Psychological Sciences, Tel Aviv University; 3Sussex Centre for Consciousness Science and Sussex AI, Department of Informatics, University of Sussex; 4Center for Psychedelic Research and Centre for Complexity Science, Department of Brain Science, Imperial College London; 5Centre for Eudaimonia and Human Flourishing, University of Oxford

Integrated Information Theory (IIT) has grown into a prominent framework in consciousness research, evolving through multiple iterations and drawing contributions from a range of theoretical, experimental, and philosophical perspectives. Despite its broad influence, the impact of IIT research has yet to be systematically quantified. This study presents a bibliometric analysis of IIT literature, tracing its influence and categorizing contributions along three primary dimensions: theoretical, experimental, and philosophical. By examining citation patterns and thematic trends, we offer a clearer picture of how IIT has been developed, debated, and applied. Our findings reveal the emergence of a robust research ecosystem spanning multiple disciplines over two decades, positioning IIT as a productive research program with broad transdisciplinary relevance. By mapping these developments, we also provide insights into ongoing debates between theories of consciousness and outline promising trends for future research.



The Relativistic Theory of Consciousness – a New Testable Solution for the Hard Problem

Nir Lahav

Cambridge University, England

Consciousness poses one of the biggest puzzles in science. Despite critical developments in our understanding of the functional side of consciousness, we still lack a fundamental theory regarding its phenomenal aspect. There is an explanatory gap between our scientific knowledge of functional consciousness and its essential part - the subjective, phenomenal aspects - referred to as the hard problem of consciousness. To date, no theory of consciousness solves the hard problem in a satisfactory manner. Recently, however, a new physical approach, named the Relativistic Theory of Consciousness, proposes to dissolve the hard problem using the principle of relativity (the principle that guided Galileo and Einstein in developing their theories). A common thread connecting most theories of consciousness is that consciousness is an absolute phenomenon. In contrast, the relativistic theory of consciousness proposes a novel relativistic approach in which consciousness is not an absolute property but a relative one: a system can either have phenomenal consciousness with respect to some observer or not. By changing this assumption, the theory shows how the explanatory gap can be bridged in a natural way using different cognitive frames of reference.

The theory yields several interesting results and testable predictions. One result is that emergence phenomena can also be understood in light of the principle of relativity. The theory suggests a mechanism for how properties can emerge in a frame of reference. One of its intriguing predictions is that cognitive maps should serve as neural correlates of consciousness.



Assessment vs. Attribution of Consciousness in AI

Tobias Schlicht

Ruhr-Universität Bochum, Germany

Global Neuronal Workspace Theory (GWT) is considered a version of functionalism (Dehaene 2014). I argue that it is better viewed as a neuroscientific theory of human consciousness, with ramifications for the range of possible conscious systems, natural and artificial.

Functionalism consists of at least these claims: (1) conscious states are functional states, defined by their causal role; (2) conscious states are multiply realizable and do not depend on their neuronal realization. GWT meets at most the first condition. Dehaene subscribes to “the ‘functionalist’ view of consciousness […] that consciousness is useful”, but only to avoid epiphenomenalism (2014).

Since on GWT the four signatures of consciousness are “patterns of brain activity”, it seems consciousness is “medium-dependent” (Haugeland 1985), i.e., dependent on the specifics of its evolved implementing mechanism, precluding multiple realizability. Being able to model the broadcasting function computationally does not mean that the resulting abstract formalism evolved: Why should we think that the brain implements a medium-independent formal system? Computational models leave out precisely the biological specifics of consciousness and Dehaene does not consider multiple realizability but argues instead that current AI systems operate unconsciously (Dehaene, Lau, Kouider 2017).

Defending the in-principle multiple realizability of global broadcasting is insufficient for GWT to count as functionalist if the dependence of consciousness on the biological specifics of four evolved neural mechanisms renders the replication of this function practically impossible.

Finally, GWT is anthropocentric, restricted to humans, predicting that creatures lacking these four evolved mechanisms will not be conscious in the sense captured by GWT.



Common-causes and Independent Mechanisms Pose a Problem for the Iterative Natural Kind and the Theory-light Approaches

Peter Fazekas

Aarhus University, Denmark

Given that the scientific study of consciousness is struggling with an abundance of alternative theories that resist both elimination and convergence, it has recently been argued that alternative methods should be favoured that can avoid premature theoretical commitments. The fundamental idea is to follow a theory-independent or at least theory-light approach that can bootstrap its way towards settling on a set of cognitive/behavioural markers that can be used to identify the presence of consciousness in a given system.

This paper argues that the proposed strategies cannot live up to their own standards. The key problem is that the occurrence of consciousness and the cognitive/behavioural markers in question might be products of independent mechanisms that are activated by a common cause in the human case, and thus can come apart in non-human cases. The conclusion is that strategies that investigate animal or artificial consciousness by relying on cognitive/behavioural markers cannot avoid a theory-based analysis of the candidate markers themselves.



Higher-Level Cognition and Life-Mind Continuity: Structuralism, Grounded Cognition, and Predictive Processing

Jannis Friedrich1, Martin H. Fischer2

1German Sport University Cologne, Germany; 2Potsdam Embodied Cognition Group, University of Potsdam

Predictive processing (and similar schemes like the free-energy principle) posits that prediction-error minimization underlies all perception, action, and cognition. Yet, despite its considerable popularity and explanatory scope, the format of cognition, especially higher-level cognition, remains unclear. Two until-now largely distinct approaches, (neurophenomenal) structuralism and grounded cognition, offer answers.

Predictive processing argues that an anticipatory model of the person-relevant environment is simulated. Structuralism states that these representations are isomorphic to, i.e., retain the relational pattern of, the world. Building on this assembly, grounded cognition research adds four insights into how higher-level concepts are represented. First, a hierarchical organization allows abstracting from specific sensory qualities. Second, language glues together sensory qualities into representations that share no intrinsic properties and acts as a social tool. Third, metaphoric mapping allows fragments of concrete percepts to represent abstract concepts. Lastly, conceptual spaces are multi-dimensional quality spaces made of abstract dimensions. Transplanting these insights to predictive processing’s (structural) hierarchical generative model, we find that they produce a coherent description of higher-level cognition. Detached internal models of perception and action, isomorphic to actual behavior, are simulated in abstract conceptual spaces.

This description adds to predictive processing by amending it with novel mechanisms that are phylogenetically plausible and account for higher-level cognition. It also performs important theoretical work by integrating grounded cognition with neurophenomenological structuralism in line with life-mind continuity and recent work on quality spaces. The deep continuity across these frameworks exploits heretofore unexplored potential of such detached simulations in the context of quality spaces.



On the Utility of Toy Models for Theories of Consciousness

Larissa Albantakis

University of Wisconsin-Madison, United States of America

Toy models have long been a foundational tool in scientific inquiry, especially in fields where full-scale models or direct experimentation are either impractical or impossible. This presentation advocates for the utility of toy models in the development of principled theories of consciousness and explores their contribution to addressing the nature of subjective experience and the mind-brain-body relationship. Using integrated information theory (IIT) and global workspace theory (GWT) as contrasting case studies, I will demonstrate how toy models help evaluate these frameworks. Toy models have been pivotal in refining IIT’s mathematical framework and illustrating its predictions, while also inviting criticism and exposing philosophical challenges. Similarly, they have highlighted the functional principles of GWT and its lack of a principled account of the necessary and sufficient conditions for consciousness.

While primarily used to illuminate theoretical frameworks, toy models have also shown promise in addressing specific features of experience. IIT’s toy model of spatial extendedness provides a mechanistic account of a phenomenal property, linking it to the underlying cause-effect structure of a system. Moreover, toy models clarify philosophical challenges. Debates surrounding the "small network" and "unfolding" arguments, for example, have underscored the broader challenges of relating conscious experience to physical systems. In the advent of artificial intelligence, toy models have also been used to argue that functional equivalence does not necessarily imply phenomenal equivalence, sharpening distinctions between structural and functional theories of consciousness. In sum, toy models bridge abstract theoretical claims and empirical science, offering indispensable tools for advancing the study of consciousness.



The Self Organising Mind - Conscious Emergence through Entropy and Homeostatic Principles

Anastasia Drikakis, Stavroula Tsinorema

University of Crete

The homeostatic capacity of the brain has been linked to consciousness, with its emergence attributed to self-preservation. Markov blankets distinguish internal from external states, enabling systems to infer extrinsic phenomena via sensory observation. This presentation explores consciousness as an emergent property of self-organizing systems, transcending dualistic paradigms through homeostatic regulation and predictive processing. Building on Karl Friston’s (2013) free-energy principle (FEP), we argue that consciousness arises from minimizing the discrepancy between predictions and sensory input, highlighting the physiological dimensions of "knowing," "inference," and "belief."
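As a rough illustration of this claim (a toy sketch under simplified assumptions, not a model proposed in the presentation), prediction-error minimization can be written as a few lines of gradient descent:

    import numpy as np

    # An internal estimate mu is updated to minimise precision-weighted squared
    # prediction error -- the simplest reading of "minimising the discrepancy
    # between predictions and sensory input".
    rng = np.random.default_rng(0)
    hidden_cause = 3.0        # true state of the environment
    precision = 4.0           # confidence assigned to sensory input
    mu = 0.0                  # initial internal estimate (prediction)
    learning_rate = 0.05

    for _ in range(200):
        sensory_sample = hidden_cause + rng.normal(0, 0.5)
        prediction_error = sensory_sample - mu
        mu += learning_rate * precision * prediction_error   # descend the error gradient

    print(f"final estimate: {mu:.2f} (true cause: {hidden_cause})")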

Methodology:

1. Conceptual clarification - defining homeostasis, Markov blankets, self-organization, and consciousness.

2. Theoretical contextualization - assesses the logical structure of FEP, comparing it to Integrated Information Theory and Global Workspace Theory.

3. Empirical engagement - examining neuroscientific and AI findings on homeostatic and predictive processes that parallel biological consciousness.

Results:

Empirical studies support homeostasis as essential for self-organization, a foundation of conscious experience (Seth & Tsakiris, 2018). Predictive coding supports the view that the brain minimizes free energy, stabilizing physiological states (Pezzulo et al., 2015). Markov blankets delineate minimal conditions for consciousness, informing the limits of AI cognition (Palacios et al., 2020). The FEP therefore offers a parsimonious, integrative model of the neural correlates of consciousness.

Overall, the presentation aligns with the conference themes in addressing the neural correlates of consciousness and the nature of subjectivity. It also has international significance, informing AI policy and cognition research relevant to initiatives such as the U.S. BRAIN Initiative (Gao et al., 2022). By engaging both theoretical and applied consciousness research, this work thus contributes to both domains.



Infants' Perception and the Cognitive Approach to Consciousness

Zhang Chen

Fudan University, China, People's Republic of

The cognitive approach to consciousness posits that a certain kind of cognitive access, probably based on the activation of prefrontal cortex, is crucial for a mental state’s being conscious, with the global neuronal workspace theory (GNWT) and the higher-order theories (HOTs) as prominent examples. Ned Block has recently argued that the cognitive approach has difficulty explaining 6- to 11-month-old infants’ color perception, because babies of this age consciously see colors yet deploy no color concepts and cannot notice color changes. In response, this paper argues that GNWT need not insist that the global workspace is always conceptually activated, especially in the case of infants’ global workspace, which is an immature but functioning long-distance neural network. Theoretical space is thus left for GNWT theorists to hold that the ability to conceptualize is gradually acquired as infants’ global workspace matures, whereas the ability to access color information, and thus render it conscious, may develop very early. For the perceptual reality monitoring theory (PRMT), a recent version of HOTs, pointers may be necessary for many cognitive processes, probably including noticing, but they are not sufficient for the success of these processes. The failure to notice color changes may simply indicate that infants’ relevant cognitive system itself is inadequate, and so it does not imply the absence of pointers. The conclusion is that, theoretically speaking, the cognitive approach and the experimental results Block cites concerning infants’ color perception are not in conflict.



The Phenomenal Binding Problem: How Neural Networks Can Address this Constraint on Theories of Consciousness

Chris Percy1,2, Andrés Gómez-Emilsson2

1University of Derby, United Kingdom; 2Qualia Research Institute, San Francisco, CA, United States

Our aim is to explore neural network mechanisms for phenomenal binding, i.e. combining micro-units of information into the macro-scale conscious experience common in human phenomenology. We motivate phenomenal binding in a way that aids translation to computational neuroscience, differentiating it from functional/informational binding and the unity of consciousness debate.

Argumentation: Building on Giere’s model-based reasoning in philosophy of science, we use a deliberately simple neural network model, with six model axioms (e.g. properties of nodes, synapses, and activation functions) and an explicit local-realist metaphysics. We demonstrate that the model fails to implement phenomenal binding while also satisfying five features of consciousness drawn from empirical neuroscience (e.g. conscious versus unconscious information processing, memory availability).
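
As a purely hypothetical sketch of what such a "deliberately simple" network might look like (the architecture, weights, and threshold activation below are our assumptions, not the authors’ six axioms), the point is that every state is stored locally at a node and updated only via explicit synapses:

```python
import numpy as np

# Hypothetical sketch of a deliberately simple neural network: explicit nodes,
# weighted synapses, and a fixed threshold activation, with each node's state
# held locally. Architecture and numbers are illustrative assumptions only.
rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 3, 2
w_in = rng.normal(size=(n_hidden, n_input))    # input -> hidden synapses
w_out = rng.normal(size=(n_output, n_hidden))  # hidden -> output synapses

def activation(x):
    """Fixed element-wise activation (a simple threshold)."""
    return (x > 0).astype(float)

def forward(stimulus):
    """Propagate activity strictly locally, layer by layer."""
    hidden = activation(w_in @ stimulus)
    return activation(w_out @ hidden)

print(forward(np.array([1.0, 0.0, 1.0, 0.0])))
```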

Interpretation: The ‘model-based reasoning’ rationale is not to criticise the simplified model, but to use it as an explicit framework for identifying possible solutions. We explore where existing neurobiological theories might adapt/reject the simplified model within a structured solution framework. For instance, IIT rejects the model’s definition of information/causality and applies a non-reductionist mereology. Physicalist theories (e.g. field theories, QM variants) add new physical elements into the model. Function-based theories (e.g. certain variants of AST, PP, RPT) have typically not explicitly addressed this view of phenomenal binding, so we itemise an initial landscape of options for them.

Conclusions: At present, each solution route either needs further work or has potentially unpalatable consequences, which identifies specific theoretical and empirical opportunities for further research. We conclude that the phenomenal binding problem provides a powerful lever for taming the current proliferation of consciousness theories.



Brain Activity and Synchronization in Conscious Perception: Insights from Cogitate Experiment 2

Xuan Cui1, Matthias Ekman2, Ling Liu9,17, Oscar Ferrante7, Aya Khalaf3, David Richter4,5,6, Yamil Vidal4, Ole Jensen1,7,8, Huan Luo9, Floris P de Lange2, Hal Blumenfeld3, Lucia Melloni10,11,12, Michael Pitts13, Liad Mudrik14,15, Cogitate Consortium16

1Department of Psychiatry, Oxford University, Oxford OX3 7JX, U.K.; 2Radboud Universiteit, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN NIJMEGEN; 3Department of Neurology, Yale School of Medicine, New Haven, CT, 06510, USA; 4Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, 6500 HB, the Netherlands; 5Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, 1081 BT, the Netherlands; 6Institute for Brain and Behavior Amsterdam (iBBA), Amsterdam, 1081 BT, the Netherlands; 7Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, U.K.; 8Department of Experimental Psychology, Oxford University, Oxford OX2 6GG, U.K.; 9School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China; 10Department of Neurology, New York University Grossman School of Medicine, New York, NY, 10016, USA; 11Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, 60322, Germany; 12RUHR-Universität Bochum, Universitätsstraße 150, 44801 Bochum; 13Psychology Department, Reed College, Portland, OR, 97202, USA; 14Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, 6997801, Israel; 15School of Psychological Sciences, Tel Aviv University, Tel Aviv, 69978, Israel; 16Funder: Templeton World Charity Foundation; 17School of Communication Science, Beijing Language and Culture University, Beijing, 100083, China

The Cogitate consortium employs an adversarial collaboration approach to rigorously test Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT). As part of the second experiment of this project, we evaluated predictions regarding (1) activation patterns underlying conscious perception in critical brain regions and (2) connectivity patterns between these regions.

We analyzed brain activity by comparing trials in which task-irrelevant stimuli were seen versus unseen. MEG source-level evoked responses revealed significantly stronger activation in the prefrontal cortex (PFC) for seen stimuli compared to unseen stimuli between 250–500 ms post-stimulus. A similar pattern was also observed in fMRI BOLD signals, aligning with GNWT predictions. However, MEG oscillatory activity (alpha and gamma power) did not show corresponding effects.

We further examined whether stronger synchronization occurs between category-specific areas and PFC (predicted by GNWT) or between category-specific areas and activated V1/V2 (predicted by IIT) under two conditions: (a) preferred vs. non-preferred stimuli and (b) seen vs. unseen trials for preferred stimuli. fMRI psychophysiological interaction analysis partially supported IIT predictions for (a), though results varied by category. No evidence was found for GNWT’s predictions for (a) or (b). MEG-based Pairwise Phase Coherence and Dynamic Functional Connectivity analyses generally did not support either theory. Although significant clusters in specific conditions aligned with GNWT’s predictions for (a), they were not found across all categories and conditions, contrary to the theory’s predictions.
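
For orientation, pairwise phase coherence is commonly estimated as a phase-locking value between band-limited signals; the sketch below uses synthetic data and is not the consortium’s pipeline.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative sketch of a pairwise phase-coherence (phase-locking value)
# estimate between two signals. The synthetic data and parameters below are
# assumptions for demonstration only.
rng = np.random.default_rng(1)
fs, dur, f = 500, 2.0, 10.0                  # sampling rate (Hz), duration (s), alpha-band frequency
t = np.arange(0, dur, 1 / fs)

sig_a = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
sig_b = np.sin(2 * np.pi * f * t + 0.3) + 0.5 * rng.standard_normal(t.size)

# Instantaneous phases from the analytic (Hilbert) signal.
phase_a = np.angle(hilbert(sig_a))
phase_b = np.angle(hilbert(sig_b))

# Phase-locking value: magnitude of the mean phase-difference vector (0..1).
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"PLV = {plv:.3f}")
```

In practice the signals would first be band-pass filtered in the frequency band of interest, and the phase-locking value would typically be computed across trials rather than across time points.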

Taken together, these results contribute to the larger effort to rigorously test IIT and GNWT predictions, advancing our understanding of brain activity supporting conscious perception.



The Structural Relevance of Predictions in Testing Theories of Consciousness

Niccolo Negro

Tel Aviv University, Israel

The neuroscience of consciousness is a fragmented field featuring many different theories. As a result, empirical theory-testing has been pursued as a prominent strategy to make progress [1]. Here, I present a complementary framework: a confirmation-theoretic approach based on the idea that different predictions carry different evidential weight according to their structural relevance. The talk will be divided into four sections. First, I present the logic of empirical theory-testing in consciousness science by focusing on the adversarial collaboration between GNWT and IIT [2]. Second, drawing on philosophy of science, I argue that evaluating the informativeness of these tests requires distinguishing between core and peripheral claims of a theory. Predictions closer to a theory’s core are more structurally relevant and carry greater evidential weight than peripheral predictions. Third, I exemplify this point by showing how IIT and GNWT can be represented through a core-periphery network, demonstrating concretely how their empirical predictions follow from their core. Finally, I survey some of the prospects and limitations of this approach by questioning the exact criteria determining the structural relevance of predictions within neuroscientific theories of consciousness. Overall, I argue that this philosophical analysis can assist empirical theory-testing and theory-comparison, accelerating progress in the field.
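
To make the core-periphery idea concrete, here is a purely illustrative sketch (the node labels and edges are hypothetical placeholders, not Negro’s reconstruction of IIT or GNWT) in which a theory is encoded as a directed graph from core postulates to derived predictions, and a prediction’s structural relevance is read off from whether it traces back to the core:

```python
import networkx as nx

# Hypothetical sketch: a theory as a directed graph from core postulates to
# derived predictions. Node labels are illustrative placeholders only.
theory = nx.DiGraph()
theory.add_edges_from([
    ("core postulate 1", "intermediate claim"),
    ("core postulate 2", "intermediate claim"),
    ("intermediate claim", "prediction A"),
    ("core postulate 2", "prediction B"),
    ("auxiliary assumption", "prediction C"),
])

core = {"core postulate 1", "core postulate 2"}

for prediction in ("prediction A", "prediction B", "prediction C"):
    # A crude proxy for structural relevance: does the prediction derive
    # from any core postulate at all, or only from auxiliary assumptions?
    upstream = nx.ancestors(theory, prediction)
    print(prediction, "depends on core:", bool(upstream & core))
```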

References

1. Melloni, L., et al., Making the hard problem of consciousness easier. Science, 2021. 372(6545): p. 911-912.

2. Cogitate, et al., An adversarial collaboration to critically evaluate theories of consciousness. bioRxiv, 2023: p. 2023.06.23.546249.



Neural Decoding of Conscious vs. Unconscious Visual Stimuli: Testing the Global Neuronal Workspace and Integrated Information Theories

Ling Liu1,2, Zvi Roth3, Oscar Ferrante4, Aya Khalaf5, David Richter6,7,8, Yamil Vidal6, Ole Jensen9,4, Huan Luo2, Floris P de Lange10, Hal Blumenfeld5, Lucia Melloni11,12,13, Michael Pitts14, Liad Mudrik15,16

1Cognitive Science and Allied Health School, Beijing Language and Culture University, Beijing, 100083, China; 2School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China; 3School of Psychological Sciences at Tel Aviv University, Tel Aviv, 69978, Israel; 4Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, B15 2TT, UK; 5Department of Neurology, Yale School of Medicine, New Haven, CT, 06510, USA; 6Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, 6500 HB, the Netherlands; 7Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, 1081 BT, the Netherlands; 8Institute for Brain and Behavior Amsterdam (iBBA), Amsterdam, 1081 BT, the Netherlands; 9University of Oxford, Oxford OX2 6GG, UK; 10Radboud Universiteit, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN NIJMEGEN; 11Department of Neurology, New York University Grossman School of Medicine, New York, NY, 10016, USA; 12Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, 60322, Germany; 13RUHR-Universität Bochum, Universitätsstraße 150, 44801 Bochum; 14Psychology Department, Reed College, Portland, OR, 97202, USA; 15Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, 6997801, Israel; 16School of Psychological Sciences, Tel Aviv University, Tel Aviv, 69978, Israel

The Cogitate consortium aims to test the Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT). In Experiment 2, this was done by employing an attention-demanding task to evaluate brain responses to consciously and non-consciously processed visual stimuli. Using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), we tested the theories’ predictions about decoding of content (category and location) for physically identical stimuli that were seen versus unseen.

A core GNWT prediction was that conscious contents should be decodable in prefrontal cortex (PFC) 250-500 ms after stimulus presentation (i.e., during the predicted ignition), and that decoding accuracy should be higher for seen versus unseen stimuli. A non-critical IIT prediction was that decoding should be maximal in posterior areas, such that adding PFC should not improve decoding accuracy. MEG decoding results confirmed GNWT’s predictions during the 250-500 ms time window: both conscious content and the seen-vs-unseen difference were decodable in PFC. fMRI results, however, showed that only the seen-vs-unseen difference was decodable in PFC, while decoding of content exhibited significant clusters restricted to occipital and precuneus cortices. In line with IIT’s prediction, combining posterior and PFC regions did not improve decoding performance compared to the posterior region alone (in both MEG and fMRI).
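
As a schematic illustration of this kind of decoding analysis (the simulated data, classifier, and cross-validation scheme below are assumptions, not the consortium’s pipeline), a linear classifier can be trained on sensor patterns averaged over the 250-500 ms window and scored by cross-validation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative decoding sketch on simulated "MEG-like" data; all choices here
# are assumptions for demonstration.
rng = np.random.default_rng(2)
n_trials, n_sensors = 200, 64
labels = rng.integers(0, 2, size=n_trials)          # 0 = face, 1 = object

# Sensor patterns averaged over a 250-500 ms window: noise plus a small
# category-dependent signal.
signal = np.outer(labels, rng.normal(size=n_sensors)) * 0.5
patterns = signal + rng.standard_normal((n_trials, n_sensors))

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```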

Together, these results confirm GNWT’s core prediction and IIT’s non-critical one, while provoking further investigation into why decoding of content in PFC was found in MEG but not fMRI.



Spinozan Belief Procedure and the Illusion Meta-Problem

Artem Besedin

Lomonosov Moscow State University, Russian Federation

According to François Kammerer, ‘the illusion meta-problem is the problem of explaining some peculiar aspects of the way in which it falsely seems to us that we are conscious (the mode of the illusion), namely, the fact that illusionism itself regarding consciousness seems so radically implausible, deeply puzzling, and almost absurd to many’. Kammerer himself accepts the so-called evidential approach to this problem. The central point of this presentation is that Kammerer’s evidential approach can be combined with the Spinozan belief procedure described by psychologist Daniel Gilbert. In general, this procedure holds that comprehending an idea already involves accepting it, and that any denial, if it happens, comes only afterwards and requires more effort. The general explanation of the situation described as the illusion meta-problem can then be this: there are some states that count as evidence for the existence of phenomenal consciousness, although that evidence is misleading. The Spinozan belief procedure predicts that if someone comprehends the idea of phenomenal consciousness, then that person accepts its existence. The evidence for phenomenal consciousness (as described by Kammerer) is organized in such a way that there is no easily acquirable counterevidence to it. Denying the belief in phenomenal consciousness on the basis of complex reasoning therefore becomes psychologically very difficult. This approach to the illusion meta-problem is compatible with the claim that there are many conscious species in the sense that they have zero qualia, while the problem itself arises only for those who comprehend the concept of phenomenal consciousness.



Testing the Global Neuronal Workspace and Integrated Information Theory via adversarial collaboration: introducing Cogitate’s Experiment 2

Cogitate Consortium1, Rony Hirschhorn1, Lucia Melloni2,3,4, Michael Pitts5, Liad Mudrik1

1Tel Aviv University, Israel; 2RUHR-Universität Bochum, Germany; 3New York University, USA; 4Max Planck Institute for Empirical Aesthetics, Germany; 5Reed College, USA

Testing theories of consciousness has become a major effort in the field. Recently, the Cogitate consortium – an adversarial collaboration testing predictions from the Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT) – published results from their first experiment, which challenged key aspects of both theories. Here, we introduce the results of the second experiment, complementing those of the first. A novel experimental paradigm was developed to test the theories: an engaging video game that manipulates attention to critical stimuli (faces and objects) presented in the background, such that sometimes they are seen and sometimes not. The game consists of a distracted-attention (dAT) phase and an attended (AT) phase. During the dAT phase, observers periodically reported their awareness of the face and object stimuli while playing the game, achieving multi-trial inattentional blindness. During the AT phase, participants viewed playbacks of their own game while attending and responding to the same face/object stimuli in a go/no-go task. We first demonstrate via behavioral and eye-tracking measures that the attentional manipulation of the video game in the dAT phase effectively modulates awareness of the stimuli, while in the AT phase, participants easily detect the same stimuli presented within the same visual context. We then present the detailed predictions from the two theories regarding the expected neural patterns for seen trials across the dAT and AT conditions, and the differences between seen and unseen trials in the dAT condition. Results from the various brain measures will be shared in a series of linked presentations.



Testing Integrated Information Theory predictions by assessing representational similarity in brain activity

Pablo Oyarzo1, Zvi Roth2, Oscar Ferrante11, Aya Khalaf3, Ling Liu6, David Richter4,10, Yamil Vidal4, Ole Jensen5,11, Huan Luo6, Floris P de Lange4, Hal Blumenfeld3, Lucia Melloni7,8, Michael Pitts9, Liad Mudrik2, Cogitate Consortium8

1Freie Universität Berlin, Germany; 2Tel Aviv University, Israel; 3Yale School of Medicine, USA; 4Radboud University Nijmegen, The Netherlands; 5University of Oxford, UK; 6Peking University, China; 7New York University, USA; 8Max Planck Institute for Empirical Aesthetics, Germany; 9Reed College, USA; 10Vrije Universiteit Amsterdam, The Netherlands; 11University of Birmingham, UK

The neural basis of subjective experience remains one of the central challenges in consciousness research. Competing theories, including Integrated Information Theory (IIT) and the Global Neuronal Workspace Theory (GNWT), offer divergent predictions about the neural correlates of consciousness (NCCs). Here, we conducted an empirical test of several IIT predictions as part of the second experiment of the Cogitate adversarial collaboration.

Participants played an engaging video game while faces and objects were presented in the background (distracted Attention condition; dAT). In another condition, the game was replayed, and participants were tasked with detecting either faces or objects (Attended condition; AT). IIT predicts that consistent patterns of representation should be found in posterior cortex (across all conditions where participants were conscious of the stimuli – in both dAT and AT), and, crucially, that these patterns should be more similar for the same conscious content (category/location) across tasks than for different content within the same task.
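
The cross-task similarity logic can be illustrated schematically (the simulated response patterns below are assumptions, not Cogitate data): representational dissimilarity matrices are computed separately for each task and then correlated, with higher cross-task correlation indicating content representations that generalize across attentional contexts.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Illustrative sketch of cross-task representational similarity analysis on
# simulated data; the response patterns below are assumptions for demonstration.
rng = np.random.default_rng(3)
n_conditions, n_features = 8, 100          # e.g. category-by-location conditions

shared = rng.standard_normal((n_conditions, n_features))
patterns_dAT = shared + 0.5 * rng.standard_normal((n_conditions, n_features))
patterns_AT = shared + 0.5 * rng.standard_normal((n_conditions, n_features))

# Representational dissimilarity matrices (condition x condition), vectorised.
rdm_dAT = pdist(patterns_dAT, metric="correlation")
rdm_AT = pdist(patterns_AT, metric="correlation")

# Cross-task similarity of the two representational geometries.
rho, p = spearmanr(rdm_dAT, rdm_AT)
print(f"cross-task RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```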

Contrary to IIT’s primary prediction, stable representations of conscious content did not consistently generalize across task contexts in either MEG or fMRI. However, exploratory analyses revealed content-specific representations of category and location in the MEG signal in line with the theory’s prediction, but only within a subset of attentional conditions (namely, for go vs. no-go trials in the AT condition). These findings challenge IIT’s proposed mechanisms for cross-task content generalization while partially supporting its predictions under constrained conditions. Taken together with tests of the theories’ other predictions, these results demonstrate how theories of consciousness can be meaningfully and critically tested.



 