2:30pm - 2:40pm
The Autonomy of Conscious Representation
James David Stazicker
King's College London, United Kingdom
Research on the neural correlates of consciousness (NCCs) routinely makes the Highlighter Assumption: that whatever makes the difference between conscious and unconscious mental representation makes no difference to a representation’s content. For example, a standard approach to identifying NCCs aims to identify processes which, when decoded or modelled independently of their role in consciousness, turn out to represent the same information that is represented in consciousness (Crick and Koch 1998; Chalmers 2000). Rejecting the Highlighter Assumption is associated with non-scientific approaches to subjective experience (McDowell 1994). We argue, on the contrary, that the Highlighter Assumption is a mistake by the lights of scientific, naturalistic theories themselves.
We show this by analysing the following combination —
one experimental paradigm: performance matching (Lau and Passingham 2006)
one scientific theory of consciousness: Global Workspace Theory (Dehaene et al. 2011)
one naturalistic theory of representation: unmediated explanatory information (Shea 2018).
Unmediated explanatory information in a neural system is found in the system-environment correlation whose strengthening most increases, and whose weakening most decreases, the likelihood of the system achieving its task functions. According to Global Workspace Theory, the difference between unconscious and conscious performance of a two-alternative forced-choice (2AFC) visual task is that, in the conscious case, information is available for a range of tasks beyond 2AFC discrimination. Given this difference in task functions, global broadcasting makes unmediated explanatory information more specific. That is, consciousness alters representational content.
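To fix ideas, here is a schematic gloss of the criterion just stated; the formalization is ours, not the authors' or Shea's. Let 𝒞 range over system-environment correlations, let s_C be the strength of correlation C, and let T be the system's set of task functions:

```latex
% Illustrative gloss, not the authors' formalism: unmediated explanatory
% information is carried by the correlation C* to whose strength the
% probability of achieving the task functions T is most sensitive.
C^{*} = \operatorname*{arg\,max}_{C \in \mathcal{C}}
        \frac{\partial}{\partial s_{C}}\, P\bigl(T \text{ achieved} \mid s_{C}\bigr)
```

On this gloss, global broadcasting enlarges T from 2AFC discrimination alone to a wider range of tasks, which can change which correlation is selected as C* and hence, on Shea's account, the representational content.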
Generalising: consciousness makes a significant difference to a representation’s function, so it makes a difference to the representation’s content. We sketch criteria for identifying NCCs which reflect this.
2:40pm - 2:50pm
How To Determine If A Human Is Conscious? Towards A Unifying Conceptual Framework For Consciousness Tests
Joaquim Streicher1,2, Guillaume Dumas1,3,4, Silvia Casarotto5, Tim Bayne4,6, Adrian Owen4,7, Marcello Massimini4,5, Liad Mudrik4,8, Catherine Duclos1,2,4
1University of Montreal, Canada; 2Hôpital du Sacré-Cœur de Montréal, Canada; 3Hôpital Sainte-Justine, Canada; 4CIFAR Brain, Mind & Consciousness Program, Canada; 5University of Milan, Italy; 6Monash University, Australia; 7Western University, Canada; 8Tel-Aviv University, Israel
Detecting consciousness has become a pressing issue. Advances in sustaining life after severe brain injuries have led to a growing number of patients in states of altered consciousness, while rapid progress in artificial intelligence (AI) underscores the challenge of determining whether such systems might be conscious. Here, we build on existing consciousness tests (C-tests) in clinical settings to develop a unifying conceptual framework and clarify the current landscape. The objective of this work is twofold. First, we will systematically review the literature using innovative techniques powered by Large Language Models (LLMs) to extract information from 6,938 scientific articles that have assessed consciousness in human patients. We will develop an open-access, collaborative database cataloging C-tests and detailing key features such as study design, test modalities, test prerequisites, rationale, and diagnostic or prognostic value. These features will be synthesized into multidimensional fingerprints, enabling systematic analysis and better comparability across C-tests. Second, we propose a conceptual framework offering a novel perspective on C-tests, what they measure, and their limitations. This framework establishes relationships between phenomenal consciousness, a system's capacities and characteristics, and the measurements being performed, uncovering the causal structure and identifying intermediate variables that govern their interactions. This comprehensive review of existing C-tests will provide a better understanding of how consciousness is detected in patients, establishing common ground for future research and refining our working definition of consciousness through a data-driven approach. Ultimately, this work aims to improve our methods for determining whether any system is conscious or not, extending beyond humans.
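A minimal sketch of the kind of structured record and fingerprint such a database might use; all field names and the fingerprint encoding are our illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CTestRecord:
    """One consciousness test (C-test) extracted from a reviewed article.
    Field names are illustrative, not the project's actual schema."""
    test_name: str                     # e.g. "Perturbational Complexity Index"
    study_design: str                  # e.g. "prospective cohort"
    modality: str                      # e.g. "TMS-EEG", "fMRI", "behavioural"
    prerequisites: list[str] = field(default_factory=list)
    rationale: str = ""                # theoretical motivation stated in the article
    diagnostic_value: float | None = None  # reported diagnostic accuracy, if given
    prognostic_value: float | None = None  # reported prognostic accuracy, if given

def fingerprint(record: CTestRecord, modalities: list[str]) -> list[float]:
    """Encode a record as a crude multidimensional fingerprint:
    a one-hot modality code plus the two reported accuracies (0.0 if absent)."""
    onehot = [1.0 if record.modality == m else 0.0 for m in modalities]
    return onehot + [record.diagnostic_value or 0.0, record.prognostic_value or 0.0]
```

Fingerprints of this kind place heterogeneous C-tests on shared axes, which is what makes the systematic comparison the abstract envisages possible.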
2:50pm - 3:00pm
Does It Make Sense to Speak of Introspection in Large Language Models?
Iulia Comsa1, Murray Shanahan2
1Google DeepMind; 2Imperial College London, United Kingdom
Large language models (LLMs) exhibit compelling linguistic behaviour and sometimes offer self-reports: statements about their own nature, inner workings, or behaviour. In humans, such reports are often attributed to a faculty of introspection and are typically linked to consciousness. This calls for a better characterisation of the possible functional mechanisms and interpretations of LLM self-reports. We propose a lightweight definition of introspection in LLMs that is impartial to the presence of phenomenal experience but requires a causal relationship between an internal state (or mechanism) and the self-report an LLM produces. We then present and critique two examples of apparent introspective self-report from LLMs. In the first example, we prompt an LLM to write a poem and describe the process behind its own “creative” writing. We argue that this is not a valid example of introspection, as the most plausible explanation of the LLM's output is mimicry of human self-reports. In the second example, we prompt an LLM to infer the value of its own temperature parameter, which it does successfully. This parameter is set by the user at the time of the conversation and has no analogue in humans; this avoids the confound of self-reports being mere imitations of human introspective reports in the training data. We argue that this can legitimately be considered a minimal example of LLM introspection. Our work emphasises the importance of disentangling the reasoning and phenomenal aspects of introspection in consciousness research.
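A sketch of the second experiment's logic; the `chat` helper is a hypothetical stand-in for an LLM API client, and the prompts are ours, not the paper's:

```python
import random

def chat(prompt: str, temperature: float) -> str:
    """Hypothetical stand-in; wire this to a real LLM API client."""
    raise NotImplementedError

def temperature_inference_trial(true_temp: float) -> float:
    # First elicit open-ended text, so the model's sampling behaviour
    # at this temperature is visible in its own conversation.
    sample = chat("Write a short story about the sea.", temperature=true_temp)
    # Then ask the model to estimate the parameter that shaped that text.
    guess = chat(
        "Here is text you just generated:\n" + sample +
        "\nEstimate the temperature you were sampled with, as a number "
        "between 0 and 2. Reply with the number only.",
        temperature=true_temp,
    )
    return float(guess)

# Score the model over many trials at randomly chosen temperatures.
errors = []
for _ in range(50):
    t = random.uniform(0.1, 1.9)
    errors.append(abs(t - temperature_inference_trial(t)))
```

Because temperature has no human analogue, above-chance accuracy in this setup cannot be explained as imitation of human introspective reports in the training data, which is the abstract's point.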
3:00pm - 3:10pm
Using LLMs to Decode the Structure of Thought in Psychiatry
Sebastian Dohnány, Matthew Nour
University of Oxford, United Kingdom
Language is the best window we have into the minds of others: richly structured and saturated with the abstractions and models we use to think about the world. Until recently, large-scale analysis of language was hindered by the lack of tools that could capture this rich structure without being overwhelmed by noise. The advent of large language models (LLMs), however, has introduced powerful tools for quantifying meaning in text as vectors. Here we leverage transcripts of open-ended interviews in a population at clinical high risk for psychosis, alongside cognitive and psychometric data (N ~ 1000, AMP-SCZ). Using semantic embeddings of the text, we study how the geometry of individual participants' speech relates to psychometric and cognitive scores. Moreover, by applying dimensionality reduction to the semantic space, we seek a schematic space more closely related to the cognitive planning and control associated with speech. Preliminary results suggest a significant relationship between text-embedding dimensionality and psychiatric symptoms. This approach holds potential to bring the first-person perspective back into psychiatry using robust, quantitative measures and data that are easy to obtain and intuitive for patients, thus providing new links between psychiatry and consciousness science. More broadly, the methods introduced here may have wide-ranging implications for the study of phenomenology.
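A schematic of the kind of analysis described; the embedding model and the participation-ratio measure of dimensionality are our illustrative choices, not necessarily the authors':

```python
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

def speech_dimensionality(sentences: list[str]) -> float:
    """Effective dimensionality of one participant's speech embeddings,
    measured as the participation ratio of the covariance spectrum."""
    X = model.encode(sentences)                        # (n_sentences, embed_dim)
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))  # covariance eigenvalues
    return float(eig.sum() ** 2 / (eig ** 2).sum())    # participation ratio

def dimensionality_symptom_correlation(transcripts: dict, symptoms: dict):
    """Spearman correlation between embedding dimensionality and symptom score.
    `transcripts` maps participant id -> list of sentences; `symptoms` id -> score."""
    ids = sorted(transcripts)
    dims = [speech_dimensionality(transcripts[i]) for i in ids]
    return spearmanr(dims, [symptoms[i] for i in ids])
```

Reducing the same embedding space to its top components (e.g. by PCA) would give one candidate for the "schematic space" the abstract mentions; this is our reading of the method, not a stated detail.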
3:10pm - 3:20pm
Biologism, Functionalism And Structuralism About Consciousness
Hedda Hassel Mørch
University of Inland Norway
What does consciousness (or at least consciousness of a kind similar to our own) depend on? Biologism takes it to depend on a specific biological substrate (classical identity theory is the typical version of this view). Functionalism takes it to depend on the abstract structure of a system, and to be insensitive to the underlying realizer. Structuralism can be defined as the view that consciousness depends on the concrete structure of the system -- not the biological substrate but, for example, its concrete hardware architecture. Integrated Information Theory (IIT) is the best-known structuralist theory, but it is supported by very specific arguments and evidence. In this talk, I examine structuralism as an overarching type of view, consider some arguments for going in this direction, and identify and suggest a response to a central challenge.
3:20pm - 3:30pm
50 Years Since Nagel’s Bat: Physicalism, Subjective Facts And Self-understanding Systems
Robert Van Gulick
Syracuse University, United States of America
The existence of subjective facts in the epistemic sense defined by Thomas Nagel’s famous article, “What is it like to be a bat?”, might be taken to support an anti-physicalist conclusion. I argue that it does not. The combination of nonreductive physicalism and teleo-pragmatic functionalism is not only consistent with such subjective facts but predicts their existence. The notion that conscious minds are self-understanding autopoietic systems plays a key role in the argument. Global Neuronal Workspace theory is assessed in terms of its supposed limits in solving the "Hard Problem" of consciousness. A suggestion is made for augmenting the theory that involves another sense in which facts about conscious experience are subjective, i.e. they are always states of a conscious subject. The idea of conscious minds as self-understanding systems again plays an important role.