3:30pm - 3:40pm On the Logic of Measuring Neural Correlates of Consciousness
Johannes Kleiner1,2
1Institute for Psychology, University of Bamberg; 2Munich Center for Mathematical Philosophy, Ludwig Maximilian University of Munich
This paper introduces a novel method for measuring Neural Correlates of Consciousness (NCCs), derived from a mathematical analysis of the logic of NCC measurement. The method, termed 'co-activation analysis,' might complement or extend existing methods such as contrastive analysis and decoding. It allows for the empirical identification of NCCs, as defined by Crick & Koch (1990) and Chalmers (2000), based on co-activation data: data about the joint occurrence of neural states and states of consciousness. We show that, in theory, co-activation analysis is compatible with all major conceptions of states of consciousness, including global and micro-phenomenological notions; that it does not require data from near-threshold conditions; and that it can be applied to most meaningful conceptions of neural states, including those provided by predictive processing and active inference. Furthermore, we prove that, as far as the logic of measurement is concerned, applying co-activation analysis to data from contrastive analysis studies enhances the results that contrastive analysis provides. This paper is a purely theoretical contribution that aims to lay the groundwork for extending the methodological tools available to those who work on identifying NCCs.
3:40pm - 3:50pm Metacognition as Modal Cognition
Kevin O'Neill, Stephen Fleming
University College London, United Kingdom
Metacognition, broadly the capacity to form and act on estimates of confidence in other cognitive processes, is essential for adaptive behavior in a rich, dynamic, and uncertain world. We ask conversational partners to repeat themselves when we recognize a lapse in our attention, discredit headlines from news outlets we feel are untrustworthy, and refrain from prosecuting individuals without evidence we judge to be beyond a reasonable doubt. But the computational target of metacognition is heavily debated, with competing accounts disagreeing about which phenomena are metacognitive (as opposed to merely responsive to uncertainty), what counts as evidence of metacognition, and what, specifically, metacognition is for. Here we argue for the integrative perspective that metacognition is aimed at computing causal robustness: its purpose is to determine the conditions under which a self-model of a psychological process fails to track veridical causal relationships in the world. Under this conception, metacognition is a kind of modal cognition that evaluates the truth of hypothetical or counterfactual propositions regarding one's own cognition, placing domain-general representational and computational constraints on metacognitive phenomena. This new approach unifies previous conceptualizations of metacognition, including accounts focused on the probability of correctness, choice consistency, and interpersonal communication. Importantly, it makes sense of a wide range of convergent findings across psychological, neural, developmental, and comparative research on metacognition and modal cognition. Moving forward, it prompts new questions about the mechanisms underlying metacognition, its developmental and evolutionary origins, and its domain-generality.
3:50pm - 4:00pm Palatable Conceptions of Disembodied Consciousness: Terra Incognita in the Space of Possible Minds
Murray Shanahan
Imperial College London, United Kingdom
For some philosophers and cognitive scientists, consciousness is intimately connected with embodiment. By these lights, when users ascribe consciousness to contemporary, disembodied conversational AI systems, they are making a conceptual mistake. Yet such ascriptions are increasingly common, not only among ordinary users, but even among engineers familiar with how these systems work. Moreover, talk of disembodied beings with mind-like properties is commonplace throughout history and across cultures, and plays a prominent role in many religions and spiritual traditions. These anthropological facts challenge the view that the very concept of disembodied consciousness is incoherent, which leads to the question addressed in this paper. Is it possible to articulate a conception of consciousness that is compatible with the exotic characteristics of contemporary disembodied AI systems, and that can stand up to philosophical scrutiny? Without implying that any extant AI system conforms to this conception, the answer offered is tentatively positive, with respect to certain aspects of consciousness. However, this conception bends the language of consciousness almost to breaking point. In addition to the absence of sensorimotor interaction with an external world, it must accommodate a fundamentally fragmented sense of the passage of time, and a radically fractured form of selfhood.
4:00pm - 4:10pm Mortal Computation, Medium Dependence and Functionalism
Holger Lyre
University of Magdeburg, Germany
Several authors have recently argued that today's AI is unlikely to develop consciousness. Curiously, their arguments rest on sometimes conflicting assumptions about the relationship between computation and functionalism. Seth (2024) and Kleiner (2024) specifically emphasize that today's AI is tied to “immortal” digital computation, echoing Hinton's (2022) recent proposal of “mortal computation” (MC). Considerations of MC are certainly fruitful, but it is important to recognize that the medium or substrate dependence of computation is not a yes/no affair but comes in degrees. Different forms of computation require different degrees of medium dependence.
Digital computation (DC) is highly medium independent. The hardware/software divide makes DC immortal, though it remains bound by realization constraints (we will hardly implement a Turing machine in jelly). Analog computation (AC) is highly medium dependent. Unlike DC, it works by first-order representation: magnitudes of physical properties serve to represent numbers. Neural computation (NC) is mainly a species of AC. The collapse of the hardware/software divide makes AC/NC mortal and highly energy efficient. Relatedly, NC draws on learning rather than on symbolic programming.
However, the medium dependence of MC isn't metaphysically radical. The mortal/immortal distinction is not an in-principle distinction and doesn't justify overdrawn metaphysical conclusions. The representational physical magnitudes in AC/NC are still functional properties, and research in neuromorphic computing is largely a search for media that are functionally equivalent to the biological substrate. MC doesn't commit us to mysterious, intrinsic causal substrate powers; it is clearly compatible with functionalism.
4:10pm - 4:20pm The Starting Point Problem
Andy Kenneth McKilliam
National Taiwan University, Taiwan
We didn’t always have thermometers. Initially, temperature could only be measured through sensations of hot and cold. Researchers, however, bootstrapped their way to instruments that could correct these sensations via epistemic iteration (Chang 2004). Recently, several authors have argued that consciousness science can progress in the same way (Michel 2022; Bayne et al. 2024; McKilliam 2024). But this project faces a serious problem. Epistemic iteration works best when there is a single phenomenon to converge upon. When that is the case, disagreements over how to weigh pre-theoretical criteria tend to wash out over time. However, if multiple distinct phenomena lie in the vicinity (as appears to be the case in consciousness science), small differences in starting assumptions can be compounded rather than corrected, leading to divergence rather than convergence. This is the starting point problem.
In this talk, I clarify why consciousness science faces this challenge and sketch a potential solution.
4:20pm - 4:30pm Existential Meaning in the Age of Neurocentrism and the Posthuman Future
Mette Leonard Høeg1,2
1Aarhus University, Denmark; 2Oxford Uehiro Institute, University of Oxford
With the rise of the scientific authority of neuroscience and recent neurotechnological advances, our understanding of the human being and of its future is undergoing radical change. Some of the concepts and beliefs most fundamental to how humans view themselves, act, and structure societies are being undermined. A normative and existential vacuum is opening, and in this space both hopes and fears about the future of humanity are flourishing. Some philosophers predict a broad neuroscientific disenchantment, sociocultural disruption, and a new neuroexistential anxiety of Kierkegaardian dimensions related to the clash between the neuroscientific and humanistic images of persons. Others expect these technological and scientific developments to lead to moral enhancement, existential emancipation, and more harmonious ways of living.
In the first part of the paper, I outline these two contrasting responses to the rise of neurocentrism and new biotechnologies. In the second part, I argue that the divide between the ‘old’ anthropocentric paradigm and the emerging neuroscientific one is misconceived. Pointing to the alignment of modern neuroscience with central ethical and existential ideas in Eastern contemplative traditions, including Buddhism, and in literary and philosophical works from the Western canon, I argue that humanist ideas can indeed cohere with scientific materialism and naturalism, including non-essentialist notions of personal identity and self, the determinist worldview, biocentrism, and non-speciesism. Finally, I draw the contours of an existentialist philosophical position that is both scientifically valid and conducive to human thriving and flourishing in a neuroscientific age and a possibly posthuman future.