Conference Agenda
ORAL SESSION_14: Artificial Intelligence
Presentations
5:30pm - 5:45pm
The violence that makes our research possible: AI, extraction, and qualitative ethics
University of South Florida, United States of America
Artificial intelligence (AI), often celebrated as an engine of progress, is implicated in the escalating planetary crises of climate breakdown, mass extinction, democratic erosion, and widening inequality (McQuillan, 2022). Far from immaterial, AI relies on infrastructures of energy, water, minerals, and labor that intensify ecological collapse and global dispossession (Crawford, 2022). Each interaction with AI is tethered to extractive chains that accelerate “slow violence”—the gradual, dispersed harm that unfolds across generations and borders (Nixon, 2013). Once mediated by AI, qualitative inquiry, often framed as small-scale, relational, and human-centered, becomes entangled in the extractive systems it might otherwise resist. That is, the slow violence of AI is not external to our work but an intensifying condition shaping who speaks, who is silenced, and whose suffering is rendered (in)visible. For qualitative researchers, these realities raise urgent ethical challenges. Institutional ethics frameworks, dominated by procedural compliance, cast AI as a neutral tool (Bennett, 2025), a narrow view that risks reproducing colonial logics of extraction, where infrastructure remains invisible and unexamined until it breaks down (Star & Ruhleder, 1996). In this paper, we theorize slow violence as both an ethical and representational problem for qualitative inquiry, performing an infrastructural inversion to expose AI’s technical and epistemic systems (Bowker, 1994). We ask: What does it mean to produce knowledge through systems complicit in environmental and social harm? What kinds of subjects are we becoming when our methods rely on extractive infrastructures?
Confronted with these questions, researchers may be tempted to disavow the violence that enables convenience, feeling shame, ambivalence, or resistance in the face of complicity. Rather than seeking resolution or denial, we reimagine research ethics as a practice of dwelling within these tensions: acknowledging complicity, foregrounding entanglement, and cultivating modes of inquiry that strive toward responsive care (Raffaghelli et al., 2025).
5:45pm - 6:00pm
Artificial intelligence, SRL and SEL in primary education: Teachers’ reflections on practice
University of Crete, Greece
This qualitative study explores primary school teachers’ reflective experiences following the implementation of an intervention in six Grade 4 classrooms in Greece focused on Social and Emotional Learning (SEL) with integrated Self-Regulated Learning (SRL) strategies, delivered through either an AI-enhanced Learning Management System (LMS) or traditional printed materials. The core instructional focus was on SEL competencies—including self-awareness, self-management, social awareness, relationship skills, and responsible decision-making—while SRL strategies were explicitly taught and embedded within the flow of learning activities, enabling students to apply goal-setting, self-monitoring, and emotional regulation during their engagement with SEL content. Teachers participating in both modalities documented their post-intervention reflections in structured journals, offering rich narratives about their pedagogical choices, classroom interactions, and the evolving role of technology in supporting emotionally meaningful and cognitively self-directed learning. Through a thematic analysis of these reflective accounts, the study examines how teachers conceptualized the integration of AI in SEL practice, how they navigated the co-teaching of emotional and metacognitive skills, and how their own professional perspectives were shaped by the process. Situated within the theoretical frameworks of the CASEL model for SEL and Zimmerman’s theory of SRL, the study attends to the interplay between affective and regulatory dimensions of learning. Emerging teacher reflections indicate pedagogical value in both implementation settings, with the AI-supported environment appearing to support student engagement, emotional expression, and autonomous learning. The study contributes to ongoing discussions around ethically grounded technology integration, reflective teaching practice, and innovation in emotionally and cognitively rich learning environments.
6:00pm - 6:15pm
Family, Therapy, and AI: The Elephant in the Room
1Department of Psychology, Panteion University, Athens, Greece; 2Athenian Institute of Anthropos, Athens, Greece
This qualitative study explores psychotherapists' lived experiences and perceptions of engaging with generative AI in a simulated therapeutic context. Employing an exploratory research design, we conducted a simulation where three systemic family therapists participated in role-play sessions with ChatGPT, which was prompted to act as a therapist. Participants adopted the roles of a client from a fictional distressed couple with a child, directing the conversation based on a provided scenario. Following the simulation, we conducted in-depth interviews to understand their personal experiences as role-playing “clients” as well as their professional reflections as practitioners. A thematic analysis of the interview transcripts revealed several key findings. Participants consistently reported a surprising sense of being understood and validated by the AI, noting its ability to ask reflective questions and offer support. However, this was invariably tempered by the critical observation of a lack of authentic human presence. They highlighted the absence of non-verbal cues, tone of voice, and embodied connection as a decisive limitation, leading to interactions that often felt generic, repetitive, and ultimately superficial despite the AI's technically competent responses. Building on these empirical findings, we engage in a theoretical conceptualization, proposing a systems thinking view of AI in therapeutic practice to understand the multidimensional effects of the inevitable AI presence in the room. This systemic lens reframes AI not merely as a tool, but as a relational actor that can potentially stabilize or disrupt therapeutic and family systems, posing specific risks.
The study concludes that a critical, systemic understanding is crucial for anticipating how AI is integrated into human relational networks, highlighting implications for therapeutic practice, ethics, and the future of family and therapeutic systems in a postdigital era.
6:15pm - 6:30pm
Socratic Dialogue with AI: Toward the Anamnesis of the Unknown
1KU Leuven; 2LUCA School of Arts; 3University of Melbourne
Debates on artificial intelligence (AI) and its role in the Anthropocene unfold less as prospect-opening inquiries than as polarized stances, shaped by systemic fatigue and eschatological anxieties over limits already reached. At such a moment—when the foundations of thought seem unsettled—we are challenged to think at the limits of the thinkable, to risk the unthinkable, and to resist systematization as the domination of humans by reason. This paper responds to that challenge through a speculative, artistically informed inquiry. Drawing on the author’s art installation Creatures Cluster as a symbolic schema of interconnected eco-logies, and on Platonic prompt-dialogues conducted across successive ChatGPT versions, the project explores anamnesis (knowledge as recollection) as a politics that seeks the improbable and incalculable within organological relations, enabling new connections that open spaces for speculation. Here, AI is positioned not merely as a tool but as an immanent, maieutic (generative) function, analogous to Socratic provocation: a self-reflective process staging thought through otherness. Hovering between method and (un)concealment, logocentrism and paradox, logic and myth, the inquiry refuses absolute thinking, instead cultivating a dialogical praxis that enables polyphonic, multilayered engagements with diverse intelligences that evade closure—there is no absolute ground for any epistemology. In this mode, the paper proposes a shift from anthropocentric logic toward allocentric realization through techno-logical alienation. This shift neither seeks timeless truths nor treats AI as an inevitable agent tasked with delivering them. Instead, it articulates a language of co-existence that speaks from within crisis rather than transcending it.
Anamnesis is thus explored as a transformative process nested within AI’s recursive feedback, where knowledge is co-constituted through relation in time and as time, as difference engaging difference, and where dialogue becomes an event rather than a debate.
6:30pm - 6:45pm
Therapists' perceptions of artificial intelligence integration in mental healthcare
1University of Greater Manchester, UK; 2New York College, Greece
This qualitative study explores the complex and multifaceted perceptions of mental health therapists regarding the adoption of Artificial Intelligence (AI) in their professional practice. As AI tools become more prevalent in healthcare, understanding the perspective of practitioners is crucial for effective and ethical implementation. Using a Thematic Analysis approach, this research explores in depth the insights of eight Greek female therapists, aged 28 to 45, to uncover their views on AI's potential and limitations. The analysis of semi-structured interviews revealed three primary category-themes: opportunities, concerns, and potentiality. Participants identified AI's potential to enhance their therapeutic practice by providing time- and cost-effective assistance in areas like research, organization, and improving accessibility to mental health services. However, a significant sub-theme of resistance was also prominent, with the majority of therapists expressing reluctance to use AI to substitute core components of their therapeutic work. A major concern that arose is the perceived inability of machine learning to replicate and apply essential therapeutic skills such as empathy, emotional awareness, and genuine understanding. Despite these concerns, participants acknowledged AI's future potential, highlighting the need for structured training, strict and close monitoring, clear guidelines regarding GDPR and data privacy, and a thorough legal framework established by governmental bodies. Along the same lines, a prominent motive for AI’s adoption has been the fear of missing out (FOMO). Overall, this research reveals the perspectives of therapists, who recognize the technological opportunities while remaining cautious about AI's capacity to replicate the human elements of therapy and to comply with data-privacy rules.
While the small sample size limits generalizability, the findings provide a foundational understanding of the challenges and opportunities associated with AI adoption in mental healthcare, paving the way for future, larger-scale investigations into this critical topic.