Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
STS 10B: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Time:
Wednesday, 10/July/2024:
1:30pm - 3:30pm

Session Chair: Susanne Dirks, TU Dortmund
Session Chair: Aashish Kumar Verma, JKU Linz
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
Location: Track 3

Meeting Room 3, Uni-Center, 1st floor (capacity: 140 people). https://www.jku.at/en/campus/the-jku-campus/buildings/uni-center-university-cafeteria/

Presentations
ID: 113 / STS 10B: 1
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Dementia, Self-Management, Stakeholder Engagement, Co-Design

Designing Self-Management for and with Persons Living with Dementia

D. O'Sullivan

Technological University Dublin, Ireland

Promoting a high quality of life for persons with dementia has emerged as a central goal in global public health agendas. The emphasis has shifted from extending life to actively enhancing overall well-being by postponing or preventing additional disability. This represents a departure from traditional medical perspectives on dementia towards a more socially oriented approach, with a strong focus on well-being.

In parallel, the concept of self-management for people with dementia has emerged, defined as “a person-centred approach in which the individual is empowered and has ownership over the management of their life and condition [1].” Practice recommendations for person-centered care emphasize the importance of knowing and understanding the person with dementia so that individualized choice and dignity are supported. There is also a need to include informal carers (family members or friends) in collaborative care planning, while balancing the empowerment and active engagement of the person with dementia in self-management with carer support. This paper describes our approach to co-designing assistive technologies for care planning with and for persons with dementia and their caregivers.

1. The Dementia Engagement and Empowerment Network (DEEP). Dementia and self-management: Peer to peer resource. Available at https://www.dementiavoices.org.uk/dementia-and-self-management-peer-to-peer-resource-launched-on-6th-may-2020/. (Accessed January 2024).



ID: 212 / STS 10B: 2
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Autism, Communication skills, Facial expression recognition, Chatbots, Assistive Technology (AT)

A Social Communication Support Application for Autistic Children Using Computer Vision and Large Language Models

R. Jafri

King Saud University, Riyadh, Saudi Arabia

This paper presents a novel, affordable, and accessible software solution that utilizes computer vision tools and Large Language Models (LLMs) to provide communication support to high-functioning autistic children during online meetings, with the aim of improving their social communication skills. The system displays the remote attendee’s facial expressions as distinct emoticons to facilitate the child’s understanding of non-verbal social cues and suggests appropriate responses on demand based on the conversational context and the detected expressions. A gamification option for practicing facial expression recognition in an engaging manner is also offered. The application serves as a support platform as well as a teaching tool which autistic children can utilize to connect with friends and caregivers to improve their social communication skills. It is being developed in consultation with therapists who work with autistic children to ensure that its design is compatible with the unique needs of the end users. The system is more cost-effective and sensory-friendly than similar robotic and virtual-reality-based solutions, and it has the added advantage that the child converses with a real human being, albeit remotely, rather than with a robot or virtual agent, increasing the likelihood that the social skills learned will transfer to co-located face-to-face conversations.
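The abstract does not publish implementation details. Purely as a hypothetical sketch of the two support functions it describes (mapping a detected expression label to a distinct emoticon, and composing an LLM prompt for reply suggestions), one could write something like the following; all names here are invented for illustration, and the actual vision model and LLM call are stubbed out:

```python
# Hypothetical sketch only: the label set, emoticon mapping, and prompt
# wording are assumptions, not the authors' implementation.

EMOTICONS = {
    "happy": "😊",
    "sad": "😢",
    "angry": "😠",
    "surprised": "😮",
    "neutral": "😐",
}

def expression_to_emoticon(label: str) -> str:
    """Map a detected facial-expression label to a distinct emoticon,
    falling back to neutral for unknown labels."""
    return EMOTICONS.get(label, "😐")

def build_response_prompt(context: list[str], expression: str) -> str:
    """Compose an LLM prompt from the conversation so far and the remote
    attendee's detected expression, requesting reply suggestions."""
    history = "\n".join(context)
    return (
        "The other person currently looks " + expression + ".\n"
        "Conversation so far:\n" + history + "\n"
        "Suggest three short, polite replies the child could say next."
    )
```

In a real pipeline, `expression` would come from the computer-vision component each frame, and the returned prompt would be sent to the LLM only when the child requests a suggestion.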



ID: 216 / STS 10B: 3
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Easy-to-Read, readability, evaluation, accessibility

Towards Reliable E2R Texts: A Proposal for Standardized Evaluation Practices

M. Madina

Darmstadt University of Applied Sciences (Hochschule Darmstadt)

Easy-to-Read (E2R) is a method of enhancing the accessibility of written text by using clear, direct, and simple language. E2R texts are designed to improve readability and accessibility, especially for individuals with cognitive disabilities. However, there is a significant lack of standardized evaluation methods for these texts. Traditional ATS (Automatic Text Simplification) evaluation methods such as BLEU, SARI or ROUGE present several limitations for E2R evaluation. Readability measures such as Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL) do not take into account all document factors. Manual evaluation methods, such as Likert scales, are resource-intensive and lead to subjective assessments. This paper proposes a threefold evaluation method for E2R texts. The first step is an automatic evaluation to measure quantitative aspects related to text complexity. The second step is a checklist-based manual evaluation that takes into account qualitative aspects. The third step is a user evaluation, focusing on the needs of end-users and the understandability of texts. Our methodology ensures thorough assessments of E2R texts, even when user evaluations are not feasible. This approach aims to bring standardization and reliability to the evaluation process of E2R texts.
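The FRE and FKRGL measures mentioned above have standard closed-form definitions based on words per sentence and syllables per word. A minimal Python sketch (with a deliberately rough vowel-run syllable counter; production tools use dictionaries or trained syllabifiers) illustrates the kind of quantitative metric the automatic evaluation step could draw on:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough English syllable count: runs of vowels, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def _counts(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return sentences, max(1, len(words)), syllables

def flesch_reading_ease(text: str) -> float:
    """FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    s, w, sy = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (sy / w)

def flesch_kincaid_grade(text: str) -> float:
    """FKRGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    s, w, sy = _counts(text)
    return 0.39 * (w / s) + 11.8 * (sy / w) - 15.59
```

As the abstract notes, such surface measures ignore document-level factors (layout, images, text structure), which is one motivation for the checklist-based and user-based steps.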



ID: 146 / STS 10B: 4
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: brain-computer interface, steady-state visual evoked potential, aphasia

EEG Measurement Site Suitable for SSVEP-BCI Assuming Aphasia

S. Kondo

Kogakuin University, Japan

The purpose of this study is to investigate suitable SSVEP-BCI measurement channels for aphasia patients with decreased visual acuity in the right eye due to right hemiplegia. A steady-state visual evoked potential (SSVEP) is a visual response to a flashing stimulus, and a brain-computer interface (BCI) connects the brain to a computer; SSVEP-BCIs offer high information transfer rates. Aphasia caused by cerebral hemorrhage is often accompanied by paralysis, and BCI may be an effective supplement. However, for SSVEP-BCI it is not appropriate to acquire signals from the conventional measurement sites in patients whose visual acuity has decreased due to paralysis. In this study, with the cooperation of an aphasia patient with right hemiplegia, we examined which measurement electrode placements would be appropriate when implementing SSVEP-BCI for aphasia patients. Electrodes were placed on the left side, the right side, and the whole of the back of the head. In the 4-input SSVEP-BCI experiment, the BCI accuracy for the entire occiput, the left occipital area, and the right occipital area was 81.03%, 43.96%, and 86.97%, respectively. The accuracy for the right occipital area was better than that for the entire occipital region, even though the latter uses more channels. Based on the above, this study demonstrated the need to adapt to the characteristics of the subject when providing SSVEP-BCI to aphasia patients.
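The abstract does not state which classifier produced the accuracy figures. A common minimal baseline for SSVEP frequency detection (canonical correlation analysis is the usual stronger choice) is to compare the spectral power of the occipital signal at each candidate stimulus frequency and pick the largest; a self-contained sketch under that assumption:

```python
import math

def ssvep_power(signal, fs, freq):
    """Power of `signal` (a list of samples at rate `fs` Hz) at `freq` Hz,
    via correlation with sine/cosine references (a single-bin DFT,
    equivalent in spirit to the Goertzel algorithm)."""
    n = len(signal)
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (s * s + c * c) / n

def classify_ssvep(signal, fs, stimulus_freqs):
    """A 4-input SSVEP-BCI decides among four flicker frequencies:
    return the stimulus frequency with the largest spectral power."""
    return max(stimulus_freqs, key=lambda f: ssvep_power(signal, fs, f))
```

Accuracy for a given electrode placement is then simply the fraction of trials in which the classified frequency matches the attended stimulus.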



ID: 150 / STS 10B: 5
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Virtual Reality, Persons with Intellectual Disabilities, Vocational Skills, Intelligence Quotient, Visual-motor integration

The Correlation among the Intelligence, Visual-Motor Skills and Virtual Reality Operation Performance of Students with Intellectual Disabilities

H.-S. Lo, T.-F. Wu

National Taiwan Normal University, Taiwan

Introduction: Virtual reality (VR) finds application across diverse domains and can provide a multisensory experience that enhances students' skill development by simulating real-world work situations. These features make VR particularly suitable for individuals with intellectual disabilities (ID). This study explored the correlations of intelligence and visual-motor skills with the VR operation performance of students with ID.

Methods: There were 57 students with ID who participated in this study. Participants completed two trials of the tasks in the VR system, which automatically recorded the time spent on the task and the accuracy of performing the steps. When participants completed the VR task, their intelligence and visual-motor integration skills were then assessed.

Results: Students with ID spent less time and obtained higher accuracy rates on the second trial than on the first. In addition, full-scale intelligence quotient and visual-motor integration predicted the time spent on the first trial, and the working memory index of students with ID was positively correlated with accuracy on both trials.

Conclusion: The findings indicate that students with ID possess the capability to navigate and interact with VR. After repeated practice, intellectual and visual-motor skills no longer affected the time spent, while working memory affected the accuracy of VR operation.
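The correlational analyses reported here rest on the Pearson product-moment coefficient (the paper's exact statistical software is not stated). For reference, Pearson's r between two equal-length samples can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation: covariance of the two samples
    divided by the product of their standard deviations. Ranges -1..1."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A positive r between the working memory index and trial accuracy, as reported, means higher-working-memory students tended to perform the VR steps more accurately.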



ID: 196 / STS 10B: 6
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Ambient and Assisted Living (AAL), Assistive Technology (AT), User Centered Design and User Participation

Assistive Augmented Reality for Adults on the Autism Spectrum with Intellectual Disability

T. Westin

Stockholm University

Augmented reality (AR) presents opportunities for creating new assistive technologies by integrating virtual objects with the actual world. However, AR also presents challenges for co-design and accessibility. The goal of this study is to co-design AR support from the perspective of people on the autism spectrum with intellectual disability who attend day activity centres (DACs) and are of working age but not gainfully employed or in training. Two workshops were first held with staff only, to educate the staff, learn more about how AR could be designed for DAC practices, and ease communication in the third stage: a series of individual, local workshops at several DACs, arranged for the participants' convenience. The workshops included testing functional AR prototypes, video modelling, AR design with a visualisation kit, and an inclusive SUS questionnaire. The results show how AR-based indoor navigation support and a QR-code-based media player can be designed, how participants responded, and how the indoor navigation and personalisation can be managed and set up by staff through AR and web interfaces. Testing revealed accessibility issues with the physical size of tablets, the duality of AR interaction with both tangible and virtual objects, and some details of the digital design. However, there was also clear confirmation that the AR indoor navigation and the media player can be very useful. Co-design of AR was further advanced through redesign based on the workshop results.
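The study's inclusive SUS questionnaire is an adaptation whose exact wording is not given here, but standard SUS scoring, which adapted versions typically retain, is well defined: ten items rated 1-5, odd-numbered items contributing (response − 1) and even-numbered items (5 − response), with the sum scaled by 2.5 to a 0-100 range:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 item responses.
    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to 0-100."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

An all-neutral response sheet (every item rated 3) scores 50; scores around 68 are conventionally treated as average usability.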



 
Conference: ICCHP 2024
Conference Software: ConfTool Pro 2.8.102+TC+CC
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany