Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads if available).

 
 
 
Session Overview
Date: Wednesday, 10/July/2024
9:00am - 10:30am Plenary I: Conference Opening and Opening Keynote
Location: Plenary
https://www.icchp.org/content/keynotes-3#w3c
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
Details and Information on Opening Messages:
 
ID: 286 / Plenary I: 1
Keynote

The European Accessibility Act (tbc)

I. Placencia-Porrero

European Commission



ID: 304 / Plenary I: 2
Plenary

ICCHP Roland Wagner Award Ceremony

K. Miesenberger

Johannes Kepler University Linz, Austria



ID: 303 / Plenary I: 3
Plenary

Young Researchers' Consortium Ceremony

K. Miesenberger

Johannes Kepler University Linz, Austria

 
10:30am - 10:45am B1: Coffee Break
10:45am - 12:30pm Accessible EU Event: Implementing the European Accessibility Act
Location: Track 1
Session Chair: Klaus Hoeckner, Access Austria/HGBS
This track is organised by and in cooperation with the AccessibleEU Centre
More information at Hilfsgemeinschaft (in German) and at the Accessible EU Center.
10:45am - 12:30pm STS 15: STS New Methods for Creating Accessible Material in Higher Education
Location: Track 2
Session Chair: Karin Müller, Karlsruhe Institute of Technology (KIT)
Session Chair: Thorsten Schwarz, Karlsruhe Institute of Technology
Session Chair: Svatoslav Ondra, Masaryk University
Session Chair: Radek Pavlicek, Teiresias - Support Centre for Students with Specific Needs, Masaryk University
 
ID: 116 / STS 15: 1
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Chart Analysis, Alt-Text, Image Retrieval, CLIP Model, User Interface

Alt4Blind: A User Interface to Simplify Charts Alt-Text Creation

O. Moured1,2, K. Müller2, T. Schwarz2

1CV:HCI@KIT, Karlsruhe Institute of Technology, Germany; 2ACCESS@KIT, Karlsruhe Institute of Technology, Germany

Alternative Texts (Alt-Text) for chart images are essential for making graphics accessible to blind and visually impaired individuals. Traditionally, Alt-Text is written manually by authors but often suffers from issues such as oversimplification or overcomplication. Recent trends have seen the use of AI for Alt-Text generation. However, existing models are susceptible to producing inaccurate or misleading information. We address this challenge by retrieving high-quality alt-texts from similar chart images, serving as a reference for the user. Our three contributions are as follows: (1) we introduce a new benchmark comprising 5,000 real images with semantically labeled, high-quality Alt-Text, collected from HCI venues. (2) We have developed a deep learning-based model to rank and retrieve similar chart images that share the same visual and textual semantics. (3) We have designed a user interface that integrates our model and eases the alt-text creation process. Our preliminary interviews and investigations highlight the usability of our UI.
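
To illustrate the retrieval idea, assuming chart images have already been encoded into embedding vectors by a CLIP-style model, ranking candidate charts by cosine similarity could be sketched as follows; the arrays, sizes and alt-text store are illustrative stand-ins, not the authors' code or data.

    import numpy as np

    def rank_similar_charts(query_emb, corpus_embs, alt_texts, top_k=3):
        """Rank reference charts by cosine similarity of their embeddings.

        query_emb:   (d,) embedding of the chart needing an alt-text
        corpus_embs: (n, d) embeddings of charts with curated alt-texts
        alt_texts:   list of n human-written alt-texts
        """
        q = query_emb / np.linalg.norm(query_emb)
        c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
        sims = c @ q                           # cosine similarities
        order = np.argsort(-sims)[:top_k]      # best matches first
        return [(alt_texts[i], float(sims[i])) for i in order]

    # Toy usage with random vectors standing in for real CLIP embeddings.
    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(5000, 512))
    texts = [f"Alt-text #{i}" for i in range(5000)]
    print(rank_similar_charts(rng.normal(size=512), corpus, texts))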



ID: 131 / STS 15: 2
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Tactile Charts, Chart Analysis, Vision Language Models, Transformers, Scalable Vector Graphics

ChartFormer: A Large Vision Language Model for Converting Chart Images into Tactile Accessible SVGs

O. Moured1,2, S. Alzalabny3, K. Müller2, T. Schwarz2

1CV:HCI@KIT, Karlsruhe Institute of Technology, Germany; 2ACCESS@KIT, Karlsruhe Institute of Technology, Germany; 3NeptunLab, University of Freiburg, Germany

Visualizations, such as charts, are crucial for interpreting complex data. However, they are often provided as raster images, which are not compatible with assistive technologies for blind and visually impaired individuals, such as embossed papers or tactile displays. At the same time, creating accessible vector graphics visualizations requires a skilled sighted person and is time-intensive. In this work, we leverage advancements in the field of chart analysis to generate tactile charts in an end-to-end manner. Our three key contributions are as follows: (1) introducing the ChartFormer model trained to convert raster chart images into tactile-accessible SVGs, (2) training this model on the Chart2Tactile dataset, a synthetic chart dataset we created following accessibility standards, and (3) evaluating the effectiveness of our SVGs through a pilot user study with the HyperBraille, a refreshable tactile display. Our work is publicly available at (link after review).
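
This is not the ChartFormer model or the Chart2Tactile format; the hand-rolled sketch below only illustrates the kind of labelled, scalable SVG output that makes charts usable on tactile displays, assuming the data values have already been extracted from the raster image.

    import xml.etree.ElementTree as ET

    def bars_to_svg(values, labels, width=400, height=200, path="chart.svg"):
        """Write a minimal SVG bar chart whose shapes carry text labels,
        so the geometry stays scalable and machine-readable for tactile output."""
        svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                         width=str(width), height=str(height))
        bar_w = width / len(values)
        peak = max(values)
        for i, (v, label) in enumerate(zip(values, labels)):
            h = (v / peak) * (height - 20)
            bar = ET.SubElement(svg, "rect",
                                x=str(i * bar_w + 5), y=str(height - h),
                                width=str(bar_w - 10), height=str(h),
                                fill="none", stroke="black")
            ET.SubElement(bar, "title").text = f"{label}: {v}"   # accessible name
        ET.ElementTree(svg).write(path)

    bars_to_svg([3, 7, 5], ["A", "B", "C"])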



ID: 163 / STS 15: 3
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: (e)Accessibility, Artificial Intelligence, eLearning and Education, Real-time Chart Image Description, Charts Interpretation

A Real-Time Chart Explanation System for Visually Impaired Individuals

W. Cho1, J. Park2

1Semyung University, Korea, Republic of (South Korea); 2Institute of ICT Convergence, Sookmyung Women's University, Korea, Republic of (South Korea)

This research addresses the critical need for real-time chart captioning systems optimized for visually impaired individuals in the evolving era of remote communication. The surge in online classes and video meetings has increased the reliance on visual data, such as charts, which poses a substantial challenge for those with visual impairments. Our study concentrates on the development and evaluation of an AI model tailored for real-time interpretation and captioning of charts. This model aims to enhance the accessibility and comprehension of visual data for visually impaired users in live settings. By focusing on real-time performance, the research endeavors to bridge the accessibility gap in dynamic and interactive remote environments. The effectiveness of the AI model is assessed in practical scenarios to ensure it meets the requirements of immediacy and accuracy essential for real-time applications. Our work represents a significant contribution to creating a more inclusive digital environment, particularly in addressing the challenges posed by the non-face-to-face era.
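
A rough sketch of the real-time aspect described above: captions are regenerated only when the shared chart changes, and at a bounded rate. The frame source and the describe function below are placeholders, not the authors' AI model.

    import time

    def live_captions(frames, describe, min_interval=2.0):
        """Yield a new caption only when the shared chart changes, and no more
        often than every `min_interval` seconds, keeping latency predictable."""
        last_frame, last_time = None, 0.0
        for frame in frames:                    # e.g. screen-capture frames
            now = time.monotonic()
            if frame != last_frame and now - last_time >= min_interval:
                yield describe(frame)           # forwarded to captions / screen reader
                last_frame, last_time = frame, now

    # Toy usage: `describe` stands in for the real-time chart-captioning model,
    # and throttling is disabled so this offline demo prints both captions.
    fake_frames = ["bar-chart-v1"] * 5 + ["bar-chart-v2"] * 5
    for caption in live_captions(fake_frames,
                                 describe=lambda f: f"Chart updated: {f}",
                                 min_interval=0.0):
        print(caption)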



ID: 201 / STS 15: 4
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Mathematical functions, Graphs, Description standard, Accessible content

Towards Establishing a Description Standard of Mathematical Function Graphs for People with Blindness

T. Schwarz, K. Müller

Karlsruhe Institute of Technology, Germany

Visual representations are widely used for conveying information to sighted people. However, these visualizations are often inaccessible to blind people, requiring special adaptations. Although describing graphics is one way to make them accessible, there are only a few description standards for this target group. In this paper, we develop a standard for describing function graphs at the university level in a user-centered, multi-stage design process, focusing on structuring the information for blind readers and guiding authors to include the necessary details and avoid errors.
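
As a rough illustration of what a machine-checkable description standard could capture, the skeleton below lists typical sections of a function-graph description; all field names and the checking helper are assumptions made for this sketch, not the standard proposed in the paper.

    # Illustrative, machine-checkable skeleton for a function-graph description.
    description = {
        "title": "Graph of f(x) = x**2 - 2*x",
        "axes": {
            "x": {"label": "x", "range": [-2, 4], "tick_step": 1},
            "y": {"label": "f(x)", "range": [-1, 8], "tick_step": 1},
        },
        "global_shape": "U-shaped parabola opening upwards",
        "key_points": [
            {"kind": "minimum", "x": 1, "y": -1},
            {"kind": "zero", "x": 0, "y": 0},
            {"kind": "zero", "x": 2, "y": 0},
        ],
        "behaviour": "decreasing for x < 1, increasing for x > 1",
    }

    def missing_sections(d):
        """Guide authors: flag absent sections before the description is released."""
        return [k for k in ("title", "axes", "global_shape", "key_points") if k not in d]

    assert missing_sections(description) == []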



ID: 213 / STS 15: 5
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Exam Documents, Layout Analysis, Hierarchy, Object Detection, Document Analysis

ACCSAMS: Automatic Conversion of Exam Documents to Accessible Learning Material for Blind and Visually Impaired

O. Moured1,2, D. Wilkening2, K. Müller2, T. Schwarz2

1CV:HCI@KIT, Karlsruhe Institute of Technology, Germany; 2ACCESS@KIT, Karlsruhe Institute of Technology, Germany

Exam documents are essential educational materials for exam preparation. However, they pose a significant academic barrier for blind and visually impaired students, as they are often created without accessibility considerations. Typically, these documents are incompatible with screen readers, contain excessive white space, and lack alternative text for visual elements. This situation frequently requires intervention by experienced sighted individuals to modify the format and content for accessibility. We propose ACCSAMS, a semi-automatic system designed to enhance the accessibility of exam documents. Our system offers three key contributions: (1) creating an accessible layout and removing unnecessary white space, (2) adding navigational structures, and (3) incorporating alternative text for visual elements that were previously missing. Additionally, we present the first multilingual manually annotated dataset, comprising 1,293 German and 900 English exam documents which could serve as a good training source for deep learning models.

 
10:45am - 12:30pm STS 10A: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Location: Track 3
Session Chair: Susanne Dirks, TU Dortmund
Session Chair: Aashish Kumar Verma, JKU Linz
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
 
ID: 141 / STS 10A: 1
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Virtual Environment, Virtual Reality, Web Browsers, Virtual Computer Laboratories, Cognitive Disabilities

Development of a Virtual Environment that Contains Multiple Browsers to Explore the Learning Experience of Students with Cognitive Disabilities

M. H. H. Ichsan1,3, C. Sik Lanyi1,2

1Department of Electrical Engineering and Information Systems, Faculty of Information Technology, University of Pannonia, Hungary; 2Hungarian Research Network, Piarista u. 4. 1052 Budapest, Hungary; 3Department of Informatics, Faculty of Computer Science, Brawijaya University, Indonesia

Traditional web browsers in physical computer laboratories have limits when it comes to improving the learning experience for students with cognitive disabilities. For example, students with cognitive disabilities have difficulties understanding content when many web pages are displayed at the same time; also, having multiple tabs open can be distracting and stressful. A virtual reality (VR) environment can give an immersive browsing experience by displaying numerous web pages at the same time, making browsing and learning more efficient. Furthermore, the virtual environment can eliminate distractions and improve focus by offering a regulated browsing environment. This project focused on multi-browser development in a VR environment to explore the learning experience of students with cognitive disabilities and help them improve their independence in learning activities. The system uses a virtual environment to visualize several web pages and their interconnections; additionally, user movement inside the virtual environment is based on camera movement to create a unique and immersive browsing experience suitable for people with cognitive disabilities. The results demonstrate the ability to retrieve websites in a virtual environment, as well as the movement experience between browsers inside the virtual computer laboratories. The system is currently being tested with the System Usability Scale, and the results will be ready for the camera-ready submission.
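
Since the prototype is being evaluated with the System Usability Scale (SUS), a short reminder of the standard SUS scoring may help; the example answers below are fabricated.

    def sus_score(responses):
        """Standard SUS scoring: responses are ten 1-5 Likert answers in
        questionnaire order (odd-numbered items positive, even-numbered negative)."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten answers on a 1-5 scale")
        contributions = [(r - 1) if i % 2 == 0 else (5 - r)   # item 1 is index 0
                         for i, r in enumerate(responses)]
        return sum(contributions) * 2.5          # 0-100 scale

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # fabricated example -> 85.0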



ID: 178 / STS 10A: 2
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Making, Creative Expression, Accessibility, Intellectual Disabilities

Creative Technologies in Action: Empowering Individuals with Intellectual Disabilities

L. S. Guedes

Università della Svizzera italiana, Switzerland

This paper investigates the integration of creative activities with interactive technology to enhance the participation and engagement of individuals with intellectual disabilities. Conducted in two workshops, the study explores the use of Play-Doh, drawing, and game creation, combined with Makey Makey and Scratch programming. Seven participants with varying verbal communication abilities engaged in activities tailored to their preferences and needs. The methodology involved thematic analysis of participant and support worker interactions, observations, and the artifacts created. Key findings demonstrate the importance of adaptability in activities, the empowering role of technology in creative expression, and the significant impact of facilitators and support workers. The study underscores the need for flexible, participant-led approaches in educational settings for individuals with intellectual disabilities, highlighting technology's role as an enabler of engagement and creative exploration.



ID: 187 / STS 10A: 3
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Assistive Technology, Autism Spectrum Disorder, Dentist and Medical care, Smart chair

SMED: SMart chair for Emotion Detection

S. Comai

Politecnico di Milano (POLIMI), Italy

The paper presents the concept of SMED, a smart chair designed for real-time monitoring of patients' vital signs like heart rate and respiratory rate and a functional prototype developed to evaluate the effectiveness of the concept. The prototype leverages a strain gauge system integrated into a harmonic steel to detect changes in body pressure and vibrations. Physiological data of interest are obtained using the ballistocardiography methodology.
The final goal of this work is to enhance the quality of care and support for individuals with Autism Spectrum Disorder (ASD) who face difficulties in communicating their emotions, stress and discomfort, during medical or dental visits.
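
To give a flavour of the ballistocardiography step, a minimal sketch of estimating heart rate from a seat-pressure signal (band-pass filtering around cardiac frequencies, then peak counting) is shown below, using synthetic data; it is not the SMED prototype's signal processing.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def estimate_heart_rate(signal, fs):
        """Rough heart-rate estimate from a seat-pressure (BCG-like) signal:
        band-pass around typical cardiac frequencies, then count peaks."""
        b, a = butter(2, [0.7, 3.0], btype="bandpass", fs=fs)    # ~42-180 bpm
        filtered = filtfilt(b, a, signal)
        peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))  # refractory gap
        duration_min = len(signal) / fs / 60.0
        return len(peaks) / duration_min

    # Synthetic 60 s test signal: 1.2 Hz "heartbeat" plus noise (fabricated data).
    fs = 100
    t = np.arange(0, 60, 1 / fs)
    bcg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
    print(round(estimate_heart_rate(bcg, fs)), "bpm")   # ~72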



ID: 189 / STS 10A: 4
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Easy-to-Read Methodology, Cognitive Accessibility, Dialogues, Assistive Technology (AT)

Towards An Automatic Easy-to-Read Adaptation of Dialogues in Narrative Texts in Spanish

I. Diab

Ontology Engineering Group (UPM), Universidad Politécnica de Madrid (UPM)

People with cognitive disabilities have the right to actively participate in various aspects of society, including culture, on an equal basis with others. Since these groups of people may encounter difficulties in the reading comprehension process, the Easy-To-Read (E2R) methodology was created to make texts more accessible by providing a set of guidelines and recommendations related to both writing and layout aspects. Currently, this methodology is applied manually, limiting the amount of accessible texts; however, Artificial Intelligence (AI) methods and techniques could be used to automate part of the E2R adaptation process. Specifically, in this paper we present a first approach to an AI-based method for adapting dialogues to a theatrical style, as suggested by the E2R methodology. This method is based on symbolic and sub-symbolic AI to automatically adapt dialogues, and is implemented as a proof of concept.



ID: 250 / STS 10A: 5
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Authoring Tools, Scenagram, Naive Users, Usability, Visual Programming

Programming Learning Difficulties: How Can Naive Users Create Human-Machine Interaction Scenarios?

J. Debloos, D. Archambault

Paris 8 University, France

The work presented in this paper is part of a project which aims to enable people with no programming skills to create human-machine interaction scenarios without programming. An example of a human-machine interaction scenario could be an interactive cognitive stimulation exercise created by a therapist for an Alzheimer's patient, or a sensorially adapted digital learning exercise created by a teacher for a student with an autism spectrum disorder. This paper presents a literature review whose objectives are to understand why learning computer programming and algorithms is such a complex activity, and how visual programming languages, learning tools, and digital tools designed for non-developers, along with their features, can inspire the design of an authoring tool for human-machine interaction scenarios, in order to propose a series of recommendations for the design of such a tool.

 
10:45am - 12:30pm STS 13: STS Assistive Technologies and Inclusion for Older People
Location: Track 4
Session Chair: Jean Denise Hallewell Haslwanter, FH OÖ
 
ID: 193 / STS 13: 1
LNCS submission
Topics: STS Assistive Technologies and Inclusion for Older People
Keywords: Ambient and Assisted Living (AAL), (e)Ageing and Gerontechnology, Assistive Technology (AT), Artificial Intelligence and Autonomous Systems

Action Recognition from 4D Point Clouds for Privacy-Sensitive Scenarios in Assistive Contexts

I. Ballester

TU Wien, Austria

Dementia is a leading cause of disability and dependency among older people worldwide. To address the challenges faced by people with dementia, vision-based technologies have been proposed to provide context-aware assistance. These technologies typically rely on cameras to understand actions and tailor assistance accordingly. However, privacy concerns hinder their adoption, particularly in privacy-sensitive contexts. This study proposes the use of 4D point clouds as a privacy-preserving modality for assistive systems. By relying only on 3D data and excluding RGB information, we aim to enable personalised assistance while mitigating privacy risks.

To assess the feasibility of this approach, we collect a real-world dataset with the help of 16 people with dementia and evaluate the state-of-the-art P4Transformer model on this dataset. Our results show promising performance, demonstrating the viability of point clouds as a practical alternative for privacy-sensitive action recognition in real-world settings. However, the model does not reach the performance achieved on benchmark datasets, highlighting the importance of adapting models to deal with the complexity of real-world data.

By addressing privacy challenges and validating the model with real-world datasets, this research contributes to the advancement of privacy-aware assistive systems for people with dementia, towards more personalised and effective dementia care.
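
As a rough illustration of the privacy-preserving input modality (geometry only, no RGB), the sketch below back-projects a depth frame into a 3D point cloud using pinhole-camera intrinsics; the intrinsic values and the frame are placeholders, and this is not the P4Transformer pipeline.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a depth image (metres) into an N x 3 point cloud.
        Only geometry is kept; no RGB appearance data is used."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]          # drop invalid (zero-depth) pixels

    # Placeholder intrinsics and a synthetic depth frame for demonstration.
    cloud = depth_to_point_cloud(np.full((480, 640), 2.0), fx=525, fy=525, cx=320, cy=240)
    print(cloud.shape)   # (307200, 3)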



ID: 129 / STS 13: 2
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Ambient and Assisted Living (AAL), Artificial Intelligence and Autonomous Systems, Assessment/ Profiling and Personalization, Design for All and Universal Design, Usability and Ergonomics

Customising Seniors’ Living Spaces: a Design Support System for Reconfiguring Bedrooms Integrating Ambient Assisted Living Solutions

T. Ferrante

Sapienza University of Rome - Department of Planning, Design and Technology of Architecture (PDTA)

In the context of an increasing demand for home adaptation in response to the needs of the elderly and caregivers, this paper introduces a novel design support system, merging Virtual Reality (VR) and semantic technologies to facilitate the reconfiguration of bedrooms, integrating Ambient Assisted Living (AAL) solutions.

The methodology combines a knowledge representation of user health conditions, physical environments, and device characteristics with a virtual simulation for pre-implementation evaluation.

The system leverages domain ontologies to provide computational representations of the space and its features; the semantic knowledge base is engineered referencing building (IFC) and healthcare standards (ICF, ICD), alongside ontologies for devices and assistive technologies. Through a Java-based middleware, a VR simulation of the environment (developed with Unity 3D) can be interacted with and customised by designers. The system, exploiting semantic reasoning, supports designers in selecting the best options, including assistive technologies, spatial re-organization, and indoor comfort metrics. The reconfigured bedroom models can be exported to any design software via IFC-BIM data exchange.

Initial implementations demonstrate the system's efficacy in customising spaces for two user profiles (personas) with chronic conditions, proposing a comprehensive BIM-based tool for spatial and assistive technology integration in home spaces for enhanced elderly quality of life.



ID: 237 / STS 13: 3
OAC Submission
Topics: STS Assistive Technologies and Inclusion for Older People
Keywords: Physical activities, codesign, Physical web, Ageing and Gerontechnology, User Centered Design and User Participation

Iteration and Co-design of a Physical Web Application for Outdoor Activities with Older Adults

F. Badmos

Technological University Dublin

Existing research and physical activity guidelines highlight the benefits of outdoor physical activities for ageing populations. There is potential for technology to facilitate outdoor activity through Physical Web infrastructure. We proposed that embedding Physical Web applications that are engaging and interactive in public open spaces as part of interactive wellness parks can encourage older adults to participate in physical activities outdoors and motivate rehabilitation. We have created an initial design prototype based on design requirements generated from a qualitative field study with 24 older adults to explore their perceptions, experiences, and routines of outdoor physical activities. In this paper, we present the initial prototype and findings from a co-design session with 12 older adults, eliciting their feedback on the design and their ideas for future design iterations.



ID: 223 / STS 13: 4
LNCS submission
Topics: STS Assistive Technologies and Inclusion for Older People
Keywords: Health Data Representation, Expert Evaluation, (e)Ageing and Gerontechnology, Digital Health, (e)Accessibility

“What does THIS Mean?”: A Collaborative Expert Evaluation of Health Data Representations for Older Adults

J. Peterson

Technological University Dublin

Health Data Representation (HDR) poses significant accessibility problems for people with disabilities and older adults, particularly those with visual, hearing, speech, motor and cognitive impairments, as well as literacy problems. While methodologies like heuristic evaluation and visualisation literacy are valuable, they have limitations in addressing the varied and nuanced range of data representations and perceptual matching issues. This paper presents a collaborative expert evaluation methodology that strategically bridges the gap between domain experts and non-experts. By scoping out representative HDRs, our approach significantly expands the research space for accessibility issues within the designated scope, narrowing critical gaps in existing independent guidelines. Using this methodology, we carefully examined common conventional HDRs, collaborating with experts to identify 179 potential issues specific to older adults. Categorisation strategies highlighted key issues within this broad problem space, showing that existing guidelines fail to effectively address all of the predominant categories. Our paper presents a set of emerging impairment-agnostic principles in response, embedding crucial steps towards mitigating these problems. Our study not only identifies challenges but also provides a model for iterative evaluation and adaptation of critical HDR. Beyond informing more accessible system design, it also unlocks innovative opportunities for future HDRs.



ID: 154 / STS 13: 5
OAC Submission
Topics: STS Assistive Technologies and Inclusion for Older People
Keywords: Active Assisted Living (AAL), user interface design, older people, ageism, accessibility

Student Perceptions About Age, Computer Literacy and Design Needs: A Longitudinal Study

J. D. Hallewell Haslwanter

University of Applied Sciences Upper Austria, Wels, Austria

Studies done in different countries have found that beginning computer science students think older people are less likely to use computers. To understand the impact this may have on the designs conceived, some studies investigated the design aspects suggested for younger and older, women and men.

We asked the same cohort the same questions about perceived computer literacy and design aspects at the beginning and end of their bachelor's degree to see if these views persist. Three runs of the questionnaire were done, each with more than 95 participants: a) first year students, b) shortly before or after graduation and c) students starting after the COVID-19 lock-downs (to check this was not a significant factor).

Mixed methods were used to analyse the differences between the beginning and end of their studies. We analyzed whether the stage in their studies was a factor in their perceptions of computer use. We compared the design aspects at each stage to see if more aspects of user experience were included for all people. Since many older people have limitations, we also evaluated whether the aspects mentioned covered the accessibility recommendations for older people.

Although biases towards older people remain, graduates perceive less difference in the likelihood of regular use by young and old. Regarding design, aspects related to usability are mentioned more often for all ages. For older people, less focus is put on large fonts, and other aspects of accessibility are mentioned more often.

 
10:45am - 12:30pm I1: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
12:30pm - 1:30pm B2: Lunch Break
1:30pm - 3:30pm STS 1A: Software, Web and Document Accessibility
Location: Track 1
Session Chair: Nataša Rajh, JKU
Session Chair: Reinhard Koutny, JKU
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
Session Chair: Matjaž Debevc, University of Maribor, FERI
 
ID: 268 / STS 1A: 1
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Accessibility, education, age simulation

Introducing Computer Science Students to Inclusive Design and Accessibility: Evaluation of Practical Exercises with a Low-Cost Simulation Kit

H. Petrie

University of York, United Kingdom

Young developers of digital technologies need to be aware of the principles of inclusive design and accessibility, but it is difficult to teach these concepts in the abstract. Practical simulations of the effects of disability and aging can be engaging for students, but they are expensive to purchase and have been criticised as disrespectful to people with disabilities and older people. We have developed a very low-cost simulation kit and used it in a practical exercise with computer science students, emphasising that such simulations are not the same as being disabled or older, but give some insight into the experience of visually disabled and older people using digital technologies. The low-cost nature of the kit and the tasks we designed to undertake with it were intended to instil some fun into the exercise. An evaluation with 63 first-year undergraduate computer science students yielded positive ratings of the exercise and many interesting comments from the students.



ID: 191 / STS 1A: 3
LNCS submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: Accessibility, Evaluation Tools list, Web, Developers, User requirements, Tools

Tools for Novice and Expert Accessibility Professionals: Requirements for the Next Generation Web Accessibility Evaluation Tools List

C. V. Swart, V. Lange

HAN University of Applied Sciences, The Netherlands

The W3C Web Accessibility Evaluation Tools list compiles over 160 tools that can help check or monitor various accessibility issues on websites and apps, whether already published or in development. It is one of the most comprehensive and most referenced lists of evaluation tools today. The page was originally published in 2006. This paper describes the re-design of the list to ensure that a growing variety of stakeholders, including accessibility experts, novices and others, can find the right tool for their situation in the current day and age.

The re-design included defining the audience, information needs, user-stories, design and layout, comprehensive user and technology oriented filter options, decision support for users, etc. To gather the requirements, the research team conducted contextual inquiries with a large variety of stakeholders including both tool users and vendors. The resulting prototypes were extensively and iteratively tested with the users to further improve upon the design and functionality. During the re-design process, the research group coordinated with the WAI Education and Outreach Working Group (EOWG) to ensure the design iterations stayed within the W3C/WAI brief and requirements.

The researchers propose new functionality to guide users to relevant accessibility tools, such as a step-by-step search assistant, new updated filter and search functions, and adding relevant and understandable information for each tool. Not everything made it into the final version.



ID: 203 / STS 1A: 4
LNCS submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: Digital Accessibility, Accessibility evaluation, Manual evaluation, Adobe Accessibility Checker, PDF, PDF/UA, WCAG

The Accessibility Paradox: Can Research Articles Inspecting Accessibility Be Inaccessible?

A. B. Szentirmai

NTNU, Norway

Relevant literature focusing on the accessibility of electronic documents (e.g., PDFs) has been increasing incrementally in recent years. Despite its significance, scientific articles, even those on accessibility-themed topics, paradoxically often fail to provide fully accessible content, thus creating a gap between theory and practice. Therefore, we aimed to explore how academic articles on digital accessibility evaluation published in the past ten years comply with established accessibility standards. We performed a tool-based evaluation using Adobe Accessibility Checker and a manual evaluation to inspect the articles comprehensively. The results showed that none of the analyzed accessibility articles were problem-free. Instead, they contained recurring, severe accessibility barriers, making it highly challenging for people who rely on screen readers to access information. Also, the evaluated data showed no discernible pattern of change over the years.
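
For readers wanting to reproduce a tool-based first pass, a minimal probe of a few machine-detectable prerequisites (tagged structure, the "Marked" flag, a document language) might look as follows, assuming the pikepdf library; this is far from a full PDF/UA or WCAG audit and is not the checker used in the study.

    import pikepdf

    def quick_pdf_accessibility_probe(path):
        """Check a few machine-detectable prerequisites of an accessible PDF:
        a structure tree (tags), the 'Marked' flag, and a document language.
        This is only a coarse probe, not a conformance check."""
        findings = {}
        with pikepdf.open(path) as pdf:
            root = pdf.Root
            findings["has_structure_tree"] = "/StructTreeRoot" in root
            mark_info = root.get("/MarkInfo", None)
            findings["marked_as_tagged"] = bool(mark_info and mark_info.get("/Marked", False))
            findings["declares_language"] = "/Lang" in root
        return findings

    print(quick_pdf_accessibility_probe("article.pdf"))   # path is a placeholder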



ID: 214 / STS 1A: 5
LNCS submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: software accessibility, software engineering, systematic literature review, integrated development environments, engineering assets

Accessibility in the Software Engineering (SE) Process and in Integrated Development Environments (IDEs): A Systematic Literature Review

N. Rajh, K. Miesenberger, R. Koutny

Institute Integriert Studieren, Johannes Kepler University Linz (Linz), Austria

Software accessibility, once relatively unknown, has been recognized as both a socio-technical necessity and a legal requirement, drawing considerable research attention in the field of Computer Science over the years. A wide range of guidelines, methods, techniques, and tools has been developed, accompanied by numerous training and support opportunities. However, the practical implementation of accessibility still faces challenges, evident in persistent barriers in many software products. While acknowledging the necessity of including accessibility throughout the engineering process, research on comprehensive approaches has been sparse, primarily focusing on requirements engineering and evaluation while neglecting the implementation phase. We see integrating accessibility support into the core of design and development as a crucial step for improvement, revealing a need for integrating accessibility evaluation support into integrated development environments (IDEs). This paper aims to present a collection of approaches, methods and tools that support accessibility involvement within different stages of the engineering process, and additionally to provide a theoretical foundation for research and development on incorporating accessibility evaluation support within IDEs. To this end, we analyzed existing related Systematic Literature Reviews (SLRs) and complemented the findings with a new SLR. The study provides a solid base for advanced approaches to integrating accessibility into SE.



ID: 244 / STS 1A: 6
LNCS submission
Topics: STS Accessible and Inclusive Digital Publishing
Keywords: Accessible Digital Publishing, DAISY, EPUB3, Personalisation

Flex Picture eBook Builder - Simplifying the Creation of Accessible eBooks

K. Miesenberger, D. Gharbieh, M. Punz

Johannes Kepler University Linz, Austria

This paper presents a novel approach to making content more accessible and usable by improving adaptability, focusing on non-text content. The design and authoring concepts are supported by a software client for authoring and adapted reading. It is based on the requirements of users with disabilities, care providers and educators, gathered in a UX/participatory and co-research approach facilitated in an EU-co-funded project aiming at a new level of adaptability in the content/publishing sector, in particular for illustrated children's books: Flexi Picture eBook (FPB).

In this paper we present the approaches and functionalities of the authoring and reading software, featuring new and innovative approaches for a) mainstreaming better accessibility in the publishing sector for readers with visual as well as learning disabilities and b) supporting authors with content adaptation, in particular of non-text content. The key research challenges addressed are at the level of a) conceptualising an efficient way of adapting non-text content to different layers of understanding during the authoring process and b) providing usable functionalities for incorporating such adaptations into popular accessible digital reading formats such as EPUB3.

 
1:30pm - 3:30pm STS 2: STS Making Entertainment Content More Inclusive
Location: Track 2
Session Chair: Deborah Fels, Toronto Metropolitan University
Session Chair: Rumi Hiraga, Tsukuba University of Technology
Session Chair: Yuhki Shiraishi, Tsukuba University of Technology
 
ID: 112 / STS 2: 1
LNCS submission
Topics: STS Making Entertainment Content More Inclusive
Keywords: accessible darts, audio-tactile displays

TARGET: Tactile-Audio daRts for Guiding Enabled Throwing

D. Fels1, M. Kobayashi2

1Toronto Metropolitan University, Canada; 2National University Corporation Tsukuba University of Technology

The game of darts is a popular social game that is simple to learn and difficult to master. The TARGET tool was developed to support access to a BLE-enabled electronic dartboard for people who are Blind/Low Vision. Functionality included aurally announcing the score, the position of the current throw, and the remaining score needed to win. Human researchers tapped a metal rod on the dartboard to assist with aiming the dart and provided advice on which areas of the board should be targeted. Tactile displays consisted of a small-diameter replica of the dartboard that could be held in one hand and a second dartboard on a horizontal surface that duplicated the dart positions so they could be felt by the user. A user study with five blind participants was conducted in which participants learned the game rules, practiced throwing the darts and then played an actual game. Results indicated that blind players enjoyed the game and were willing to play again, experienced flow, and developed a somewhat high degree of perceived competency over the duration of the study. Future work involves using machine learning technology to improve aiming support.
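
For a flavour of the aural feedback described, the generic sketch below computes a dart's value and the remaining score in a 501 game and turns them into an announcement string; it is unrelated to the actual TARGET implementation or the dartboard's BLE protocol.

    def dart_value(segment, ring):
        """Score of one dart: ring is 'single', 'double', 'triple',
        'bull' (25) or 'bullseye' (50)."""
        special = {"bull": 25, "bullseye": 50}
        if ring in special:
            return special[ring]
        multiplier = {"single": 1, "double": 2, "triple": 3}[ring]
        return segment * multiplier

    def announce(segment, ring, remaining_before):
        """Build spoken feedback: hit, points scored and points left in 501."""
        scored = dart_value(segment, ring)
        remaining = remaining_before - scored
        if remaining < 0 or remaining == 1:
            return f"{ring} {segment}: bust, score stays at {remaining_before}."
        return f"{ring} {segment}, {scored} points. {remaining} left."

    print(announce(20, "triple", 141))   # "triple 20, 60 points. 81 left."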



ID: 157 / STS 2: 2
LNCS submission
Topics: STS Making Entertainment Content More Inclusive
Keywords: Upper Extremity Motor Impairment, Video Game Accessibility, Mobile Device Accessibility, Head Gestures

Personalized Facial Gesture Recognition for Accessible Mobile Gaming

D. Ahmetovic

Università degli Studi di Milano, Italy

For people with Upper Extremity Motor Impairments (UEMI), interaction with mobile devices is challenging because it relies on the touchscreen interface. Assistive technologies that replace touchscreen interactions with sequences of simpler, more accessible ones have been proposed. However, these sequential interactions are slower, and therefore not suitable for time-constrained interaction (e.g., games). One-to-one remapping of touchscreen interactions to alternative inputs has been proposed as a way to enable accessibility of existing games. In this context, external switches and vocal sounds have been used with promising results. However, for people with UEMI who cannot access external switches and have a speech impairment (e.g., anarthria), these interactions are still inaccessible.
We propose a new one-to-one interaction substitution method based on personalized Facial Gestures (FGs) recognition to account for the specific needs of different users with UEMI. Our approach relies on few-shot learning to enable custom definition of personalized FGs which are then mapped to the required game interactions. In this work we describe the FG recognition pipeline, and in particular we detail the processes of feature selection, few-shot learning, result aggregation, and their fine-tuning. Preliminary experimental evaluation indicates a classification accuracy of 96.99% and the ability to process 8.26 ± 1.55 frames per second on a commodity Android device.
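
The few-shot personalisation idea can be pictured as a nearest-centroid (prototype) classifier over facial feature vectors. The sketch below is a generic illustration with fabricated 3-D vectors; the actual feature selection, aggregation and fine-tuning described in the paper are not shown.

    import numpy as np

    class FewShotGestureClassifier:
        """Nearest-centroid ('prototype') classifier: each personalised facial
        gesture is represented by the mean of its few enrolment samples."""

        def __init__(self):
            self.prototypes = {}                       # gesture name -> centroid

        def enrol(self, gesture, samples):
            self.prototypes[gesture] = np.mean(np.asarray(samples, float), axis=0)

        def predict(self, features):
            features = np.asarray(features, float)
            return min(self.prototypes,
                       key=lambda g: np.linalg.norm(features - self.prototypes[g]))

    # Toy usage with fabricated 3-D "facial feature" vectors.
    clf = FewShotGestureClassifier()
    clf.enrol("eyebrow_raise", [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
    clf.enrol("mouth_open",   [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]])
    print(clf.predict([0.85, 0.15, 0.05]))   # -> "eyebrow_raise"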



ID: 195 / STS 2: 3
LNCS submission
Topics: STS Making Entertainment Content More Inclusive
Keywords: Visually impaired, para e-sports, spatial cognition, audio-tactile effects, falling block puzzle games

Tactris: Inclusive Falling Block Puzzle Game with Audio-Tactile Effects for Visually Impaired People

M. Matsuo1, D. Erdenesambuu1, J. Onishi1, T. Miura2

1Tsukuba University of Technology, Japan; 2AIST, Japan

The purpose of this study is to develop an action puzzle game that is easy for visually impaired people to play, and to expand the number of visually impaired people participating in e-sports. To this end, we developed a prototype of Tactris, a falling-block puzzle game that can be played using auditory and tactile information, and evaluated its ease of use and playability with six visually impaired people. We have been studying the presentation methods, operation methods, and game rules necessary to realize a falling-block puzzle game that visually impaired people can play, aiming to develop a game that presents the in-game situation by combining voice, sound effects, and tactile illustrations. Since this interface uses dynamically changing tactile diagrams, it is also expected to serve as content that efficiently promotes understanding of tactile perception in education for the visually impaired and helps improve the skills needed to read dynamic tactile diagrams.
Six visually impaired people evaluated the ease of use and playability of the game; although ease of use remained an issue, there was a strong desire to play the game repeatedly.



ID: 175 / STS 2: 4
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Design for All and Universal Design, inclusive game, fighting game, auditory cues, empirical study

Inclusive Fighting with Mind’s Eye: Case Study of a Fighting Game Playing with Only Auditory Cues for Sighted and Blind Gamers

M. Matsuo1, J. Onishi1, T. Miura2

1Tsukuba University of Technology; 2National Institute of Advanced Industrial Science and Technology (AIST)

Computer games have diversified due to advances in hardware performance and software capabilities. However, this context has led to the problem that visually impaired people frequently find it difficult to enjoy most of these state-of-the-art games, despite the vast amount of accessibility research on content and interfaces. Meanwhile, a growing number of playable games regardless of visual impairment status have been released. However, there are still few games enabling both blind and sighted players to compete against each other. The Street Fighter 6 ® (Capcom Co. Ltd.), released in 2023, introduced a sound accessibility feature for the first time in a commercial fighting action game. By clarifying the requirements for sighted and visually impaired players to compete smoothly using this feature, not only can they participate in the game regardless of their disability status, but also new accessible interfaces can be created by using the technology. In this study, the goal is to evaluate the playability of a fighting game with sound accessibility features for visually impaired and sighted groups. This paper reports on the evaluation results of usability and user experience of the sound accessibility features implemented in the Street Fighter 6, for two groups. In this extended abstract, we report only the results for the sighted players that we have analyzed, and if the study is accepted, we will report the results for the visually impaired players as well in our article.



ID: 184 / STS 2: 5
LNCS submission
Topics: STS Making Entertainment Content More Inclusive
Keywords: Deaf and Hard of Hearing, Music, Vibration, VIBES

Towards Improving The Correct Lyrics Detection By Deaf And Hard Of Hearing People

H. Yamamoto, R. Hiraga, K. Yasu

Tsukuba University of Technology

Access to music for people who are Deaf/Hard of Hearing (D/HoH) includes not only the instrumental portion but also the lyrics. While closed captioning can provide the lyrics in text format, it does not necessarily convey the accurate timing of the lyrics relative to the instrumental portion. The purpose of this study is to clarify how vibrotactile stimuli affect D/HoH people's understanding of the onset timing of song lyrics. To achieve this goal, we developed a system, called VIBES (VIBrotactile Engagement for Songs), that simultaneously provides music and vibration playback as an iPhone app. Unlike other vibrotactile systems for music, which focus primarily on the percussion/beat or frequencies of the instrumental portions, VIBES presents vibrations for the timing of vocal utterances, syllable by syllable. We conducted a study with 10 participants to determine the effectiveness of the system. Although no statistically significant effect was obtained, the mean value of correct timing acquisition increased after using the system. The subjective evaluation also showed VIBES to be effective for one participant, who judged their listening comprehension to be very poor before using VIBES but very positive after using it.



ID: 128 / STS 2: 6
LNCS submission
Topics: STS Making Entertainment Content More Inclusive
Keywords: Information Accessibility, Information Support, Information Sharing, Deaf and Hard of Hearing, Blind and Low Vision

Enhancing Accessibility in Sports and Cultural Live Events: A Web Application for Deaf, Hard of Hearing, Blind, Low Vision, and All Individuals

Y. Shiraishi, R. Hiraga, M. Kobayashi, Y. Zhong

Tsukuba University of Technology, Japan

This paper addresses the issue in live events, especially in sports viewing, where individuals who are deaf, hard of hearing (DHH), blind, or have low vision (BLV) struggle to access sufficient information. Traditionally, information accessibility has relied on specific professionals or volunteers, which is often inadequate. To tackle this challenge, we propose a mechanism that facilitates information sharing for all individuals, regardless of their abilities. We have developed an inclusive web application tailored to address these needs. This application is beneficial not only for DHH and BLV individuals but also for all audiences. Additionally, the system's applicability extends to other domains, such as museum visits. This paper details the newly developed web application and presents the outcomes of pilot studies conducted in sports viewing and museum settings with DHH and BLV participants. The results of these experiments are analyzed to assess the system's effectiveness and to identify future improvement areas.

 
1:30pm - 3:30pm STS 10B: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Location: Track 3
Session Chair: Susanne Dirks, TU Dortmund
Session Chair: Aashish Kumar Verma, JKU Linz
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
 
ID: 113 / STS 10B: 1
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Dementia, Self-Management, Stakeholder Engagement, Co-Design

Designing Self-Management for and with Persons Living with Dementia

D. O'Sullivan

Technological University Dublin, Ireland

Promoting high quality of life for persons with dementia has emerged as a central goal in global public health agendas. The emphasis has shifted from extending life to actively enhancing overall well-being by postponing or preventing additional disability. This represents a departure from traditional medical perspectives on dementia to a more socially-oriented approach, placing a strong focus on wellbeing.

In parallel, the concept of self-management for people with dementia has emerged, defined as “a person-centred approach in which the individual is empowered and has ownership over the management of their life and condition” [1]. Practice recommendations for person-centred care have been proposed, which emphasize the importance of knowing and understanding the person with dementia such that individualized choice and dignity are supported. There is also a need to include informal carers (family members or friends) in designing collaborative care planning, while balancing empowerment and active engagement of the person with dementia in self-management with carer support. This paper describes our approach to co-designing assistive technologies for care planning with and for persons with dementia and their caregivers.

1. The Dementia Engagement and Empowerment Network (DEEP). Dementia and self-management: Peer to peer resource. Available at https://www.dementiavoices.org.uk/dementia-and-self-management-peer-to-peer-resource-launched-on-6th-may-2020/. (Accessed January 2024).



ID: 212 / STS 10B: 2
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Autism, Communication skills, Facial expression recognition, Chatbots, Assistive Technology (AT)

A Social Communication Support Application for Autistic Children Using Computer Vision and Large Language Models

R. Jafri

King Saud University, Riyadh, Saudi Arabia

A novel, affordable and accessible software solution that utilizes computer vision tools and Large Language Models (LLMs) to provide communication support to high functioning autistic children during online meetings with the aim of improving their social communication skills is presented. The system displays the remote attendee’s facial expressions as distinct emoticons to facilitate the child’s understanding of non-verbal social cues and suggests appropriate responses on demand based on the conversational context and the detected expressions. A gamification option for practicing facial expression recognition in an engaging manner is also offered. The application serves as a support platform as well as a teaching tool which autistic children can utilize to connect with friends and caregivers to improve their social communication skills. It is being developed in consultation with therapists who work with autistic children to ensure that its design is compatible with the unique needs of the end users. The system is more cost-effective and sensory friendly as compared to similar robotic and virtual reality-based solutions and has the added advantage that instead of interacting with robots or virtual agents, the child converses with a real human being, albeit remotely, thus, increasing the likelihood that the social skills learned would be effectively transferred to co-located face-to-face conversations.
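
A simplified sketch of the two support functions described: mapping a detected facial expression to an emoticon, and assembling the context handed to a large language model when a reply suggestion is requested. The labels, emoji choices and prompt wording are illustrative assumptions, not the application's actual design.

    EMOTICONS = {
        "happy": "😊", "sad": "😢", "angry": "😠",
        "surprised": "😮", "neutral": "😐",
    }

    def expression_to_emoticon(label):
        """Show the remote attendee's detected expression as a distinct emoticon."""
        return EMOTICONS.get(label, "❓")

    def build_response_prompt(conversation, detected_expression):
        """Assemble the context given to a large language model when the child
        asks for a suggested reply."""
        history = "\n".join(conversation[-6:])       # keep only the recent turns
        return (
            "You are helping an autistic child reply in a friendly video call.\n"
            f"The other person currently looks {detected_expression}.\n"
            f"Recent conversation:\n{history}\n"
            "Suggest one short, kind reply the child could say."
        )

    print(expression_to_emoticon("surprised"))
    print(build_response_prompt(["Friend: I won my football match today!"], "happy"))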



ID: 216 / STS 10B: 3
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Easy-to-Read, readability, evaluation, accessibility

Towards Reliable E2R Texts: A Proposal for Standardized Evaluation Practices

M. Madina

Darmstadt University of Applied Sciences (Hochschule Darmstadt)

Easy-to-Read (E2R) is a method of enhancing the accessibility of written text by using clear, direct, and simple language. E2R texts are designed to improve readability and accessibility, especially for individuals with cognitive disabilities. However, there is a significant lack of standardized evaluation methods for these texts. Traditional ATS (Automatic Text Simplification) evaluation methods such as BLEU, SARI or ROUGE present several limitations for E2R evaluation. Readability measures such as Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL) do not take into account all document factors. Manual evaluation methods, such as Likert scales, are resource-intensive and lead to subjective assessments. This paper proposes a threefold evaluation method for E2R texts. The first step is an automatic evaluation to measure quantitative aspects related to text complexity. The second step is a checklist-based manual evaluation that takes into account qualitative aspects. The third step is a user evaluation, focusing on the needs of end-users and the understandability of texts. Our methodology ensures thorough assessments of E2R texts, even when user evaluations are not feasible. This approach aims to bring standardization and reliability to the evaluation process of E2R texts.
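
For reference, the two readability measures mentioned (FRE and FKGL) can be computed as below; the syllable counter is a crude heuristic and the sample sentence is made up. They illustrate the kind of quantitative signal used in the first, automatic evaluation step.

    import re

    def count_syllables(word):
        """Very rough English syllable heuristic: count vowel groups."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_metrics(text):
        """Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL)."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps = len(words) / sentences                  # words per sentence
        spw = syllables / len(words)                  # syllables per word
        fre = 206.835 - 1.015 * wps - 84.6 * spw
        fkgl = 0.39 * wps + 11.8 * spw - 15.59
        return round(fre, 1), round(fkgl, 1)

    print(flesch_metrics("The dog sleeps. The cat plays outside."))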



ID: 146 / STS 10B: 4
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: brain-computer interface, steady-state visual evoked potential, aphasia

EEG Measurement Site Suitable for SSVEP-BCI Assuming Aphasia

S. Kondo

Kogakuin University, Japan

The purpose of this study is to investigate SSVEP-BCI measurement channels for aphasia patients with decreased visual acuity in the right eye due to right hemiplegia. The steady-state visual evoked potential (SSVEP) is a visual response to a flashing stimulus. A brain-computer interface (BCI) is an interface that connects the brain and a computer. SSVEP-BCI has a high information transmission ability. Aphasia caused by cerebral hemorrhage is often accompanied by paralysis, and BCI may be an effective supplement. However, in SSVEP-BCI, it is not appropriate to acquire signals from the conventional measurement sites for patients whose visual acuity has decreased due to paralysis. In this study, with the cooperation of an aphasia patient with right hemiplegia, we examined what measurement electrode placement would be appropriate when implementing SSVEP-BCI for aphasia patients. Electrodes were placed on the left side of the back of the head, the right side of the back of the head, and across the entire back of the head. In the 4-input SSVEP-BCI experiment, the BCI accuracy for the entire occiput, the left occipital area, and the right occipital area was 81.03%, 43.96%, and 86.97%, respectively. The BCI accuracy for the right occipital area was better than that for the entire occipital region, which has many channels. Based on the above, this study demonstrated the need to adapt to the characteristics of the subject when providing SSVEP-BCI to aphasia patients.
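
To picture how SSVEP decoding works, the sketch below selects, from a set of stimulus flicker frequencies, the one whose power dominates the spectrum of a single EEG channel. It uses synthetic data and a plain FFT; it is a generic illustration, not the study's analysis code.

    import numpy as np

    def detect_ssvep_target(eeg, fs, stimulus_freqs):
        """Return the stimulus frequency with the highest spectral power
        in a single-channel EEG segment (simple FFT-based SSVEP decoding)."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
        powers = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in stimulus_freqs}
        return max(powers, key=powers.get)

    # Synthetic 4-second segment where the user attends the 10 Hz flicker.
    fs = 250
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(detect_ssvep_target(eeg, fs, [8.0, 10.0, 12.0, 15.0]))   # -> 10.0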



ID: 150 / STS 10B: 5
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Virtual Reality, Persons with Intellectual Disabilities, Vocational Skills, Intelligence Quotient, Visual-motor integration

The Correlation among the Intelligence, Visual-Motor Skills and Virtual Reality Operation Performance of Students with Intellectual Disabilities

H.-S. Lo, T.-F. Wu

National Taiwan Normal University, Taiwan

Introduction: Virtual reality (VR) finds application across diverse domains and can provide a multisensory experience that enhances students' skill development by simulating real-world work situations. These features of VR are particularly suitable for individuals with intellectual disabilities (ID). This study primarily explored the correlation of intelligence and visual-motor skills with the VR operation performance of students with ID.

Methods: There were 57 students with ID who participated in this study. Participants completed two trials of the tasks in the VR system, which automatically recorded the time spent on the task and the accuracy of performing the steps. When participants completed the VR task, their intelligence and visual-motor integration skills were then assessed.

Results: The results demonstrated that students with ID spent less time and obtained higher accuracy rates on the second trial than on the first. In addition, full-scale intelligence quotient and visual-motor integration could predict the time spent on the first trial, and the working memory index of students with ID was positively correlated with accuracy on both the first and second trials.

Conclusion: The findings indicated that students with ID possess the capability to navigate and interact with VR. Their intellectual and visual-motor skills no longer affect the time spent after repeated practice, while their working memory affects the accuracy of VR operation.



ID: 196 / STS 10B: 6
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Ambient and Assisted Living (AAL), Assistive Technology (AT), User Centered Design and User Participation

Assistive Augmented Reality for Adults on the Autism Spectrum with Intellectual Disability

T. Westin

Stockholm University

Augmented reality (AR) presents opportunities for creating new assistive technologies by integrating virtual objects with the actual world. However, AR also presents challenges for co-design and accessibility. The goal of this study is to co-design AR support from the perspective of people on the autism spectrum with intellectual disability at day activity centres (DACs) who are of working age and not gainfully employed or in training. Two workshops were first held with staff only, to educate staff, learn more about how AR could be designed for DAC practices, and ease communication in the third workshop: a series of individual, local workshops at several DACs, arranged to be convenient for participants. Workshops included testing functional AR prototypes, video modelling, AR design with a visualisation kit and an inclusive SUS questionnaire. The results show how AR-based indoor navigation support, as well as a QR-based media player, can be designed and how participants responded, and further how the indoor navigation and personalisation can be managed and set up by staff in AR and web interfaces. Results from testing revealed accessibility issues with the physical size of tablets, the duality of AR interaction with both tangible and virtual objects, and some details of the digital design. However, there was also clear confirmation that the AR indoor navigation and the media player can be very useful. Co-design of AR was further achieved through redesign based on the workshop results.

 
1:30pm - 3:30pm STS 4A: STS Blindness, Low Vision: New Approaches to Perception and ICT Mediation
Location: Track 4
Session Chair: Katerine Romeo, University of Rouen Normandy
 
ID: 171 / STS 4A: 1
LNCS submission
Topics: STS Blindness Gain or New Approaches to Artwork Perception and ICT Mediation
Keywords: (e)Accessibility, artworks, saliency detection, blind people, deep learning

Detecting Areas of Interest for Blind People: Deep Learning Saliency

W. Luo, L. Djoussouf, C. Lecomte, K. Romeo

Rouen Normandy University, Normandy Univ, LITIS UR 4108, F-76000 Rouen

The purpose of this study is to explore human visual attention when observing artworks in order to create audio descriptions that will guide tactile exploration via a force feedback tablet (F2T). To find the semantically important elements, we used an eye-tracker to record people's behaviour when observing scenes with and without audio description. The collected data and a small dataset of images of the Bayeux Tapestry are used to train a deep learning model that aims to predict saliency on other images. We use three models to predict saliency on Bayeux Tapestry images: ResNet50, TranSalNet, and SAM-LSTM-ResNet. After the training phase on our dataset, the predictions of the chosen model (SAM-LSTM-ResNet) are closer to the ground truth and show better correlation, a significant improvement over the same model trained only on the original dataset.
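
The "better correlation" reported above usually refers to Pearson's correlation coefficient (CC) between predicted and ground-truth saliency maps; a minimal version of that metric is sketched here, with random arrays standing in for real maps.

    import numpy as np

    def saliency_cc(pred, gt):
        """Pearson correlation coefficient between two saliency maps,
        a common saliency-evaluation metric (higher is better)."""
        p = (pred - pred.mean()) / (pred.std() + 1e-8)
        g = (gt - gt.mean()) / (gt.std() + 1e-8)
        return float((p * g).mean())

    # Placeholder maps; in practice these come from the model and eye-tracking data.
    rng = np.random.default_rng(1)
    prediction = rng.random((240, 320))
    ground_truth = 0.7 * prediction + 0.3 * rng.random((240, 320))
    print(round(saliency_cc(prediction, ground_truth), 3))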



ID: 199 / STS 4A: 2
LNCS submission
Topics: STS Blindness Gain or New Approaches to Artwork Perception and ICT Mediation
Keywords: artwork accessibility, tactile graphics, multisensory

A Comparison Of Audio-Tactile Exploration Methods To Discover The Apocalypse Tapestry

M. Redon1, L. Djoussouf2, K. Romeo2

1Normandie Univ, UNICAEN, ENSICAEN, CNRS, GREYC, Caen, France; 2Rouen Normandy University, Normandy Univ, LITIS UR 4108, F-76000 Rouen, France

A major issue for blind and partially blind museum visitors is artwork accessibility. To that end, tactile graphic techniques and supports have been developed to allow haptic exploration of two-dimensional artworks, although such exploration is not always available in museums. Focusing on the Apocalypse Tapestry located in the Castle of Angers, France, this study aims to compare different technologies. To evaluate their impact on the museum visitors’ experience, we seek to establish their benefits and drawbacks and look for further improvements. Several tactile supports were used in this experiment: a tactile book with embossed paper, a swell-paper representation, a 3D-printed puzzle, and a haptic device named F2T (Force Feedback Tablet). Blind, partially blind and sighted participants were tasked to explore these supports accompanied by audio descriptions. Participants were presented with several Likert-scale and open-ended questions to rate their general impressions. Overall, all conditions received good ratings from the participants; however, more nuanced comments were gathered on each condition. Based on users’ feedback, new ways to improve the technologies have been proposed.



ID: 172 / STS 4A: 3
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: HCI and Non-Classical Interfaces, Assistive Technology (AT), Musical Interfaces, Tangible Interaction

Sonic Interactions - Towards Accessible Digital Music-Making

J. Vetter

University of Arts Linz, Austria

Can tangible computer music interfaces inspire creativity and enhance artistic stage performances for visually impaired and blind musicians? And what are the benefits of engaging with tangible music interfaces in the context of computer music? This paper presents the results of an in-depth exploration of inclusive and interactive digital music practice with regard to accessibility barriers in the field of computer music software and hardware. Based on the collaboration of visually impaired, blind and sighted musicians and researchers, new concepts were developed for tangible computer interaction, focusing on interfaces that are both accessible and enable professional artistic expression. Derived from an artistic research approach, three tangible music interfaces are proposed, each addressing the physical representation of individual sonic features commonly found in computer music. The practical work is summarized, including the development of accessible music software, the development of the hardware interfaces and their discussion during presentations and workshops with visually impaired and blind experts, and finally their use as part of artistic practice on stage. The resulting expert feedback acquired during workshops, presentations and live performances is grouped into four key topics and interpreted with regard to the aforementioned research questions.



ID: 144 / STS 4A: 4
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Braille, Finger Tracking, Accuracy, Assistive Technology (AT)

Analyzing the Accuracy of Braille Finger Tracking with Ultraleap’s Virtual Reality Hand Tracking System

M. Treml1,2

1Institute of Visual Computing and Human-Centered Technology, TU Wien, Vienna, Austria; 2TETRAGON Braille Systems GmbH, Vienna, Austria

Technical solutions for tracking the fingers of people reading braille usually require specific laboratory settings that do not resemble braille reading in real-life. This paper reports on part 2 of a study that analyzes the potential of a virtual reality hand tracking system from Ultraleap to be used for braille finger tracking with a minimum of such inconveniences. This part of the study focuses on whether the system is accurate enough to reliably detect the braille cell that is currently the focus of the reader’s tactile perception.

The Ultraleap camera was mounted 14 cm above a refreshable braille display with 20 six-dot cells. To measure the accuracy of finger detection, software was developed to log the coordinates of the detected index fingers’ fingertips.

The measured distance between neighboring cells reached values from 0.005 to 0.01 Ultraleap units (about 0.5 to 1 cm in real life), and the x-coordinates measured for single cells varied by 0.01 Ultraleap units in the worst case, which is evidence that the accuracy is not sufficient to reliably detect the single braille cell that is currently supposed to be the center of the user’s tactile perception. However, the small amount of overlap (only one cell off in the worst case) suggests that the system can still be utilized for use cases like audio output of the current word.
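As an illustration of the cell-detection step described above, the fingertip x-coordinate reported by the tracker can be mapped to a cell index once the position of the first cell and the cell pitch are known; with a pitch on the order of 0.0075 Ultraleap units, an x-jitter of 0.01 units can shift the result by about one cell, consistent with the reported overlap. This is a minimal sketch with assumed calibration values, not the software developed in the study.

    def cell_index(x: float, x_first_cell: float, cell_pitch: float, n_cells: int = 20) -> int:
        """Map a fingertip x-coordinate to the 0-based index of the nearest braille cell.

        All values are in the tracker's coordinate units; x_first_cell is the centre of
        cell 0 and cell_pitch the distance between neighbouring cell centres (assumed).
        """
        idx = round((x - x_first_cell) / cell_pitch)
        return max(0, min(n_cells - 1, idx))

    # Hypothetical calibration: first cell centred at 0.0, pitch 0.0075 units.
    print(cell_index(0.031, x_first_cell=0.0, cell_pitch=0.0075))  # -> 4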



ID: 137 / STS 4A: 5
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: remote collaboration, mixed ability teams, mixed visual ability teams, synchronous remote collaboration

Remote Collaboration Between Sighted and Non-Sighted People in Mixed-Ability Teams

F. Kaschnig, T. Neumayr, M. Augstein

University of Applied Sciences Upper Austria, Austria

Remote work settings have become increasingly important in recent years, and they often inherently require collaboration at a distance. In such settings, especially in mixed-ability teams, different individuals may face different challenges. In this paper, we therefore analyze remote collaborative interaction between sighted and non-sighted users based on a qualitative study with twelve participants (six non-sighted) in three mixed-ability teams who worked together on a problem-solving task.
Our analysis considers a variety of aspects related to the perception and closeness of collaboration, assistance and communication among team members, disruptive elements, workspace awareness, territoriality, and triggers for problematic situations, and it indicates both challenges and potentials of collaboration within mixed-ability teams.

 
1:30pm - 3:30pmI2: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
3:30pm - 4:00pmB3: Coffee Break
4:00pm - 5:30pmSTS 1B: Software, Web and Document Accessibility
Location: Track 1
Session Chair: Nataša Rajh, JKU
Session Chair: Reinhard Koutny, JKU
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
Session Chair: Matjaž Debevc, University of Maribor, FERI
 
ID: 111 / STS 1B: 1
OAC Submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: Design Models, Design Processes, Co-Design, Participatory Design

Towards a Framework of Inclusive Software Design Process and Practices

D. O'Sullivan

Technological University Dublin, Ireland

A system can be considered inclusive if it is usable by as wide a range of people as possible (including people with disabilities and people using a wide range of technologies). One approach that designers can take to create inclusive products and services is to involve end-users in the design process. A number of potential design approaches can structure this inclusion process, and in this paper both design models and design processes are described and categorized.

In terms of design models, the four inclusive models that will be categorised and explained are Accessible Design (which focuses on people with disabilities), Inclusive Design (which focuses on excluded people, including: older people, and people with disabilities), Universal Design (which focuses on all people), Design for All (which focuses on all people using a range of devices).

In terms of design processes, an overall philosophy of design that suggests including end-users is called “Co-Design”; the specific processes reviewed in this paper are: Participatory Design (requiring users to be partners in designing), User-Centred Design (where users are optionally partners in designing), Co-Production (focusing on the public sector), and Co-Creation (focusing on the private sector).

This research is being undertaken as part of a trans-European research project and incorporates perspectives from a range of European stakeholders working in inclusive design.



ID: 118 / STS 1B: 2
LNCS submission
Topics: STS Accessibility and Usability of Mobile Platforms for People with Disabilities and Elderly Persons: Design, Development and Engineering
Keywords: Accessible user experience, Assistive Technology (AT), (e)Accessibility, Design for All and Universal Design

Inequality in User Experience: Can Mobile User Interfaces that Help Sighted Users Create Barriers for Visually Challenged People?

S. Tanwar, P. Rao

Indian Institute of Technology, Delhi, India

Designing mobile interfaces for enhanced usability and user experience has become standard practice in modern-day app development. However, this approach often prioritizes the needs of sighted users, leading to a compromised experience for people with visual impairments and blindness. This study reveals the user experience elements that make it easier for sighted users to accomplish a task while simultaneously creating barriers for people who rely on screen readers. Using task-based usability tests of six popular mobile apps, the study compares the experiences of 12 sighted and 15 visually challenged users. The results reveal drastic differences in usability and experience between the two groups, highlighting the gaps and experience compromises. The study shows how designing interfaces for enhanced usability and user experience for sighted users compromises six prominent aspects of usability for people with visual impairments and blindness, leading to productivity challenges and poor user experiences, and it calls for a more inclusive and accessible approach to mobile app design.

The study suggests investigating technological advancements, such as building screen readers' capability to understand the designer's intent while keeping screen reader limitations in mind, which could address such issues and provide better experiences for screen reader users while keeping them productive in tasks accomplished through mobile devices.



ID: 208 / STS 1B: 3
OAC Submission
Topics: STS Accessibility and Usability of Mobile Platforms for People with Disabilities and Elderly Persons: Design, Development and Engineering
Keywords: accessibility, usability, mental wellbeing, mobile app, higher education

The Accessibility and Usability of Mobile Apps for Students’ Mental Wellbeing in Higher Education

W. Chen

Oslo Metropolitan University, Norway

Students’ mental wellbeing has become an increasing challenge in higher education. Various studies have demonstrated the positive effects of mobile apps in improving the mental wellbeing of students. Despite the potential benefits, accessibility and usability issues in mental health apps create barriers for diverse students, particularly students with disabilities, preventing them from taking full advantage of these applications. The goal of this study is to understand the digital barriers present in mental health apps for students in higher education and to provide recommendations for designing and developing mental health apps with a high level of accessibility and usability. To achieve this goal, we conducted heuristic evaluation and user testing on a carefully selected group of apps. A preliminary analysis indicates that all the selected apps exhibit accessibility and usability issues, which can have a negative impact on user engagement.



ID: 190 / STS 1B: 4
LNCS submission
Topics: STS Accessible and Inclusive Digital Publishing
Keywords: PDF accessibility, digital accessibility, higher education, persons with visual impairments

PDF Accessibility in International Academic Publishers

O. Pierrès, A. Darvishy

Zurich University of Applied Sciences (ZHAW), Switzerland

Academic articles are commonly published in Portable Document Format (PDF). However, for many people with visual impairments, PDFs present significant accessibility issues. This study addresses two research questions: 1) To what extent are PDFs in prominent academic repositories accessible? and 2) To what extent are accessibility issues in academic articles known and addressed by repositories? To answer these questions, 8,000 PDFs from four prominent repositories (Springer, Elsevier, ACM, and Wiley) were retrieved and automatically analyzed according to accessibility criteria based on the Matterhorn Protocol. Additionally, a quantitative content analysis was performed on the submission guidelines of the repositories to determine the degree to which accessibility is considered in document creation. The results suggest that most PDFs were not tagged, even though some repositories included accessibility in their general author guidelines. The paper concludes with recommendations to improve the accessibility of papers in academic repositories.

 
4:00pm - 5:30pmSTS 8: STS Interaction Techniques for Motor Disabled Users
Location: Track 2
Session Chair: Mathieu Raynal, IRIT - University of Toulouse
Session Chair: Ian Scott MacKenzie, York University
 
ID: 246 / STS 8: 1
LNCS submission
Topics: STS Interaction Techniques for Motor Disabled Users
Keywords: Pointing technique, Motor disabled users, Automatic scanning, Single input switch

Automatic Bars with Single-Switch Scanning for Target Selection

M. Raynal1, I. S. MacKenzie2

1IRIT - University of Toulouse, France; 2Department of Electrical Engineering and Computer Science, York University, Toronto, Canada

In this article, we propose two pointing techniques based on automatic scanning that require only a single input switch from the user. The first technique, iButton, is an improved version of the “Button” technique, which was the better of the two versions previously presented in [XX]. The second pointing technique (CS3) is based on circular scanning of the screen, first by an arc, then by a line. First results show that CS3 was slower than iButton, but participants also made fewer pointing errors with it. Thus, if we take throughput into account, the two techniques are equivalent in terms of performance.
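Throughput here presumably follows the Fitts'-law definition customary in pointing-device studies (index of difficulty divided by movement time). A minimal sketch of that calculation, using illustrative numbers rather than the study's data:

    import math

    def throughput(distance: float, width: float, movement_time: float) -> float:
        """Fitts'-law throughput in bits/s: ID / MT with ID = log2(D/W + 1)."""
        index_of_difficulty = math.log2(distance / width + 1)  # bits
        return index_of_difficulty / movement_time             # bits per second

    # Illustrative values: 200 px target distance, 40 px target width, 2.5 s selection time.
    print(round(throughput(200, 40, 2.5), 2))  # -> 1.03 bits/s

In practice the effective target width (derived from the spread of selection endpoints) is often used instead of the nominal width; the study may use either variant.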



ID: 247 / STS 8: 2
LNCS submission
Topics: STS Interaction Techniques for Motor Disabled Users
Keywords: Text Entry, Word prediction, Motor Disabled Users

WordGlass: Additional Keys to Present the Most Likely Words

M. Raynal

IRIT - University of Toulouse, France

In this article, we propose WordGlass, a system of additional keys that are dynamically added as the user types. Each key proposes one of the most likely words. The keys have been spaced out on the keyboard so that one additional key can be displayed above and another below the last key pressed, without obscuring the keys already present. First results show that, although typing speed is slightly higher with WordList than with WordGlass (1.3 cps versus 1.25 cps), WordGlass has some benefits: the prediction use rate is much higher with WordGlass than with WordList. The majority of participants said that it was easier to see a word that was close to the pointer than to look beside the keyboard. Moreover, when there were few characters left to type on the current word, participants preferred to continue typing on the keyboard rather than moving to the list. This proximity to the cursor was also reflected in the distance covered by the pointer. On average, participants covered 28.3% less distance with WordGlass than with WordList.



ID: 158 / STS 8: 3
LNCS submission
Topics: STS Interaction Techniques for Motor Disabled Users
Keywords: Assistive Technology (AT), Motor Disability, Mouth-Operated Joystick, Mouse Emulation, Alternative HCI

Development and Evaluation of a Low-Cost, High-Precision Sensor System for Mouth-Operated Joysticks

C. Veigl1,2, B. Klaus1

1UAS Technikum Wien, Austria; 2Johannes Kepler University Linz (JKU)

Mouth-operated mouse emulation devices enable accurate computer control for people with restricted movement of their upper limbs. To be usable even with a limited range of head movement and minimal muscle strength, precise sensor systems are required, which are usually expensive. In this work, we explore the phenomenon of piezoresistivity in SMD thick-film resistors and utilize this effect for a low-cost, highly sensitive force sensor for mouth-operated joysticks. A proof-of-concept implementation is presented, including the PCB layout and signal-processing strategy. The sensor performance is compared to standard strain gauge measurements, showing minimum detectable forces at the joystick tip of 0.5 g (strain gauge) or 10 g (piezoresistive measurement). A prototype of a mouth-operated joystick based on the novel sensor boards was evaluated in a user study with 10 participants. In a quantitative evaluation, the efficiency of the pointing device was measured using Fitts' law, showing an average information throughput of 1.51 bits/s with an error rate of 9.79%. The manufacturing cost of the novel sensor system is only €7.50, as it can be produced almost completely automatically. This drastically reduces the cost of our mouth-operated FlipMouse device (and similar alternative input solutions), making it affordable in low-income settings. All designs have been released under open-source licenses.

 
4:00pm - 5:30pmSTS 11A: STS Dyslexia, Reading/Writing Disorders: Assistive Technology and Accessibility
Location: Track 3
Session Chair: Melanie Schaur, JKU Institut Integriert Studieren
Session Chair: Anna Ajlani, Johannes Kepler University Linz
 
ID: 266 / STS 11A: 1
LNCS submission
Topics: STS Dyslexia, Reading/Writing Disorders: Assistive Technology and Accessibility
Keywords: Dyslexia, Reading/Writing Disorders, Higher Education, Sociology, Assistive Technology.

Sociotechnical Experiences and Strategies of People with Reading/Writing Disorders in Higher Education: A Systematic Literature Review in Sociology

A. Ajlani

Johannes Kepler University Linz, Austria

This systematic literature review addresses the persistent challenges faced by students and employees with dyslexia in higher education settings, despite structural adjustments in academic institutions. Emphasizing the need for a comprehensive understanding of dyslexia's sociocultural context, the sociological exploration maps out diverse experiences, navigational strategies, and the role of technological tools for individuals with reading and writing disorders.
The ongoing systematic literature review involves qualitative and quantitative studies, aiming to synthesize the global state of inclusion for dyslexic individuals.
The work-in-progress entails completed database searches in ProQuest and Web of Science, with additional searches planned in Springer and SocINDEX. Selection criteria involve peer-reviewed publications from the last 10 years discussing dyslexia in higher education. Initial results suggest challenges faced by dyslexic students, technological interventions for inclusive learning, and the need for broader assistive tools beyond reading and writing.
The preliminary findings underscore the importance of addressing dyslexia, ableism and technological interventions in academic settings from a holistic, that is, sociocultural perspective, thus providing a foundation for more participative and inclusive approaches to academic administration and technological development.



ID: 256 / STS 11A: 2
OAC Submission
Topics: STS Dyslexia, Reading/Writing Disorders: Assistive Technology and Accessibility
Keywords: dyslexia, higher education, academic achievement, support

Factors Of Academic Achievement In University Students with Dyslexia

M. Massoumzadeh

JKU Linz, RID, Austria

Dyslexia, a learning disability, affects word recognition, decoding, and spelling, hindering reading comprehension and vocabulary expansion. Research shows persistent reading difficulties from early schooling into adulthood.

We aim to explore factors influencing learning success in university students with dyslexia. Previous studies highlight challenges in writing, note-taking, and organization. Dyslexic individuals may also face heightened anxiety and reduced self-esteem. Comorbidities and varying levels of support further complicate their educational journey.

Our study includes a literature review and assessment of 10 university students, focusing on factors such as persistence of difficulties, accompanying issues like anxiety, and the role of support systems.

Preliminary findings suggest 5 factors significantly impact academic achievement in dyslexic students, emphasizing the need for tailored interventions and support mechanisms.



ID: 219 / STS 11A: 3
LNCS submission
Topics: STS Assistive Technologies and Inclusion for Older People
Keywords: AAL, egocentric vision, LLM, Assistive Technology (AT), reading assistance

TEXT2TASTE: A Versatile Egocentric Vision System for Intelligent Reading Assistance Using Large Language Model

W. Mucha

TU Wien, Austria

The ability to read, understand and find important information in written text is a critical skill in our daily lives, essential for our independence, comfort and safety. However, a significant part of our society is affected by partial vision impairment, which leads to discomfort and dependency in daily activities. To address the limitations faced by this part of society, we propose an intelligent reading assistant based on smart glasses with embedded RGB cameras and a Large Language Model (LLM), whose functionality goes beyond corrective lenses. The video recorded from the egocentric perspective of a person wearing the glasses is processed to localise text information using object detection and optical character recognition methods. The LLM processes the data, allows the user to interact with the text and responds to a given query, thus extending the functionality of corrective lenses with the ability to find and summarize knowledge from the text. To evaluate our method, we created a chat-based application that allows the user to interact with the system. The evaluation was conducted in a real-world setting, namely reading menus in a restaurant, and involved four participants. The results show robust accuracy in text retrieval. The system not only provides accurate meal suggestions but also achieves high user satisfaction, highlighting the potential of smart glasses and LLMs in assisting people with special needs.
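A heavily simplified sketch of such a pipeline (not the authors' implementation) is given below: text is read from a camera frame with an off-the-shelf OCR library and combined with the user's question into a prompt for a language model. The file name and the build_prompt helper are hypothetical placeholders, and the actual system also performs object detection to localise the text first.

    from PIL import Image
    import pytesseract  # generic OCR library, used here only for illustration

    def extract_text(image_path: str) -> str:
        """Run OCR on a single egocentric camera frame."""
        return pytesseract.image_to_string(Image.open(image_path))

    def build_prompt(document_text: str, question: str) -> str:
        """Hypothetical placeholder: in a real system this prompt would be sent to an LLM."""
        return f"Text read from the camera:\n{document_text}\n\nQuestion: {question}"

    menu_text = extract_text("menu_frame.jpg")  # hypothetical frame from the glasses
    prompt = build_prompt(menu_text, "Which dishes are vegetarian?")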

 
4:00pm - 5:30pmSTS 4B: STS Blindness, Low Vision: New Approaches to Perception and ICT Mediation
Location: Track 4
Session Chair: Katerine Romeo, University of Rouen Normandy
 
ID: 147 / STS 4B: 1
OAC Submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: visual impairment, screen reader, information access gap, inclusive work environment, AI, image recognition, accessibility

Survey For Development Of An Assistive Tool For Screen Reader Users To Utilize Icons Without Alternative Text

Y. Ina

Tsukuba University of Technology, Japan

This study explores the challenges faced by visually impaired individuals using screen readers in the workplace and proposes the development of assistive tools to address these challenges. Interviews with five visually impaired workers revealed difficulties related to screen reader compatibility, image and character recognition, and lack of accessibility awareness. The data, analyzed by clustering, highlighted significant issues such as inaccessibility of PDFs, icons without alternative text, and inadequate visual layouts. The research emphasizes the need for improved tools to bridge the information access gap, demonstrating the potential of AI and image recognition technologies in creating more inclusive work environments. Future work will focus on developing a tool that interprets icons' functions and presents them in a screen-reader-friendly format.



ID: 124 / STS 4B: 2
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: visual impairment, image recognition, neural network, expiration date

Expiration Date Recognition System Using Spatial Transformer Network for Visually Impaired

Y. Takeuchi

Daido University, Japan

In this paper, we propose a system for recognizing the expiration date on perishable food products. A visually impaired user takes a photo of the product; the system then automatically recognizes the expiration date and reads it out to the user. The dates in the images are sometimes skewed, misaligned, or partially missing. To address these problems, we use a Spatial Transformer Network (STN), which explicitly allows spatial manipulation of data within the neural network, to recognize skewed, misaligned, or partially missing date images.

We propose a dedicated CNN with STNs for expiration date recognition. The input to this network is an image of the expiration date. The network has six STNs to detect each digit of the date. Each STN outputs a rectified image of its digit, which is then recognized by a conventional CNN.

We trained the expiration date recognition network. The network converged quickly, and training was stopped after 10 epochs. We tested the network on 1,088 date images that were not used during training. The experimental results showed that the system achieved a high recognition rate of 99.42% on the dataset of dates without spaces and 98.44% on the dataset of dates with spaces.
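For readers unfamiliar with STNs, the sketch below shows the core mechanism in PyTorch: a small localisation network predicts the parameters of an affine transform, which is then applied with affine_grid and grid_sample to rectify the input. It is a generic, minimal illustration, not the paper's six-STN architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleSTN(nn.Module):
        """Minimal spatial transformer: learns an affine warp that rectifies the input."""
        def __init__(self):
            super().__init__()
            self.loc_net = nn.Sequential(  # localisation network
                nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            )
            self.fc_loc = nn.Sequential(nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6))
            # Start from the identity transform so training begins with "no warp".
            self.fc_loc[-1].weight.data.zero_()
            self.fc_loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, x):
            theta = self.fc_loc(self.loc_net(x)).view(-1, 2, 3)   # predicted affine parameters
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)    # rectified image

    x = torch.randn(4, 1, 32, 32)   # batch of single-channel digit crops (illustrative size)
    print(SimpleSTN()(x).shape)     # torch.Size([4, 1, 32, 32])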



ID: 259 / STS 4B: 3
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Digital Twin, Assistive Technology (AT), Strabismus, Nystagmus

Enhancing Accessibility through ICT Trends for Extraocular Muscles Prosthesis Implant: A Comprehensive Literature Review

A. K. Verma

JKU Linz, Austria

The objective of this research is to explore ICT trends and their potential impact on EOM prosthesis implant surgeries, with a specific emphasis on improving accessibility and surgical outcomes.

Research Questions:

  1. What imaging modalities and data sources are essential for constructing Digital Twins tailored for Strabismus Surgeries?
  2. What medical image databases exist for addressing EOM issues, particularly in the context of Strabismus?
  3. What are the current diagnostic image processing and neural network algorithms available for detecting Strabismus?
  4. What components constitute the 3D geometrical models utilized in EOM simulators?
  5. How can the eye be quantified using medical imaging?
  6. What frameworks and algorithms are available for converting medical imaging into three-dimensional geometrical models, facilitating diagnosis and simulation?
  7. What mathematical models are employed in EOM simulations to enhance understanding and training?

Methods: The SLR methodology was employed, incorporating diverse research questions and precise definitions. A total of 14 essential keywords were identified, leading to the selection of six appropriate databases. Following rigorous search criteria, 56 relevant research papers were identified and analyzed.

Results: Existing databases primarily rely on digital imaging, lacking explicit medical image repositories. Various AI and ML algorithms were identified for diagnosing Strabismus, though a scarcity of algorithms based on medical images was observed.

 
4:00pm - 5:30pmI3: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
Date: Thursday, 11/July/2024
8:30am - 10:00amSTS 1C: Software, Web and Document Accessibility
Location: Track 1
Session Chair: Nataša Rajh, JKU
Session Chair: Reinhard Koutny, JKU
Session Chair: Klaus Miesenberger, Johannes Kepler University Linz
Session Chair: Matjaž Debevc, University of Maribor, FERI
 
ID: 263 / STS 1C: 1
LNCS submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: Web accessibility; Automated tools; Accessibility evaluation; Disability, Accessibility improvement

A Declarative Model for Web Content Accessibility Evaluation Process

J. Ara1, C. Sik-Lanyi1,2

1Department of Electrical Engineering and Information Systems, University of Pannonia; 2Hungarian Research Network, Budapest, Hungary

The goal of this work is to develop a web accessibility evaluation tool that can be applied in real-life cases to evaluate website accessibility. Although existing web accessibility evaluation tools are effective, their reported results can be unclear to end-users due to several limitations, which reduces the effectiveness of the tools. Therefore, our prime focus in this work is to develop a web content accessibility testing tool that addresses the limitations of existing tools in five respects: implementing the updated guidelines, incorporating a guideline simplification process, considering user criteria as additional evaluation criteria, categorizing the report into textual and non-textual information, and providing the overall accessibility report as an accessibility score for each disability type. Unfortunately, website accessibility is becoming worse day by day due to a lack of concern about accessibility issues and the requirements of people who have special needs. Thus, the development of an advanced website accessibility evaluation tool is an emerging demand, especially to provide complete accessibility reports considering every type of disability.



ID: 202 / STS 1C: 2
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: PDF Accessibility, Accessibility Tools

Testing Usability of Tools for Making PDFs Accessible: Pressing Issues and Pain Points

T. Schwarz, K. Müller

Karlsruhe Institute of Technology, Germany

Nowadays, most digital documents are available in the form of PDF files.
The main problem, however, is that there are hardly any tools that automatically generate accessible PDFs, which is why existing PDFs have to be made accessible manually.
In a study with 8 participants, we investigated the usability of three tools (Adobe Acrobat Pro, Axes and PAVE) used for making PDFs accessible. Our results show that the SUS score of PAVE is the best, followed by Axes and Acrobat. None of the three tools, however, achieves a SUS score above 50, and all therefore need improvement. The qualitative analysis points to three main problems: (i) users felt lost and did not know what to do, (ii) there was no or insufficient possibility to track changes, and (iii) the interfaces were not easy to use.
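For reference, the System Usability Scale (SUS) score quoted above is computed from the ten 1-5 questionnaire items in the standard way: odd-numbered items contribute (score - 1), even-numbered items contribute (5 - score), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch (the responses shown are illustrative, not the study's data):

    def sus_score(responses: list[int]) -> float:
        """Standard SUS score (0-100) from ten 1-5 Likert responses."""
        assert len(responses) == 10
        total = sum((r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,... vs 2,4,6,...
                    for i, r in enumerate(responses))
        return total * 2.5

    print(sus_score([3, 3, 2, 4, 3, 3, 2, 4, 3, 3]))  # -> 40.0, i.e. below the 50 mark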



ID: 218 / STS 1C: 3
LNCS submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: Digital Participation, Digital Accessibility, Web Accessibility Directive, Feedback Mechanism

How to Provide Actionable Feedback on Web Accessibility Issues – Development of a Model Curriculum and Practical Guidelines

S. Dirks

TU Dortmund, Germany

In the EU, the Web Accessibility Directive (2016, WAD) requires public sector bodies' websites and mobile apps to be accessible to all users and requires accessibility problems to be documented and monitored. The WAD also introduced a feedback mechanism that can be used to flag accessibility problems or to request information about content that is provided in a non-accessible way. Member state reports from 2021 show that most websites and mobile applications do not meet all required demands, although there are hardly any complaints about the barriers that still exist.

In this context, the paper presents a ‘model curriculum’ on how to provide actionable feedback on web accessibility issues. The curriculum is based on research on existing barriers, user needs, and best-practice examples, and was designed for Vocational Education and Training providers and Organisations of Persons with Disabilities across the EU that are interested in providing training courses for their members. The main aim is to empower end-users to give feedback on existing barriers and to establish guidance on how to give actionable feedback. The curriculum is discussed with a focus on implications for its implementation and for future studies and projects.



ID: 227 / STS 1C: 4
LNCS submission
Topics: STS Web Accessibility: Methods, Techniques and Tools for Design, Development and Evaluation/Monitoring
Keywords: Assistive Technology (AT), (e)Accessibility, Built Environments, Simulation, Wayfinding

On the Use of a Simulation Framework for Studying Accessibility Challenges Faced by People with Disabilities in Indoor Environments

V. Namboodiri

Lehigh University, United States of America

Persons with disabilities (PWDs) face many challenges in navigating unfamiliar indoor spaces due to physical accessibility barriers, insufficient wayfinding signage and a lack of satellite-positioning capability. Studying the accessibility of indoor environments is not easy because of the challenges in collecting sufficient mobility data from PWDs who navigate such environments. This paper introduces a simulation framework called MABLESim designed to study the accessibility of indoor spaces for PWDs. The use of MABLESim on two different buildings is demonstrated, followed by a comparative study of their accessibility for individuals with differing physical and sensory abilities. By providing valuable insights and practical recommendations, this research contributes to the broader dialogue on inclusivity in built environments, offering a roadmap for creating indoor spaces that cater to the diverse needs of individuals with disabilities.

 
8:30am - 10:00amSTS 5A: STS Art Karshmer Lectures in Access to Mathematics, Science and Engineering
Location: Track 2
Session Chair: Katsuhito Yamaguchi, NPO: Science Accessibility Net
Session Chair: Dominique Archambault, Université Paris 8-Vincennes-Saint-Denis
 
ID: 242 / STS 5A: 1
LNCS submission
Topics: STS Art Karshmer Lectures in Access to Mathematics, Science and Engineering
Keywords: tactile graphics, (e)Accessibility

Accessing Graphics with Color Content

J. A. Gardner

ViewPlus, United States of America

Graphics in STEM and other professional publications often use color as an important part of the content. Color is one of many challenges to making graphics accessible today. Word descriptions are used to "make simple graphics accessible", but tactile versions are necessary for adequate access to more complex graphics such as maps, GIS diagrams, and bar and pie charts. Color in tactile graphics is normally represented by some kind of pattern, texture, or other tactile quality. Experts in the field typically use standard graphics software to create the image for a tactile graphic: they remove colored regions and replace them with patterns, and the file is then printed on swell/capsule paper. This process is labor-intensive, and swell paper is expensive.

Users of ViewPlus embossers and version 8 or later of the Tiger Software Suite have a considerably simpler and less expensive option. A print driver option permits the substitution of patterns, so editing of color is not needed. The color-to-pattern choices may be selected from a variety of internal files, or users may create their own patterns. Custom patterns may be created or edited by sighted users in an editor that is included in the Tiger Software Suite. Any user, whether sighted or blind, can create or edit patterns using any text editor.

Finally, the resulting tactile graphic is printed by the embosser using inexpensive braille paper. Most blind readers find copy embossed on a ViewPlus embosser far superior to swell paper images.



ID: 165 / STS 5A: 2
LNCS submission
Topics: STS Art Karshmer Lectures in Access to Mathematics, Science and Engineering
Keywords: Online service; PDF; Fixed-layout DAISY; Automatic conversion, STEM

ChattyBox: Online Accessibility Service Using Fixed-Layout DAISY

K. Yamaguchi1, T. Komada2

1NPO: Science Accessibility Net, Japan; 2Nihon University, Japan

A new type of DAISY book, "Fixed-layout DAISY", is proposed, in which the whole page is treated as a multi-layer picture whose second layer has the same form as the original print document. A DAISY (EPUB3) player can read out any text on the transparent front layer while highlighting it. Making use of Fixed-layout DAISY, "ChattyBox" has been developed, which provides dyslexic people with various innovative online accessibility services, such as automatically converting e-born PDFs into Fixed-layout DAISY and playing DAISY books back in a popular browser.



ID: 224 / STS 5A: 3
LNCS submission
Topics: STS Art Karshmer Lectures in Access to Mathematics, Science and Engineering
Keywords: Assistive Technology (AT), STEM

Author Intent: Eliminating Ambiguity in MathML

N. Soiffer

Talking Cat Software, United States of America

MathML has been successful in improving the accessibility of mathematical notation on the web. All major screen readers support MathML to generate speech, allow navigation of the math, and generate braille. A troublesome area remains: handling ambiguous notations such as |x|. While it is possible to speak this syntactically, anecdotal evidence indicates most people prefer semantic speech such as “absolute value of x” or “determinant of x” instead of “vertical bar x vertical bar” when first hearing an expression. Several attempts to infer the semantics have had some success, but ultimately the author is the one who definitively knows how an expression is meant to be spoken. The W3C Math Working Group is in the process of allowing authors to convey their intent in MathML markup, and this paper describes that work.

 
8:30am - 10:00amSTS 11B: STS Dyslexia, Reading/Writing Disorders: Assistive Technology and Accessibility
Location: Track 3
Session Chair: Melanie Schaur, JKU Institut Integriert Studieren
Session Chair: Anna Ajlani, Johannes Kepler University Linz
 
ID: 222 / STS 11B: 2
OAC Submission
Topics: STS Dyslexia, Reading/Writing Disorders: Assistive Technology and Accessibility
Keywords: Mathematics, inclusion, virtual reality (VR), augmented reality (AR), educational robotics (ER)

Exploring AR, VR, and Educational Robotics for Inclusive Mathematics Education for Dyslexic Students

M. H. Al Omoush

Dublin City University, Ireland

Dyslexic students often face challenges in comprehending mathematical concepts. Numeracy issues in dyslexia include symbol confusion, digit reversal, problem-solving challenges, slow calculations, and spatial perception difficulties, leading to a significant gap in learning outcomes. The authors investigate the traditional inclusive learning approaches and explore the potential integration of augmented reality (AR), virtual reality (VR) technologies, and educational robotics (ER) to support dyslexic students with mathematics education. AR and VR in education have shown promising results in enhancing student learning experiences across various domains. By creating immersive and interactive environments, these technologies have the potential to enable dyslexic students to explore mathematical concepts in novel and engaging ways. ER integration can complement AR and VR experiences by providing tangible and interactive tools, bridging the gap between the physical and digital worlds. By leveraging AR, VR, and educational robots, educators can create an inclusive and supportive learning environment, promoting active participation and knowledge retention among dyslexic students. The paper also highlights the significance of professional development for educators, aiming to provide educators with essential knowledge and skills to effectively implement these technologies in the classroom and cater to the diverse needs of their students.



ID: 262 / STS 11B: 3
OAC Submission
Topics: STS Dyslexia, Reading/Writing Disorders: Assistive Technology and Accessibility
Keywords: Dyslexia, Adults, Assistive Technology, SLR, Systematic Literature Review, Reading Disorder, Writing Disorder

Words Unleashed: A Systematic Literature Review Study on the Use of Current Assistive Technology for Adults with Dyslexia

R. Koutny, M. Schaur

JKU, Austria

This systematic literature review (SLR) explores the domain of assistive technology (AT) for adults with dyslexia, to investigate the gap in research beyond the focus on children. Dyslexia, a neurological condition affecting reading and writing abilities, presents challenges that AT aims to tackle. Through a comprehensive and systematic search across multiple databases, the review identifies various AT solutions. However, it highlights a scarcity of studies systematically evaluating the use of AT specifically for adults, indicating a critical area for future research. This work concludes that existing research is primarily centered on children, and therefore identifies a need to extend understanding to adult contexts such as professional life and higher education.

 
8:30am - 10:00amSTS 3A: STS: Blind and Low Vision: Orientation and Mobility
Location: Track 4
Session Chair: Alireza Darvishy, Zurich University of Applied Sciences
 
ID: 160 / STS 3A: 1
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Orientation and Mobility, Visually impaired, New technology and applications

Use of Technology and Applications in Orientation and Mobility of Visually Impaired Persons in Bulgaria: A Contemporary Overview

M. V. Tomova

Sofia University "St. Kliment Ohridski", Bulgaria

This paper examines the use and popularity of technological solutions and applications for orientation and mobility among visually impaired persons in Bulgaria. Using questionnaires, it reports on the knowledge and use of technology among visually impaired students taught in special schools and adults with visual impairments who receive orientation and mobility training in rehabilitation centers, and it also involves participants from the Union of the Blind in the country. The paper offers an overview based on a small number of representatives from a variety of organizations for visually impaired people in the country and aims to shape a picture of the current situation and future trends in the use of technological devices and applications for orientation and mobility among visually impaired individuals in Bulgaria.



ID: 119 / STS 3A: 2
LNCS submission
Topics: STS Innovation and Change in the Delivery of Future Assistive Technology Services
Keywords: visual impairment, Assistive Technology (AT), orientation, navigation, virtual simulation

Advancing Mobility for the Visually Impaired: A Virtual Sound-Based Navigation Simulator Interface

D. Erdenesambuu1, M. Matsuo1, T. Miura2, J. Onishi1

1Tsukuba University of Technology, Japan; 2National Institute of Advanced Industrial Science and Technology

This study explores the development and evaluation of a navigation system designed specifically for visually impaired users. The research primarily focuses on enhancing the clarity and accuracy of voice and sound guidance, leveraging technology adapted from autonomous driving navigation systems. The investigation begins by examining how visually impaired individuals comprehend and utilize guided systems and the duration required for them to become proficient in their use.
A key component of the study is the construction of a speech/sound guide model. This model integrates auditory beacon sounds to improve the effectiveness of the guidance provided. A group of seven visually impaired participants was involved in evaluating this model; their feedback and performance were critical in assessing the practicality and efficiency of the developed system.
The results demonstrate that all participants were able to reach their destinations effectively by following the guidance of the voice guide model. This indicates a significant improvement in the navigational aid provided to visually impaired users. The study's findings underscore the potential of incorporating advanced auditory signals in enhancing the mobility and independence of visually impaired individuals.
Overall, this research contributes to the growing field of assistive technologies for the visually impaired, offering insights into the design and implementation of more effective and user-friendly navigation systems.



ID: 229 / STS 3A: 3
LNCS submission
Topics: STS Accessibility and Usability of Mobile Platforms for People with Disabilities and Elderly Persons: Design, Development and Engineering
Keywords: (e)Accessibility, Assistive Technology (AT), People with Disabilities, Visually Impaired, Indoor Wayfinding

FindMyWay: Toward Developing an Accessible Wayfinding Application for Visually Impaired in an Indoor Space

U. Das, B. Hong

The College of New Jersey, United States of America

Due to the limitations of satellite-based navigation systems such as GPS, indoor wayfinding has been very challenging for people with disabilities. Unfamiliarity and infrastructure complexity in an indoor space can put an additional burden on people with various types of disabilities (vision, mobility, hearing, etc.) during wayfinding. An accessible wayfinding application could be of great help to them in navigating an indoor space. To fulfill the wayfinding needs of visually impaired persons in an indoor space, an iOS application named “FindMyWay” has been developed in this work based on Bluetooth Low Energy (BLE) beacons. FindMyWay provides both exploration and navigation features for visually impaired users for indoor wayfinding. This work incorporated a parallel-processing approach into the app while implementing both the exploration and navigation functionalities. The exploration feature provides customized information about a point of interest (PoI) in an indoor space based on user preference, while the navigation feature provides guidance to reach a destination in a multi-floor environment. Preliminary results indicate the effective performance of the app while navigating an indoor space.
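BLE-beacon wayfinding of this kind typically starts from a distance estimate based on received signal strength (RSSI) using the log-distance path-loss model. The sketch below illustrates that common estimate with assumed calibration values; it is a generic illustration, not the FindMyWay implementation.

    def estimate_distance(rssi: float, tx_power: float = -59.0, path_loss_exponent: float = 2.0) -> float:
        """Log-distance path-loss estimate of beacon distance in metres.

        tx_power is the calibrated RSSI at 1 m, and the path-loss exponent depends on
        the indoor environment; both are assumed values here.
        """
        return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

    print(round(estimate_distance(-75), 2))  # about 6.31 m with the assumed calibration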



ID: 180 / STS 3A: 4
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Independent mobility, Blindness, Visual impairment, Electronic travel aids, Artificial intelligence, Intelligent system, Neural network, Cane mountable, Embedded device

An Embedded AI System for Predicting and Correcting the Sensor-Orientation of an Electronic Travel Aid During Use by a Visually Impaired Person

P. Chanana

Indian Institute of Technology Delhi, India

We have developed an AI-based embedded system that fits on an Electronic Travel Aid (ETA), detects the orientation of the ETA's sensors in real time using an inertial measurement unit (IMU), and guides the user to self-correct incorrect sensor orientation through intuitive audio-vibratory feedback. The system aims to minimize the dependence on trainers for learning to use an ETA effectively and to promote self-learning, especially in resource-constrained geographies.
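A common way to obtain a sensor's tilt from an IMU's accelerometer is shown below as a generic illustration; the authors' system may additionally fuse gyroscope data, so this is only a sketch of the underlying idea, with an assumed feedback threshold.

    import math

    def pitch_roll(ax: float, ay: float, az: float) -> tuple[float, float]:
        """Tilt angles in degrees from a static accelerometer reading (in g units)."""
        pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll

    # Example: sensor tipped forward; audio-vibratory feedback could trigger beyond, say, 10 degrees.
    p, r = pitch_roll(0.26, 0.0, 0.97)
    print(round(p, 1), round(r, 1))  # -> -15.0 0.0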

 
8:30am - 10:00amI4: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
10:00am - 10:15amB4: Coffee Break
10:15am - 12:00pmPlenary II: Digital Accessibility Keynote, Panel Discussion and Workshop: Accessibility has failed: Try Generative UI = Individualized UX
Location: Plenary
https://www.icchp.org/content/keynotes-3#w3c
Session Chair: Klaus Hoeckner, Access Austria/HGBS
This session is organised by and in cooperation with the Accessible EU Centre
Logo Accessible EU centre
Information in German at Hilfsgemeinschaft and in English at:
 
ID: 287 / Plenary II: 1
Keynote

KEYNOTE W3C/WAI: The Success Story of Digital Accessibility

K. White

W3C/WAI



ID: 288 / Plenary II: 2
Plenary

Panelist (Amazon)

S. Abou-Zahra

Amazon



ID: 292 / Plenary II: 3
Plenary

Panelist (Apple) (tbc)

S. Herrlinger

Apple



ID: 289 / Plenary II: 4
Plenary

Panelist (ATOS) (tbc)

N. Milliken

ATOS



ID: 291 / Plenary II: 5
Plenary

Panelist (Microsoft) (tbc)

H. Minto

Microsoft



ID: 290 / Plenary II: 6
Plenary

Panelist (Google) (tbc)

C. Patnoe

Google

 
12:00pm - 1:00pmB5: Lunch Break
1:00pm - 3:00pmSTS 7A: STS Accessibility for the Deaf and Hard-Of-Hearing
Location: Track 1
Session Chair: Matjaž Debevc, University of Maribor, FERI
Session Chair: Raja Kushalnagar, Gallaudet University
Session Chair: Sarah Ebling, University of Zurich
 
ID: 127 / STS 7A: 1
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Deaf and Hard-Of-Hearing, Artificial Intelligence (AI) Technology, Voice Recognition, Hair Salon Communication

Advancing Inclusive Beauty Experiences: A System for Communication Support for the Hearing Impaired in Hair Salon Environments

Y. Zhong, M. Kobayashi

Tsukuba University of Technology

With the rapid advancement in both medicine and technology, the lives of individuals with hearing impairments have witnessed significant improvements.

Nevertheless, in the course of our investigation, it has come to light that in certain settings, such as hair salons, individuals with hearing impairments are compelled to remove their hearing aids. Consequently, this poses a considerable challenge in effective communication with hairstylists.

With the rapid advancement of Artificial Intelligence (AI) technology, particularly in the realm of voice recognition, there has been notable progress in supporting individuals with hearing impairments.

This paper introduces the design of a system aimed at providing an improved communication experience for individuals with hearing impairments in hair salon environments.

An experiment was conducted to evaluate the performance of the system. The experiment included two tasks. We recruited 8 participants, including 2 hairstylists and 6 hearing-impaired individuals, and administered four questionnaires and a semi-structured interview.

In the future, the system can be integrated with other existing IoT technologies for use in real hair salons and can be extended to a broader range of users beyond the hearing impaired, including foreigners.



ID: 188 / STS 7A: 2
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Deaf and Hard-of-Hearing, Environmental sounds, Hearing aids, Cochlear implant

Perception of Environmental Sound by Young Deaf and Hard-of-Hearing people: Listening at Different Noise Levels

K. Yasu, R. Hiraga

Tsukuba University of Technology, Japan

It is difficult to be aware of the auditory signals of everyday life when one has a hearing impairment. In this study, we investigated which everyday acoustic signals are difficult for young deaf and hard-of-hearing (D/HoH) people to listen to. The participants were fifteen university students between the ages of 19 and 22 with varied auditory backgrounds (hearing aids, cochlear implants, bilateral hearing loss). Listening tests were conducted at different signal-to-noise ratios (SNR). The materials were clean sounds without noise and recorded sounds with noise (recorded in Taiwan and Japan), presented from loudspeakers in the classroom. The loudness of the sounds was approximately 75 dBA at 1 m for the nearest person. Participants were asked to write down the name of each sound, and the degrees of familiarity, confidence, and awareness were also assessed using a 5-point Mean Opinion Score (MOS). The results showed that the correct response rate tended to be high for sounds with a good SNR, and significant correlations with the correct response rate were found for all three scores. We also found that even when participants thought a sound was easy to hear, or felt confident in their answer, they still sometimes identified it as the wrong sound, and for unfamiliar sounds the percentage of correct responses was low. Thus, it is difficult for D/HoH people to passively recognize environmental sounds, and it is important for them to learn the environmental sounds around them.



ID: 205 / STS 7A: 3
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Closed Captioning, Deaf, Hard of Hearing, Comprehension, Methodology

Using a Novel Conversational Method for Measuring Comprehension of D/HoH Viewers Consuming Captioning Content for Entertainment Purposes

T. Kumarasamy1,2, S. Nam2, D. Fels2

1University of Toronto, Canada; 2Toronto Metropolitan University, Canada

Assessing the comprehension of Deaf and Hard of Hearing (D/HoH) viewers in the context of television content has traditionally relied on methods such as focus groups, open-ended questionnaires, and multiple choice questions. This paper proposes a novel methodology rooted in the social pragmatic model of conversation to measure the comprehension of D/HoH viewers who watch fast-paced live sports. Sixteen Deaf and 11 Hard of Hearing individuals, all sports fans, engaged in one-on-one informal conversations with researchers after viewing a sports game with two different styles of captioning. Comprehension scores were determined based on correct responses to predetermined topics related to gameplay and commentary. Results indicate comprehension scores were above 60% with the majority of participants exceeding 80%. This conversation methodology was effective in capturing participants' understanding beyond traditional comprehension measures, encouraging spontaneous engagement and insights. Researchers recognized the natural and free-flowing nature of the conversations. This study introduces a promising method for evaluating content comprehension in D/HoH viewers, emphasizing the need for researchers to immerse themselves in content and conversations. Future research should explore variations in this methodology and topic selection while addressing training needs for researchers.



ID: 207 / STS 7A: 4
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Caption quality, HCI and Non-Classical Interfaces, Quality evaluation, Deaf, Hard of Hearing

NERWEB: User-Friendly Software for Caption Quality Assessment Using the NER Model

S. Nam

Toronto Metropolitan University, Canada

Canadian regulations require broadcasters to report the quality of Closed Captioning (CC) by using the Number-Edition-Recognition (NER) model, which requires certified human evaluators to assess CC in English programming. However, current methods to perform these evaluations are inefficient and lack a dedicated tool. This paper explores the development of NERWEB, a dedicated software tool designed to improve the efficiency and user experience of CC quality evaluation under the Canadian NER model. Employing a User-Centered Design approach, we conducted interviews and user studies with five certified evaluators. The resulting prototype offers a more integrated, efficient, and user-friendly interface compared to traditional methods. We present a set of design recommendations for developing a software interface dedicated to performing the NER quality evaluation.
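The NER model referenced above scores caption accuracy as NER = (N − E − R) / N × 100, where N is the number of words, E the (severity-weighted) edition errors and R the recognition errors. A minimal sketch of that calculation with illustrative numbers:

    def ner_score(n_words: int, edition_errors: float, recognition_errors: float) -> float:
        """NER caption-quality score in percent: (N - E - R) / N * 100."""
        return (n_words - edition_errors - recognition_errors) / n_words * 100

    # Illustrative figures: 1000 words, 6 points of edition errors, 4 of recognition errors.
    print(ner_score(1000, 6, 4))  # -> 99.0 (98 is often cited as the acceptability threshold)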



ID: 215 / STS 7A: 5
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: sign language, interpreting, media accessibility, Usability and Ergonomics, deafness

Closed Sign Language Interpreting Accessibility: A Usability Study

C. Vogler, R. Kushalnagar

Gallaudet University, United States of America

Closed sign language interpreting (SLI) makes media accessible to deaf and hard of hearing viewers who use sign languages as their primary mode of communication. Analogous to subtitles for the deaf and hard of hearing, this feature allows viewers to toggle sign language interpretation on and off and to customize its appearance in conjunction with videos. This paper provides information on how to design closed interpreting in a media player through a pair of mixed-method studies. The first study assesses the usability of technical SLI features, while the second assesses how users interact with the content and focus their attention via eye gaze. Results indicate above-average usability for the technical features. Additionally, preliminary results on eye gaze suggest that the optimal configuration of the SLI depends on the type of content viewed and that user preferences vary. Overall, customizability of features and placement will be important in closed interpreting implementations.

 
1:00pm - 3:00pmSTS 5B: STS Art Karshmer Lectures in Access to Mathematics, Science and Engineering
Location: Track 2
Session Chair: Katsuhito Yamaguchi, NPO: Science Accessibility Net
Session Chair: Dominique Archambault, Université Paris 8-Vincennes-Saint-Denis
 
ID: 233 / STS 5B: 1
LNCS submission
Topics: STS Art Karshmer Lectures in Access to Mathematics, Science and Engineering
Keywords: Higher Education, Braille, Tactile Graphics, Web accessibility

PreTeXt as Authoring Format for Accessible Alternative Media

V. Sorge

University of Birmingham, UK

We describe an approach for producing accessible mathematical content for visually impaired learners. Our main focus is on postsecondary textbooks required for a course of study towards an undergraduate degree in mathematics. As the basis of our approach we use the PreTeXt language, which combines a rigorous XML document structure with the strength of LaTeX typesetting for formulas and graphics. This not only allows us to produce formats such as PDF and ePub but, more importantly, fully accessible HTML and tactile Braille output from a single source file. The Braille version of the text includes mathematical formulas in Nemeth braille as well as embossable tactile diagrams automatically generated from the source. Similarly, formulas in the HTML version are screen reader accessible, and the graphics can be explored interactively using screen reading and sonification. We developed our software using two large open-source textbooks on abstract algebra and calculus. The quality of the braille transcription was checked by a certified transcriber, and overall readability was checked by a blind mathematician. We have since experimented with automatic translation of other mathematical textbooks to provide them to blind student learners and volunteers.



ID: 173 / STS 5B: 2
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: Mathematics, Barriers, Primary, Accessibility, Blind, Visually-Impaired

Mathematics Accessibility in Primary Education: Enhancing Mathematics Learning Skills and Overcoming Barriers for Visually Impaired Primary Students

M. Shoaib

University College Cork, Ireland

This paper describes a qualitative study concerning the experiences of blind and visually impaired students learning mathematics in primary schools in Ireland, along with a thematic analysis of the findings. It investigates the instructional challenges and teaching techniques that facilitate an inclusive learning environment through flexible assessment plans. Key focus areas include adapting class collaboration activities, using digital and non-digital materials, and deploying specialized strategies to boost learning skills. The study identified significant barriers, including technology accessibility, classroom mobility, and the need for supportive educational aids such as tactile materials and Braille resources. The paper also discusses the importance of support and assistance, including teacher training, parental involvement, and peer support processes. Based on expert opinions, the study identifies gaps in current solutions and suggests future research and development directions, including the advancement of User-Centred Design (UCD), the integration of multimodal techniques, and the improvement of collaborative and interactive learning solutions. Furthermore, the study recommends reforming educational policy and establishing comprehensive training programs for educators to ensure that visually impaired students receive an equitable mathematics education. This study lays the groundwork for future advancements in teaching methodologies aimed at removing educational barriers.



ID: 134 / STS 5B: 3
LNCS submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: Chatbot, Mathematical word problem-solving abilities, Primary school students with intellectual disabilities, Video prompting

Using a Chatbot Integrated with Video Prompting to Enhance Mathematical Word Problem-Solving Skills in Primary School Students with Intellectual Disabilities

C.-L. Wu

National Taichung University of Education, Taiwan

The purpose of this study was to investigate the effectiveness of using a chatbot combined with video prompting instructions to improve the mathematical word problem-solving abilities of primary school students with intellectual disabilities. The study adopted a single-subject design with multiple probes across participants. The independent variable was the intervention mode of using a chatbot combined with video prompting instructions, while the dependent variable was the participants' performance in solving mathematical word problems. The participants were three primary school students with mild intellectual disabilities in grades 4-5. The intervention period consisted of 12 sessions, with each teaching session lasting 30 minutes. A combination of visual analysis and the Tau-U test was used to analyze the results, and qualitative data from interviews with the participants and teachers were also collected and utilized. The findings revealed a significant improvement in the participants' problem-solving accuracy after the instructional intervention. The Tau-U values exhibited a large effect from the baseline period to the treatment period, indicating immediate learning effects. Furthermore, all three participants and their resource class teacher expressed a positive attitude towards the effectiveness of the instructional approach.



ID: 234 / STS 5B: 4
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: sonification, chart, education

Audible Graphs of Mathematical Functions

D. Hanak

Silesian University of Technology, Poland

This article describes the implementation of a prototype educational mobile application for visually disabled students, prepared in the form of a research tool. The tool allows students to explore the features of polynomial functions by audibly analyzing their graphs. Control is performed through a speech-to-text (STT) mechanism with a defined set of commands, and the application communicates with the user via a synthesized voice generated by a text-to-speech (TTS) engine. The core of the tool is the exploration of the graph of the plotted function. It is implemented using audio feedback that reflects the position of the indicator relative to the function graph line and the value of the nearest point on the graph. This sonification is realized with sound waves synthesized in real time, with an amplitude that is a function of the distance of the indicator from the graph line. The research goal is to verify which parameters of the audio graph are most suitable for graph exploration. The following features of the graph-sonifying oscillator can be analyzed: waveform type (sinusoidal, square, triangular, sawtooth), base tone, sound sampling frequency, buffer size, and amplitude decay threshold. These parameters can be configured to adapt the sound of the charts to the characteristics of the mobile device's touch screen and the student's hearing capabilities.
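
A minimal sketch (not the paper's implementation) of the distance-driven sonification described above, assuming a sine oscillator whose amplitude decays exponentially with the distance between the touch point and the function graph; all parameter values and names are hypothetical.

```python
import numpy as np


def sonify_distance(touch_y, graph_y, base_freq=440.0, sample_rate=44100,
                    duration=0.05, decay=0.02):
    """Return one short audio buffer whose amplitude falls off with the distance
    between the user's finger (touch_y) and the function graph (graph_y).

    The exponential decay law and parameter values are illustrative guesses,
    not the settings used in the prototype described in the abstract.
    """
    distance = abs(touch_y - graph_y)
    amplitude = np.exp(-distance / decay)          # loudest when the finger is on the curve
    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    return amplitude * np.sin(2.0 * np.pi * base_freq * t)


# Example: exploring f(x) = x^2 - 1 at x = 0.5 with the finger slightly off the curve.
f = lambda x: x**2 - 1
buffer = sonify_distance(touch_y=-0.70, graph_y=f(0.5))
print(buffer.shape, float(buffer.max()))
```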



ID: 249 / STS 5B: 5
LNCS submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: tangible user interface, programming learning, programming process, co-occurrence matrix, visually impaired

Analyzing the Programming Process with Tangible Tools using Co-occurrence Matrix

I. Takida, T. Motoyoshi

Toyama Prefectural University, Japan

We developed a programming education tool, “P-CUBE3”, which uses blocks with tangible information as its programming language. P-CUBE3 can control audio output by placing HIRAGANA (Japanese character) blocks on a mat, and it also implements an operation-history acquisition system that records the type of block, its position, and the operation time. We conducted hands-on programming classes for visually impaired and sighted children and analyzed the data obtained from the classes. We classified the programming process into several patterns and analyzed the relationships between the patterns using a co-occurrence matrix. We report on the characteristics of the programming process of visually impaired and sighted children.
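
An illustrative sketch of one way to build such a co-occurrence matrix from a logged sequence of programming-process patterns; the pattern labels, and the assumption that co-occurrence means direct succession within one session, are hypothetical and not taken from the paper.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+


def cooccurrence_matrix(pattern_sequence, labels):
    """Count how often pattern j directly follows pattern i in one session log."""
    counts = Counter(pairwise(pattern_sequence))
    return [[counts[(a, b)] for b in labels] for a in labels]


labels = ["place", "move", "remove", "run"]          # hypothetical pattern labels
session = ["place", "place", "run", "move", "run", "remove", "place", "run"]
for row_label, row in zip(labels, cooccurrence_matrix(session, labels)):
    print(row_label, row)
```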

 
1:00pm - 3:00pmSTS 12A: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Location: Track 3
Session Chair: Yehya Mohamad
 
ID: 151 / STS 12A: 1
LNCS submission
Topics: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Keywords: Cervical rehabilitation, head-tracker interface, System Usability Scale, elder

An Attempt to Approach Mobile Cervical Rehabilitation to Elder Patients

M. F. Roig-Maimó1, I. Salinas-Bueno2

1University of the Balearic Islands, Spain; 2University of the Balearic Islands and Health Research Institute of the Balearic Islands (IdISBa), Spain

RehbeCa is a platform for mobile cervical rehabilitation that emerged with the goal of providing treatment for nonspecific neck pain at home (or elsewhere). The platform includes a mobile application for patients with an exergame that integrates a head-tracker interface, which allows the monitoring and analysis of the fulfilment of the neck therapeutic exercises prescribed by physiotherapists. Elderly patients are among the potential users of the mobile application. As chronological age has been considered the main barrier to accessing digital technology, the evaluation of the application should include this group. We present a user study with 36 participants, including 11 elderly participants, to evaluate the usability of the mobile application. The results are reported in terms of the System Usability Scale (SUS).
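
The SUS results reported above are presumably computed with the standard scoring rule (odd items contribute response − 1, even items 5 − response, the sum scaled by 2.5). A small sketch of that calculation with hypothetical responses; nothing here is specific to RehbeCa.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled by 2.5
    to yield a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # index 0 is item 1 (odd-numbered)
                for i, r in enumerate(responses))
    return total * 2.5


# Hypothetical answers of one participant to the ten SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```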



ID: 179 / STS 12A: 2
LNCS submission
Topics: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Keywords: Self-efficacy, Anticipatory Gaze, Dexterity Task, Regression Model

Self-Efficacy Measurement Method Using Regression Models with Anticipatory Gaze for Supporting Rehabilitation

Y. Hayakawa, A. Tsuji

Tokyo University of Agriculture and Technology, Tokyo, Japan

Self-efficacy (SE) is important for task motivation in rehabilitation patients. We propose a method for measuring SE using anticipatory gaze. Experiments were conducted with on-screen tasks with multiple difficulty levels, and gaze and manipulation data were collected. Four anticipatory gaze-related features were compared with subjective SE; the highest correlation coefficient between a gaze feature and SE was 0.573. We also constructed a regression model using machine-learning methods and verified the results. The mean absolute error of the model was 11.1, suggesting that measuring SE from gaze information is feasible. We plan to apply this method to real-world rehabilitation.
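
A hedged sketch of the kind of regression pipeline the abstract describes, mapping anticipatory-gaze features to subjective SE and reporting the mean absolute error; the feature layout, synthetic data, and choice of regressor are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: four anticipatory-gaze features per trial, SE on a 0-100 scale.
X = rng.random((200, 4))
y = 100 * (0.6 * X[:, 0] + 0.4 * rng.random(200))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```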



ID: 185 / STS 12A: 3
LNCS submission
Topics: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Keywords: CNN, LSTM, GRU, RNN, human activity recognition, deep learning, older adults

Exploring Advanced Deep Learning Architectures for Older Adults Activity Recognition

R. O. Zafar

Dalarna University, Sweden

This study provides a comprehensive exploration of deep learning architectures for human activity recognition (HAR), focusing on hybrid models that combine Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) networks, and a range of alternative deep learning frameworks. The main goal is to evaluate the performance and effectiveness of the hybrid CNN-LSTM model compared to independent models such as Gated Recurrent Units (GRU), Recurrent Neural Networks (RNN), and traditional CNN architectures. By examining multiple models, this study aims to elucidate the advantages and disadvantages of each approach for accurately identifying and classifying human activities. The study examines the nuanced capabilities of each model, exploring their respective abilities to capture the spatial and temporal dependencies inherent in activity data. Through rigorous comparative analysis, this study provides insights into the effectiveness of hybrid CNN-LSTM models compared to other popular deep learning architectures, paving the way for advancements in HAR systems.
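
A minimal hybrid CNN-LSTM sketch for windowed inertial data, included only to illustrate the architecture family compared in the study; layer sizes, input shape, and class count are assumptions rather than the configuration evaluated in the paper.

```python
import torch
import torch.nn as nn


class CNNLSTM(nn.Module):
    """Minimal hybrid model: 1-D convolutions extract local motion features,
    an LSTM models their temporal order, and a linear layer classifies the window."""

    def __init__(self, n_channels=6, n_classes=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, channels, time)
        feats = self.cnn(x)              # (batch, 64, time/2)
        feats = feats.transpose(1, 2)    # (batch, time/2, 64) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])        # class logits


# A batch of 8 windows, 6 inertial channels, 128 samples each.
logits = CNNLSTM()(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 6])
```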



ID: 257 / STS 12A: 4
LNCS submission
Topics: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Keywords: Independent life, quality of life, Assistive Technology (AT), Social Innovation, User Centered Design and User Participation

The Independent Life of Persons with Disabilities in Puglia: An Analysis of the Pro.V.I. Projects Grant

S. Pagliara

University of Cagliari, Italy

The UN Convention advocates for disabled individuals' rights to live independently and be included in the community. Despite global commitments, barriers like inaccessible housing and insufficient support prevail. Italy's Puglia region addresses these through the Pro.V.I. initiative (Progetto Vita Indipendente), marking progress towards transforming rights into tangible life improvements.

This study evaluates Pro.V.I.'s impact on persons with disabilities in Puglia using the Individually Prioritised Problem Assessment (IPPA) to analyze data from 40 beneficiaries before and after the initiative. Findings indicate significant enhancements in autonomy, social participation, and overall quality of life. Assistive technologies and personalized support boosted self-determination and societal integration, while advances in home autonomy and technology access were crucial in realizing personal goals.

Results underscore a strong correlation between regional efforts for independent living and improvements in quality of life for people with disabilities. This case study contributes to discussions on social policies supporting autonomy, highlighting the wider implications for enhancing independent living conditions for people with disabilities globally. Pro.V.I.'s success demonstrates how collective commitment can overcome barriers, setting a precedent for future advancements in disability rights and social inclusion.

 
1:00pm - 3:00pmSTS 16: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Location: Track 4
Session Chair: Susanne Dirks, TU Dortmund
 
ID: 148 / STS 16: 1
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Extended Reality, Persons with Intellectual Disabilities, Vocational Skills, Technology Acceptance Model

The Intention of Professionals to Use Extended Reality to Train Vocational Skills for Persons with Intellectual Disabilities

T.-F. Wu, H.-S. Lo

National Taiwan Normal University, Taiwan

Introduction: The incorporation of information technology into teaching has gradually increased the diversity and feasibility of using extended reality (XR) in learning. This study explored professionals’ intention of integrating XR to train vocational skills of individuals with ID by using the technology acceptance model framework.

Methods: This study recruited 44 professionals working in the field of vocational training for individuals with ID. The study utilized the “Extended Reality Vocational Skills Training System”, which was designed by the research team. Participants were first introduced to the system, completed the task of product arrangement, and filled out the questionnaire.

Results: The mean scores of the constructs ranged from 3.34 to 4.01. Ranked in descending order, they were perceived usefulness (PU), attitude toward use (AT), behavioral intention (BI), and perceived ease of use (PEOU). Based on the PLS analysis, PU and PEOU explained 68.8% of the variance in AT, and the overall model explained 66.3% of the variance in BI.

Conclusion: According to the results, the professionals expressed a positive attitude toward using XR and agreed that using XR to train the vocational skills of individuals with ID is a great idea. AT is an important factor which not only affects BI directly, but also serves as a mediating factor of PU and PEOU to BI. In addition, PU and PEOU had positive effects on AT. When designing new technology, it is essential to consider the needs of users.



ID: 236 / STS 16: 2
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Anxiety disorder, Panic disorder, Daily life-management, Smart device, Android

Designing Smart Diary and Distraction Tasks to Manage Panic Attacks

C. Sik Lanyi

University of Pannonia

Our project aims to create a panic diary that not only records panic attacks but also serves as a journal for applying breathing exercises, mindfulness and distraction techniques, and reviewing existing diaries. Anxiety problems are becoming increasingly common in our fast-paced world, which can significantly impact quality of life. The exercises included in the application are designed to help manage stress. The program is accessible to all Android phone users as panic attacks can affect anyone, regardless of gender or age. It was developed using the Java programming language in Android Studio. During the development process, we consulted with individuals who have anxiety disorders, as well as psychiatrists.



ID: 245 / STS 16: 3
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Augmented Reality, Training, Cleaning Activities, Instructional Guides, Immersive Experience

Augmented Reality Training Application for Cleaning Activities: Empowering People with Special Needs for the Il Seme Social Cooperative

M. Covarrubias

Department of Mechanical Engineering, Politecnico di Milano, Italy

Training individuals with special needs for cleaning activities presents unique challenges due to diverse learning styles and accessibility requirements. Augmented Reality (AR) technology offers a promising solution by providing interactive, immersive, and customizable training experiences. This paper presents the development and evaluation of an AR application tailored for training cleaning activities specifically designed to accommodate the needs of individuals with special needs. The application utilizes AR to provide personalized guidance, adapt training content to individual capabilities, and enhance engagement through interactive simulations. We discuss the design considerations, implementation details, and user feedback gathered through a pilot study involving individuals with various special needs. Results indicate that the AR application significantly improves the accessibility and effectiveness of training for cleaning activities, thereby empowering individuals with special needs to develop essential life skills and achieve greater independence.



ID: 161 / STS 16: 4
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Emotion classification, Affective Computing, Emergency system, Wearable device, Fear

BRAVE: Bio Responsive Alert VEst

S. Comai

Politecnico di Milano (POLIMI), Italy

The paper describes a low-cost system incorporating galvanic skin response, heart rate, and respiratory rate sensors, allowing for emotion classification based on the valence-arousal model. The system lays the groundwork for an innovative device capable of detecting fear during aggressive situations and automatically triggering an SOS. Tests in a controlled environment by visually stimulating emotions demonstrate the reliability of the low-cost sensors.
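
As a purely illustrative sketch of the valence-arousal idea (not the paper's trained classifier), a simple rule mapping normalized sensor readings to a quadrant; all thresholds and the mapping itself are invented.

```python
def classify_emotion(gsr, heart_rate, resp_rate):
    """Very rough quadrant rule on the valence-arousal plane.

    High arousal is approximated here by elevated GSR and heart rate, negative
    valence by elevated respiration rate. A real system, such as the one the
    paper describes, would use a classifier trained on sensor data; these
    thresholds are invented for illustration.
    """
    arousal = "high" if (gsr > 0.6 and heart_rate > 90) else "low"
    valence = "negative" if resp_rate > 20 else "positive"
    if arousal == "high" and valence == "negative":
        return "fear/anger quadrant -> candidate trigger for an SOS"
    return f"{valence} valence, {arousal} arousal"


print(classify_emotion(gsr=0.8, heart_rate=110, resp_rate=24))
```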



ID: 133 / STS 16: 5
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Assistive Technology (AT), Assistive Driving, Motor Disability, Power Wheelchair, Reinforcement Learning

Design and Preliminary Validation of an Assisted Driving System for Obstacle Avoidance Based on Reinforcement Learning Applied to Electrified Wheelchairs

F. Pacini

University of Pisa, Italy

Operating a motorized wheelchair poses inherent risks and demands substantial cognitive effort to achieve effective environmental awareness. Consequently, individuals with severe disabilities face heightened risk, leading to diminished social engagement which impacts their overall well-being. Therefore, we have developed a collaborative driving system for obstacle avoidance based on a trained reinforcement learning (RL) algorithm. The RL is based on a TD3 agent which has been trained in a simulated environment. The system interfaces with the user through a joystick, capturing the desired direction and speed, while a lidar positioned in front of the wheelchair provides information about obstacle distribution. Taking both inputs into account, the system generates a pair of forward and rotational speeds that prioritize obstacle avoidance while closely aligning with the user's commands. Preliminary validation through simulations involved comparing the RL algorithm with the absence of an assistive system. The results are promising, showcasing that the RL algorithm reduces collisions without imposing constraints on the desired speed. Ongoing research is dedicated to expanding tests and conducting comparisons with traditional obstacle avoidance algorithms.
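
A sketch of the interface such an RL agent might expose: a TD3-style deterministic actor mapping lidar ranges plus the joystick command to assisted forward and rotational speeds. The network, observation layout, and speed limits are assumptions; the paper's agent was trained in simulation and is not reproduced here.

```python
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy head of a TD3-style agent: observation -> (v, w).

    Observation layout assumed here: N lidar ranges followed by the user's
    joystick command (desired forward and rotational speed). This untrained
    network only illustrates the interface, not the trained agent.
    """

    def __init__(self, n_lidar=36, max_v=1.0, max_w=1.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_lidar + 2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh(),        # outputs in [-1, 1]
        )
        self.scale = torch.tensor([max_v, max_w])

    def forward(self, lidar, joystick):
        obs = torch.cat([lidar, joystick], dim=-1)
        return self.net(obs) * self.scale        # assisted (v, w) command


lidar = torch.rand(1, 36) * 4.0                  # fake scan, ranges in metres
joystick = torch.tensor([[0.8, 0.0]])            # user wants to go straight ahead
print(Actor()(lidar, joystick))
```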



ID: 235 / STS 16: 6
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Communication, ASD, Office Context, Assistive Technology, Headphones

Open Sesame! Use of Headphones at Work Considering Social Acceptance

G. Weber

TU Dresden, Germany

Focused work on computers plays a central role in today’s workplace. Concentration is often disturbed, especially in offices with a high noise level. For this reason, ambient noise is often suppressed by using headphones with active noise cancellation (ANC) or masking, thereby masking disturbing conversations from colleagues, for example. This leads to isolation in the workplace, and, at the same time, people wearing headphones are often perceived as isolating themselves. In contrast, EN ISO 10075-2:2000 states that social interaction plays an important role in the workplace, for example, to avoid monotony. Effective solutions need to be found to build bridges between these opposites, allowing both focused work and the opportunity for social interaction. To achieve this, and to mediate effectively between the person initiating contact and the person being addressed, it is essential to analyze the context. In this paper we approach this analysis on the basis of observations and interviews in the training context of autistic people. The resulting contextual information forms a first step towards the development of systems that support the reception of communication.

 
1:00pm - 3:00pmI5: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
3:00pm - 3:30pmB6: Coffee Break
3:30pm - 5:00pmSTS 7B: STS Accessibility for the Deaf and Hard-Of-Hearing
Location: Track 1
Session Chair: Matjaž Debevc, University of Maribor, FERI
Session Chair: Raja Kushalnagar, Gallaudet University
Session Chair: Sarah Ebling, University of Zurich
 
ID: 228 / STS 7B: 1
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: subtitles, closed captions, video, large language models, media

Customization of Closed Captions via Large Language Models

A. Glasser, R. Kushalnagar, C. Vogler

Gallaudet University, United States of America

This study investigates the feasibility of employing artificial intelligence and large language models (LLMs) to customize closed captions/subtitles to match the personal needs of deaf and hard of hearing viewers. Drawing on recorded live TV samples, it compares user ratings of caption quality, speed, and understandability across five experimental conditions: unaltered verbatim captions, slowed-down verbatim captions, moderately and heavily edited captions via ChatGPT, and lightly edited captions by an LLM optimized for TV content by AppTek, LLC. Results across 16 deaf and hard of hearing participants show a significant preference for verbatim captions, both at original speeds and in the slowed-down version, over those edited by ChatGPT. However, a small number of participants also rated AI-edited captions as best. Despite the overall poor showing of AI, the results suggest that LLM-driven customization of captions on a per-user and per-video basis remains an important avenue for future research.



ID: 258 / STS 7B: 2
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Language Learning, Auditory Training, Children with Hearing Impairment, User Centered Design and User Participation

Designing Pachi: A Verbal Language Learning Application for Children with Hearing Impairment in India

R. Sharma, A. Johry

Indian Institute of Technology, Delhi, India

Language is crucial for a child's development, especially for those with Hearing Impairment (HI), making speech and language acquisition challenging. Early intervention positively impacts language development in HI children, but challenges persist due to limited resources and unmet demand in India. While auditory and speech training is given to young children for mainstream integration, formal therapy accessibility is limited by geographical and socio-economic constraints. This paper introduces a tablet-based application for HI children (0-48 months) and facilitators, enhancing verbal language development in Hindi.

The work began with field visits that identified challenges faced by HI preschoolers, leading to the design of a physical toolkit with three speech therapy games. These games focused on auditory participation, lip-speech cue recognition, and articulation, and were tested with 10 participants.

Building on it, expert consultations shaped a comprehensive four-stage therapy model for Hindi language learning, covering auditory skills, verbal comprehension, speech, and language development.

Utilizing the four-stage model, the digital application Pachi was developed. Aimed at a broader audience without speech therapist access, it offers 10–12-minute daily modules tailored to the ISD scale. A sample module for 13-15 months 'Hearing Age' involves four games aligned with the four stages - focusing on auditory training, vocabulary building, pronunciation, and articulation.



ID: 167 / STS 7B: 3
OAC Submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Online language instruction, sign language, Deaf, hearing learners

Post-COVID Sign Language Instruction By The Deaf: Perspectives From Hearing Sign Language Learners

M. Kakuta, R. Ogata

Kanto Gakuin University, International Christian University IERS, Japan

COVID-19 brought many changes to the format of language classes. Tools such as Zoom and Microsoft Teams became known to the public and are now used in educational settings. For the Deaf, the use of technology and online tools had positive effects: they could use auto-captioning and the spotlight function, which were very convenient in online meetings. As for language classes, the pandemic was an opportunity for the Deaf to start online classes for hearing sign language learners. Many Deaf instructors started online classes, and hearing sign language learners took them. Now that classes have returned to normal, hearing sign language learners can again learn sign language in a face-to-face setting. This study looks at post-COVID online sign language instruction and how the hearing students who took the classes feel about online classes conducted by the Deaf. A survey was conducted to analyze the hearing learners’ views of online sign language classes. It was found that although many preferred to receive sign language instruction face to face, there is still a need for online sign language instruction; the time saved on travel was a key factor that hearing respondents cited as a positive aspect of online classes. Further research is still needed on how three-dimensional signs can be shown in a two-dimensional setting. The post-COVID era has brought opportunities for a new style of sign language instruction.

 
3:30pm - 5:00pmSTS 14A: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Location: Track 2
 
ID: 204 / STS 14A: 1
LNCS submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: Online inclusive education, Teaching difficulties, eInclusion, Structural equation modeling, Quality education

General and Special Education Teachers’ Perspectives on Distance Teaching in Post-Pandemic Taiwan

C.-Y. Hsieh

National Ping Tung University, Taiwan

In the post-pandemic Taiwanese educational landscape, general and special education teachers hold diverse views on online inclusive education. This study uses quantitative and qualitative analyses to explore teacher acceptance during and after the COVID-19 pandemic and to scrutinize instructional challenges. Stratified random sampling divided Taiwan into seven regions based on the 2022 teacher distribution. Initially, 358 participants were sampled during the pandemic peak; a ten-month post-pandemic follow-up included 307 teachers (40 in special education), maintaining the national distribution ratio. Survey results reveal that during the pandemic, general education teachers embraced online inclusive education more than their special education counterparts. Post-pandemic, many general education teachers express reluctance, citing challenges in engaging special needs students, fair assessment, and limited direct interaction. Special education teachers consistently express reservations and face challenges in online inclusive education, irrespective of the pandemic. Issues include the lack of personalized support for special needs students, difficulties collaborating with general education teachers, and shortcomings of online platforms for diverse learning needs. Despite differing perspectives, common concerns emerge, emphasizing the need for effective online inclusive education in terms of student participation, fair assessment, and collaboration.



ID: 252 / STS 14A: 2
LNCS submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: Virtual Reality, Training, Special Education, Neurodevelopmental Disorder, Down Syndrome, Autism, Inclusive Education, Special Educational Needs

Virtual Reality (VR) in Special Education: Cooking Food App to Improve Manual Skills and Cognitive Training for SEN Students Using UDL and ICF Approaches

M. C. Carruba1, M. Covarrubias Rodriguez2

1Università Telematica Pegaso, Italy; 2Politecnico Milano, Italy

Virtual reality (VR) enters educational processes today as a tool capable of promoting immersive learning experiences and facilitating the engagement and participation of all students, including those with Special Educational Needs (SEN). This paper aims to present a case study concerning the use of VR to improve manual skills and promote immersive and enjoyable cognitive training for students with disabilities. A VR cooking food preparation app is introduced to illustrate how VR serves as a tool for skill training on one hand and as a genuine digital learning environment on the other. In addition to presenting the technical features of the app, the International Classification of Functioning (ICF) approach by the World Health Organization (WHO) will be discussed. This approach is useful for identifying the best strategies to promote learning for students with cognitive disabilities who participated in this case study. Furthermore, the perspective of universal learning design (UDL), also known as Universal Design for Learning (UDL) by CAST, will be explored to guide teachers and trainers in designing digital and innovative learning activities that can accommodate all students.



ID: 176 / STS 14A: 3
OAC Submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: eLearning and Education, Evaluation, Higher Education, Personas

Evaluating Interactive Accessibility Personas On The BlindDate Website

K.-A. Heitmeier, P. Piskorek

Hochschule der Medien, Germany

This paper examines approaches to evaluating accessibility personas that have been developed for the BlindDate resource website. This website provides a space for university teaching staff to have a virtual encounter with interactive personas representing students with various disabilities. This encounter is curated to help improve accessibility awareness, and ultimately the design of curricular products for these students. A large part of the effectiveness of personas is in their empathy-building capacity, and their ability to authentically represent the target populations. In this study we describe the results of a survey which included an adapted persona perception scale, and open-ended questions that targeted how the personas are viewed by subject matter experts including persons with disabilities, and pedagogical specialists in inclusive education.



ID: 261 / STS 14A: 4
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Higher Education, Curricula, Accessibility, Universal Design, Inclusion

Bridging the Higher Education Gap: Exploring the Integration of Accessibility and Universal Design in Higher Education Curricula

K. Nuppenau, R. Koutny

Johannes Kepler University, Austria

Curricula shape students' awareness, priorities, and values and act as political documents that reflect societal expectations, including the 'hidden curriculum' (Snyder 1979), which refers to unwritten attitudes and expected behaviors.

Accessibility and universal design are crucial for independent living and participation in various aspects of society. Although accessibility and universal design have a proven positive impact on business and society, these principles are often not integrated into core higher education curricula. Currently, only a few elective courses cover these topics. This higher education gap hinders inclusion, innovation, and the realization of the potential of digital inclusion.

The Erasmus+ ATHENA project analyzed the curricula of study programmes across different fields of knowledge in four European countries to determine whether and how accessibility and the universal design approach were incorporated.

The findings will be used to advocate for increased inclusion and diversity within higher education. Furthermore, these characteristics will serve as a basis for formulating recommendations to integrate accessibility and universal design principles into higher education curricula.

 
3:30pm - 5:00pmSTS 12B: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Location: Track 3
Session Chair: Yehya Mohamad
 
ID: 226 / STS 12B: 1
LNCS submission
Topics: STS Accessible, Smart, and Integrated Healthcare Systems for Elderly and Disabled People
Keywords: Digital therapeutics, Intervention, Autism spectrum disorder, Automatic speech recognition, Automatic assessment

Automatic Speech Recognition and Assessment Systems Incorporated into Digital Therapeutics for Children with Autism Spectrum Disorder

S. Lee, S. Kim, M. Chung

Seoul National University

Children with autism spectrum disorder (ASD) frequently encounter challenges in social communication and interaction, which necessitates continuous, comprehensive interventions to enhance their communication skills. Despite the growing interest in digital therapeutics (DTx), research on speech-utilizing interventions for children with ASD remains limited. This study introduces speech-based technologies integrated into a DTx designed to facilitate the development of communicative skills in children with ASD. Initially, we compiled a large speech corpus from children with ASD and typically developing children, which includes clinical scores on social communication severity and speech sound, rated by certified speech and language pathologists. Then three speech-based technologies were developed: automatic speech recognition for verbal interaction with the DTx, automatic assessment of social communication severity to monitor progress, and automatic speech sound assessment to foster speech production skills. The results were promising, demonstrating a syllable error rate of 12.36% in automatic speech recognition for keywords, a 0.71 correlation coefficient for assessing social communication, and a 0.75 correlation coefficient for speech sound assessment. These technologies are expected to improve the accessibility of interventions for children with ASD, overcoming barriers related to space, time, and human resources.
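
For context, a syllable error rate can be computed like a word error rate but over syllable units, using Levenshtein edit distance; the sketch below mirrors common ASR evaluation practice and is not the project's actual tooling, and the syllabified example is hypothetical.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (dynamic programming)."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1,                               # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
    return d[-1][-1]


def syllable_error_rate(ref_syllables, hyp_syllables):
    """SER = edit distance over syllables / number of reference syllables."""
    return edit_distance(ref_syllables, hyp_syllables) / len(ref_syllables)


ref = ["an", "nyeong", "ha", "se", "yo"]          # hypothetical syllabified keyword
hyp = ["an", "nyong", "ha", "yo"]
print(f"SER: {syllable_error_rate(ref, hyp):.2%}")  # 40.00%
```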



ID: 164 / STS 12B: 2
LNCS submission
Topics: STS Internet of Things: Services and Applications for People with Disabilities and Elderly Persons
Keywords: Assistive Solution, Internet of Things (IoT), Mobility, Digital Health, Assistive Technology (AT)

Engaging with my Health: Information Required to Support the Mixed-Use of Passive and Active Assistive Technology Devices for Mobility

T. P. d. S. Maximo

The Hong Kong Polytechnic University, Hong Kong S.A.R. (China)

The use of physically passive assistive technologies like electric wheelchairs and electric scooters by people with some walking function runs the risk of de-conditioning the user's physical functionality and their mobile capabilities at a faster rate than if they had used a more physically active assistive technology. This project explores the information necessary to support the combined use of passive and active assistive technology for mobility, aiming to enhance well-being. The study utilized an online survey to gather insights from individuals in Hong Kong using multiple mobility devices. The survey focused on demographic data, user behaviour, mobility device usage, IoT elements, and information requirements. Results from 263 respondents revealed the prevalence of mixed active and passive mobility device usage, shedding light on the use of IoT devices and the essential information needed for commuting, device selection, and overall well-being. The findings provide valuable insights for healthcare professionals, policymakers, and technology developers aiming to improve assistive technology systems and promote well-being in the context of mobility.



ID: 285 / STS 12B: 3
Innovation Area Activity Proposal
Keywords: Older People, Home Care, eHealth

Technology Support for Active and Healthy Ageing of Community-Dwelling Older Adults: Results of an International Mixed Methods Study in Europe and Japan (On Behalf of the e-VITA Consortium)

R. Wieching, J. Boudy

eVita

Project Presentation as part of the session on eHealth

 
3:30pm - 5:00pmSTS 3B: STS: Blind and Low Vision: Orientation and Mobility
Location: Track 4
Session Chair: Alireza Darvishy, Zurich University of Applied Sciences
 
ID: 142 / STS 3B: 1
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: blind and/or vision impaired, thermal-tactile biofeedback, spatial navigation, wearable

Orientation Aid For Blind And Visually Impaired People By Using Thermal-Tactile Biofeedback At Lumbar Region For Hazard Prevention: A User Survey

V. Frank

University of Applied Sciences Upper Austria, Austria

Blind and visually impaired (BVI) people rely on acoustic information when using navigation guides, but auditory sensation is also essential for orientation in road traffic. Because this sensory channel can quickly become overloaded, it is natural to consider another sensory channel, such as the haptic channel, for navigation commands. Thermal biofeedback shows high potential for presenting messages without demanding special attention and therefore has the advantage of imposing a low cognitive workload. Further investigation is needed into how thermal information is perceived by this target group and how far it can support users’ navigation. Cold stimuli can be perceived more quickly, which led to the decision to use cold perception for stop signals. The prototype consists of two heat modules for spatial instructions and one cooling module to display a stop command at the lumbar region; a camera in the head area detects obstacles. In Trial 0, the users’ absolute perception threshold was estimated. In Trials 1 and 2, an anti-collision walking experiment was performed with two different transmission signals. Reaction behaviors and collisions were observed, user feedback and data were collected, and the NASA-TLX index was determined afterwards. Eight BVI users participated. The stop signal triggered a reliable reaction, and considering thermal biofeedback perception alone, all participants were positively surprised.



ID: 248 / STS 3B: 2
LNCS submission
Topics: STS Accessibility and Usability of Mobile Platforms for People with Disabilities and Elderly Persons: Design, Development and Engineering
Keywords: People who are blind, orientation and mobility, customization, mobile devices

Empowering Orientation and Mobility Instructors: Digital Tools for Enhancing Navigation Skills in People Who Are Blind

W. Viana

Federal University of Ceará, Brazil

Spatial navigation could present challenges for a Person Who is Blind (PWB), impeding their ability to effectively determine locations, navigate, and interact with their environment. This study proposes an innovative orientation and mobility (OM) virtual environment. Our research encompasses the development of a map editor and an audio-haptic OM training environment for mobile devices, empowering OM instructors to create customizable maps for training PWBs, thus enhancing learner autonomy and independence. We evaluated it through an extensive assessment involving 25 human-computer interaction (HCI) specialists, 24 OM instructors, and 10 PWBs. The evaluation aimed to gauge the system's effectiveness in improving the navigation and wayfinding skills of PWBs. Findings suggest that our approach significantly aids PWBs in creating mental maps, facilitating navigation and spatial awareness. Feedback from HCI experts, OM instructors, and PWB participants was instrumental in identifying critical areas for further refinement, particularly in enhancing the intuitiveness of the user interface and the accuracy of audio-haptic feedback. By enabling OM instructors to tailor learning materials to the specific needs of their students, this tool has the potential to make a meaningful impact on the autonomy and mobility of PWB. Further research and development are warranted to refine the system and explore its full potential in real-world applications.



ID: 221 / STS 3B: 3
LNCS submission
Topics: STS Blindness Gain or New Approaches to Artwork Perception and ICT Mediation
Keywords: (e)Accessibility, User Centered Design and User Participation, Assistive Technology (AT)

Designing an Inclusive Audiovisual Tool for Museum Exploration

M. Erdemli

Carleton University, Canada

This study focuses on the formative design of an audio map for individuals with blindness and low vision (BLV) for accessible wayfinding at museums. The first step entails examining the preferred accessibility features of wayfinding tools among participants. This research aims to explore design elements of an audio map and evaluate interaction strategies for an accessible wayfinding tool enhancing wayfinding and spatial awareness for users with BLV. This research contributes to a co-design approach of an ability-based audio map through surveys, interviews and discussions with individuals with blindness, low vision, and hearing impairments, and collaboration with accessibility specialists.



ID: 126 / STS 3B: 4
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: blindness, wayfinding, Sensor Technology

Step Length Estimation for Blind Walkers

R. Manduchi

UC Santa Cruz, United States of America

Independent wayfinding can be challenging for people who are blind. While GPS localization can be very helpful in outdoor environments, GPS cannot be used inside buildings, and thus different mechanisms for localization and wayfinding need to be relied upon. In this work we focus on inertial sensing for localization. Inertial sensors (accelerometers and gyros) are contained in any standard smartphone. Data from these sensors can be used by pedestrian dead-reckoning (PDR) algorithms to estimate the user's location given a known starting point.

Standard PDR algorithms use inertial data to count the number of steps taken by the user, and to determine the walking direction. By multiplying the number of steps by each step's length, the distance traversed in a certain period of time can be determined. This approach, however, requires knowledge of the length of each step taken by the walker.

In this study, we tested a machine learning algorithm for step length estimation on inertial data from 7 blind walkers (5 using a long cane and 2 using a dog guide). Note that the gait of a blind walker using a cane is typically different from that of a sighted walker, and it is also different from that of walkers using a dog guide. It is thus important that the step length prediction system be tested with data from walkers from the same communities the wayfinding is designed for.
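
A hedged sketch of the PDR distance update described above: detected steps are multiplied by a per-step length predicted from gait features. The features, synthetic training data, and choice of regressor are stand-ins for the paper's model, not a reproduction of it.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical training data: per-step features -> measured step length (metres).
# Features assumed here: [step frequency (Hz), vertical acceleration variance].
X_train = rng.uniform([1.2, 0.05], [2.2, 0.60], size=(300, 2))
y_train = 0.30 * X_train[:, 0] + 0.15 * X_train[:, 1] + rng.normal(0, 0.02, 300)

step_length_model = Ridge().fit(X_train, y_train)


def pdr_distance(step_features):
    """Sum the predicted length of each detected step (the PDR distance update)."""
    return float(step_length_model.predict(step_features).sum())


walk = rng.uniform([1.4, 0.10], [1.9, 0.40], size=(25, 2))   # 25 detected steps
print(f"Estimated distance walked: {pdr_distance(walk):.1f} m")
```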

 
3:30pm - 5:00pmI6: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
Date: Friday, 12/July/2024
8:00am - 9:30amSTS 6A: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Location: Track 1
Session Chair: Yoshinori Teshima, Chiba Institute of Technology
Session Chair: Tetsuya Watanabe, Niigata University
Session Chair: Kazunori Minatani, National Center for University Entrance Examinations
 
ID: 139 / STS 6A: 1
LNCS submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: Design for All and Universal Design, Assistive Technology (AT), (e)Accessibility, Tactile Relief

Designing an Inclusive Tactile Panoramic Relief of the City of Graz

A. Reichinger

VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Austria

Tactile models are an important tool for blind and visually impaired (BVI) people. They help to understand objects or situations that are difficult to perceive without the visual channel.

In this paper, we report on the practical experience of designing a tactile representation of the view over a city and landscape from the elevated position of a hill, which was digitally created from a combination of various geographic data sources and photographs. The tactile relief is part of a permanent museum exhibition and is mounted on an outdoor balcony enabling all museum visitors to experience the breathtaking view. Unlike existing works, we intended to create a faithful tactile representation of the view, mimicking many aspects of how the human eye perceives the world. This includes correct panoramic projection, three-dimensional representation of all buildings with realistic surface textures, correct depth layering, and plausible foreshortening not only in image space but also in depth. As perceived by the human eye, near objects are not only larger, but also have a more pronounced depth.

We describe the entire process, from design considerations, data acquisition, depth-aware projection mapping algorithm, texture generation, production, mounting, and the inclusion of markers and a legend pointing out important buildings. We provide preliminary user feedback and will conduct a formal evaluation in time for the final publication.



ID: 177 / STS 6A: 2
LNCS submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: Additive Manufacturing, Anatomical 3D Model, Simplified Shape, Tactile 3D Model, Tactile Teaching Material, Tactile Learning

Improvements of Tabletop Model of Internal Organs for Anatomy Learning of the Visually Impaired

Y. Teshima

Chiba Institute of Technology, Japan

This study presents an enhanced tabletop model of internal organs designed for anatomy learning by students with visual impairments. We implemented several modifications to the model presented in 2022: adjusting the color scheme of the respiratory system components, correcting the positional relationship between the pancreas and stomach, and modifying the shape of the large intestine. Unlike the previous model, which did not incorporate magnets to connect or fixate organs, the updated model integrates this feature. As a result of these modifications, we developed three variations of the model: Model-A, which does not use magnets to connect or fixate organs; Model-B, which employs magnets solely for fixation and not for connection; and Model-C, which uses magnets for both organ connection and fixation. The evaluation experiment revealed that Model-B is the superior instructional tool in terms of operability.



ID: 209 / STS 6A: 3
LNCS submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: (e)Accessibility, Assistive Technology, Sensor Technology, Tactile Graphics, Audio Labeling

Accessible Point-and-Tap Interaction for Acquiring Detailed Information about Tactile Graphics and 3D Models

A. Narcisi1, D. Ahmetovic1, J. Coughlan2

1University of Milan, Italy; 2Smith-Kettlewell Eye Research Institute, United States of America

We have devised a novel “Point-and-Tap” interface that enables people who are blind or visually impaired (BVI) to easily acquire multiple levels of information about tactile graphics and 3D models. The interface uses an iPhone’s depth and color cameras to track the user’s hands while they interact with a model. To get basic information about a feature of interest on the model read aloud, the user points to the feature with their index finger. For additional information, the user lifts their index finger and taps the feature again. This process can be repeated multiple times to access additional levels of information. For instance, tapping once on a region in a tactile map could trigger the name of the region, with subsequent taps eliciting the population, area, climate, etc. No audio labels are triggered unless the user makes a pointing gesture, which allows the user to explore the model freely with one or both hands. In addition, multiple taps can be issued in rapid succession to skip through to the desired information (an utterance in progress is halted whenever the fingertip is lifted off the feature), which is much faster than having to listen to all levels of information being played aloud in succession to reach the desired level. Experiments with BVI participants demonstrate that the approach is practical, easy to learn and effective.
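
An illustrative sketch of the tap-counting logic described above, cycling through per-feature information levels and halting speech when the fingertip lifts; the event names, the TTS stand-in, and the example data are hypothetical rather than the system's actual implementation.

```python
class PointAndTap:
    """Cycle through per-feature audio labels, one level per repeated tap.

    Event names (on_tap / on_lift) and the level lists are hypothetical; the
    real system drives this logic from the iPhone's hand-tracking pipeline.
    """

    def __init__(self, levels_by_feature):
        self.levels = levels_by_feature
        self.current_feature = None
        self.level = 0

    def on_lift(self, speech):
        speech.stop()                       # halt any utterance in progress

    def on_tap(self, feature, speech):
        if feature != self.current_feature:
            self.current_feature, self.level = feature, 0
        else:
            self.level = (self.level + 1) % len(self.levels[feature])
        speech.say(self.levels[feature][self.level])


class PrintSpeech:                          # stand-in for a TTS engine
    def say(self, text): print("SPEAK:", text)
    def stop(self): print("(speech halted)")


ui = PointAndTap({"Lombardy": ["Lombardy", "Population: about 10 million", "Capital: Milan"]})
tts = PrintSpeech()
ui.on_tap("Lombardy", tts); ui.on_lift(tts); ui.on_tap("Lombardy", tts)
```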



ID: 230 / STS 6A: 4
LNCS submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: tactile graphics, handwriting, blind

Automatic Generation of Tactile Graphics of Characters to Support Handwriting of Blind Individuals

S. Sonobe, A. Fujiyoshi

Ibaraki University, Japan

This study develops an automatic generation tool for tactile graphics of characters. The tool is developed by using the tactile graphics production system BPLOT. The tool generates character images from fonts installed on Windows, extracts the contours of these images, and outputs a figure-drawing program for BPLOT from the coordinates of those contours. Then, through BPLOT, the final tactile graphics are produced from a braille plotter printer.

Users of the tool can choose the typeface for the tactile graphics of characters from choices including Mincho (serif font), Gothic (sans-serif font), and Kaisho (regular script font). In addition to choosing character spacing and arrangement, users can also select whether to apply thinning processing to the character images.

The evaluation of the tactile graphics of characters was conducted with four blind university students as participants. Four types of tactile graphics were prepared as experimental materials: (1) the Mincho typeface, (2) the Gothic typeface, (3) the Kaisho typeface, and (4) the Kaisho typeface with thinning processing applied. Although material (1) received the lowest score, all tactile graphics were evaluated as legible, and all participants rated material (4) as having the highest readability.
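
A sketch of the render-and-extract step the abstract describes: a character image is rendered from an installed font and its contours are extracted as coordinate lists, which a figure-drawing program (here BPLOT, per the abstract) would then turn into embossed output. The font path is an assumption, and Pillow/OpenCV stand in for the authors' tooling.

```python
import numpy as np
import cv2
from PIL import Image, ImageDraw, ImageFont

# Path to an installed font is an assumption; substitute any TTF/TTC on your system.
FONT_PATH = "C:/Windows/Fonts/msgothic.ttc"


def character_contours(char, font_path=FONT_PATH, size=256):
    """Render one character and return its contours as lists of (x, y) points."""
    font = ImageFont.truetype(font_path, size)
    img = Image.new("L", (size * 2, size * 2), color=255)
    ImageDraw.Draw(img).text((size // 2, size // 2), char, fill=0, font=font)
    binary = cv2.threshold(np.array(img), 128, 255, cv2.THRESH_BINARY_INV)[1]
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return [c.reshape(-1, 2).tolist() for c in contours]


for contour in character_contours("あ"):
    print(len(contour), "points, e.g.", contour[0])
```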



ID: 265 / STS 6A: 5
LNCS submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: Blind and Visually Impaired People, User Centered Design and User Participation, Perception of Spatial Information, Accessibility, Usability

Exploring Space: User Interfaces for Blind and Visually Impaired People for Spatial and Non-verbal Information

R. Koutny

JKU, Austria

Meetings play an important role in today’s work environment. Unfortunately, blind and visually impaired people often encounter difficulties in participating fully and equally. The main reasons are twofold: Firstly, visual aids such as whiteboards, flipcharts, or projectors are frequently used, which present information in a 2D format. Secondly, nonverbal communication plays an essential role, with frequent use of deictic gestures, like pointing gestures, incorporating spatial information into the conversation. These factors lead to a disadvantage for blind and visually impaired individuals in the workplace.

As part of the research project [removed for blind review], an accessible brainstorming tool for meetings has been developed, capable of storing nonverbal and spatial information, which can be connected to various devices such as computers, smartphones, and smartwatches. This provides a solid foundation for successfully addressing these issues through innovative user interaction concepts, making spatial information accessible and understandable for blind and visually impaired individuals. Prototypes of these user interface concepts have been developed and tested with the target group in a staged and iterative manner, with implementations running in the browser, on a smartphone, and on a smartwatch. This paper outlines the development and testing procedures, as well as the corresponding test results.

 
8:00am - 9:30amSTS 14B: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Location: Track 2
 
ID: 220 / STS 14B: 1
LNCS submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: Post-pandemic Education, Hospital Classrooms, Educational Robotics, Environmental Education, Inclusive Education

Environmental Robotics for Educational Revival in Hospital Classrooms

J. Alé, J. Sánchez

University of Chile, Chile

The COVID-19 pandemic has had a significant impact on classrooms in Latin America, especially in Chile, where the prolonged suspension of classes resulted in learning losses, mental health issues, and students' disinterest in education, particularly in more disadvantaged sectors. This study addresses these challenges by employing strategies aimed at reactivating post-pandemic education among students in hospital classrooms, who belong to a historically marginalized sector in Chile.

With the aim of enhancing interest in the natural sciences, two didactic modules based on environmental robotics were co-designed with students from hospital classrooms. Thirty-one students aged 12 to 17 who attend classes at the hospital participated in the study. Surveys were used to measure students' initial personal interest, while the Experience Sampling Method assessed situational interest during the implementation of the modules.

A Multilevel Analysis revealed that factors such as gender, group work, and the use of conventional teaching methods, such as writing and calculations, had a negative impact on students' interest. Conversely, the incorporation of scientific practices, such as asking questions and interpreting data, positively contributed to increased student interest.

This study provides evidence on pedagogical strategies with technologies that may be more effective in increasing interest in hospital classrooms.



ID: 186 / STS 14B: 2
OAC Submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: Inclusive Education, Information and Communication Technology, eInclusion, Adaptive Learning

Enhancing Inclusive Education Through ICT – Lessons From 10 Years Of Supporting Students With Different Challenges

G. Wagner

Upper Austria University of Applied Sciences, Austria

This paper reviews a decade of efforts to enhance inclusive education through ICT, highlighting significant progress in supporting master students with various challenges.

Methodology
The methodology encompasses case study research on three specific study programmes, illustrating how ICT integration supports inclusive education. The studies cover providing digital resources in management accounting, offering online language training, and facilitating hybrid teaching and online collaboration for students facing barriers like long-term illness.

Results
Results indicate the effectiveness of ICT tools and strategies in enhancing student satisfaction and learning outcomes. For instance, the management accounting course showed that ICT implementation positively impacts workload distribution and student satisfaction. Moreover, language support for non-German speakers and customized IT support for students with health issues or disabilities highlight the adaptability of ICT in meeting diverse needs.

The paper concludes that the adaptive use of various ICT tools is both possible and necessary to support students with special needs, addressing the growing heterogeneity among students. The lessons learned from these case studies underscore the potential of ICT in fostering an inclusive educational environment that accommodates individual challenges and preferences.



ID: 255 / STS 14B: 3
OAC Submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: Teaching aids, assistive technology, training, education, enabling environments

A Pedagogical Model for In-Situ Training Interventions: Creating Inclusive Educational Pathways with Assistive Technologies through the Support of GLIC Assistance Centers for the Ministerial "Sussidi" Grant

S. M. Pagliara

University of Cagliari, Italy

This study examines a pedagogical model devised by a Center associated with the Italian AT Network GLIC, designed to enhance inclusive education through the deployment of Assistive Technologies (AT). Centered around an innovative support framework for teachers, this model operates as an in-situ informational and training desk in the Campania region, aiming to foster inclusive educational environments. By integrating technological aids and didactic supports tailored for students with disabilities and special educational needs, the initiative seeks to address the accessibility challenges within the traditional educational landscape.

Anchored by the national "Sussidi" Grant since the 2017–18 cycle, this initiative has facilitated the provision of assistive technologies, aids, and teaching supports to state public schools. The research underscores a preliminary evaluation of the service's outcomes by analyzing aid requests from teachers in the Salerno province who have applied for the grant.

The efforts of the center's interdisciplinary team have contributed to promoting inclusive and accessible educational content. Empirical evidence from the last three grant cycles illustrates the positive impact of these educational interventions, substantiating the crucial role of AT in fostering an inclusive learning environment. This study highlights the increased specialization in requested assistive technologies, underscoring the evolving needs within inclusive education frameworks.



ID: 269 / STS 14B: 4
LNCS submission
Topics: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Keywords: digital accessibility, digital documents, syllabus, psychology students

Accessibility and Digital Competencies of Psychology Students – New Perspectives

I. Mrochen

MultiAccess Centre, Poland

In the 21st century, digital skills and competencies are a fundamental requirement for many employees. Yet willingness to use digital technologies is not very common among therapists or psychologists, professionals who are often perceived as having non-digital approaches. However, the global pandemic changed the way doctors, nurses, therapists and psychologists work. It should therefore be underlined that the training of psychology students needs to change. Generally, they are expected to identify mental health problems among their would-be patients or clients, while they also ought to break down technological barriers to improve digital inclusion. As a result, in the context of the academic syllabus, psychology students should have better digital literacy skills, enabling them to recognize digital accessibility barriers, e.g. those found in digital documents. The need for training and education in digital accessibility has increased significantly in recent years, which prompted a modification of the syllabus framework.

This paper aims to explore how effectively students learn to design accessible digital documents, merging their professional skills with accessibility issues. A set of proposed changes to ensure the syllabi would cover accessibility issues was implemented in October 2022. To gather full-time and part-time students' feedback on the proposed changes, a questionnaire was administered from January 2023 to February 2024.

 
8:00am - 9:30amSTS 17A: Disability, Inclusion, Service Provision, Policy and Legislation
Location: Track 3
Session Chair: Weiqin Chen, Oslo Metropolitan University
 
ID: 156 / STS 17A: 1
LNCS submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: technology for people with disabilities, research trends, topic modelling, literature review, bibliometric analysis

Trends in Technology for People with Special Needs: A Literature Review of 20+ Years of Research

B. Fisseler

FernUniversität in Hagen, Germany

Technology is always evolving, especially in the area of assistive technology for people with disabilities. Therefore, this paper aims to analyze trends in technology for people with special needs since the year 2000 through the use of topic modeling. It is important for both professionals and academics to stay updated on the latest research and technological advances in the field of technology for people with special needs. A bibliometric study was conducted by analyzing journal articles from four large academic databases to identify prevalent topics in the relevant literature over time. Using topic modeling, a total of 41 prevalent topics were identified from a corpus of more than 10,000 scientific journal articles. The results show a steady increase in the number of articles published since 2000, with 13 specific journals playing a significant role in publishing articles related to technology and people with special needs. The topics identified relate closely to the design and evaluation of technological interventions and studies on technological interventions in schools, but also to behavior modeling through ICT and technological trends in everyday life. The final paper will contain a full overview of all 41 topics, additional analysis of the development of topic prevalence over time, as well as a detailed analysis of outstanding topics and topic development using five-year periods as the unit of analysis.
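The abstract does not state which topic-modelling implementation was used. A minimal sketch of the general approach (LDA over a corpus of abstracts with scikit-learn); apart from the reported 41 topics, the file name, preprocessing thresholds and model choice are assumptions:

```python
# Illustrative topic-modelling sketch, not the study's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = open("abstracts.txt", encoding="utf-8").read().splitlines()

vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=5)
dtm = vectorizer.fit_transform(abstracts)          # document-term matrix

lda = LatentDirichletAllocation(n_components=41, random_state=0)
doc_topics = lda.fit_transform(dtm)                # per-document topic weights

# Top words per topic, for manual labelling
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"Topic {k:02d}: {', '.join(top)}")
```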



ID: 110 / STS 17A: 2
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: mobile and pervasive assistive technologies, persons with disabilities, incentives, investments, accessibility

The Economics of Investments in Accessibility for Persons with Disabilities

S. Joseph

Kansas State University, United States of America

Emerging trends in technology are providing opportunities for a broader range of mobile and pervasive assistive technologies (MPAT) to positively impact persons with disabilities in terms of independent living and employment. However, such technologies typically require significant investments by entities that offer such options. It is not clear how such firms compete in a market with other firms that may not provide such options. Understanding such competition can help promote greater investments in accessibility infrastructure by entities and provide insights into how federal efforts can further boost such efforts. To that end, this paper presents a game-theoretic framework of market competition between two firms where one invests in accessibility (bearing additional upfront costs) and compares it with another one that does not. Numerical evaluations demonstrate the range of parametric values where accessibility investments pay off.
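The paper's game-theoretic model is not reproduced here; the following toy numerical sweep merely illustrates the kind of parametric evaluation described, with entirely hypothetical demand and cost parameters:

```python
# Toy illustration (not the paper's model): two firms compete for a market in
# which a fraction `alpha` of consumers needs accessible products. Firm A pays
# an upfront accessibility cost and captures that segment; both firms split the
# remaining demand evenly.
def profits(alpha, price, market_size, unit_cost, accessibility_cost):
    accessible_demand = alpha * market_size          # only firm A can serve it
    shared_demand = (1 - alpha) * market_size / 2    # split evenly
    margin = price - unit_cost
    profit_a = margin * (accessible_demand + shared_demand) - accessibility_cost
    profit_b = margin * shared_demand
    return profit_a, profit_b

# Sweep the share of consumers needing accessibility to see when investing pays off.
for alpha in (0.05, 0.10, 0.15, 0.20):
    a, b = profits(alpha, price=10.0, market_size=1000, unit_cost=6.0,
                   accessibility_cost=300.0)
    flag = "investment pays off" if a > b else "does not pay off"
    print(f"alpha={alpha:.2f}: firm A={a:7.1f}, firm B={b:7.1f} -> {flag}")
```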



ID: 264 / STS 17A: 3
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Assistive Technology (AT), Economics/ Policies and Legislation, Austria

Assistive Technologies in Austria: Exploring the Impact of Legal Frameworks and Subsidies

M. Schaur

JKU Institut Integriert Studieren, Austria

Assistive Technology (AT) can improve the quality of life of persons with disabilities by promoting increased independence and social inclusion and by facilitating the ongoing demand for deinstitutionalization. For AT to be effective, it must allow for individualization and adaptation to the user's needs and preferences. Legislation and funding systems have a significant impact on access to AT. In Austria, there is evidence that several barriers prevent people with disabilities from accessing appropriate AT. Therefore, the submitted paper asks which legal frameworks and funding schemes in Austria influence the accessibility, affordability and use of AT, and whether existing legal frameworks and funding schemes in Austria allow for individualization in the provision of AT. A combination of research methods was used to gain a comprehensive understanding of how legal frameworks and subsidy schemes influence access to AT in Austria. The empirical findings show that there is no legal entitlement to AT. So-called benefit catalogues for AT refer primarily to medical rehabilitation measures, are not standardized and do not reflect the state of the art. The financing of AT in Austria is not transparent, some users have to rely on additional donations from private organizations, and often lengthy procedures prevent people with disabilities from immediately using the AT they need.



ID: 200 / STS 17A: 4
LNCS submission
Topics: STS Cognitive Disabilities, Assistive Technologies and Accessibility
Keywords: User Centered Design and User Participation, eInclusion

How to Overcome Dissemination Challenges for Technical Solutions for Participation: A Journey from Research Prototypes to User-Centric Software

L. Wilkens, S. Dirks

TU Dortmund University, Germany

Nowadays, many people use digital media and software to support them in their everyday lives. Research on digital solutions for individuals with disabilities has led to the development of prototypes like browser add-ons. However, sustaining and disseminating these prototypes beyond project durations pose significant challenges. This paper examines the distribution challenges of a browser add-on developed for individuals with disabilities. The project involved end users throughout the research and development phases, resulting in a tailored prototype. The subsequent follow-up project aims to validate the add-on's effectiveness in different educational contexts. Formal and informal educational settings were explored, identifying diverse target groups. However, challenges emerged, including the evolving use of mobile devices, technical malfunctions, and a lack of digital literacy among end users. The discussion emphasizes continuous software updates, user involvement, and support structures for long-term sustainability. Empowering end users and fostering inclusive training environments are pivotal steps toward overcoming barriers and ensuring the effective use of digital technologies among diverse populations. The paper discusses the challenges and difficulties that have arisen in the dissemination and further development of software, based on the results of a research project for more digital participation on the Internet, and presents initial solution ideas and approaches.

 
8:00am - 9:30amSTS 9A: STS Augmentative and Alternative Communication Innovations in Products and Services
Location: Track 4
Session Chair: David Banes, David Banes Access and Inclusion Services
 
ID: 117 / STS 9A: 1
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Augmented and Alternative Communication (AAC), Language capabilities, Core vocabulary

Developing A Web-Based Platform For Forming Language Capability Assessment For AAC Users

M.-C. Chen

National Chiayi University, Taiwan

Although language capability has been regarded as one of the key factors of communication competence for AAC users, no proper language capability assessment that considers AAC users' limitations in motor control and speech is available in Taiwan. This study aimed to develop a web-based platform that gives clinical professionals a simple interface to create their own language capability assessments for their AAC users based on the core vocabulary approach. Meanwhile, an assessment for lower-grade students who use AAC was also developed with this platform.



ID: 143 / STS 9A: 2
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Autism Spectrum Disorder (ASD), Technology Based AAC, perceived engagement, perceived overall wellbeing

Perceived Impact of Person-Centred Technology-Based AAC for Adults with ASD

E. Feerick

IADT

Augmentative and Alternative Communication (AAC) technology has potential to improve wellbeing and communication access for non-verbal and semi-verbal autistic adults, but most research, design, and provision focuses on non-verbal autistic children. There is a gap in the literature in relation to the impact that person-centred approaches to AAC have on adults with autism. This study aimed to determine whether technology-enabled AAC has an impact on these adults in relation to perceived overall wellbeing and perceived interaction with AAC devices, when compared with traditional unaided AAC methods. As this study aimed to measure the impact of AAC, as well as obtain practitioners' perspectives and opinions, a mixed methods within-groups research design was deemed most suitable. A survey instrument, designed by the researcher, was utilised to collect the data. This survey included 7 items from the Warwick-Edinburgh Mental Well-Being Scale (WEMWBS), while still maintaining validity, to measure perceived wellbeing. Research with adults suggests that the WEMWBS can detect clinically meaningful change. Purposive sampling was used to recruit 10 participants from autism day services. These services are currently piloting a new technology-based method of AAC called Aided Language Input (ALI) using the app TD Snap on an iPad or tablet device. Results provide additional data to build on current study findings and existing theories. Future practical implications and limitations are discussed.



ID: 155 / STS 9A: 3
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Augmented and Alternative Communication (AAC), Social Innovation, Assistive Technology (AT), eInclusion

AsTeRICS Grid: Why Freely Accessible Software is Needed for Democratizing AAC in the Long-term

B. Klaus1, C. Veigl1,2

1UAS Technikum Wien, Austria; 2Johannes Kepler University Linz (JKU)

Around 97 million people in the world could benefit from Augmentative and Alternative Communication (AAC), but according to the WHO and other organisations, access to digital AAC resources is very limited, especially in low-income countries. Free AAC software could help to improve this situation, but many existing "free" applications are either behind a paywall or not published under a free licence. We therefore propose "freely accessible software" for AAC, which truly can be used by anyone.

AsTeRICS Grid (AG) is a feature-rich, grid based AAC web application which has been developed based on the feedback of AAC users from around the world. The single-page architecture and offline capability of AG reduce the required server resources and human workload for maintenance, so that the service can remain freely accessible in the long term.

We conducted an online survey which was completed by 277 participants, to examine the current usage context of AG. Most answers came from professionals (78%) who had previously used other AAC apps. Open questions were asked about advantages and disadvantages of AG, with free accessibility being rated most positively and the lack of certain features being rated most negatively.

Examples from the past show that "free" AAC apps (that are not "freely accessible software") have either disappeared or become chargeable. AsTeRICS Grid shows how the conditions for free AAC can be created and how these concepts can be implemented sustainably.



ID: 170 / STS 9A: 4
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Augmentative and Alternative Communication, Location and situation-based AAC service, Web application, Synchronization

A Location and Situation Based AAC Web Service with Automatic Synchronization of Individual Mobile Applications

J. Seo1, J. Lee1, K.-H. Hong2

1Department of Future Convergence Technology Engineering, Sungshin Women's University, Seoul, Republic of Korea; 2Department of Service and Design Engineering, Sungshin Women's University, Seoul, Republic of Korea

People who have difficulties with verbal expression use mobile Augmentative and Alternative Communication (AAC) applications. A location and situation-based AAC mobile application that recommends AAC boards based on users' current location and communication situation was developed recently. Facilitators can manage AAC resources such as AAC symbols and boards on the mobile application, but because mobile devices are for personal use and have relatively small screens, facilitators have difficulty managing and sharing AAC resources.

In this study, we developed a location and situation-based AAC web application for facilitators. The web application supports the same functions as the mobile app, makes it easy to share AAC resources among facilitators, and provides a more convenient user interface than the mobile app. Furthermore, by automatically synchronizing AAC resources generated on the web to the individual mobile devices of the AAC users who use them, it makes it easy to develop and distribute AAC resources for facilitators' individual AAC users.

We conducted a usability evaluation of the web application with 21 people experienced in AAC, and the results show that it can ease the inconvenience of managing AAC users and hierarchical AAC resources for facilitators and can improve efficiency. The web application also enables the sharing of AAC resources and is interoperable with the mobile app, extending the range of the location and situation-based AAC service.
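The abstract does not describe the synchronization mechanism itself. As a generic illustration of one common pattern (last-write-wins merge keyed by board id and modification timestamp), not the authors' protocol:

```python
# Minimal sync sketch, assuming a timestamped board record; not the paper's design.
from typing import Dict

Board = Dict[str, object]  # e.g. {"id": "...", "updated_at": 1718000000, ...}

def merge_boards(server: Dict[str, Board], device: Dict[str, Board]) -> Dict[str, Board]:
    merged: Dict[str, Board] = dict(server)
    for board_id, board in device.items():
        current = merged.get(board_id)
        if current is None or board["updated_at"] > current["updated_at"]:
            merged[board_id] = board  # device copy is newer or new
    return merged

server = {"home": {"id": "home", "updated_at": 100, "symbols": ["eat", "drink"]}}
device = {"home": {"id": "home", "updated_at": 120, "symbols": ["eat", "drink", "rest"]},
          "school": {"id": "school", "updated_at": 90, "symbols": ["book"]}}
print(merge_boards(server, device))  # device's newer "home" wins; "school" is added
```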

 
8:00am - 9:30amI7: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
9:30am - 9:45amB7: Coffee Break
9:45am - 11:15amSTS 6B: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Location: Track 1
Session Chair: Yoshinori Teshima, Chiba Institute of Technology
Session Chair: Tetsuya Watanabe, Niigata University
Session Chair: Kazunori Minatani, National Center for University Entrance Examinations
 
ID: 145 / STS 6B: 1
OAC Submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: Blind People, 3D Models, Distribution Service, Touch

3D Model Distribution Service for Blind and Visually Impaired People

T. Watanabe

Niigata University, Japan

We are providing a 3D model distribution service for blind and visually impaired people. In this service, we print and send 3D models upon request, free of charge. 119 blind and visually impaired people and 16 supporters submitted 313 requests for 3D models from November 2019 to the end of 2023. We classified the 313 requests into the following categories: architecture, terrain, biology, map, vehicle, astronomy, coin, and others. Architecture models account for 51.4% of the total requests and rank first. The requests for architecture models were concentrated on world-famous buildings. Terrain models rank second with 57 requests (18.2%). Reasons for requesting a terrain model are, on the one hand, intellectual curiosity about residential or visited places and, on the other hand, practical uses such as educational materials and hazard maps. The places depicted by the requested terrain models differed from client to client. We made and sent 237 models, which amounts to 75.7% of the total requests. As for the other 76 requests, we recommended that the client purchase the model or declined because of the difficulty of modeling. By looking back on the 3D distribution service and analyzing the requests and responses, we discuss how to continue this leading, unique service even after the project ends.



ID: 174 / STS 6B: 2
OAC Submission
Topics: STS Tactile Graphics and 3D Models for Blind People and Shape Recognition by Touch
Keywords: Visually Impaired, Geometry, Game Based Learning, Accessible Teaching Learning Aids, Inclusive Design, 3D Shape Nets

Development and Assessment of Inclusive Tangible Shape Nets to Bridge Geometry Gaps for the Visually Impaired

M. Aggarwal, P. Chanana, P. Rao

Indian Institute of Technology Delhi

Geometry plays a crucial role in fostering spatial understanding and problem-solving skills, which are essential for both academic excellence and real-world applications. However, learners with visual impairments (VI) face challenges due to the limited availability of inclusive tools and resources for geometry, resulting in a deficiency in effective learning and confidence. Due to the inherently visual nature of geometry, traditional teaching-learning methods (TLMs) need to be adapted to incorporate multi-sensory modifications for an inclusive learning environment. This research paper presents an innovative approach to tackle these challenges. The ShapeScape kit, featuring 11 shape cutouts/nets that can be folded and converted into corresponding 3D shapes, has been specifically designed to augment the accessibility, usability, and overall efficacy of geometric education for this demographic. The kit's development underwent an iterative process, including rigorous testing through semi-structured interviews and usability assessments. The outcomes affirm the ShapeScape kit as a promising solution, bridging the gap in personalised teaching and learning materials for VI students. This research contributes to the advancement of inclusive education practices, providing a tangible and effective solution to empower these learners in their journey toward proficiency in geometry.



ID: 120 / STS 6B: 3
LNCS submission
Topics: STS Accessible and Inclusive Digital Publishing
Keywords: tactile graphics, sensory substitution, color to tactile

RainbowTact: An Automatic Tactile Graphics Translation Technique that Brings the Full Spectrum of Color to the Visually Impaired

H. W. Ka

KAIST, Korea, Republic of (South Korea)

Tactile graphics enable visually impaired people to access visual information via touch. Despite extensive guidelines and solutions, effectively representing color, vital for visual communication, remains a key challenge. This research aims to address this gap by developing RainbowTact, inspired by the correlation between visible colors and light wavelengths, to translate colors into intuitive tactile representations. RainbowTact distinguishes chromatic and achromatic colors using tactile wave and dot patterns depicting hues, saturations and brightness. Achromatic shades are shown by dot density and size. RainbowTact meets key design criteria: covering the full color space with omnidirectional, orientation-independent patterns aligning with standards. A software prototype automates conversion. A pilot study evaluated RainbowTact’s effectiveness and usability through quantitative and qualitative analyses. Results showed decent color identification success and increasing task efficiency. Users strongly favored RainbowTact, highlighting benefits like pattern differentiation and non-directionality. While initial learning ease scored lower, participants expressed overall positive inclination. This demonstrates RainbowTact's potential to effectively convey color information via intuitive tactile representations to advance tactile graphics capabilities.
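The published encoding is not reproduced here; the sketch below only illustrates the general idea of mapping a colour to tactile parameters, with wave patterns for chromatic hues and dot density/size for achromatic shades (pattern names, thresholds and the three-way hue split are invented for illustration):

```python
# Hedged sketch of a colour-to-tactile mapping; not RainbowTact's actual scheme.
import colorsys

def color_to_tactile(r: int, g: int, b: int, saturation_threshold: float = 0.15):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < saturation_threshold:                     # achromatic: grey scale
        return {"pattern": "dots",
                "dot_density": round(1 - v, 2),      # darker -> denser dots
                "dot_size": round(0.5 + (1 - v), 2)}
    # Chromatic: hue selects one of several wave patterns along the spectrum
    waves = ["long-wave", "medium-wave", "short-wave"]
    return {"pattern": waves[int(h * len(waves)) % len(waves)],
            "amplitude": round(s, 2),                # saturation -> amplitude
            "height": round(v, 2)}                   # brightness -> relief height

print(color_to_tactile(220, 30, 30))   # a saturated red
print(color_to_tactile(128, 128, 128)) # a mid grey
```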

 
9:45am - 11:15amSTS 14C: STS ICT to Support Inclusive Education - Universal Learning Design (ULD)
Location: Track 2
 
ID: 130 / STS 14C: 1
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: eLearning and Education, (e)Accessibility, Design for All and Universal Design

Accessible by Design? Exploring How Barriers faced by Disabled Students are Resolved in Online and Distance Learning

T. Coughlan

Institute of Educational Technology, The Open University, United Kingdom

Common policies and models for enhancing the inclusion of disabled students promote a trajectory of moving from adjustments and provision in response to individual requests and assessments of needs, towards an inclusive educational experience that is accessible by design. It is possible to see partial successes in this regard, particularly in the space of online and distance learning (ODL). However, there remain prominent conceptual and practical challenges and a lack of clear data on what is working. This paper reports research conducted with students via a survey (n=50) and interviews (n=4) at an ODL higher education institution. These aimed to better understand the extent to which study was experienced as accessible by design, where and how adjustments were effective, and where further efforts should be focused to enhance provision. Students report barriers across types of materials, activities and assessments; however, they were often able to resolve barriers related to study materials independently, and most of those related to assessments were resolved with support. Barriers in communicative activities such as tutorials and online forums were more often reported as unresolved. A range of features that represent an accessible-by-design approach were reported as useful, and students used various sources to gain guidance around accessibility. We discuss how the findings and further data collection processes could enable accessibility by design to be enhanced.



ID: 183 / STS 14C: 2
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Serious Game, Type 1 diabetes, Kindergarten-age children, Education, Daily life-management, Lactose sensitivity, Gluten sensitivity

User-friendly Serious Game Design For Diabetic Preschool Children

P. Szabó1,2

1Eötvös Loránd Research Network, Piarista u. 4, H-1052 Budapest; 2University of Pannonia, Egyetem u. 10, 8200 Veszprem

The "DIAB SMART" is a novel "serious game" designed for preschool children recently diagnosed with type 1 diabetes, a condition whose prevalence in childhood has tripled in the past 30 years. Aimed at addressing the increased need for knowledge during diagnosis and ongoing treatment, our primary motivation was to assist children with type 1 diabetes in learning crucial information. [1-2] We designed software that comprises two main components: the "DIAB SMART" game for children and an editor for parents. The game features three mini-games: "True-False," "Which," and "Plate," while parents and dietitians can modify the game database by uploading meal/food data and images, as well as introducing new questions. The software underwent testing and evaluation by both adults and young children, utilizing modified System Usability Scale questionnaires. Results indicate high satisfaction levels among both parents and children. Notably, the versatility of "DIAB SMART" extends its utility to children with other conditions such as gluten or lactose sensitivity, and its user-friendly design makes it accessible for children with autism spectrum disorder, children with learning disabilities, and dyslexia as well. Overall, the "DIAB SMART" game represents an innovative and valuable tool for diverse pediatric health needs.



ID: 239 / STS 14C: 3
LNCS submission
Topics: STS Design for Accessible/Inclusive Sports and Rehabilitation with Assistive Technologies
Keywords: Extended Reality, Training, Special Education, Neuro developmental Disorder, Down Syndrome, Autism, Inclusive Education, Special Educational Needs.

Extended Reality for Special Educational Needs: From the Design Process to Real Products through 3D Printing

M. Covarrubias1, M. C. Carruba2

1Politecnico di Milano; 2Universita' Telematica Pegaso, Italy

Extended Reality (ER) stands out as one of the major technology trends currently, with expectations of even further growth in the near future. The primary objective of this paper is to develop an ER application for the Sharebot One and Sharebot Next Generation 3D Printers, specifically designed to assist students with special needs. The application follows an inclusive design perspective based on the International Classification of Functioning, Disability, and Health (ICF) by the World Health Organization (WHO), enabling students to work and interact with the machine safely.

To achieve this goal, three configurations have been developed and tested: a desktop version running on a PC, an Android tablet-based version, and a solution optimized for the Microsoft HoloLens device.



ID: 169 / STS 14C: 4
LNCS submission
Topics: STS Advanced Technologies for Innovating Inclusion and Participation in Labour, Education, and Everyday Life
Keywords: Artificial Intelligence and Autonomous Systems, eLearning and Education, Assessment/ Profiling and Personalization

The Effective Use of Generative AI for Personalized Learning: An Exploratory Study in the Norwegian Context

S. F. Hellesnes

Department of Design, Norwegian University of Science and Technology, Gjøvik, Norway

With the recent development of generative Artificial Intelligence (GAI) and the release of ChatGPT, opportunities for innovation have been opened in a wide range of fields. In the field of education, one of the primary goals is adapting the education for each student to support a student-centered approach instead of a ‘one-size-fits-all’ approach. Adapted education requires additional effort and time for teachers, and the integration of GAI into educational settings has the potential to be a game-changer by offering innovative solutions to enhance the learning process in this context. This study, therefore, explored the opportunities for using GAI to adapt texts based on the needs of individual students with special needs (i.e., reading comprehension challenges caused by ADHD in our study). The study comprises three phases: understanding the problem area through interviews, workshops for text generation, and finally, a comparison of the generated texts. Preliminary results show teachers struggling with adapting texts as much as they feel their students need due to a lack of time, highlighting a need to streamline this process.

 
9:45am - 11:15amSTS 17B: Disability, Inclusion, Service Provision, Policy and Legislation
Location: Track 3
Session Chair: Weiqin Chen, Oslo Metropolitan University
 
ID: 136 / STS 17B: 1
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: assistive devices, UNCRPD, post-human, habilitation, functional profile

Post-human, Disability and Inclusion

G. Griffo

European Disability Forum, Italy

In recent years, the development of assistive technologies has profoundly changed how human beings are defined. This has also happened in the field of persons with disabilities. As the first beneficiaries of aids (orthoses and prostheses), persons with disabilities have seen the perception of their condition gradually transform, up to the CRPD, which underlined that disability is a social construction. In this perspective, new concepts have been developed, such as empowerment, enabling, inclusion, and human rights, which have changed the meaning of making tools available to encourage full participation, today impeded by barriers, obstacles and discrimination. The development of assistive devices has completely revolutionized the meaning of aids applied to persons with disabilities. This is because these new products have reconfigured the limits of human beings for all persons and have made it clear that people with functional limitations can overcome the inherent limits of the materiality of the body, activating enabling and resilient factors that configure a different normality of "doing". The post-human perspective questions the traditional forms of rehabilitation, which is no longer about recovering a lost or limited function according to a predefined model of "normality" and health to be achieved (ableism), but about reformulating the idea of how a person can function, taking account of all their characteristics. It is precisely from the analysis of the functioning profile of a human being ...



ID: 149 / STS 17B: 2
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: well-being, Artificial Intelligence, Ambient and Assisted Living (AAL)

From Support To Specific Limitations Of Abilities To Autonomous Life In The Intelligent World

L. Burzagli

CNR, Italy

Since the development of ICT began, there have been concerns about the accessibility of some groups of users. Research and development activity solved specific problems through the development of AT products, which are now often integrated into operating systems. In the meantime, however, society's attitude towards these problems has changed. More recently, there has been an increasing interest in people's well-being, which is defined by the WHO as the possibility of conducting an independent, active, and fulfilling life, with all the necessary activities. One approach to increasing people's well-being is the development of "ambient active living environments" to help all people in their living activities. This is intended to be useful for everyone, particularly for the support of older people and people with limitations of activities. In this context, accessibility and usability, although necessary, are not sufficient. Therefore, the technological environment must manage daily activities, considering the abilities necessary to perform them and providing specific support when some abilities are limited; make use of monitoring and reasoning capabilities to adapt and evolve over time the type and level of support provided; and support contact with other people, in accordance with legal principles. The usefulness and the complexity of the approach are shown with practical examples for feeding, loneliness and mobility, with the adoption of Artificial Intelligence.



ID: 192 / STS 17B: 3
OAC Submission
Topics: No STS - I prefer to be allocated to a session by Keyword(s)
Keywords: Digital Skills Gap, Training, Labour Market Inclusion, European Blueprint

Bridging the Gap: A Comprehensive European Strategy for Digital Skills Development in Work Integration Social Enterprises

K. Matausch-Mahr, M. Schaur, K. Nuppenau

JKU Institut Integriert Studieren, Austria

The B-WISE project addresses the challenges faced by Work Integration Social Enterprises (WISEs), which aim to integrate disadvantaged individuals into the labor market. An early project outcome highlights the pressing issue of skills shortages and mismatches, particularly in digital skills and soft skills, affecting WISEs. To address these challenges, the project proposes the development and implementation of a strategic approach, known as the blueprint, which promotes sectoral cooperation to address skills gaps in the WISEs sector. Key objectives included identifying sectoral labor market needs, analyzing the responsiveness of existing vocational education and training (VET) systems, developing transnational curricula to assess current and future needs, and promoting good practice at national and regional levels. In addition, the project aims to develop a sustainable plan to match demand and supply of identified skills, build a supportive community for skills growth, innovation and competitiveness in the sector, and design a long-term action plan for the progressive implementation of the strategy. Led by two European networks and involving 28 partners from 13 EU countries, the project focuses on creating flexible and adaptable outputs, ensuring local implementation with a clear European perspective.

 
9:45am - 11:15amSTS 9B: STS Augmentative and Alternative Communication Innovations in Products and Services
Location: Track 4
Session Chair: David Banes, David Banes Access and Inclusion Services
 
ID: 198 / STS 9B: 1
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Augmented and Alternative Communication (AAC), GPT-4, Cerebral Palsy, Response generation

An AAC Application for Generating Japanese Response Phrases Using GPT-4

S. Kitayama1, T. Hirotomi2

1Interdisciplinary Faculty of Science and Engineering, Shimane University, Japan; 2Institute of Science and Engineering, Academic Assembly, Shimane University

Conversational narratives consisting of anecdotes, experiences, and jokes are useful to facilitate interactional communication for augmentative and alternative communication (AAC) users. We developed a new AAC application that uses speech recognition and the Generative Pre-trained Transformer 4 (GPT-4) to generate four types of Japanese response phrases: boke, tsukkomi, neutral, and backchannel phrases. The purpose of this research is to examine (1) whether GPT-4 can be used to generate boke, tsukkomi, neutral, and backchannel phrases, (2) whether the generated phrases are used in conversations, and (3) what type of interaction occurs when the generated phrases are used.

A 22-year-old Japanese man with cerebral palsy participated in the development as an AAC user. We conducted four sessions with him, each lasting 20 to 30 min. In these sessions, we used our application in real conversations and exchanged opinions for further improvements. These sessions were videotaped for analysis.

The results were summarized as follows: (1) Our application could generate four types of Japanese phrases in a mean duration of 5.37 s (SD 2.47) after the partner's utterance. (2) The participant selected one tsukkomi, two boke, and four neutral phrases. The mean duration to produce the selected utterance from the end of the partner's utterance was 17.52 s (SD 14.71). (3) When the participant presented the generated Japanese jokes, the dyad laughed together.
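The authors' prompts and pipeline are not described in the abstract; the following is only an illustrative sketch of requesting such response candidates with the OpenAI Python SDK (model name, prompt wording and output format are assumptions, not the paper's implementation):

```python
# Hedged sketch: ask a GPT-4 class model for four Japanese response candidates
# (boke, tsukkomi, neutral, backchannel) to the partner's recognized utterance.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_responses(partner_utterance: str) -> str:
    prompt = (
        "Partner said (Japanese): " + partner_utterance + "\n"
        "Return four short Japanese responses, one per line, labelled "
        "boke, tsukkomi, neutral, backchannel."
    )
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
    )
    return completion.choices[0].message.content

print(suggest_responses("昨日、駅で財布を拾ったんだ。"))
```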



ID: 206 / STS 9B: 2
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: open source, AAC, Augmentative and Alternative Communication, disability, symbols, assistive technology, participatory

Symbol Builder for Autocreation of Images for Alternative and Augmentative Communication

D. Banes

David Banes Access and Inclusion Services, UK

In many developing countries, access to Alternative and Augmentative Communication (AAC) is limited to pictographic symbols designed to reflect languages and cultures other than those of the locality. Simple, functional communication is available, but the breadth and depth of local vocabulary is often restricted. This is usually due to the cost of customization and a lack of the support necessary to create symbols in a style similar to those already offered, or yet to be created, for an individual with speech and language difficulties. Generative AI tools have the potential to affect the use of AAC in diverse situations and settings by accelerating automated symbol development, supported by participatory evaluation. Symbol Builder uses AI models for image-to-text processing, in which style descriptions of individual symbols are created. These captions are automatically paired with actual symbols from openly licensed symbol sets. The next stage requires the provision of text prompts that engage the AI model, trained on the schema for a specific symbol set, to generate a symbol representing a new concept. The resulting image can be edited or accepted as a new pictograph, which then goes through a voting process for acceptance. This is where AAC users and communication partners from the relevant linguistic and cultural setting decide if the symbols are ready to be published or require further adaptations. Finally, symbols are uploaded to a repository of openly licensed AAC symbols for public use.
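As an illustration of the first stage only (captioning candidate images and pairing captions with existing concepts), under the assumption that an off-the-shelf captioning model stands in for the project's image-to-text step; the model name and matching heuristic are not from the paper:

```python
# Hedged sketch of caption-and-pair; the generation stage is not shown.
from difflib import SequenceMatcher
from transformers import pipeline  # pip install transformers

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

symbol_concepts = ["to eat", "to drink", "school", "happy", "market stall"]

def closest_concept(image_path: str) -> tuple[str, str]:
    caption = captioner(image_path)[0]["generated_text"]
    best = max(symbol_concepts,
               key=lambda c: SequenceMatcher(None, caption.lower(), c).ratio())
    return caption, best

caption, concept = closest_concept("candidate_symbol.png")
print(f"caption: {caption!r} -> paired concept: {concept!r}")
```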



ID: 210 / STS 9B: 3
LNCS submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Cerebral palsy, Augmented and Alternative Communication (AAC), adaptive technology, multimodal

Towards Adaptive Multi-Modal Augmentative and Alternative Communication for Children with CP

A. Zisman

The Open University, United Kingdom

Effective communication can pose significant challenges for non-verbal children with Cerebral Palsy (CP). Augmentative and Alternative Communication (AAC) systems have helped many but can fail to meet the needs of some users. This research proposes a hybrid adaptive approach, utilizing sensors and machine learning (ML) algorithms to create a personalized mobile communication system for those whose abilities are ill-suited to existing approaches. The system aims to tailor itself to individual abilities, reducing the need for users to adapt to system requirements. Online surveys gathered data on gestures, actions, and sounds used by non-verbal children with CP, informing a classification system and functional requirements. The participants reported 28 communication messages with diverse means of expression. Representative examples and their classification highlight the intricacies of non-verbal communication. The proposed architecture emphasizes real-time classification, multiple sensors, and a feedback loop for continuous improvement, enhancing communication for non-verbal children with CP.
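The proposed architecture is not detailed in the abstract; the sketch below only illustrates the classification-plus-feedback idea with placeholder sensor features, message labels and model choice:

```python
# Hedged sketch: map windows of sensor features to communication messages and
# report per-class confidence; low-confidence windows can be routed to a
# caregiver for labelling (the feedback loop). Placeholder data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: summary features of one sensor window (e.g. accelerometer energy,
# audio pitch, gesture duration); labels: message classes such as "yes", "more".
X_train = np.array([[0.9, 180.0, 0.4], [0.1, 90.0, 1.2], [0.8, 200.0, 0.3]])
y_train = np.array(["yes", "more", "yes"])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

window = np.array([[0.85, 190.0, 0.35]])
probs = clf.predict_proba(window)[0]
print(dict(zip(clf.classes_, probs.round(2))))  # confidence per message class
```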



ID: 153 / STS 9B: 4
OAC Submission
Topics: STS Augmentative and Alternative Communication Innovations in Products and Services
Keywords: Autism Spectrum Disorder (ASD), Augmented and Alternative Communication (AAC), Mobile Application, Digital Therapeutic

AVATA-AAC: an AAC-based Digital Therapeutic to Improve Communication Skills in Children with Autism Spectrum Disorder

Y. R. Kang

Department of Service Design Engineering, Sungshin Women’s University

ASD (Autism Spectrum Disorder) is a neurodevelopmental disorder that affects social interaction and communication skills. The purpose of this study was to develop a digital therapeutic application (AVATA-AAC: Amusing Verbal and Alternative Communication Tools for Children with ASD - Augmentative and Alternative Communication) to improve the communication skills of children with ASD and to evaluate the effectiveness of the application in improving vocabulary acquisition and AAC-based communication skills. The social validity of this digital therapeutic was also evaluated by assessing its acceptability to caregivers.

This digital therapeutic consists of two main parts: Joint Attention and AAC-based communication learning. The Joint Attention part focuses on developing skills such as pointing at objects, sharing attention with caregivers, and focusing attention. The AAC-based communication learning part aims to help children understand vocabulary, learn AAC graphic symbols, and acquire communication expressions with AAC symbols in various real-life scenarios.

We conducted a user study with four children with ASD between the ages of 3 and 5. The results of the user study indicate that AVATA-AAC is effective in improving the language learning and communication of children with ASD. In addition, parents reported high levels of satisfaction. Future research should aim to diversify the contents and conduct user studies with a larger sample of children with ASD.

 
9:45am - 11:15amI8: Innovation Area
Location: Innovation Area
Session Chair: Andrea Petz, JKU Linz
Will be announced soon: workshops, posters, ...
11:15am - 11:30amB8: Coffee Break
11:30am - 12:30pmPlenary III: Springer Keynote
Location: Plenary
https://www.icchp.org/content/keynotes-3#w3c
Session Chair: Josef Küng, Johannes Kepler University Linz
This session is supported by Springer Lecture Notes in Computer Science.
 
ID: 293 / Plenary III: 1
Keynote

Making AI Really Useful in R&D and Practice (for AT, Accessibility and Inclusion)

J. Kofler

Johannes Kepler University Linz, Austria

 
12:30pm - 2:00pmFarewell Party: See you again in 2026
Location: Plenary
https://www.icchp.org/content/keynotes-3#w3c
Session Chair: Petr Peňáz, Teiresias Centre, Masaryk University
Session Chair: Boris Janča, Masaryk University

 