ID: 116
/ STS 15: 1
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Chart Analysis, Alt-Text, Image Retrieval, CLIP Model, User Interface
Alt4Blind: A User Interface to Simplify Charts Alt-Text Creation
O. Moured1,2, K. Müller2, T. Schwarz2
1CV:HCI@KIT, Karlsruhe Institute of Technology, Germany; 2ACCESS@KIT, Karlsruhe Institute of Technology, Germany
Alternative Texts (Alt-Text) for chart images are essential for making graphics accessible to blind and visually impaired individuals. Traditionally, Alt-Text is written manually by authors, but it often suffers from issues such as oversimplification or overcomplication. Recent trends have seen the use of AI for Alt-Text generation; however, existing models are susceptible to producing inaccurate or misleading information. We address this challenge by retrieving high-quality Alt-Texts from similar chart images to serve as references for the user. Our three contributions are as follows: (1) we introduce a new benchmark comprising 5,000 real chart images with semantically labeled, high-quality Alt-Texts, collected from HCI venues; (2) we have developed a deep learning-based model to rank and retrieve similar chart images that share the same visual and textual semantics; (3) we have designed a user interface that integrates our model and eases the Alt-Text creation process. Our preliminary interviews and investigations highlight the usability of our UI.
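The core retrieval step described in this abstract — ranking chart images by joint visual and textual similarity — can be sketched with precomputed embeddings and cosine similarity. This is a minimal illustration, not the authors' implementation: the embedding vectors here are toy stand-ins for features that a model such as CLIP would produce.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_charts(query_emb, corpus):
    """Rank (chart_id, embedding) pairs by similarity to the query embedding."""
    scored = [(cid, cosine(query_emb, emb)) for cid, emb in corpus]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy embeddings standing in for CLIP-style image/text features.
corpus = [("bar_chart_01", [0.9, 0.1, 0.0]),
          ("line_chart_07", [0.1, 0.9, 0.1]),
          ("bar_chart_13", [0.8, 0.2, 0.1])]

best_id, best_score = rank_charts([1.0, 0.0, 0.0], corpus)[0]
print(best_id)  # → bar_chart_01
```

The top-ranked chart's existing Alt-Text would then be shown to the author as a reference draft rather than used verbatim.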
ID: 131
/ STS 15: 2
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Tactile Charts, Chart Analysis, Vision Language Models, Transformers, Scalable Vector Graphics
ChartFormer: A Large Vision Language Model for Converting Chart Images into Tactile Accessible SVGs
O. Moured1,2, S. Alzalabny3, K. Müller2, T. Schwarz2
1CV:HCI@KIT, Karlsruhe Institute of Technology, Germany; 2ACCESS@KIT, Karlsruhe Institute of Technology, Germany; 3NeptunLab, University of Freiburg, Germany
Visualizations such as charts are crucial for interpreting complex data. However, they are often provided as raster images, which are incompatible with assistive technologies for blind and visually impaired individuals, such as embossed paper or refreshable tactile displays. At the same time, creating accessible vector-graphic visualizations requires a skilled sighted person and is time-intensive. In this work, we leverage advances in chart analysis to generate tactile charts in an end-to-end manner. Our three key contributions are as follows: (1) introducing the ChartFormer model, trained to convert raster chart images into tactile-accessible SVGs; (2) training this model on Chart2Tactile, a synthetic chart dataset we created following accessibility standards; and (3) evaluating the effectiveness of our SVGs through a pilot user study with the HyperBraille, a refreshable tactile display. Our work is publicly available at (link after review).
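The target output format — an SVG carrying machine-readable structure and accessibility metadata — can be illustrated with a minimal stdlib sketch. This is not ChartFormer's actual output schema: real tactile SVGs must additionally follow embossing guidelines (line weights, spacing, braille labels) that this toy example omits.

```python
import xml.etree.ElementTree as ET

def bars_to_svg(values, title, width=400, height=200):
    """Emit a minimal SVG bar chart with <title>/<desc> metadata,
    using unfilled outlines as a stand-in for tactile-friendly strokes."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(height))
    ET.SubElement(svg, "title").text = title
    ET.SubElement(svg, "desc").text = f"Bar chart with {len(values)} bars."
    bar_w = width // (2 * len(values))
    peak = max(values)
    for i, v in enumerate(values):
        h = int(height * v / peak)
        ET.SubElement(svg, "rect",
                      x=str(2 * i * bar_w), y=str(height - h),
                      width=str(bar_w), height=str(h),
                      fill="none", stroke="black")
    return ET.tostring(svg, encoding="unicode")

print(bars_to_svg([3, 5, 2], "Quarterly sales"))
```

Because the output is vector-based, each `rect` remains an addressable object that downstream tools (or a refreshable tactile display driver) can scale and render without rasterization loss.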
ID: 163
/ STS 15: 3
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: (e)Accessibility, Artificial Intelligence, eLearning and Education, Real-time Chart Image Description, Charts Interpretation
A Real-Time Chart Explanation System for Visually Impaired Individuals
W. Cho1, J. Park2
1Semyung University, Korea, Republic of (South Korea); 2Institute of ICT Convergence, Sookmyung Women's University, Korea, Republic of (South Korea)
This research addresses the critical need for real-time chart captioning systems optimized for visually impaired individuals in the evolving era of remote communication. The surge in online classes and video meetings has increased the reliance on visual data, such as charts, which poses a substantial challenge for those with visual impairments. Our study concentrates on the development and evaluation of an AI model tailored for real-time interpretation and captioning of charts. This model aims to enhance the accessibility and comprehension of visual data for visually impaired users in live settings. By focusing on real-time performance, the research endeavors to bridge the accessibility gap in dynamic and interactive remote environments. The effectiveness of the AI model is assessed in practical scenarios to ensure it meets the requirements of immediacy and accuracy essential for real-time applications. Our work represents a significant contribution to creating a more inclusive digital environment, particularly in addressing the challenges posed by the non-face-to-face era.
ID: 201
/ STS 15: 4
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Mathematical functions, Graphs, Description standard, Accessible content
Towards Establishing a Description Standard of Mathematical Function Graphs for People with Blindness
T. Schwarz, K. Müller
Karlsruhe Institute of Technology, Germany
Visual representations are widely used to convey information to sighted people. However, these visualizations are often inaccessible to blind people and require special adaptations. Although describing graphics is one way to make them accessible, there are only a few description standards for this target group. In this paper, we develop a standard for describing function graphs at the university level in a user-centered, multi-stage design process, focusing on structuring the information for blind readers and guiding authors to include the necessary details and avoid errors.
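The general idea of a structured description — a fixed field order that leads with an overview, then axes, then notable points — can be sketched as below. The field order and wording here are illustrative assumptions, not the standard proposed in the paper.

```python
def describe_function_graph(func_expr, domain, features):
    """Assemble a layered textual description of a function graph:
    overview first, then axes, then notable points in a fixed order."""
    lines = [f"Graph of f(x) = {func_expr} on the interval {domain}.",
             "Axes: x horizontal, y vertical, both linear."]
    for kind, point in features:
        lines.append(f"{kind} at {point}.")
    return " ".join(lines)

print(describe_function_graph(
    "x^2 - 1", "[-2, 2]",
    [("Minimum", "(0, -1)"), ("Zero", "(-1, 0)"), ("Zero", "(1, 0)")]))
```

A fixed field order lets a screen-reader user skip to the part they need (e.g., jump straight to extrema) instead of re-parsing free-form prose for every graph.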
ID: 213
/ STS 15: 5
LNCS submission
Topics: STS New Methods for Creating Accessible Material in Higher Education
Keywords: Exam Documents, Layout Analysis, Hierarchy, Object Detection, Document Analysis
ACCSAMS: Automatic Conversion of Exam Documents to Accessible Learning Material for Blind and Visually Impaired
O. Moured1,2, D. Wilkening2, K. Müller2, T. Schwarz2
1CV:HCI@KIT, Karlsruhe Institute of Technology, Germany; 2ACCESS@KIT, Karlsruhe Institute of Technology, Germany
Exam documents are essential educational materials for exam preparation. However, they pose a significant academic barrier for blind and visually impaired students, as they are often created without accessibility considerations. Typically, these documents are incompatible with screen readers, contain excessive white space, and lack alternative text for visual elements. This situation frequently requires intervention by experienced sighted individuals to modify the format and content for accessibility. We propose ACCSAMS, a semi-automatic system designed to enhance the accessibility of exam documents. Our system offers three key contributions: (1) creating an accessible layout and removing unnecessary white space, (2) adding navigational structures, and (3) incorporating alternative text for visual elements that were previously missing. Additionally, we present the first multilingual, manually annotated dataset, comprising 1,293 German and 900 English exam documents, which can serve as a training source for deep learning models.
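One ingredient of contribution (1) — removing the excessive white space that exam documents leave for handwritten answers — can be sketched on extracted text with a simple blank-run collapse. This is a toy stand-in for the paper's layout-analysis pipeline, which operates on the document layout rather than plain text.

```python
def collapse_whitespace(text, max_blank=1):
    """Collapse runs of blank lines (e.g., answer-writing space in exams)
    down to at most `max_blank`, preserving the document's structure."""
    out, blanks = [], 0
    for line in text.splitlines():
        if line.strip():
            blanks = 0
            out.append(line.rstrip())
        else:
            blanks += 1
            if blanks <= max_blank:
                out.append("")
    return "\n".join(out)

sample = "Question 1\n\n\n\n\nName two sorting algorithms.\n\n\nQuestion 2"
print(collapse_whitespace(sample))
# → Question 1
#
#   Name two sorting algorithms.
#
#   Question 2
```

Keeping a single blank line (rather than none) preserves the question boundaries that the system's navigational structures are built on.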