Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
STS 7B: STS Accessibility for the Deaf and Hard-Of-Hearing
Time: Thursday, 11/July/2024, 3:30pm - 5:00pm

Session Chair: Matjaž Debevc, University of Maribor, FERI
Session Chair: Raja Kushalnagar, Gallaudet University
Session Chair: Sarah Ebling, University of Zurich
Location: Track 1

Ceremony Room A, Uni-Center, 1st floor; 210 seats (253); cinema/theater-style seating with a gallery. https://www.jku.at/en/campus/the-jku-campus/buildings/uni-center-university-cafeteria/

Presentations
ID: 228 / STS 7B: 1
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: subtitles, closed captions, video, large language models, media

Customization of Closed Captions via Large Language Models

A. Glasser, R. Kushalnagar, C. Vogler

Gallaudet University, United States of America

This study investigates the feasibility of employing artificial intelligence and large language models (LLMs) to customize closed captions/subtitles to match the personal needs of deaf and hard of hearing viewers. Drawing on recorded live TV samples, it compares user ratings of caption quality, speed, and understandability across five experimental conditions: unaltered verbatim captions, slowed-down verbatim captions, moderately and heavily edited captions via ChatGPT, and lightly edited captions by an LLM optimized for TV content by AppTek, LLC. Results across 16 deaf and hard of hearing participants show a significant preference for verbatim captions, both at original speeds and in the slowed-down version, over those edited by ChatGPT. However, a small number of participants also rated AI-edited captions as best. Despite the overall poor showing of AI, the results suggest that LLM-driven customization of captions on a per-user and per-video basis remains an important avenue for future research.
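
The abstract mentions moderately and heavily edited captions produced via an LLM but does not detail the editing pipeline. The following minimal Python sketch is purely illustrative, not the authors' method: the prompt wording, the edit levels, and the call_llm stand-in (any text-generation API) are assumptions made for demonstration.

# Illustrative sketch only: the study's actual editing pipeline is not described
# in the abstract. "call_llm" stands in for any text-generation API, and the
# prompts and edit levels below are assumptions made for demonstration.

EDIT_PROMPTS = {
    "light": "Lightly edit this TV caption: fix obvious errors, keep the wording intact.",
    "moderate": "Moderately condense this TV caption: shorten sentences but keep all key information.",
    "heavy": "Heavily condense this TV caption: keep only the essential meaning, as briefly as possible.",
}

def customize_caption(segment: str, edit_level: str, call_llm) -> str:
    """Return one caption segment edited at the requested strength."""
    if edit_level == "verbatim":
        # Control condition: captions are passed through unchanged.
        return segment
    instruction = EDIT_PROMPTS[edit_level]
    return call_llm(f"{instruction}\n\nCaption:\n{segment}")

# Example usage with a dummy model that simply echoes its prompt:
# edited = customize_caption("WE'LL BE RIGHT BACK AFTER THE BREAK.", "moderate", lambda p: p)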



ID: 258 / STS 7B: 2
LNCS submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Language Learning, Auditory Training, Children with Hearing Impairment, User Centered Design and User Participation

Designing Pachi: A Verbal Language Learning Application for Children with Hearing Impairment in India

R. Sharma, A. Johry

Indian Institute of Technology, Delhi, India

Language is crucial for a child's development, and children with Hearing Impairment (HI) face particular challenges in speech and language acquisition. Early intervention positively impacts language development in HI children, but challenges persist in India due to limited resources and unmet demand. While auditory and speech training is given to young children to support mainstream integration, access to formal therapy is limited by geographical and socio-economic constraints. This paper introduces a tablet-based application for HI children (0-48 months) and their facilitators that supports verbal language development in Hindi.

The work began with field visits that identified challenges faced by HI preschoolers, leading to the design of a physical toolkit with three speech therapy games. These games focused on auditory participation, lip-speech cue recognition, and articulation, and were tested with 10 participants.

Building on this, expert consultations shaped a comprehensive four-stage therapy model for Hindi language learning, covering auditory skills, verbal comprehension, speech, and language development.

Using the four-stage model, the digital application Pachi was developed. Aimed at a broader audience without access to a speech therapist, it offers 10–12-minute daily modules tailored to the ISD scale. A sample module for a 'Hearing Age' of 13-15 months involves four games aligned with the four stages, focusing on auditory training, vocabulary building, pronunciation, and articulation.



ID: 167 / STS 7B: 3
OAC Submission
Topics: STS Accessibility for the Deaf and Hard-Of-Hearing
Keywords: Online language instruction, sign language, Deaf, hearing learners

Post-COVID Sign Language Instruction By The Deaf: Perspectives From Hearing Sign Language Learners

M. Kakuta, R. Ogata

Kanto Gakuin University, International Christian University IERS, Japan

COVID-19 brought many changes to the format of language classes. Tools such as Zoom and Microsoft Teams became widely known and are now used in educational settings. For the Deaf, the use of technology and online tools had positive effects: auto-captioning and the spotlight function proved very convenient in online meetings. For language classes, the pandemic was an opportunity for Deaf instructors to start online classes, and many hearing sign language learners took them. Now that classes have returned to normal, hearing sign language learners can again learn sign language in a face-to-face setting. This study looks at post-COVID online sign language instruction and how the hearing students who took such classes feel about online classes conducted by Deaf instructors. A survey was conducted to analyze the hearing learners' views of online sign language classes. It found that although many preferred to receive sign language instruction face to face, there is still a need for online sign language instruction. Time saved on travel was a key positive point of online classes cited by the hearing respondents. However, further research is needed on how three-dimensional signs can be presented in a two-dimensional setting. The post-COVID period has brought opportunities for a new style of sign language instruction.



 