Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference. The current conference time is: 9th May 2025, 10:28:11am America/Santiago.

External resources will be made available 5 min before a session starts. You may have to reload the page to access the resources.

 
 
Session Overview
Session
STE-R S4: Remote Presentations
Time:
Thursday, 10/Apr/2025:
2:30pm - 4:30pm

Session Chair: Cristian Robles, DUOC UC
Location: online



External Resource: https://us06web.zoom.us/j/82249455476?pwd=lMG0HLEALQtDH2ea2u8j8u6sQgP6Y7.1
Presentations
2:30pm - 2:54pm

Unveiling the Relationship among Chinese Pre-service Foreign Language Teachers’ AIGC Literacy Beliefs, Practices, and Emotions

Yang Gao, Mengmeng Xu

Xi'an Jiaotong University, China, People's Republic of

With the ongoing digital transformation in education, enhancing Artificial Intelligence Generated Content (AIGC) literacy among pre-service foreign language teachers has become essential for adapting to the challenges of the AI-driven era. This study investigates the relationship among Chinese pre-service foreign language teachers' AIGC literacy beliefs, practices, and emotions. Using an explanatory sequential mixed-methods design, the study employs questionnaire surveys, classroom observations, and in-depth interviews to gain a comprehensive understanding of pre-service foreign language teachers' AIGC literacy beliefs, practices, and emotions. The questionnaire covers five dimensions of AIGC literacy: Beliefs and Ethics, Thinking and Language Competence, Knowledge and Skills, Application, and Emotional Dynamics. Quantitative data from a large sample of pre-service teachers will be analyzed using SPSS, while qualitative data from classroom observations and interviews will be analyzed using NVivo. The findings reveal how pre-service teachers integrate AIGC into their teaching practices and the emotional dynamics they experience during this process. This study is expected to provide insights into pre-service language teachers' views on using AI technologies for language education, which will help prepare them for an AI-driven educational environment and enhance the integration of foreign language education with AI technologies, ultimately improving education quality.



2:54pm - 3:18pm

Using Computer Vision And Open Data To Improve Sign Language Proficiency In An Inclusive Communication Environment

Hlib Stupak1, Hanna Telychko1, Daria Lytvyn2, Mykyta Telychko2

1Donetsk National Technical University, Ukraine; 2Secondary School No. 9, Pokrovsk City Council, Ukraine

CONTEXT

The World Health Organization states that over 5% of the global population—approximately 430 million people—experience some degree of deafness and muteness. The lack of sign language proficiency among the general population creates communication barriers, impacting the emotional well-being of both deaf-mute and hearing individuals. Developing accessible tools for self-guided sign language learning is crucial for fostering mental health and mutual understanding across diverse groups.

PURPOSE OR GOAL

This study aims to enhance sign language proficiency for deaf-mute and hearing individuals by utilizing computer vision technology to create an interactive, inclusive sign language dictionary. This tool will facilitate independent learning, improve communication accuracy, and promote inclusive social interactions. We hypothesize that computer vision integration will make sign language learning more accessible and efficient, thereby fostering a more inclusive society and improving psychological well-being.

APPROACH

Our approach included analyzing current sign language learning methods, exploring open databases like WLASL with educational video content, and developing a gesture recognition algorithm using computer vision. We programmed algorithms for real-time gesture recognition via webcam, enabling users to compare their gestures with standard models and receive immediate feedback. An interactive user interface supports various sign languages, allowing users to practice specific gestures. The application was tested with a target group to assess usability and gesture recognition accuracy.
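The comparison of a user's gesture against a standard model, as described above, could for instance be implemented by normalizing detected hand landmarks and scoring their similarity. The following Python sketch is illustrative only (function names, the cosine-similarity metric, and the threshold are assumptions, not the authors' implementation):

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Center landmarks on the wrist point and scale to unit size so the
    comparison is invariant to hand position and distance from the camera."""
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]                       # wrist as origin
    scale = np.linalg.norm(pts, axis=1).max()
    return pts / scale if scale > 0 else pts

def gesture_similarity(user, reference):
    """Cosine similarity between two flattened, normalized landmark sets."""
    a = normalize_landmarks(user).ravel()
    b = normalize_landmarks(reference).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(user, reference, threshold=0.9):
    """Immediate feedback: does the user's gesture match the model?"""
    return gesture_similarity(user, reference) >= threshold
```

In a real pipeline, the landmark arrays would come from a per-frame hand detector running on the webcam stream, and the threshold would be tuned against the reference gesture database.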

OUTCOMES

The dictionary application achieved an 85% accuracy rate in gesture recognition, allowing users to practice gestures in real-time and facilitating smoother self-study. Feedback from test users indicated that the tool effectively supported learning new gestures and improving proficiency.

CONCLUSIONS

This project advances accessible tools for sign language learning, enhancing communication between deaf-mute and hearing communities. Findings confirm that computer vision effectively improves gesture recognition and communication skills. Future work will expand the gesture database and enhance feedback mechanisms for greater accuracy.



3:18pm - 3:42pm

Evaluation of Exergame Adaptation of Computer Games from the Open Visual Programming Language Scratch Repository for Balance Board Gamification

Oleksandr Blazhko1, Lyudmyla Vovkochyn2, Asan Volkov1

1Odesa Polytechnic National University, Ukraine; 2Cherkasy State Technological University, Ukraine

The study is devoted to the computer gamification of balance boards using Arduino microcontrollers and open-source computer games. The research investigates 62 computer games on Olympic-sport topics from the open Scratch repository of programs. To evaluate computer gamification, the authors propose recommendations for creating a sequence for using games that accounts for the technical limitations of the Rocker and Wobble balance boards, together with a description of game design based on the “Mechanics-Dynamics-Aesthetics” model and the “GamePlay-Bricks” model. This paper proposes to evaluate games by the following properties: informational proximity of the boards to the game scenario; proximity between the signals from the computerized boards and the keyboard-key or mouse-movement signals in the games; proximity between the movements of the game character and the movements of the player on the board; complexity of the game scenario; methods of evaluating player results; level of game-mechanics coverage; and level of balance between mechanics and the dynamics of game characters. Based on the recommendations, 42% of the games were selected without reprogramming each game, by creating a C program for the Arduino Leonardo microcontroller that simulates keystrokes and computer-mouse movement. The study investigated the mechanics and dynamics of the games; analyzing the impact of game aesthetics on the effectiveness of board gamification was beyond the scope of this study, as was the impact of exergame design on improving the player’s mental health, which will be the topic of further research.
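The core idea of the gamification, translating board tilt into the keyboard events an unmodified Scratch game already listens for, can be sketched as follows. The abstract's actual implementation is a C program on the Arduino Leonardo using its USB keyboard emulation; the Python below, including the threshold and key names, is purely illustrative:

```python
# Illustrative mapping of balance-board tilt readings to arrow-key events.
# A real deployment would run equivalent logic as C code on an Arduino
# Leonardo, which presents itself to the PC as a USB keyboard.

TILT_THRESHOLD = 10.0  # degrees; hypothetical dead zone around level

def tilt_to_keys(roll, pitch):
    """Map a (roll, pitch) tilt in degrees to the set of arrow keys
    that should currently be held down for the game."""
    keys = set()
    if roll > TILT_THRESHOLD:
        keys.add("RIGHT")
    elif roll < -TILT_THRESHOLD:
        keys.add("LEFT")
    if pitch > TILT_THRESHOLD:       # a Wobble board tilts on both axes;
        keys.add("UP")               # a Rocker board would only ever
    elif pitch < -TILT_THRESHOLD:    # produce input on a single axis
        keys.add("DOWN")
    return keys
```

This also illustrates the boards' technical limitations mentioned above: a Rocker board can only drive one input axis, which restricts which Scratch games it can gamify.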



3:42pm - 4:06pm

Emotion Recognition and Identity Protection System with AI-Driven Spoofing Prevention

Alexandru Stefan Negrea, Nicolaie-Alin Marin, Horia-Daniel Oprea, Ioana Corina Bogdan, Horia Alexandru Modran

Transilvania University of Brasov, Romania

Thanks to the continuous evolution of computational technologies, emotion recognition and identity protection systems have become vital components in artificial intelligence (AI), especially in domains involving human-machine interaction (HMI) and affective computing (AC). These systems are crucial for capturing and analyzing real-time emotional responses, providing significant advancements in personalized user experiences. However, due to the growth of cyber-attacks (such as spoofing attempts, malware, ransomware, phishing, brute-force attacks, and denial of service), robust systems are needed not only to ensure accurate emotion recognition but also to provide secure identity management and optimized data storage. This project addresses these challenges by integrating advanced detection algorithms and hardware solutions to enhance both the reliability and security of emotion recognition platforms.



4:06pm - 4:30pm

FLOSSA: An Application for Improving the Interaction between Patient and a Humanoid Robot in a Dental Care Scenario

Kristoffer Kuvaja Adolfsson, Dennis Biström, Leonardo Espinosa-Leal

Arcada University of Applied Sciences, Finland

Project MäRI intends to bring forth evidence-based information about the experience when a social humanoid robot, a care recipient, and nursing students meet. To collect this information, an application, Flossa, needs to be developed for the Arcada robot Alf, a commercial robot from Sanbot called Elf. Information on app development for social humanoid robots is limited, and to make sure Flossa can bring forth the appropriate information for the MäRI project, Alf and the application must be serviceable. This in turn means that Alf and Flossa must be able to communicate in Swedish through speech synthesis and speech recognition.

This work describes the development process for the application Flossa and its four software components: a server implementation that uses speech synthesis and speech recognition services from Google to make the robot Alf communicate in Swedish; a web application that serves as Flossa’s graphical user interface, based on a node-tree that enables the interaction between user and robot; and finally, a streaming server and an Android application, developed with the tools provided by Sanbot, that together bring life and movement to the robot during the interaction with the user and the graphical user interface.
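A node-tree of the kind the abstract describes can be pictured as a graph where each node holds a prompt (spoken via speech synthesis) and maps recognized answers to follow-up nodes. The sketch below is purely illustrative; the node names and Swedish prompts are invented and are not taken from the Flossa source:

```python
# Minimal sketch of a node-tree dialogue of the kind Flossa's GUI is
# built on. Node names and prompts are hypothetical examples.

DIALOGUE = {
    "start":   {"prompt": "Hej! Ska vi borsta tänderna?",      # spoken via TTS
                "answers": {"ja": "brush", "nej": "goodbye"}},
    "brush":   {"prompt": "Bra! Börja med de övre tänderna.",
                "answers": {}},
    "goodbye": {"prompt": "Okej, vi ses senare!",
                "answers": {}},
}

def next_node(current, recognized_speech):
    """Advance the dialogue based on the speech-recognition result;
    stay on the current node if the answer is not understood."""
    node = DIALOGUE[current]
    return node["answers"].get(recognized_speech.strip().lower(), current)
```

In the real system, the prompt would be sent to the speech-synthesis service and the user's reply would come back from speech recognition before each transition.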



 
Contact and Legal Notice · Contact Address:
Privacy Statement · Conference: STE2025
Conference Software: ConfTool Pro 2.8.105+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany