Conference Agenda
Overview and details of this conference's sessions. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Please note that all times are shown in the time zone of the conference. The current conference time is: 18th Apr 2026, 06:18:06pm EEST
External resources will be made available 5 min before a session starts. You may have to reload the page to access the resources.
Agenda Overview
STE-R PS5: Remote Session 5
External Resource: https://uni-wuppertal.zoom-x.de/j/62396405977?pwd=Z9BzEP8aS3GfENhJAzLakI0VsUZkpT.1
Presentations
4:30pm - 4:48pm
Review On Using AI In Drone Swarms: Problems And Perspective
DonNTU, Ukraine
The rapid evolution of Artificial Intelligence (AI) has transformed the operation of drone swarms from human-controlled formations into autonomous, decentralized systems capable of collective intelligence. Inspired by natural phenomena such as bird flocks or fish schools, drone swarms act as multi-agent systems in which each unit follows simple local rules, producing complex global behavior without centralized control. This paradigm shift unlocks scalability, resilience, and efficiency previously unattainable by traditional single-drone systems. The goal of this research is to analyse how AI methodologies enable intelligent coordination, autonomous decision-making, and cooperative mission execution in drone swarms, demonstrating measurable advantages in efficiency, adaptability, and reliability across military, civil, and humanitarian domains, and to prepare for the development of an architectural solution for a drone-swarm control system.

4:48pm - 5:06pm
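The flock-inspired coordination described in the preceding abstract (simple local rules, no central controller) can be illustrated with a minimal boids-style sketch. The three rules, the sensing radius, and the weights below are generic illustrative assumptions, not the control architecture the paper proposes:

```python
# Minimal boids-style sketch: each drone applies three local rules
# (cohesion, separation, alignment) using only its neighbors' state.
# Weights and sensing radius are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Drone:
    x: float; y: float      # position
    vx: float; vy: float    # velocity

def step(swarm, radius=10.0, w_coh=0.01, w_sep=0.05, w_ali=0.05):
    """Advance every drone one tick using purely local information."""
    updated = []
    for d in swarm:
        # Neighbors within sensing radius: no global knowledge is used.
        nbrs = [o for o in swarm if o is not d
                and (o.x - d.x) ** 2 + (o.y - d.y) ** 2 < radius ** 2]
        vx, vy = d.vx, d.vy
        if nbrs:
            n = len(nbrs)
            # Cohesion: steer toward the local center of mass.
            cx = sum(o.x for o in nbrs) / n
            cy = sum(o.y for o in nbrs) / n
            vx += w_coh * (cx - d.x); vy += w_coh * (cy - d.y)
            # Separation: steer away from nearby neighbors.
            vx += w_sep * sum(d.x - o.x for o in nbrs)
            vy += w_sep * sum(d.y - o.y for o in nbrs)
            # Alignment: match the neighbors' average velocity.
            vx += w_ali * (sum(o.vx for o in nbrs) / n - d.vx)
            vy += w_ali * (sum(o.vy for o in nbrs) / n - d.vy)
        updated.append(Drone(d.x + vx, d.y + vy, vx, vy))
    return updated
```

Iterating `step` over many ticks yields the emergent global behavior the abstract refers to; a drone with no neighbors in range simply continues on its current velocity.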
Pre-Learning Unit Tests: Auto-Graded Assessment and Intrinsic Motivation
Arcada UAS, Finland
Auto-graded unit tests are commonplace in programming education, yet their use as pre-learning diagnostics remains underexplored. This study evaluates a short, optional, pre-course Python diagnostic built around unit tests and immediate pass/fail signals. Although the surface task (implementing add(a: int, b: str) -> str) appears simple, the suite demands robust handling of messy numeric inputs (locale formats, fractions, scientific notation, bases, complex numbers, currency symbols, verbal numerals). The investigation addressed three questions: (i) how instant pass/fail affects motivation, confidence, and self-esteem; (ii) how students judge usefulness, fairness, and transparency; and (iii) which design changes they prefer. Engagement logs from students (N=18) and post-course interviews (N=15) were analysed. One third completed the full suite, while many non-completers still reached advanced cases, indicating exploration beyond superficial tinkering. Interviews showed that the test felt authentic and motivating when it resembled real development work; however, repeated opaque failures reduced confidence. Participants reported that usefulness and fairness would be enhanced by an immediate linkage of test outcomes to concrete study steps and by the provision of a minimal, autonomy-supportive hint after repeated failure. Changes in intrinsic motivation followed a consistent pattern: curiosity at first ‘fail’, a short confidence boost at first ‘pass’, and a dip during repeated failure that could be recovered with a brief informational cue. Two complementary typologies are proposed to interpret these responses: a three-profile scheme aligned with self-determination theory and a simple two-axis matrix defined by competence-signal sensitivity and transparency requirement.
Practical implications include stating optionality and scope up front, maintaining purely binary early feedback, introducing a single micro-hint after repeated failure, mapping outcomes to syllabus items, and acknowledging partial structure with lightweight code-aware indicators.

5:06pm - 5:24pm
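The diagnostic task named in the preceding abstract, add(a: int, b: str) -> str, might look roughly like the following much-simplified sketch. Only a few of the listed input formats are handled; the parsing rules and the test cases are illustrative assumptions, not the actual course suite:

```python
# Simplified sketch of a pre-learning diagnostic built on unit tests.
# The real suite covers locale formats, fractions, scientific notation,
# bases, complex numbers, currency symbols, and verbal numerals; this
# illustrative reference solution handles only a small subset.
import unittest
from fractions import Fraction

def add(a: int, b: str) -> str:
    """Add an int to a 'messy' numeric string; return the sum as a string."""
    s = b.strip().lstrip("$€£")        # assumed subset of currency symbols
    s = s.replace(" ", "")
    if "," in s and "." not in s:      # naive locale heuristic: "2,5" -> "2.5"
        s = s.replace(",", ".")
    value = Fraction(s)                # accepts "7", "2.5", "1/2", "1e2"
    result = value + a
    if result.denominator == 1:
        return str(result.numerator)
    return str(float(result))

class TestAdd(unittest.TestCase):
    # Binary pass/fail feedback: each case either passes or fails, no hints.
    def test_plain(self): self.assertEqual(add(2, "3"), "5")
    def test_currency(self): self.assertEqual(add(1, "$4"), "5")
    def test_fraction(self): self.assertEqual(add(1, "1/2"), "1.5")
    def test_locale(self): self.assertEqual(add(0, "2,5"), "2.5")
```

Running the file with `python -m unittest` produces exactly the kind of terse pass/fail signal the study examines.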
Modeling a Digital Twin of the Niryo Ned2 Cobot in Festo Ciros for Educational Robotics
Arcada University of Applied Science, Finland
Simulation-based learning plays a crucial role in robotics education by allowing students to practice complex tasks safely and cost-effectively. Digital twins—virtual replicas of physical robots—foster a deeper understanding of kinematics, control, and automation, yet many open-source simulators lack the reliability and precision necessary for consistent educational use. Commercial platforms like Festo CIROS offer advanced multi-body modeling and inverse-kinematics capabilities, but no digital twin exists for the Niryo Ned2 collaborative robot. This study addresses this gap by developing a fully functional digital twin of the Niryo Ned2 in CIROS, including peripheral devices such as a conveyor, object feeder, ultrasonic sensor, and sorting bins. A MELFA Basic program implements color-based pick-and-place operations using inverse-kinematics motion and by integrating digital inputs and outputs. The resulting simulation provides a realistic, interactive environment that mirrors physical robot behavior, enabling hands-on experience in programming, sensor integration, and automation workflows without physical hardware. The work demonstrates the potential of CIROS as a robust educational platform for digital twins and lays the foundation for future enhancements, including vision-based and AI-driven automation training.

5:24pm - 5:42pm
3D Virtual Learning Objects as Digital Twins in the Spatial Understanding of Human Neuroanatomy: A Strategy for Smart Education
Pontificia Universidad Católica del Ecuador Sede Ambato, Ecuador
Neuroanatomy education is challenging due to the spatial complexity of brain structures and the difficulty of three-dimensional visualization in traditional teaching methods. Immersive learning technologies, such as 3D Virtual Learning Objects (VLOs), have demonstrated potential for improving knowledge comprehension in advanced educational settings, aligning with the principles of Smart Education and Digital Twins. This research seeks to evaluate the impact of 3D VLOs as digital twins of neuroanatomical structures on improving the spatial understanding of neuroanatomy students, promoting immersive and active learning strategies. An interactive 3D VLO modeling key brain structures was implemented and integrated into a virtual learning platform. The ADDIE methodology was used for the development of the VLOs and the educational process, and an experimental design was applied. Spatial understanding and retention were measured pre- and post-intervention, complemented by usability and learning-perception surveys based on the TAM model. The hypothesis was tested using the Wilcoxon test, and the alternative hypothesis that VLOs improve the spatial understanding of neuroanatomy was accepted. The results showed that students who used the 3D VLOs significantly improved their spatial understanding and retention of neuroanatomical content compared to traditional methods; they also reported greater motivation, satisfaction, and interactivity. 3D VLOs as digital twins thus constitute an effective tool for teaching neuroanatomy and, within Education 4.0, for enhancing spatial understanding.
This strategy aligns with the objectives of Smart Education and the integration of advanced learning technologies, offering a replicable model for education in health sciences and other disciplines that require complex spatial visualization.

5:42pm - 6:00pm
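For readers unfamiliar with the Wilcoxon signed-rank test used in the study above, its core statistic can be sketched in a few lines: rank the absolute pre/post differences (averaging tied ranks, dropping zeros) and sum the ranks of positive and negative differences separately. The implementation below is a minimal illustration; the study's actual scores are not reproduced here:

```python
# Minimal Wilcoxon signed-rank statistic for a pre/post comparison.
# Returns the rank sums (W+, W-); the smaller one is compared against
# a critical value (or used to derive a p-value) in the full test.

def wilcoxon_w(pre, post):
    """Rank sums of positive and negative pre/post differences."""
    diffs = [b - a for a, b in zip(pre, post) if b - a != 0]  # drop zero diffs
    ranked = sorted(diffs, key=abs)
    # Assign average ranks to tied absolute differences.
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2          # mean of the positions i+1 .. j
        for k in range(i, j):
            ranks.setdefault(abs(ranked[k]), avg)
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return w_plus, w_minus
```

In practice one would call a library routine such as scipy.stats.wilcoxon, which also computes the p-value; the sketch only shows where the statistic comes from.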
Augmented Reality in Foot Palpation Training: Enhancing Accuracy and Clinical Skills through an AR Application
Sam Houston State University, United States of America
Palpation, a cornerstone of medical examination, relies on the practitioner's tactile acuity and anatomical expertise. This paper introduces an innovative augmented reality (AR) approach to enhance foot palpation training and practice. Traditionally, developing proficiency in palpation techniques requires extensive hands-on experience, which can be time-consuming and inconsistent. To address these challenges, we have developed an AR application for foot palpation training that provides real-time guidance by overlaying palpation zones on a scanned image of the patient's foot. Our study compared the accuracy of locating the medial cuneiform bone using traditional methods versus our AR app. A paired-samples t-test (n=30) revealed a statistically significant improvement in accuracy when using the AR app (M = 4.55 cm, SD = 0.53) compared to traditional methods (M = 9.28 cm, SD = 3.24, p < 0.001). The mean improvement of 4.73 cm (95% CI: 3.52 to 5.94) underscores the potential of AR technology to enhance anatomical education and improve clinical skills. The AR application integrates with a wearable device, such as the Head Mounted Tablet (HMT-1), enabling interactive learning experiences for medical students and residents. This technology not only facilitates skill acquisition but also offers potential applications in remote patient care, such as training visiting nurses to triage homebound patients at risk of diabetic foot complications. Our findings suggest that AR-assisted palpation training could significantly enhance the learning experience, potentially leading to improved diagnostic accuracy and procedural outcomes in clinical settings.
This approach represents a promising advancement in medical education, bridging the gap between traditional training methods and the evolving landscape of healthcare technology.
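The paired-samples t-test reported in the abstract above compares two measurements on the same subjects by testing whether the mean of the per-subject differences departs from zero. A minimal sketch of the statistic follows; the numbers used to exercise it are synthetic, not the study's measurements:

```python
# Minimal paired-samples t statistic: mean of per-subject differences
# divided by the standard error of that mean. Illustrative sketch only;
# a full test would look up the p-value with n-1 degrees of freedom.
import math

def paired_t(x, y):
    """t statistic for paired measurements x and y on the same subjects."""
    d = [b - a for a, b in zip(x, y)]          # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    se = math.sqrt(var / n)                    # standard error of the mean diff
    return mean / se
```

With real data one would use scipy.stats.ttest_rel, which returns both the statistic and the p-value; the sketch only makes the arithmetic behind the reported t-test explicit.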
