Conference Program

An overview of all sessions of this event.
Please select a location or date to show only the relevant sessions. Select a session to go to the detail view.

 
 
Session Overview
Session: MCI-SE03: Input
Time: Tuesday, 05.09.2023, 11:00 - 12:30

Session Chair: Tanja Döring
Location: Gebäude 4, Aula


Presentations

Predicting Mouse Positions Beyond a System’s Latency Can Increase Throughput and User Experience in Linear Steering Tasks

Jannik Wiese, Niels Henze

Universität Regensburg, Germany

Latency is present in all interactive systems and decreases user experience and performance. Previous work developed approaches that predict user actions and show these predictions to reduce latencies’ negative effects. While this can increase user experience and performance, it is unclear if predicting beyond a system’s latency results in further improvements. Therefore, we investigated the effects of predicting beyond a system’s latency. We collected data from 60 participants performing Steering Law tasks to systematically train an artificial neural network (ANN) that predicts 100ms into the future. We integrated the ANN into the Steering Law task and buffered users’ inputs to simulate latency between 50ms and -50ms. A study with 30 participants showed that decreasing latency beyond the system’s latency increases throughput up to -50ms. Subjective measures improved up to -16.67ms without negative effects on agency. Overall, we show that predicting beyond a system’s latency can increase performance and user experience.
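The underlying idea, rendering a predicted cursor position instead of the latent one, can be sketched independently of the authors' ANN. Below is a minimal Python illustration in which linear extrapolation stands in for the trained model; the horizon and history parameters are illustrative, not the study's values:

```python
import collections

class CursorPredictor:
    """Latency compensation by prediction: render the cursor where it is
    expected to be `horizon_ms` from now. Linear extrapolation is a
    hypothetical stand-in for the paper's trained ANN."""

    def __init__(self, horizon_ms=100.0, history=5):
        self.horizon_ms = horizon_ms
        self.samples = collections.deque(maxlen=history)  # (t_ms, x, y)

    def update(self, t_ms, x, y):
        self.samples.append((t_ms, x, y))

    def predict(self):
        if len(self.samples) < 2:                 # not enough data yet
            return self.samples[-1][1:] if self.samples else (0.0, 0.0)
        (t0, x0, y0), (t1, x1, y1) = self.samples[0], self.samples[-1]
        dt = max(t1 - t0, 1e-6)                   # guard against dt == 0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # average velocity
        return x1 + vx * self.horizon_ms, y1 + vy * self.horizon_ms
```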



HapticCollider: Ungrounded Force Feedback for Rigid Collisions during Virtual Tool Use

Juan F. Olaya-Figueroa1, Ferdinand Streicher2, Marco Kurzweg1, Jan Willms1, Katrin Wolf1

1Berliner Hochschule für Technik, Germany; 2Konstruktiv, Germany

Controllers are not merely the dominant interface for interacting in virtual reality (VR); they are also the main resource for haptically perceiving the virtual world. As standard VR controllers fail to generate realistic haptic feedback, we designed HapticCollider, a kinetic controller rendering force feedback, e.g., to simulate a collision when hammering against a virtual object. In our user study, we demonstrated that HapticCollider significantly increases realism in tool usage compared with a standard VR controller. As key factors for tool-use realism in VR, we identified force feedback, controller weight, and grip shape in combination with software solutions, namely collision prediction and control-display ratio, to render the force timing as well as the tool position according to the user's expectations.
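One of the named software ingredients, collision prediction for force timing, reduces to firing the actuator slightly before visual contact so that its own latency is masked. A minimal sketch, with a hypothetical latency constant rather than HapticCollider's actual parameters:

```python
def should_fire(distance_m, speed_mps, actuator_latency_s=0.03):
    """Trigger the kinetic actuator once the predicted time to impact
    drops below the actuator's latency, so the force impulse coincides
    with the visual collision. The 30 ms latency is an assumed value."""
    if speed_mps <= 0.0:
        return False  # tool is stationary or moving away; no impact predicted
    return distance_m / speed_mps <= actuator_latency_s
```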



How Unique do we Move? Understanding the Human Body and Context Factors for User Identification

Yasmeen Abdrabou1,2, Lukas Mecke2, Radiah Rivu2, Sarah Prange2, Quy Dat Nguyen2, Vanessa Voigt3, Florian Alt2, Ken Pfeuffer4

1Lancaster University; 2University of the Bundeswehr Munich; 3LMU Munich; 4Aarhus University

Past work has shown great promise in biometric user identification and authentication by exploiting specific features of specific body parts. We investigate human motion across the whole body to explore which parts of the body exhibit more unique movement patterns and are more suitable for identifying users in general. We collect and analyze full-body motion data across various activities (e.g., sitting, standing), handheld objects (uni- or bimanual), and tasks (e.g., watching TV or walking). Our analysis shows, e.g., that gait, already a strong feature, is amplified when carrying items, that game activity elicits more unique behaviors than texting on a smartphone, and that motion features are robust across body parts whereas posture features are more robust across tasks. Our work provides a holistic reference on how context affects the human motion that identifies us across a variety of factors, useful for informing researchers and practitioners of behavioral biometric systems at large.
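As a rough illustration of how windowed full-body motion data can feed a user classifier (a deliberately simple, hypothetical feature set and synthetic data; the study's analysis is far richer):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(window):
    """Per-window statistics over motion channels (frames x channels):
    means/stds capture posture, velocity statistics capture motion."""
    vel = np.diff(window, axis=0)
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(vel).mean(axis=0), vel.std(axis=0)])

# Synthetic data for shape only: 200 windows of 90 frames x 6 channels.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 90, 6))
X = np.stack([motion_features(w) for w in windows])
y = rng.integers(0, 10, size=200)  # labels for 10 hypothetical users
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```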



Pull Outperforms Push as Vibrotactile Wristband Feedback for Mid-Air Gesture Guidance

Jan Willms, Maximilian Letter, Emile Marchandise, Katrin Wolf

Berlin University of Applied Sciences and Technology

The use of mid-air gestures to control interactive systems is becoming increasingly important, particularly in mixed reality scenarios. However, these gestures are not always intuitive and can be challenging to learn as they lack visual guidance. Therefore, it is crucial to explore strategies to improve the learnability of these gestures. In this work, we investigate how a vibration stimulus can be applied at the forearm to guide a person in performing a gesture. Utilizing a prototypical wristband with 24 vibrotactile actuators, the metaphors pull and push, representing attractive and repulsive feedback, were compared against each other. Results of a controlled user study show that participants perform significantly better with the pull metaphor, completing gestures faster and making fewer errors. In line with this, the majority stated a subjective preference for pull after experiencing both metaphors.
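The difference between the two metaphors amounts to which side of the wristband vibrates relative to the intended correction direction. A minimal sketch with a simplified ring geometry; the mapping function is hypothetical, not the prototype's firmware:

```python
import math

N_ACTUATORS = 24  # ring of tactors around the forearm, as in the prototype

def tactor_for(direction_rad, metaphor="pull"):
    """'pull' vibrates on the side toward the target (attractive feedback);
    'push' vibrates on the opposite side (repulsive feedback)."""
    if metaphor == "push":
        direction_rad += math.pi  # flip to the far side of the wrist
    step = 2.0 * math.pi / N_ACTUATORS
    return int(round(direction_rad / step)) % N_ACTUATORS
```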



The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings

Luke Haliburton1, Natalia Bartłomiejczyk5, Albrecht Schmidt1, Paweł W. Woźniak2, Jasmin Niess3,4

1LMU Munich; 2Chalmers University of Technology, Sweden; 3University of St. Gallen, Switzerland; 4University of Oslo, Norway; 5Institute of Applied Computer Science, Lodz University of Technology, Poland

While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick—a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants either used clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems which support conducting work tasks in mobile environments.
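The highlighting button can be understood as timestamp marks logged alongside the recording and later matched against transcript segments; the class and segment format below are hypothetical stand-ins, not the Walking Talking Stick implementation:

```python
import time

class HighlightRecorder:
    """Log button presses as offsets into the recording, then flag any
    transcript segment whose time span contains a mark."""

    def __init__(self):
        self.t0 = time.monotonic()   # recording start
        self.marks = []              # button-press offsets in seconds

    def on_button_press(self):
        self.marks.append(time.monotonic() - self.t0)

    def highlighted(self, segments):
        # segments: iterable of (start_s, end_s, text) from a transcriber
        return [(s, e, text, any(s <= m <= e for m in self.marks))
                for (s, e, text) in segments]
```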



 