TipTopTyping: A Thumb-to-Finger Text Input Method and Character Layout Optimized for Mobile Spatial Computing
Roman Beier1, Florian Wolling2, Eva Hornecker1, Florian Michahelles2
1Bauhaus University Weimar; 2Vienna University of Technology
While text input remains the primary interaction method with most computer applications, it faces diverse challenges as spatial computing devices become increasingly wearable and mobile. We present TipTopTyping, a novel computer vision-based thumb-to-finger text input method for virtual and augmented reality that uses pinch gestures between the thumb and fingertips for easier and more intuitive text entry. With OPTI, we further propose a new character layout specifically designed and optimized for this input modality. The system's performance is evaluated in a user study (N = 20) covering two mobile scenarios: standing and walking. After only 12 sentences of practice, participants achieved mean text entry rates of 6.15 and 5.69 words per minute and mean accuracies of 1.27 and 1.43 keystrokes per character while standing and walking, respectively. Furthermore, OPTI not only shows 4.36% better typing accuracy but also the potential to outperform QWERTY layouts in writing speed with a little more practice.
Comparing the Effectiveness and Ergonomics of Smartphone-Based Gamepads
Christoph Würth, Andreas Schmid, Sabrina Hößl, Raphael Wimmer
University of Regensburg
Even though smartphones offer a broad design space as input devices for video games, their form factor makes them less ergonomic than physical gamepads. Previous research suggests that customizable controller layouts and added haptic feedback can improve the quality of smartphone-based gamepads. However, rigorous user studies comparing different types of smartphone controllers to each other are lacking. In this paper, we present the results of a user study in which we compared three different smartphone-based gamepads: a smartphone controller with a standard layout, a customizable smartphone controller, and a smartphone controller with a haptic case. Additionally, we included a physical gamepad as a reference in our study. Participants used the different controllers to play a racing game and complete pointing tasks. We found that the physical gamepad outperforms smartphone-based controllers in terms of efficiency, but there was no significant difference in effectiveness. Furthermore, our qualitative findings open up design considerations for future improvements of smartphone-based game controllers.
I've Got the Data in My Pocket! -- Exploring Interaction Techniques with Everyday Objects for Cross-Device Data Transfer
Martina Emmert, Nicole Schönwerth, Andreas Schmid, Christian Wolff, Raphael Wimmer
University of Regensburg
People interact with a multitude of personal digital devices every day. However, transferring data between devices is still surprisingly cumbersome due to technical barriers, such as authentication or device pairing. Due to their clear affordances, physical objects offer a promising design space as mediators for natural interaction techniques. In a workshop and an elicitation study (n=30), we investigated different interaction techniques for cross-device data transfer using everyday objects. Our results suggest that, depending on the use case, extending always-available physical objects might be more beneficial than developing new artifacts. Designing effective interaction techniques requires consideration of an artifact's physical characteristics, affordances, and situational surroundings. Participants preferred multi-functional objects that are always at hand, such as their smartphones. However, they opted for more impersonal objects in unfamiliar situations. Interaction techniques associated with objects also influenced users' actions. We provide an overview of factors influencing intuitive interactions and derive guidelines for the user-centered development of interaction techniques with physical objects as mediators for data transfer.
The Impact of Smart-glass-based Video Tutorials on Knowledge Transfer in Practice
Esra Gümüs, Jan Christoph Gutzmann, Sebastian Thomas Büttner, Michael Prilla
Universität Duisburg-Essen
Video tutorials are an effective method of knowledge transfer and learning. However, they are often time-consuming to create and difficult to access during work. This paper introduces an approach that simplifies the creation of video tutorials in the workplace. By utilizing smart glasses, practitioners can record video tutorials during their daily work processes and use them for knowledge transfer without much additional effort. This offers the advantage of directly and easily sharing expertise in the workplace without being constrained by time or location. The paper presents a study that compares the effectiveness of knowledge transfer using these video tutorials against traditional personal training methods in the workplace. With 18 participants from the nursing and production sectors, we observed the learning outcomes of using video tutorials on smart glasses over multiple sessions, comparing them with personal training, which is considered the standard for practical onboarding. The study results indicate that learning with video tutorials does not differ significantly from traditional personal training in terms of learning outcomes. Overall, this study highlights the potential of video tutorials with smart glasses for knowledge transfer in workplaces, while also identifying challenges and opportunities for optimizing onboarding processes for employees.
Usability and Adoption of Graphical Tools for Data-Driven Development
Thomas Weber, Sven Mayer
LMU Munich, Germany
Software development of modern, data-driven applications still relies on tools whose interaction paradigms have remained mostly unchanged for decades. While richer forms of interaction exist as an alternative to textual command input, they find little adoption in professional software creation. In this work, we compare graphical programming using direct manipulation to the traditional, textual way of creating data-driven applications to determine the benefits and drawbacks of each. In a between-subjects user study (N=18), we compared developing a machine learning architecture with a graphical editor to traditional code-based development. While qualitative and quantitative measures show general benefits of graphical direct manipulation, the users' subjective perception does not always match them. Participants were aware of the possible benefits of such tools but were still biased in their perception. Our findings highlight that alternative software creation tools cannot rely on good usability alone but must address the demands of their specific target group, e.g., user control and flexibility, to achieve long-term benefits and adoption.
[Invited Talk] LoopSense: Low-Scale, Unobtrusive, and Minimally Invasive Knitted Force Sensors for Multi-Modal Input, Enabled by Selective Loop-Meshing
Roland Aigner1, Mira Alida Haberfellner1, Michael Haller2
1Media Interaction Lab, University of Applied Sciences Upper Austria, Austria; 2Faculty of Engineering, Free University of Bozen-Bolzano, Italy
Integrating sensors into knitted input devices traditionally comes with considerable constraints for textile and UI design freedom. In this work, we demonstrate a novel, minimally invasive method for fabricating knitted sensors that overcomes this limitation. We integrate copper wire with piezoresistive enamel directly into the fabric using weft knitting to establish strain and pressure sensing cells that consist of only single pairs of intermeshed loops. The result is unobtrusive and potentially invisible, which provides tremendous latitude for visual and haptic design. Furthermore, we present several variations of stitch compositions, resulting in loop meshes with distinct responses depending on the direction of the exerted force. Utilizing this property, we are able to infer actuation modalities and considerably expand the device's input space. In particular, we discern strain directions and surface pressure. Moreover, we provide an in-depth description of our fabrication method and demonstrate our solution's versatility in three exemplary use cases.