Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: MCI-SE02: Tools and Technology
Time: Monday, 05/Sept/2022, 2:00pm - 3:30pm

Session Chair: Thomas Ludwig
Location: Darmstadtium / Ferrum


Presentations
2:00pm - 2:15pm

Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality

Martin Schmitz¹, Sebastian Günther¹, Dominik Schön¹, Florian Müller²

¹Technical University of Darmstadt; ²LMU Munich

From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales of up to 75 items. As part of our findings, we conclude that pinching between the thumb and index finger is a promising modality also for one-dimensional input on larger scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.



2:15pm - 2:30pm

FADER: An Authoring Tool for Creating Augmented Reality-Based Avatars from an End-User Perspective

Kevin Krings, Philip Weber, Florian Jasche, Thomas Ludwig

University of Siegen, Cyber-Physical Systems, Germany

Although augmented reality (AR) is becoming more common in our society, there are few specialized end-user tools for creating appropriate AR content. Most tools focus on creating entire 3D applications or require extensive knowledge of programming and 3D modeling. With reference to End-User Development (EUD), we present a design case study of an end-user-friendly authoring tool that allows domain experts to create individual AR avatars in the field of Human-Food Interaction. After reviewing current approaches and design guidelines, we designed and implemented FADER, a web-based tool for creating AR-based food avatars. Our evaluation shows that playful design fosters immersion, and that abstract placeholders and highly simplified controls empower non-developers to create AR content. Our study contributes to a better understanding of end-user needs and practices during the AR creation process and informs the design of future AR authoring tools.



2:30pm - 2:45pm

Tooling for Developing Data-Driven Applications: Overview and Outlook

Thomas Weber, Heinrich Hußmann

LMU Munich, Germany

Machine Learning systems are, by now, an essential part of the software landscape. From the development perspective, this means a paradigmatic shift, which should be reflected in the way we write software. For now, though, the majority of developers relies on traditional tools for data-driven development. To determine how research into tools is catching up, we conducted a systematic literature review, searching for tools dedicated to data-driven development. Of the 1511 search results, we analyzed 76 relevant publications in detail. The diverse sample indicated strong interest in this topic from different domains, with different approaches and methods. While these tools share a number of common trends, e.g., the use of visualization, only a limited, although increasing, number of them has so far been evaluated comprehensively. We therefore summarize trends, strengths, and weaknesses in the status quo of data-driven development tools and conclude with a number of potential future directions for this field.



2:45pm - 3:00pm

Cobity: A Plug-And-Play Toolbox to Deliver Haptics in Virtual Reality

Steeven Villa Salazar, Sven Mayer

LMU Munich, Germany

Haptics increase presence in virtual reality applications. However, providing room-scale haptics is an open challenge. Cobots (robotic systems that are safe for human use) are a promising approach, but they require in-depth engineering skills: control happens at a low level of abstraction and involves complex procedures and implementations. In contrast, 3D tools such as Unity make it possible to quickly prototype a wide range of environments for which cobots could deliver haptic feedback. To overcome this disconnect, we present Cobity, an open-source plug-and-play solution that controls the cobot from the virtual environment, enabling fast prototyping of a wide range of haptic experiences. We present a Unity plugin that allows controlling the cobot via the end-effector's target pose (Cartesian position and angles); the values are then converted into velocities and streamed to the cobot's inverse kinematics solver using a specially designed C++ library. Our results show that Cobity enables rapid prototyping with high precision for haptics. We argue that Cobity simplifies the creation of a wide range of haptic feedback applications, enabling designers and researchers in human-computer interaction without robotics experience to quickly prototype virtual reality experiences with haptic sensations. We highlight this potential by presenting four different showcases.
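
The abstract outlines a control loop in which a target pose for the end-effector is turned into velocity commands streamed to the cobot's inverse kinematics solver. As a rough illustration only (not Cobity's actual API), the following minimal Python sketch shows such a loop, assuming a simple proportional controller and a hypothetical UDP endpoint and message format:

    # Illustrative sketch only, NOT Cobity's actual API: a proportional
    # controller that turns an end-effector target pose into a Cartesian
    # velocity command and streams it to a robot-side inverse kinematics
    # solver. Endpoint, rate, gains, and message format are assumptions.
    import json
    import socket
    import time

    COBOT_ADDR = ("192.168.0.10", 30003)  # hypothetical cobot endpoint
    GAIN_POS, GAIN_ROT = 2.0, 1.0         # assumed proportional gains
    RATE_HZ = 125                         # assumed control rate

    def velocity_command(current, target):
        """Map the pose error (x, y, z, rx, ry, rz) to a velocity twist."""
        gains = (GAIN_POS,) * 3 + (GAIN_ROT,) * 3
        return [g * (t - c) for g, c, t in zip(gains, current, target)]

    def stream_towards(target, get_current_pose, duration=2.0):
        """Stream velocity commands toward `target` for `duration` seconds."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        deadline = time.time() + duration
        while time.time() < deadline:
            twist = velocity_command(get_current_pose(), target)
            sock.sendto(json.dumps({"twist": twist}).encode(), COBOT_ADDR)
            time.sleep(1.0 / RATE_HZ)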



3:00pm - 3:15pm

The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments

Alexander Schäfer¹, Gerd Reis², Didier Stricker¹,²

¹TU Kaiserslautern, Germany; ²German Research Center for Artificial Intelligence, Germany

Natural user interfaces are on the rise. Manufacturers of Augmented, Virtual, and Mixed Reality head-mounted displays are increasingly integrating new sensors into their consumer-grade products, allowing gesture recognition without additional hardware. This offers new possibilities for bare-handed interaction within virtual environments. This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world. The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures. In a user study, the proposed approach is compared with the pinch gesture and the controller for grasping virtual objects. The grasping techniques are compared in terms of accuracy, task completion time, usability, and naturalness. The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
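
For readers unfamiliar with template matching for hand gestures, the following minimal Python sketch illustrates the general idea of comparing normalized joint positions against stored templates; the joint layout, normalization, and threshold are assumptions, not the authors' implementation:

    # Illustrative sketch of template matching for grab-gesture recognition,
    # following the general technique named in the abstract. Joint layout,
    # normalization, and threshold are assumptions, not the authors' code.
    import numpy as np

    def normalize(joints):
        """Center hand joints on the wrist (row 0) and scale to unit size."""
        centered = np.asarray(joints, dtype=float)
        centered = centered - centered[0]
        scale = np.linalg.norm(centered, axis=1).max()
        return centered / (scale or 1.0)

    def match_gesture(joints, templates, threshold=0.15):
        """Return the name of the closest stored template, or None."""
        probe = normalize(joints)
        best_name, best_dist = None, threshold
        for name, template in templates.items():
            dist = np.linalg.norm(probe - normalize(template)) / len(probe)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name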



3:15pm - 3:30pm

Auto-Generating Multimedia Language Learning Material for Children with Off-the-Shelf AI

Fiona Draxler¹, Laura Haller¹, Albrecht Schmidt¹, Lewis L. Chuang²

¹LMU Munich, Germany; ²TU Chemnitz, Germany

The unique affordances of mobile devices enable the design of novel language learning experiences with auto-generated learning materials. Thus, they can support independent learning without increasing the burden on teachers. In this paper, we investigate the potential and the design requirements of such learning experiences for children. We implement a novel mobile app that auto-generates context-based multimedia material for learning English. It automatically labels photos children take with the app and uses them as a trigger for generating content with machine translation, image retrieval, and text-to-speech. An exploratory study with 25 children showed that they were ready to engage to an equal extent with this app and with a non-personal version that used random instead of personal photos. Overall, the children appreciated the independence gained in comparison to learning at school but missed the teachers' support. From a technological perspective, we found that auto-generation works in many cases. However, handling erroneous input, such as blurry images and spelling mistakes, is crucial for children as a target group. We conclude with design recommendations for future projects, including scaffolds for the photo-taking process and information redundancy for identifying inaccurate auto-generation results.
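
The generation pipeline the abstract describes (label a photo, translate the label, synthesize speech) can be pictured as a small chain of off-the-shelf services. The Python sketch below is a hedged illustration: the first two wrappers are placeholders, not the APIs the authors used, while gTTS is shown as one real off-the-shelf text-to-speech option:

    # Illustrative sketch of the auto-generation pipeline from the abstract:
    # label a photo, translate the label, synthesize speech. The first two
    # wrappers are placeholders for off-the-shelf services (assumptions);
    # gTTS is one real off-the-shelf option for text-to-speech.
    from dataclasses import dataclass

    @dataclass
    class LearningItem:
        label_en: str       # English word to learn
        label_native: str   # translation into the child's native language
        photo_path: str     # the photo the child took
        audio_path: str     # synthesized pronunciation

    def label_image(photo_path: str) -> str:
        """Placeholder: plug in an off-the-shelf image-labeling API here."""
        raise NotImplementedError

    def translate(text: str, target_lang: str) -> str:
        """Placeholder: plug in an off-the-shelf machine-translation API."""
        raise NotImplementedError

    def synthesize(text: str, out_path: str) -> str:
        """Text-to-speech via gTTS (pip install gTTS)."""
        from gtts import gTTS
        gTTS(text, lang="en").save(out_path)
        return out_path

    def build_item(photo_path: str, native_lang: str) -> LearningItem:
        label = label_image(photo_path)
        return LearningItem(
            label_en=label,
            label_native=translate(label, native_lang),
            photo_path=photo_path,
            audio_path=synthesize(label, f"{label}.mp3"),
        )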



 