Session
MCI-Paper03: Understanding Experiential Qualities and User Needs
Presentations
User Perceptions and Experiences with Smart Homes - The Smart Home as an Obedient Guard Dog, Disinterested Cat, Ambitious Octopus or Busy Beehive
1Bauhaus-Universität Weimar; 2Robert Bosch GmbH - Bosch Research
We investigated people's experience of living in shared smart homes, involving both smart home initiators and maintainers (primary users) and other inhabitants (secondary users). Through a cultural probe study with 35 participants from 16 shared homes and follow-up interviews with a subset, we gained insights into people's understanding of smart home technology, their ideas for the future, their experiences with the technology, and how they relate to their smart home. Our findings highlight how the role taken (primary or secondary user) influences how smart homes are experienced and understood in everyday life, and how 'smartness' is defined. The study further investigates how people describe their smart home 'as a living being', yielding a wide range of animal metaphors that reveal the character traits people associate with smart home technology.

What Did I Say Again? Relating User Needs to Search Outcomes in Conversational Commerce
1GESIS - Leibniz Institute for the Social Sciences; 2University of Twente
Recent advances in natural language processing and deep learning have accelerated the development of digital assistants. In conversational commerce, these assistants help customers find suitable products in online shops through natural language conversations. During the dialogue, the assistant identifies the customer's needs and preferences and then suggests potentially relevant products. Traditional online shops often allow users to filter search results based on their preferences using facets, and the selected facets also serve as a reminder of how the product base was filtered. In conversational commerce, however, the absence of facets and the use of advanced natural language processing techniques can leave customers uncertain about how the system processed their input. This can undermine transparency and trust, which are critical factors influencing customers' purchase intentions. To address this issue, we propose a novel text-based digital assistant that, in the product assessment step, explains how specific product aspects relate to the user's previous utterances, enhancing transparency and facilitating informed decision-making. We conducted a user study (N=135) and found a significant increase in user-perceived transparency when natural language explanations and highlighted text passages were provided, demonstrating their potential to extend system transparency to the product assessment step in conversational commerce.

What Makes for a Good UX Professional?: Development of a UX Professional Competence Model
University of Siegen, Germany
More than ever, companies need to find user experience (UX) employees who are competent in UX and have the necessary soft skills. Although training plans are available, the competencies required in practice remain largely unclear. An overview of the competencies relevant to UX professionals would help with the composition of digital product teams. We therefore conducted 25 exploratory interviews with UX managers to learn what they consider the relevant competence characteristics of UX professionals. The thematic analysis of the interviews reveals the motives (e.g., interest in what they do), traits (e.g., honesty), knowledge (e.g., knowing UX theories and methods), and many skills (e.g., communicating with and mediating among others), but also the expected self-image (e.g., eager for knowledge) of competent UX professionals. Based on these insights, we develop a competence model that enables managers to identify suitable candidates for a job and to develop the competencies of UX professionals.

[Invited Talk] Analyzing Security and Privacy Advice During the 2022 Russian Invasion of Ukraine on Twitter
1CISPA Helmholtz Center for Information Security, Germany; 2Paderborn University, Germany; 3Leibniz University Hannover, Germany; 4GESIS - Leibniz Institute for Social Sciences, Germany
The Russian Invasion of Ukraine in 2022 resulted in a rapidly changing global cyber threat environment and incentivized the sharing of security and privacy advice on social media. Previous research found a strong impact of online security advice on end-user behavior, and Twitter is an important platform for sharing information in crises. We examined 306 tweets with security and privacy advice related to the war in Ukraine and created a taxonomy of 224 unique pieces of advice in seven categories, targeted at individuals or organizations in Ukraine and elsewhere. While our findings include untargeted and generic advice known from previous research, we also identify novel advice specific to the invasion, offers for individual consultation, and misinformation about security and privacy advice as a new threat. Our findings highlight the strengths and shortcomings of the security and privacy advice given online during the invasion and establish areas for improvement and future research.

[Remote] You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones
fortiss GmbH
Explaining the mechanisms behind model predictions is a common strategy in AI-assisted decision-making to help users rely appropriately on AI. However, recent research shows that the effectiveness of explanations depends on numerous factors, leading to mixed results: many studies find no effect or even an increase in overreliance, while others find that explanations improve appropriate reliance. We consider the factor of decision difficulty to better understand when feature-based explanations can mitigate overreliance. To this end, we conducted an online experiment (N=200) with carefully selected task instances that cover a wide range of difficulties. We found that explanations reduce overreliance for easy decisions, but that this effect vanishes with increasing decision difficulty. For the most difficult decisions, explanations might even increase overreliance. Our results imply that explanations of a model's inner workings are helpful only for a limited set of decision tasks in which users can easily determine the answer themselves.