Session
WE 11: Human-AI Interface
Presentations
How Prediction Intervals Improve Human-Algorithm Collaboration
University of Cologne, Germany

For managerial decision tasks, humans and algorithms often work together with the intention of combining their skills and thereby achieving complementary performance, that is, a higher performance than either party could achieve on its own. Data from practice and research suggest that algorithmic advice often improves human decisions, but rarely beyond the algorithm's own performance. Missing collaboration mechanisms are seen as the main reason for this unexploited complementary performance potential. One such mechanism is to communicate algorithmic certainty. In this paper, we analyze how human decision making in algorithmically supported tasks is affected by the provision of prediction intervals. In a laboratory experiment, participants worked on a forecasting task in which they and the algorithmic advisor had complementary skills and information. We show that prediction intervals are an effective collaboration mechanism that leads to more appropriate reliance on advice: decision makers rely more on accurate advice that comes with high certainty and less on inaccurate advice that comes with low certainty, resulting in higher complementary performance. Our results contribute to a better understanding of how humans and algorithms can achieve complementary performance. We suggest that managers consider providing prediction intervals for algorithmically supported forecasting tasks, since they lead decision makers to use algorithmic advice efficiently and improve complementary performance.

Automation and Augmentation: Roles of AI in Collaborated Decision Making
Universität zu Köln, Germany

Artificial intelligence (AI) will have a growing influence on the future of work. Human decision makers may see significant changes in their day-to-day work as collaboration between humans and AI becomes commonplace. We explore the application of AI for automation (i.e., AI performing tasks independently) and for augmentation (i.e., AI advising humans) in collaborative environments. Using an analytical model, we show that whether AI should be used for automation or for augmentation depends on different types of human-AI complementarity: the share of automation increases with higher levels of between-task complementarity, which can arise from task-level performance differences between humans and AI. In contrast, the share of augmentation increases with higher levels of within-task complementarity, which arises from task-based interaction between humans and AI. We include both AI roles in a task allocation framework in which an AI and humans work on a set of classification tasks to optimize performance with a given level of available human resources. We validate our framework with an empirical study based on experimental data in which humans had to classify images with and without AI support. When both between-task and within-task complementarity exist, an interesting division-of-work pattern emerges in the optimal configuration: the AI automates relatively easy tasks, augments humans on tasks with similar human and AI performance, and humans work without AI on relatively difficult tasks. Our work provides several contributions to theory and practice, and our task allocation framework showcases potential job designs in the future of work.
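To make the reliance mechanism in the first presentation concrete, the following is a minimal sketch, not the authors' implementation: it assumes a decision maker who weights algorithmic advice by the certainty a prediction interval conveys. The function names, the linear weighting rule, and the interval scale are illustrative assumptions.

```python
def weight_on_advice(interval_width, max_width):
    """Map interval width to a reliance weight in [0, 1]:
    a narrow interval (high certainty) yields a weight near 1,
    a wide interval (low certainty) yields a weight near 0.
    Linear mapping is an assumption, not the paper's model."""
    return max(0.0, 1.0 - interval_width / max_width)

def combined_forecast(own_estimate, advice, interval_width, max_width=100.0):
    """Weighted average of the human's own estimate and the advice,
    with the weight on advice driven by the prediction interval."""
    w = weight_on_advice(interval_width, max_width)
    return w * advice + (1.0 - w) * own_estimate

# Advice with a narrow interval is followed closely ...
print(combined_forecast(own_estimate=50, advice=80, interval_width=10))  # 77.0
# ... while advice with a wide interval is largely discounted.
print(combined_forecast(own_estimate=50, advice=80, interval_width=90))  # 53.0
```

Under this stylized rule, reliance shifts toward high-certainty advice and away from low-certainty advice, which is the behavioral pattern the abstract reports.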
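The second presentation's allocation pattern can likewise be sketched. This toy version assigns each task to automation, augmentation, or human-only work by comparing per-task accuracies; the accuracy values and the greedy rule are hypothetical, and the sketch ignores the human-resource constraint that the paper's framework optimizes under.

```python
def assign_task(human_acc, ai_acc, augmented_acc):
    """Pick the work mode with the highest expected accuracy for one task.
    The real framework also accounts for limited human capacity, since
    automation consumes no human time while the other modes do."""
    modes = [("automate", ai_acc), ("augment", augmented_acc), ("human", human_acc)]
    return max(modes, key=lambda m: m[1])[0]

# Stylized tasks: (human accuracy, AI accuracy, augmented accuracy).
tasks = {
    "easy image":      (0.90, 0.99, 0.95),  # AI clearly stronger -> automate
    "ambiguous image": (0.80, 0.80, 0.90),  # similar skill, within-task gains -> augment
    "hard image":      (0.85, 0.60, 0.75),  # AI advice would hurt -> human alone
}
for name, (h, a, c) in tasks.items():
    print(name, "->", assign_task(h, a, c))
```

Run on these stylized numbers, the rule reproduces the pattern the abstract describes: AI automates the easy task, augments the human on the task with similar performance, and leaves the difficult task to the human alone.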