Conference Agenda

Session: Process Science 1
Time: Tuesday, 17 Sept 2024, 11:00am - 12:00pm
Session Chair: Christian Bartelheimer
Location: 0.002


Presentations

Experts versus Novices: Analyzing Behavioral Variability in Complex Process Environments

N. Elbert, C. M. Flath

Julius-Maximilians-University Würzburg, Germany

This paper introduces a novel approach to studying user behavior through the User Behavior Mining Framework, analyzing a unique log of 948,251 user interaction traces from 78,547 individuals that captures low-level activities in a complex process environment. Employing methods such as directly-follows-graph analysis, trace clustering, and next-action prediction, the research uncovers the impact of varying experience levels on user interaction patterns and enhances predictive modeling for action forecasting in complex scenarios. The work not only addresses a significant gap in the field by leveraging an underutilized data source but also highlights the importance of rich, detailed datasets for a comprehensive understanding of user behavior and system interaction.
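As a rough illustration of the directly-follows-graph analysis named in the abstract, the following minimal Python sketch (not the authors' implementation; the example traces are invented) counts how often one low-level activity is immediately followed by another across a set of interaction traces:

```python
from collections import Counter

# Hypothetical low-level interaction traces (ordered activity lists per session);
# the actual dataset of 948,251 traces is of course not reproduced here.
traces = [
    ["open_form", "fill_field", "fill_field", "submit"],
    ["open_form", "fill_field", "submit"],
    ["open_form", "fill_field", "cancel"],
]

# A directly-follows graph counts how often activity a is immediately followed by b.
dfg = Counter()
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), count in sorted(dfg.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {count}")
```

Comparing such frequency-weighted graphs between user groups is one common way to surface behavioral variability, e.g. between experts and novices.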

Elbert-Experts versus Novices-299_a.pdf


How Can Generative AI Empower Domain Experts in Creating Process Models?

N. Klievtsova¹, J. Mangler¹, T. Kampik², J.-V. Benzin¹, S. Rinderle-Ma¹

¹Technical University of Munich, Germany; ²SAP Signavio, Berlin, Germany

Considering the human factor in information systems is key to future digitalization efforts, as stated in the Industry 5.0 research and innovation actions of the EU. Especially in the design phase of a process-oriented information system, the human factor includes empowering domain experts in process model creation, lowering the entry hurdle for process modeling and increasing modeling speed. In this work, we investigate how generative AI methods can support domain experts in creating process models from textual process descriptions through interaction with a chatbot.
We explore how much input information is required to create process models with immediate visual representation using markdown-inspired languages, and we extend existing evaluation methods for assessing generated models, focusing on their completeness and correctness. Overall, an evaluation method has to consider the complex relationships between model completeness, correctness, textual process description, textual representation, and prompt engineering to support the domain expert.
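The abstract does not specify which markdown-inspired notation is used; as a purely hypothetical sketch, the Python snippet below turns an ordered list of process steps (such as a chatbot might extract from a textual description) into Mermaid flowchart text, one example of a markdown-like language with immediate visual rendering:

```python
# Hypothetical sketch: render an ordered list of process steps as Mermaid flowchart
# text. The paper's actual notation and chatbot pipeline may differ.

def steps_to_mermaid(steps: list[str]) -> str:
    lines = ["flowchart TD"]
    # Connect each step to its successor as a simple sequential flow.
    for i, (src, dst) in enumerate(zip(steps, steps[1:])):
        lines.append(f"    s{i}[{src}] --> s{i + 1}[{dst}]")
    return "\n".join(lines)

# Example: a simple order-handling process extracted from a textual description.
print(steps_to_mermaid(["Receive order", "Check stock", "Ship goods", "Send invoice"]))
```

Richer process constructs (gateways, loops, roles) would require more structured input than a flat step list, which is part of what the paper's evaluation of completeness and correctness addresses.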

Klievtsova-How Can Generative AI Empower Domain Experts in Creating Process Models-274_a.pdf