Automating Documentation: A critical perspective on the role of artificial intelligence in clinical documentation
1Oxford Internet Institute, University of Oxford. OX1 3JS, UK; 2University of North Carolina, Chapel Hill, NC 27599, USA
The current conversation around automation and artificial intelligence technologies creates a vision of a future in which humans cannot possibly compete against intelligent machines, and in which everything that can be automated through deep learning, machine learning, and other AI technologies will be automated. In this article, we focus on general practitioners' documentation of patients' clinical encounters, and explore how these work practices lend themselves to automation by AI. While these work practices may appear ideally suited to automation, we reveal potential negative consequences of automating these tasks, and illustrate how AI may render important aspects of this work invisible and remove critical thinking. We conclude by highlighting the specific features of clinical documentation work that could leverage the benefits of human-AI symbiosis.
Toward Three-Stage Automation of Detecting and Classifying Human Values
1Kyushu University, Japan; 2University of Maryland, USA; 3The University of Texas at Austin, USA; 4National Sun Yat-sen University, Taiwan
Prior work on automated annotation of human values has sought to train text classification techniques to assign text spans labels that reflect specific human values such as freedom, justice, or safety. This confounds three tasks: (1) selecting the documents to be labeled, (2) selecting the text spans that express or reflect human values, and (3) assigning labels to those spans. This paper proposes a three-stage model in which separate systems can be optimally trained for each of the three stages. Experiments from the first stage, document selection, indicate that annotation diversity trumps annotation quality, suggesting that when multiple annotators are available, the traditional practice of adjudicating conflicting annotations of the same documents is not as cost-effective as an alternative in which each annotator labels different documents. Preliminary results for the second stage, selecting value sentences, indicate that high recall (94%) can be achieved on that task with levels of precision (above 80%) that seem suitable for use as part of a multi-stage annotation pipeline. The annotations created for these experiments are being made freely available, and the content that was annotated is available from commercial sources at modest cost.
Illegal Aliens or Undocumented Immigrants? Towards the Automated Identification of Bias by Word Choice and Labeling
1University of Konstanz, Germany; 2University of Wuppertal, Germany
Media bias, i.e., slanted news coverage, can strongly impact the public perception of topics reported in the news. While the analysis of media bias has recently gained attention in computer science, the automated methods and results tend to be simplistic when compared to approaches and results in the social sciences, where researchers have studied media bias for decades. We propose Newsalyze, a work-in-progress prototype that imitates a manual analysis concept for media bias established in the social sciences. Newsalyze aims to find instances of bias by word choice and labeling in a set of news articles reporting on the same event. Bias by word choice and labeling (WCL) occurs when journalists use different phrases to refer to the same semantic concept, e.g., actors or actions. For example, the terms "illegal aliens" and "undocumented immigrants" refer to the same group of people, yet can induce strongly divergent emotional responses in readers. We describe two critical tasks of the analysis workflow: finding and mapping such phrases, and estimating their effects on readers. For both tasks, we also present first results, which indicate the effectiveness of exploiting methods and models from the social sciences in an automated approach.