Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
PaperSession-23: Agents, Actants and AI
Friday, 12/Oct/2018:
11:00am - 12:30pm

Session Chair: Christian Djeffal
Location: Sheraton - Ballroom East


Hey Alexa, Who Are You?! The Cultural Biography of Artificial Agents

Bart Simon, Ceyda Yolgörmez

Concordia University, Canada

This paper considers the agency question in the practical engagement of consumer artificial agents like Alexa, Siri, Google Home and others. Drawing on literature in the cultural studies of robotics and artificial intelligence, STS, and interactionist sociology, we argue that the agency and attendant human-likeness of increasingly sophisticated artificial agents are less an existential question and more a matter of practical attribution by human interlocutors. Our guiding question, then, is: how do artificial agents’ interlocutors assess and attribute agency, and what are the conditions for differential attributions?


Nathaniel Poor1, Roei Davidson2

1Underwood Institute, United States of America; 2University of Haifa, Israel

Google, Facebook, and LinkedIn have recently integrated artificial intelligence-driven recommendation systems into their widely used communication services. These systems suggest replies that users can send when communicating with others. For example, Google provides “Smart Reply” in its Gmail mobile applications. Senders can click on a smart reply, modify it if they wish, or send it as is, as if they had typed it themselves, and receivers may be none the wiser. Such technologies recast the first part of Lasswell’s (1948) model of communication, the “who,” making interpersonal communication impersonal. A similar feature is embedded in Facebook’s Messenger application as well as in LinkedIn.

For this work, we use Critical Discourse Analysis to examine how the institutional creators and their surrounding intermediaries (PR professionals and technology journalists) discuss Smart Reply and similar technologies, looking not only for what people mention but also for what is absent. To aid in considering how these technologies are framed, we draw on work that considers the relationship between humans and computers, and on recent science fiction. While institutional and journalistic discourses focus on what is present, these approaches allow us to consider socially relevant absences related to the consequences such recommendation technologies might have for human autonomy, well-being and deliberation.

When you can trust nobody, trust the smart machine

Sun-ha Hong

MIT, United States of America

The diffusion of smart machines for tracking individual bodies and homes raises new questions about what counts as self-knowledge, how human sense experience should be interpreted, and how data is to be trusted (or not). Self-tracking practices intersect the contemporary faith in the objectivity of data with the turn towards what has been called ‘i-pistemology’: a revalorisation of personal and experience-based truth in opposition to top-down, expert authority. What does it mean to ‘know myself’, insofar as this knowing is performed through machines that operate beyond the limits of the human senses? What does it mean to turn to personalised and individuated forms of datafication amidst a wider crisis of consensus, expertise, and shared horizons of reality?

This analysis draws on a larger research project into datafication and knowledge, conducted between 2014 and 2017. It included analysis of news media coverage of self-tracking technologies; of self-tracking products and prototypes, including promotional discourse and the design of individual devices; and interviews and participant observation in the Quantified Self community. The presentation will explore how these technologies connect the faith in data-driven objectivity with a contrarian and individualistic form of ‘personalised’ knowledge, remixing wider themes of trust, expertise and verification.

Look Who’s Talking: Using Human Coding to Establish a Machine Learning Approach to Twitter Education Chats

K. Bret Staudt Willet1, Brooks D. Willet2

1Michigan State University, United States of America; 2Birch Wayfinders, LLC

Twitter has become a hub for many different types of educational conversations, denoted by hashtags and organized by a variety of affinities. Researchers have described these educational conversations on Twitter as sites for teacher professional development. Here, we studied #Edchat—one of the oldest and busiest Twitter educational hashtags—to examine the content of contributions for evidence of professional purposes. We collected tweets containing the text “#edchat” from October 1, 2017 to June 5, 2018, resulting in a dataset of 1,228,506 unique tweets from 196,263 different contributors. Through initial human-coded content analysis, we sorted a stratified random sample of 1,000 tweets into four inductive categories: tweets demonstrating evidence of different professional purposes related to (a) self, (b) others, (c) mutual engagement, and (d) everything else. We found 65% of the tweets in our #Edchat sample demonstrated purposes related to others, 25% demonstrated purposes related to self, and 4% of tweets demonstrated purposes related to mutual engagement. Our initial method was too time-intensive—it would be untenable to collect tweets from 339 known Twitter education hashtags and conduct human-coded content analysis of each. Therefore, we are developing a scalable machine-learning model—a multiclass logistic regression classifier using an input matrix of features such as tweet types, keywords, sentiment, word count, hashtags, hyperlinks, and tweet metadata. The anticipated product of this research—a successful, generalizable machine learning model—would help educators and researchers quickly evaluate Twitter educational hashtags to determine where they might want to engage.
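The classifier the abstract describes is still in development, so as a loose illustration of the general technique only (not the authors' implementation), the sketch below trains a minimal multiclass logistic regression (softmax) classifier in pure Python. The four class labels mirror the abstract's coding categories; the feature names, feature values, and training examples are entirely hypothetical.

```python
import math

# Hypothetical sketch of a multiclass logistic regression (softmax)
# classifier of the kind described in the abstract. All features and
# training data below are invented for illustration.

CLASSES = ["self", "others", "mutual", "everything else"]

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class SoftmaxClassifier:
    def __init__(self, n_features, n_classes, lr=0.1):
        self.w = [[0.0] * n_features for _ in range(n_classes)]
        self.b = [0.0] * n_classes
        self.lr = lr

    def _scores(self, x):
        return [sum(wi * xi for wi, xi in zip(row, x)) + b
                for row, b in zip(self.w, self.b)]

    def predict_proba(self, x):
        return softmax(self._scores(x))

    def predict(self, x):
        p = self.predict_proba(x)
        return p.index(max(p))

    def fit(self, X, y, epochs=200):
        """Stochastic gradient descent on the cross-entropy loss."""
        for _ in range(epochs):
            for x, label in zip(X, y):
                p = self.predict_proba(x)
                for k in range(len(self.b)):
                    err = p[k] - (1.0 if k == label else 0.0)
                    for j in range(len(x)):
                        self.w[k][j] -= self.lr * err * x[j]
                    self.b[k] -= self.lr * err

# Toy feature vectors: [word_count/100, has_hashtag, has_link, sentiment]
X = [
    [0.12, 1, 0, 0.8],   # sharing a resource with others
    [0.05, 0, 1, 0.2],   # self-promotional tweet
    [0.20, 1, 1, 0.9],   # back-and-forth reply thread
]
y = [1, 0, 2]  # labels: "others", "self", "mutual"

clf = SoftmaxClassifier(n_features=4, n_classes=4)
clf.fit(X, y)
print(CLASSES[clf.predict([0.12, 1, 0, 0.8])])
```

In a real pipeline the feature matrix would be far wider (keyword indicators, tweet-type flags, metadata), and an off-the-shelf implementation such as scikit-learn's `LogisticRegression` would replace the hand-rolled gradient descent shown here.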

Conference: AoIR 2018