An Analysis of Poet Demographic and Thematic Diversity in a Poetry Collection for Inclusive AI
K. Choi1, G. Kang2
1University of Illinois Urbana-Champaign; 2Indiana University Bloomington
AI technologies, such as theme classification and named entity recognition, can enhance the accessibility and user-friendliness of digital library collections. However, they may introduce biases against marginalized groups if the underlying collections do not represent them adequately. In previous studies, AI models for poetry collections were developed without carefully assessing the datasets, raising concerns regarding representation. To address this issue, we annotated and published the race and ethnicity of poets in an American poetry collection curated by poets.org, which was recently used to train a poetry theme classification system. We then examined the diversity of the collection based on these annotations. Our findings indicate that most underrepresented groups are well represented in the collection, which reflects the dedication of the Academy of American Poets, the organization managing poets.org, to inclusivity and diversity. However, we found that poems by Latino/a/x American poets are less prevalent than their actual demographic representation would suggest. Furthermore, we found that poems from underrepresented groups increase the collection's linguistic and thematic diversity, drawing on their unique cultures and histories. To design responsible AI that embraces diversity, it is important to support non-standard English and themes beyond those popular with the general population.
Discourses of Fear around AI and their Implications for Library and Information Science
S. Appedu, Y. Qin
Syracuse University, United States of America
Since its inception, the seemingly unlimited potential of artificial intelligence (AI) to alter human existence has evoked feelings of fear and amazement. Today, all sectors of industry, academia, and society are anticipating the changes new AI technologies are forecast to bring and seeking to mitigate their harms. In this climate, there is a clear need to center the complex interactions between discourse, power, and individual/institutional actors within sociotechnical systems, along with their material consequences. While scholars have previously made connections between discourses of fear and library and information science (LIS), there has not yet been an attempt to understand how discourses of fear may currently be shaping the field's response to AI. In this paper, we argue that focusing our critical gaze on the discourses of fear shaping the material interactions between LIS, technological artifacts, industry, and society better positions us to intervene in the predicted trajectory of AI innovation. We posit that cultivating discourses of refusal – which are committed to the belief that more just worlds must be possible – requires both individual and collective consideration of how fear has shaped, and continues to shape, our own responses to new technologies.
The Inevitability of AI: A Study of Undergraduate Students’ Perceptions of AI Tools in their Future Careers
M. Colón-Aguirre1, K. Bright2
1University of South Carolina, United States of America; 2East Carolina University, United States
Introduction. As artificial intelligence (AI) tools garner more attention every day, questions have arisen regarding their possible negative impacts on future job markets. Some predict a potential for massive job losses, especially in high-skilled jobs. This study seeks to explore undergraduate students’ perceptions of how these tools might affect their future careers.
Method. This study follows a case study design, employing phenomenological interviews as a research method. The data set was made up of interviews with 17 undergraduate students.
Analysis. Data were analyzed through constant comparative analysis, with several rounds of coding that produced open, axial, and structural codes.
Results. Students saw AI tools as an inevitable part of their future work. Participants expressed their intention to learn how to optimize their use of various tools, which they see as having the potential to benefit them in their future careers. They do not perceive AI to be a viable substitute for their skills, especially skills related to identifying misinformation and emotions.
Conclusion(s). Academic institutions must provide curricular spaces that allow students to harness the power of AI tools, while employers should also make efforts to train employees to make the most of them.