Using the Information Inequity Framework to Study GenAI Equity: Analysis of Educational Perspectives
S. Zipf1, C. Wu1, T. Petricini2
1The Pennsylvania State University; 2Penn State Erie
Introduction. Generative AI presents opportunities and challenges for higher education, particularly concerning equity. Understanding stakeholders' perceptions of equity is crucial as AI increasingly influences teaching, learning, and administrative practices.
Method. The study was conducted at a large, research-intensive institution in the US. Participants (n=206) from diverse university roles responded to an open-ended question about how Generative AI affects educational equity. Responses were analyzed using the dimensions of information equity (Lievrouw & Farb, 2003).
Analysis. Data were analyzed using a combination of deductive and inductive coding to identify key themes. The framework of information inequity underscores how disparities in access, skills, and ethical considerations create uneven opportunities for stakeholders to benefit from Generative AI, making these dimensions essential for understanding educational equity.
Results. Findings revealed differing focal points among the groups: faculty and staff concentrated on issues of physical and financial access to AI tools, while students placed greater emphasis on the ethical implications and value-based considerations of AI in education.
Conclusion(s). The study suggests that addressing AI equity in higher education requires a comprehensive approach that goes beyond improving access. AI literacy education should include skills development and ethical considerations, ensuring that the concerns of all stakeholder groups are addressed.
“Sora is Incredible and Scary”: Public Perceptions and Governance Challenges of Text-to-Video Generative AI Models
K. Z. Zhou1, A. Choudhry1, E. Gumusel2, M. R. Sanfilippo1
1University of Illinois Urbana-Champaign, USA; 2Indiana University Bloomington, USA
Text-to-video generative AI models such as OpenAI's Sora have the potential to disrupt multiple industries. In this paper, we report a qualitative social media analysis aiming to uncover people's perceived impact of, and concerns about, Sora's integration. We collected and analyzed comments (N=292) under popular posts about (1) Sora-generated videos, (2) a comparison between Sora videos and Midjourney images, and (3) artists' complaints about copyright infringement by Generative AI. We found that people were most concerned about Sora's impact on content-creation industries. Governance challenges included the for-profit nature of OpenAI, the blurred boundary between real and fake content, human autonomy, data privacy, copyright issues, and environmental impact. Regulatory solutions proposed by commenters included legally mandated labeling of AI-generated content and AI literacy education for the public. Based on the findings, we discuss the importance of gauging public perceptions of emerging technologies early and propose policy recommendations to regulate Sora before its public release.
Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech
S. Gupta1, V. Kovatchev2, A. Das1, M. De-Arteaga1, M. Lease1
1University of Texas at Austin, United States of America; 2University of Birmingham, United Kingdom
Optimizing NLP models for fairness poses many challenges. The lack of differentiable fairness measures prevents gradient-based training on the metric of interest, or forces surrogate losses that diverge from it. In addition, competing objectives (e.g., accuracy vs. fairness) often require trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. To address these challenges, we begin by formulating a differentiable version of a popular fairness metric that provides balanced accuracy across demographic groups. Next, we show how model-agnostic HyperNetwork optimization can efficiently train arbitrary NLP model architectures to learn Pareto-optimal trade-offs between competing metrics. Focusing on the task of toxic language detection, we show the generality and efficacy of our methods across two datasets, three neural architectures, and three fairness loss functions.
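The abstract does not spell out the differentiable fairness formulation, so the following Python (PyTorch) sketch only illustrates the general idea under stated assumptions: a soft, probability-weighted version of balanced accuracy is computed per demographic group, and the gap between groups is penalized alongside a standard task loss. The names (soft_balanced_accuracy, fairness_gap_loss, alpha) are illustrative and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of a differentiable
# group-fairness term for a binary classifier with a categorical group attribute.
import torch

def soft_balanced_accuracy(probs, labels, eps=1e-8):
    """Differentiable balanced accuracy: mean of soft TPR and soft TNR.

    probs  -- predicted probabilities of the positive class, shape (N,)
    labels -- ground-truth labels in {0, 1}, shape (N,)
    """
    pos = labels.float()
    neg = 1.0 - pos
    soft_tpr = (probs * pos).sum() / (pos.sum() + eps)          # expected recall on positives
    soft_tnr = ((1.0 - probs) * neg).sum() / (neg.sum() + eps)  # expected recall on negatives
    return 0.5 * (soft_tpr + soft_tnr)

def fairness_gap_loss(probs, labels, groups):
    """Penalize the spread of soft balanced accuracy across demographic groups."""
    group_scores = []
    for g in torch.unique(groups):
        mask = groups == g
        group_scores.append(soft_balanced_accuracy(probs[mask], labels[mask]))
    group_scores = torch.stack(group_scores)
    return group_scores.max() - group_scores.min()

if __name__ == "__main__":
    # Toy usage: combine a standard task loss with the fairness penalty,
    # weighted by a preference parameter alpha that a HyperNetwork-style
    # scheme could condition on to trace out different trade-off points.
    torch.manual_seed(0)
    logits = torch.randn(64, requires_grad=True)
    labels = torch.randint(0, 2, (64,))
    groups = torch.randint(0, 2, (64,))
    probs = torch.sigmoid(logits)
    task_loss = torch.nn.functional.binary_cross_entropy(probs, labels.float())
    alpha = 0.5
    loss = (1 - alpha) * task_loss + alpha * fairness_gap_loss(probs, labels, groups)
    loss.backward()  # gradients flow through both the task and fairness terms
```

Because every term is a smooth function of the predicted probabilities, gradients flow through the fairness penalty, which is the property the abstract identifies as missing from standard thresholded fairness metrics.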