Preliminary Conference Agenda

Overview and details of the sessions of this conference. Select a date or room to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, where available).

This agenda is preliminary and subject to change.

Please note that all times are shown in the time zone of the conference.

Session Overview
Session
SP 1: Short Research Papers 1
Time:
Monday, 22/Apr/2024:
9:00am - 10:30am

Session Chair: Hiroyoshi Ito, University of Tsukuba
Location: Room 5

Events Salon V, 3rd floor (3F)

Presentations

Enhancing Ethical Governance of Artificial Intelligence through Dynamic Feedback Mechanism

Y. Liu, W. Zheng, Y. Su

Zhongnan University of Economics and Law, People's Republic of China

The rapid advancement of Artificial Intelligence (AI) has ushered in significant opportunities while also giving rise to profound ethical concerns. Governments, non-governmental organizations, research institutions, and industries worldwide have actively engaged in the exploration and implementation of AI ethics. This work reviews research and practice on AI ethical governance worldwide, with a particular emphasis on AI ethics legislation and the practical application of ethical principles. Within the current landscape of AI ethics governance research, several pressing challenges emerge, including the establishment of a robust ethical decision-making framework, the integration of ethical principles into AI systems, and the formulation of guiding legal policies. Based on an extensive analysis of existing AI ethics governance theories and technological research, this study presents a conceptual framework rooted in dynamic feedback reinforcement learning theory. The framework, developed through the collaboration of multiple stakeholders, spans the legal, technological, and market dimensions. Additionally, it establishes an AI ethics governance committee tasked with supervising the behavior of AI systems, guiding their acquisition of ethical principles, and adapting to the ever-evolving environment. The overarching objective of this collaborative, multi-faceted AI ethics governance framework is to serve as a reference point for global AI ethics governance mechanisms and to promote the sustainable development of the AI industry. By taking legal, technological, and market factors into account, our aim is to facilitate a harmonious interaction between technology, humanity, and society, ultimately paving the way for a healthy and inclusive intelligent society.



Detection vs. Anti-detection: Is text generated by AI detectable?

Y. Zhang1, Y. Ma1, J. Liu1, X. Liu2, X. Wang3, W. Lu1

1Wuhan University, China; 2Worcester Polytechnic Institute, USA; 3Indiana University Bloomington, USA

The swift advancement of Large Language Models (LLMs) and their associated applications has ushered in a new era of convenience, but it also harbors risks of misuse, such as academic cheating. To mitigate such risks, AI-generated text detectors have been widely adopted in educational and academic scenarios. However, their effectiveness and robustness in diverse scenarios are questionable. Increasingly sophisticated evasion methods are being developed to circumvent these detectors, creating an ongoing contest between detection and evasion. While the detectability of AI-generated text has begun to attract significant interest from the research community, little has been done to evaluate the impact of user-based prompt engineering on detectors' performance. This paper focuses on evading detection methods through prompt engineering, from the perspective of general users, by changing the writing style of LLM-generated text. Our findings reveal that by simply altering prompts, state-of-the-art detectors can be easily evaded, with F1 scores dropping by over 50%, highlighting their vulnerability. We believe that the detection of AI-generated text remains an unresolved challenge. As LLMs become increasingly powerful and humans become more proficient in using them, AI-generated text is likely to become even harder to detect in the future.



Will affiliation diversity promote the disruptiveness of papers in artificial intelligence?

X. Tang1, X. Li2, M. Yi1

1School of Information Management, Central China Normal University, Wuhan, China; 2School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

This study investigates the causal relationship between affiliation diversity and paper disruptiveness in the field of Artificial Intelligence (AI). We obtained 646,100 AI-related papers with complete affiliation information, published between 1950 and 2019, from the Microsoft Academic Graph. Descriptive analysis and Propensity Score Matching (PSM) methods are employed in this study. The results show that homophily (over 70%) has remained prevalent among multi-affiliation collaborations in AI over the past 70 years, despite the average affiliation diversity exhibiting a striking upward trend as AI entered the deep learning stage. Affiliation diversity does not promote the disruptiveness of AI papers. On the contrary, AI papers with affiliation diversity can be 1.75% less disruptive than AI papers produced by collaborations among similar affiliations.
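The abstract names Propensity Score Matching (PSM) as its causal-inference method. A minimal sketch of the general PSM technique on synthetic data may help readers unfamiliar with it; all variable names, covariates, and numbers below are illustrative stand-ins, not the authors' actual pipeline or dataset:

```python
# Hedged sketch of Propensity Score Matching (PSM) on synthetic data.
# "Treatment" stands in for affiliation diversity (1 = diverse, 0 = similar);
# "outcome" stands in for a disruption-style score. Covariates are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                       # synthetic covariates
treat = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
outcome = 0.5 * X[:, 1] - 0.02 * treat + rng.normal(scale=0.1, size=n)

# Step 1: estimate propensity scores P(treated | covariates).
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: 1-nearest-neighbor matching of each treated unit to the
# control unit with the closest propensity score.
treated = np.where(treat == 1)[0]
control = np.where(treat == 0)[0]
dist = np.abs(ps[treated][:, None] - ps[control][None, :])
matches = control[dist.argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT) is the mean
# outcome difference between treated units and their matched controls.
att = (outcome[treated] - outcome[matches]).mean()
print(f"estimated ATT: {att:.3f}")
```

The matching step compares treated and control units with similar probabilities of "treatment", so the outcome difference is less confounded by the covariates than a raw group comparison would be.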



 
Conference: iConference 2024
Conference Software: ConfTool Pro 2.6.149+TC
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany