Conference Agenda

Session Overview
Session: PP09: Artificial intelligence & IL
Time: Tuesday, 10 Oct 2023, 2:00pm – 3:30pm

Session Chair: Jos van Helvoort
Location: C1: Room 0.313

III Campus UJ, Institute of Information Studies, Faculty of Management and Social Communication, Łojasiewicza 4 Str.

Presentations

Students’ Perceptions of Using Artificial Intelligence in Written Assignments – Is Information Literacy Still Needed?

Krista Lepik

University of Tartu, Estonia

Writing a short literature review at university can be a useful assignment for developing a variety of academic (Sharples, 2022) and information literacy (IL) skills. However, recent technological developments have provided students with a new shortcut, artificial intelligence (AI), which can produce coherent, albeit somewhat technical and stilted, output (Sharples, 2022, p. 1122). As “AI-assisted writing is already deeply embedded into practices that students already use” (Fyfe, 2022, p. 2), faculty and librarians at universities face numerous questions that challenge traditional practices of teaching academic writing and supporting IL skills. The issues of plagiarism and bias (Fyfe, 2022), fake references (Sharples, 2022), the opacity of algorithms (Lloyd, 2019), and questions of trust and neutrality (Haider & Sundin, 2022) are but a few of them.

This presentation focuses on an intervention in which GPT-3 was applied in the process of writing students’ short literature reviews. Writing a short literature review has been a traditional task for supporting Information Management students’ IL skills in the ‘Information Behavior Theories and Practices’ course at the University of Tartu. During the spring semester of the 2022–23 academic year, this task was extended with a request to use GPT-3 (e.g., via the OpenAI Playground) in the writing process and to reflect on using the new technology. Of particular interest in the reflections was the students’ perceived usage of AI, hence the research questions:

• How do students evaluate the usability of AI in terms of searching, evaluating, and presenting information?

• What are the benefits and problems related to using AI in the writing process?

During the introduction of this assignment, the use of the reflections in research was explained, including the possibility to opt out at any time without any negative consequences. The students’ reflections (N=26) were analyzed using thematic analysis to identify themes related to using AI in the academic writing process. Despite their initial excitement, the students took a critical attitude toward the results provided by GPT-3. Nevertheless, AI did help with translations and, at times, with finding new perspectives. In line with Dinneen & Bubinger (2021), this study contributes to the discussions around the use of AI in academic tasks that have predominantly benefited from the domain of information literacy.

References

Dinneen, J. D., & Bubinger, H. (2021). Not quite ‘Ask a Librarian’: AI on the nature, value, and future of LIS. Proceedings of the Association for Information Science and Technology, 58(1), 117–126.

Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society, 1–11.

Haider, J., & Sundin, O. (2022). Paradoxes of media and information literacy: The crisis of information. Taylor & Francis.

Lloyd, A. (2019). Chasing Frankenstein’s monster: Information literacy in the black box society. Journal of Documentation, 75(6), 1475–1485.

Sharples, M. (2022). Automated essay writing: An AIED opinion. International Journal of Artificial Intelligence in Education, 32(4), 1119–1126.



Improving STEM Competences by Using Artificial Intelligence to Generate Video Games Based on Student Written Stories

Ivana Ogrizek Biškupić1, Mario Konecki2, Mladen Konecki2

1Algebra University College, Zagreb, Croatia; 2University of Zagreb, Croatia

Due to the rapid development of digital services and digital transformation in a wide range of disciplines, new challenges have emerged around acquiring digital skills and gaining digital competence (Vuorikari et al., 2022). Learning about information technology can be quite challenging for STEM-oriented students, and it is even more challenging for non-STEM students (May et al., 2022). However, new possibilities to teach non-STEM students the fundamentals of information technology have emerged as a result of developments in the fields of artificial intelligence, gaming, virtual reality, and related areas. To provide an effective educational approach for non-STEM students, a model that uses advanced artificial intelligence technology has been developed. In this model, students learn about a non-STEM subject by using information technology paired with artificial intelligence. More precisely, students build their literacy skills by writing a story that serves as a scenario. This scenario has to be structured well enough to be processed by AI-based algorithms that generate a video game from it. Through the scenario, the techniques, and the methods involved, students learn about structured writing and literacy and about information technology at the same time.

Gamification as a method describes, among other things, a social context shaped by game elements such as awards, rule structures, and interfaces inspired by video games. Fuchs et al. (2014) analyze the role and impact of gamification in business and in wider society, and suggest rethinking the method by applying it in a number of other fields, such as education, business, health, and society at large (Hartmann et al., 2022).

By integrating literacy and creativity fostered through gamification, non-STEM students learn about literacy and coding simultaneously and strengthen their STEM skills. They become more familiar with STEM aspects by mapping information literacy onto the field, and their motivation increases because they work in an engaging, assisted environment. The process is based on the gamification method and aims to strengthen four competences: 1) literary competences, 2) writing in a foreign language, 3) coding and STEM skills, and 4) lifelong learning competences.

References

Fuchs, M., Fizek, S., Ruffino, P., & Schrape, N. (Eds.). (2014). Rethinking gamification. Lüneburg: Meson Press.

Hartmann, F. G., Mouton, D., & Ertl, B. (2022). The Big Six interests of STEM and non-STEM students inside and outside of teacher education. Teaching and Teacher Education, 112, 103622. https://doi.org/10.1016/j.tate.2021.103622

Kampylis, P., Punie, Y., & Devine, J. (2015). Promoting effective digital-age learning - A European framework for digitally-competent educational organisations. Luxembourg: Publications Office of the European Union.

Kergel, D., Heidkamp-Kergel, B., Arnett, R. C., & Mancino, S. (2020). Communication and learning in an age of digital transformation. Routledge.

May, B. K., Wendt, J. L., & Barthlow, M. J. (2022). A comparison of students’ interest in STEM across science standard types. Social Sciences & Humanities Open, 6, 100287. https://doi.org/10.1016/j.ssaho.2022.100287

Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The digital competence framework for citizens, EUR 31006 EN. Luxembourg: Publications Office of the European Union. https://dx.doi.org/10.2760/115376



Artificial Intelligence and Information Literacy: Hazards and Opportunities

Michael Ryne Flierl

Ohio State University Libraries, Columbus, USA

ChatGPT and other artificial intelligence (AI) software and tools are already changing the world. ChatGPT can pass an MBA exam from an Ivy League institution (Terwiesch, 2023). It can also create disinformation on topics like “vaccines, COVID-19, the Jan. 6, 2021, insurrection at the U.S. Capitol, immigration and China’s treatment of its Uyghur minority” (Associated Press, 2023). Current AI systems can provide seemingly valid information that is, in fact, devoid of any relationship with reality.

Consider a state actor who uses a propaganda model leveraging the fact that “information overload leads people to take shortcuts” in deciding on the trustworthiness of information (Paul & Matthews, 2016). New AI systems can create plausible, yet ultimately false, information about healthcare choices or a political candidate more cheaply and easily than ever before. It is not difficult to imagine a deluge of mis- or disinformation in which separating the true from the mostly true from the blatantly false becomes extremely difficult, time-consuming, and expensive.

Information Literacy (IL) theorists and practitioners are uniquely positioned to lead and facilitate important discussions around these topics, as there are real implications for healthcare, education, and democracy. Yet existing IL theory, practice, and research are not currently adequate to address the challenges that new developments in AI pose. Accordingly, this conceptual paper will identify three specific areas to which IL professionals can devote time and resources in order to address some of these problems.

First, we can advocate for new kinds of AI systems designed with specific limitations and parameters. Similarly, we can advance explainable AI (XAI) research, which aims to help users “understand, trust, and manage” AI applications (Gunning et al., 2019). Second, we must reconsider IL and higher education instruction in light of students’ new ability to easily create AI-generated text. Intentionally embracing certain elements of AI tools could lead to pedagogical innovation yielding new ways to teach and learn, including new strategies to sift through a tremendous glut of AI-generated content of unknown veracity. Lastly, information professionals have the opportunity to refine or develop IL theory that can provide holistic, strategic thinking and justification for how educators, policy-makers, and the general public should treat and approach AI systems.

The future of AI is uncertain. What is clear is that, without intentional forethought about how we design and use such systems, we invite serious, and likely deleterious, consequences.

References

Terwiesch, C. (2023). Would Chat GPT get a Wharton MBA? A prediction based on its performance in the operations management course. Mack Institute for Innovation Management at the Wharton School, University of Pennsylvania. Retrieved February 1, 2023 from https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP-1.24.pdf

Associated Press. (2023). Learning to lie: AI tools adept at creating disinformation. Retrieved February 1, 2023 from https://www.usnews.com/news/us/articles/2023-01-24/learning-to-lie-ai-tools-adept-at-creating-disinformation

Paul, C., & Matthews, M. (2016). The Russian “Firehose of Falsehood” propaganda model: Why it might work and options to counter it. RAND Corporation. Retrieved February 1, 2023 from https://www.rand.org/content/dam/rand/pubs/perspectives/PE100/PE198/RAND_PE198.pdf

Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4. https://doi.org/10.1126/scirobotics.aay7120



 