Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
16 SES 07 A: ICT, Language Learning and Media Literacy
Time:
Wednesday, 28/Aug/2024:
15:45 - 17:15

Session Chair: Katarina Mićić
Location: Room 016 in ΧΩΔ 02 (Common Teaching Facilities [CTF02]) [Ground Floor]

Cap: 56

Paper Session

Presentations
16. ICT in Education and Training
Paper

Large Language Models in Educational Activities

Ruslan Suleimanov, Roman Kupriyanov, Igor Remorenko

Moscow City University, Russian Federation

Presenting Author: Suleimanov, Ruslan

In the near future, the presence of advanced generative technologies, including ChatGPT and other services that use large language models (LLM), has the potential to greatly impact the field of education and the role of teachers within it. In particular, chatbots can perform four roles: interlocutor, content provider, teaching assistant and evaluator [1].

A notable characteristic of large language models (LLM) is their capacity for further training, wherein the initial model can be adapted and refined to cater to a specific subject area. Specifically, large language models (LLM) can undergo additional training using the written works of specific authors, enabling the creation of a “digital counterpart” of real historical figures.

The application of LLM holds significant potential in assisting both students and teachers in their textual work. For students, LLM can serve as a reviewer when working on creative assignments, offering guidance by identifying obvious and serious mistakes. Likewise, teachers can use LLM to conduct preliminary assessments of students' work and identify areas that require further educational attention [2]. This may be particularly useful when evaluating creative essays, a genre of literature known for its concise format and flexible style of presentation. Although essays have a changeable structure, they generally include an introduction, thesis statement, argumentation, and conclusion.

This research aims to investigate the implementation of LLM as a personal assistant in this context. In order to train LLM on specific data and create a “digital counterpart,” several tasks need to be accomplished:

  • Gathering and preprocessing a dataset.
  • Establishing evaluation criteria and annotating the dataset accordingly.
  • Identifying educational shortcomings in LLM.
  • Collecting and constructing a training set based on the “question-answer” principle for further training of the large language model.

The primary research focuses include the criteria for annotation required for subsequent training and potential limitations of LLM for educational purposes.


Methodology, Methods, Research Instruments or Sources Used
To evaluate the LLM’s effectiveness, a dataset of text essays on two topics was prepared. The first topic involved explaining the reasons for selecting a specific profile for master's degree admission and discussing research directions within that profile. The second topic focused on entrance-test themes such as “Socio-psychological mechanisms of the influence of the additional education system on the child giftedness development”, “Mentoring as a method of developing outstanding abilities of students with signs of giftedness”, and “Modern domestic concepts of giftedness”, among others. A total of 80 essays were analysed for each topic.
Criteria were established and rated on a scale of 0 to 2 for evaluation, including:
• Expression of the author's position regarding the presented problem or topic.
• Concise presentation of key points and theses.
• Well-reasoned grounds for profile selection and research direction (only applicable to the first topic).
Interaction with the LLM takes place through its API over HTTP. Prompt instructions are used to direct the LLM-powered chatbot and complete the tasks. Through several iterations, a final prompt was refined to resolve issues and ensure the desired response from the chatbot: “You are a text evaluation system. You have the text and the criteria by which you need to make an assessment. Evaluate the text based on the criteria, based solely on the criteria given. You should only use the attached criteria. Set the final number of points (‘BALLS’) and describe why you set exactly such an assessment (‘BALLS_DESCRIPTION’) using only the presence of criteria in the text. Don’t try to make up the answer”.
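The abstract does not publish the authors' client code, so the following is only a minimal sketch of how such an evaluation prompt could be sent over HTTP. The endpoint URL, model name, and OpenAI-style chat payload are assumptions, not the authors' implementation:

```python
import json
import urllib.request

# Assumed system prompt, paraphrasing the one quoted in the abstract.
EVALUATION_PROMPT = (
    "You are a text evaluation system. Evaluate the text based solely on the "
    "attached criteria. Set the final number of points ('BALLS') and describe "
    "why ('BALLS_DESCRIPTION') using only the presence of criteria in the "
    "text. Don't try to make up the answer."
)

def build_payload(essay, criteria):
    """Assemble an OpenAI-style chat request for one essay (assumed format)."""
    criteria_text = "\n".join(f"- {c}" for c in criteria)
    return {
        "model": "gpt-3.5-turbo",  # hypothetical model name
        "messages": [
            {"role": "system", "content": EVALUATION_PROMPT},
            {"role": "user",
             "content": f"Criteria:\n{criteria_text}\n\nText:\n{essay}"},
        ],
    }

def send_for_evaluation(payload, url, api_key):
    """POST the request over HTTP and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_payload("Sample essay text.", ["Author's position is expressed"])
```

In practice the response would then be parsed for the ‘BALLS’ score and the ‘BALLS_DESCRIPTION’ rationale before comparison with the expert grades.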
To evaluate the accuracy [3] of the chatbot’s results, the Mean Absolute Error (MAE) was used as the main metric, along with the 75th quantile of absolute error (AE_75P). Based on the data collected, it can be concluded that the model deviates by an average of one point for most criteria.
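As a sketch, both error metrics can be computed directly from paired expert and chatbot scores. The scores below are invented for illustration, and the nearest-rank definition of the 75th percentile is an assumption, since the abstract does not specify one:

```python
import math

def mae_and_ae75(expert_scores, chatbot_scores):
    """Mean Absolute Error and 75th percentile of absolute error (AE_75P)."""
    errors = sorted(abs(e - c) for e, c in zip(expert_scores, chatbot_scores))
    mae = sum(errors) / len(errors)
    rank = math.ceil(0.75 * len(errors))  # nearest-rank 75th percentile
    ae_75p = errors[rank - 1]
    return mae, ae_75p

# Invented 0-2 scale scores for one criterion across eight essays.
expert = [2, 1, 0, 2, 1, 2, 0, 1]
chatbot = [2, 2, 1, 2, 2, 2, 1, 1]
mae, ae75 = mae_and_ae75(expert, chatbot)
```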
During grading, it was noticed that the chatbot often gives higher scores, deviating from the overall grade distribution. To investigate this, the Pearson contingency coefficient was calculated to analyse the association between the nominal indicators X and Y. However, the analysis found no evidence of consistent overestimation.
To evaluate the level of agreement among the experts, including the chatbot, the Kendall coefficient of concordance was calculated. This coefficient, ranging from 0 to 1, quantifies the consistency of expert opinions. The analysis concluded that there is minimal agreement between the ratings of the experts and those of the chatbot.
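Both agreement statistics have standard closed forms. The sketch below implements them from scratch: Kendall's W without tie correction (an assumption, since the abstract does not say how ties were handled) and Pearson's C = sqrt(chi2 / (chi2 + N)). The rating data in the usage lines are invented:

```python
from math import sqrt

def kendall_w(ratings):
    """Kendall's coefficient of concordance for an m-raters x n-items matrix
    of ranks (no tie correction): W = 12*S / (m^2 * (n^3 - n))."""
    m, n = len(ratings), len(ratings[0])
    rank_sums = [sum(rater[i] for rater in ratings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def pearson_contingency(table):
    """Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + N)) for a
    two-way frequency table of nominal indicators."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(len(table)) for j in range(len(col_tot))
    )
    return sqrt(chi2 / (chi2 + n))

w = kendall_w([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]])  # three raters, four essays
c = pearson_contingency([[10, 2], [3, 9]])                 # invented 2x2 score table
```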

Conclusions, Expected Outcomes or Findings
Pre-trained large language models in the form of chatbots can function as teaching assistants by conducting initial reviews of essays and providing feedback on how to correct and enhance the work. This type of solution can be particularly beneficial for teachers, as it allows them to efficiently evaluate students’ work and generate a set of basic comments to address common mistakes. This approach significantly reduces the teacher’s workload and saves valuable time.
As the experience of interacting with artificial intelligence systems shows, the effectiveness of the feedback received relies on the accuracy of the request. It is crucial to establish clear evaluation criteria and avoid ambiguous statements in grading scales, such as “clear author’s position” or “partially presented author’s position.” To evaluate the quality of feedback from the chatbot, it is important to have multiple experts assess the essay to ensure consistency in their opinions. In the future, this system has the potential to become a valuable tool for the initial analysis of students’ work. The chatbot can be beneficial for both students, allowing them to assess the quality of their work before submitting it to the teacher, and teachers, providing an objective perspective on the student’s work.

References
1. Jeon, J., Lee, S. Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT. Educ Inf Technol 28, 15873–15892 (2023). https://doi.org/10.1007/s10639-023-11834-1
2. Elkins, S., Kochmar, E., Serban, I., Cheung, J.C.K. (2023). How Useful Are Educational Questions Generated by Large Language Models? In: Wang, N., Rebolledo-Mendez, G., Dimitrova, V., Matsuda, N., Santos, O.C. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. AIED 2023. Communications in Computer and Information Science, vol 1831. Springer, Cham. https://doi.org/10.1007/978-3-031-36336-8_83


16. ICT in Education and Training
Paper

Social Transmedia Storytelling. A University Media Literacy Project

Jose Miguel Gutiérrez, Mónica Salcedo, Yaimara Batista Fernández, Raquel Brioa, María del Carmen Herguedas, Laura de la Iglesia

University of Valladolid, Spain

Presenting Authors: Gutiérrez, Jose Miguel; Salcedo, Mónica

Currently, university media education appears to be shaped by a growing awareness of the need to foster participatory cultures in which students not only interact with one another, forming learning communities in the classroom, but also draw on a whole series of resources extracted from the media flow through which they confer meaning on their daily lives (Jenkins, Ito & Boyd, 2015), thus constituting what has been called a culture of connectivity.

One of the phenomena emerging most strongly within this trend towards the shaping of participatory cultures and collective construction is that of transmedia storytelling (Scolari, 2016).

Transmedia storytelling refers to stories told across multiple media. The most important stories tend to flow across multiple platforms and media (Wängqvist & Frisén, 2016). From the consumers' perspective, transmedia practices promote multi-literacy, that is, the ability to comprehensively interpret discourses coming from different media and languages. It is a matter of understanding how young people are acquiring transmedia skills and incorporating these processes into the educational sphere so that learning is a collateral effect of creative production and community collaboration, which is called connected learning (Ito, 2010).

The use of digital technologies has provided an opportunity for the exercise of new forms of social interaction that are currently transforming the functioning and role of formal learning institutions, especially schools and universities (Malone & Bernstein, 2015). One of the most important challenges we face is ensuring that all these experiences, in which new forms of producing, communicating and acquiring knowledge are developed, extended and disseminated across areas of diverse nature and origin, are translated to the educational level and transformed into comprehensive learning processes (Ito, 2010). Digital media, then, opens the door to a new educational paradigm in which learning can take place "anytime, anywhere", a cultural dynamic described in the literature as ubiquitous, which reminds us that everyday life becomes a space for new pedagogies and new learning practices.

This study focuses on the possibilities offered by transmedia narratives to initiate open, creative and participatory processes of content production and dissemination in university classrooms from a perspective oriented to social empowerment and community development.

The objective of the research is to deepen the analysis of the design and creation of transmedia narratives elaborated by young university students within the framework of participatory network cultures that combine the creation of multimedia content with educational proposals oriented to social and community development.

The research question of the study is: Do the modalities and strategies of participation, collaboration and spreadability present in transmedia literacy processes allow young university students to empower themselves in the different spheres of digital culture and communication?


Methodology, Methods, Research Instruments or Sources Used
The research process was carried out during the 2022/23 academic year within the framework of the Social Communication Media course belonging to the Social Education Degree at the University of Valladolid (Castilla y León-Spain).

The study develops a narrative research focused on transmedia narrative productions with young university students through which they shape ways of acting and configure meanings in the hyperconnected environment.

The research instruments and data sources used to carry out the research were as follows:
- Transmedia storytelling: refers to the transmedia productions chosen by different groups for analysis, both in the field of fiction (e.g. literature, cinema, music, video games, etc.) and in the field of social reality (e.g. journalism and social documentation).
- Classroom observations: in the two classrooms where the research was carried out, there was an external observer who made observations on the dynamics of classroom work.
- Comments and recommendations made in the group work: all the work done in group by the young people, collected in the form of comments and written texts to each of the narratives.
- Video recordings: all the processes of designing, creating, presenting and sharing in the classroom of students' work were recorded on video.

Conclusions, Expected Outcomes or Findings
The design and creation of transmedia educational projects allow the configuration of a new educational ecology (Cobo and Moravec, 2011) in the university classroom. Agents with diverse roles throughout the process in the classroom initiate open and participatory processes of production and distribution of knowledge through the use and appropriation of technologies and digital artifacts involved in the creation of transmedia content (Bar, Weber and Pisani, 2016).

The educational design around narratives tries to explore how young university students involved in collaborative and participatory activities of design, creation, presentation and dissemination through the network of their own techno-media experiences, not only find a personal meaning to their participation in digital culture but also qualitatively and quantitatively modify their own informational capital by appropriating all these tools, knowledge and practical skills in the digital ecosystem of the augmented society.

Educational designs from a transmedia perspective such as the one we have studied allow us to help redefine the active role that social media and media culture can play as instruments of social and citizen empowerment (Buckingham and Kehily, 2014). At the same time, we believe that the processes associated with transmedia literacy can be a good opportunity to reintroduce issues related to citizenship into university classrooms.

References
Bar, F.; Weber, M. S.; Pisani, F. (2016). «Mobile technology appropriation in a distant mirror: Baroquization, creolization, and cannibalism». New Media & Society, 18 (4).
Buckingham, D.; Kehily, M. J. (2014). «Introduction: Rethinking Youth Cultures in the Age of Global Media». In: S. Bragg, M. J. Kehily, D. Buckingham (eds.). Youth Cultures in the Age of Global Media. UK: Palgrave Macmillan, 1-18.
Cobo, C.; Moravec, J. W. (2011). Aprendizaje invisible. Hacia una nueva ecología de la educación. Barcelona: Col·lecció Transmedia XXI. Laboratori de Mitjans Interactius / Publicacions i Edicions de la Universitat de Barcelona.
Ito, M. (2010). Hanging Out, Messing Around, and Geeking Out. Cambridge, MA: MIT Press.
Jenkins, H.; Ito, M.; Boyd, D. (2015). Participatory Culture in a Networked Era: A conversation on Youth, Learning, Commerce, and Politics. Cambridge, UK: Polity Press.
Malone, T. W.; Bernstein, M. S. (2015). Handbook of Collective Intelligence. Cambridge, MA: MIT Press.
Scolari, C. (2016). «Alfabetismo transmedia. Estrategias de aprendizaje informal y competencias mediáticas en la nueva ecología de la comunicación». Telos, 103, 13-23.
Wängqvist, M.; Frisén, A. (2016). «Who am I Online? Understanding the Meaning of Online Contexts for Identity Development». Adolescent Research Review, 1, 139-152.


16. ICT in Education and Training
Paper

Assessing Algorithmic Media Content Awareness Among Third-grade Students: First Insights from an Explorative Study

Teemu Leinonen1, Oleksandra Sushchenko1, Elisa Vilhunen1,2, Anttoni Kervinen2, Terhi Maskonen2

1Aalto University, Finland; 2University of Helsinki, Finland

Presenting Author: Leinonen, Teemu

As our interaction with the world becomes more and more mediated by screens, digital and physical realities are intertwined. It is important to understand how the nature of this new reality affects us in our everyday lives. In this paper we explore third-grade school children’s level of understanding of the algorithmic nature of the digital platforms they use daily, and of its influence on their behavior. The growing use of Artificial Intelligence (AI), algorithms and machine learning in applications popular among children is changing the ways they see the world and themselves. To understand how these applications affect children's experience, we focused specifically on children’s understanding of the role of algorithms in their use of digital media content. In other words, we studied one aspect of media and digital literacy: algorithmic literacy.

Digital literacy goes beyond the mere use of digital applications, the simple ability to operate them. To be literate, to read more than what is seen, one should be aware of the underlying algorithms that shape our experience of interacting with these applications. Recent research on the distinctions between multi-platform and single-platform users has demonstrated that diverse platform engagement significantly enhances algorithmic understanding (Espinoza-Rojas et al., 2023; Shin et al., 2020; Andersen, 2020). These studies underline users’ adaptive behaviors in response to algorithmic outputs and highlight the importance of the emotional and ethical dimensions of digital interactions.

Algorithm literacy (AL) can be defined as an understanding of the utilization of algorithms in online applications, platforms, and services. It involves knowledge of how algorithms function, the ability to critically assess algorithmic decision-making, and the skills necessary to navigate and potentially influence algorithmic operations (Andersen, 2020; Dogruel, 2021; Shin et al., 2022). Algorithmic literacy can be considered the informed ability to critically examine, interrogate, propose solutions for, contest and agree with digital services (Long & Magerko, 2020). At the core of algorithmic literacy is explicability, which shapes individuals’ attitudes towards and views on algorithmic decision-making technologies (Hermann, 2022).

To explore children as users of algorithmic media, we conducted a teaching experiment in a third-grade classroom (9- to 10-year-olds) in [nation]. At the beginning of the experiment the students (N=18) completed a questionnaire measuring awareness of algorithmic media content. The same questionnaire was completed again after the teaching experiment.

At the core of the teaching experiment was the students' own project work, done in small teams (2-3 students each). During the classes the students designed advertisements consisting of two photos taken by them and two slogans invented by them and attached to the photos. The task was (1) to design a good advertisement for carrots and (2) a bad advertisement for carrots. To work on their photos, each team got a bag of carrots.

In the second class the students voted for the best five advertisements. The children were then provided with the tally of votes and the selection of the top five advertisements, with the number of votes each received. Based on the results, the students were asked to allocate media time to each advertisement. In this way the children, working in teams, were acting as a human algorithm. For the task we did not give them any mathematical examples for calculating the shares, but rather let them figure it out (or not) by themselves. The small-team discussions were audio-recorded during the design of the advertisements as well as during the decisions on how much media time each advertisement should get. At the end of the second class we demonstrated how a computer algorithm would allocate the media time based on the votes given.
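The abstract does not spell out how the demonstrated computer algorithm mapped votes to media time; a proportional split is one plausible reading. A minimal sketch under that assumption, with invented vote counts:

```python
def allocate_media_time(votes, total_seconds):
    """Share a fixed amount of display time among advertisements in
    proportion to the votes each received (assumed allocation rule)."""
    total_votes = sum(votes.values())
    return {ad: total_seconds * v / total_votes for ad, v in votes.items()}

# Invented example: three advertisements, ten votes, one minute of media time.
shares = allocate_media_time({"ad1": 5, "ad2": 3, "ad3": 2}, total_seconds=60)
```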


Methodology, Methods, Research Instruments or Sources Used
Children’s understanding of algorithmic media content was studied with the Algorithmic Media Content Awareness scale (AMCA-scale) (Zarouali et al., 2021) and by collecting qualitative data: audio recordings of the children's work in small teams.

Through the AMCA questionnaire, localized for the purpose, we assessed four dimensions of the children’s algorithmic awareness: ‘content filtering’, ‘automated decision-making’, ‘human-algorithm interplay’, and ‘ethical considerations’. In the questionnaire we used statements and a simple scale: “yes”, “no”, “I don’t know”. The 13 statements were related to the role of algorithms in media content recommendation, content tailoring, automated decision-making, and their ethical implications (e.g. “YouTube makes independent decisions about which videos to show me”). By combining the results from the questionnaire with the analysis of the audio recordings, we aimed to learn how children perceive ethical considerations in algorithmic media, assessing their understanding of transparency, potential biases, and privacy concerns. With the teaching experiment we wanted to explore whether working on the advertisement task and acting as a human algorithm would have any effect on the students’ understanding of algorithmic media and its logic. The questionnaire was therefore completed by the students twice, before and after the teaching experiment.
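The per-statement percentages reported below can be obtained with a simple tally over the three answer options. The sketch assumes the coding scheme above; the responses in the example are invented, not study data:

```python
from collections import Counter

def response_shares(answers):
    """Percentage of each answer option for one AMCA statement."""
    counts = Counter(answers)
    total = len(answers)
    return {opt: 100 * counts[opt] / total
            for opt in ("yes", "no", "I don't know")}

# Invented responses from a class of 18 students to one statement.
shares = response_shares(["yes"] * 9 + ["no"] * 4 + ["I don't know"] * 5)
```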

The audio recordings from each team’s two working sessions, during the design of their advertisements and while acting as a human algorithm deciding on the media time, were collected to analyze the children’s thinking process. In the analysis of the qualitative data we will apply Thematic Content Analysis (TCA) (Anderson, 2007; Smith, 1992). The results of the content analysis will be combined with the results from the questionnaire, although identifying all the individual students from the audio recordings has proved impossible.

The principal of the school approved the research plan, and informed consent was obtained from the children’s guardians and the children. The nature of the research was explained to the children by their teacher and the researchers. The questionnaire data were stored on a secure server, and the audio recordings on a hard disk accessible only to the researchers. The research followed the guidelines and recommendations of the [nation] National Board on Research Integrity and its ethical principles for conducting research with child participants: participant consent, right to self-determination, prevention of harm, and privacy and data protection.

Conclusions, Expected Outcomes or Findings
Students' initial understanding of how algorithms affect their media content and how data is collected and used was very limited. In the pre-questionnaire, almost 80% of the students answered “yes” to the statement “YouTube knows how to recommend videos for me”. On the other hand, 45% of the students answered “no” or “I don’t know” to the statement “YouTube can estimate how interested I am in any video”. These answers possibly reflect a degree of mystification in the students' thinking: they know that YouTube is able to “know” and recommend videos for them, but they do not understand how this happens.

With the questions related to ethics and privacy, the answers to the pre-questionnaire showed few signs of concern, but again rather a lack of understanding. To the statement “Videos YouTube shows for me may be inaccurate or biased. They may increase prejudices”, 30% answered “yes”, 50% “I don’t know”, and 20% “no”. The large share of uncertain answers may indicate that the students had never thought about the issue.

The results from the post-questionnaire demonstrate a slight change in the students' understanding of algorithms. In their answers to the privacy questions the students were somewhat more concerned. Whereas in the pre-questionnaire 50% of the students answered “I don’t know”, 22% “no” and 28% “yes” to the statement “Computer programs on YouTube use information collected about me in order to recommend certain types of videos to me. This affects my privacy”, in the post-questionnaire 40% still answered “I don’t know” and 20% “no”, but 40% answered “yes”. Similar patterns exist in the students' answers to the other questions, too.

These first insights from the pre- and post-questionnaire will guide us in the qualitative data analysis to understand the students' thinking before, during and after the teaching experiment.

References
Andersen, J. (2020). Understanding and interpreting algorithms: Toward a hermeneutics of algorithms. Media, Culture & Society, 42(7–8), 1479–1494. https://doi.org/10.1177/0163443720919373.

Anderson, R. (2007). Thematic content analysis (TCA). Descriptive presentation of qualitative data, 3, 1-4.

Dogruel, L. (2021). What is algorithm literacy? A conceptualization and challenges regarding its empirical measurement. Digital Communication Research, 9, 67-93.

Espinoza-Rojas, J., Siles, I., & Castelain, T. (2023). How using various platforms shapes awareness of algorithms. Behaviour & Information Technology, 42(9), 1422-1433. https://doi.org/10.1080/0144929X.2022.2078224.

Hermann, E. (2022). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258-1277.

Long, D., & Magerko, B. (2020, April). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1-16).

Shin, D., Rasul, A., & Fotiadis, A. (2022). Why am I seeing this? Deconstructing algorithm literacy through the lens of users. Internet Research, 32(4), 1214-1234.

Shin, D., Zhong, B., & Biocca, F. A. (2020). Beyond user experience: What constitutes algorithmic experiences?. International Journal of Information Management, 52, 102061.

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. Cambridge University Press.

Zarouali, B., Boerman, S. C., & de Vreese, C. H. (2021). Is this recommended by an algorithm? The development and validation of the algorithmic media content awareness scale (AMCA-scale). Telematics and Informatics, 62, 101607.