Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference (EEST).

Session Overview
Session
99 ERC SES 03 G: Assessment, Evaluation, Testing and Measurement
Time:
Monday, 26/Aug/2024:
11:30 - 13:00

Session Chair: Gasper Cankar
Location: Room 101 in ΧΩΔ 01 (Common Teaching Facilities [CTF01]) [Floor 1]

Cap: 54

Paper Session

Presentations
99. Emerging Researchers' Group (for presentation at Emerging Researchers' Conference)
Paper

Assess Students' Digital Competencies: from the Measurement Scale to the Threshold Levels

Marco Giganti

Catholic University, Italy

Presenting Author: Giganti, Marco

The paper is part of national and international studies aimed at defining and assessing digital competencies, with particular attention to those of students at different school levels.

Digital competencies are at the top of the European political agenda, with the aim of improving them to support the digital transformation. The European Skills Agenda 2020 promotes digital competencies and supports the goals of the Digital Education Action Plan for the development of a high-performance digital education system. The Digital Compass and the European Pillar of Social Rights Action Plan set targets of reaching at least 80% of the population with basic digital skills and 20 million specialists in information and communication technologies by 2030. In Italy, current legislation (the National Plan for Recovery and Resilience) provides for the country to adopt a system for certifying digital competencies from 2025.

It is therefore necessary to define what is meant by digital skills and to measure them. In this perspective, INVALSI, the Italian National Institute for the Evaluation of the Education and Training System, is launching the DIGCOMP.MIS project to define a prototype model for attesting digital competencies, applicable from spring 2025. The target population is students in upper secondary school, with the prospect of observing the evolution of digital skills from the end of lower secondary to the end of upper secondary school.

The framework assumed by INVALSI and by this paper is DIGCOMP 2.2 (Digital Competence Framework for Citizens), developed by the European Commission to describe and assess the digital skills of citizens aged 16 and over.

Since 2013, DIGCOMP has been applied in the contexts of employment, education, training, and lifelong learning; it has been adopted at the European level to build the Digital Skills Indicator and to monitor the Digital Economy and Society Index.

Specifically, this project deals with the definition of the levels of digital competence and the adequacy thresholds corresponding to the different school grades.

In large-scale educational surveys, the variables considered consist of the knowledge, skills, or abilities possessed at a given stage of the school career or in a given age group: constructs that are not directly observable, but are defined on the basis of a theoretical reference framework and operationalized in order to administer standardized tests.

An outcome in the form of a numerical score, however, is not directly informative of what students with a given score know and can do in the investigated domain; this is a limitation for those interested in interpreting the results of a survey and deriving information for interventions or teaching practices.

The attribution of an explicitly described level gives students, families, and teachers meaningful feedback, which students can integrate into their perception of competence and which is useful for teachers' practice. Many national and international surveys combine a score with a description of the corresponding level; INVALSI does the same.

The aim of the project will therefore be to define the type of target levels and how to identify them.

The paper gives an account of the first phase of the project, particularly the analysis of the scientific literature and of models tested or in use in other European contexts, which links the elaboration of the model to the most authoritative and up-to-date national and international research. This, together with the reference to the DIGCOMP framework, keeps the proposed model modular in view of future comparative developments in digital competence surveys.


Methodology, Methods, Research Instruments or Sources Used
The project will consist of the following phases and methodologies:
• Retrieval and critical analysis of scientific literature
• First definition of target-level typology and identification modalities (e.g. standard-referenced approach, descriptive proficiency levels approach)
• Refinement of the target levels
• Level verification and remodeling, also based on INVALSI data (DIGCOMP.MIS, act. 3)
• Definition of the level-based scale
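As an illustration of the kind of standard-referenced mapping the project aims at, the sketch below links a numerical test score to a descriptive level via cut scores. It is a minimal, hypothetical example: the cut scores and level labels are invented placeholders, not DIGCOMP.MIS values.

```python
# Hypothetical sketch: mapping a standardized test score to a descriptive
# proficiency level via cut scores. Thresholds and labels are illustrative
# placeholders only, not those of DIGCOMP.MIS or INVALSI.
from bisect import bisect_right

# Assumed cut scores on a 0-100 scale (invented for illustration).
CUT_SCORES = [40, 55, 70, 85]
LEVELS = ["Below basic", "Basic", "Intermediate", "Advanced", "Expert"]

def score_to_level(score: float) -> str:
    """Return the descriptive level whose score band contains `score`."""
    # bisect_right counts how many cut scores the score has reached,
    # which is exactly the index of the corresponding level.
    return LEVELS[bisect_right(CUT_SCORES, score)]
```

A score of 40, for instance, falls at the first cut and is reported as "Basic"; the descriptive label, rather than the raw number, is what reaches students, families, and teachers.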

Conclusions, Expected Outcomes or Findings
The outcome of the project will be the proposal of a model of descriptive levels of students' digital competencies, corresponding to the scores obtained in a standardized national test that measures them. To date, Italy has no tools or models for this purpose.
The collaboration with INVALSI and the concurrence of the project with DIGCOMP.MIS will make it possible to orient the research so that it also meets the application and organizational requirements of a public survey on at least a national scale.
There are many foreseeable benefits. In the field of educational and docimological research, the project addresses original themes and will represent a useful advancement of knowledge. Professionals working in school practice, teaching, and training more generally can make use of the descriptions of the level of competence achieved by the students.

References
Calvani A., Fini A., & Ranieri M. (2009). Valutare la competenza digitale. Modelli teorici e strumenti applicativi. TD-Tecnologie Didattiche, 48, 39-46.
Cortoni, I. (2016). La valutazione delle competenze digitali. Analisi di un case study. Rassegna italiana di valutazione, 20(66), 7-28.
Cortoni, I. & Lo Presti. V. (2014). Verso un modello di valutazione delle competenze digitali. Rassegna italiana di valutazione, 18(60), 7-23.
Desimoni, M. (2018). I livelli per la descrizione delle prove INVALSI. Roma: INVALSI.
Durda, T., Artelt, C., Lechner, C.M., Rammstedt, B., & Wicht, A. (2022). Proficiency level descriptors for low reading proficiency: An integrative process model. International Review of Education, 66, 211-233. https://doi.org/10.1007/s11159-020-09834-1
Griffin, P., Gillis, S., & Calvitto, L. (2007). Standards-referenced assessment for vocational education and training in schools. Australian Journal of Education, 51(1), 19-38.
OECD (2023). PISA 2022 Assessment and Analytical Framework. Paris: OECD Publishing. https://doi.org/10.1787/dfe0bf9c-en
Scalcione, V.N. (2022). Ambienti tecnologici di apprendimento: strumenti per la valutazione delle competenze digitali. QTimes Journal of Education, Technologies and social studies, 14(4), 171-193.
Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The Digital Competence Framework for Citizens. Luxembourg: Publications Office of the European Union.
Zumbo, B.D. (2016). Standard-setting methodology: Establishing performance standards and setting cut-scores to assist score interpretation. Applied Physiology, Nutrition, and Metabolism, 41, S74–S82. https://doi.org/10.1139/apnm-2015-0522


99. Emerging Researchers' Group (for presentation at Emerging Researchers' Conference)
Paper

Developing and Piloting a Digital Assessment Tool for Social-Emotional Skills in Early School Years

Andrea Kogler, Barbara Gasteiger-Klicpera, Katharina Prinz, Lisa Paleczek

Universität Graz, Austria

Presenting Author: Kogler, Andrea

Social-emotional competences are crucial for children’s development, especially in middle childhood. Numerous frameworks describe social-emotional competences (Soto et al., 2019), often focusing on the acquisition of social-emotional skills, i.e., social-emotional learning (SEL). SEL aims to enhance five interrelated individual competences: self-awareness, self-management, social awareness, relationship skills, and responsible decision-making (CASEL, 2024). Promoting SEL not only helps to prevent behavioral disorders but also positively influences other areas, such as prosocial behavior, well-being, and academic skills (Durlak et al., 2022).

To assess these competences and measure the effectiveness of SEL interventions, accurate assessment instruments are needed, for both research and educational purposes (McKown, 2017; Soto et al., 2019). These instruments should identify risks to social-emotional well-being and then ease the search for appropriate interventions supporting each child’s individual development (Denham et al., 2016). Developing and/or improving assessment tools for social-emotional competences is essential especially for children in their early school years (e.g., Abrahams et al., 2019; Halle & Darling-Churchill, 2016).

Addressing this need for accurate assessments for children (Soto et al., 2019), we developed a screening to assess social-emotional skills. In addition to the dimensions proposed in the CASEL model, we also considered four subdomains that Halle and Darling-Churchill (2016) identified as frequently being part of social-emotional assessments: social competence, emotional competence, behavior problems, and self-regulation. Based on these models and proposed sub-competences, we developed a digital screening covering Emotion Recognition, Prosocial Behavior, Emotion Regulation, and Social Situations. As our target group is children at an early stage of schooling (6 to 8 years), we used a digital approach via tablets and provided all questions and instructions as audio recordings and in written form. This enabled children to work at their own pace using headphones. Another advantage of the digital assessment is the higher motivation of the children (Blumenthal & Blumenthal, 2020).

For measuring Prosocial Behavior and Emotion Regulation, we used a 5-point Likert scale with a word-based response format ranging from ‘never’ to ‘very often’ to achieve better scale properties and more differentiated results than with the traditional yes-no format (Mellor & Moore, 2014). The subtest Social Situations is a situational judgement test (SJT), consisting of descriptions of challenging school scenarios (e.g., someone laughing at the child) supported by a graphical representation. The test offers four different behavioral options describing reactions to the scenario. Children rate these options on a 5-point Likert scale indicating whether they would react as proposed in the option. The scale is anchored at “certainly would” and “certainly would not” react like this, as used by Murano et al. (2020). This subtest requires social-cognitive information processing (Crick & Dodge, 1994), and the SJT is a promising approach to assessing social skills (Soto et al., 2019). The subtest Emotion Recognition is a performance measure and therefore very robust against faking-good attempts (Abrahams et al., 2019). Pictures of facial expressions representing emotions (produced with the support of artificial intelligence) are presented, and children choose the fitting emotion out of five options.

Currently, we are piloting this assessment tool in two pilot studies with second graders. First, using a participatory approach, we ask children how they liked the screening and where they experienced challenges. The collected feedback is then used to adapt the instrument before implementing it with a larger group. Our presentation will focus on the following three research questions:

  • Is the assessment user-friendly and intuitive for Grade 2 students in individual and group settings?
  • Do the items of the introduced screening meet criteria like difficulty and discriminatory power?
  • Does the proposed structure fit the findings in the factor analyses?

Methodology, Methods, Research Instruments or Sources Used
The paper will present two studies on a screening’s (Emotion Recognition, Prosocial Behavior, Emotion Regulation, and Social Situations) usability and test design, using a mixed-methods approach.
First (01-02/2024), second graders aged 7-8 (n = 8) complete the assessment, providing feedback on the usability and ratability of each subscale. We use screencasts to record and observe their navigation through the questions. To find out whether the students can relate to the challenging situations presented and whether they find the format easy to complete, we ask them questions during and after working on the screening.
Second (03/2024), about 60 children (aged 7-8) will complete the screening, adapted on the basis of Study 1, in groups of 5 to 6. Besides learning about the feasibility of the group setting, we will analyze item parameters (difficulty and discriminatory power), run factor analyses, and check the testing time. The children can work independently using headphones, ensuring unbiased responses. Besides observing the group setting, we will briefly interview the children on their experiences and their ideas for improvement, and let them rate usability and their motivation.
The screening is implemented in an online survey tool (LimeSurvey version 3.28.22) and is adapted for children (graphic design, font type and size, audio guidance). In the first subtest, Emotion Recognition, children look at 10 pictures of other children’s facial expressions and choose the fitting emotion out of 7 basic emotions. The pictures were generated by artificial intelligence and pre-evaluated by master’s and doctoral students. The Prosocial Behavior subtest consists of five items about prosocial behavior in the classroom. Students rate from ‘1-never’ to ‘5-always’ whether they have acted prosocially towards their classmates (e.g., helped another child in the class, cheered up another child) during the past two weeks. To assess Emotion Regulation, students indicate on a five-point Likert scale how often they use certain emotion regulation strategies when angry, sad, or afraid of something (e.g., “When I am angry, I think of something positive.”). The subscale Social Situations contains 15 different challenging everyday situations at school (e.g., feeling left out). The students’ task is to decide on a five-point Likert scale (‘1-no, never’ to ‘5-certainly’) how likely they are to act in a certain way.

Conclusions, Expected Outcomes or Findings
This paper presents a newly developed screening tool to assess social-emotional skills (social competence, emotional competence, behavior problems, and self-regulation) in second graders. The goal of this assessment is to reliably measure social-emotional skills while allowing easy classroom implementation and keeping children's motivation high. Using tablets should make the tool more accessible for heterogeneous groups of students, as the audio guidance through the tool meets the needs of students with reading difficulties. We expect the children to help us identify improvements to the assessment. Analyses conducted in the second pilot study should show to what extent the subscales’ characteristics are satisfactory.
We provide insights into the developmental process and the adaptations for usability and reliability resulting from piloting in an individual and a small-group setting. In particular, the participatory approach with students in individual settings (Pilot 1) will clarify whether the structure of the tool and the instructions for the subscales were clear to them, and whether the proposed challenging situations matched their school-life experiences. Usability will also be reflected when using the adapted instrument in small groups (Pilot 2). Based on these findings, we expect a reduction of items in some subscales to increase internal consistency and improve the economy of the assessment.
To accompany and evaluate interventions, accurate assessment tools are needed that differentiate between various aspects of social-emotional skills. Our tool should fill the gap of missing assessment instruments (Abrahams et al., 2019) for German-speaking countries. Further, we discuss general conditions, such as the use of digital devices and item scaling, that should be addressed when assessing social-emotional skills in primary-grade students.

References
Abrahams, L., Pancorbo, G., Primi, R., Santos, D., Kyllonen, P., John O. P., & de Fruyt F. (2019). Social-Emotional Skill Assessment in Children and Adolescents: Advances and Challenges in Personality, Clinical, and Educational Contexts. Psychological Assessment, 31(4), 460-473. https://doi.org/10.1037/pas0000591

Blumenthal, S., & Blumenthal Y. (2020). Tablet or Paper and Pen? Examining Mode Effects on German Elementary School Students’ Computational Skills with Curriculum-Based Measurements. International Journal of Educational Methodology, 6(4), 669-680. https://doi.org/10.12973/ijem.6.4.669

CASEL. (2024). What Is the CASEL Framework? https://casel.org/fundamentals-of-sel/what-is-the-casel-framework/

Crick, N. R., & Dodge, K. A. (1994). A review and reformulation of social information-processing mechanisms in children‘s social adjustment. Psychological Bulletin, 115(1), 74–101. https://doi.org/10.1037/0033-2909.115.1.74

Denham, S. A., Ferrier, D. E., Howarth, G. Z., Herndon, K. J., & Bassett, H. H. (2016). Key considerations in assessing young children’s emotional competence. Cambridge Journal of Education, 46(3), 299–317. https://doi.org/10.1080/0305764x.2016.1146659

Durlak, J. A., Mahoney, J. L., & Boyle, A. E. (2022). What we know, and what we need to find out about universal, school-based social and emotional learning programs for children and adolescents: A review of meta-analyses and directions for future research. Psychological Bulletin, 148(11-12). 765-782. https://doi.org/10.1037/bul0000383

Halle, T. G., & Darling-Churchill, K. E. (2016). Review of measures of social and emotional development. Journal of Applied Developmental Psychology, 45, 8–18. http://dx.doi.org/10.1016/j.appdev.2016.02.003

McKown, C. (2017). Social-Emotional Assessment, Performance, and Standards. The Future of Children, 27(1), 157-178. https://www.jstor.org/stable/44219026

Mellor, D., & Moore, K. A. (2014). The use of Likert scales with children. Journal of pediatric psychology, 39(3), 369–379. https://doi.org/10.1093/jpepsy/jst079

Murano, D., Lipnevich, A. A., Walton, K. E., Burrus, J., Way, J. D., & Anguiano-Carrasco, C. (2020). Measuring social and emotional skills in elementary students: Development of self-report Likert, situational judgment test, and forced choice items. Personality and Individual Differences, 169, 110012. https://doi.org/10.1016/j.paid.2020.110012

Soto, C. J., Napolitano, C. M., & Roberts, B. W. (2021). Taking Skills Seriously: Toward an Integrative Model and Agenda for Social, Emotional, and Behavioral Skills. Current Directions in Psychological Science, 30(1), 26-33. https://doi.org/10.1177/0963721420978613


Conference: ECER 2024
Conference Software: ConfTool Pro 2.6.153+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany