Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

 
 
Session Overview
Session
09 SES 16 B: Exploring Factors Influencing Academic Achievement and Motivation
Time:
Friday, 30/Aug/2024:
11:30 - 13:00

Session Chair: Mari-Pauliina Vainikainen
Location: Room 012 in ΧΩΔ 02 (Common Teaching Facilities [CTF02]) [Ground Floor]

Cap: 56

Paper Session

Presentations
09. Assessment, Evaluation, Testing and Measurement
Paper

Development of Cognitive Learning to Learn Competences, Learning-related Beliefs, and School Achievement Through the Nine-year Basic Education in Finland

Natalija Gustavson1, Mari-Pauliina Vainikainen2, Satu Koivuhovi3

1University of Helsinki, Finland; 2Tampere University, Finland; 3University of Turku, Finland

Presenting Author: Gustavson, Natalija; Koivuhovi, Satu

Learning to learn skills are fundamental cognitive, metacognitive, motivational, and affective resources that help learners reach a learning goal (James, 2023). Acquiring these skills and abilities is vital for lifelong learning in the 21st century. The Finnish Learning to Learn (L2L; Hautamäki et al., 2002; Vainikainen & Hautamäki, 2022) scales have been developed and utilised in national and regional assessments since the late 1990s. They cover general cognitive competences needed in different school subjects, such as reading comprehension, mathematical thinking skills, general thinking and reasoning skills, and problem-solving.

This paper reports on a longitudinal L2L study in which around 1,000 children were followed through the nine-year basic education in Finland. Longitudinal studies can collect a broad range of information and provide unique insight into the importance of cognitive development in the early stages of education, identify connections between student abilities and academic achievement, and allow for adjustments to the pedagogical process throughout schooling. Studying the stability and trends in the development of cognitive abilities in different age groups makes it possible to identify the weakest points and direct pedagogical efforts towards increasing the level of abilities and motivation (Metsämuuronen & Tuohilampi, 2014). The level of development of cognitive abilities largely determines performance in mathematics and other subjects and seems to influence children's goal orientation in learning (Mägi et al., 2010; Williams & Williams, 2010). Longitudinal assessments also make it possible to identify trends in the development of particular skills at different ages, which must be taken into account in the diagnosis and evaluation of the learning process (Weinstein et al., 2015).

The present study focuses on the development of and changes in the cross-curricular cognitive competences and learning-related beliefs measured by the Finnish L2L scales. We also study how they are reflected in pupils’ school achievement as measured by grade point average (GPA). We aim to analyse how individual and group-level differences develop from when the pupils enter the formal education system until they complete basic education and move to the tracked upper secondary education. We answer the following questions:

1. How are the cognitive L2L competences, learning-related beliefs and school achievement connected and how do they influence each other over the years during basic education?

2. How stable are the individual and group-level trends observed in cognitive L2L competences, learning-related beliefs and school achievement throughout the school years?


Methodology, Methods, Research Instruments or Sources Used
A nine-year longitudinal L2L study was conducted in one large Finnish municipality, starting in 16 randomly sampled schools with 744 first-grade pupils. For the second measurement, four new schools were included, bringing the pupil-level sample size to around 1,000. Assessments were conducted on multiple occasions, including the 1st, 4th, 6th, and 9th grade assessments reported in this paper. At the beginning of the first school year, the pupils completed a learning preparedness test. In the subsequent assessments, they completed the mathematical thinking, reading comprehension, and general reasoning subscales of the Finnish learning-to-learn test, and answered questionnaires about their learning-related beliefs. In this paper, we used the subscale measuring pupils’ agency beliefs of effort, based on Skinner’s action-control theory (Skinner et al., 1988). The pupils rated themselves in relation to the presented statements on a 7-point Likert scale. For the cognitive test and GPA, we calculated a manifest average score over the different domains/subjects for each measurement point. Learning-related beliefs were included in the models as latent factors. The 1st grade learning preparedness test was used in the model as a latent factor consisting of three subscores (analogical reasoning; visuo-spatial memory; following instructions and inductively reasoning the applied rule). We specified a cross-lagged panel model in Mplus 8 to study the interrelations of the 4th, 6th, and 9th grade cognitive competences, learning-related beliefs, and GPA. In addition, we predicted the 4th grade variables by the latent 1st grade learning preparedness test score. Before specifying the full model, we tested measurement invariance of the latent factors over time and groups by constraining factor loadings and intercepts stepwise and studying the change in fit indices. In general, we used RMSEA < .06 and CFI and TLI > .95 (Kline, 2005) as criteria for a good model.
We first ran the model in the full data, and after that we performed multiple-group comparisons.
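The stepwise invariance testing and fit criteria described above can be sketched as a simple decision rule. The ΔCFI cutoff of .01 is a common convention rather than something stated in the abstract, and the model names and fit values below are illustrative:

```python
# Sketch of stepwise measurement-invariance checking: each step constrains
# more parameters (loadings, then intercepts) and is retained only if the
# fit barely worsens. Cutoffs and fit values are illustrative assumptions.

def acceptable_fit(cfi, tli, rmsea):
    """Conventional criteria for a well-fitting model (CFI/TLI >= .95, RMSEA <= .06)."""
    return cfi >= 0.95 and tli >= 0.95 and rmsea <= 0.06

def invariance_holds(cfi_less_constrained, cfi_more_constrained, delta=0.01):
    """Invariance is retained if constraining parameters barely lowers CFI."""
    return (cfi_less_constrained - cfi_more_constrained) <= delta

# Hypothetical CFI values for three nested models over time.
steps = [
    ("configural", 0.984),
    ("metric (equal loadings)", 0.980),
    ("scalar (equal intercepts)", 0.976),
]

retained = ["configural"]
for (_, cfi_prev), (name_next, cfi_next) in zip(steps, steps[1:]):
    if invariance_holds(cfi_prev, cfi_next):
        retained.append(name_next)
    else:
        break

print(retained)  # all three steps retained in this toy example
```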
Conclusions, Expected Outcomes or Findings
We first focused on studying the level of cognitive competences, learning-related beliefs, and GPA over the years. As expected based on earlier literature, pupils’ cognitive competences improved considerably, but the level of learning-related beliefs declined from the 4th to the 9th grade. The cognitive differences between pupils observed when they started their school path seemed relatively stable over time: in the cross-lagged panel model (CFI = .984, TLI = .979, RMSEA = .026, p < .001), the first grade learning preparedness test score predicted 4th grade performance very strongly (β = .82), and there was a relatively strong connection between the test scores of subsequent assessments as well. The first grade learning preparedness predicted fourth grade GPA (β = .44), and GPA also seemed to be very stable over the years. Learning-related beliefs, on the contrary, were not predicted by learning preparedness in the fourth grade, and their connections with the other variables in the model were weak. However, the connections strengthened over time as pupils’ self-evaluation skills improved and the overly positive evaluations declined by the sixth grade. Overall, learning-related beliefs seemed to be somewhat more strongly connected with GPA than cognitive competences were, perhaps indicating that pupils are to some extent rewarded for the effort they put into schoolwork regardless of the cognitive outcomes. We also found some cross-lagged effects over time, and in the next stage, we will focus on studying these in multiple-group analyses based on competence levels and gender.
References
Hautamäki, J., Arinen, P., Eronen, S., Hautamäki, A., Kupiainen, S., Lindblom, B., & Scheinin, P. (2002). Assessing learning-to-learn: A framework. National Board of Education, Evaluation 4/2002.
James, M. (2023). Assessing and learning, and learning to learn. In International Encyclopedia of Education (Fourth Edition), pp. 10–20. https://doi.org/10.1016/b978-0-12-818630-5.09015-1
James, M. (2010). An overview of educational assessment. In P. Peterson, E. Baker & B. McGaw (Eds.), International Encyclopedia of Education (Vol. 3, pp. 161–171). Elsevier.
Marsh, H. W., Byrne, B. M., & Shavelson, R. J. (1988). A multifaceted academic self-concept: Its hierarchical structure and its relation to academic achievement. Journal of Educational Psychology, 80(3), 366–380. https://doi.org/10.1037/0022-0663.80.3.366
Metsämuuronen, J., & Tuohilampi, L. (2014). Changes in achievement in and attitude toward mathematics of the Finnish children from grade 0 to 9 — A longitudinal study. Journal of Educational and Developmental Psychology, 4(2), 145–169. https://doi.org/10.5539/jedp.v4n2p145
Mägi, K., Lerkkanen, M.-K., Poikkeus, A.-M., Rasku-Puttonen, H., & Kikas, E. (2010). Relations between achievement goal orientations and math achievement in primary grades: A follow-up study. Scandinavian Journal of Educational Research, 54(3), 295–312.
Skinner, E. A., Chapman, M., & Baltes, P. B. (1988). Control, means-ends, and agency beliefs: A new conceptualization and its measurement during childhood. Journal of Personality and Social Psychology, 54(1), 117–133. https://doi.org/10.1037/0022-3514.54.1.117
Vainikainen, M.-P., & Hautamäki, J. (2022). Three studies on learning to learn in Finland: Anti-Flynn effects 2001–2017. Scandinavian Journal of Educational Research, 66(1), 43–58. https://doi.org/10.1080/00313831.2020.1833240
Weinstein, C. E., Krause, J., Stano, N., & Acee, T. (2015). Learning to learn. In International Encyclopedia of the Social & Behavioral Sciences (Second Edition), pp. 712–719.
Williams, T., & Williams, K. (2010). Self-efficacy and performance in mathematics: Reciprocal determinism in 33 nations. Journal of Educational Psychology, 102(2), 453-466. http://dx.doi.org/10.1037/a0017271


09. Assessment, Evaluation, Testing and Measurement
Paper

Motivation Profiles as Explanatory Factors of Task Behaviour and Student Performance

Laura Nyman1,3, Satu Koivuhovi1,2,3, Mari-Pauliina Vainikainen3, Risto Hotulainen1

1University of Helsinki, Finland; 2University of Turku, Finland; 3Tampere University, Finland

Presenting Author: Nyman, Laura

Students’ effort and the motivational factors behind it play an essential role in determining how students approach new tasks and how they perform in them (e.g., Kupiainen et al., 2014). Together, they affect the ability to apply the cognitive processes fundamental to identifying problems and designing and applying solutions (Kong & Abelson, 2019; Skinner et al., 1998). These processes have traditionally been measured and evaluated through self-reports and observation. While these methods undoubtedly have an important place in the human sciences, they pose challenges regarding validity and large sample sizes. One solution to these challenges is offered by technology, which allows seamless data collection from individuals in digital environments without disrupting their natural activities (Wise & Gao, 2017). Hence, this paper investigates what time on task, the number of trials, and the use of problem-solving strategies in different tasks tell us about student performance, and whether the results in different tasks are consistent with each other. The relations between these task behavior indicators are examined from the perspective of the motivational profiles students may hold, by examining whether the profiles differ in this respect.

In this study, the focus is on students' control-related beliefs within the framework of Action-Control Theory (Skinner et al., 1988). According to the theory, perceived control encompasses beliefs about the relations of agents, means, and ends, shaping a student's perception of how school outcomes are achieved and the extent to which they are actively involved. These beliefs have been found to relate to school achievement to varying degrees and with varying hindering or fostering effects. Accordingly, while some students with beliefs that have been shown to positively predict school performance have done well, other students with similarly above-average beliefs have done less well, highlighting the existence and importance of different combinations of beliefs when considering their association with motivational orientation and performance (Malmberg & Little, 2007).

Treating time use as a measure of motivational investment in a task is grounded in Carroll's Model of School Learning (Carroll, 1989). According to the model, students vary in the time they need to learn, which in turn depends on their aptitude for the task, their ability to understand instruction, and the quality of instruction. Higher aptitude corresponds to shorter learning times, while lower aptitude may require more effort. The time students ultimately invest in learning is composed of the time allocated for learning and the time students are willing to dedicate. The required time, the time spent, and the quality of instruction act as the determinants of the level of learning (Kupiainen et al., 2014). Computer-based assessment (CBA) research has confirmed that an excessively short time on task indicates a lack of effort and task commitment (e.g., Wise & Gao, 2017), resulting from reacting too quickly compared to the time needed for a proper task solution (Schnipke, 1995). This is in line with findings from problem-solving tasks indicating that, at every ability level, longer response times correlate positively with correct answers as task difficulty increases (Goldhammer et al., 2014).

The study delves into the diverse strategies individuals employ during problem-solving, which guide the problem-solving process and ultimately influence how effectively they navigate problem-solving situations (Stubbart & Ramaprasad, 1990). Some problems may require multiple trials and inductive reasoning, while in others the most appropriate way is to test how individual variables affect the outcome, isolating the effect of the other variables. CBA enables the exploration of these strategies by utilising log data collected during tasks, which has been done in the past particularly for studying how students differentiate the effects of variables in solving more complex problems (e.g., Greiff et al., 2016).


Methodology, Methods, Research Instruments or Sources Used
This study uses national longitudinal data for the academic year 2021–2022 (N = 8,556) collected by Tampere University and the University of Helsinki within the framework of the DigiVOO project. This study does not use the longitudinal aspect but includes measures from three different measurement points.

Motivational beliefs were assessed using Action-Control Theory Scales (e.g., Chapman et al., 1990), covering agency beliefs on ability and effort, control expectancy, and means-ends beliefs on various factors. Each scale included three items with a 7-point Likert-type scale (1 = not true at all, to 7 = very true).

The success rate in problem-solving tasks was computed from the overall percentage of correct answers in programming tasks (code building and debugging) and a task measuring vary-one-thing-at-a-time (VOTAT) problem-solving strategies (Greiff et al., 2016). The programming tasks involved coding a robot to pick up a sock in a room with obstacles. The VOTAT-based task, Lilakki, required students to vary conditions for optimal plant growth.

Task behavior indicators were derived from log data, including time on task measured in seconds and trials related to the number of completed items in programming tasks. Problem-solving strategies (VOTAT) in Lilakki were analyzed by calculating the relative percentage of used strategies from the overall number of trials in the task.
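A minimal sketch of how a relative VOTAT share could be computed from task log data. The log format here — one tuple of condition settings per trial, with VOTAT counted when exactly one condition changes between consecutive trials — is a hypothetical simplification of the actual Lilakki logs:

```python
# Toy computation of the share of vary-one-thing-at-a-time (VOTAT) moves
# in a sequence of trials. Real log files would need task-specific parsing.

def votat_share(trials):
    """Fraction of trial-to-trial transitions that change exactly one condition."""
    if len(trials) < 2:
        return 0.0
    votat = sum(
        1 for prev, curr in zip(trials, trials[1:])
        if sum(p != c for p, c in zip(prev, curr)) == 1
    )
    return votat / (len(trials) - 1)

# Hypothetical trials: settings for (light, water, fertiliser).
log = [(1, 1, 1), (2, 1, 1), (2, 2, 1), (3, 3, 1)]
print(votat_share(log))  # 2 of 3 transitions vary exactly one condition
```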

Grade Point Average (GPA) reflected students' prior ability against their achievement in problem-solving tasks, incorporating grades in Finnish, mathematics, English, history, and chemistry.

In this study, latent profile analysis (LPA) and multigroup structural equation modeling (SEM) will be conducted. LPA is used to identify subgroups of students based on their self-reports on the motivational measures. Fit indices for LPA are the Bayesian Information Criterion (BIC), sample-size-adjusted BIC (SABIC), Akaike Information Criterion (AIC), Consistent Akaike Information Criterion (CAIC), Vuong-Lo-Mendell-Rubin likelihood ratio test (VLMR), adjusted VLMR, Bootstrap Likelihood Ratio Test (BLRT), and entropy. In addition, the elbow plot method for AIC, CAIC, BIC, and SABIC is used, and the qualitative investigation is done against substantive theory and previous studies. In multigroup SEM, the MLR estimator will be used. The goodness of fit of the model will be assessed by the following fit indices: RMSEA (< 0.05 = good model, < 0.08 = acceptable model) and CFI & TLI (> 0.95 = good model, > 0.90 = acceptable model).
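The information criteria used to compare LPA solutions can be illustrated with a small sketch. The formulas are the standard definitions (lower values indicate better fit); the log-likelihoods and parameter counts below are made up for illustration, not taken from the study:

```python
import math

# Standard information criteria for comparing mixture models with different
# numbers of classes. loglik, n_params, and n come from each fitted model.

def information_criteria(loglik, n_params, n):
    return {
        "AIC": -2 * loglik + 2 * n_params,
        "BIC": -2 * loglik + n_params * math.log(n),
        "SABIC": -2 * loglik + n_params * math.log((n + 2) / 24),
        "CAIC": -2 * loglik + n_params * (math.log(n) + 1),
    }

# Hypothetical log-likelihoods and parameter counts for 4-, 5-, and
# 6-class solutions, with N = 8556 as in the study.
candidates = {4: (-61234.0, 47), 5: (-61100.0, 58), 6: (-61080.0, 69)}
bics = {k: information_criteria(ll, p, 8556)["BIC"]
        for k, (ll, p) in candidates.items()}
print(min(bics, key=bics.get))  # class count with the lowest BIC
```

With these made-up values the BIC penalises the 6-class solution's extra parameters, so the 5-class solution wins — mirroring the kind of comparison (alongside VLMR, BLRT, entropy, and the elbow plot) described above.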

Conclusions, Expected Outcomes or Findings
Preliminary results concerning the motivational profiles have been analyzed. Based on the fit indices, the elbow method, and qualitative inspection, a 5-class solution in LPA was considered the best fit. The five motivational profiles are preliminarily named Avoidant, Normative, Mildly Agentic, Agentic, and Mixed. Students in the Agentic (Class 1) profile saw their effort, ability, and control over school achievement most positively, as opposed to believing that luck and ability would determine school outcomes. Thus, this profile was considered to have the most adaptive beliefs. Mildly Agentic (Class 2) and Normative (Class 3) students reflected the pattern demonstrated by Agentic students, but more moderately. Avoidant (Class 4) students had the least adaptive beliefs (i.e., beliefs about their ability, effort, and control, as well as effort as a means for success) and attributed school outcomes to ability over other beliefs. In the Mixed profile (Class 5), students held, together with the Agentic profile, some of the most positive adaptive beliefs. At the same time, they possessed the most positive means-ends beliefs on ability and luck. This profile is seen to indicate adaptive as well as maladaptive consequences for achievement (Malmberg & Little, 2007).

In multigroup SEM, the hypothesis is that motivational profiles play a role in how task behavior indicators (time on task, trials, and strategies), prior ability, and performance in problem-solving tasks are related to each other, due to differences in students' approaches to novel tasks (see Callan et al., 2021; Skinner et al., 1998).

In summary, this paper delves into the complex dynamics of effort, motivation, and cognitive processes during academic tasks, utilizing innovative technology for data collection. The findings provide novel insights into students' problem-solving strategies.

References
Callan, G. L., Rubenstein, L. D., Ridgley, L. M., Neumeister, K. S., & Finch, M. E. H. (2021). Self-regulated learning as a cyclical process and predictor of creative problem-solving. Educational Psychology, 1–21. https://doi.org/10.1080/01443410.2021.1913575

Carroll, J. B. (1989). The Carroll model: A 25-year retrospective and prospective view. Educational Researcher, 18, 26–31. https://doi.org/10.3102/0013189X018001026

Chapman, M., Skinner, E. A., & Baltes, P. B. (1990). Interpreting correlations between children’s perceived control and cognitive performance: Control, agency or means–ends beliefs. Developmental Psychology, 26, 246–253. https://doi.org/10.1037/0012-1649.26.2.246

Goldhammer, F., Naumann, J., Stelter, A., Klieme, E., Toth, K., & Roelke, H. (2014). The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology, 106(3), 608–626. https://doi.org/10.1037/a0034716

Greiff, S., Niepel, C., Scherer, R., & Martin, R. (2016). Understanding students' performance in a computer based assessment of complex problem solving. An analysis of behavioral data from computer-generated log files. Computers in Human Behavior, 61, 36–46. https://doi.org/10.1016/j.chb.2016.02.095

Kong, S.-C. & Abelson, H. (2019). Computational Thinking Education. Springer Singapore. https://doi.org/10.1007/978-981-13-6528-7

Malmberg, L.-E., & Little, T. D. (2007). Profiles of ability, effort, and difficulty: Relationships with worldviews, motivation and adjustment. Learning and Instruction, 17(6), 739–754. https://doi.org/10.1016/j.learninstruc.2007.09.014

Schnipke, D. L. (1995). Assessing speededness in computer-based tests using item response times [Doctoral dissertation, Johns Hopkins University]. ProQuest Dissertations Publishing.

Skinner, E. A., Chapman, M. & Baltes, P. B. (1988). Control, means-ends, and agency beliefs: A new conceptualization and its measurement during childhood. Journal of Personality and Social Psychology, 54, 117–133. https://doi.org/10.1037/0022-3514.54.1.117

Skinner, E. A., Zimmer-Gembeck, M. J. & Connell, J. P. (1998). Individual differences and the development of perceived control. Monographs of the Society for Research in Child Development, 6(2–3), 1–220. https://doi.org/10.2307/1166220

Stubbart, C. I., & Ramaprasad, A. (1990). Conclusion: The evolution of strategic thinking. In A. Huff (Ed.), Mapping strategic thought. John Wiley and Sons.

Wise, S. L., & Gao, L. (2017). A general approach to measuring test-taking effort on computer-based tests. Applied Measurement in Education, 30(4), 343–354. https://doi.org/10.1080/08957347.2017.1353992


09. Assessment, Evaluation, Testing and Measurement
Paper

Does the Use of ICT at School Predict Lower Reading Literacy Scores? Multiple Group Analyses with PISA 2000-2022 Data

Nestori Kilpi, Ninja Hienonen, Mari-Pauliina Vainikainen

Tampere University, Finland

Presenting Author: Kilpi, Nestori

Previous studies have shown that the use of information and communication technologies (ICT) in leisure time, and also at school, is related to a lower level of school performance (Biagi & Loi, 2013; Gubbels, Swart, & Groen, 2020). Furthermore, data from the Programme for International Student Assessment (PISA) studies have indicated that higher levels of ICT use are related to lower scores in reading literacy, both internationally and in Finland (OECD, 2011; Saarinen, 2020). Analyses of the PISA 2012 data have also shown no significant improvements in student achievement in reading, mathematics, or science in the countries that had invested heavily in ICT for education (OECD, 2015). These findings have sometimes been interpreted as an indication of the harmful effects of the digitalisation of education.

PISA results have shown a declining trend in many countries (OECD, 2023). The most recent decrease, in the PISA 2022 scores, has been explained, at least in Finland, by, for example, the excessive use of ICT. On the other hand, mixed results have also been reported, and it is difficult to draw clear conclusions about the relationship between the use of digital technologies and learning (Harju, Koskinen, & Pehkonen, 2019). PISA studies have found that students who use computers moderately and for a variety of purposes have the highest levels of literacy (Leino et al., 2019, p. 94; OECD, 2011).

The use of ICT in schools can be seen as a target of learning but also as a learning tool, which means that ICT can also be used as a means to support students (Jaakkola, 2022). Based on previous research, there are some indications that digital technology is used to differentiate teaching (Biagi & Loi, 2013; Lintuvuori & Rämä, 2022; OECD, 2011, pp. 20–21). This study tests the hypothesis that the use of ICT may be targeted especially at lower performing students.

The research questions investigated in this study are:

1. How is the use of ICT at school related to students’ reading literacy scores in PISA? Do the levels of proficiency in reading literacy explain the relationship between ICT use and reading performance?

2. Does the student’s special educational needs (SEN) status explain the relationship between ICT use and reading performance scores?


Methodology, Methods, Research Instruments or Sources Used
In this study, we will use data from all eight PISA cycles, collected every three years between 2000 and 2022. We used the plausible values of reading literacy and the questions from the ICT questionnaire related to the use of digital technology at school. In the first three cycles, students were simply asked how often they used computers for schoolwork. We created dichotomously coded variables, comparing students who selected less often than once a month, 1–4 times a month, a few times every week, or almost every day with those who reported never using computers at school. From 2009 onwards, the questionnaires had longer scales measuring the different ways of using digital technology in schools, and indices of the use of computers and digital devices for schoolwork were created based on them.
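The dichotomous recoding described above can be sketched as a set of dummy variables with "never" as the reference category. The category labels are paraphrased from the text, not the exact questionnaire wording:

```python
# Sketch of the dichotomous recoding of computer-use frequency in the first
# three PISA cycles: one dummy per use category, with "never" as reference.

USE_CATEGORIES = [
    "less than once a month",
    "1-4 times a month",
    "a few times every week",
    "almost every day",
]

def frequency_dummies(response):
    """One 0/1 indicator per use category; 'never' (the reference) codes all zeros."""
    return {c: int(response == c) for c in USE_CATEGORIES}

print(frequency_dummies("almost every day"))
print(frequency_dummies("never"))  # all zeros: the reference group
```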

We analysed the data using Mplus 8.0. Regression models were run for each data set separately, using the categories for computer use (years 2000–2006), the index for computer use (years 2009–2012) and the index for the use of digital devices (years 2015–2022) at school as predictors for reading literacy performance. The stratified two-stage sample design was acknowledged by taking into account school-level clustering and by using house weights that scale the final student weights to sum up to the sample size.
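A minimal sketch of the house-weight rescaling mentioned above, assuming it normalises the final student weights so that they sum to the realised sample size (keeping relative weights intact while avoiding an inflated effective N in significance tests):

```python
# House weights: rescale the final student weights so they sum to the
# sample size, preserving each student's relative weight.

def house_weights(student_weights):
    n = len(student_weights)
    total = sum(student_weights)
    return [w * n / total for w in student_weights]

# Hypothetical final student weights for four students.
w = house_weights([10.0, 20.0, 30.0, 40.0])
print(w)  # rescaled weights summing to 4
```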

First, we ran the analyses for the whole sample, and then as multiple group analyses comparing students at the different reading proficiency levels 1–6. For the 2018 data, we also performed multiple group analyses using information about students’ support needs according to the Finnish support model (no support, intensified support, special support). To compare the coefficients between groups, we bootstrapped confidence intervals for the coefficients using 1,000 replicates.
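The bootstrapping of coefficient confidence intervals can be sketched on synthetic data. The real analyses were run in Mplus with the full survey design; the simple OLS slope here is only a stand-in for the actual model coefficients, and the data are simulated:

```python
import random

# Percentile bootstrap of a regression slope with 1000 replicates:
# resample cases with replacement, refit, take empirical percentiles.

def ols_slope(xs, ys):
    """Closed-form simple OLS slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def bootstrap_ci(xs, ys, reps=1000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        slopes.append(ols_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    slopes.sort()
    lo = slopes[int(reps * alpha / 2)]
    hi = slopes[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

# Synthetic data with a true slope of about -0.5.
rng = random.Random(0)
xs = [rng.uniform(0, 10) for _ in range(200)]
ys = [-0.5 * x + rng.gauss(0, 1) for x in xs]
print(bootstrap_ci(xs, ys))
```

Non-overlapping bootstrap intervals for two groups' coefficients are then evidence that the coefficients differ between the groups.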

Conclusions, Expected Outcomes or Findings
The results from the 2009–2018 cycles showed that ICT use was negatively related to reading literacy scores, and the effects were statistically significant. However, ICT use explained only one to three percent of the variation in reading literacy scores. Using the reading literacy proficiency levels, we examined whether these different levels of student performance explained the negative effects of ICT use on reading literacy scores. On average, students at the lowest proficiency levels used ICT at school more than students at higher levels. However, when examined by performance level, the majority of the relationships between ICT use and reading scores remained statistically non-significant. Students with SEN used ICT at school more than other students, and students’ SEN status explained the relationship between ICT use and reading literacy scores, which was negative and statistically significant.

The results of this study suggest that the previous PISA findings of a negative relationship between the use of ICT and student performance have often been interpreted as a causal effect and thus misinterpreted: instead of digitalisation causing the decline in performance, schools might use digital technology as a means of support for lower performing students and students with SEN. This, in turn, may at least partly explain the negative correlations between ICT use and student performance.

So far, the analyses have been conducted with the PISA 2000–2018 data. For this presentation, the same analyses will also be conducted with the most recent PISA 2022 data. The latest PISA results also reflect the impact of Covid-19. Furthermore, the pandemic might also have increased the use of ICT. It is therefore important to explore the PISA 2022 results and the effect of ICT use on reading performance.

References
Biagi, F. & Loi, M. (2013).  Measuring ICT Use and Learning Outcomes: Evidence from recent econometric studies. European Journal of Education, 48(1), 28–42. https://doi.org/10.1111/ejed.12016

Gubbels, J., Swart, N., & Groen, M. (2020). Everything in moderation: ICT and reading performance of Dutch 15-year-olds. Large-scale Assessments in Education, 8(1), 1–17. https://doi.org/10.1186/s40536-020-0079-0

Harju, V., Koskinen, A., & Pehkonen, L. (2019). An exploration of longitudinal studies of digital learning. Educational Research, 61(4), 388–407. https://doi.org/10.1080/00131881.2019.1660586

Jaakkola, T. (2022). Tieto- ja viestintäteknologia oppimisen kohteena ja välineenä [Information and communication technology as an object and a tool of learning]. In N. Hienonen, P. Nilivaara, M. Saarnio & M.-P. Vainikainen (Eds.), Laaja-alainen osaaminen koulussa. Ajattelijana ja oppijana kehittyminen (pp. 179–189). Gaudeamus.

Leino, K., Ahonen, A., Hienonen, N., Hiltunen, J., Lintuvuori, M., Lähteinen, S., Lämsä, J., Nissinen, K., Nissinen, V., Puhakka, E., Pulkkinen, J., Rautopuro, J., Sirén, M., Vainikainen, M.-P., & Vettenranta, J. (2019). PISA 18 ensituloksia – Suomi parhaiden joukossa [PISA 18 first results – Finland among the best]. Opetus- ja kulttuuriministeriön julkaisuja 2019:40. Ministry of Education and Culture. http://urn.fi/URN:ISBN:978-952-263-678-2

Lintuvuori, M., & Rämä, I. (2022). Oppimisen ja koulunkäynnin tuki – Selvitys opetuksen järjestäjien näkemyksistä tuen järjestelyistä kunnissa [Support for learning and school attendance – A report on education providers’ views on support arrangements in municipalities]. Opetus- ja kulttuuriministeriön julkaisuja 6:2022. Ministry of Education and Culture.

OECD. (2011). PISA 2009 Results: Students on Line: Digital Technologies and Performance (Volume VI). http://dx.doi.org/10.1787/9789264112995-en

OECD. (2015). Students, Computers and Learning: Making the Connection. OECD Publishing. http://dx.doi.org/10.1787/9789264239555-en

OECD. (2023). PISA 2022 Results (Volume I): The State of Learning and Equity in Education. OECD Publishing. https://doi.org/10.1787/53f23881-en

Saarinen, A. (2020). Equality in cognitive learning outcomes: The roles of educational practices. Kasvatustieteellisiä tutkimuksia 97. http://urn.fi/URN:ISBN:978-951-51-6713-2


 
Conference: ECER 2024