Conference Agenda

Session Overview
Session
09 SES 08 A JS: Assessment and Curriculum Reforms: Understanding Impacts and Enhancing Assessment Literacy
Time: Wednesday, 23/Aug/2023, 5:15pm – 6:45pm

Session Chair: Sarah Howie
Location: Gilbert Scott, EQLT [Floor 2]

Capacity: 120 persons

Joint Paper Session, NW 09 and NW 24

Presentations
09. Assessment, Evaluation, Testing and Measurement
Paper

The Impact of Curriculum and Assessment Reform in Secondary Education on Progression to Mathematics Post-16

Joanna Williamson, Carmen Vidal Rodeiro

Cambridge University Press & Assessment, United Kingdom

Presenting Author: Williamson, Joanna

In most education systems around the world there is a strong case for increasing the mathematical skills of young people beyond the age of 16. Evidence from international student surveys such as PISA shows that, for example, in the European Union about 23% of 15-year-olds in 2018 did not reach basic levels of skill in mathematics (OECD, 2019).

Incentivising young people to continue to study mathematics post-16 should not only help satisfy demands for mathematically and quantitatively skilled people in the labour market, but more generally help ensure that young people have the knowledge to succeed in an increasingly technological society (e.g., Mason et al. 2015; Smith, 2017; European Commission, 2022). Moreover, young people with good mathematical knowledge will benefit from the quantitative, analytical and problem-solving skills mathematics qualifications develop, which will support attainment in other disciplines, particularly those with a significant quantitative component.

In England, unlike in many other countries in Europe and the rest of the world, the study of mathematics post-16 is not compulsory for all students. A recent study comparing upper secondary mathematics participation in 24 countries (Hodgen & Pepper, 2019) showed that in England fewer than 20% of students persist with mathematics education in any form beyond the age of 16. In contrast, 18 countries have post-16 participation rates higher than 50%, with rates above 95% in eight of them, including Sweden, Finland, Japan and Korea.

One reason for low progression to post-16 mathematics in England could be the longstanding concern about how well the mathematics qualifications offered to students aged 14 to 16 (GCSE, General Certificate of Secondary Education) prepare students for advanced study in mathematics, with algebra frequently mentioned as the key problem (e.g., Wiliam et al., 1999; Hernandez-Martinez et al., 2011; Noyes & Sealey, 2011; Rigby, 2017).

To increase uptake of mathematics and to improve students’ mathematical skills at all levels takes effort, funding and a range of interventions. In England, GCSE qualifications (in all subjects) were recently reformed “to ensure they are rigorous and robust, and give students access to high quality qualifications which match expectations in the highest performing jurisdictions”. For mathematics specifically, the new GCSE “focuses on ensuring that every student masters the fundamental mathematics that is required for further education and future careers”, and, in particular, aims to “be more demanding” and “provide greater challenge for the most able students” (Gove, 2013).

There were concerns that the new mathematics GCSE could deter students from post-16 mathematics (e.g., by reducing their confidence) and unintentionally reduce uptake (ALCAB, 2014; Lee et al., 2018). A decrease in post-16 mathematics entries in 2019 lent weight to these fears but, to date, there has been little published research on how the reform of GCSE mathematics has affected mathematics learning and progression to post-16 study. One of the few studies to consider this issue in detail was carried out by Howard and Khan (2019). Their qualitative research found that, in general, teachers were positive about the extent to which the reformed GCSE prepared students for post-16 mathematics. Their participants also reported that the reformed GCSE had positive implications beyond studying mathematics and that it would support students studying other subjects with mathematical content. Grima and Golding (2019) and Pearson Education (2019) reported similar findings from qualitative research in schools.

The current research aims to complement the qualitative analyses described above by examining, through quantitative analysis of entry and performance data, how the reform of GCSE mathematics has affected progression to and performance in post-16 mathematics and maths-related subjects.


Methodology, Methods, Research Instruments or Sources Used
This work addressed the research question via quantitative analysis of national results data available in the National Pupil Database (NPD). The NPD is a longitudinal database for children in schools in England, linking pupil characteristics to school and college learning aims and attainment. It holds individual pupil-level attainment data for pupils in all schools and colleges who take part in the tests/exams, as well as pupil and school characteristics (e.g., age, gender, ethnicity, special educational needs, eligibility for free school meals), sourced from the School Census for state schools only.

Candidates who completed a GCSE in mathematics in each of the years from 2014 to 2017 (2014-2016 pre-reform; 2017 post-reform) were followed up for two years, and the post-16 qualifications they achieved were included in the research. For example, students who achieved a GCSE in mathematics in 2015 were followed up in 2016 and 2017, and the qualifications they achieved were identified. Later cohorts could not be included because end-of-course exams were cancelled in 2020 and 2021 due to the Covid pandemic.

Progression from GCSE mathematics pre- and post-reform was then investigated for the following qualifications: a range of post-16 mathematics qualifications (core maths, maths, further maths) and post-16 maths-related subjects (Biology, Chemistry, Physics, Economics, Psychology).

Descriptive statistics on the number and proportion of GCSE mathematics students progressing to the qualifications listed above (overall and by GCSE grade), pre-reform (2014-2016) and post-reform (2017), were produced and compared. Marginal grade distributions for all qualifications, overall and by GCSE mathematics grade, pre- and post-reform, were also produced.
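
As a minimal illustration of this descriptive comparison, the progression proportions could be computed along the following lines (Python/pandas). The sketch is hypothetical: the column names (year, gcse_grade, progressed_maths) are stand-ins for the relevant NPD fields, not the study’s actual variables or code.

    import pandas as pd

    # Hypothetical extract of linked NPD records: one row per GCSE mathematics
    # candidate, with a follow-up flag for progression to post-16 maths.
    df = pd.read_csv("npd_extract.csv")  # assumed columns: year, gcse_grade, progressed_maths

    # Label cohorts relative to the 2017 reform of GCSE mathematics.
    df["period"] = df["year"].apply(lambda y: "post-reform" if y >= 2017 else "pre-reform")

    # Proportion of candidates progressing to post-16 maths, by period...
    overall = df.groupby("period")["progressed_maths"].mean()

    # ...and broken down by GCSE grade, mirroring the descriptive tables above.
    by_grade = (
        df.groupby(["period", "gcse_grade"])["progressed_maths"]
          .agg(n="size", proportion="mean")
    )
    print(overall, by_grade, sep="\n\n")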

To further explore the effect of GCSE reform on progression to and performance in post-16 maths or maths-related subjects, multilevel logistic regression analyses were carried out. The regression analyses differ from the descriptive analyses in that they take into account students’ background characteristics when looking at the impact of GCSE reform on progression to or performance post-16.
The outcomes modelled in the regression analyses were as follows:
- progression to post-16 maths (any qualification, core maths, maths, further maths);
- progression to maths-related subjects (Biology, Chemistry, Physics, Economics, Psychology);
- achievement of specific grade thresholds in post-16 maths qualifications, and in maths-related subjects.

The independent variables in the regression models included: year the GCSE maths was achieved (i.e., an indicator of pre- or post-reform), GCSE grade, gender, overall prior attainment at school, level of socio-economic deprivation and type of school attended (e.g., private vs. state).
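
For illustration, a model of this form could be specified as below (Python/statsmodels). The sketch reuses the hypothetical data frame from the previous example and adds made-up columns (post_reform, gender, prior_attainment, deprivation, school_type, school_id); the variational mixed-GLM fit is one way of adding a school-level random intercept, not the study’s actual estimation code.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    df = pd.read_csv("npd_extract.csv")  # hypothetical linked extract, as above

    # Fixed effects follow the independent variables listed in the text;
    # post_reform is a 0/1 indicator for the year the GCSE was achieved.
    spec = (
        "progressed_maths ~ post_reform + C(gcse_grade) + C(gender)"
        " + prior_attainment + deprivation + C(school_type)"
    )

    # Single-level logistic regression as a baseline.
    logit = smf.logit(spec, data=df).fit()
    print(logit.summary())

    # Multilevel variant: students nested in schools, with a school-level
    # random intercept, fitted by variational Bayes.
    mixed = BinomialBayesMixedGLM.from_formula(
        spec, {"school": "0 + C(school_id)"}, data=df
    ).fit_vb()
    print(mixed.summary())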

Conclusions, Expected Outcomes or Findings
Contrary to fears about reduced uptake, this research showed that progression to mathematics post-16 generally increased following the recent reforms to secondary level mathematics qualifications. The uptake of core maths and further maths increased independently of the grade achieved by students in their mathematics GCSE. However, for post-16 maths (i.e., the mainstream mathematics qualification, not core maths or further maths), the increase in uptake was higher amongst those who achieved top grades in their mathematics GCSE than for students with just a pass. Performance in all three post-16 maths qualifications was, in general, lower post-reform – in contrast to teacher expectations. However, it should be taken into account that students taking the reformed GCSE would have also taken newly reformed post-16 qualifications, and it is known that performance tends to dip in the first years of a new qualification.

The research also found that progression to five maths-related subjects (Biology, Chemistry, Physics, Economics, and Psychology) was higher post-reform than pre-reform. Compared to pre-reform years, performance in these maths-related subjects was generally worse post-reform. In particular, in science subjects (Biology, Chemistry and Physics) performance was very similar pre- and post-reform for students with the very top GCSE grades in mathematics, but it was lower post-reform for students with lower grades in the GCSE.

In conclusion, this research has shown that some of the aims of the curriculum and assessment reform in secondary mathematics (in particular, increasing uptake of mathematics post-16) seem to have been fulfilled. As with any reforms, changes take time to bed in, but this research has raised important issues for the mathematics education community as countries seek to increase the numbers of people that are well prepared to apply their mathematical knowledge and skills not only in further education and the workplace, but also in society more generally.

References
ALCAB (2014). Report of the ALCAB panel on Mathematics and Further Mathematics. Hemel Hempstead: The A Level Content Advisory Board.

European Commission (2022). Increasing achievement and motivation in mathematics and science learning in schools. Luxembourg: European Education and Culture Executive Agency.

Grima, G., and Golding, J. (2019). Reformed GCSE Mathematics qualifications: teachers’ views of the impact on students starting A levels. Ofqual Educational Assessment Seminar, Scarman House, Warwick University.

Gove, M. (2013). Ofqual policy steer letter: reforming Key Stage 4 qualifications. [Letter from the Secretary of State for Education to Ofqual's Chief Regulator]. https://www.gov.uk/government/publications/letter-from-michael-gove-regarding-key-stage-4-reforms.

Hernandez-Martinez, P., Williams, J., Black, L., Davis, P., Pampaka, M., and Wake, G. (2011). Students' views on their transition from school to college mathematics: rethinking ‘transition’ as an issue of identity. Research in Mathematics Education, 13(2), 119-130.

Hodgen, J., and Pepper, D. (2019). An international comparison of upper secondary mathematics education. London: Nuffield Foundation.

Howard, E., and Khan, A. (2019). GCSE reform in schools: The impact of GCSE reforms on students’ preparedness for A level maths and English literature. Coventry: Office of Qualifications and Examinations Regulation.

Lee, S., Lord, K., Dudzic, S., and Stripp, C. (2018). Investigating the Impact of Curriculum and Funding Changes on Level 3 Mathematics Uptake. Trowbridge: Mathematics in Education and Industry.

Mason, G., Nathan, M. and Rosso, A. (2015). State of the nation: a review of evidence on the supply and demand of quantitative skills. London: British Academy and NIESR.

Noyes, A., and Sealey, P. (2011). Managing learning trajectories: the case of 14–19 mathematics. Educational Review, 63(2), 179-193.

Pearson Education (2019). GCSE Mathematics Qualification – UK Regulated qualification efficacy report. London: Pearson UK.

OECD (2019). PISA 2018 Results (Volumes I to IV). Paris: OECD Publishing.

Rigby, C. (2017). Exploring students’ perceptions and experiences of the transition between GCSE and AS Level mathematics. Research Papers in Education, 32(4), 501-517.

Smith, A. (2017). Review of Post-16 Mathematics. London: Department for Education.

Wiliam, D., Brown, M., Kerslake, D., Martin, S., and Neill, H. (1999). The transition from GCSE to A level in mathematics: a preliminary study. Advances in Mathematics Education, 1(1), 41-56.


09. Assessment, Evaluation, Testing and Measurement
Paper

Quality of an Assessment Task Developed by a Preservice Mathematics Teacher: The Role of Feedback from Agencies

Gözde Kaplan-Can, Erdinç Çakıroğlu

Middle East Technical University, Türkiye

Presenting Author: Kaplan-Can, Gözde

Assessment literacy has been a recurring term in the assessment literature since it was popularized by Stiggins (1991) (Koh et al., 2018). Assessment literacy mainly concerns teachers’ assessment practices and their skills in selecting, designing, or using assessments for various purposes (Stiggins, 1991). The term also covers knowledge of the principles behind selecting, adapting, or designing assessment tasks, judging students’ work, and using the resulting data to enhance student learning (Koh et al., 2018).

Mathematical thinking arises when students work on problem-like tasks (Jones & Pepin, 2016). However, traditional mathematics instruction and assessment mainly emphasize memorization rather than creative thinking or reasoning, a pattern documented in several studies (see Jäder et al., 2015; Stein et al., 2009; Stevenson & Stigler, 1992; Vacc, 1993). Such instruction and assessment fail to enhance students’ competencies in mathematics and lead them towards rote learning (Hiebert, 2003). Hence, students must face challenging and unfamiliar problems that activate their higher-order thinking skills (HOTS).

HOTS involve making explanations and interpretations and engaging in decision-making. Students with HOTS can learn how to improve their successes and reduce their weaknesses (Tanujaya, 2016). Hence, mathematics teachers should be knowledgeable about HOTS and how to enhance these skills in order to carry out quality mathematics instruction and assessment. For this reason, teacher education programs must help preservice mathematics teachers (PMTs) understand the significance of engaging students with higher-level tasks.

Several categorizations of HOTS have been proposed in the field of education. Bloom’s taxonomy places HOTS at the analysis, synthesis, and evaluation levels (McDavitt, 1994). Stein et al. (1996) described the higher levels of cognitive demand as doing mathematics or using procedures with connections to concepts, understanding, or meaning. A national categorization of mathematics competence levels was also provided in the Monitoring and Evaluating Academic Skills Study (MEASS) project. This framework comprises four categories, the last two of which are devoted to students’ higher-order thinking skills (MoNE, 2015). The framework will be introduced during the presentation.

Although challenging tasks can promote students’ HOTS, research has shown that designing worthwhile mathematical tasks is not trivial (Leavy & Hourigan, 2020), and preservice teachers (PTs) often fail to create such tasks (Silver et al., 1996). This is unsurprising, since they have few opportunities to write tasks in their teacher education programs (Crespo & Sinclair, 2008). Relatively little is known about PTs’ ability to develop mathematical tasks (Crespo, 2003), and there are few studies on how to help PTs recognize and discuss the quality of their mathematical tasks (Crespo & Sinclair, 2008). Thus, professional development (PD) studies must be conducted to increase PMTs’ capacity to develop tasks.

The study’s purpose is to improve PMTs’ skills in writing quality assessment tasks through feedback provided by different agencies, such as researchers, peers, and students. It specifically aimed to answer the research question, “How does feedback provided by different agencies improve the quality of assessment tasks developed by PMTs?” This study also aimed to introduce the framework for mathematics competence levels (MoNE, 2015) to a European audience. Feedback was defined by Eggen and Kauchak (2004) as the information that teachers or students receive about the accuracy or relevancy of their work through classroom practices. In this study, the term refers to the information preservice mathematics teachers receive from researchers, students, and their peers about the quality and cognitive levels of their tasks.


Methodology, Methods, Research Instruments or Sources Used
The study’s data were drawn from design-based research that aimed (1) to examine and improve senior preservice middle school mathematics teachers’ (PMTs’) understanding and development of cognitively demanding, quality mathematical tasks for assessing students’ learning and (2) to develop, test and revise a conjecture map and professional development sequence serving the first purpose. The research was conducted in an elective course that met weekly for three course hours. Ten fourth-year PMTs enrolled in a four-year middle grades (grades 5-8) mathematics teacher education program at a public university in Türkiye participated in the course. The course consisted of two phases, including several PD activities. In the first phase, PMTs investigated sample tasks and criticized and revised them with respect to their quality and cognitive demand. In the second phase, they conducted an independent study in which they developed two cognitively demanding, quality assessment tasks. The development of both tasks was a cyclic process that required revisions based on the researchers’, peers’, and students’ feedback. This study focused on the development process of a contextual task written by one of the preservice teachers (Mert).

The task development process involved four cycles and was based upon the iterative task design cycle suggested by Liljedahl et al. (2007), consisting of predictive analysis, trial, reflective analysis, and adjustment. Our process emphasized the importance of feedback for re-developing the task and reflecting on experiences; hence, each cycle ended with a new version of the task. PMTs wrote the first version of their contextual tasks at baseline. In task development cycle 1 (TDC1), the tasks were peer-reviewed by pairs of PMTs and critiqued during a class discussion regarding their cognitive demand and quality. The researcher provided written feedback on the second version of the tasks in TDC2. In TDC3, PMTs interviewed middle school students using the third version of their tasks, while in TDC4, they implemented the tasks in real classrooms. They shared students’ thinking with their peers in the PD course after TDC3 and TDC4. They revised their tasks considering what they noticed about students’ thinking or difficulties and their peers’ feedback, and prepared the final versions. Mert’s reflections at the end of each cycle, his reflections on the interviews with students and the classroom implementation, his project report, and the post-interview provided the data for this study.

Conclusions, Expected Outcomes or Findings
At baseline, Mert developed a cognitively demanding, quality assessment task, set in a context in which a car turned around a center of rotation and consisting of two multiple-choice questions. The first question asked students to compare the speeds of all the wheels shown in the figures. The second asked them to choose the correct interpretation of the ratio of the front-right wheel to the rear-left wheel. Mert categorized its cognitive level as the highest (level 4) and provided reasonable explanations.

In TDC1, peers criticized and gave feedback on pedagogical and mathematical task qualities such as the task’s clarity, appearance, cognitive level, and mathematical language. Mert changed the figures and the language he used in the second question: he asked students to compare the rear wheels’ speeds instead of the speeds of a front and a rear wheel. However, he thought this second version’s cognitive level was slightly weakened. In TDC2, Mert revised the second question’s options and changed its appearance in light of the researcher’s feedback, and judged the revised task to be of higher quality. He made radical changes to his task in TDC3: students’ perspectives guided him to change the figure again and to switch the question types from multiple-choice to open-ended. He completely changed the second question, asking for the difference between the distances traveled by the right and left rear wheels, and also wanted students to support their explanations using algebraic expressions. He did not revise his task in TDC4 since “it was sufficient to be a cognitively demanding quality task” (Mert). In sum, each cycle contributed to the task’s quality. Having the opportunity to enact the task with students, especially in a one-to-one setting, made the greatest contribution to the task’s pedagogical and mathematical quality. Hence, this process revealed the significance of assessing students’ responses in order to recognize the quality of tasks (Norton & Kastberg, 2012).


References
Acknowledgment: The paper was supported by the H2020 project MaTeK, no. 951822.

Crespo, S. (2003). Learning to pose mathematical problems: Exploring changes in preservice teachers’ practices. Educational Studies in Mathematics, 52(3), 243–270.
Crespo, S., & Sinclair, N. (2008). What makes a problem mathematically interesting? Inviting prospective teachers to pose better problems. Journal of Mathematics Teacher Education, 11(5), 395–415.
Hiebert, J. (2003). What research says about the NCTM standards. In J. Kilpatrick, G. Martin, & D. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 5–26). Reston, Va.: NCTM.
Jäder, J., Lithner, J., & Sidenvall, J. (2015). A cross-national textbook analysis with a focus on mathematical reasoning–The opportunities to learn. Licentiate thesis, Linköping University.
Jones, K., & Pepin, B. (2016). Research on mathematics teachers as partners in task design. Journal of Mathematics Teacher Education, 19(2), 105–121.
Koh, K. H., Burke, L. E. C., Luke, A., Gong, W. & Tan, C. (2018). Developing the assessment literacy of teachers in Chinese language classrooms: A focus on assessment task design. Language Teaching Research, 22(3), 264–288. https://doi.org/10.1177/13621688166843
Leavy, A., & Hourigan, M. (2020). Posing mathematically worthwhile problems: developing the problem-posing skills of prospective teachers. Journal of Mathematics Teacher Education, 23(4), 341–361. https://doi.org/10.1007/s10857-018-09425-w
Liljedahl, P., Chernoff, E., & Zazkis, R. (2007). Interweaving mathematics and pedagogy in task design: A tale of one task. Journal of Mathematics Teacher Education, 10(4–6), 239–249.
McDavitt, D. S. (1994). Teaching for understanding: Attaining higher order learning and increased achievement through experiential instruction. Technical Report. Retrieved from https://files.eric.ed.gov/fulltext/ED374093.pdf
Ministry of National Education [MoNE] (2015). Akademik becerilerin izlenmesi ve değerlendirilmesi. Retrieved from https://abide.meb.gov.tr/
Norton, A., & Kastberg, S. (2012). Learning to pose cognitively demanding tasks through letter writing. Journal of Mathematics Teacher Education, 15, 109–130.
Silver, E. A., Mamona-Downs, J., & Leung, S. S. (1996). Posing mathematical problems: An exploratory study. Journal for Research in Mathematics Education, 27, 293–309.
Stevenson, H. W., & Stigler, J. W. (1992). The learning gap: Why our schools are failing and what we can learn from Japanese and Chinese education. NY: Summit Books.
Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72, 534–539.
Tanujaya, B. (2016). Development of an instrument to measure higher order thinking skills in senior high school mathematics instruction. Journal of Education and Practice, 7(21), 144-148.
Vacc, N. (1993). Questioning in the mathematics classroom. Arithmetic Teacher, 41(2), 88–91.


09. Assessment, Evaluation, Testing and Measurement
Paper

Teachers’ Conceptions of Large-scale Assessment: Implications for Assessment Literacy

Serafina Pastore

University of Bari, Italy

Presenting Author: Pastore, Serafina

Against the backdrop of the recent educational data movement (Marsh et al., 2015; Schildkamp, 2019), teachers are expected to use different kinds of data to inform their instructional decision-making. However, several studies have demonstrated that teachers are reluctant to change their assessment practices (and conceptions), especially when new practices are framed within the rationale of institutional reforms (Boardman & Woodruff, 2004; Brown, 2004; Klieger, 2016; Remesal, 2007) or in new scenarios such as those that emerged during the COVID-19 pandemic. Despite the recognized importance of assessment, some studies (Hopfenbeck, 2015; Looney et al., 2018) have also identified a lack of modernisation and indicated that assessment has not changed materially. Recent studies on the use of assessment data for decision-making and teaching practice have shown that although teachers recognise the importance of using data gathered through assessment, they are sometimes unable to manage several sources of information, including data from LSAs (Farrell & Marsh, 2016; Mandinach & Gummer, 2016; Schildkamp et al., 2014).

While LSAs have progressively been recognised as relevant components of educational accountability systems, teachers’ negative attitudes towards LSA programmes and their lack of assessment literacy have been highlighted (Fullan et al., 2018; Klinger & Rogers, 2011). From this perspective, research evidence (Hopster-den Otter et al., 2017; Monteiro et al., 2021; Schildkamp, 2019) suggests that identifying the practical assessment challenges teachers face, as well as understanding teachers’ conceptions of assessment, is of paramount importance for ensuring teacher assessment literacy, teacher professionalism, and effective school improvement.

The present paper, with a focus on the Italian school system, seeks to offer new insights into this debate. Despite increasing interest in researching teachers’ assessment conceptions and in understanding how these conceptions affect the development of assessment literacy, in Italy these research topics are still neglected. Therefore, given the current lack of empirical studies on teachers’ LSA conceptions, an exploratory qualitative study was carried out (Creswell, 2014; Corbin & Strauss, 2007).

In Italy, between 2007 and the time of writing this study (2022), only one LSA programme was adopted. This programme is administered by the Italian National Institute for School Evaluation and Assessment of Learning (INVALSI), which operates under the Ministry of Education. The LSA programme, aligned with the national curriculum, comprises a census-based administration of cognitive tests (in grades 2, 5, 8, 10, and 13) in the subjects of Italian, Mathematics, and English. INVALSI reports examine the quality of the national school system and support school improvement. Since its introduction, however, the national LSA programme has caused various problems: teachers have attacked and boycotted it, and even today they continue to perceive the INVALSI programme as a means of controlling schools, teachers, and students.

During the COVID-19 pandemic, schools and teachers shifted to remote instructional activities and experienced difficulties in navigating new (and old) mechanisms within their existing assessment practice (e.g., marking and grading student work online or sharing feedback on assignments). In the school year 2019-2020, the INVALSI programme was not administered. Teachers’ (and students’) positive reactions to the cancellation of the yearly INVALSI programme contrast with the need for assessment that is embedded within the school system and aligned with the aim of improving school quality (Wiliam, 2013).

The study sought to better understand how Italian teachers conceptualise the LSA and how they use its results, addressing the following research questions:

  1. What do teachers think of the INVALSI programme?
  2. How do teachers use the INVALSI results for their instructional practice and decision-making (at classroom and school level)?

Methodology, Methods, Research Instruments or Sources Used
The present study was guided by the grounded theory interpretative method (Corbin & Strauss, 2007).
A total of 70 teachers from 5 schools in the district of XXXX (details removed to avoid identification) were selected to participate in the study. These schools have the same organization and jointly include grades 1-5 (primary) and grades 6-8 (middle). Only teachers of Italian and Mathematics were considered because the INVALSI tests pertain to these two content domains.

Data were collected through semi-structured interviews conducted by the author.
The semi-structured interview schedule, designed by drawing on relevant theoretical and empirical literature about teachers’ conceptions of the LSA programme, their experience with INVALSI data, and their instructional responses to data, comprised 10 questions divided into two main sections:
1. Assessment conceptions: questions in this section sought information on teachers’ conceptions of LSA, its aims, and its values; and
2. Data usage: this section aimed to analyse if, and how, teachers use large-scale data in their instructional practice and decision-making.
Moreover, during the interviews, information on the teacher education paths on educational assessment that teachers had attended, and on socio-demographic variables (e.g., gender, age, years of service), was gathered.
The data analysis followed a three-step process: open coding, axial coding, and selective coding.

The data set for this study is large, so only a selection of the main inquiry categories is presented here:
1. There are no substantial differences in teachers’ conceptions of assessment: gender, age, and subject matter do not affect answers. The slight differences found in the interviewed teachers’ conceptions relate only to their years of service.
2. Even though participants were prompted to reflect on their answers, the data reveal simplistic conceptions of assessment.
3. Teachers are unable to provide a definition of assessment that goes beyond the mere measurement of student learning. They appear very worried about large-scale assessment: they do not see the real value of this kind of assessment and fear that students’ results may be used for teacher performance appraisal and selection. For this reason, most of them admit to teaching to the test and cheating, although they recognize these are malpractices.
4. Data relating to the data-usage section of the interview reveal a composite scenario. Classroom assessment is frequently performed in a formal way, and teachers tend not to use large-scale results to review and/or change their instructional practices.

Conclusions, Expected Outcomes or Findings
The relationships among LSA, the teaching-learning process, and the Italian school system are ambiguous and incoherent. While LSA is perceived as disconnected from school and teaching practice, classroom-based assessments are considered not entirely reliable even though they provide more information about student learning processes. The teachers in this study admitted their assessment illiteracy with respect to some practical aspects (e.g., how to gather valid and robust data in summative assessment). They said that they were not able to read, interpret, understand, and use the data gathered through the INVALSI programme. The major hindrance is the teachers’ conception of the LSA programme, which is rarely used to refocus and improve teaching for individual students (Herman, 2016). Even though national LSA programmes have spread widely across different countries (Verger et al., 2019), research evidence shows that such assessments are sometimes perceived as a threat to teachers’ practice and professionalism (Emler et al., 2019).
In the Italian school system, there is an urgent need to invest in teachers’ assessment literacy and evaluation culture (Emler et al., 2019; Klinger & Rogers, 2011). The challenge is to enable teachers to act on this knowledge and to push the use of assessments forward in a more responsive manner. Teachers’ negative conceptions of LSA and their assessment illiteracy can lead to inappropriate use of INVALSI results over time; it is therefore not surprising that the positive effects of LSA are absent and were not perceived by the interviewed teachers (Cizek, 2001).

Despite the recognition of assessment as a relevant component of teacher professionalism, assessment literacy paths are not responsive to teachers’ learning needs in this area. The increased relevance of data represents a challenge for teachers in terms of data use, decision-making, and public reporting.

References
Boardman, A. G., & Woodruff, A. L. (2004). Teacher change and “high stakes” assessment: What happens to professional development? Teaching and Teacher Education 20(6): 545-557.
Brown, G. T. L. (2004). Teachers’ conceptions of assessment: Implications for policy and professional development. Assessment in Education 11(3): 301-318. doi:10.1080/0969594042000304609.
Cizek, G. J. (2001). More unintended consequences of high-stakes testing. Educational Measurement: Issues and Practice 20(4): 19-27. doi: 10.1111/j.1745-3992.2001.tb00072.x.
Corbin, J., & Strauss, A. (2007). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (3rd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W. (2014). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (4th ed.). Thousand Oaks, CA: Sage.
Emler, T. E., Zhao, Y., Deng, Z., Yin, D., & Wang, Y. (2019). Side effects of large-scale assessments in education. ECNU Review of Education 2(3): 279-296. doi: 10.1177/2096531119878964.
Farrell, C. C., & Marsh, J. A. (2016). Contributing conditions: A qualitative comparative analysis of teachers’ instructional responses to data. Teaching and Teacher Education 60(1): 398-412.
Hopster-den Otter, D., Wools, S., Eggen, T. J. H. M., & Veldkamp, B. P. (2017). Formative use of test results: A user’s perspective. Studies in Educational Evaluation 52: 12-23. doi: 10.1016/j.stueduc.2016.11.002.
Klieger, A. (2016). Principals and teachers: Different perceptions of large-scale assessment. International Journal of Educational Research, 75: 143-145. doi: 10.1016/j.ijer.2015.11.006.
Looney, A., Cumming, J., van der Kleij, F., & Harris, K. (2018). Reconceptualizing the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice 25(5): 442-467.
Mandinach, E. B., &. Gummer, E. S. (2016). Data Literacy for Teachers: Making it Count in Teacher Preparation and Practice. New York, NY: Teachers College Press.
Marsh, J. A., Bertrand, M., & Huguet, A. (2015). Using data to alter instructional practice: The mediating role of coaches and professional learning communities. Teachers College Record 117(4): 1-41.
Remesal, A. (2007). Educational reform and primary and secondary teachers’ conceptions of assessment: the Spanish instance, building upon Black and Wiliam (2005). The Curriculum Journal 18(1): 27-38. doi:10.1080/09585170701292133.
Schildkamp, K. (2019). Data-based decision-making for school improvement: Research insights and gaps. Educational Research 61: 257-273.
Schildkamp, K., Karbautzki, L., & Vanhoof, J. (2014). Exploring data use practices around Europe: Identifying enablers and barriers. Studies in Educational Evaluation, 42: 15-24.


09. Assessment, Evaluation, Testing and Measurement
Paper

Becoming Assessment Literate – Enhancing Teacher Students’ Assessment Literacy

Johanna Kainulainen, Mirja Tarnanen

University of Jyväskylä, Finland

Presenting Author: Kainulainen, Johanna

Teacher education plays a significant role in developing assessment culture and practices as well as teachers’ assessment literacy (e.g., Xu & Brown, 2016; Atjonen et al., 2019). Assessment literacy (AL) comprises, among other things, an individual’s and a community’s awareness of the values and principles related to assessment, as well as knowledge of the purpose of assessment, assessment targets, strategies, and assessment practices (e.g., Atjonen, 2017; Xu & Brown, 2016). It is based on an understanding of learning processes and environments, of the impact the individual and the community have on them, of the goals set for learning processes, and of the monitoring of their realization throughout the entire learning process (e.g., Atjonen, 2017). When assessment is seen as support for learning and its importance for learning is recognized, teachers need to deepen their understanding of the purpose of assessment and its implementation, i.e. to be assessment literate (DeLuca & Braund, 2019).

AL can be viewed as both a teacher’s and a learner’s competence. Learners’ AL is reflected, among other things, in the skills of receiving and utilizing feedback and in the learner’s activity in developing their own competence (Atjonen et al., 2019). Assessment should therefore be an interactive process in which the teacher guides and supports not only learning but also the development of the learner’s assessment skills, i.e. their learning skills. Teacher education can prepare teacher students for their future work by focusing on assessment that, on the one hand, develops their assessment skills as students, such as self-assessment skills, and, on the other hand, provides the ability to plan, implement and develop assessment as teachers (Boud et al., 2013; Kearney & Perkins, 2014). Teacher education plays an important role in building the foundations of a teacher’s AL as a basis for continuous development throughout a teacher’s career (DeLuca & Braund, 2019).

The shaping of teachers’ AL is comprehensively described by the TALiP framework (Teacher Assessment Literacy in Practice; Xu & Brown, 2016). According to the framework, a teacher’s AL can be thought of as complex identity work in which the teacher builds their assessment conceptions on the basis of diverse assessment knowledge and pedagogical content knowledge, which in turn is influenced by, for example, curricula. Teachers’ perceptions of assessment are shaped by the teacher’s conception of learning as well as by cognitive and affective dimensions. Assessment conceptions are enacted as assessment practices and are influenced, at the macro level, by socio-cultural conditions related to the school system and, at the micro level, by the traditions and customs of the school or work community. According to the framework, teachers’ own assessment practices are ultimately the result of a number of compromises the teacher has to make between their own knowledge, perceptions, and beliefs and external pressures. The teacher’s assessment identity is built through the teacher’s experiences, learning attitude and reflection, and inclusion and interaction. At its best, it develops throughout the teacher’s professional career (Xu & Brown, 2016; DeLuca & Braund, 2019).

In this study, we explore how the assessment literacy (AL) of teacher students is constructed through self- and group-assessments and self- and group-reflections. The research questions are:

RQ1: What kinds of challenges and opportunities do teacher students identify when reflecting on the development of their assessment literacy?

RQ2: What do teacher students consider to have the most significant impact on the development of their assessment literacy?


Methodology, Methods, Research Instruments or Sources Used
The data were collected in the context of an experimental, research-based holistic learning unit offered as part of the master’s studies in special education teacher training and in elementary school teacher training. The learning unit lasted 8 weeks and consisted of an intensive teaching period at the university, independent and group learning tasks, and a five-week training period in a school context, where teacher students (N=18) worked together in multi-professional teams. The teams can be called multi-professional because, in Finland, elementary school teacher students and special education teacher students study in separate master’s programmes and have different eligibility criteria under Finnish educational legislation.

The data consist of elementary school teacher students’ and special education teacher students’ self-reflections and self-assessments during the learning process, together with teacher-student teams’ interview data and group reflections collected at the end of the learning unit.

In this presentation, we examine teacher students’ reflections on their assessment literacy. The data were analyzed using qualitative data-driven and theory-informed content analysis (Vaismoradi et al., 2016). The data were thematised to capture both explicit and underlying reflections on assessment literacy and its development, as well as reflections on assessment and evaluation in general. The analysis consisted of three interactive sub-processes: generating initial codes based on the qualitative content analysis, reviewing themes, and naming the main categories. The analysis can also be characterized as theory-informed because part of the data consisted of reflections in which students were asked to reflect on their own assessment literacy by mirroring it against the TALiP framework (Xu & Brown, 2016).

Conclusions, Expected Outcomes or Findings
The preliminary findings indicate that teacher students recognise the necessity of continuous reflection and show readiness to reflect on and develop their assessment literacy from multiple angles in the future as well. They are aware of the multidimensionality, importance and challenges of assessment in guiding learning and in instructional support (see Hamre et al., 2007). As challenges, the students highlighted, for example, the lack of experience and insufficient preparation during the teacher education programme, the comprehensive demands of the teacher’s work and pedagogical expertise, and especially the influence of their own beliefs and experiences on how they think about assessment. On the other hand, the students’ reflections revealed a belief in development and a desire to constantly develop assessment literacy, both individually and in collaboration within school communities. They strongly emphasized the importance of reflection in building their own competence and identifying development needs, as well as in developing their understanding and knowledge base of assessment.
Teacher students consider holistic and participatory ways of learning about assessment (as was done in the learning unit), their own active knowledge building, and continuous reflection on and questioning of their own beliefs to be the most significant factors in developing their assessment literacy.

On the basis of the results, it may be concluded that teacher education should offer teacher students opportunities to build their assessment literacy consciously throughout their studies and provide comprehensive support for its development in authentic environments.

In our presentation, we will also discuss how assessment literacy could be more closely bridged with teacher students’ learning processes and professional development, and what kinds of pedagogical practices would be relevant for developing assessment literacy in teacher education. Most of the data will be analyzed in spring 2023, and the results will be discussed on that basis.

References
Atjonen, P. (2017). Arviointiosaamisen kehittäminen yleissivistävän koulun opettajien koulutuksessa – Opetussuunnitelmatarkastelun virittämiä näkemyksiä [Developing assessment literacy in general education school teacher training - Insights from curriculum review]. Teoksessa V. Britschgi & J. Rautopuro (toim.) Kriteerit puntarissa. Suomen kasvatustieteellinen seura. Kasvatusalan tutkimuksia 74, 132–169.

Atjonen, P., Laivamaa, H., Levonen, A., Orell, S., Saari, M., Sulonen, K., Tamm, M., Kamppi, P., Rumpu, N., Hietala, R. & Immonen, J. (2019). ”Että tietää missä on menossa”. Oppimisen ja osaamisen arviointi perusopetuksessa ja lukiokoulutuksessa [“So that you know where you are going”. Assessment of learning and competence in basic education and upper secondary education]. Kansallinen koulutuksen arviointikeskus, Julkaisut 7:2019.

Boud, D., Lawson, R. & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education, 38:8, 941–956.

DeLuca, C. & Braund, H. (2019). Preparing Assessment Literate Teachers. Oxford Research Encyclopedia of Education.

Hamre, B. K., Pianta, R. C., Mashburn, A. J., & Downer, J. T. (2007). Building a science of classrooms: Application of the CLASS framework in over 4,000 U.S. early childhood and elementary classrooms. New York: Foundation for Child Development.

Kearney, S. P. & Perkins, T. (2014). Engaging students through assessment: the success and limitations of the ASPAL (Authentic Self and Peer assessment for Learning) model. Journal of University Teaching & Learning Practice, 11 (3), 2014.

Vaismoradi, M., Jones, J., Turunen, H., & Snelgrove, S. (2016). Theme development in qualitative content analysis and thematic analysis. Journal of Nursing Education and Practice, 6(5), 100–110.

Xu, Y. & Brown, G. T. L. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149–162.


 