Conference Agenda

Session Overview
Session
09 SES 05.5 A: General Poster Session
Time:
Wednesday, 23/Aug/2023:
12:15pm - 1:15pm

Location: Gilbert Scott, Hunter Halls [Floor 2]


General Poster Session

Presentations
09. Assessment, Evaluation, Testing and Measurement
Poster

Assessment of Speaking in English as a Second Language: Teachers’ Assessment Practices Through the Lens of Note-taking

Liliann Byman Frisén

Karlstad University, Sweden

Presenting Author: Byman Frisén, Liliann

In a linguistically and culturally diversified Europe, the teaching, learning and assessment of foreign languages have gained attention over the last decades (European Commission, EACEA, & Eurydice, 2015). As a result, national tests in languages are currently administered in almost all European countries, and in a majority of them they carry high stakes for students. The teaching and assessment of English as a foreign/second language (L2) has a particular position in Europe, where all but one of the countries administer national tests in L2 English for secondary students (European Commission et al., 2015). However, not even half of these countries include all four skills (reading, listening, writing and speaking) in national tests, and speaking is the least assessed language competence (European Commission et al., 2015). A plausible reason is that speaking is the most difficult skill to assess in a reliable way (Alderson & Bachman, 2004), partly because raters need to consider numerous aspects simultaneously and may therefore pay attention to different aspects of speakers’ utterances (Bøhn, 2015). Moreover, the social situation in which a test is set affects assessment (Borger, 2019), making standardized testing of speaking particularly challenging. There is a gap in research regarding how raters of national and/or standardized tests orient to the challenges of assessing speaking when operationalizing assessment; such research could provide insight into raters’ assessment processes and, in turn, into how these can inform the development of speaking tests.

In Sweden, which constitutes the empirical case for the present study, all students in grades 6 and 9 (12–13 and 15–16 years of age) take a national test in L2 English in which students’ listening, reading, writing, and speaking skills are assessed. The test is administered by the Swedish National Agency for Education but assessed by students’ own teachers. Speaking is included in part A, the National English Speaking Test (NEST). There is no specific rater training for teachers involved as raters of the NEST, but they are provided with extensive assessment guidelines from the Agency. Despite these guidelines, teachers commonly construct their own note-taking documents to use as scoring templates when assessing the NEST (Byman Frisén et al., 2021). Although scoring templates are recognized as mediators between the observed performances and the score awarded (McNamara, 1996), few studies have examined the role(s) they play in the assessment process. The aim of this study is to contribute to a clearer understanding of raters’ scoring processes when assessing L2 English speaking, as these emerge from raters’ reports of their note-taking practices in the assessment situation. The research questions address how teachers take notes in the assessment situation, in what ways they draw upon their notes when deciding on a score, and the reasons behind the creation and use of their own note-taking documents.

The theoretical framework for the study is the Anthropological Theory of Didactics (ATD, Chevallard, 2007), and the idea of praxeologies that, according to the theory, need to be taken into account to examine ‘true’ knowledge (Chevallard, 2007, p. 133). Praxeologies consist of praxis and logos. Praxis is a type of task as well as the technique used to carry out the task, whereas logos is the logic behind using that particular technique for that particular task (the technology of the technique) as well as the theory justifying the technology. Viewing note-taking documents as the technique used to carry out the task of assessing speaking in L2 English, the ATD framework is used in this study to analyze how teachers use this technique and to analyze the logos behind it – i.e., the discourse of why and how note-taking is beneficial for carrying out the task.


Methodology, Methods, Research Instruments or Sources Used
Data consist of interviews (N=13) with teachers of English in Sweden, all women, who acted as raters of the speaking part of the NEST in grade 6 and/or grade 9. Data were collected in two steps: a first step in which five interviews were conducted, and a second step consisting of eight interviews. In the first step, interviews were conducted in connection with a previous project (Byman Frisén et al., 2021). After all five interviews in the first step had been conducted, new questions arose about teachers’ practices when using their note-taking documents for assessment and scoring of the NEST, resulting in revisions of the interview guide that allowed for more in-depth questions to interviewees in the second step. New interviewees were recruited from professional networks of teachers in year 6 and/or year 9. A semi-structured interview guide (Kvale, 1997) was used when interviewing participants in both steps, and all interviews were conducted individually (face-to-face or via a web-based program for online meetings).

Interviewees came from both urban and more rural areas across Sweden. Twelve of the teachers had long experience of teaching English, between 11 and 25 years, whereas one of the teachers had taught English for 5 years. Since none of the teachers assessed the NEST every year, teaching experience and the number of times acting as a rater of the NEST differed; teachers reported having acted as raters of the NEST between 4 and 17 times. Several of the participating teachers worked in schools with both year 6 and year 9 students and thus had experience of teaching and assessing English for both groups of students. However, as most teachers were employed either as teachers for years 4–6 or years 7–9, they predominantly assessed either the NEST for year 6 or the NEST for year 9.

Interviews from both steps of data collection were audio-recorded and transcribed orthographically. Data were analyzed using qualitative thematic analysis (Braun & Clarke, 2006), for which the software program NVivo 12 was used. The analysis was guided by the research questions for the study as well as by the theoretical framework, the Anthropological Theory of Didactics (Chevallard, 2007).

Conclusions, Expected Outcomes or Findings
Preliminary results show that teachers applied note-taking documents in a two-step process. In the first step, notes were taken to capture observations of students’ performances. In the second step, teachers drew on these notes to decide the score. The process was not linear but was reported to go back and forth. In addition, the numerous aspects to attend to simultaneously during the test situation created a need for pre-printed criteria, so that teachers could attend to these alone. Nonetheless, additional comments were noted down by all of the interviewed teachers while listening to students.

The complexity of the task at hand was mirrored in teachers’ talk about their assessment practices. Firstly, assessment of the NEST was carefully planned and prepared for, both in terms of preparing students for the assessment situation and in terms of preparing oneself for the role of rater by scrutinizing the assessment guidelines. Secondly, in the second step of note-taking, teachers reported discussing scoring decisions with colleagues or with themselves. For the most part, each rating criterion was then considered and negotiated before coming to a score decision. Thirdly, teachers reported creating a document for quick note-taking, in which their own symbolic systems were used. This practice indicates a need for instant recording of one’s observations of speaking competences. In addition, although the outcome of the test was a summative score, both the creation and the use of note-taking documents indicated formative assessment practices. Thus, accountability was part of the discourse behind the use of the technique. Moreover, the study shows that teachers who are involved in assessment of the NEST acquire in-depth knowledge of the test construct that might contribute to their classroom teaching of speaking skills.

References
Alderson, J., & Bachman, L. (2004). Series editors’ preface to Assessing Speaking. In J. Alderson & L. Bachman (Eds.), Assessing Speaking (pp. ix–xi). Cambridge University Press.

Borger, L. (2019). Assessing interactional skills in a paired speaking test: Raters’ interpretation of the construct. Apples – Journal of Applied Language Studies, 13, 151–174.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. doi:10.1191/1478088706qp063oa

Byman Frisén, L., Sundqvist, P., & Sandlund, E. (2021). Policy in Practice: Teachers’ Conceptualizations of L2 English Oral Proficiency as Operationalized in High-Stakes Test Assessment. Languages, 6(4), 204. https://doi.org/10.3390/languages604020

Bøhn, H. (2015). Assessing spoken EFL without a common rating scale. SAGE Open, 5, 1–12.

Chevallard, Y. (2007). Readjusting Didactics to a Changing Epistemology. European Educational Research Journal, 6(2), 131–134.

European Commission, EACEA, & Eurydice. (2015). Languages in Secondary Education: An Overview of National Tests in Europe – 2014/15. https://op.europa.eu/en/publication-detail/-/publication/62ac43c3-dac4-11e5-8fea-01aa75ed71a1/language-en

Kvale, S. (1997). Den kvalitativa forskningsintervjun [The qualitative research interview]. Studentlitteratur.

McNamara, T. (1996). Measuring second language performance. Longman.


09. Assessment, Evaluation, Testing and Measurement
Poster

An Investigation of Teachers’ Perceptions of Applied Use of Formative Assessment in a Public School (Primary) in Kazakhstan

Nazerke Nurgali

Nazarbayev Intellectual school, Kazakhstan

Presenting Author: Nurgali, Nazerke

Schools in Kazakhstan have witnessed profound changes to the national curriculum and to assessment in order to meet international standards. As deep educational reforms are implemented in the classroom, a phased transition of classes from the grading system to criteria-based assessment was initiated for primary classes in the 2015–2016 academic year (MoES Strategic Plan for 2016; Fifth report, 2016). The assessment reform was needed because, in the past, teachers evaluated students' knowledge using a 1-to-5 grading scale throughout compulsory schooling (OECD, 2014). However, these grades did not provide a clear picture of students’ true academic performance.

This shifting process began in the NIS (Nazarbayev Intellectual Schools). The reason for this, according to ‘The State Programme of the Education Development in the Republic of Kazakhstan for 2011-2020’ (MoES, 2010), was that the main tasks of the NIS were to act as a platform for the introduction of new policies and for the transfer of teachers’ experience to mainstream schools. Thus, criteria-based assessment was trialled in the NIS between 2011 and 2016 (National Academy of Education. I. Altynsarina, 2015). From 2015, 30 pilot schools started implementing a new curriculum and criteria-based assessment, while OECD countries went through this stage in the 1980s–90s (Irsaliyev et al., 2017). Only eight years have passed since the implementation of criteria-based assessment. As a researcher, I am interested in obtaining teachers’ views on the implementation of criteria-based assessment.

There are many purposes of formative assessment (FA) in the classroom and in the learning process in general. Students frequently have misconceptions about what they are learning, and also about why they are learning it (White & Frederiksen, 1998). A central feature of FA is its provision of quality feedback for learners, which allows students’ current level to be assessed and instruction to be adjusted so that it outlines the next steps to be taken (Black et al., 2003). Thus, a main objective of FA is to identify students’ strengths and weaknesses, to inform students of their current level based on evidence, and then to direct them as to how their learning needs can be met (ARG, 2002). In addition, Roskos and Neuman (2012) noted that FA helps teachers to be more conscious of the gaps in every student’s development. Teachers using FA are better able to meet the learning needs of their students, give constructive feedback, support independent learning, organize interactive assessments of students' attainments, adjust the teaching process, and find students’ weaknesses and strengths (OECD, 2014; Black & Wiliam, 1998). Hence, this positive interaction with students may only be achieved when the teacher believes in the advantages of FA, so it is crucial to gain information about what teachers think about FA. There is a gap in the literature about teachers’ perceptions of FA in Kazakhstan; I was only able to find a minimal amount of research in this context. Therefore, this study focuses on teachers’ experiences, perceptions, attitudes, and beliefs about FA. The study’s first purpose is to gain a deeper understanding of the main opportunities and obstacles to implementing FA according to teachers' realities, whilst the second is to obtain teachers’ suggestions regarding the integration of FA.

Based on the purposes mentioned in the research scope and research value, two research questions have been generated to guide this study:

1) What do primary school teachers perceive as the opportunities and obstacles to using FA to support learning in their classrooms?

2) What recommendations would teachers give to improve support for the use of FA in primary schools in Kazakhstan?

This qualitative study involved interviews with six primary school teachers from a Kazakhstani public school, and the data were analysed through thematic analysis.


Methodology, Methods, Research Instruments or Sources Used


This study aims to better understand teachers’ perceptions of formative assessment and to identify opportunities and obstacles in its implementation in classroom practice. Therefore, a qualitative research design was chosen to interpret teachers’ experiences.
The philosophical stance that I hold for this research is interpretivism. Interpretivism is an epistemological position based on people's understandings of social phenomena. My aim in this research, as an interpretivist, is to seek the reality of participants as they understand it rather than presenting the general results of research as a complete truth. Compared to positivists, who accept one correct answer, interpretivists accept multiple viewpoints of subjects (Quang, 2015). According to Than & Than (2015), the idea of multiple perspectives in interpretivism allows researchers to do in-depth research. As my research questions focus on primary school teachers’ opinions, attitudes and beliefs, I use an interpretive approach in this research.
Since my research is based on gathering qualitative data and the research purpose is to explore teachers’ perceptions of opportunities and obstacles in implementing formative assessment, I decided to use purposeful sampling. Six participants were chosen. As I am looking to obtain teachers’ perceptions, I decided to use semi-structured interviews as the main instrument of data collection. Because my research questions explore individual teachers’ views and opinions about formative assessment practice, using this technique provided the opportunity to communicate with participants and to listen to their attitudes and recommendations in a way that could not have been achieved had a questionnaire been used instead.
Before the actual research, a pilot study was designed to test the method.
When analysing the data, I transcribed all interviews in Kazakh and proofread the transcripts against the recordings. There were difficulties in dealing with participants’ unfinished thoughts and repeated words. Once all the interviews had been transcribed and anonymized, an inductive thematic analysis was conducted to investigate teachers’ perceptions of formative assessment.
I started the interviewing process after I received ethical approval from the University of Bristol Ethics Committee. I ensured that the participants were not harmed, from the beginning of the research up until the final report, because the involvement of human beings in research emphasizes the critical role of principles such as autonomy, privacy, confidentiality, and the well-being of participants.

Conclusions, Expected Outcomes or Findings
The main finding was that teachers believe that FA can promote effective learning and teaching (Black & Wiliam, 1998). They claimed that FA develops life-long skills amongst students and that it meets international standards. They could see opportunities for FA in terms of providing feedback and promoting individual work. They also noted that they reflected on their use of FA methods; self-reflection seems to help them to make corrections and generally improve subsequent lessons.
With regard to obstacles, teachers identified certain challenges that could prevent the implementation of FA. Thus, though they were keen to use FA, they pointed out that it is not easy to do so in practice (Torrance & Pryor, 2001). All the primary school teachers involved in the present study emphasized parents’ and students’ resistance to FA and the time constraints on its implementation. Participants therefore suggested recommendations to cope with these issues. The primary school teachers perceived PDP as a necessary means for developing teachers’ assessment literacy, but they felt that PDP would only be effective when it is practical and ongoing.
The teachers were asked to give their suggestions in order to understand what issues need to be solved with regard to the previously mentioned obstacles to the implementation of FA. Firstly, teachers highlighted the importance of having appropriate technology, such as computers, printers, and interactive boards, in the school. They believed that using technology is important to the effective use of FA. They noted that they already had such facilities in their school, but did not have enough for all the teachers to use. Secondly, they needed ongoing PDP to support substantial classroom changes. Thirdly, teachers asked the school administration to organize workshops and training sessions for parents to explain the opportunities offered by formative assessment.

References
Absolum, M., Flockton, L., Hattie, J. & Hipkins, R. (2009) Directions for Assessment in New Zealand. Available from: http://www.tki.org.nz/r/assessment/pdf/danz.pdf [Accessed 20th June 2020].
Adams, W. C. (2015) Conducting semi-structured interviews. In: Wholey, J. S., Harty, H. P. & Newcomer, K. E. (eds.) Handbook of Practical Program Evaluation. San Francisco, Jossey-Bass, pp. 492-505.
Andrade, H. & Du, Y. (2007) Student responses to criteria-referenced self-assessment. Assessment and Evaluation in Higher Education, 32(2), 159-181.
Ash, D. & Levitt, K. (2003) Working within the zone of proximal development: Formative assessment as professional development. Journal of Science Teacher Education, 14(1), 1-26.
Askew, S. & Lodge, C. (2000) Gifts, ping-pong and loops - linking feedback and learning. In: Askew, S. (ed.) Feedback for Learning. London, RoutledgeFalmer, pp. 1-18.
Aslamtas, I. (2016) Teachers' Perceptions of Using Formative Assessment Methods in the Classroom. MA thesis, University of East Anglia.
Bailey, B. (2000) The impact of mandated change on teachers. In: Hargreaves, A. & Bascia, N. (eds.) The Sharp Edge of Change: Teaching, Leading and the Realities of Reform. London, Falmer Press.
Bell, J. & Waters, S. (2018) Doing Your Research Project. Seventh edition. London, Open University Press, p. 29.
Black, P. & Wiliam, D. (2001) Inside the Black Box: Raising Standards Through Classroom Assessment. Available from: http://weaeducation.typepad.co.uk/files/blackbox-1.pdf [Accessed 11th July 2020].
Black, P. & Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74.
Black, P. & Wiliam, D. (2009) Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31. Available from: https://kclpure.kcl.ac.uk/portal/files/9119063/Black2009_Developing_the_theory_of_formative_assessment.pdf [Accessed 5th June 2020].
Clark, I. (2012) Formative assessment: Assessment is for self-regulated learning. Educational Psychology Review, 24, 205-249. doi: 10.1007/s10648-011-9191-6 [Accessed 20th August 2020].
Cohen, L., Manion, L. & Morrison, K. (2011) Research Methods in Education. Seventh edition. London, Routledge, pp. 289-290.
Cowie, B. & Bell, B. (1999) A model of formative assessment in science education. Assessment in Education: Principles, Policy & Practice, 6(1), 101-116. doi: 10.1080/09695949993026 [Accessed 10th June 2020].
Fullan, M. (2007) The New Meaning of Educational Change. Fourth edition. New York, Teachers College Press.
Kallio, H., Pietilä, A.-M., Johnson, M. & Kangasniemi, M. (2016) Systematic methodological review: developing a framework for a qualitative semi-structured interview guide. Journal of Advanced Nursing, 72(12), 2954-2965. doi: 10.1111/jan.13031 [Accessed 11th June 2020].
National Academy of Education named after I. Altynsarin (2015) Methodological and Educational-Methodological Basis of the Introduction of the Criteria-Based Assessment System. MoES.


 