Conference Agenda

All times are shown in the time zone of the conference.
Session Overview
Session
11 SES 09 A: School Performance and Quality Models
Time:
Thursday, 24/Aug/2023:
9:00am - 10:30am

Session Chair: Andra Fernate
Location: Sir Alexander Stone Building, 204 [Floor 2]

Capacity: 55 persons

Paper Session

Presentations
11. Educational Improvement and Quality Assurance
Paper

The School Improvement Wheel: Shaping Transitional Processes in Dynamic Societies

Frank Brückel1, Heike Beuschlein1, Rachel Guerra2, Reto Kuster1, Susanna Larcher1, Regula Spirig1

1Pädagogische Hochschule Zürich, Switzerland; 2Schulamt Fürstentum Liechtenstein, Liechtenstein

Presenting Author: Brückel, Frank; Beuschlein, Heike

Remark: An earlier version of this project, including preliminary results, was presented two years ago.

Our rapidly changing society influences the work of schools. Today, public schools in different countries are under enormous pressure to adapt and change as they deal with technological developments and cultural shifts that lead to the increasing heterogenization of societies (cf. e.g. OECD 2019, 2016; Imlig, Lehmann & Manz 2018; European Union 2016; Altrichter & Maag Merki 2016).

In spite of the abundant literature on the conceptual and practical notions and processes of educational change, there is still a need to learn more about how to conceptualize and put into practice transitional change processes in individual schools and school districts (Steffens, Heinrich & Dobbelstein 2019; Reinbacher 2016). Transitional change processes refer to the steps and strategies that a district takes to implement significant changes in its operations and policies, such as curriculum reform or the implementation of new technologies. They can be complex and multi-faceted, involving a range of stakeholders such as students, teachers, administrators, and community members.

For principals, teacher leaders and project managers in charge of implementation processes at different levels, these accelerated changes raise the question of how to achieve successful and sustainable implementation that is accepted and carried through by all those involved.

Many leaders have willingly taken on the challenge of implementing numerous, simultaneous and, in part, highly complex transitional processes. In doing so they are often confronted with scepticism, reserve, doubt and even outright boycott by politicians, staff and parents (Rasmussen 2017; Landert 2014).

This is where the project Shaping Transitional Processes in Dynamic Societies, conducted by a research group of the Zurich University of Teacher Education together with the education authority of the Principality of Liechtenstein, comes in. The project aims to develop relevant and helpful support services and practices for school principals and project leaders with which current and future reforms and improvements can be mastered alongside the demanding daily school routine.

In order to answer the questions and to advance the research process of the project as transparently and comprehensibly as possible, the approach of Design-Based Research (DBR) was used (Euler and Sloane 2014; Euler 2017). Due to its methodological design, the chosen approach opens up the possibility to think about and design practical school improvement processes in complementarity with school improvement research (Rau, Gerber & Grell 2022, 353) and thus to consider both the relevance for practice and the further development of theoretical findings.


Methodology, Methods, Research Instruments or Sources Used
The project follows an approach by Euler and Sloane (2014), which comprises five phases and is guided by design principles. The research started with a literature review on school improvement, followed by interviews with principals, teachers and members of school authorities. Afterwards the results were triangulated: results were compared; terminology was reviewed, sharpened and condensed; redundancies were cleaned up; individual factors were combined; and subcategories were formulated. A challenge was to understand the statements beyond their cultural context, to assign them to the factors accordingly, and to compare and superimpose English and German terminology in order to gain a deeper understanding of how changes can be mastered successfully. At the end of this work, seven factors with their sub-factors remained.
After finishing the literature review, the evaluation of the interviews and the triangulation of the results, a prototype was developed. For the design of this prototype it was important that the factors could be presented and described so that
- the entire spectrum of factors important to school improvement is represented,
- the description is comprehensible and accessible without diminishing complexity,
- it supports principals and project leaders in constructively coping with transitional processes in their own schools, and
- it supports principals and project leaders in competently initiating transitional processes in their schools, recognising snags early on and handling them constructively.
At the end of this step, in late 2020, a first prototype of the model was available (Brückel, Kuster et al. 2022).
To test whether the model and the material are relevant to stakeholders, dialogue workshops (Brückel, Larcher et al. 2019; Bohm 2008) with school and project leaders (20 to 25 participants) were held.
The aim of the workshops was to test
- how target groups receive the model and the material (e.g. do they use the framework to manage their change projects?),
- whether they are considered helpful and goal-oriented,
- what could be missing, and
- how the material can be supplemented and improved.
Each workshop was divided into two phases: in phase 1, each participant examined the model and the material individually; in phase 2, the practical sustainability of the model and the material was discussed in the group. A diversified group is critical to these workshops to ensure that all relevant perspectives are considered. The final result is a science-based model that meets the demands of practice.

Conclusions, Expected Outcomes or Findings
The paper presents the project and its objectives, shows how the design-based research process was conducted, and discusses the model and the school improvement factors that emerged from the research as salient. The results show that certain school improvement factors are relevant and consistent in any change process. These factors can be determined beyond a national context and are not culturally driven: learning of students, mindset, communication and cooperation, individual and organisational competences, framework conditions, process design, multi-level school system, leadership, and dynamics.
Some limitations became apparent during the research process. For example, there is no consistent wording for school development, school improvement or educational change; in consequence, a literature review must address the question of which papers are to be considered. Due to a lack of resources, only a limited number of interviews were analyzed.
Finally, the question of how to translate research results into a model that is relevant for principals and project leaders has to be discussed critically.

References
Altrichter, Herbert & Maag Merki, Katharina (2016). Handbuch Neue Steuerung im Schulsystem. Wiesbaden: Springer VS.
Bohm, David (2008). Der Dialog. Das offene Gespräch am Ende der Diskussionen. Stuttgart: Klett-Cotta.
Brückel, Frank, Larcher, Susanna, Kuster, Reto, Spirig, Regula, Guerra, Rachel & Annen, Luzia (2022). Das Schulentwicklungsrad. Eine Reflexionshilfe für die Führung schulischer Veränderungsprozesse. #schuleverantworten 2022_2, 46–56. https://doi.org/10.53349/sv.2022.i2.a199
Brückel, Frank, Larcher, Susanna, Annen, Luzia & Kuster, Reto (2019). Entwicklung von praxisnahen Arbeitsmaterialien im Kontext Tagesschule/Tagesstrukturen. In Sabine Maschke, Gunhild Schulz-Gade & Ludwig Stecher (Hrsg.), Jahrbuch Ganztagsschule. Frankfurt: Debus Verlag, S. 212–228.
European Union (2016). Smarter, greener, more inclusive? Indicators to support the Europe 2020 strategy. Luxembourg: Publications Office of the European Union.
Imlig, Flavian, Lehmann, Lukas & Manz, Karin (Hrsg.) (2018). Schule und Reform. Veränderungsabsichten, Wandel und Folgeprobleme. Wiesbaden: Springer VS.
Landert, Charles (2014). Die Berufszufriedenheit der Deutschschweizer Lehrerinnen und Lehrer. Bericht zur vierten Studie des Dachverbandes Lehrerinnen und Lehrer Schweiz (LCH). http://www.lch.ch/fileadmin/files/documents/Medienmitteilungen/141209_MK_Berufszufriedenheitsstudie_Berufsauftrag/141209_05_Studie_Charles_Landert_zur_Berufszufriedenheit.pdf [08.05.2015].
OECD (2016). Trends shaping education. Paris: OECD Publishing.
OECD (2019). OECD Future of Education and Skills 2030 concept. http://www.oecd.org/education/2030-project/teaching-and-learning/learning/learning-compass-2030/OECD_Learning_Compass_2030_concept_note.pdf [30.12.2020]
Rasmussen, Jens (2017). When constructions of the future meet curriculum development and teaching practice. Keynote at the ECER congress, Copenhagen, 23.08.2017.
Reinbacher, Paul (2016). Ein theoretischer Bezugsrahmen für "Schulentwicklung". Schweizerische Zeitschrift für Bildungswissenschaften 38 (2016) 2, S. 295–318.
Steffens, Ulrich, Heinrich, Martin & Dobbelstein, Peter (2019). Praxistransfer Schul- und Unterrichtsforschung – eine Problemskizze. In Claudia Schreiner, Christian Wiesner, Simone Breit, Peter Dobbelstein, Martin Heinrich & Ulrich Steffens (Hrsg.), Praxistransfer Schul- und Unterrichtsentwicklung, S. 11–26.


11. Educational Improvement and Quality Assurance
Paper

Making Sense of School Performance Feedback: Which Attributions Do Teachers and School Leaders Make?

Evelyn Goffin1,2, Rianne Janssen2, Jan Vanhoof1

1University of Antwerp, Belgium; 2KU Leuven, Belgium

Presenting Author: Goffin, Evelyn; Vanhoof, Jan

School performance feedback (SPF) systems present educational professionals with student achievement data in order to support self-evaluation and data-based decision making (Schildkamp & Teddlie, 2008; Visscher & Coe, 2003). However, in order to arrive at (formative) conclusions based on SPF, recipients need to make sense of the (summative) data they are presented with (Schildkamp, 2019; van der Kleij et al., 2015). Attribution, or reflecting on the causes of (learning) outcomes, is an integral part of this sensemaking process (Coburn & Turner, 2011). In line with the basic propositions of attribution theory (Weiner, 1985, 2010), the nature of educators’ causal explanations for student outcomes has been found to affect their emotions and subsequent (instructional) behavior (Wang & Hall, 2018).

Research finds that educational professionals often struggle to pinpoint factors that (may) have led to certain outcomes, particularly when those outcomes are unfavorable (Verhaeghe et al., 2010). Moreover, in defiance of ideals relating to data-based decision making, student performance is often attributed to external causes such as student characteristics, rather than matters internal to educational professionals, such as instruction (Evans et al., 2019; Schildkamp et al., 2016). This is especially apparent in cases of student failure (Wang & Hall, 2018). Consequently, it can be difficult to formulate productive decisions and constructive actions based on SPF (Schildkamp et al., 2016).

In the present study, we examine educational professionals’ causal explanations for results presented in an SPF report from a low-stakes national assessment (NA) in Flanders, Belgium. Like typical external standardized assessments, the Flemish NA relates school performance to standards and to the performance of reference groups (AERA et al., 2014; Visscher & Coe, 2003). We investigate educational professionals’ attributions for these data, with a particular interest in the locus of causality of the attributions they make. To what extent is the SPF interpreted introspectively (i.e., with regard to aspects of school policy and instructional practice that can be improved or sustained), and to what extent is school performance ascribed to external factors (such as aspects of the test itself, or input from students)?

Our review of the literature suggests that perceptions of school leaders remain somewhat underexposed in studies on attribution in educational data use. However, SPF intends to inform both school policy and instructional practice. Consequently, we will not only focus on teachers’ attributions in the present study, but also on causal explanations made by school leaders. Furthermore, we will examine causal explanations for both outcomes perceived as favorable, and those perceived as unfavorable. Perhaps in line with the very term “diagnosis”, we find that the attributions and attributional processes discussed in empirical literature are predominantly focused on explanations for student failure (van Gasse & Mol, 2021; Verhaeghe et al., 2010) and not so much for student success. However, school improvement is not only a narrative of identifying problems, but also of fostering what works.

In summary, the three research questions we address are:

RQ1 To which internal and external factors do teachers and school leaders attribute their school’s performance on a national assessment?

RQ2 Do attributions differ according to the attributor’s work role?

RQ3 Do attributions differ according to the perceived favorability of the result?

We adopt a qualitative approach and make use of authentic educational data, because our aim is to illuminate how individuals and groups make meaning of something they experience (here, their schools’ SPF) from their own perspectives (Savin-Baden & Major, 2013).


Methodology, Methods, Research Instruments or Sources Used
Data were collected in Flanders, the Dutch-speaking region of Belgium. Periodically, national assessments (NA) are organized in order to monitor whether attainment targets are met at system level and to explore whether school-, class- or student-level variables explain differences in achievement. These NA are conducted in representative samples of schools, which afterwards receive a personalized SPF report. School results are never publicized, nor do outcomes carry any formal consequences for participants.
Participants for the present study were recruited from the Flemish primary schools that had taken part in the May 2019 NA of People and Society (formerly a subdomain of the World Studies curriculum) in the sixth grade. In pursuit of maximum variation (Savin-Baden & Major, 2013), schools were categorized into four profiles based on aspects of their criterion-referenced and norm-referenced results on one focal test: Spatial use, traffic and mobility. Approximately one week after having received the SPF report, in the autumn of 2020, a random selection of schools within each profile was approached.
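As a rough illustration of this maximum-variation step (a sketch only, not the authors’ actual procedure: the cut-offs, variable names and data below are invented), the four profiles can be thought of as the crossing of a dichotomized criterion-referenced result with a dichotomized norm-referenced result:

    import pandas as pd

    # Hypothetical school-level results; the study's real cut-offs and
    # variables are not reported here and are assumed for illustration.
    schools = pd.DataFrame({
        "school": ["A", "B", "C", "D"],
        "pct_targets_met": [0.85, 0.60, 0.90, 0.55],    # criterion-referenced
        "diff_from_reference": [0.4, 0.2, -0.3, -0.5],  # norm-referenced
    })

    # Dichotomize each dimension and cross them into four sampling profiles.
    schools["criterion"] = schools["pct_targets_met"].ge(0.75).map({True: "high", False: "low"})
    schools["norm"] = schools["diff_from_reference"].ge(0.0).map({True: "above", False: "below"})
    schools["profile"] = schools["criterion"] + "/" + schools["norm"]
    print(schools[["school", "profile"]])

Schools would then be selected at random within each of the four resulting profiles.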
In total, semi-structured interviews were conducted with 22 participants (11 school leaders and 11 sixth-grade teachers) from 13 schools. The interview protocol included open-ended questions about participants’ appraisal of their schools’ results and about how they causally explained these results. These questions followed a think-aloud section in which participants were asked to describe and interpret the tables and graphs in their schools’ SPF report. Due to societal restrictions relating to the COVID-19 pandemic, the interviews were conducted online using video-conferencing software. Recordings were transcribed verbatim.
Data were coded with NVivo. Framework analysis served as the overall analytical method, as it is suited to both organizing and interpreting data, allows for a combination of inductive and deductive techniques, and facilitates the development of matrices to condense findings and explore patterns (Gale et al., 2013). In order to identify trends, the qualitative interview data were also ‘quantitized’ (Sandelowski et al., 2009).
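To illustrate what ‘quantitizing’ can look like in practice (a minimal sketch under assumed data: the participant IDs, codes and column names are invented, and the study itself worked with NVivo and framework matrices):

    import pandas as pd

    # One row per coded interview segment, as might be exported from NVivo.
    segments = pd.DataFrame({
        "participant": ["SL01", "SL01", "T01", "T01", "T02"],
        "role": ["school leader", "school leader", "teacher", "teacher", "teacher"],
        "code": ["school-level", "assessment", "student-level", "student-level", "assessment"],
    })

    # Framework-style matrix: participants as rows, attribution codes as
    # columns, cell values = number of coded segments (the quantitized view).
    print(pd.crosstab(segments["participant"], segments["code"]))

    # Aggregating by work role supports pattern exploration across roles (RQ2).
    print(pd.crosstab(segments["role"], segments["code"]))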

Conclusions, Expected Outcomes or Findings
Participants attribute their schools’ performance to a wide array of factors on the level of the school, the classroom, the student, and the assessment itself. School-level and class-level factors can be categorized as internal or external, based on the source of the attribution. We see, for instance, that school leaders reflect on factors such as teacher professionalism: a factor internal to teachers but external to themselves. Overall, school-level factors (such as the curricular line for the subject that was tested) and student-level factors (such as pupils’ cognitive capacity) are invoked most frequently, by school leaders and teachers respectively.
Throughout the dataset, external attributions dominate. We might relate this to educational professionals’ professional attitude: how far does my responsibility for student outcomes extend? Overall, we find few differences in attributional patterns between participants from schools that scored well and those that scored more poorly. However, reservations and concerns about (the design and the conditions of) the assessment – an external factor – are uttered primarily to explain negative results.
Most participants mention a whole range of factors when making causal ascriptions for their schools’ results in the SPF report. This suggests that educational professionals acknowledge that learning outcomes are the product of different building blocks. However, it also shows why it is not easy or straightforward to formulate an unambiguous analysis and an actionable diagnosis based on SPF. Finally, the finding that teachers and school leaders (even within the same school) emphasize different factors to interpret the same outcomes illustrates the importance of collective sensemaking in piecing together a complete story.
These insights are relevant not only for research on data-informed decision-making in schools, but also for educational professionals themselves, as well as for those who train them, and for those who design, offer and mandate assessments.

References
AERA, APA, & NCME. (2014). Standards for Educational and Psychological Testing. American Educational Research Association.
Coburn, C. E., & Turner, E. O. (2011). Research on Data Use: A Framework and Analysis. Measurement: Interdisciplinary Research & Perspective, 9(4), 173–206.
Evans, M., Teasdale, R. M., Gannon-Slater, N., Londe, P. G. la, Crenshaw, H. L., Greene, J. C., & Schwandt, T. A. (2019). How Did that Happen? Teachers’ Explanations for Low Test Scores. Teachers College Record: The Voice of Scholarship in Education, 121(2), 1–40.
Gale, N. K., Heath, G., Cameron, E., Rashid, S., & Redwood, S. (2013). Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology, 13(1), 117.
Sandelowski, M., Voils, C. I., & Knafl, G. (2009). On quantitizing. Journal of Mixed Methods Research, 3(3), 208–222.
Savin-Baden, M., & Major, C. H. (2013). Qualitative research: The essential guide to theory and practice. Routledge.
Schildkamp, K. (2019). Data-based decision-making for school improvement: Research insights and gaps. Educational Research, 61(3), 257–273.
Schildkamp, K., Poortman, C. L., & Handelzalts, A. (2016). Data teams for school improvement. School Effectiveness and School Improvement, 27(2), 228–254.
Schildkamp, K., & Teddlie, C. (2008). School performance feedback systems in the USA and in The Netherlands: a comparison. Educational Research and Evaluation, 14(3), 255–282.
van der Kleij, F. M., Vermeulen, J. A., Schildkamp, K., & Eggen, T. J. H. M. (2015). Integrating data-based decision making, Assessment for Learning and diagnostic testing in formative assessment. Assessment in Education: Principles, Policy & Practice, 22(3), 324–343.
Van Gasse, R., & Mol, M. (2021). Student guidance decisions at team meetings: do teachers use data for rational decision making? Studia Paedagogica, 26(4), 99–117.
Verhaeghe, G., Vanhoof, J., Valcke, M., & van Petegem, P. (2010). Using school performance feedback: perceptions of primary school principals. School Effectiveness and School Improvement, 21(2), 167–188.
Visscher, A. J., & Coe, R. (2003). School Performance Feedback Systems: Conceptualisation, Analysis, and Reflection. School Effectiveness and School Improvement, 14(3), 321–349.
Wang, H., & Hall, N. C. (2018). A Systematic Review of Teachers’ Causal Attributions: Prevalence, Correlates, and Consequences. Frontiers in Psychology, 9(DEC), 1–22.
Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548–573.
Weiner, B. (2010). The Development of an Attribution-Based Theory of Motivation: A History of Ideas. Educational Psychologist, 45(1), 28–36.


11. Educational Improvement and Quality Assurance
Paper

Validating the Kazakhstani Teacher Observational Protocol: A Pilot Study

Janet Helmer1, Matthew Courtney1, Bridget Goodman1, Kathy Malone2, Kulzhan Beysembayeva3, Filiz Polat1

1Nazarbayev University, Kazakhstan; 2University of Hawaii, USA; 3Eurasian National University, Kazakhstan

Presenting Author: Helmer, Janet; Goodman, Bridget

This three-year study aims to design and validate the Kazakhstani Teacher Observational Protocol (KTOP), which gauges the degree to which teachers and instructors in Kazakhstan successfully implement reform-based practices in high school and undergraduate STEM lessons.

The 2011-2020 Educational Strategy for Kazakhstan outlined the goal of further developing the ‘training system and professional development’, with the primary objective of improving teachers’ learning and professional mastery in the Kazakhstani school system. Education reform has been considered necessary because the secondary curriculum was perceived as overly dense, with a focus on rote learning that produced only superficial knowledge rather than deep mastery of topics (Fimyar, Yakavets, & Bridges, 2014).

Education reform across contexts has focused on teachers’ critical role in improving education quality (Schleicher, 2016). The question remains as to how initiated changes are validated or measured. Teacher observation instruments have been developed to measure effective teaching (Mantzicopoulos et al., 2018). In addition to providing standardized data to monitor education, these instruments can help to determine whether professional development programs or the use of reformed curricula are producing changes in teaching practices (MacIsaac et al., 2001). Classroom observation instruments can be constructive tools for aiding in the evaluation of teachers and designing professional development (Evenhouse et al., 2018). Classroom observation allows observers to gather information on student and teacher behaviours and the classroom environment within an authentic setting, which is one way of linking theory and practice in order to better understand the classroom environment (Snyder, 2012). Observations have been used to gather data on teachers’ integration of technology (Helmer et al., 2018); student-teacher interactions (Darling-Hammond, 2006); and explicit subject-area learning (Waxman et al., 2009). In Kazakhstan, the fast pace and wide range of reforms raise questions about teachers’ readiness to implement reforms in the classroom. Moreover, the historically common practice of “open lessons”, in which teachers and administrators observe lessons, may involve rehearsed teaching and lacks systematic criteria for evaluation.

To address these issues, this paper reports on the pilot study where researchers collected data to determine how well the KTOP is suited to the reform-based practice in Kazakhstan. The reform-based practice was conceptualized by way of the following six subscales: lesson design and implementation, methods/teaching strategies, communicative interactions, student-teacher relations, assessment interactions, and integration of content and language in teaching.

The specific research questions addressed were:

RQ1: What is the overall level of inter-rater reliability of the Kazakhstan Teaching Observational Protocol (KTOP) instrument?

RQ2: What is the level of inter-rater reliability of the KTOP instrument’s (a) lesson design and implementation (5 items), (b) methods/teaching strategies (5 items), (c) classroom culture (communicative interactions; 6 items), (d) classroom culture (student-teacher relationships; 5 items), (e) assessment interactions (3 items), and (f) integrating content and language in teaching (2 items) subscales?

RQ3: How well targeted is the KTOP instrument for identifying the higher- and lower-reform-based teaching practice in the Kazakhstani high school and higher educational contexts?


Methodology, Methods, Research Instruments or Sources Used
The Protocol was adapted from the Reformed Teaching Observation Protocol (RTOP) (Sawada et al., 2002). The same 21st-century reform practices on which the RTOP is based are included in many current international educational reform efforts, including those of Kazakhstan (e.g., Yakavets & Dzhadrina, 2014). In developing this tool, the team considered the best attributes of other instruments such as the COPUS (Smith et al., 2013), COPED (Wheeler et al., 2019), and the CLIL observation tool (de Graaff et al., 2007), adding sections on the teaching of STEM in English and on inclusive teaching practices, as both are current priorities in Kazakhstani educational reforms.

Instrumentation
A total of 26 items made up the overall KTOP instrument, distributed across the six subscales mentioned in RQ2. Each item was anchored by 0 = Never occurred and 4 = Very descriptive.
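The subscale structure named in RQ2 can be summarized compactly; the following sketch (the dictionary encoding is ours, with abbreviations as in Table 1 below) simply verifies that the six subscales account for all 26 items:

    # Item counts per KTOP subscale, as listed in RQ2.
    KTOP_SUBSCALES = {
        "LDAI": 5,     # lesson design and implementation
        "MTS": 5,      # methods/teaching strategies
        "CC(CI)": 6,   # classroom culture: communicative interactions
        "CC(STR)": 5,  # classroom culture: student-teacher relationships
        "AI": 3,       # assessment interactions
        "ICLIT": 2,    # integrating content and language in teaching
    }
    assert sum(KTOP_SUBSCALES.values()) == 26  # the full 26-item instrument

    # Every item is scored on the same five-point anchor.
    SCORE_ANCHORS = {0: "Never occurred", 4: "Very descriptive"}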

Sampling
This study employed convenience sampling, drawing on schools that volunteered to trial the observation instrument in their classes. A total of 25 unique lessons delivered by 13 unique teachers from five distinct institutions formed the focus of the current study. Lesson observations took place between February and May 2022.

On average, the 13 teachers who participated in the current study had 10.78 years of teaching experience (SD = 6.44). They were all drawn from educational institutions in the Astana region of Kazakhstan (three high schools and two universities). The lessons observed covered the following subject areas: physics, calculus, math, chemistry, ICT, computer-aided engineering, algebra, biomechanical engineering, and general math. All classes were taught in English.

Two trained observers were assigned to judge each of the 25 lessons, yielding 50 judgements in total; individual observers contributed to differing numbers of lessons.

Conclusions, Expected Outcomes or Findings
Overall Level of Inter-Rater Reliability (RQ1)
The intra-class correlation for the 26-item instrument was estimated at .72 (N = 650), which can be considered acceptable (.50 < ICC ≤ .749; 95% CI [.667, .755]).
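As a minimal sketch of how such an inter-rater ICC can be estimated (the long-format layout, column names and file are assumptions; the paper does not state which software was used):

    import pandas as pd
    import pingouin as pg

    # One row per judgement: 650 = 25 lessons x 26 items, each judged twice.
    ratings = pd.read_csv("ktop_ratings.csv")  # hypothetical file

    # A one-way random-effects ICC (ICC1) suits a design in which each
    # lesson is judged by a different pair of observers; "observer" here
    # encodes the first vs. second observer of a given lesson.
    icc = pg.intraclass_corr(
        data=ratings,
        targets="lesson_item",  # the rated object (lesson-item combination)
        raters="observer",
        ratings="score",        # the 0-4 item score
    )
    print(icc[["Type", "ICC", "CI95%"]])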

Level of Inter-Rater Reliability of the KTOP Instrument’s Six Subscales (RQ2)

Table 1
Inter-Rater Reliability of the Six KTOP Subscales

Abbr.     Subscale                                            ICC    L 95% CI   U 95% CI
LDAI      Lesson design and implementation                    .689   .558       .782
MTS       Methods/teaching strategies                         .683   .549       .778
CC(CI)    Classroom culture (communicative interactions)      .701   .588       .783
CC(STR)   Classroom culture (student-teacher relationships)   .577   .398       .703
AI        Assessment interactions                             .730   .574       .829
ICLIT     Integrating content and language in teaching        .785   .622       .878

Note. Abbr. = scale abbreviation; L/U 95% CI = lower/upper bound of the 95% confidence interval for the ICC estimate.


All subscale ICCs were considered moderate, with the exception of ICLIT, which exhibited "good" inter-rater reliability (ICC = .785).

KTOP Targeting (RQ3)

In terms of targeting, the items did not function optimally. As explained, each item had five levels (0-4); however, the lowest level (zero) was not ‘reached’ for 21 of the total 26 items. The items nevertheless discriminated well: only one item exhibited a negative item-rest point-biserial correlation (MTS.8, r = -.040), and only one item, CC(CI).16, was slightly underfitting to the Rasch model, with an outfit value of 1.61 (p < .05).
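The item-rest point-biserial statistic quoted above can be illustrated with a short sketch (the lesson-by-item score matrix and file name are hypothetical, and the Rasch outfit analysis itself is not reproduced here):

    import pandas as pd

    # Hypothetical wide matrix: one row per judged lesson, one column per item.
    scores = pd.read_csv("ktop_item_scores.csv", index_col="lesson")

    def item_rest_correlation(df: pd.DataFrame, item: str) -> float:
        """Correlate one item's scores with the rest-score (total minus that item)."""
        rest = df.drop(columns=item).sum(axis=1)
        return df[item].corr(rest)

    for item in scores.columns:
        r = item_rest_correlation(scores, item)
        flag = "  <- negative discrimination" if r < 0 else ""
        print(f"{item}: r = {r:.3f}{flag}")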


Given the positive results of the pilot study, the tool will be translated into Kazakh and used with a larger sample of teachers and instructors, as the psychometric analysis in the current study is very much underpowered.


References
Darling-Hammond, L. (2006). Constructing 21st-century teacher education. Journal of Teacher Education, 57(3), 300-314.

de Graaff, R., Koopman, G. J., Anikina, Y., & Westhoff, G. (2007). An observation tool for effective L2 pedagogy in Content and Language Integrated Learning (CLIL). International Journal of Bilingual Education and Bilingualism, 10(5), 603-624. DOI: 10.2167/beb462.0

Evenhouse, D., Zadoks, A., Silva de Freitas, C. C., Patel, N., Kandakatla, R., Stites, N. & DeBoer, J. (2018). Video coding of classroom observations for research and instructional support in an innovative learning environment. Australasian Journal of Engineering Education, 23(2), 95-105.

Fimyar, O., Yakavets, N., & Bridges, D. (2014). Educational reform in Kazakhstan: The contemporary policy agenda. In D. Bridges (Ed.), Educational reform and internationalisation: The case of school reform in Kazakhstan. Cambridge: Cambridge University Press.
 
MacIsaac, D., Sawada, D., & Falconer, K. (2001, April). Using the Reformed Teaching Observation Protocol (RTOP) as a catalyst for self-reflective change in secondary science teaching. Paper presented at the American Educational Research Association, Seattle, WA.

Mantzicopoulos, P., Patrick, H., Strati, A., & Watson, J. S. (2018). Predicting kindergarteners' achievement and motivation from observational measures of teaching effectiveness. The Journal of Experimental Education, 86(2), 214-232.

Sawada, D., Piburn, M. D., Judson, E., Turley, J., Falconer, K., Benford, R., & Bloom, I. (2002). Measuring reform practices in science and mathematics classrooms: The reformed teaching observation protocol. School Science and Mathematics, 102(6), 245-253.

Schleicher, A. (2016). Teaching excellence through professional learning and policy reform: Lessons from around the world, International Summit on the Teaching Profession. Paris: OECD Publishing. http://dx.doi.org/10.1787/978926452059-en

van der Lans, R. M., van de Grift, W. J., & Van Veen, K. (2018). Developing an instrument for teacher feedback: Using the Rasch model to explore teachers' development of effective teaching strategies and behaviors. The Journal of Experimental Education, 86(2), 247-264.

Waxman, H. C., Padrón, Y. N., Franco-Fuenmayor, S. E., & Huang, S. L. (2009). Observing classroom instruction for ELLs from student, teacher, and classroom perspectives. TABE Journal, 11(1), 63-95.

Wheeler, L. B., Navy, S. L., Maeng, J. L., & Whitworth, B. A. (2019). Development and validation of the Classroom Observation Protocol for Engineering Design (COPED). Journal of Research in Science Teaching, 56, 1285-1305.


Yakavets, N., & Dzhadrina, M. (2014). Educational reform in Kazakhstan: Entering the world arena. In D. Bridges (ed.), Educational reform and internationalisation: The case of school reform in Kazakhstan (pp. 28-52). Cambridge, UK: Cambridge University Press.