Conference Agenda

Session Overview
Session
09 SES 13 A JS: Advancing Assessment Tools and Strategies in Subject-Specific Contexts
Time: Thursday, 24/Aug/2023, 5:15pm - 6:45pm

Session Chair: Serafina Pastore
Location: Gilbert Scott, EQLT [Floor 2]

Capacity: 120 persons

Joint Paper Session NW 09 and NW 27

Presentations
09. Assessment, Evaluation, Testing and Measurement
Paper

Construction and Validation of a Reading Literacy Test for English Language Learners in Kazakhstan

Aliya Olzhayeva

Nazarbayev University, Kazakhstan

Presenting Author: Olzhayeva, Aliya

Standardized testing is a manifestation of the neoliberal agenda and human capital theory (Rizvi & Lingard, 2010). Testing is perceived as one of the instruments for holding teachers and schools accountable for students’ performance, which can lead to either rewards or sanctions. Standardized testing is one of the means of implicit control and governance that allows policymakers and politicians to audit the education system (Graham & Neu, 2004). Critics of standardized testing argue that it widens the gap between different groups of the student population (Au, 2016), encourages teachers to teach to the test and ignore unassessed curriculum content and other subjects (Lingard, 2011; Koretz, 2017; Bach, 2020), and facilitates the practice of gaming the system to show growth in student performance (Rezai-Rashti & Segeren, 2020; Heilig & Darling-Hammond, 2008). Despite this severe criticism, standardized testing can still be used as an effective tool to inform teaching and learning. Testing can help curriculum designers, test developers, teachers, and educators identify students’ needs and tailor instruction to those needs (Hamilton et al., 2002; Brown, 2013; Singh et al., 2015). It can also allow policymakers to evaluate the success and efficacy of the education system and identify potential issues to be addressed (Campbell & Levin, 2009).

The purpose of the study is to construct and validate reading assessments that account for local contextual factors, such as curriculum standards and expectations, and that can provide formative information to students and teachers. The research comprises several stages: a pre-pilot study, a pilot study, and the main studies. This abstract presents the results of the pilot study.

The study aims to answer the following research questions:

What are the students' perceptions of the proposed testing instrument?

What are the psychometric properties of the pilot test?

The theoretical framework that guides my research is evidence-centered design (ECD; Mislevy & Riconscente, 2006). ECD employs the concept of layers, where each layer possesses its own characteristics and processes. The goal of domain analysis is to collect substantive information about the target domain and to determine the knowledge, skills, and abilities about which assessment claims will be made. Domain modelling organizes the results of domain analysis to articulate an assessment argument that links observations of student actions to inferences about what they know or can do. Design patterns in domain modelling are arguments that enable assessment specialists and domain experts to gather evidence about student knowledge (Mislevy & Haertel, 2006). The third layer, the conceptual assessment framework (CAF), provides the internals and details of operational assessments. The structure of the CAF is expressed as variables, task schemas, and scoring mechanisms. This layer generates a blueprint for the intended assessment and gives it concrete shape. Assessment implementation constructs and prepares all operational elements specified in the CAF: authoring tasks, finalizing scoring rubrics, establishing parameters in measurement models, and the like. The assessment delivery layer is where students engage with the assessment tasks, their performances are evaluated and measured, and feedback and reports are produced. Thus, ECD provides an essential framework for approaching test design, scoring, and analysis. In my study, the ECD framework guides the work to ensure that each layer is constructed and the relevant evidence accumulated.


Methodology, Methods, Research Instruments or Sources Used
The instrument of the proposed research is designed to assess students’ reading literacy in English. A number of standardized tests that measure reading literacy were reviewed. The main criteria for selecting tests were: 1) Anglophone tests; 2) availability of an online test for public use; 3) standardized reading literacy tests; 4) tests for secondary and high school students; 5) grade-appropriate language and cognitive difficulty levels of the reading passages and test items; 6) a sufficient number of test items; 7) tests that have been used with large populations. Texts that displayed cultural bias or other features that might negatively impact test validity and reliability were not selected.
Since it is important to ensure alignment between the assessment instrument and the curriculum, subject experts were involved in the present study. I also used one element of Webb’s (1997) alignment model, the depth-of-knowledge (DOK) criteria, which comprise four levels of cognitive complexity: recall of information, basic reasoning, complex reasoning, and extended reasoning.
First, experts matched DOK levels and curriculum objectives with the test items. Before coding the full set of test items independently, the subject experts each coded five to ten items and then compared the DOK levels assigned to those items and the corresponding learning objectives (Webb, 2002). After this stage, experts identified one or two objectives from the curriculum corresponding to each test item. Experts were not required to reach a unanimous decision about the correspondence between items and objectives. Teachers’ feedback would help to eliminate items that (1) do not map onto the curriculum or (2) might be considered ambiguous or confusing for students. Expert judgement may also help to identify potential sources of construct-irrelevant difficulty or items that might be far too easy for lower-ability students (AERA, 2014).
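The comparison step described above is, in effect, an inter-rater agreement check. Purely as an illustration (the abstract reports no agreement statistics, and the ratings below are invented), a minimal R sketch could compute percent exact agreement and, as one common chance-corrected option, Cohen’s kappa via the psych package:

```r
# Hypothetical DOK codes (1 = recall ... 4 = extended reasoning) assigned by
# two experts to the same calibration set of items.
library(psych)

rater1 <- c(1, 2, 2, 3, 1, 4, 2, 3, 3, 1)
rater2 <- c(1, 2, 3, 3, 1, 4, 2, 2, 3, 1)

mean(rater1 == rater2)                     # percent exact agreement
psych::cohen.kappa(cbind(rater1, rater2))  # chance-corrected agreement
```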
Piloting assessment items is one of the ways to ensure test validity. Standard 3.3 of AERA (2014) states that analyses carried out in pilot testing should identify aspects of the test design, content, and format that might distort the interpretation of test scores for the intended population. In the current study, pre-test items were piloted with Grade 11 students in one of the target schools.
After test piloting, retrospective probing was conducted. The main goal of retrospective probing is to examine participants’ understanding of the tasks or questions (Leighton, 2017).

Conclusions, Expected Outcomes or Findings
Five Grade 11 students (three female, two male) were interviewed about their perceptions of the reading test. Overall, students made recommendations regarding some of the questions and distractors. For instance, students pointed out that the distractors in some questions were unclear. Furthermore, some students argued that two correct options were possible in one of the questions. The questions identified as problematic or confusing for students were reviewed, and the corresponding changes were made.
The reading literacy test comprised 32 multiple-choice questions (31 items were dichotomous, while one item was partial credit: 0, 1, 2). The test was administered to 69 Grade 11 students at a pilot school site. The mean total score was 17.23 (SD = 5.71); the minimum score was 5 and the maximum 29.
Cronbach’s alpha was estimated at .79, which indicates an acceptable level of test reliability (DeVellis, 2017). However, point-biserial correlations showed that some items have low levels of item discrimination, even though all items exhibited positive values, suggesting that all items tap the reading construct.
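Both statistics are standard classical test theory quantities. As a minimal R sketch (the response matrix below is simulated; the study’s actual data are not reproduced here), coefficient alpha and corrected item-total point-biserial correlations could be computed as follows:

```r
# Simulated stand-in for the scored response matrix: 69 students x 31
# dichotomous items (the partial-credit item is omitted for simplicity).
library(psych)

set.seed(1)
resp <- as.data.frame(matrix(rbinom(69 * 31, 1, 0.5), nrow = 69))

psych::alpha(resp)$total$raw_alpha  # Cronbach's alpha for the test

# Corrected point-biserial discrimination: item vs. total of remaining items
pt_bis <- sapply(seq_len(ncol(resp)),
                 function(i) cor(resp[[i]], rowSums(resp[-i])))
round(pt_bis, 2)  # low positive values flag weakly discriminating items
```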
Test items were analyzed with the Rasch model using the TAM package (Robitzsch et al., 2021) and marginal maximum likelihood (MML) estimation (Bock & Aitkin, 1981). As one item involved partial scoring, the Masters (1982) partial credit Rasch model was used. With the scale identified by constraining the latent mean to zero, the mean item difficulty estimate was -0.09 (SD = 0.87).
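For reference, the two measurement models named here take the following standard forms, with person ability θ_p, item difficulty b_i, and step parameters δ_ik (notation as in Masters, 1982):

```latex
% Dichotomous Rasch model
P(X_{pi}=1 \mid \theta_p) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}

% Partial credit model for an item with score categories x = 0, 1, ..., m_i
P(X_{pi}=x \mid \theta_p) =
  \frac{\exp \sum_{k=0}^{x} (\theta_p - \delta_{ik})}
       {\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta_p - \delta_{ik})},
\qquad \text{with } \sum_{k=0}^{0} (\theta_p - \delta_{ik}) \equiv 0
```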
Item fit analysis revealed some problematic items that should be reviewed prior to testing with a larger population of students.
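Since the abstract names the TAM package explicitly, the calibration and fit check can be sketched in a few lines of R (hedged: the simulated data and object names below are illustrative, not the study’s script):

```r
# Simulated stand-in: 31 dichotomous items plus one 0/1/2 partial-credit item.
library(TAM)

set.seed(1)
resp <- cbind(matrix(rbinom(69 * 31, 1, 0.5), nrow = 69),
              sample(0:2, 69, replace = TRUE))

# MML estimation; irtmodel = "PCM" fits the partial credit model, which
# reduces to the dichotomous Rasch model for 0/1 items.
mod <- TAM::tam.mml(resp = resp, irtmodel = "PCM")
summary(mod)              # item difficulties, EAP reliability

fit <- TAM::tam.fit(mod)  # infit/outfit mean-square statistics
fit$itemfit               # items far from 1 are candidates for review
```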

References
American Educational Research Association (2014). Standards for educational and psychological testing. American Educational Research Association.
Au, W. (2016). Meritocracy 2.0: High-stakes, standardized testing as a racial project of neoliberal multiculturalism. Educational Policy, 30(1), 39-62.
Bach, A. J. (2020). High-Stakes, standardized testing and emergent bilingual students in Texas. Texas Journal of Literacy Education, 8(1), 18-37. Retrieved September 30, 2021, from https://www.talejournal.com/index.php/TJLE/article/view/42
Brown, G. T. (2013). asTTle – A National Testing System for Formative Assessment: how the national testing policy ended up helping schools and teachers. In M. Lai & S. Kushner (Eds.), A developmental and negotiated approach to school self-evaluation (pp. 39-56). Emerald Group Publishing Limited.
Campbell, C., & Levin, B. (2009). Using data to support educational improvement. Educational Assessment, Evaluation and Accountability, 21(1), 47-65.

DeVellis, R. F. (2017). Scale development. Theory and Applications (4th ed.). SAGE.
Hamilton, L. S., Stecher, B. M., & Klein, S. P. (2002). Introduction. In L.S. Hamilton, B.M. Stecher & S.P. Klein (Eds.), Making sense of test-based accountability in education (pp.1-12). RAND.
Heilig, J. V., & Darling-Hammond, L. (2008). Accountability Texas-style: The progress and learning of urban minority students in a high-stakes testing context. Educational Evaluation and Policy Analysis, 30(2), 75-110.
Koretz, D. (2017). The testing Charade: Pretending to make schools better. The University of Chicago Press.
Leighton, J.P. (2017). Using think-aloud interviews and cognitive labs in educational research. Oxford University Press.
Lingard, B. (2011). Policy as numbers: Ac/counting for educational research. The Australian Educational Researcher, 38(4), 355-382.
Masters, G. N. (1982). A Rasch model for partial credit scoring. Psychometrika, 47, 149-174. doi:10.1007/BF02296272
Mislevy, R. J., & Haertel, G. (2006). Implications of evidence-centered design for educational assessment. Educational Measurement: Issues and Practice, 25, 6–20.
Mislevy, R. J., & Riconscente, M. M. (2006). Evidence-centered assessment design: Layers, concepts, and terminology. In S. Downing & T. Haladyna (Eds.), Handbook of test development (pp. 61–90). Erlbaum.
Rezai-Rashti, G. M., & Segeren, A. (2020). The game of accountability: perspectives of urban school leaders on standardized testing in Ontario and British Columbia, Canada. International Journal of Leadership in Education, 1-18. https://doi.org/10.1080/13603124.2020.1808711
Robitzsch, A., Kiefer, T., & Wu, M. (2021). TAM: Test Analysis Modules. R package version 3.7-16. https://CRAN.R-project.org/package=TAM
Singh, P., Märtsin, M., & Glasswell, K. (2015). Dilemmatic spaces: High-stakes testing and the possibilities of collaborative knowledge work to generate learning innovations. Teachers and Teaching, 21(4), 379-399.


09. Assessment, Evaluation, Testing and Measurement
Paper

Developing a Linear Scaled Assessment Tool for Mathematical Modelling in Chemistry

Benjamin Stöger, Claudia Nerdel

Technical University of Munich, Associate Professorship of Life Science Education

Presenting Author: Stöger, Benjamin

Model assumptions in the natural sciences are based on mathematical concepts, regularities, or assumptions. For this reason, mathematical modelling is central to understanding the development and validation of models in the natural sciences. The ability to evaluate, change, and apply models in the sense of gaining knowledge is understood as modelling competence. With the help of modelling cycles, the modelling process can be divided into individual steps, which provides insight into the modelling process.

Blum & Leiß (2005) developed a framework for mathematical modelling. They distinguished two main dimensions: the "rest of the world" (which includes real-world problems, their structuring, mathematical description, and the interpretation and evaluation of mathematical results) and "mathematics". The translation between these dimensions is understood as mathematical modelling. Based on these dimensions, a seven-step modelling cycle was developed. Starting from a real situation/problem, the steps are to understand the situation (1) and to simplify and structure it with a focus on the problem (2), followed by mathematisation (3), which marks the transition to the dimension of "mathematics". There, results are generated with mathematical methods (4) and translated back into the context, and thus back into the "rest of the world" dimension, with a focus on the problem (5). These results are then validated in relation to the context (6), and an answer is given to the concrete problem (7).

Based on the cycle for mathematical modelling developed by Blum & Leiß (2005), various subject-specific modelling cycles have been derived. Goldhausen & Di Fuccia (2014) derived a mathematical modelling cycle for the subject of chemistry. For this purpose, an additional dimension, "chemistry", was added between "rest of the world" and "mathematics". This is necessary because a real chemical situation (e.g. a chemical experiment) must first be transferred into subject-specific models in order to describe and interpret the situation. The steps of the mathematical modelling cycle were adapted to the specific requirements of a chemical contextualisation. In the first step, a problem/experiment is identified at the macroscopic level and a situation model is created (1). This is then translated into a chemical model (submicroscopic or symbolic level) (Johnstone, 1991) (2). The chemical model is then mathematised (3), for which, according to Kimpel (2018), a deeper understanding of the model is necessary. With the developed mathematical model, mathematical results can be generated with the help of mathematical tools, similar to Blum & Leiß (2005) (4). These can then be translated back into the chemical model (5) and checked for their usefulness in the subject context (6), so that they can finally be applied to the experiment/problem (7).

As diagnostic models, modelling cycles offer the possibility of gaining insight into the complex cognitive processes of learners during modelling. In mathematics didactics, modelling cycles have been used to develop test instruments measuring mathematical modelling ability (Haines, Crouch & Davis, 2001; Brand, 2014; Hankeln, Adamek & Greefrath, 2019). In all cases, the steps of a modelling cycle were divided into empirically based categories, and items were constructed for these categories. Prior to testing, various models were postulated for the items on the basis of empirical studies. With the help of Rasch measurement, the data were compared with the postulated models.

Since this type of test development has so far been conducted only for mathematical modelling in general, this study investigated whether a test instrument can be used to assess learners' mathematical modelling skills in chemistry.


Methodology, Methods, Research Instruments or Sources Used
A test instrument for mathematical modelling was developed on the basis of the modelling cycle by Goldhausen & Di Fuccia (2014) and the methodological approach by Brand (2014). For this purpose, the cycle was divided into five sections (A1-A5). Four sections (A1, A2, A4, A5) each describe the transition between the dimensions described in the model (rest of the world, chemistry, and mathematics). For these categories, 12 items (including question and associated answer format) from different chemical subject areas as well as different contexts from nature and technology were constructed. Category A3 focuses on answering mathematical questions and tasks from school mathematics of varying difficulty. Twelve areas were developed for it as well, each containing three items of varying difficulty, for a total of 36 items. Each of these categories focuses on a specific aspect of mathematical modelling ability.
For example, category A1 includes questions that focus on understanding and constructing a problem or on structuring and simplifying problems or tasks. In addition, this category includes tasks in which relevant aspects of an issue have to be identified or suitable chemical models have to be selected. Category A2 revolves around mathematising the selected model: selecting suitable mathematical formulae, describing mathematical relationships, or developing mathematical formulae. The third category (A3) is about working mathematically; accordingly, mathematical concepts, working methods, and solutions are applied here. In category four (A4), mathematical results have to be placed in the subject context, for example by identifying the unit of a mathematical result, assigning mathematical results to variables, or classifying mathematical results within the subject context. The last category (A5) describes the interpretation of the result in light of the initial situation: checking a result for its meaningfulness, checking whether the result fits the model used, or formulating answer statements.
All items in all categories have a closed answer format with five answer options each: one correct answer, two 'plausible' distractors based on misconceptions, and two incorrect answers. All items were distributed across twelve test booklets, each containing three items per category (nine for A3). To obtain a linear scale, the coded data sets are evaluated using Rasch analyses (Boone, Staver & Yale, 2014). The person measures obtained for the individual categories served as the basis for a correlation analysis of the individual categories with the overall instrument.

Conclusions, Expected Outcomes or Findings
The data for validating the test instrument were collected from university students. So far, students in the STEM fields of chemistry, mathematics, physics, biology, and mechanical engineering have been surveyed; data collection will continue until the end of February 2023. N = 296 students have participated in the study to date. On the basis of these data, a first analysis was conducted to identify an initial trend in the results. For this purpose, the response data were coded with respect to the distractors. With the help of the program Winsteps (Linacre, 2000), the data were examined using a Rasch analysis. In a first step, the quality of the items was analysed, i.e. how well the items fit the model. Mean-square fit values outside the reasonable range were found for only three of the 90 items. Within the individual categories, a misfit was calculated for a few further items (A1: none; A2: none; A3: 4 out of 36; A4: 1 out of 12; A5: 1 out of 12). Subsequently, the item reliabilities of the overall model and of the individual categories were determined separately. These already showed values above 0.8 in almost all categories (A1-A5: 0.94; A1: 0.95; A2: 0.82; A3: 0.87; A4: 0.88; A5: 0.82). In addition, the students' person abilities calculated using the Rasch analysis were used for a correlation analysis. Pairwise correlations between the individual dimensions were significant or highly significant. Examples include the correlations of category A3 with categories A1, A2, A4, and A5 [A1 (r=.325**, p<.001, n=159); A2 (r=.214**, p=.007, n=159); A4 (r=.288**, p<.001, n=140); A5 (r=.401**, p<.001, n=137)]. This indicates that the individual dimensions capture the overall construct of mathematical modelling well.
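Winsteps is a standalone program, so purely as an open-source illustration of the final correlation step (all numbers below are invented), category-wise person measures could be correlated pairwise in R:

```r
# Hypothetical Rasch person measures (logits) for two categories; in a real
# analysis these would come from the calibrated models, e.g. via
# TAM::tam.wle(mod)$theta.
set.seed(1)
theta_A1 <- rnorm(159)
theta_A3 <- 0.3 * theta_A1 + rnorm(159)

ct <- cor.test(theta_A1, theta_A3)  # Pearson r with significance test
ct$estimate  # r
ct$p.value   # p
```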
References
Blum, W., & Leiß, D. (2005). Modellieren im Unterricht mit der „Tanken“-Aufgabe [Modelling in the classroom with the „Tanken“ (refuelling) task]. Mathematik lehren, (128), 18-21.
Boone, W. J., Staver, J. R., & Yale, M. S. (2014). Rasch analysis in the human sciences. Springer Science & Business Media.
Brand, S. (2014). Erwerb von Modellierungskompetenzen: Empirischer Vergleich eines holistischen und eines atomistischen Ansatzes zur Förderung von Modellierungskompetenzen [Acquisition of modelling competencies: An empirical comparison of a holistic and an atomistic approach to fostering modelling competencies]. Springer-Verlag.
Goldhausen, I., & Di Fuccia, D.-S. (2014). Mathematical Models in Chemistry Lessons. Proceedings of the International Science Education Conference (ISEC) 2014, 25-27 November 2014, National Institute of Education, Singapore.
Haines, C., Crouch, R., & Davis, J. (2001). Understanding students' modelling skills. In Modelling and mathematics education (pp. 366-380). Woodhead Publishing.
Hankeln, C., Adamek, C., & Greefrath, G. (2019). Assessing sub-competencies of mathematical modelling—Development of a new test instrument. In Lines of inquiry in mathematical modelling research in education (pp. 143-160). Springer, Cham.
Johnstone, A. H. (1991). Why is science difficult to learn? Things are seldom what they seem. Journal of Computer Assisted Learning, 7(2), 75-83.
Kimpel, L. (2018). Aufgaben in der Allgemeinen Chemie: Zum Zusammenspiel von chemischem Verständnis und Rechenfähigkeit [Tasks in general chemistry: On the interplay of chemical understanding and calculation ability] (Vol. 249). Logos Verlag Berlin GmbH.
Linacre, J. M., & Wright, B. D. (2000). Winsteps. URL: http://www.winsteps.com/index.htm [accessed 2013-06-27]


09. Assessment, Evaluation, Testing and Measurement
Paper

The Backwash Effect of Exam Preparation in IBDP English A and B Courses on Developing Real-life Skills

Botagoz Issabekova, Aliya Baratova

Nazarbayev Intellectual School in Astana, Kazakhstan

Presenting Author: Issabekova, Botagoz; Baratova, Aliya

“Will we have it in our summative?” … is quite a familiar phrase for teachers of English Language and Literature (and not only them) in the International Baccalaureate Diploma Programme (DP), isn’t it? While we are proud of our students’ performance and diligence, we are painfully aware that their academic achievements might have been gained at the cost of both teaching (curriculum, teaching methods, delivery) and learning (curriculum, content, life skills). This influence is also evident in the literature, where Gates (1995) defines it as “a washback effect”, i.e., “the influence of testing on teaching and learning” (p. 102). Prodromou (1995), while strictly differentiating between ‘testing’ and ‘teaching’, suggests that backwash has a negative effect on teaching, which he believes it greatly complicates.

Although the external exams of the English A and B courses (Paper 1, Paper 2, and the Individual Oral) are hardly tests in the narrow sense, being full-fledged final written and spoken exams, we believe that the term “backwash effect” also carries some influence on language and literature learning and teaching. Teachers have to cover a huge amount of material within the two years of the DP, while students, alongside other subjects, have to process this material. So many exams and IB components, so little time. Unsurprisingly, teachers lean more and more towards optimising the programme, which, to our concern, can lead to keeping only the essentials.

Now, the question arises as to what is “essential” and what is not. Hughes (1989) proposes the following to ensure a positive backwash effect: the test should develop certain skills; test content should be varied and cover a wide spectrum of areas; the test should retain an element of unpredictability; and both teachers and students should understand the test procedure. In the same vein, Bailey (1996) puts forward the following aspects: meeting language learning objectives and requirements, authenticity of the tests and samples, ensuring learner autonomy and self-assessment, and providing feedback on the test results.

In our search for a balance between students’ academic achievements and their future non-academic lives, there is a sinking feeling in our teaching practice that we might be overlooking opportunities to develop their career and real-life aptitudes and skills. Although there is substantial literature on the negative effects of the backwash of testing, such as IELTS, high-stakes tests, and standardized testing (Paker, 2013; Watkins, Dahlin & Ekholm, 2005), and on ways to turn it into a positive one, we seek to explore the backwash effect of written and spoken exams on language teaching and learning. We believe that the assignments covered in the Diploma Programme are quite challenging and intensive. They require time-management and analytical skills, as learners are asked to write a textual analysis of a given text type and a comparative essay based on at least two literary works they have studied in class (Language A). With regard to Language B, learners have to produce a text type, which requires more than mere selection of the right answer from multiple choices. Despite such differences in the requirements of the final exams, we are still concerned that there is more exam preparation in the classroom than skills development, which is, we believe, a prevailing phenomenon in education.

With that in mind, the following research questions have been addressed:

1) What are the benefits and drawbacks of the predominance of exam preparation in the English classroom?

2) How can the classroom be modified to prepare students for life (not just exams)?


Methodology, Methods, Research Instruments or Sources Used
This study is collaborative research by English Language A and Language B teachers, conducted as action research in the 2022-2023 academic year at Nazarbayev Intellectual School (an IB World School). The sample consists of two groups. The first group is an entire class of twelfth-grade students (n = 10). These students were selected for several reasons, the major one being that they are currently exposed to the curriculum: they have not yet taken their exams and have the freshest memory of the classroom. They can sincerely share their thoughts and feelings about their preparation and readiness for the upcoming exams. The second group (n = 12) is a mix of graduates of the 2020-2021 and 2021-2022 cohorts. They are currently pursuing their studies at universities and are a good source for investigating whether the skills they gained at school are beneficial in their academic paths.
To address the research questions, a mixed-methods design was employed, since combining quantitative and qualitative data provides “a very powerful mix” (Miles & Huberman, 1994, p. 42). First, subject reports of two cohorts (2020-2021, 2021-2022) were analysed to compare students’ performance at the beginning of the 11th grade (MOCK exams) with their final exams. In addition, open-ended interviews capturing students’ reflections offered different perspectives on the research topic, providing “a complex structure of the situation” (Creswell & Creswell, 2017, p. 537).

Conclusions, Expected Outcomes or Findings
The analysis made it evident that the so-called backwash effect on language learning and teaching has had a mostly positive effect on students’ learning. This can be seen from our school’s final results compared to the world results: in 2022 our students received an average IB grade of 5.47 out of 7 in Language A, compared to the world average of 5.43. In Language B, our students’ average of 6.16 was equal to the world average of 6.16. It should be noted that, unlike in many contexts worldwide, English is our students’ third language (after Kazakh and Russian) and is taught in a non-English-speaking country.
Open-ended interviews showed students’ positive attitudes towards exam preparation: they “don’t feel threatened”, being accustomed to working under strict deadlines and time restrictions. This, inevitably, developed their self-organisation skills (staying focused, being mindful, stress-resistant). Another positive aspect they mentioned was the ability to work with the broad range of text types that they encounter in their academic lives at universities. The skills they learnt in English class (coding, decoding, analysing, evaluating authors’ choices) have also been indirectly assisting them in their more extensive mid-term and final papers at universities.
However, data derived from the open-ended interviews revealed that this backwash effect has had a negative effect on teachers’ teaching of the course. Students’ responses resonated with our concerns about a change in teaching methods: students noted that, even though such an approach is effective, it is monotonous and quite repetitive. This is a call for English DP teachers to reconsider and vary the methods used in the classroom; even though students’ academic performance is high and they enter top-tier universities worldwide, teachers need to fulfil all facets of their profession.

References
Bailey, K. M. (1996). Working for washback: A review of the washback concept in language testing. Language Testing, 13(3), 257-279.

Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.

Gates, S. (1995). Exploiting washback from standardized tests. Language testing in Japan.

Hughes, A. (1989). Testing for Language Teachers. Second Edition. Cambridge University Press.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Sage.

Paker, T. (2013). The backwash effect of the test items in the achievement exams in preparatory classes. Procedia-Social and Behavioral Sciences, 70, 1463-1471.

Prodromou, L. (1995). The backwash effect: From testing to teaching. ELT Journal, 49(1), 13-25.

Watkins, D., Dahlin, B., & Ekholm, M. (2005). Awareness of the backwash effect of assessment: A phenomenographic study of the views of Hong Kong and Swedish lecturers. Instructional Science, 33, 283-309.


09. Assessment, Evaluation, Testing and Measurement
Paper

Modeling as a Tool for Formative Assessment in Biology Lessons

Aigul Koishigarina, Gulnar Kashkinbayeva

Nazarbayev Intellectual School in Aktobe, Kazakhstan

Presenting Author: Kashkinbayeva, Gulnar

Modern education is inextricably linked with three main concepts: learning, teaching, and assessment. The teacher of the 21st century must equip students with solid knowledge that will help them in life. Today, the teacher directs and coordinates students’ work and contributes to the development of their independence, self-criticism, and ability to find necessary, reliable information in a huge flow of knowledge. Thus, the student must be able to find, analyze, and apply the necessary knowledge from a vast flow of information.

According to PISA research, “Kazakh schoolchildren have subject knowledge at the level of their reproduction or application in a familiar educational situation, but they have significant difficulties in applying knowledge in real life situations…” [1].

The results of the international PISA study for 2019 revealed that 9th-grade students found it difficult to complete tasks on formulating scientific questions (11.6%) and tasks on the scientific interpretation of evidence (15.9%).

Taking these international data into account, we conducted a comparative analysis of the biology curriculum and saw that from the 7th to the 9th grade, the number of hours for modeling various processes increases for certain topics and sections of the curriculum. For example, while 5 hours are allotted for modeling in the 7th and 8th grades, in the 9th grade it is already 6 hours. Therefore, the teacher needs to develop skills for working with models and apply this pedagogical approach to the organization of the educational process, starting from the 7th grade.

These results influenced the research and the search for solutions to the questions that arose.

The main research questions are:

- How can a student be taught to master relevant skills for life?

- How can subject knowledge be improved?

To answer these questions, the teacher needs to rethink teaching and consider new methods of assessing students’ knowledge.

The updated criteria-based assessment model helps participants in the educational process to understand and answer the questions “At what stage of learning is the student?” and “What does the student need to do to achieve the expected results?” [2]

The aim of the study is to show that the use of modeling in biology lessons helps students improve the quality of their knowledge and contributes to the development of functional literacy and creative thinking skills.

The presented work is the result of the authors’ many years of experience, tested in the educational process and supplemented in accordance with modern requirements. We sincerely hope that the methodological recommendations in this work will help teachers increase the effectiveness of the educational process in order to educate competitive students.


Methodology, Methods, Research Instruments or Sources Used
The authors of personal development technology (V.V. Davydov, V.I. Zvyagintsev, V.V. Kraevsky, I.Ya. Lerner, M.N. Skatkin, and I.S. Yakimanskaya) believe that it is necessary to pay attention not only to students’ knowledge but also to their personal characteristics in the learning process.
The famous psychologist Jean Piaget argued that models help to consolidate acquired knowledge and apply it in practice.
The works of the teachers V.B. Filimonov, G.A. Korovkin, G.I. Patyako, M.A. Danilova, V.P. Strokova, and V.V. Shtepenko consider the differences in the concepts that determine students’ creative abilities, without which the modeling technique cannot be implemented with attention to participants’ emotional attitude and their interest in the process being studied.
It should be noted that there is a relationship between STEM technology and modeling. When creating models, the student must use acquired knowledge and skills not only from one subject but also from other subject areas. Such interdisciplinary interaction allows students to develop research skills, creativity, and creative thinking, and contributes to the development of communication skills and teamwork. [4]
After studying the works of the above authors, we began to conduct research on the topic. For the study, two Grade 8 classes (E and F) were taken: class E was selected as the control group (where the modeling technique was not used) and class F as the experimental group (where the technique was used in lessons).
Note: the study was conducted over a span of three years; that is, students who were in Grade 8 at the start were already enrolled in Grade 10 upon its completion.
Study passport: Class 8 E/F. Number of students: 12.
Study start: 2019–2020 / Study completion: 2021–2022.
Results of the study: In the experimental classes, the results of summative assessment for the quarter and for sections are 10% higher than in the control group, where the modeling technique was not used. The difference between summative assessment for unit (SAU) and summative assessment for term (SAT) results indicates the objectivity of the assessment. The effectiveness of modeling can also be seen in the Grade 10 exam results, which were 100% (in the experimental group; elective subject: biology).

Conclusions, Expected Outcomes or Findings
Every creative teacher should create favourable conditions for teaching students and provide an opportunity for the development of abilities that would help them in the future in determining the profession and in life in general.
In classes where this technique was used, the effectiveness of students’ participation in olympiads and project activities, mostly at the republican and international levels, increased by 40–60%.
Thus, the proposed technique can be used at various stages of the lesson. Such lessons allow the teacher to:
1. develop skills of creative thinking, analysis, research, and application of knowledge;
2. use modeling as an effective form of formative assessment;
3. help students develop key competencies: the ability to solve problems and manage processes independently;
4. involve all students in active learning;
5. improve the quality of learning;
6. prevent mechanical memorization and relieve stress before the perception of new material. [5]
Lessons using models become interesting and engaging, and help develop students’ research skills and interest in the subject. [6]
Modeling in education is heuristic in nature and develops speech, memory, and logical thinking. [7] Teachers can use the technique to organize project work with students, as well as to conduct elective courses and clubs in order to develop creative thinking skills.
When working with models in the educational process, the following difficulties may arise: lack of time (time management) when producing models at the initial stages of developing modeling skills, and assessing models against criteria (originality/completeness/deliberation). [8]
The authors of the paper hope that this study will help school teachers develop students’ creative thinking skills and contribute to the education of competitive students.

References
1. Guidelines for the development of natural science literacy of students. Nur-Sultan: branch "Center for Educational Programs" of AEO "Nazarbayev Intellectual Schools", 2020. 56 p. ISBN 978-601-328-922-9.
2. Avdeenko, N. A., Demidova, M. Yu., Kovaleva, G. S., Loginova, O. B., Mikhailova, A. M., & Yakovleva, S. G. (2019). Monitoring the formation and evaluation of functional literacy: Creative thinking. 17 p. [Electronic resource]: https://inlnk.ru/VoV34z
3. Wagner T. The Global Achievement Gap: Why Even Our Best Schools Don’t Teach the New Survival Skills Our Children Need – And What We Can Do About It. – Basic Books, 2008.
4. Azizov R. Education of a new generation: 10 advantages of STEM education [Electronic resource]: https://ru.linkedin.com/pulse/ -stem-rufat-azizov.
5. Modeling as a method of knowledge. Classification and forms of representation of models [Electronic resource]: https://clck.ru/KcW26
6. Tarasova S.A. The modeling method as a means of achieving metasubject results in the study of biology [Electronic resource]: https://www.prodlenka.org/srednjaja-shkola/3364-method-modelirovanija-kak-sredstvo-dostizhenij.html (06/22/2018)
7. Bielik, T., Opitz, S. T., & Novak, A. M. (2018). Supporting students in building and using models: Development on the quality and complexity dimensions. Education Sciences, 8(3), 149. https://doi.org/10.3390/educsci8030149
8. Dirksen, J. (2013). The art of teaching: How to make any learning fun and effective. Moscow: Mann, Ivanov and Ferber. 276 p.


 