Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
09 SES 13 A: Exploring Innovative Approaches to Assessment and Feedback
Time:
Thursday, 29/Aug/2024:
17:30 - 19:00

Session Chair: Tracy Whatmore
Location: Room 013 in ΧΩΔ 02 (Common Teaching Facilities [CTF02]) [Ground Floor]

Cap: 60

Paper Session

Presentations
09. Assessment, Evaluation, Testing and Measurement
Paper

One Attempt to Measure Collaboration Between Students During Group Work

Marina Videnovic1, Smiljana Josic2, Dragica Pavlović Babić1

1University of Belgrade, Faculty of Philosophy, Serbia; 2Institute for Educational Research, Belgrade

Presenting Author: Videnovic, Marina

Collaborative problem-solving (CPS) skills have become an inevitable part of workforce readiness in contemporary society (Graesser et al., 2018). Numerous studies have shown that CPS is a powerful learning tool that can lead to more creative, efficient and comprehensive solutions than other approaches (Fiore et al., 2018); sometimes it is the only feasible way to solve complex problems. It is not surprising, then, that the Organisation for Economic Co-operation and Development (OECD, 2019) includes the development of collaboration skills in its education development agenda for 2030. Many attempts have been made to introduce CPS into everyday educational practice; however, its benefits often fail to materialise (Le et al., 2018).

Collaborative problem-solving is usually defined as working together toward a common goal (Hesse et al., 2015). It includes interdependency between group members in joint activity and shared responsibility for the group results.

Despite many contributions, there is a lack of instruments for measuring the specific collaborative processes students engage in during group work (Wang et al., 2009). The focus is often on the effect of this type of learning as assessed through achievement data (Jansen, 2010), while the quality of the collaborative process falls outside the research aims. Self-assessment tools are usually used for this purpose, with the methodological limitations that accompany subjective ratings: students' perceptions of and experience with CPS are not distinguished from the quality of the collaboration actually present during group work, and collaboration is assessed as an individual skill rather than as the joint activity it is.

This study aims to construct an instrument for assessing the quality of collaboration between students while they try to solve a complex problem. It is part of the larger project PEERSovers, which focuses on designing an evidence-based training programme for enhancing high-school students' collaborative skills. The theoretical background for constructing the instrument is a qualitative systematic literature review of 160 articles, published between 2021 and 2022, that investigated differences between productive and unproductive peer collaboration (Baucal et al., 2023). Four aspects of peer interaction were identified in this analysis. The first covers cognitive exchange between group members: research shows that productive CPS includes argumentative dialogue between team members and constructive evaluation of ideas, and that effort is made to move from personal opinion toward a shared understanding of the problem. The well-known Mercer studies (e.g., Mercer et al., 2019; Mercer & Dawes, 2014) showed that exploratory talk during group work enhances the co-construction of joint cognitive activity, fosters critical thinking skills and contributes to the overall learning experience in educational settings.
The second aspect is the emotional side of group work, manifested in the group atmosphere, the presence of conflicts and tension, group cohesion, members' sense of belonging, and mutual tolerance and empathy. In unproductive groups, members are disrespected and prevented from participating fully, and inequality in power is often present: some members dominate the dialogue space and prevent others from contributing. The third and fourth aspects cover two domains of group regulation: task regulation (time management, coordination and planning of group activity, a task-focused approach) and relationship regulation (group norms, sharing responsibility, dividing the assignment, efficient conflict management, etc.). An unproductive group is often characterised by a great deal of off-task behaviour, and frequently only one or a few participants take overall responsibility for the group's work. We operationalised these four aspects as the dimensions of the instrument used for evaluating CPS.


Methodology, Methods, Research Instruments or Sources Used
Sample: Participants were selected from 12 secondary schools in Belgrade (6 vocational and 6 general/gymnasium schools). School counsellors, guided by the students’ preferences, formed triads of male or female students from the same class. The sample included 64 same-gender triads (192 participants), of which 37 were girls’ groups and 27 were boys’ groups. All students involved in the research had formal parental consent and gave their own assent.
Procedure: Student triads participated in CPS sessions, trying to solve a single but complex real-life problem. The problem tasks used in this study related to four community-relevant themes: (1) ecology, (2) teen behaviour, (3) media, and (4) education. Each group’s task was to generate a written solution to the presented problem, which was subsequently assessed for its quality. The entire interaction during the CPS process was video-recorded for later analysis. CPS sessions were conducted on school premises during the regular school day. The average duration of a session was 97 minutes (SD = 30; range = 19-167).
Instruments: CPS observational grid (CPS-OG). The quality of collaboration was assessed based on video recordings of CPS sessions. Each session was rated by two independent reviewers using a 22-item observational grid. The grid was designed to capture four dimensions of productive CPS: socio-cognitive (SC - 9 items, 2 reverse-scored; e.g., Group members sought and/or provided explanations for presented ideas and suggestions); socio-emotional (SE - 4 items, 1 reverse-scored; e.g., Group members worked together, as a team); task management (TM - 5 items, 1 reverse-scored; e.g., The group planned its approach to solving the task); relationship management (RM - 5 items, 2 reverse-scored; e.g., Throughout the work, group members purposefully coordinated group and individual activities). Each item is scored on a 5-point Likert scale, ranging from 0 (not at all) to 4 (to a large extent).
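As a concrete illustration of the scoring just described, reverse-worded items can be recoded before subscale scores are aggregated. The sketch below assumes only the published 0-4 scale; the item indices used in the example are hypothetical:

```python
import numpy as np

MAX_SCORE = 4  # 5-point Likert scale scored 0 (not at all) to 4 (to a large extent)

def reverse_score(ratings, reversed_items):
    """Recode reverse-worded items so that a higher score always means
    more productive collaboration (score' = MAX_SCORE - score).

    ratings: (n_sessions, n_items) array of raw item scores
    reversed_items: column indices of the reverse-worded items
    """
    ratings = np.asarray(ratings, dtype=float).copy()
    ratings[:, reversed_items] = MAX_SCORE - ratings[:, reversed_items]
    return ratings
```

A subscale score is then the sum or mean of its recoded items (e.g. the 9 SC items, 2 of them recoded).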
Data Analyses: Analyses were performed to examine the structural and reliability properties of the measures designed for this study. The unidimensionality of the CPS-OG subscales was inspected via Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Internal consistency for the CPS-OG dimensions was determined by calculating Cronbach’s alpha coefficients.
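Cronbach's alpha for a subscale can be computed directly from the item-score matrix; a minimal sketch (the data layout is an assumption, not taken from the study):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_sessions, n_items) matrix of
    item scores belonging to one subscale."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of the subscale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Alpha approaches 1 when the items covary strongly, i.e. when the subscale is internally consistent.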

Conclusions, Expected Outcomes or Findings
The results confirmed good psychometric characteristics of the CPS observational grid. Exploratory factor analysis (principal component analysis with Oblimin rotation) yielded four factors explaining 76% of the total variance. Correlations between factors were moderate, with a maximum value of 0.44. The first factor (50% of the variance) mainly included SC variables. The second factor (15% of the variance) corresponds to the TM dimension. The third factor (6%) is a mix of SE and RM variables; it includes statements about negative relationships in the group (tension, conflicts and isolation of members). Finally, the fourth factor (5% of the variance) covers the absence of an authoritative leader and good conflict management, both aspects of the RM dimension. The correlations between the first factor and the other three are moderate (from -0.33 to 0.44); the correlations between the other factors are low. Confirmatory factor analysis (CFA) confirmed a single-factor solution for all dimensions except TM. Item-level intraclass correlations (ICC) for the CPS-OG reached excellent values (Cicchetti, 1994), ranging from .75 to .95. Dimension-level ICC values were also excellent: .94 for SC, .90 for SE, .93 for TM, and .85 for RM. Internal consistency (Cronbach's alpha) ranged from good to excellent (.921 for SC, .914 for SE, .856 for TM, .791 for RM).
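The abstract does not state which ICC form was computed; for two independent raters scoring the same sessions, one common choice is the two-way random, absolute-agreement, single-rater ICC(2,1), sketched here from its ANOVA decomposition:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_targets, k_raters) matrix, e.g. one row per CPS session."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # ANOVA sums of squares
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between targets
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Two raters in perfect agreement yield an ICC of 1; a constant offset between raters lowers it, because absolute agreement rather than mere consistency is assessed.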
The next research step will include the external validation of the instrument. We will examine the association between the dimensions of the CPS observational grid and the quality of the proposed group solution. The quality of the solution will cover several dimensions: whether the solution is realistic; an assessment of the proposal's creativity; an assessment of the degree to which the proposal is well-argued with various perspectives.

References
Baucal, A., Jošić, S., Ilić, I. S., Videnović, M., Ivanović, J., & Krstić, K. (2023). What makes peer collaborative problem solving productive or unproductive: A qualitative systematic review. Educational Research Review, 100567. https://doi.org/10.1016/j.edurev.2023.100567
Fiore, S. M., Graesser, A., & Greiff, S. (2018). Collaborative problem solving education for the 21st century workforce. Nature Human Behaviour, 2(6), 367–369. https://doi.org/10.1038/s41562-018-0363-y
Graesser, A. C., Fiore, S. M., Greiff, S., Andrews-Todd, J., Foltz, P. W., & Hesse, F. W. (2018). Advancing the science of collaborative problem solving. Psychological science in the public interest, 19(2), 59-92. https://doi.org/10.1177/1529100618808244
Hesse, F., Care, E., Buder, J., Sassenberg, K., & Griffin, P. (2015). A framework for teachable collaborative problem solving skills. Assessment and teaching of 21st-century skills: Methods and approach (pp. 37-56). https://doi.org/10.1007/978-94-017-9395-7_2
Le, H., Janssen, J., & Wubbels, T. (2018). Collaborative learning practices: teacher and student perceived obstacles to effective student collaboration. Cambridge Journal of Education, 48(1), 103-122. https://doi.org/10.1080/0305764X.2016.1259389
Mercer, N., & Dawes, L. (2014). The study of talk between teachers and students, from the 1970s until the 2010s. Oxford Review of Education, 40(4), 430-445. https://doi.org/10.1080/03054985.2014.934087
Mercer, N., Hennessy, S., & Warwick, P. (2019). Dialogue, thinking together and digital technology in the classroom: Some educational implications of a continuing line of inquiry. International Journal of Educational Research, 97, 187–199. https://doi.org/10.1016/j.ijer.2017.08.007
OECD. (2019). An OECD Learning Framework 2030 (pp. 23-35). Springer International Publishing.
Wang, L., MacCann, C., Zhuang, X., Liu, O. L., & Roberts, R. D. (2009). Assessing teamwork and collaboration in high school students: A multimethod approach. Canadian Journal of School Psychology, 24(2), 108-124. https://doi.org/10.1177/0829573509335470


09. Assessment, Evaluation, Testing and Measurement
Paper

An Exploration of Constructive Verbal Feedback in Secondary School Classrooms

Yeraly Baizhanov, Nurgul Kulshymbayeva, Saya Niyazmaganbetova, Marziya Tanzharikova

NIS Aktobe, Kazakhstan

Presenting Author: Baizhanov, Yeraly; Kulshymbayeva, Nurgul

Verbal feedback is the oral communication between teachers and students that aims to provide constructive guidance on students’ progress, strengths, and areas for improvement (Black & Wiliam, 2009; Hattie & Timperley, 2007; Shute, 2008). In secondary school settings, providing effective feedback is a key component of good education. The effectiveness of feedback is a widely studied and acknowledged aspect of the learning process (Black & Wiliam, 1998; Hattie, 2009; Karaman, 2021; Wisniewski et al., 2020). As teachers continually work to improve learning outcomes for their students, the role of feedback, especially the verbal feedback that takes place in classrooms every day, becomes increasingly important.

The “Feed Up, Feed Back, Feed Forward” model, introduced by John Hattie and Helen Timperley in their influential 2007 paper, “The Power of Feedback,” presents a cyclical approach comprising three essential stages of effective feedback. These stages encompass setting clear objectives or “feed up,” delivering feedback on present performance, and proposing strategies for enhancement or “feed forward.” This implies that teachers should provide constructive feedback that is descriptive and focused on providing specific, actionable information aimed at helping the recipient improve or enhance their performance, skills, or understanding.

Many studies on verbal feedback have been conducted in the field of foreign or second language learning, exploring different types and functions of corrective feedback and their effects on language proficiency (Lyster & Saito, 2010). These studies have shown that oral corrective feedback not only helps students improve their accuracy and fluency in speaking, but also enhances their overall language proficiency. Yet even when teachers have experience or undergo professional development courses, their formative assessment practices are not always effective. According to some research findings, teachers’ attitudes about the use of various forms of oral corrective feedback in the classroom do not necessarily align with their actual practices (Kim & Mostafa, 2021). Further comprehensive research on corrective feedback is needed to investigate the alignment between teachers’ actual practices and their underlying ideas about feedback (Karimi & Asadnia, 2015). This study therefore focuses on the following research question: “To what extent do secondary school teachers provide constructive verbal feedback in the classroom?”


Methodology, Methods, Research Instruments or Sources Used
The study took place at Nazarbayev Intellectual School in Aktobe, Kazakhstan, and employed a quantitative research design. The sample for lesson analysis consisted of 17 teachers representing different subjects, grade levels and lengths of teaching experience (from several months to more than ten years). Data were collected through video recordings of the 17 lessons and an anonymous online survey capturing participants’ attitudes towards constructive verbal feedback. Ethical requirements were observed throughout data collection: all teachers took part voluntarily and agreed to have their lessons recorded, and the confidentiality and anonymity of the participants were ensured. The survey link was sent to the teachers’ corporate email addresses; 46 teachers responded.
The video recordings were analysed using an observation protocol for constructive verbal feedback, informed by the observation protocols for formative assessment dimensions of Cisterna and Gotwals (2018). Our protocol distinguished four levels of constructive feedback practice (1 being the lowest, 4 the highest). Level 1 indicated the absence of verbal feedback from the teacher, while level 2 denoted evaluative feedback, where teachers used very general and ambiguous comments such as “Good job”, “Correct” or “That’s not the right answer”. Level 3 verbal feedback was mainly descriptive, focusing on task completion, but not fully constructive or stimulating. The highest level (level 4) was purely descriptive and specific, with elaborated comments that stimulate students’ learning. Each level received a corresponding score (1-4).
Three researchers independently analysed the videos using the lesson observation protocol, and the mean of their scores was used to evaluate each teacher’s overall oral feedback practice. The observation protocol was designed collaboratively and discussed by all researchers before the lesson analyses to support validity and reliability.
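The scoring step above can be sketched as follows; the lesson identifiers and level scores are invented for illustration, and only the three-rater, levels-1-to-4 design comes from the text:

```python
# Hypothetical level scores (1-4) assigned by the three independent raters.
rater_scores = {
    "lesson_01": [2, 3, 2],
    "lesson_02": [3, 3, 4],
    "lesson_03": [2, 2, 2],
}

def lesson_means(scores):
    """Per-lesson feedback score: mean of the raters' assigned levels."""
    return {lesson: sum(levels) / len(levels) for lesson, levels in scores.items()}

def overall_mean(scores):
    """Grand mean across all analysed lessons (the study reported 2.6)."""
    per_lesson = lesson_means(scores)
    return sum(per_lesson.values()) / len(per_lesson)
```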

Conclusions, Expected Outcomes or Findings
The analysis of the video recordings revealed that the mean score for teachers’ overall oral feedback practice was 2.6, which indicates room for improvement in providing more detailed and constructive feedback to students. Teachers usually gave more evaluative than descriptive feedback: they mostly responded with words such as “Good”, “Good job” and “Correct”, or conveyed feedback through gestures. This suggests that teachers should focus on enhancing their oral feedback practice by providing more specific and elaborated feedback that helps students understand how to improve their learning.
Findings from the survey show that more than half of the respondents agree that constructive feedback is time-consuming to deliver effectively. The majority of the teachers admitted that they did not take notes on students’ progress. 43% of the teachers acknowledged that they lacked knowledge of effective feedback techniques, whereas half believed they had sufficient constructive feedback skills.

References
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74. https://doi.org/10.1080/0969595980050102
Cisterna, D., & Gotwals, A. W. (2018). Enactment of ongoing formative assessment: Challenges and opportunities for professional development and practice. Journal of Science Teacher Education, 29(3), 200-222.
Hattie, J. (2009). Visible Learning: A Synthesis of 800+ Meta-Analyses on Achievement. London: Routledge
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. https://doi.org/10.3102/003465430298487
Karaman, P. (2021). The Effect of Formative Assessment Practices on Student Learning: A Meta-Analysis Study. International Journal of Assessment Tools in Education, 8(4), 801-817. https://doi.org/10.21449/ijate.870300
Karimi, M. N., & Asadnia, F. (2015). EFL Teacher’s Beliefs About Oral Corrective Feedback and their Feedback-providing Practices Across Learners’ Proficiency Levels. Teaching English as a Second Language Quarterly (Formerly Journal of Teaching Language Skills), 34(2), 39-68.
Kim, Y., & Mostafa, T. (2021). Teachers’ and Students’ Beliefs and Perspectives about Corrective Feedback. In H. Nassaji & E. Kartchava (Eds.), The Cambridge Handbook of Corrective Feedback in Second Language Learning and Teaching (pp. 561–580). chapter, Cambridge: Cambridge University Press.
Lyster, R., & Saito, K. (2010). Oral feedback in classroom SLA: A meta-analysis. Studies in second language acquisition, 32(2), 265-302.
Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in psychology, 10, 3087.


09. Assessment, Evaluation, Testing and Measurement
Paper

Unpacking Assessment and Feedback: International Students’ Experience During Postgraduate Study

Tracy Whatmore1, David Meechan2, Lizana Oberholzer3

1University of Birmingham, United Kingdom; 2University of Northampton, United Kingdom; 3University of Wolverhampton, United Kingdom

Presenting Author: Whatmore, Tracy

Assessment and feedback are fundamental aspects of the student experience in higher education, both as a measure of progress and achievement and as a central tenet of learning and engagement. They play a pivotal role in increasing student knowledge and understanding, developing key skills, and promoting motivation and academic advancement.

The ever-increasing number of international students in higher education, representing a significant percentage of the student body, necessitates focused consideration of their experience of assessment and feedback. This research-based paper presents evidence from an investigation of international students and foregrounds their voices in relation to their lived experience of assessment and feedback. The research focussed on elements of student experience with regard to assessment and feedback, addressing the following research questions:

RQ 1- What do students currently encounter in terms of assessment and feedback?

RQ 2- How can we evolve assessment and feedback strategies to enhance the experience for international students?

The research utilised an interpretivist paradigm, concentrating on the perspectives of the respondents to develop knowledge of their experience and interpretation of this. Unlike research paradigms which primarily aim to universalise results, interpretivist research focuses on understanding the viewpoints of participants within their specific settings. It acknowledges that these viewpoints and behaviours are dynamic, altering based on temporal and situational factors. This facilitates the contrasting of outcomes across different time frames or locales (Cohen et al., 2017).

Higher education needs to be ever responsive to technological innovation, pedagogical shifts, and the increasing diversity of the student body. The role of assessment and feedback within higher education remains central, acting as both a measure and a driver of student learning and engagement. Dr Katherine Hack, principal adviser in teaching and learning at the Higher Education Academy (HEA), stated that assessment and feedback are two of the most influential tools teachers have to direct and support learning (Advance HE, 2022). Indeed, assessment and feedback are an inherent and significant part of a student’s experience, and their prominence is captured annually in surveys such as the National Student Survey (NSS) for undergraduates and the Postgraduate Taught Experience Survey (PTES). As such, continually re-evaluating and refining assessment and feedback, to align with the changing educational environment, is essential to keep practices contemporary and responsive to the student body and experience.

International students account for a notable percentage of postgraduate students across Higher Education Institutions (HEIs) worldwide, and represent a wide array of cultural and educational backgrounds. The difficulties faced by students navigating an unfamiliar culture are well documented (Haider, 2018; Xie et al., 2019). Simultaneously, international students must also navigate new assessment and feedback practices as part of the transitional journey to an HEI, often in a different country. This presents a unique set of challenges, academic and cultural, adding a further layer of complexity to an already nuanced landscape.

HEIs are faced with the task of ensuring that their assessment and feedback practices are inclusive and equitable, catering to a diverse student body. In the UK, the Quality Assurance Agency (QAA) began embedding equality, diversity, and inclusion (EDI) in its subject benchmarks in 2021, as part of a wider commitment to promoting EDI across HEIs and ensuring that all students have an equal opportunity to succeed. This requires a continued commitment to academic rigour while adapting to the evolving needs and expectations of a diverse student body. The requirement applies globally, as HEIs seek to ensure that EDI is integrated within assessment and feedback; the paper investigates how this can be achieved.


Methodology, Methods, Research Instruments or Sources Used
A mixed methods approach was utilised, and two research instruments were devised for the investigation:
1- Online questionnaire
2- Face to face focus groups
An online questionnaire served as the initial tool for gathering qualitative data, with the aim of enabling a detailed examination of individual perspectives and an assessment of collective viewpoints within the sample (Clark et al., 2021). Countering the prevalent misconception that qualitative research lacks numerical components, Sandelowski (2001) argued that numeric data can play various roles, affecting both the structure of the research and its ultimate categorisation. On this basis, some numerical data were drawn upon to contextualise and inform the findings and analysis, and as an indicator of respondents’ experiences.
The questionnaire consisted of both fixed-response and open-response items. Fixed-response questions enabled respondents to select options that best suited their specific circumstances, whereas open-response questions offered the opportunity for more detailed personal reflections. The specific questions were informed by preliminary dialogues with international students on postgraduate courses. The questions were aligned with the Research Questions, but also sought to identify and provide opportunities for respondents to include details of their lived experiences.
101 students, undertaking postgraduate study in three universities, responded to the detailed questionnaire, and the data was systematically analysed and key themes identified.
Following on from the questionnaire, face-to-face focus groups were then undertaken to gather qualitative data, aimed at a nuanced exploration of individual viewpoints, as well as the aggregated opinions of the participants (Kamberelis and Dimitriadis, 2013). The framework for the focus groups included both pre-defined discussion themes and open-ended questions, allowing respondents to elaborate on their unique perspectives and experiences. Thematic analysis of the focus group transcripts was carried out. The themes were subsequently examined, and cross-referenced against pertinent statistical data and research based findings. The individual viewpoints, perceptions and experiences of the respondents are included in the paper, to ensure that their distinct 'voices' are captured and highlighted.



Conclusions, Expected Outcomes or Findings
The research-based paper aims to contribute to a growing body of knowledge on assessment and feedback in higher education, with a specific focus on international students. Utilising a mixed methods approach that incorporates data collected via questionnaire and focus group, the research provides a range of insights into how international students experience and perceive assessment and feedback during their postgraduate courses. The research contributes to academic discourse and offers practical insights for HEIs and academics in providing effective provision for an increasingly diverse and global student population.
The research contributes to narrowing the identified research gap: the need for a nuanced understanding of assessment and feedback practices for international students in higher education settings. By offering a multi-faceted view that considers transitional experiences, individual preferences and challenges, and emotional impacts, the research provides a richer, more complex understanding of assessment and feedback. It underscores the need for higher education institutions to adopt a more adaptive, personalised, and emotionally intelligent approach to enhance the student experience of assessment and feedback.
The research adds depth and breadth to the existing literature by highlighting key considerations that need to be addressed when working with international students, and places the international student at the forefront. This provides a student voice and perspective that emphasises their particular needs, concerns and challenges. The research offers recommendations and a six-phase template that could be utilised in the design and implementation of assessment and feedback provision for international students across global HEIs.


References
Arthur, N. (2017) Supporting international students through strengthening their social resources. Studies in Higher Education, 42(5), 887–894. https://doi.org/10.1080/03075079.2017.1293876

Baughan, P. (2021) Assessment and Feedback in a Post-Pandemic Era: A Time for Learning and Inclusion. Advance HE.

Cook, D.A., and Artino, A.R. (2016) Motivation to learn: an overview of contemporary theories. Medical Education. 50(10), 997–1014.

Chew, E. (2014) “To listen or to read?” Audio or written assessment feedback for international students in the UK. On the Horizon. 22(2), 127–135.

Dawadi, S., Shrestha, S., and Giri, R. A. (2021) Mixed-Methods Research: A Discussion on its Types, Challenges, and Criticisms. Journal of Practical Studies in Education, 2(2), 25-36 DOI: https://doi.org/10.46809/jpse.v2i2.20

Grainger, P. (2020) How do pre-service teacher education students respond to assessment feedback? Assessment & Evaluation in Higher Education. 45(7), 913–925.

Haider, M. (2018) Double Consciousness: How Pakistani Graduate Students Navigate Their Contested Identities in American Universities. In Y. Ma & M. A. Garcia-Murillo (Eds.). Cham: Springer International Publishing, pp. 107–125.

Henderson, M., Ryan, T and Phillips, M (2019) The challenges of feedback in higher education, Assessment & Evaluation in Higher Education, 44:8, 1237-1252, DOI: 10.1080/02602938.2019.1599815

Koo, K., and Mathies, C. (2022) New Voices from Intersecting Identities Among International Students Around the World: Transcending Single Stories of Coming and Leaving. Journal of International Students. 12(S2), 1–12.

Lomer, S., and Mittelmeier, J. (2023) Mapping the research on pedagogies with international students in the UK: a systematic literature review. Teaching in Higher Education. 28(6), 1243–1263.

McCarthy, J. (2015) Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153-169. http://www.iier.org.au/iier25/mccarthy.html

Oldfield, A., Broadfoot, P., Sutherland, R and Timmis, S (nd) Assessment in a Digital Age. University of Bristol, Graduate School. https://www.bristol.ac.uk/media-library/sites/education/documents/researchreview.pdf

Schillings, M., Roebertsen, H., Savelberg, H., Whittingham, J., Dolmans, D. (2020) Peer-to-peer dialogue about teachers’ written feedback enhances students’ understanding on how to improve writing skills. Educational Studies. 46(6), 693–707.

Winstone, N.E., Nash, R.A., Parker, M., and Rowntree, J. (2017) Supporting Learners’ Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipient Processes. Educational Psychologist. 52(1), 17–37.

Xie, M., Liu, S., Duan, Y., and Qin, D.B. (2019) “I Can Feel That People Living Here Don’t Like Chinese Students”: Perceived Discrimination and Chinese International Student Adaptation. In H. E. Fitzgerald, D. J. Johnson, D. B. Qin, F. A. Villarruel, & J. Norder (Eds.), pp. 597–614.


 
Conference: ECER 2024
Conference Software: ConfTool Pro 2.6.153+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany