Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference. The current conference time is: 17th May 2024, 06:05:54am GMT

Session Overview
Session
09 SES 07 B: Exploring Student Perspectives and Teacher Experiences: Feedback in Education
Time:
Wednesday, 23/Aug/2023:
3:30pm - 5:00pm

Session Chair: Gudrun Erickson
Location: Gilbert Scott, 253 [Floor 2]

Capacity: 40 persons

Paper Session

Presentations
09. Assessment, Evaluation, Testing and Measurement
Paper

Student Perceptions of Self-generated Feedback: “It Made the Course Make Sense”.

William McGuire, David Nicol, Gemma Haywood

University of Glasgow, United Kingdom

Presenting Author: McGuire, William

In the Professional Enquiry and Decision-Making course at the University of Glasgow, part of the MEd in Professional Practice, students write a 1500-word assignment, which they often find challenging. The task is new and complex; their work must be original, and they are often unclear about the requirements associated with each assessment criterion. Rubrics, descriptions of what is required, are of limited help. Existing support, in which students receive peer and tutor feedback before assignment submission, improves outcomes but incurs a high staff workload and does not necessarily foster independence. Therefore, a complementary intervention was devised in which students generated their own feedback (Nicol, 2021).

International studies have shown that feedback is an area of concern for students across Europe and globally, even though students can create their own feedback by comparing their work against rubrics, exemplars, or peers’ work (e.g., Lipnevich et al., 2014; Nicol and McCallum, 2022). Indeed, Nicol (2021) has developed a model in which comparison is the core feedback-generation mechanism, arguing that building capacity for self-regulation requires developing students’ inner feedback capability through explicit comparisons (Nicol and Selvaretnam, 2022).

Prior research has typically provided exemplars before students produce their work, in order to clarify requirements, although recently some have argued for their use afterwards: a form of post-production feedback (To, Panadero and Carless, 2022). However, we argue that both modes support self-feedback production. Exemplars can be similar to the student’s work in both presentation format and subject topic, or similar in format but different in topic. With this assignment, the latter enables a focus on writing (e.g., structure, argument) without distraction from content.

Five aspects of the MEd assignment served as the focus for feedback improvements: the writing of literature search strategies, literature review, ethics application, research dissemination, and limitations in research designs. For each, students: (i) compared exemplars of quality work (different topic/similar format) selected from previous years’ students and identified common principles; (ii) produced their own work; and (iii) compared the principles identified in (i) with their own work. The tutor guided students through the first comparison in class, with the second completed individually out of class (Nicol, 2021).


Methodology, Methods, Research Instruments or Sources Used
An online survey was deployed to generate quantitative data on the students’ perceptions of the extent of learning from the different comparison processes. The survey was constructed on the basis of the findings of two focus groups. Reflexive thematic analysis, as designed by Braun and Clarke (2006, 2012, 2014, 2019) and further developed by Braun and Clarke (2020), was used to identify, analyse, and report emergent themes within the data sets (Braun and Clarke, 2006, p. 79). A Big Q approach enabled us to use both qualitative and quantitative data, which mapped onto our research design: to test a theory and to let the data lead.
Conclusions, Expected Outcomes or Findings
Students were extremely positive about this approach. The before comparison clarified their understanding of task requirements, thereby reducing anxiety, and enabled them to generate feedback while producing their own work, although this reduced the need for the second comparison. We will discuss how to address this issue. Most reported that delineating the comparison process raised awareness that they could take more agency over feedback processes.
Results on this programme had already been excellent, but post-intervention results were outstanding, with 17/30 students being awarded first-class marks in their dissertations. The use of partial exemplars, which students found more palatable, proved helpful. Participants also felt that using exemplars both in the feedback design protocol and in the peer and tutor review process was beneficial.
Next steps include scaling the protocol up and out, with further testing in a larger cognate group, such as a PGDE class or an M Educ class in which numbers are much higher. Another possibility would be to trial the process in a non-cognate group, or even to trial the use of non-exemplars. Perhaps the greatest benefit, beyond student satisfaction and attainment, is the potential to develop much greater student agency.

References
Alfieri, L., Nokes-Malach, T.J., and Schunn, C.D. (2013). Learning through case comparisons: A meta-analytic review, Educational Psychologist, 48:2, 87-113, DOI: 10.1080/00461520.2013.775712

Braun, V., Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101, DOI: 10.1191/1478088706qp063oa

Braun, V., Clarke, V. (2012). Thematic analysis. In: Cooper, H., Camic, P.M., Long, D.L., Panter, A.T., Rindskopf, D., Sher, K.J. (eds.) APA Handbook of Research Methods in Psychology, Research Designs, vol. 2, pp. 57–71. American Psychological Association, Washington.

Braun, V., Clarke, V. (2014) Thematic analysis. In: Teo, T. (ed.) Encyclopaedia of Critical Psychology, pp. 1947–1952. Springer, New York.

Braun, V., Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health. Volume 11, 2019 – Issue 4, DOI: 10.1080/2159676X.2019.1628806

Braun, V., Clarke, V. (2020). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qual. Res. Psychology, DOI: 10.1080/14780887.2020.1769238

Lipnevich, A.A., McCallen, L.N., Miles, K.P., and Smith, J.K.  (2014). Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42(4) pp.539–559, DOI:10.1007/s11251-013-9299-9

Nicol, D. (2021). The power of internal feedback: exploiting natural comparison processes. Assessment & Evaluation in Higher Education, 46(5), pp. 756–778, DOI: 10.1080/02602938.2020.1823314

Nicol, D., and McCallum, S. (2022). Making internal feedback explicit: exploiting the multiple comparisons that occur during peer review, Assessment & Evaluation in Higher Education, 47(3) pp.424-443, DOI: 10.1080/02602938.2021.1924620
Nicol, D. and Selvaretnam, G. (2022). Making internal feedback explicit: harnessing the comparisons students make during two-stage exams. Assessment & Evaluation in Higher Education, 47(4), pp. 507–522, DOI: 10.1080/02602938.2021.1934653

To, J., Panadero, E., and Carless, D. (2022). A systematic review of the educational uses and effects of exemplars. Assessment & Evaluation in Higher Education, 47(8), pp. 1167–1182, DOI: 10.1080/02602938.2021.2011134


09. Assessment, Evaluation, Testing and Measurement
Paper

Does Gender predict Upper Secondary School Students’ Perceptions of Teacher Feedback?

Katharina Dreiling, Ariane S. Willems

University of Göttingen, Germany

Presenting Author: Dreiling, Katharina

A key assumption of established models of school effectiveness and improvement is that factors of teaching quality significantly affect the development of students’ competencies and attitudes (Kyriakides & Creemers, 2008). In particular, the power of feedback as a component of teaching quality has been stressed (Hattie & Timperley, 2007; Lipnevich & Smith, 2018; Seidel & Shavelson, 2007). Drawing on the feedback theory of Hattie and Timperley (2007), three dimensions of feedback quality are distinguished: on the task level, feedback informs learners about their actual state of learning and/or performance; on the process level, feedback provides information on the progress students have made toward meeting the learning goal and gives hints on how to improve; on the self-regulation level, feedback encourages students to regulate and evaluate their own learning process. Additionally, Willems and Dreiling (in press) suggested a dialogue-related dimension of feedback, involving peers as a source of feedback in evaluating students’ performance. Existing studies show that the quality of feedback affects learning outcomes on both cognitive (e.g., achievement) and motivational (e.g., intrinsic motivation) levels (Rakoczy et al., 2008; Wisniewski et al., 2020). The impact of feedback, however, is not necessarily positive, which indicates that individual students differ considerably in the ways they perceive and use the feedback they receive (Wisniewski et al., 2020). In current social constructivist models, the learner is assumed to be an active agent in receiving, perceiving, and processing feedback information (Thurlings et al., 2013). Recently, Lipnevich et al. (2016) proposed a student interaction model of feedback that highlights how feedback is received by the learner and how subsequent action on feedback is influenced by the learner’s individual characteristics.
Hence, examining students’ perceptions of feedback and their determinants has been the focus of much recent feedback research (Lipnevich & Lopera-Oquendo, 2022; Winstone et al., 2017).

There is also evidence pointing to gender differences in the perception and processing of feedback (Chen et al., 2011; Hoya, 2021). Yet studies on gender differences in perceptions of feedback are limited, primarily because of the lack of existing instruments that measure multiple dimensions of feedback perception (Lipnevich & Lopera-Oquendo, 2022). Against this background, we aim to investigate whether boys and girls differ in their perception of feedback in German language classes. We adopt a multidimensional view of feedback (Strijbos et al., 2021), differentiating simple and elaborated dimensions of feedback quality that influence how feedback is perceived and used for further learning. In order to make meaningful comparisons of means across gender groups, measurement invariance of the instrument must be established (Millsap, 2011). Thus, the purpose of the current study is threefold. First, we discuss the validation of an instrument to measure multiple dimensions of perceived feedback quality. Second, we examine the measurement invariance of the feedback perception questionnaire across gender and investigate mean differences in the feedback perception scales between gender groups. Third, we explore whether the assumed relations between gender and feedback perception persist even under control of individual performance as well as students’ intrinsic learning motivation.


Methodology, Methods, Research Instruments or Sources Used
The presented results are based on data from the German study FeeHe (‘Feedback in the context of heterogeneity’). To the best of our knowledge, FeeHe is the first study in which different theoretically and empirically derived dimensions of feedback are systematically measured from the perspective of high school students in German language classes. A repeated-measures design with two measurement points was used to investigate students’ perception of teacher feedback and the interplay between the perceived feedback and the students’ individual characteristics. At the beginning of a school semester (t1), a total of n = 810 students (Mage = 16.69, SD = .84; 53.8% female) attending the 11th and 12th grade in 49 German language courses participated in the questionnaire study. After one school semester (t2), n = 696 of the students (Mage = 17.17, SD = .90; 55.2% female) were surveyed again. To assess the students’ perception of teacher feedback, we developed a new instrument that distinguishes four dimensions of perceived feedback quality: (i) a task-oriented dimension, (ii) a process-oriented dimension, (iii) a self-regulation-oriented dimension, and (iv) a dialogue-oriented dimension, each assessed by four items. The internal consistencies of the scales are satisfactory to good (t1: .68 ≤ α ≤ .72; t2: .76 ≤ α ≤ .80). Student gender data were gathered from teacher interviews at t1. To assess students’ performance, the teachers were asked to provide each student’s current grade in German. Grades range from 1 (excellent) to 6 (insufficient/fail); for the analyses reported in this paper, grades were recoded so that higher numbers represent better performance. Students’ intrinsic motivation for German was assessed at t1 by a scale consisting of six items; the internal consistency of this scale was very good (α = .93).
All scales were answered on a 4-point Likert scale ranging from 1 (totally agree) to 4 (totally disagree).
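The internal-consistency coefficients reported above (Cronbach’s α) follow directly from the item and total-score variances. As an illustrative sketch only (this is not the study’s own code, and the response data below are invented), the computation for one four-item dimension scored 1–4 might look like:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of equal-length lists, one list of responses per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)            # number of items in the scale
    n = len(items[0])         # number of respondents

    def pvar(xs):             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / pvar(totals))

# Hypothetical responses from six students to one four-item dimension,
# on the 4-point scale from 1 (totally agree) to 4 (totally disagree).
responses = [
    [1, 2, 2, 3, 4, 3],
    [1, 2, 3, 3, 4, 2],
    [2, 2, 2, 4, 4, 3],
    [1, 3, 2, 3, 3, 3],
]
print(round(cronbach_alpha(responses), 2))  # prints 0.91
```

With real survey data one would typically use a statistics package rather than hand-rolled code, but the formula is the same one underlying the α values reported for the four dimensions.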
To examine the structural validity of the feedback perception scales, confirmatory factor analyses were conducted for each measurement point. Measurement invariance and gender differences in feedback perception were explored by applying multigroup confirmatory factor analysis. Subsequently, longitudinal structural equation modeling was used to investigate how students’ gender, performance, and intrinsic learning motivation predict the initial level (t1) of feedback perceptions and changes over time (t2–t1).

Conclusions, Expected Outcomes or Findings
The results on the factor structure of individual students' perceptions of feedback quality are in line with previous research and strengthen the distinction of perceived dimensions of feedback quality (Hattie & Timperley, 2007; Willems & Dreiling, in press). These results show that students are generally able to distinguish between the four dimensions of feedback quality in their ratings. However, this distinction is not perfect as indicated by the high correlations between the task-oriented and process-oriented dimension (t1: r = .83, t2: r = .84). We argue that this is not merely a measurement issue, but rather reflects a teacher practice of providing concurrent feedback on student achievement and progress. Concerning the second research question, Multigroup Confirmatory Factor Analysis revealed measurement invariance of the identified factor structure across gender groups. Given that measurement invariance was established, the measures of students’ perceptions can be used to compare means of perceived feedback quality between boys and girls. Contrary to our expectations, we could not find any gender-specific mean differences in upper secondary school students’ perceptions of various dimensions of feedback quality. Results from Longitudinal Structural Equation Modeling revealed that initial intrinsic learning motivation and performance are significant predictors of interindividual differences in the initial level and change of feedback perceptions.
Overall, our results highlight that differences in perceptions of feedback quality can be explained by students’ individual motivational and cognitive learning characteristics rather than by their gender, and that such interindividual differences in perceptions must be taken into account when examining the effectiveness of feedback.

References
Chen, Y., Thompson, M.S., Kromrey, J.D., & Chang, G.H. (2011). Relations of student perceptions of teacher oral feedback with teacher expectancies and student self-concept. The Journal of Experimental Education, 79(4), 452–477.
Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81–112.
Hoya, F. (2021). Unterschiede in der Wahrnehmung positiven und negativen Feedbacks von Mädchen und Jungen im Leseunterricht der Grundschule. Unterrichtswissenschaft, 49, 423–441.
Kyriakides, L., & Creemers, B.P.M. (2008). Using a multidimensional approach to measure the impact of classroom level factors upon student achievement. School Effectiveness and School Improvement, 19(2), 183–205.
Lipnevich, A. A., Berg, D. A. G., & Smith, J. K. (2016). Toward a model of student response to feedback. In G. T. L. Brown & L. R. Harris (Eds.), The handbook of human and social conditions in assessment (pp. 169–185). New York: Routledge.
Lipnevich, A. A., & Smith, J. K. (Eds.). (2018). The Cambridge handbook of instructional feedback. Cambridge: Cambridge University Press.
Lipnevich, A. A., & Lopera-Oquendo, C. (2022). Receptivity to instructional feedback: A validation study in the secondary school context in Singapore. European Journal of Psychological Assessment. Advance online publication.
Millsap, R. E. (2011). Statistical approaches to measurement invariance. New York, NY: Routledge.
Rakoczy, K., Klieme, E., Bürgermeister, A., and Harks, B. (2008). The interplay between student evaluation and instruction. J. Psychol. 216, 111–124
Seidel, T., and Shavelson, R. (2007). Teaching effectiveness research in the past decade: the role of theory and research design in disentangling meta-analysis results. Review of Educational Research, 77, 454–499.
Thurlings, M., Vermeulen, M., Bastiaens, T., and Stijnen, S. (2013). Understanding feedback: a learning theory perspective. Educational Research Review, 9, 1–15.
Willems, A. S. & Dreiling, K. (in press). Erklären individuelle Motivationsprofile von Schülerinnen und Schülern Unterschiede in ihrer Feedbackwahrnehmung im Deutschunterricht der gymnasialen Oberstufe? Journal for Educational Research Online.
Winstone, N. E., Nash, R. A., Parker, M. & Rowntree, J. (2017). Supporting Learners’ Agentic Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes. Educational Psychologist, 52(1), 17–37.
Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, Article 3087.


09. Assessment, Evaluation, Testing and Measurement
Paper

Upper Secondary Teachers’ Experiences with the Use of Video Feedback in Student Assessment

Dorthea Sekkingstad, Ann Karin Sandal

Western Norway University of Applied Sciences, Norway

Presenting Author: Sekkingstad, Dorthea; Sandal, Ann Karin

Background and theoretical framework

Research in assessment recognizes formative feedback as crucial to enhancing student learning and achievement (Black et al., 2004; Black & Wiliam, 1998; The Assessment Reform Group, 1999). Feedback can be understood as information about the gap between the actual level of performance and the desired outcome of a learning process, aiming to reduce that gap through formative feedback to the learner (Ramaprasad, 1983; Hattie & Timperley, 2007). Thus, feedback comprises information to the learner about performance as well as information about how to reach the learning aims. This points to an understanding of assessment as both formative and summative, and to the well-known concepts of assessment for and of learning (Wiliam, 2011). These perspectives have been developed in a comprehensive body of international assessment research and integrated into classroom practice in numerous countries (Baird et al., 2014; Hattie & Timperley, 2007). Research in formative feedback has shown promising results as regards student learning. However, effective formative feedback must be practiced within a teaching design that promotes students’ use of feedback, develops a common understanding of learning aims, and provides quality feedback, such as timely and specific feedback and detailed information about the next step in the learning process (Hattie & Timperley, 2007).

In the Norwegian context, formative assessment and assessment for learning, including students’ self-assessment, have been implemented in the curricula and assessment regulations since 2006. Formative assessment as a concept is an essential part of the assessment regulations, which describe when and how to assess formatively. Several national initiatives to support the implementation of formative assessment in schools have been developed since 2007 (Norwegian Directorate for Education and Training, 2018). Despite this effort, there is still a need for further development of formative assessment to support learning, and the annual student survey reveals a decrease in formative feedback at higher levels of the education system (Norwegian Directorate for Education and Training, 2023). Rambøll Management Consulting (2020) reports a lack of sufficient time to give feedback, and that grades overshadow students’ awareness of the learning potential in formative feedback.

Practical and pedagogical challenges related to the implementation of formative feedback in teaching design call for new tools for working with formative assessment as a resource for learning in classrooms (Heitink et al., 2016), and several studies have found digital tools useful for effective formative feedback in education (Henderson & Philips, 2015; Dawson et al., 2019; Mahoney et al., 2019). More specifically, several studies show that the use of video in formative feedback to students provides new opportunities for quality feedback and enables more timely, detailed, and personalized formative feedback, which students use in further learning (Dawson et al., 2019; Kay & Bahula, 2020; Mahoney et al., 2019).

The literature review indicates that previous research on video feedback (VF) comes mainly from higher education, and from institutions outside the Nordic countries. Research on VF also focuses mostly on language studies and on studies where students receive feedback on written text (Bakla, 2017; Mahoney et al., 2019; Kay & Bahula, 2020). To our knowledge, there are few studies of the use of VF in formative assessment in upper secondary school, and few from the teachers’ perspective. This study therefore aims to investigate upper secondary teachers’ experiences with the use of video feedback to enhance students’ learning, focusing on whether VF provides new conditions for formative assessment. The research questions are:

a) How do teachers use video in formative assessment?

b) What kind of advantages and challenges can be identified in using video feedback?


Methodology, Methods, Research Instruments or Sources Used
Methods
Data are based on qualitative individual interviews with eight teachers in two upper secondary schools in Norway. The teacher informants have from 10 to 30 years of experience as teachers, and from two to ten years of experience with VF. The initial recruitment of informants was supported by the schools’ headmasters, followed by teachers recruiting colleagues according to the selection criterion. The informants represent teaching in a broad range of subjects, in both general study programs and vocational programs in upper secondary school, e.g., languages, social sciences, natural science and mathematics, and economy/business studies.
The semi-structured interview guide comprises five topics: 1) background and motivation for using VF, 2) the use of VF and VF as part of planning of teaching and assessment, 3) advantages and challenges in using VF, 4) experiences with other assessment tools and forms of assessment, and 5) experiences with supportive colleagues and leadership in schools.

To analyse the data, we used thematic content analysis as a flexible framework for identifying and classifying patterns in the data (Krippendorff, 2004). The analysis started with open inductive coding by two researchers, and the coding was compared and negotiated in several cycles. In the next step of the analysis, we coded the material deductively in relation to the research questions. This analysis laid the ground for establishing topics and themes, which form the basis of the categories presented under Results. Drawing on Geertz (1983), the themes and categories are defined and named close to the informants’ descriptions and stories. In both analysis steps, the researchers interpreted and coded the material individually before negotiating and discussing interpretations and establishing the themes. Analysing the data in several cycles and by more than one researcher may have strengthened the validity and reliability of the study. The common interpretation of data and negotiation of codes and themes also helped to validate different interpretations of the data (Malterud, 2017). The study was conducted according to ethical guidelines in research (NSD, the Norwegian centre for research data).

Conclusions, Expected Outcomes or Findings
Results and conclusion
The main findings show a formative use of VF to promote learning in formative assessment in school subjects. VF is described as a new prerequisite for assessment and for increased feedback quality. The teachers report that the students engage with and use the VF during their learning processes, in contrast to written feedback, which is read and used by the students to a lesser extent. The findings also show that VF is a flexible format for giving feedback: it provides scope and space for detailed, extensive, and individual feedback, as well as information about the next step in learning. For example, the “show and tell” function in the software is valued by the teachers as an important tool for formative feedback. The use of VF also supports relationship building between the students and the teacher, even though the communication is asynchronous. According to the teachers, the students experience being “seen” and recognized by the teacher through the quality feedback. One of the challenges in the use of VF relates to school leadership and priorities: important prerequisites for using VF are funding for adequate software and access to a study office.

Although this study investigates VF with a small sample of informants, we find the results interesting because they point to VF as a tool for quality feedback, and because they are in line with previous research. Our study investigates the use of VF from the teachers’ perspective, and the findings must be interpreted with respect to self-report bias. Further research including students as informants might bring forward important information about VF in formative feedback from a student perspective. Analysis of the videos themselves is also a relevant topic for further VF research.



References
Baird, J-A., Hopfenbeck, T., Newton, P., Stobart, G., & Steen-Utheim, A. (2014). State of the field review. Assessment and learning. Norwegian Knowledge Centre for Education (case number 13/4697).

Bakla, A. (2017). An Overview of Screencast Feedback in EFL Writing: Fad or the Future? Conference Paper: International Foreign Language Teaching and Teaching Turkish as a Foreign Language (27-28 April 2017), Bursa, Turkey.  

Black, P. & Wiliam, D. (1998). Inside the Black Box: Raising Standards through Classroom Assessment. Phi Delta Kappan, 80(2), 139–148.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working Inside the Black Box: Assessment for Learning in the Classroom. Phi Delta Kappan, 86(1), 8–21.

Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: staff and student perspectives. Assessment & Evaluation in Higher Education, 44(1), 25–36, DOI: 10.1080/02602938.2018.1467877

Geertz, C. (1983). Local Knowledge. Further Essays in Interpretive Anthropology. Basic Books.

Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81-112.

Heitink, M.C., van der Kleij, F.M., Veldkamp, B.P., Schildkamp, K., & Kippers, W.B. (2016). A systematic review of prerequisites for implementing assessment for learning in classroom practice. Educational Research Review, 17 (2016) 50-62.

Henderson, M. & Philips, M. (2015). Video-based feedback on student assessment: scarily personal. Australasian Journal of Educational Technology, 31(1).

Kay, R.H. & Bahula, T. (2020). A Systematic Review of the Literature on Video Feedback Used in Higher Education. Conference: EDULearn 2020 - International Conference on Education and New Learning Technologies. Seville, Spain, July 2020. DOI:10.21125/edulearn.2020.0605

Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Sage Publications.

Mahoney, P., Macfarlane, S., & Ajjawi, R. (2019) A qualitative synthesis of video feedback in higher education. Teaching in Higher Education, 24(2), 157-179, DOI:10.1080/13562517.2018.1471457

Malterud, K. (2017). Kvalitative forskningsmetoder for medisin og helsefag (4. utg.). Universitetsforlaget.

Norwegian Directorate for Education and Training (2023). The Student Survey. https://www.udir.no/tall-og-forskning/statistikk/elevundersokelsen/
Norwegian Directorate for Education and Training. (2018). Observations on the National Assessment for Learning Programme (2010–2018). Skills development in networks. Final report 2018.

NSD-Norwegian centre for research data. https://www.nsd.no/en/find-data

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4–13.

Rambøll Management Consulting (2020). Vurdering i skolen. [Assessment in School]. Report.

The Assessment Reform Group (1999). Assessment for Learning: Beyond the Black Box. University of Cambridge School of Education. https://www.nuffieldfoundation.org/sites/default/files/files/beyond_blackbox.pdf

Wiliam, D. (2011). What is assessment for learning? Studies in educational evaluation, 37, 3-14.


 
Conference: ECER 2023 · Conference Software: ConfTool Pro 2.6.149+TC