Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

 
 
Session Overview
Session: 22 SES 14 D
Time: Friday, 25/Aug/2023, 9:00am - 10:30am

Session Chair: Edgar Valencia
Location: Adam Smith, 711 [Floor 7]

Capacity: 35 persons

Paper Session

Presentations
22. Research in Higher Education
Paper

Students’ Observations about Academic Dishonesty in Higher Education

Kerime Kofunyeli1, Yesim Capa-Aydın2

1Gazi University, Turkiye; 2Middle East Technical University, Turkiye

Presenting Author: Kofunyeli, Kerime

There are several different definitions of academic dishonesty, yet they can all be summarized as the use of any unethical means to achieve better results in learning assessments (Miller et al., 2017). The prevalence of academic dishonesty in higher education institutions is documented by research worldwide (Murdock et al., 2006). When cheating becomes common, it has consequences for both students and higher education institutions. This so-called victimless crime prevents students from attaining the knowledge and capabilities their programs are meant to impart and undermines the fairness of assessment (Miller et al., 2017). In addition, when students observe their peers cheating without being punished, they cheat to level the playing field, creating a campus culture in which cheating is normalized (McCabe et al., 1993). Higher education institutions are also damaged by cheating scandals, which reduce the public’s confidence in the qualifications they award (Harding et al., 2004). It is evident that academic dishonesty has victims; therefore, building an understanding of it is crucial to developing prevention strategies.

Extensive research has been done to find out why students cheat. Brimble et al. (2005) found that students and university staff may perceive differently which behaviors constitute academic cheating. To prevent cheating and to reduce incidents that occur because of confusion, academic dishonesty regulations are put in place. Even so, research suggests that students’ understanding of academic dishonesty policies is low (Bretag et al., 2014). Furthermore, studies reveal that university staff tend to ignore student cheating (Coren, 2011). Students give various reasons for why they cheat; among these, the belief that others are cheating is particularly prevalent (Awdry et al., 2021). The relationship between assessment type and cheating has received limited research attention, yet Harper et al. (2019) found that it is a contributing factor. Moreover, the internet has given students another channel for cheating: buying essays, for instance, is not new, but the internet is a very convenient medium for such behavior.

The current study investigated cheating among undergraduate students to describe their points of view. As this study took place during the Covid-19 pandemic, and as emerging studies pointed to an increase in the prevalence of academic dishonesty during the lockdown period (Comas-Forgas et al., 2021), student opinions on cheating trends during emergency remote teaching (ERT) were also gathered. Furthermore, students’ observations about contract cheating were examined. More specifically, the research questions were: (1) What are undergraduate students’ perceptions and opinions on academic dishonesty? (2) What are undergraduate students’ beliefs about the frequency of dishonesty during emergency remote education compared to in-person education, and how do students explain the reasons for this difference? (3) What are undergraduate students’ observations of contract cheating?


Methodology, Methods, Research Instruments or Sources Used
A survey design was used for this study. Data were collected through an online survey from 442 students at a university in Turkey in the spring of 2021.
The Academic Dishonesty Questionnaire used in this study was developed for this research. First, the items were drafted drawing on the literature. Next, they were reviewed with the help of a measurement specialist and a Turkish language specialist to eliminate problems of grammar, vocabulary, and ambiguity (DeVellis, 2016). Afterward, cognitive interviews were held with seven target respondents to eliminate errors in the questionnaire and to confirm that items were understood consistently (Fowler, 2013).
The questionnaire has 13 items. The first item asks whether students are aware of the academic dishonesty regulations of their university. Another item collects information about possible student reasons for academic dishonesty. Furthermore, the reporting behaviors of instructors and peers are asked about on a 5-point scale ranging from “never” to “always.” Student perceptions of their peers’ cheating and of their peers’ knowledge of academic dishonesty regulations are examined with two similarly styled 5-point items. Moreover, student opinions about the relationship between cheating and assessment types are gathered with three items with unordered response categories covering assessment type, cognitive process, and assignment deadlines and their place in the overall evaluation. These response categories were written using the assessment preference inventory (Birenbaum, 1994). Student opinions on cheating during ERT were probed with one categorical and one open-ended question. Lastly, one yes/no and one open-ended question gathered their observations about contract cheating.
Descriptive statistics and content analyses were conducted. Responses to the two open-ended questions were read repeatedly, and lists of codes were formed using the related literature. The lists of codes and the responses were shared with another researcher to ensure inter-coder reliability (Marshall et al., 2016). Cohen’s kappas were calculated as .74 and .91 for the two questions, indicating substantial to almost perfect agreement between the two coders (Landis et al., 1977).
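As an illustration of the inter-coder reliability check described above, the sketch below computes Cohen’s kappa for two coders’ labels on the same set of responses. It is a minimal sketch under stated assumptions, not the study’s analysis: scikit-learn is assumed to be available, and the coder lists and category names are hypothetical.

# Minimal sketch of an inter-coder reliability check with Cohen's kappa.
# Assumptions: the two coders' labels are stored as parallel lists; the
# category names and values below are hypothetical, not the study's data.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["exam_security", "quality", "personal", "quality", "covid", "assessment"]
coder_2 = ["exam_security", "quality", "personal", "instructor", "covid", "assessment"]

kappa = cohen_kappa_score(coder_1, coder_2)
# Landis and Koch (1977) read .61-.80 as substantial and .81-1.00 as almost perfect.
print(f"Cohen's kappa: {kappa:.2f}")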

Conclusions, Expected Outcomes or Findings
More than half of the participants did not know the academic dishonesty regulations of their university. They rated their peers’ understanding of the regulations as less than good. They also reported believing that their peers tend to cheat, but that instructors and other students were more likely not to report the incidents. The results imply a belief that cheating happens and punishment is scarce. Moreover, students declared that they cheated mostly to “achieve a higher GPA” and “because of coinciding assignment deadlines and exam dates.” The majority of the participants indicated that students would be more likely to cheat in multiple-choice questions, knowledge-based questions, assignments with a short time to complete, and highly weighted assignments.
They were also asked about their beliefs concerning changes in academic dishonesty incidence during the Covid-19 pandemic. Most (69.82%) reported they believed cheating increased, 25% said it stayed the same, and only 5.18% reported it decreased. Students also shared what they thought were the reasons for the change in the number of cheating incidents. Six themes emerged: exam security issues; dissatisfaction with online education; instructor behavior and attitude; assessment design; personal characteristics of students; and Covid-19 pandemic-related issues. It should be noted that their answers shifted focus away from their own circumstances and concentrated mostly on quality-related issues.
One-third of the participants indicated they had observed contract cheating. Participants reported coming across social media accounts that offered to complete assignments, websites that offered contract cheating services, and advertisements for such websites. Some mentioned that these accounts requested money for their services. A few also said they had encountered related practices, such as requests for assistance from other students, assignments completed jointly, and assignments completed by dividing up the tasks. Overall, student observations suggest that cheating is widespread on campuses and resistant to precautions.

References
Ahsan, K., Akbar, S., & Kam, B. (2021). Contract cheating in higher education: A systematic literature review and future research agenda. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2021.1931660
Awdry, R., & Ives, B. (2021). Students cheat more often from those known to them: situation matters more than the individual. Assessment and Evaluation in Higher Education, 46(8), 1254–1268. https://doi.org/10.1080/02602938.2020.1851651
Birenbaum, M. (1994). Toward adaptive assessment - The student’s angle. Studies in Educational Evaluation, 20(2), 239–255. https://doi.org/10.1016/0191-491X(94)90011-6
Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., van Haeringen, K., … Rozenberg, P. (2019). Contract cheating and assessment design: exploring the relationship. Assessment and Evaluation in Higher Education, 44(5), 676–691. https://doi.org/10.1080/02602938.2018.1527892
Bretag, T., Mahmud, S., Wallace, M., Walker, R., McGowan, U., East, J., … James, C. (2014). “Teach us how to do it properly!” An Australian academic integrity student survey. Studies in Higher Education, 39(7), 1150–1169. https://doi.org/10.1080/03075079.2013.777406
Brimble, M., & Stevenson-Clarke, P. A. (2005). Perceptions of the prevalence and seriousness of academic dishonesty in Australian universities. The Australian Educational Researcher, 32, 19–44. https://files.eric.ed.gov/fulltext/EJ743503.pdf
Comas-Forgas, R., Lancaster, T., Calvo-Sastre, A., & Sureda-Negre, J. (2021). Exam cheating and academic integrity breaches during the COVID-19 pandemic: An analysis of internet search activity in Spain. Heliyon, 7(10), e08233. https://doi.org/10.1016/j.heliyon.2021.e08233
Coren, A. (2011). Turning a blind eye: faculty who ignore student cheating. Journal of Academic Ethics, 9(4), 291–305. https://doi.org/10.1007/s10805-011-9147-y
DeVellis, R. F. (2016). Scale development: Theory and applications (4th ed.). Sage Publications, Inc.
Fowler, F. J. (2013). Survey research methods (5th ed.). Sage Publications, Inc.
Harper, R., Bretag, T., Ellis, C., Newton, P., Rozenberg, P., Saddiqui, S., & van Haeringen, K. (2019). Contract cheating: a survey of Australian university staff. Studies in Higher Education, 44(11), 1857–1873. https://doi.org/10.1080/03075079.2018.1462789
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. https://doi.org/10.2307/2529310
Marshall, C., & Rossman, G.B. (2016). Designing qualitative research (6th ed.). SAGE Publications, Inc.
McCabe, D. L., & Treviño, L. K. (1993). Academic dishonesty: Honor codes and other contextual influences. The Journal of Higher Education, 64(5), 522–538. https://doi.org/10.2307/2959991
Miller, A. D., Murdock, T. B., & Grotewiel, M. M. (2017). Addressing academic dishonesty among the highest achievers. Theory into Practice, 56(2), 121–128. https://doi.org/10.1080/00405841.2017.1283574
Murdock, T. B., & Anderman, E. M. (2006). Motivational perspectives on student cheating: Toward an integrated model of academic dishonesty. Educational Psychologist, 41(3), 129–145. https://doi.org/10.1207/s15326985ep4103_1


22. Research in Higher Education
Paper

Strategies and Criteria During Self-assessment in Higher Education

Daniel García-Pérez1, Ernesto Panadero2, Javier Fernández Ruiz3, Juan Fraile4, Iván Sánchez Iglesias1, Gavin Brown5

1Universidad Complutense de Madrid, Spain; 2Universidad de Deusto, Spain/ IKERBASQUE, Basque Foundation for Science; 3Universidad de Burgos, Spain; 4Universidad Francisco de Vitoria, Spain; 5The University of Auckland, New Zealand

Presenting Author: García-Pérez, Daniel

This study is framed within a research project that analyses how higher education students deploy self-assessment (SA) strategies and considers different factors affecting them. With the term SA we refer to “a wide variety of mechanisms and techniques through which students describe (i.e., assess) and possibly assign merit or worth to (i.e., evaluate) the qualities of their own learning processes and products” (Panadero et al., 2016, p. 804). Research in the area of formative assessment has shown that SA is a strategy that can positively affect self-regulation (Yan, 2019) and achievement (Brown & Harris, 2013).

In this communication we present part of the results of a randomized experiment carried out in higher education. Specifically, we analyze how different types of feedback affect the strategies and criteria deployed by higher education students during a SA task.

We selected the type of feedback as a key component of the experiment because it is a powerful instructional practice that intertwines with self-regulation (Butler & Winne, 1995) and has an important effect on academic achievement (Wisniewski et al., 2020). Understanding the effects of external feedback on students’ SA could help us better integrate both for instructional purposes. For this reason, we compared how higher education students self-assessed before and after receiving different types of feedback (rubric vs. instructor’s feedback vs. a combination of rubric and instructor’s feedback), and we analysed how these two manipulations (feedback occasion and feedback type) could affect the quality and quantity of the strategies and criteria used by students while they self-assessed their work.

Regarding feedback, the conditions included two types. While the use of instructor’s feedback is very common, rubrics have gained a prominent role as feedback tools in recent years due to their positive effects for students, teachers, and programs (Dawson, 2017). Although the use of rubrics seems to be more effective when not combined with exemplars (Lipnevich et al., 2014), we do not know how the combination of instructor’s feedback with a rubric can affect SA.

Contrasting how students perform self-assessment before and after receiving feedback could inform us about when feedback should be provided in relation to SA.

Therefore, this communication aims to explore two research questions:

- RQ1: What are the self-assessment strategies and criteria that higher education students implement before and after feedback?

H1: Self-assessment strategies and criteria will decrease when feedback is provided.

- RQ2: What are the effects of feedback type and feedback occasion on self-assessment behaviors (i.e., number and type of strategies and criteria)?

H2: Rubric feedback will provide better self-assessment practices than other feedback types.


Methodology, Methods, Research Instruments or Sources Used
- Participants
126 undergraduate psychology students (88.1% female) across the first, second, and third years of study (34.9%, 31.7%, and 33.3%, respectively) participated in the study in one of three feedback conditions: rubric (n = 43), instructor’s feedback (n = 43), and rubric + instructor’s feedback combined (n = 40).
- Instruments
Thinking-aloud protocols: participants were asked to state out loud the thoughts, emotions, and other processes they experienced during the SA. They were prompted to think aloud if they remained silent for more than 30 seconds. The protocols were coded using categories from a previous study by the team, covering the strategies and criteria that students deployed during the SA task.
- Procedure
The procedure consisted of two parts. First, participants attended a seminar on academic writing, where they wrote an essay that was assessed by the instructor (pre-experimental phase). Later, participants came individually to the laboratory, where they self-assessed their original essay while thinking aloud. They were then asked to self-assess again after receiving the feedback corresponding to their condition (rubric vs. instructor vs. combination). During this process they completed questionnaires at three points (data not included in this study).
- Intervention prompts
Rubric: an analytic rubric created for this study, with three levels of quality (low, average, and high) for the contents of the workshop: a) writing process, b) structure and coherence of the text, and c) sentences, vocabulary, and punctuation.
Instructor’s feedback: the instructor provided comments on each essay using the same categories as the rubric (except for the “writing process” criterion, which the instructor could not observe). It also included a grade ranging from 0 to 10 points.
- Data analysis
The thinking-aloud protocols were coded by two judges. After three rounds of coding different videos and discussing discrepancies, they reached Krippendorff’s α = 0.87.
The categorical variables were described with multiple dichotomous frequency tables, as each participant could display more than one behavior. To study the effect of the factors (feedback occasion and condition) on the frequencies of self-assessment strategies and criteria, we conducted ANOVAs and chi-square tests.
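As an illustration of the inter-judge reliability check reported above, the sketch below computes Krippendorff’s alpha for two judges. It is a minimal sketch under stated assumptions, not the study’s analysis: the krippendorff Python package is assumed to be installed, and the coded segments below are hypothetical integer category ids, not the study’s data.

# Minimal sketch of an inter-judge reliability check with Krippendorff's alpha.
# Assumptions: the `krippendorff` package is installed; values are hypothetical.
import numpy as np
import krippendorff

# Rows: the two judges; columns: coded video segments; np.nan marks a missed segment.
reliability_data = np.array([
    [1, 2, 3, 3, 2, 1, 4, np.nan],
    [1, 2, 3, 2, 2, 1, 4, 4],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")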

Conclusions, Expected Outcomes or Findings
Regarding RQ1, the most common SA strategies used by the participants had a low level of complexity, but there were also some advanced strategies (e.g., thinking of different responses). The strategies used before and after feedback were similar, with the logical addition after feedback of strategies focused on the content of the feedback. The criteria used to assess the task were also similar, but after feedback the use of three criteria increased in conditions 1 (rubric) and 3 (rubric + instructor’s feedback) according to binomial χ2 comparisons: writing process (p < 0.001 in both conditions), paragraph structure (p < 0.05 in the rubric condition), and punctuation marks (p > 0.05 in both conditions). In the instructor’s feedback condition there was a non-significant decrease in the use of the writing-process criterion and the analysis of sentences.
Regarding RQ2, after feedback there were no significant differences in the number of strategies used in each condition. However, the number of criteria differed substantially, F(2, 121) = 25.30, p < 0.001, η2 = 0.295, with post hoc differences showing that the rubric (M = 4.48, SD = 0.165) and combined (M = 4.50, SD = 0.171) conditions outperformed the instructor condition (M = 3.02, SD = 0.169), both at p < 0.001. Also, the pre-post increase in the number of strategies deployed was greater (post hoc p = 0.002) in the rubric condition (M = 0.938, SE = 0.247) than in the instructor’s feedback condition (M = −0.291, SE = 0.253).
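For readers who want to run this kind of comparison on their own data, the sketch below performs a one-way ANOVA on per-participant criteria counts and derives eta squared from the sums of squares. It is a minimal sketch under stated assumptions: SciPy and NumPy are assumed, and the three arrays are made-up counts, not the study’s data.

# Minimal sketch of a one-way ANOVA with an eta-squared effect size.
# Assumptions: per-participant criteria counts per condition; values are hypothetical.
import numpy as np
from scipy import stats

rubric = np.array([5, 4, 5, 4, 5])
instructor = np.array([3, 3, 2, 4, 3])
combined = np.array([5, 4, 4, 5, 5])

f_stat, p_value = stats.f_oneway(rubric, instructor, combined)

# Eta squared = between-group sum of squares / total sum of squares.
all_values = np.concatenate([rubric, instructor, combined])
grand_mean = all_values.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in (rubric, instructor, combined))
ss_total = ((all_values - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, eta^2 = {eta_squared:.3f}")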
The study has several implications. First, rubric feedback seems to be a better scaffold when students self-assess, increasing the number of criteria used and stimulating student reflection (Brookhart, 2018). Second, the instructor’s feedback showed worse results in the deployment of SA strategies and criteria, perhaps because it places students in a more passive position. Finally, it seems that feedback presented after students have self-assessed could be preferable, since it allows students to exhibit constructive strategies and criteria first.

References
Brookhart, S. M. (2018). Appropriate Criteria: Key to Effective Rubrics. Frontiers in Education, 3, 22. https://doi.org/10.3389/feduc.2018.00022
Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), The Sage Handbook of research on classroom assessment (pp. 367–393). Sage.
Butler, D. L., & Winne, P. H. (1995). Feedback and Self-Regulated Learning: A Theoretical Synthesis. Review of Educational Research, 65(3), 245–281. https://doi.org/10.3102/00346543065003245
Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360. https://doi.org/10.1080/02602938.2015.1111294
Lipnevich, A. A., McCallen, L. N., Miles, K. P., & Smith, J. K. (2014). Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42(4), 539–559. https://doi.org/10.1007/s11251-013-9299-9
Panadero, E., Brown, G. T. L., & Strijbos, J. W. (2016). The Future of Student Self-Assessment: A Review of Known Unknowns and Potential Directions. Educational Psychology Review, 28(4), 803–830. https://doi.org/10.1007/s10648-015-9350-2
Wisniewski, B., Zierer, K., & Hattie, J. (2020). The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research. Frontiers in Psychology, 10, 3087. https://doi.org/10.3389/fpsyg.2019.03087
Yan, Z. (2019). Self-assessment in the process of self-regulated learning and its relationship with academic achievement. Assessment & Evaluation in Higher Education, 45(2), 224–238. https://doi.org/10.1080/02602938.2019.1629390


22. Research in Higher Education
Paper

The Trajectories of Student Evaluations: a Human-Figurational Analysis of Qualities

Kasja Weenink1, M.N.C. Aarts1, S.H.J. Jacobs2

1Radboud University, the Netherlands; 2University of Amsterdam, the Netherlands

Presenting Author: Weenink, Kasja

Topic, objective and analytical framework

This study follows the trajectories of student evaluations in a research university in the Netherlands. It analyses how they are adjusted and used at different moments by the different actors involved, how they relate to understandings of higher education quality, and which values, purposes, and social consequences are thereby taken into account.

Higher education quality is a multiple, elusive, and not always clearly articulated concept. Student evaluations of education and teaching are related to different purposes of higher education quality and to the assessment of aspects like student learning, program quality, teacher effectiveness, and faculty performance (Harvey & Green, 1993; Tam, 2001; Weenink et al., 2022). While they are used to improve teaching and learning, they have also become a disciplinary device to shape academic conduct (Barrow & Grant, 2016; Hornstein, 2017). It is not clear when, where, and for what purposes student evaluations are formally and informally used by different academic actors, or how quality is thereby measured and understood.

Esarey and Valdes (2020) note that the scholarly debate on student evaluations has focused on teacher effectiveness and on aspects like reliability, validity, and bias. They identify mixed perspectives concerning the reliability and validity of measuring teaching effectiveness and argue that student evaluations are at best moderately correlated with student learning and/or instructional best practices. Recent studies shift attention to issues of fairness and the social effects of their use. Heffernan (2022) draws attention to the negative consequences of bias for specific groups such as women and minority groups, who are increasingly subject to abusive comments. Focus groups with academics furthermore suggest that student evaluations are most consequential for early career scholars’ careers [authors, under review]. Unbiased, reliable, and valid evaluations can still be unfair and fail to identify the best teacher (Esarey & Valdes, 2020).

Several studies argue for combining student evaluations with other, dissimilar measurements of teaching, such as self-assessment and peer review of courses, in personnel decisions, and for statistical adjustments before using them for any purpose (Esarey & Valdes, 2020; Hornstein, 2017). This ‘broad quality perspective’ can include more than student attainment and also assess the role and performance of lecturers in the educational process (Onderwijsraad, 2016; Tam, 2001; Weenink et al., 2022). One could even include the social consequences of the uses of student evaluations. It is, however, not known which values are brought forward in using and constructing student evaluations within academia. While student evaluations are critiqued, there is a lack of knowledge about what they are actually used for and how they relate to understandings of quality, and there are different degrees of freedom to adjust them to situated practices and purposes.

This study analyses the trajectories of student evaluations for different social sciences in a Dutch university. Various academic actors, such as institutional and faculty management, educational committees, directors, course coordinators, lecturers, and students, can engage with them for different purposes and adjust them, for example by adding questions. These actors thereby articulate what they find valuable. Heuts and Mol (2013) conducted such an analysis of values for tomatoes from an Actor-Network Theory perspective, following them from developers and growers to so-called consumers. They identified different registers of worth that are drawn upon and sometimes clash when making a ‘good tomato’. We add Norbert Elias’ notion of human figurations (Elias, 1968, 1978) to this perspective to further assess how actors engage with their environment in using and adjusting student evaluations.

Research question

What are the trajectories of student evaluations in a Dutch research university, and how are different notions of quality taken into account in their uses and adaptations?


Methodology, Methods, Research Instruments or Sources Used
A single case study is conducted at a Dutch research university to provide an extensive analysis of the trajectories that student evaluations go through and to develop a broad understanding of how different actors shape and engage with them (Flyvbjerg, 2006). For practical reasons, the study focuses on different social sciences; the trajectory can, however, transcend the social sciences faculty level. Norbert Elias’ notion of human figurations provides a human-centered, networked perspective to analyze the role of the relevant actors and sites within the university for student evaluations in various social sciences.

A human figuration is a constellation of mutually oriented and dependent people, with shifting asymmetrical power balances: a nexus of human interdependencies (Elias, 1968). Power develops within relationships as people are mutually dependent; lecturer and student have control over each other, as both are needed to realize educational quality. Interdependencies are at least bipolar, but often multipolar, and may for example also engage higher management or even policy makers. Figurations are in this sense interdependency networks (Elias, 1978). These interdependencies restrict and enable what people can do with student evaluations, given their relative position in the network. A director might have more room than a lecturer to discuss and adjust uses and scope.

To reconstruct the trajectories of shaping and using student evaluations, different sources are combined (Flick, 2004). The analysis starts with interviews with faculty support staff to reconstruct the formal trajectory and map the process, actors, documents, and systems involved. Documents and other sources are then interpreted, before proceeding with interviews with the actors identified. These interviews are first used to understand the actors’ roles and positions within the figuration, as it is not yet clear who is involved in shaping and using the student evaluations, or when and how students and lecturers are engaged. Second, the interviews are used to assess the actors’ views on quality, their uses and valuations of the student evaluations, and their motivations and room to change them.

A previous study addressed the quality views of social science educational directors. These interviews are (with permission) re-interpreted for the uses and adaptations of student evaluations.

The interviews are transcribed verbatim and combined with other sources in a network reconstruction using ATLAS.ti. A language-centered grounded theory approach is used to interpret how the student evaluations are used and adjusted by different actors, what these actors find salient, and how this relates to their views on higher education quality and its measurement (Charmaz, 2014).


Conclusions, Expected Outcomes or Findings
The study is in its initial stage, and the analysis of the different trajectories will be finished before the summer. The preliminary analysis of interviews with educational directors indicates that they do have some room to change the scope of the student evaluations and add domain-specific questions for their programs. Their room to change the uses and purposes of student evaluations is, however, limited by institutional rules, systems, and practices. Most have a limited view of the trajectory of student evaluations within the institution beyond their own institute or program. They are aware of bias and of limitations in measuring educational quality, and some try to increase the evaluations’ validity. There is, however, also reluctance to discuss the social consequences and to change their uses. In line with the ‘broad perspective’ on quality, the student evaluations are enriched and combined with other assessments.

Educational directors in the position of full professor display a broader view and seem to have somewhat more room to adjust the student evaluations than assistant or associate professors or support staff. They also have more responsibilities concerning human resource management and use student evaluations to value academic performance when these are a formal criterion, arguing that they enrich them to broaden their view. While attention is paid to bias, the initial findings suggest that the social consequences of student evaluations play a limited role in how they are used and adjusted. Our further analysis of the trajectories will provide more insight into this.

The preliminary findings that the room for maneuver is limited and that the uses of student evaluations are not contested are consonant with Barrow and Grant (2016) and Pineda and Seidenschnur (2022), who identified a focus on metrification and further disciplinary effects.
 

References
Barrow, M., & Grant, B. M. (2016). Changing mechanisms of governmentality? Academic development in New Zealand and student evaluations of teaching. Higher Education, 72(5), 589–601. https://doi.org/10.1007/s10734-015-9965-8

Charmaz, K. (2014). Constructing grounded theory (2nd ed.). Sage.

Elias, N. (1968). The civilizing process: Sociogenetic and psychogenetic investigations (E. Dunning, J. Goudsblom, & S. Mennell, Eds.; Rev. ed.). Blackwell Publishing Ltd.

Elias, N. (1978). What is sociology? (S. Mennell, G. Morrissey, & R. Bendix, Eds.). Columbia University Press.

Esarey, J., & Valdes, N. (2020). Unbiased, reliable, and valid student evaluations can still be unfair. Assessment and Evaluation in Higher Education, 45(8), 1106–1120. https://doi.org/10.1080/02602938.2020.1724875

Flick, U. (2004). Triangulation in qualitative research. In U. Flick, E. von Kardorff, & I. Steinke (Eds.), A companion to qualitative research (pp. 178–183). Sage Publications Ltd.

Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363

Harvey, L., & Green, D. (1993). Defining Quality. Assessment & Evaluation in Higher Education, 18(1), 9–34. https://doi.org/10.1080/0260293930180102

Heffernan, T. (2022). Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment and Evaluation in Higher Education, 47(1), 144–154. https://doi.org/10.1080/02602938.2021.1888075

Heuts, F., & Mol, A. (2013). What Is a Good Tomato? A Case of Valuing in Practice. Valuation Studies, 1(2), 125–146. https://doi.org/10.3384/vs.2001-5992.1312125

Hornstein, H. A. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Education, 4(1), 1304016. https://doi.org/10.1080/2331186X.2017.1304016

Onderwijsraad. (2016). De volle breedte van onderwijskwaliteit. https://www.onderwijsraad.nl/upload/documents/publicaties/volledig/De-volle-breedte-van-onderwijskwaliteit1.pdf

Pineda, P., & Seidenschnur, T. (2022). Translating student evaluation of teaching: how discourse and cultural environments pressure rationalizing procedures. Studies in Higher Education, 47(7), 1326–1342. https://doi.org/10.1080/03075079.2021.1889491

Tam, M. (2001). Measuring Quality and Performance in Higher Education. Quality in Higher Education, 7(1), 47–54. https://doi.org/10.1080/13538320120045076

Weenink, K., Aarts, N., & Jacobs, S. (2022). ‘We’re stubborn enough to create our own world’: how programme directors frame higher education quality in interdependence. Quality in Higher Education, 28(3), 360–379. https://doi.org/10.1080/13538322.2021.2008290


 