Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

 
 
Session Overview
Session
22 SES 04 C
Time:
Wednesday, 23/Aug/2023:
9:00am - 10:30am

Session Chair: Jani Ursin
Location: Adam Smith, 717 [Floor 7]

Capacity: 35 persons

Paper Session

Presentations
22. Research in Higher Education
Paper

What Makes Student Evaluations of Teaching an Accurate Measure of Teaching Effectiveness in Higher Education? Meta-analytic Evidence

Daniel-Emil Iancu, Marian Ilie, Laurențiu Paul Maricuțoiu

West University of Timișoara, Romania

Presenting Author: Iancu, Daniel-Emil

Student Evaluation of Teaching (SET) is the procedure by which students evaluate and rate teaching performance. Usually, during a SET procedure, students complete rating forms or questionnaires about different aspects of their teachers' work, mostly their teaching practices. Universities and other higher education institutions all over the world implement SET procedures to serve three main purposes. From a practical point of view, the main purpose in most universities is the need to report SET results to quality assurance agencies. The second goal, and surely the most important from an educational perspective, is to provide feedback to academics about their teaching practices and/or to design teacher training programs focused on developing teaching skills. A third important use of SET results is to provide evidence of teaching performance for academic career advancement or other ways of rewarding teaching effectiveness.

The topic of Student Evaluations of Teaching is one of the most researched in educational research, with over 2000 studies published in peer-reviewed journals over a period of a little more than 100 years (Spooren et al., 2017). One of the earliest debates in this field concerns the validity of SET scales and procedures: can the measurement instruments applied to students during these procedures accurately measure teaching effectiveness? Although this debate was most active in the 1970s and the evidence leaned towards an affirmative answer (see the reviews by Richardson, 2005, and Marsh, 2007), a recently published meta-analysis (Uttl et al., 2017) presented evidence that seriously threatens the validity of SET results. Its results strongly suggest that there is no relationship between a teacher's SET results and the level of their students' achievement/learning.

The existence of this relationship is vital to the SET validity debate starting from the premise that if SET results accurately reflect teaching effectiveness, then teachers identified as more effective should facilitate a higher level of learning and achievement among their students. Put simply, good teachers can help their students learn more and if SET results are valid, they should correlate with student achievement.

At the same time, several SET scales were rigorously developed from a theoretical and psychometric point of view (e.g., SEEQ, CEQ, ETCQ), and there is substantial evidence that these specific scales can accurately measure teaching and support the development of teaching skills (Marsh, 2007; Richardson, 2005).

Starting from these observations, and referring to the meta-analytic results presented above, the main question that arises is whether the relationship between SET results and student learning is stronger when the SET scale used is more rigorously developed and validated.

Thus, the research questions that guide the present study are the following:

1. What is the average effect-size of the relationship between SET results and student achievement, in all the multi-section SET studies published to date?

2. Is the average effect-size of the relationship between SET results and student achievement different as a function of the SET measure validity evidence?


Methodology, Methods, Research Instruments or Sources Used
To be included in the present meta-analysis a study had to pass the following inclusion criteria:
(1) The study had to present correlational results between SET results and student achievement in higher education.
(2) The study had to examine the relationship between SET results and student achievement in multiple sections of the same discipline.
(3) Students from every section should complete the same SET and achievement measures.
(4) The achievement results had to be collected through objective measures which focus on real learning, rather than students’ perception of it.
(5) The correlation between SET results and student achievement had to be estimated using data averaged at the section level instead of the students’ level.
 
The literature search was conducted by means of three procedures. First, we analyzed the reference lists of previous meta-analyses in the field. Second, we examined all the articles citing Uttl et al. (2017). Finally, we searched the following databases with a search algorithm: Academic Search Complete, Scopus, PsycINFO, and ERIC. After analyzing abstracts and reading the full text of the promising studies, we identified 43 studies that met the inclusion criteria described above and extracted their data.

From each study, we extracted statistical information on the correlation indices, the number of sections included, and the total number of students in the research sample. We also extracted information about the following characteristics of the examined studies: psychometric properties of the SET measure, the specific items of the SET measure, the type of achievement measure, and adjustment for prior achievement (where applicable).

To examine and code the degree of available evidence for the reliability and validity of the SET measures used for gathering student responses, we adapted a framework of psychometric evaluation criteria proposed by Hunsley and Mash (2008). In adapting this framework, we also considered the recommendations advanced by Spooren et al. (2013) in their SET validity review, by the Onwuegbuzie et al. (2009) meta-validation model for assessing the score validity of SETs, and by the AERA, APA, and NCME (2014) joint standards for educational and psychological testing. The final criteria against which each SET measure was evaluated and coded from a psychometric perspective are the following: (1) internal consistency; (2) inter-rater reliability; (3) test-retest reliability; (4) structural validity; and (5) relations with other variables of interest (convergent and/or predictive validity).

Conclusions, Expected Outcomes or Findings
Our results regarding the overall effect-size indicate a marginally statistically significant relationship between SET ratings and student achievement (r = .187, Z = 5.827, p = .058, k = 87) across all 87 effects presented in the 43 examined studies.

Results obtained by analyzing the above-mentioned three groups of studies indicate that the degree of reliability and structural validity of the SET measures is not statistically significantly related to the effect sizes reported in those studies (Q(2) = 3.960, p = .138). However, we observe a tendency towards higher effect sizes when SET measures have more evidence for reliability and validity, starting from no or poor evidence (r = .137, 95% CI [.053, .220]), increasing with some evidence (r = .215, 95% CI [.111, .315]), and increasing further when adequate or good evidence is presented (r = .324, 95% CI [.145, .483]). This suggests that, thus far, the degree of reliability and structural validity of SET measures does not statistically moderate the overall effect size between SET ratings and student achievement.
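For readers unfamiliar with how such pooled correlations are obtained, the following is a minimal sketch of fixed-effect pooling via Fisher's r-to-z transform; the input values are hypothetical, not the study's data, and a meta-analysis of this kind would typically fit a random-effects model in dedicated software (e.g. the R package metafor).

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation via Fisher's r-to-z transform.

    Each correlation r_i becomes z_i = atanh(r_i); the z values are
    averaged with inverse-variance weights (n_i - 3) and the weighted
    mean is transformed back to the r metric with tanh.
    """
    zs = [math.atanh(r) for r in rs]   # r -> z
    ws = [n - 3 for n in ns]           # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)            # z -> r

# Hypothetical per-study correlations and section counts.
print(round(pooled_correlation([0.14, 0.22, 0.32], [40, 35, 20]), 3))  # 0.206
```

The n - 3 weights reflect the sampling variance of the z-transformed correlation, 1/(n - 3), so larger studies pull the pooled estimate towards their own effect.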

The presented findings suggest a tendency towards higher associations between SET ratings and student achievement when SET measures have stronger validity evidence. The lack of statistical significance could come from the relatively small number (k = 11) of effects for which we found adequate or good evidence of reliability and structural validity. On closer inspection, we also found that the heterogeneity of effects is relatively similar inside each group based on the level of available evidence for the SET scales: there are both small and large correlations between SET ratings and student achievement inside each group, which suggests that this relation could be a function of something other than the presented evidence for the SET scale.

References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Hunsley, J., & Mash, E. J. (2008). Developing criteria for evidence-based assessment: An introduction to assessments that work. A guide to assessments that work, 2008, 3-14.

Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In P. R. Pintrich & A. Zusho (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319-383). Springer, Dordrecht.

Onwuegbuzie, A. J., Daniel, L. G., & Collins, K. M. (2009). A meta-validation model for assessing the score-validity of student teaching evaluations. Quality & Quantity, 43(2), 197-209.

Richardson, J. T. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415.

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598-642.

Spooren, P., Vandermoere, F., Vanderstraeten, R., & Pepermans, K. (2017). Exploring high impact scholarship in research on student's evaluation of teaching (SET). Educational Research Review, 22, 129-141.

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.


22. Research in Higher Education
Paper

Exploring Staff Attitudes Towards Academic Integrity in Irish Higher Education

Andrew Gibson, Angeliki Lima, John Walsh

Trinity College Dublin, Ireland

Presenting Author: Gibson, Andrew

Academic integrity in higher education has been an increasing focus in recent years (Macfarlane et al. 2014), and this interest intensified with the pivot to emergency remote teaching and assessment during COVID-19 (Holden et al. 2020; Eaton et al. 2021). For those working and studying in Irish higher education, a perception of a rapidly expanding range and types of academic misconduct has prompted renewed institutional and sectoral efforts to safeguard academic integrity (e.g. QQI 2018a, 2018b; NAIN 2021a, 2021b). The project informing this paper explores the costs and benefits of using the ICAI-McCabe surveys, which are used in other countries to research students’ and academic staff’s behaviours, attitudes, and beliefs about academic integrity.

Central to dealing with the question of academic integrity, given the implications it can have on students and academic staff alike, is to initially explore what those working and studying in Irish higher education know and believe about academic integrity. As such, this talk will evaluate the challenges, costs, and benefits of adapting the ICAI-McCabe surveys of faculty and student attitudes and beliefs about academic integrity for use within the HEI sector in Ireland. The goal here is to explore how ‘academic integrity’ is understood in the Irish context specifically, and how these understandings may differ from the institutional and national settings in which these ICAI-McCabe surveys were developed and are currently in use.

The proposed approach to a cost-benefit analysis of using AIS in Ireland is informed by a virtue ethics philosophical perspective, extending the question of academic integrity beyond institutional and/or procedural viewpoints (Williams 2010). Such a perspective aligns with the ‘values-based’ approach to academic integrity, and draws on a body of work within academic integrity research that focuses on ‘virtue ethics’ approaches to what is consequently termed ‘educational integrity’ (Bretag 2016).

This open perspective avoids reducing ‘costs’ and ‘benefits’ to quantitative or monetary notions, and instead explores beliefs, attitudes and values. This innovative approach addresses the range of cultural and philosophical underpinnings of academic integrity in what would be a new national setting for the AIS, with a view to the diversity of the Irish higher education context. Thus, we propose a mixed-method approach with a significant qualitative component and a wider sampling of the responsible actors, beyond those charged with institutional responsibility for quality assurance.


Methodology, Methods, Research Instruments or Sources Used
This project has a three-part, linked structure through which data is being generated: (i) a literature review; (ii) qualitative, semi-structured interviews; and (iii) focus groups. Full ethical approval in line with institutional requirements is being obtained for all stages.

A broad and purposive sample of staff and students across Irish higher education will be taken for the interviews and focus groups. Interviews and focus groups can take place primarily online, but it would be worthwhile to undertake some in person. For the institutional dimension, representativeness is proposed: two universities, two technological universities, one institute of technology, one private (i.e. non-HEA funded) institution, and one college of education (seven HEIs in total). For each of these institutions, we propose interviewing relevant actors (senior academic officer, heads of school/faculty, and programme coordinators), also aiming for representativeness in the diversity of disciplinary backgrounds.

For the national dimension, we propose interviewing actors from across Irish higher education, both state and non-governmental. The focus groups with institutions and student representatives will engage informed participants to discuss both of the McCabe-ICAI Academic Integrity Surveys.

The analysis will require interviews to be transcribed and notes to be taken at focus groups. Qualitative data analysis software (e.g. NVivo) will serve as both repository and analysis tool. The analytic framework will be informed by the intentionally open approach taken to the topic, aimed at eliciting a diversity of understandings. As such, analysis will be informed by phenomenographic and grounded theory approaches, rather than simply ‘identifying themes’ (Larsson & Holmström 2007; Charmaz 2006; Bazeley 2009).

Conclusions, Expected Outcomes or Findings
An initial projected outcome is a clearer sense of the feasibility of using the McCabe-ICAI Academic Integrity Surveys in the Irish HEI sector, together with the benefits and costs of introducing and/or adapting the surveys. An important contribution will be to the professional development of participants in the subsequent discussion, through developing shared understandings of academic integrity in Irish higher education. The project will do so by critically exploring how to determine and assess attitudes and beliefs towards academic integrity among HE staff and students in Ireland. One projected outcome is that these views can then inform institutional and national policy development. The project will also produce a credible, robust, open-access evidence base to facilitate and inform future research initiatives in academic integrity in Irish HE, and to support the further development of the community of practice around academic integrity in Ireland and in other countries considering their own approaches to academic integrity today.
References
Bazeley, P. (2009). Analysing Qualitative Data: More than Identifying Themes. Malaysian Journal of Qualitative Research, 2(9), 6–22.
Bretag, T. (2016). Educational Integrity in Australia. In Handbook of Academic Integrity, ed. T. Bretag. Springer, 23-38.
Charmaz, K. (2006). Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. SAGE Publications.
Eaton, A.E., & Turner, K.L. (2020). Exploring academic integrity and mental health during COVID-19: Rapid review. Journal of Contemporary Education Theory & Research, 4(1): 35-41.
Holden, O.L., Norris, M.E., & Kuhlmeier, V.A. (2021). Academic Integrity in Online Assessment: A Research Review. Frontiers in Education, 6(39814).
Larsson, J., & Holmström, I. (2007). Phenomenographic or phenomenological analysis: Does it matter? Examples from a study on anaesthesiologists work. International Journal of Qualitative Studies on Health and Well-Being, 2(1), 55–64.
Macfarlane, B., Zhang, J., Pun, A. (2014). Academic integrity: a review of the literature. Studies in Higher Education, 39(2): 339-358.
Williams, B. (2010). Ethics and the Limits of Philosophy. Routledge.


22. Research in Higher Education
Paper

Variables Determining Motivation For Success In University Students In Education

Jesús García-Álvarez, Mar Lorenzo Moledo, Ana Vázquez-Rodríguez

University of Santiago de Compostela, Spain

Presenting Author: Lorenzo Moledo, Mar

One of the main challenges faced by university graduates is achieving early labor market insertion adjusted to the professional profile for which they have been trained. Not achieving this objective causes them to have significantly low expectations for the future (Gómez Acuñas et al., 2009). In this process, several factors come into play, such as the development of training and labor market insertion actions (professional orientation), the possibilities of the labor market in terms of accessing and maintaining a job (employability factors), and the promotion of transversal competencies linked to greater employability, such as motivation for success (García-Álvarez et al., 2023).

This requires an assessment of the conditions of the labor market, since important changes have gradually been introduced in the logic of access to the world of employment. In a changing and competitive scenario such as the current one, companies require people not only capable of intervening at a technical level, but also with skills that allow them to relate to the environment, work in teams, adapt to change, self-motivate and be proactive (Riaño, 2012). In other words, they are expected to have certain skills that help them to define and manage a professional project in the short and medium term, for which they must set realistic academic goals, be perseverant, or develop tolerance to stress, issues linked to adequate achievement motivation (Preckel and Brunner, 2015).

Motivation has a direct impact on students' education, constituting one of the most valuable competencies for achieving effective academic and professional advancement (Polanco, 2005). Traditionally, students' effort has been linked to overcoming certain performance situations (Eison, 1979), as well as to the establishment of educational standards oriented towards deep learning strategies that allow a remarkable mastery of skills for personal autonomy and adaptation to the productive environment (Cervantes et al., 2018; Kadioglu & Uzuntiryaki-Kondacki, 2014).

Accordingly, most of the studies developed have focused on the perspective of results, analyzing the influence of motivation on academic performance while taking into account variables such as learning situations, the type of content, or the characteristics of the assessment (Cox, 2012). By contrast, we found no references investigating the factors that determine students' degree of motivation, such as previous education, social capital, personal characteristics or the socio-economic context.

Considering this last approach, and given the existing differences between degrees (training fields, professional opportunities, insertion logics, access channels, etc.), the aim of the work we present here is to analyze the factors that may influence the perception of a greater or lesser motivation for success in university students of education, selecting academic, personal and work-related variables.


Methodology, Methods, Research Instruments or Sources Used
By means of a quantitative methodology, we analyzed the effect of certain variables on the motivation for success in university students of Education. Specifically, the information was collected through a questionnaire administered at the Faculty of Education Sciences of the University of Santiago de Compostela (Spain), completed by a total of 259 undergraduates. Of these, 87.4% were women and 12.6% were men. Most (81.9%) had no previous training and, at the time of completing the survey, 17.4% were in active employment in addition to studying. Regarding the motivation for their choice of career, about half of the respondents (47.1%) stated that they had made a vocational choice.
For data analysis, we used the Mann-Whitney U test to study the existence of significant differences in students' achievement motivation according to the following variables: gender, vocation, previous education, average academic record, and employment status. To analyze the influence of parents' educational level on student motivation, the Kruskal-Wallis test was applied; in this case, post-hoc comparisons were performed using the Dunn-Bonferroni method. The significance level chosen for all tests was α = .05.
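As a concrete illustration of the rank-based tests named above, the sketch below computes a Mann-Whitney U statistic in plain Python; the scores are invented for the example, and in practice library routines such as scipy.stats.mannwhitneyu and scipy.stats.kruskal would be used, since they also return p-values.

```python
def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """U statistic for two independent samples (smaller of U1 and U2)."""
    r = average_ranks(list(a) + list(b))
    r1 = sum(r[: len(a)])                    # rank sum of group a
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

# Hypothetical motivation scores for two groups of students.
print(mann_whitney_u([3, 4, 2, 5], [6, 7, 5, 8]))  # 0.5
```

The Kruskal-Wallis H statistic used for parental education is built from the same average ranks, comparing rank sums across three or more groups.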

Conclusions, Expected Outcomes or Findings
The analysis of the data shows significant differences by gender (U = 2913.5, p = .048), with the female group reporting higher motivation. Having completed previous studies is also a conditioning factor, with differences compared to those without previous degrees (U = 7177, p < .001). In addition, greater motivation is perceived in students who chose their studies vocationally (U = 7113, p = .041).
Regarding social capital, differences are observed by the educational level of both fathers (H = 16.11, p < .001) and mothers (H = 12.21, p = .002). Moreover, the pairwise comparisons show strong significance for fathers (p = .03) and mothers (p < .001) with university studies, relative to those who did not reach this educational level. The same occurs when the completed studies are primary or secondary, compared to fathers (p < .001) and mothers (p < .001) with no studies. However, no differences in motivation are observed when we compare parents with primary or secondary studies and those with university studies.
Finally, no significant differences in motivation were found in relation to variables such as academic record or employment status. Thus, we identify a student profile whose self-perceived achievement motivation varies according to variables such as gender, previous education, vocation, or social capital.

References
Cervantes, D. I., Valadez, M. D., Valdés, A. A., & Tánori, J. (2018). Differences in academic self-efficacy, psychological wellbeing and achievement drive in university students with high and low academic performance. Psicología desde el Caribe, 35(1), 7-17. https://doi.org/10.14482/psdc.35.1.11154
Cox, B. (2012). College Students, Motivation, and Success. International Journal of Learning & Development, 2(3), 139-143. https://doi.org/10.5296/ijld.v2i3.1818
Eison, J. A. (1979). The development and validation of a scale to assess different student orientations towards grades and learning. University of Tennessee.
García-Álvarez, J., Vázquez-Rodríguez, A., Quiroga-Carrillo, A., & Priegue, D. (2022). Transversal Competencies for Employability in University Graduates: A Systematic Review from the Employers’ Perspective. Education Sciences, 12(3), 1-36. https://doi.org/10.3390/educsci12030204
Gómez Acuñas, M., Pérez-Vacas, C., & Sánchez Herrera, S. (2009). Percepción del mercado laboral de jóvenes estudiantes universitarios: una aproximación cualitativa. International Journal of Developmental and Educational Psychology, 1(9), 221-230.
Kadioglu, C., & Uzuntiryaki-Kondacki, E. (2014). Relationship between learning strategies and goal orientation: A multinivel analysis. Eurasian Journal of Educational Research, 56, 1-22. https://doi.org/10.14689/ejer.2014.56.4
Polanco, A. (2005). La motivación en los estudiantes universitarios. Actualidades Investigativas en Educación, 5(2), 1-13. https://doi.org/10.15517/aie.v5i2.9157
Preckel, F., & Brunner, M. (2015). Academic self-concept, achievement goals, and achievement: Is their relation the same for academic achievers and underachievers? Gifted and Talented International, 30(1), 68-84. https://doi.org/10.1080/15332276.2015.1137458
Riaño, J. (2012). Mercado laboral y formación continua universitaria. Universidad de Deusto.


 
Conference: ECER 2023