SEEQ-DE
Construction and Validation of a German-Language Adaptation of the Instrument "Student Evaluation of Educational Quality" (SEEQ; Marsh, 1982, 2007)
Abstract
Summary. We report the construction and validation of a German-language adaptation of the "Student Evaluation of Educational Quality" (SEEQ) questionnaire by Marsh (1982, 2007), a comprehensively validated and internationally established instrument for assessing student ratings of teaching quality. The instrument was translated, slightly extended, and tested on a sample of 76,687 student ratings of 3,660 courses. Internal consistencies and intraclass correlations indicated high measurement precision. Factor analyses confirmed the dimensions distinguished in the SEEQ. Besides the product factors (learning gain, overall rating), the same instructional-behavior factors as in the original instrument could be distinguished: enthusiasm, organization and presentation of material, activation of students, social climate, breadth of coverage, grading, and assignments. With the adaptation, student contributions are proposed as an optional factor. The instrument proved measurement-invariant across different course formats. Overall, the results suggest that the German-language adaptation of the SEEQ captures the quality of university teaching with high psychometric quality and in an internationally comparable manner.
Abstract. The Student Evaluation of Educational Quality questionnaire (SEEQ) by Marsh (1982, 2007) was adapted to German and tested using assessments from 76,687 students in 3,660 university courses. Internal consistencies and intraclass correlations indicated high reliability. Two-level CFAs and ESEM analyses confirmed the separability of all original SEEQ dimensions: learning and overall as product factors, and enthusiasm, organization / clarity, group interaction, individual rapport, breadth, examinations / grading, and assignments / readings as factors on the level of instructional behaviors. In this adaptation, we additionally propose student contributions as an optional factor (the extent to which contributions of fellow students are considered helpful, and whether they are effectively managed by the instructor), especially for contexts – such as those found in Germany – where student-directed teaching methods are prevalent. Additionally, we expanded the overall course rating by adding two items using a grade scale. These two adaptations are optional, and the scale worked equally well without them. Measurement invariance across different types of courses was confirmed. Taken together, our findings indicate that the German adaptation of the SEEQ measures teaching quality in accordance with established testing standards.
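The intraclass correlations reported above are conventionally derived from a one-way random-effects ANOVA over ratings nested within courses: ICC(1) describes how much of the variance in a single student's rating is attributable to the course, and ICC(2) describes the reliability of the course-mean rating. As a minimal illustration only (the paper's own analyses were run in Mplus; function and variable names here are ours), the computation can be sketched as:

```python
# Illustrative sketch: ICC(1) and ICC(2) for student ratings nested in courses,
# via one-way random-effects ANOVA. Not the authors' analysis code.
import numpy as np

def icc_oneway(values, groups):
    """Return (ICC1, ICC2) for 1-D ratings `values` grouped by course ids `groups`."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    k = len(labels)                      # number of courses
    grand_mean = values.mean()

    ss_between, ss_within, sizes = 0.0, 0.0, []
    for g in labels:
        v = values[groups == g]
        sizes.append(len(v))
        ss_between += len(v) * (v.mean() - grand_mean) ** 2
        ss_within += ((v - v.mean()) ** 2).sum()

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (len(values) - k)
    n_avg = np.mean(sizes)               # simple average group size (adjusted formulas exist)

    icc1 = (ms_between - ms_within) / (ms_between + (n_avg - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between   # reliability of course means
    return icc1, icc2
```

With strongly diverging course means (e.g., ratings `[1, 2, 3]` in one course and `[4, 5, 6]` in another), both coefficients are high, reflecting that aggregated course means are trustworthy; values near zero would indicate that ratings vary mostly within, not between, courses.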
References
(2007). The dimensionality of student ratings of instruction: What we know and what we do not. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 385 – 456). Dordrecht, Netherlands: Springer.
(2013). Structural equation modeling with Mplus. New York, NY: Routledge.
(2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14, 464 – 504. https://doi.org/10.1080/10705510701301834
(2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233 – 255. https://doi.org/10.1207/S15328007SEM0902_5
(2021). Do teachers’ achievement goals and self-efficacy beliefs matter for students’ learning experiences? Evidence from two studies on perceived teaching quality and emotional experiences. Learning and Instruction. Advance online publication. https://doi.org/10.1016/j.learninstruc.2021.101458
(2005). Für Evaluation und gegen Evaluitis. In B. Berendt, H.-P. Voss & J. Wildt (Hrsg.), Neues Handbuch Hochschullehre. (S. 2 – 22). Berlin: Raabe.
(2011). Counseling university instructors based on student evaluations of their teaching effectiveness. Research in Higher Education, 52, 717 – 737. https://doi.org/10.1007/s11162-011-9214-7
(2008). Onlineevaluation von Lehrveranstaltungen. Zeitschrift für Evaluation, 7, 183 – 211.
(2014). From alpha to omega. British Journal of Psychology, 105, 399 – 412. https://doi.org/10.1111/bjop.12046
(2007). Sensitivity of fit indices to model misspecification and model types. Multivariate Behavioral Research, 42, 509 – 529. https://doi.org/10.1080/00273170701382864
(2017). How reliable are students’ evaluations of teaching quality? Assessment & Evaluation in Higher Education, 42, 1263 – 1279. https://doi.org/10.1080/02602938.2016.1261083
(1976). The superior college teacher from the students’ view. Research in Higher Education, 5, 243 – 288. https://doi.org/10.1007/BF00991967
(1985). Learning theory and research. In J. Smart (Ed.), Higher education (pp. 63 – 96). New York, NY: Agathon.
(2015). Zur Validität von studentischen Lehrveranstaltungsevaluationen. Diagnostica, 61, 124 – 135. https://doi.org/10.1026/0012-1924/a000141
(2014). Essentials of statistics for the behavioral sciences (8th ed.). Belmont, CA: Wadsworth.
(1996). Studentische Evaluation der Lehre. Zeitschrift für Pädagogische Psychologie, 10(3/4), 181 – 186.
(2005). Modelling non‐ignorable missing‐data mechanisms with item response theory models. British Journal of Mathematical and Statistical Psychology, 58 (1), 1 – 17. https://doi.org/10.1348/000711005x47168
(2020). Open Access Evaluation: Lehr-Evaluation Online (LEO) als Instrument zur studentischen Lehrveranstaltungsevaluation. Qualität in der Wissenschaft, 14 (4), 120 – 125.
(2015). Principles and practice of structural equation modeling. New York: Guilford.
(2009). Assessing the impact of learning environments: How to use student ratings of classroom or school characteristics in multilevel modeling. Contemporary Educational Psychology, 34 (2), 120 – 131. https://doi.org/10.1016/j.cedpsych.2008.12.001
(1982). SEEQ: A reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. British Journal of Educational Psychology, 52 (1), 77 – 95. https://doi.org/10.1111/j.2044-8279.1982.tb02505.x
(2007). Students’ evaluations of university teaching. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education (pp. 319 – 383). Dordrecht, Netherlands: Springer.
(1987). Students’ evaluations of university teaching. International Journal of Educational Research, 11, 253 – 388. https://doi.org/10.1016/0883-0355(87)90001-2
(1998). Confirmatory factor analyses of Chinese students’ evaluations of university teaching. Structural Equation Modeling, 5, 143 – 164. https://doi.org/10.1080/10705519809540097
(1991). Students’ evaluations of teaching effectiveness. Teaching and Teacher Education, 7, 303 – 314. https://doi.org/10.1016/0742-051X(91)90001-6
(2009). Exploratory structural equation modeling. Structural Equation Modeling, 16, 439 – 476. https://doi.org/10.1080/10705510903008220
(1979). Class size, students’ evaluations, and instructional effectiveness. American Educational Research Journal, 16 (1), 57 – 70. https://doi.org/10.3102/00028312016001057
(1997). Making students’ evaluations of teaching effectiveness effective. American Psychologist, 52, 1187 – 1197. https://doi.org/10.1037/0003-066X.52.11.1187
(2017). Mplus User’s guide. Los Angeles, CA: Muthén & Muthén.
(Eds.). (2007). The scholarship of teaching and learning in higher education. Dordrecht, Netherlands: Springer.
(2004). Missing data in educational research. Review of Educational Research, 74, 525 – 556. https://doi.org/10.3102/00346543074004525
(2009). Lehrevaluation (2. Aufl.). Landau: VEP.
(2003). Evaluating the fit of structural equation models. Methods of Psychological Research Online, 8, 23 – 74.
(2016). Überprüfung und Anwendung von Multilevel-Messmodellen für Fragebögen zur Lehrveranstaltungsevaluation (Doktorarbeit). Friedrich-Schiller-Universität Jena, Jena.
(2015). Multilevel Faktorenanalyse für Fragebögen zur Lehrveranstaltungsevaluation. Diagnostica, 61, 116 – 123. https://doi.org/10.1026/0012-1924/a000140
(2020). Multilevel analysis (2nd ed.). London, UK: Sage.
(2018). Empfehlungen zur Qualitätssicherung in Studium und Lehre. Psychologische Rundschau, 69, 183 – 192. https://doi.org/10.1026/0033-3042/a000408
(2011). Evaluation von Hochschullehre. In L. Hornke & M. Amelang (Hrsg.), Enzyklopädie der Psychologie (S. 617 – 667). Göttingen: Hogrefe.
(2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83, 598 – 642. https://doi.org/10.3102/0034654313496870
(2016). Prädiktoren studentischer Lehrveranstaltungsevaluationen. Diagnostica, 62, 44 – 59. https://doi.org/10.1026/0012-1924/a000142
(2012). Measuring teaching effectiveness: Correspondence between students’ evaluations of teaching and different measures of student learning. Research in Higher Education, 53, 888 – 904. https://doi.org/10.1007/s11162-012-9260-9
(2011). Zur Validität studentischer Lehrbeurteilungen. In M. Krämer, S. Preiser & K. Brusdeylins (Hrsg.), Psychologiedidaktik und Evaluation VII (S. 347 – 356). Aachen: Shaker.
(2005). A multilevel factor analysis of students’ evaluations of teaching. Educational and Psychological Measurement, 65, 272 – 296. https://doi.org/10.1177/0013164404268667
(2015). Ergebnisdarstellung in der Lehrveranstaltungsevaluation. Diagnostica, 61, 153 – 162. https://doi.org/10.1026/0012-1924/a000128
(1994). Student evaluations of university teaching: A cross-cultural perspective. Research in Higher Education, 35, 251 – 266. https://doi.org/10.1007/BF02496704
(