Original Article

SEEQ-DE

Konstruktion und Überprüfung einer deutschsprachigen Adaption des Instruments „Student Evaluation of Educational Quality“ (SEEQ; Marsh, 1982, 2007)

Published Online: https://doi.org/10.1026/0012-1924/a000274

Zusammenfassung. We report the construction and validation of a German-language adaptation of the Student Evaluation of Educational Quality (SEEQ) questionnaire by Marsh (1982, 2007), an extensively validated and internationally established instrument for assessing student ratings of teaching quality. The questionnaire was translated, slightly extended, and tested on a sample of 76,687 student ratings of 3,660 university courses. Internal consistencies and intraclass correlations indicated high measurement precision. Factor analyses confirmed the dimensions distinguished in the SEEQ. In addition to the product factors (learning gains, overall rating), the instructional-behavior factors enthusiasm, organization and presentation of the material, activation of students, social climate, breadth of coverage, examinations and grading, and assignments could be distinguished as in the original instrument. The adaptation additionally proposes student contributions as an optional factor. The instrument proved measurement invariant across different course formats. Overall, the results suggest that the German-language adaptation of the SEEQ captures the quality of university teaching with high psychometric quality and in an internationally comparable way.
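For readers who want the reliability indices made explicit: in the usual multilevel notation (see, e.g., Lüdtke, Robitzsch, Trautwein & Kunter, 2009; Snijders & Bosker, 2020), with τ² denoting the between-course variance component, σ² the within-course (student-level) residual variance, and k the average number of raters per course, the two intraclass correlations are defined as

    \[
      \mathrm{ICC}(1) = \frac{\tau^{2}}{\tau^{2} + \sigma^{2}},
      \qquad
      \mathrm{ICC}(2) = \frac{\tau^{2}}{\tau^{2} + \sigma^{2} / k}
    \]

ICC(1) quantifies how strongly an individual student's rating depends on the course attended, whereas ICC(2) is the reliability of the aggregated course-mean rating. These are the generic textbook definitions, not values reported for the SEEQ-DE data.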


Construction and Confirmation of a German Adaption of the Student Evaluation of Educational Quality Questionnaire (SEEQ)

Abstract. The Student Evaluation of Educational Quality Questionnaire (SEEQ) by Marsh (1982, 2007) was adapted into German and tested using assessments from 76,687 students in 3,660 university courses. Internal consistencies and intraclass correlations indicated high reliability. Two-level confirmatory factor analyses (CFAs) and exploratory structural equation modeling (ESEM) analyses confirmed the separability of all original SEEQ dimensions: learning and overall rating as product factors, and enthusiasm, organization / clarity, group interaction, individual rapport, breadth, examinations / grading, and assignments / readings as factors at the level of instructional behaviors. In this adaption, we additionally proposed student contributions as an optional factor (the extent to which contributions of fellow students are considered helpful, and whether they are effectively controlled by the instructor), especially for contexts – such as those found in Germany – where student-directed teaching methods are prevalent. We also expanded the overall course rating by adding two items that use a grade scale. These two additions are optional, and the instrument worked equally well without them. We confirmed measurement invariance across different types of courses. Taken together, our findings indicate that the German adaption of the SEEQ measures teaching quality in accordance with established testing standards.
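The multilevel factor models and invariance tests reported above require dedicated SEM software (the references point to Mplus; Muthén & Muthén, 2017). As a purely illustrative sketch rather than the authors' analysis pipeline, the following Python snippet shows how ICC(1) and ICC(2) could be estimated for a single SEEQ scale from long-format rating data; the file name, the column names, and the assumption of roughly balanced course sizes are hypothetical.

    # Illustrative sketch only: estimating ICC(1) and ICC(2) for one SEEQ scale
    # from long-format evaluation data (one row per student rating).
    # File and column names are hypothetical; approximately balanced course
    # sizes are assumed for the simple one-way ANOVA estimators below.
    import pandas as pd

    ratings = pd.read_csv("seeq_ratings.csv")        # columns: course_id, enthusiasm
    grouped = ratings.groupby("course_id")["enthusiasm"]

    k = grouped.size().mean()                        # average number of raters per course
    ms_between = grouped.mean().var(ddof=1) * k      # between-course mean square
    ms_within = grouped.var(ddof=1).mean()           # pooled within-course mean square

    # One-way random-effects ANOVA estimators (cf. Snijders & Bosker, 2020)
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between     # reliability of the course-mean rating

    print(f"ICC(1) = {icc1:.3f}, ICC(2) = {icc2:.3f}, k = {k:.1f}")

Omega-type internal consistencies and the two-level CFA / ESEM models themselves, as well as the ΔCFI criteria used for invariance testing (Cheung & Rensvold, 2002; Chen, 2007), are not covered by this sketch.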

References

  • Abrami, P. C., d’Apollonia, S. & Rosenfield, S. (2007). The dimensionality of student ratings of instruction: What we know and what we do not. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 385 – 456). Dordrecht, Netherlands: Springer.

  • Byrne, B. (2013). Structural equation modeling with Mplus. New York, NY: Routledge.

  • Chen, F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14, 464 – 504. https://doi.org/10.1080/10705510701301834

  • Cheung, G. & Rensvold, R. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233 – 255. https://doi.org/10.1207/S15328007SEM0902_5

  • Daumiller, M., Janke, S., Hein, J., Rinas, R., Dickhäuser, O. & Dresel, M. (2021). Do teachers’ achievement goals and self-efficacy beliefs matter for students’ learning experiences? Evidence from two studies on perceived teaching quality and emotional experiences. Learning and Instruction. [Advance online publication] https://doi.org/10.1016/j.learninstruc.2021.101458

  • Döring, N. (2005). Für Evaluation und gegen Evaluitis. In B. Berendt, H.-P. Voss & J. Wildt (Hrsg.), Neues Handbuch Hochschullehre (S. 2 – 22). Berlin: Raabe.

  • Dresel, M. & Rindermann, H. (2011). Counseling university instructors based on student evaluations of their teaching effectiveness. Research in Higher Education, 52, 717 – 737. https://doi.org/10.1007/s11162-011-9214-7

  • Dresel, M. & Tinsner, K. (2008). Onlineevaluation von Lehrveranstaltungen. Zeitschrift für Evaluation, 7, 183 – 211.

  • Dunn, T., Baguley, T. & Brunsden, V. (2014). From alpha to omega. British Journal of Psychology, 105, 399 – 412. https://doi.org/10.1111/bjop.12046

  • Fan, X. & Sivo, S. (2007). Sensitivity of fit indices to model misspecification and model types. Multivariate Behavioral Research, 42, 509 – 529. https://doi.org/10.1080/00273170701382864

  • Feistauer, D. & Richter, T. (2017). How reliable are students’ evaluations of teaching quality? Assessment & Evaluation in Higher Education, 42, 1263 – 1279. https://doi.org/10.1080/02602938.2016.1261083

  • Feldman, K. (1976). The superior college teacher from the students’ view. Research in Higher Education, 5, 243 – 288. https://doi.org/10.1007/BF00991967

  • Fincher, C. (1985). Learning theory and research. In J. Smart (Ed.), Higher education (pp. 63 – 96). New York, NY: Agathon.

  • Fondel, E., Lischetzke, T., Weis, S. & Gollwitzer, M. (2015). Zur Validität von studentischen Lehrveranstaltungsevaluationen. Diagnostica, 61, 124 – 135. https://doi.org/10.1026/0012-1924/a000141

  • Gravetter, F. & Wallnau, L. (2014). Essentials of statistics for the behavioral sciences (8th ed.). Belmont, CA: Wadsworth.

  • Helmke, A. (1996). Studentische Evaluation der Lehre. Zeitschrift für Pädagogische Psychologie, 10(3/4), 181 – 186.

  • Holman, R. & Glas, C. A. (2005). Modelling non-ignorable missing-data mechanisms with item response theory models. British Journal of Mathematical and Statistical Psychology, 58 (1), 1 – 17. https://doi.org/10.1348/000711005x47168

  • Janke, S. et al. (2020). Open Access Evaluation: Lehr-Evaluation Online (LEO) als Instrument zur studentischen Lehrveranstaltungsevaluation. Qualität in der Wissenschaft, 14 (4), 120 – 125.

  • Kline, R. (2015). Principles and practice of structural equation modeling. New York, NY: Guilford.

  • Lüdtke, O., Robitzsch, A., Trautwein, U. & Kunter, M. (2009). Assessing the impact of learning environments: How to use student ratings of classroom or school characteristics in multilevel modeling. Contemporary Educational Psychology, 34 (2), 120 – 131. https://doi.org/10.1016/j.cedpsych.2008.12.001

  • Marsh, H. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. British Journal of Educational Psychology, 52 (1), 77 – 95. https://doi.org/10.1111/j.2044-8279.1982.tb02505.x

  • Marsh, H. (2007). Students’ evaluations of university teaching. In R. Perry & J. Smart (Eds.), The scholarship of teaching and learning in higher education (pp. 319 – 383). Dordrecht, Netherlands: Springer.

  • Marsh, H. & Dunkin, M. (1992). Students’ evaluations of university teaching. International Journal of Educational Research, 11, 253 – 388. https://doi.org/10.1016/0883-0355(87)90001-2

  • Marsh, H., Hau, K., Chung, C. & Siu, T. (1998). Confirmatory factor analyses of Chinese students’ evaluations of university teaching. Structural Equation Modeling, 5, 143 – 164. https://doi.org/10.1080/10705519809540097

  • Marsh, H. & Hocevar, D. (1991). Students’ evaluations of teaching effectiveness. Teaching and Teacher Education, 7, 303 – 314. https://doi.org/10.1016/0742-051X(91)90001-6

  • Marsh, H., Muthén, B., Asparouhov, T., Lüdtke, O., Robitzsch, A., Morin, A. et al. (2009). Exploratory structural equation modeling. Structural Equation Modeling, 16, 439 – 476. https://doi.org/10.1080/10705510903008220

  • Marsh, H., Overall, J. & Kesler, S. (1979). Class size, students’ evaluations, and instructional effectiveness. American Educational Research Journal, 16 (1), 57 – 70. https://doi.org/10.3102/00028312016001057

  • Marsh, H. & Roche, L. (1997). Making students’ evaluations of teaching effectiveness effective. American Psychologist, 52, 1187 – 1197. https://doi.org/10.1037/0003-066X.52.11.1187

  • Muthén, L. & Muthén, B. (2017). Mplus user’s guide. Los Angeles, CA: Muthén & Muthén.

  • Perry, R. & Smart, J. (Eds.). (2007). The scholarship of teaching and learning in higher education. Dordrecht, Netherlands: Springer.

  • Peugh, J. & Enders, C. (2004). Missing data in educational research. Review of Educational Research, 74, 525 – 556. https://doi.org/10.3102/00346543074004525

  • Rindermann, H. (2009). Lehrevaluation (2. Aufl.). Landau: VEP.

  • Schermelleh-Engel, K., Moosbrugger, H. & Müller, H. (2003). Evaluating the fit of structural equation models. Methods of Psychological Research Online, 8, 23 – 74.

  • Sengewald, E. (2016). Überprüfung und Anwendung von Multilevel-Messmodellen für Fragebögen zur Lehrveranstaltungsevaluation (Doktorarbeit). Friedrich-Schiller-Universität Jena, Jena.

  • Sengewald, E. & Vetterlein, A. (2015). Multilevel Faktorenanalyse für Fragebögen zur Lehrveranstaltungsevaluation. Diagnostica, 61, 116 – 123. https://doi.org/10.1026/0012-1924/a000140

  • Snijders, T. A. B. & Bosker, R. J. (2020). Multilevel analysis (2nd ed.). London, UK: Sage.

  • Spinath, B., Antoni, C., Bühner, M., Elsner, B., Erdfelder, E., Fydrich, T. et al. (2018). Empfehlungen zur Qualitätssicherung in Studium und Lehre. Psychologische Rundschau, 69, 183 – 192. https://doi.org/10.1026/0033-3042/a000408

  • Spinath, B. & Stehle, S. (2011). Evaluation von Hochschullehre. In L. Hornke & M. Amelang (Hrsg.), Enzyklopädie der Psychologie (S. 617 – 667). Göttingen: Hogrefe.

  • Spooren, P., Brockx, B. & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83, 598 – 642. https://doi.org/10.3102/0034654313496870

  • Staufenbiel, T., Seppelfricke, T. & Rickers, J. (2016). Prädiktoren studentischer Lehrveranstaltungsevaluationen. Diagnostica, 62, 44 – 59. https://doi.org/10.1026/0012-1924/a000142

  • Stehle, S., Spinath, B. & Kadmon, M. (2012). Measuring teaching effectiveness: Correspondence between students’ evaluations of teaching and different measures of student learning. Research in Higher Education, 53, 888 – 904. https://doi.org/10.1007/s11162-012-9260-9

  • Stehle, S. & Spinath, B. (2011). Zur Validität studentischer Lehrbeurteilungen. In M. Krämer, S. Preiser & K. Brusdeylins (Hrsg.), Psychologiedidaktik und Evaluation VII (S. 347 – 356). Aachen: Shaker.

  • Toland, M. & De Ayala, R. (2005). A multilevel factor analysis of students’ evaluations of teaching. Educational and Psychological Measurement, 65, 272 – 296. https://doi.org/10.1177/0013164404268667

  • Vetterlein, A. & Sengewald, E. (2015). Ergebnisdarstellung in der Lehrveranstaltungsevaluation. Diagnostica, 61, 153 – 162. https://doi.org/10.1026/0012-1924/a000128

  • Watkins, D. (1994). Student evaluations of university teaching: A cross-cultural perspective. Research in Higher Education, 35, 251 – 266. https://doi.org/10.1007/BF02496704