Original Article (Open Access)

Change my mind

The impact of feedback in online self-assessments for study orientation on change in motivation of prospective students

Published online: https://doi.org/10.1024/1010-0652/a000379

Abstract

Abstract: High dropout rates at universities, often caused by false expectations and a lack of motivation, pose a serious problem in higher education. Online self-assessments (OSAs) assess expectations regarding a field of study (major) and provide feedback on the reality of the major, thus pointing out expectation-reality discrepancies and helping prospective students choose a major. Based on cognitive dissonance theory, pointing out expectation-reality discrepancies should be related to changes in motivation for the major (expectancies for success, subjective values, intention to choose a major), and this relationship should be strengthened by feedback. Past research has shown that OSAs can correct expectations and that expectation-reality discrepancies are related to motivation, but it has not investigated the role of feedback in this process. Therefore, we extend past research by examining whether the positive relationships between expectation-reality discrepancies and changes in motivation for a major are stronger for prospective students who receive feedback on their expectation-reality discrepancies than for prospective students who do not receive feedback after the assessment. We conducted a field experiment in which 234 prospective students were randomly assigned to one of two groups (EG1 = OSA including feedback; EG2 = OSA without feedback). As hypothesized, larger expectation-reality discrepancies were associated with larger changes in motivation for a major (expectancies for success, subjective values, intention to choose a major). Beyond that, we found a moderation effect of the feedback condition showing that the positive relationship between expectation-reality discrepancies and changes in expectancies for success was stronger when prospective students received feedback (vs. no feedback). As feedback showed effects beyond the expectation-reality discrepancies for only one of the considered outcomes, the development of both the assessment and the feedback should be targeted to optimize the effectiveness of OSAs.

The current dropout rate for undergraduate students at German universities is around 28 percent (Heublein & Schmelzer, 2018), rendering early dropout a serious problem in higher education. Examining the reasons for dropout reveals that over half of the students who dropped out started their studies with false expectations about the content of their field of study (major); for eight percent, these false expectations were even the decisive reason for dropping out. Moreover, most graduates and dropouts alike consider their level of information at the beginning of their studies to have been insufficient (Heublein et al., 2017).

Online self-assessments (OSAs) in the context of the choice of a major are web-based advice and information tools that can support prospective students in choosing a major that is suitable for them. Expectation tests in OSAs offer this support by assessing students' expectations about the content of a major and subsequently giving feedback on the reality of the major. This is a two-step process in which discrepancies between students' expectations about the content of a major and the reality of the major are pointed out.

According to the theory of cognitive dissonance (Festinger, 1957), this new information about the reality of the major obtained from the assessment and feedback should cause unpleasant cognitive dissonances with the initial motivation for a major (expectancies for success, values, intention to choose a major). This motivation is based on the initial expectations about the content of a major. To restore consonance, prospective students could change their initial motivation for a major according to their change in expectations about the content of a major. Based on the Expectancy-Value Model, those changes in motivation for the major should ultimately influence prospective students' choice of the major (Eccles & Wigfield, 2002; Guo, Parker, Marsh & Morin, 2015).

Initial studies support these assumptions and show that discrepancies in expectations are indeed associated with changes in expectancies for success in a major, in the values of the respective major, and in the intention to choose the major (Karst, Ertelt, Frey & Dickhäuser, 2017), all of which influence students' choice of a major (Eccles & Wigfield, 2002; Guo et al., 2015). However, these analyses took a more global approach and considered expectation tests as a whole. Thus, it remains unclear whether changes in motivation are especially driven by the feedback element of expectation tests, knowledge that is necessary to optimize current online self-assessment practices and better assist prospective students in their choice of a major. We aim to fill this research gap and contribute to the theoretical framing of and empirical evidence on how OSAs, and feedback in particular, influence prospective students' changes in motivation for a major. To this end, we conducted a field experiment and examined the extent to which feedback in an expectation test strengthens the relationship between expectation-reality discrepancies and changes in motivation for a major.

Motivation for a major influences choice of a major

Based on the Expectancy-Value Model of Achievement-Related Choices (Eccles et al., 1983), individuals' expectancies for success and the importance or value individuals attach to different behavioral options are important determinants of their task choices (Eccles & Wigfield, 2002). The more positively a behavioral option is valued relative to other options and the higher the subjectively perceived expectancies for success, the more likely it becomes that an individual will choose that option (Eccles & Wigfield, 2002).
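
Stated in simplified formal terms (our own shorthand, not the model's original notation; Guo et al., 2015, adopt such a multiplicative perspective), this core prediction can be sketched as:

```latex
% Sketch: the tendency T_i to choose behavioral option i grows with both the
% subjectively perceived expectancy of success E_i and the subjective value
% V_i attached to that option; relative to the alternatives, the option with
% the highest tendency is the one most likely to be chosen.
T_i \propto E_i \times V_i, \qquad \text{chosen option} = \arg\max_i T_i
```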

The model can also be applied in a broader context, such as to decisions regarding educational or career paths (Eccles & Wigfield, 2002). According to the model, educational or career decisions should be influenced by four value components (intrinsic value, attainment value, utility value, and costs) and by expectancies for success. Applied to the context of the choice of a major (Guo et al., 2015; Karst et al., 2017), the intrinsic value indicates how much joy prospective students expect to experience in the respective major. Attainment value indicates the degree to which individuals believe that studying in a major will contribute to their self-affirmation. Utility value indicates the anticipated usefulness of the major (e.g., financial and family benefits for the future). Finally, costs form a negative value component: they indicate the extent to which prospective students assume they will have to forgo alternatives or invest a great deal of time in the major. Expectancies for success ultimately reflect the extent to which prospective students believe they can be successful in the respective major. Since the Expectancy-Value Model is an explanatory model of the formation of a choice, and no choice has been finalized at the time of completing an OSA, intention serves as a proximal behavioral measure for the future choice of a major (Karst et al., 2017).

Initial empirical evidence supports this model in the higher education context. For example, research on pathways into STEM majors (Science, Technology, Engineering, and Mathematics) showed that higher intrinsic value and utility value for math predicted a higher likelihood of choosing a STEM major (Guo et al., 2015). Additionally, students with slower declines in expectancy and value and slower increases in effort cost achieved higher grades and were more likely to remain in an engineering major (Robinson et al., 2019).

Thus, in the context of the choice of a major, expectancies for success and the value that prospective students attach to a major should determine their choice of and persistence in a major and are therefore relevant variables in the self-selection process. Unlike other academic choices, where individuals can base their expectancies for success and subjective values on previous experiences (Eccles et al., 1983), for the choice of a major prospective students need to rely on expectations about the content of the respective major (Karst et al., 2017). However, research has shown that many prospective students have inaccurate expectations about the content of the major, which is a common reason for dropping out of a major (Heublein, Hutzsch, Schreiber, Sommer & Besuch, 2010). This is where expectation tests in OSAs can help to improve prospective students' self-selection.

Expectation tests influence motivation for a major

Expectation tests in OSAs assess prospective students' expectations regarding the extent of different contents of the major (assessment) and, in a second step, contrast them with expert estimates of the reality of the major (feedback) to show the extent to which prospective students' expectations match or differ from reality (Merkle, Schiltenwolf, Kiesel & Dickhäuser, 2021). Expectation tests therefore point out expectation-reality discrepancies to prospective students, which leads to more accurate expectations after using expectation tests in OSAs than before (Hasenberg & Stoll, 2015).

Scholars proposed that expectation-reality discrepancies that are pointed out in expectation tests are, in turn, related to prospective students' motivation for a major. This relationship should strongly depend on whether prospective students are informed about these discrepancies through feedback procedures (Karst et al., 2017). We extend previous work by arguing why pointing out expectation-reality discrepancies should lead to changes in the motivation for a major and why feedback should strengthen this process.

According to the theory of cognitive dissonance (Festinger, 1957), cognitions can include any knowledge, opinion, or belief about the environment, about oneself, or about one's behavior, and relations between cognitions can be consonant or dissonant. The first basic assumption of the theory is that inconsistency among cognitions causes tension, which leads individuals to strive to restore balance within their cognitive system by reducing the dissonance. Such dissonance could occur if a discrepancy becomes obvious between the expected content of a major and the reality of the major. One way to reduce cognitive dissonance is to change one or more of the cognitions involved in the dissonant relations, for example by adjusting the intrinsic value of a major (e.g., from ‘I think the content of the major will be interesting’ to ‘I don't think the content of the major will be interesting’). This theoretical argument explains why pointing out expectation-reality discrepancies in expectation tests leads prospective students to change their expectations about the content of a major. As a result of this change in expectations, prospective students change their initial motivation for the major (expectancies for success, subjective values, intention to choose a major). Initial empirical evidence supports this argument by showing that expectation-reality discrepancies are related to prospective students' motivation for a major (Karst et al., 2017). However, research has so far investigated the impact of expectation tests as a whole, which makes it impossible to conclude how important the feedback is for the effect of expectation tests on motivation.

Feedback strengthens the effect of expectation tests on motivation for a major

Scholars propose that feedback should be an important factor for the effect of expectation tests on motivation (Karst et al., 2017). Under the assumption that feedback points out expectation-reality discrepancies, our theoretical argument above, which identified the pointing out of expectation-reality discrepancies as a critical starting point for expectation change and subsequent change in motivation for a major, supports this proposition. Nevertheless, it is also possible that expectation-reality discrepancies already become evident during the assessment phase of expectation tests. For example, if prospective students indicate to what extent they expect to attend lectures in English in a specific major, they might already infer that English is an important part of the respective major even if they did not expect this beforehand. Particularly in the case of specifically formulated assessments that provide very detailed information about the content or requirements of majors, it is likely that participants receive new information through the items used in the assessment and compare this information with their expectations. This self-reflection might already produce expectation-reality discrepancies and corresponding dissonance in the assessment phase. However, we assume that feedback in expectation tests intensifies the pointing out of expectation-reality discrepancies because it can provide specific confirmation, add information, replace incorrect information with correct information, help differentiate, and assist in restructuring existing knowledge and preconceptions regarding the reality of the major (Butler & Winne, 1995).

As far as we are aware, there is no empirical support for this assumption in the context of the choice of a major. However, first indications from empirical work in the broader educational context speak for the important role of feedback in triggering cognitive dissonance and subsequent change processes. One such study showed that a combination of assessment and feedback led to a greater learning effect on math performance than assessment alone (Fyfe & Rittle-Johnson, 2016).

Research question and hypotheses

We investigate the role of feedback in expectation tests for study orientation. We focus on the impact of feedback on the relationships between expectation-reality discrepancies and changes in motivation for a major. More precisely, we focus on expectancies for success, intrinsic value, attainment value, utility value, costs, as well as intention to choose a major.

Based on the cognitive dissonance theory (Festinger, 1957), we assume that pointing out expectation-reality discrepancies in (the assessment or in assessment and feedback of) expectation tests for a specific major can trigger unpleasant cognitive dissonances with initial motivation for a major, which prospective students reduce by changing their motivation for the respective major. Thus, we hypothesize that there are relationships between expectation-reality discrepancies and changes in motivation for a major:

H1: Larger expectation-reality discrepancies that are pointed out in expectation tests (in the assessment or in assessment and feedback) are related to larger changes in motivation for a major.

Further, we propose that feedback in expectation tests is especially important for highlighting expectation-reality discrepancies because it can add, correct, and restructure information about the reality of the major (Butler & Winne, 1995). Therefore, feedback plays an important role in the relationships between expectation-reality discrepancies pointed out in expectation tests and changes in motivations for the major:

H2: Feedback in expectation tests moderates the relationships between expectation-reality discrepancies and changes in motivation for a major. The relationships are stronger for prospective students who receive feedback on their expectation-reality discrepancies after the assessment of their expectations than for prospective students who do not receive feedback after the assessment.

Method

Design and procedure

Prospective students participated in an OSA for the choice of a major at a public German university with a focus on economic and social sciences. The sample was a self-selected convenience sample that included all participants who voluntarily took part in the OSA up to the time of the evaluation. No compensation or certification was associated with participation. For detailed documentation of the development process, see Messerer, Bürkle, Karst and Janke (2020). The OSA followed a two-step procedure: At first, prospective students completed a subject-unspecific screening test, which was meant to help them identify majors that could be interesting for them. Afterwards, the prospective students had the option to choose between different subject-specific expectation tests, which were the focus of our investigation. More specifically, participants could choose between three majors that were offered at the time of the evaluation: the Bachelor's Program in Economic and Business Education, the Bachelor's Program in Sociology, and the Integrated LL.B. and State Examination Program in Law.

The study followed a pre-post design with experimental group 1 (EG1 = OSA including feedback) and experimental group 2 (EG2 = OSA without feedback). At t1, participants of both groups answered a survey on their motivation for a major. After that, they completed the assessment part of the expectation tests, in which prospective students answered very specific items on the extent to which they expected certain contents to be relevant in the respective major. The typical item form was ‘To what extent do you expect to deal with [e.g., for the Bachelor's Program in Sociology: reading classic sociological books (e.g., Karl Marx, Max Weber, etc.)] in the major’, and prospective students indicated their expectations on a six-point scale ranging from 1 (not at all) to 6 (to a very large extent)¹. The complete set of items is provided in the electronic supplementary material (ESM) 1. Completing the assessment in one of the subject-specific expectation tests took approximately 20 minutes.

Subsequently, participants were randomly assigned to one of two groups. EG1 (OSA including feedback, n = 103) received feedback on how well their expectations matched the content of the major before answering the post-survey (t2) on their motivation for the choice of the respective major again (see Measures). In comparison, EG2 (OSA without feedback, n = 131) received the feedback only after they had answered the post-survey (t2)².

The feedback itself was based on information from advanced students who provided data on the actual extent to which certain contents were present in the major in question. The accuracy of prospective students' expectations was then calculated based on the mean values of the advanced students' responses and reported back graphically, with additional text informing prospective students of their degree of accuracy (e.g., ‘Your expectations […] correspond perfectly with the study reality’ to ‘You have a strongly above-average/below-average expectation […] compared to the study reality’). For detailed documentation of the structure of the feedback, see Bürkle, Messerer, Karst and Janke (2022).

Participants

In total, 234 prospective students participated in our study (70.94 percent female; mean age = 19.12 years, SD = 3.57). About half of the participants had already finished school (51.28 percent); the other half attended 10th to 13th grade (48.29 percent, of which 26.92 percent attended 12th grade)³. See ESM 2, part 1 for further sample descriptions and part 2 for the power analysis.

Measures

We measured changes in motivation for a major (expectancies for success, intrinsic value, attainment value, utility value, costs, intention to choose a major) before and after the OSA (see Design regarding the experimental variation in the timing of the second measurement and ESM 1 for an overview of the pre- and post-survey). To this end, we calculated the difference between the post- and the pre-measurement and transformed this score into an absolute difference. The larger this value, the greater the change in prospective students' motivation for a major; the direction of change was not taken into account. This procedure was applied to allow a meaningful interpretation of the results (for further details, see the limitations section in the discussion).
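
As a minimal illustration of this scoring step, the following R sketch (with hypothetical values and variable names, not the study's actual code) computes the absolute change score for one motivation variable:

```r
# Hypothetical pre- (t1) and post- (t2) scores for one motivation variable,
# e.g., expectancies for success averaged across its items
d <- data.frame(
  success_t1 = c(4.3, 5.0, 3.7),
  success_t2 = c(5.1, 4.2, 3.7)
)

# Absolute difference between post- and pre-measurement: the magnitude of
# change in motivation, disregarding its direction
d$success_change <- abs(d$success_t2 - d$success_t1)
```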

Expectancies for success

Expectancies for success were measured with a German questionnaire inspired by items from Karst et al. (2017). It consisted of three items, e.g., “I will learn the content of the major …” (… very slowly [1] to … very quickly [7]). The reliability was acceptable (α(t1) = .74, α(t2) = .80).

Subjective values of the major

Subjective values were measured on each dimension (intrinsic value, attainment value, utility value, and costs) with three items on a 5-point Likert scale with responses ranging from 1 (not at all) to 5 (completely true) (see Karst et al., 2017; Steinmayr & Spinath, 2010). Reliabilities were acceptable, ranging from .70 to .88 at t1 and from .86 to .93 at t2.

Intention to choose a major

Intention to choose a major was measured with two separate items (see Karst et al., 2017): one item for certainty to choose a major, “How certain are you at this point that you will enter this major?”, assessed on a scale ranging from 1 (0 %) to 11 (100 %), and one item for decisiveness to choose a major, “How decided are you right now to enter this major?”, measured on a 4-point Likert scale ranging from 1 (very undecided) to 4 (very decided). The items correlated substantially (pre-measures: r = .80; post-measures: r = .79). Therefore, we took the mean of the two items at the pre- and post-level respectively, after z-standardizing them to account for their different scalings.
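
Because the two items used different response scales, they were z-standardized before averaging. A minimal R sketch of this step (hypothetical responses, not the study data):

```r
# Hypothetical responses: certainty (1-11 scale) and decisiveness (1-4 scale)
certainty    <- c(7, 10, 3, 8)
decisiveness <- c(3, 4, 1, 2)

# z-standardize each item to remove the scale differences, then average the
# two standardized items into a single intention score per person
intention <- rowMeans(cbind(scale(certainty), scale(decisiveness)))
```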

Expectation-reality discrepancies

The discrepancies between prospective students' expectations regarding the major and the reality of the major were calculated with the same method as applied by Karst et al. (2017). First, the difference between prospective students' expectations and the mean of the expert ratings was computed for each item and transformed into an absolute value. In a second step, the average expectation-reality discrepancy was computed as the mean of all single discrepancy scores. The larger this value, the greater the inaccuracy of prospective students' expectations; over- and underestimation of specific contents of the major were not distinguished.
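
A minimal R sketch of this two-step computation (hypothetical ratings, not the study data) might look as follows:

```r
# Hypothetical expectation ratings of one prospective student (one value per
# item) and the corresponding mean ratings of advanced students (the experts)
student_expectations <- c(5, 2, 6, 1)
expert_means         <- c(3.8, 2.4, 4.1, 2.9)

# Step 1: absolute discrepancy between expectation and reality for each item
item_discrepancies <- abs(student_expectations - expert_means)

# Step 2: average across items; larger values indicate less accurate
# expectations, regardless of over- or underestimation
discrepancy_score <- mean(item_discrepancies)
```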

Data analysis

We tested our hypotheses using hierarchical multivariate moderated regression analyses in R (Version 4.1.2; R Core Team, 2021). All continuous predictor variables were z-standardized to facilitate the interpretation of the results. In step one, we entered the expectation-reality discrepancies (hypothesis 1); in step two, we added the experimental feedback condition (OSA without feedback = 0; OSA including feedback = 1); and in step three, to analyze the moderation effect of hypothesis 2, we entered the interaction between the expectation-reality discrepancies and the feedback condition. As outcomes we included the changes in motivation for a major: expectancies for success, subjective values of the major, and intention to choose a major. A multivariate analysis was conducted to test for each predictor whether it contributes significantly to explaining changes in motivation for a major (all outcomes considered together). Univariate analyses were conducted to provide more insight into each predictor's contribution to explaining changes in each specific motivation for a major (all outcomes considered separately). The significance level was set at α = .05.
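
This pipeline could be reproduced along the following lines in R; this is a sketch with simulated stand-in data and assumed variable names, not the authors' analysis script:

```r
set.seed(1)
n <- 234
outcomes <- c("ch_success", "ch_intrinsic", "ch_attainment",
              "ch_utility", "ch_costs", "ch_intention")

# Simulated placeholder data: discrepancy scores, feedback condition
# (1 = OSA including feedback, 0 = OSA without feedback), and six absolute
# change scores loosely tied to the discrepancies
d <- data.frame(discrepancy = runif(n, 0.5, 2.5),
                feedback    = rbinom(n, 1, 0.5))
for (v in outcomes) {
  d[[v]] <- abs(rnorm(n, mean = 0.3 * d$discrepancy, sd = 0.5))
}

d$disc_z <- as.numeric(scale(d$discrepancy))  # z-standardized predictor

Y <- as.matrix(d[, outcomes])  # multivariate outcome matrix

# The sequential multivariate tests of the full model correspond to entering
# the predictors step by step (discrepancies, feedback, interaction)
m3 <- manova(Y ~ disc_z * feedback, data = d)
summary(m3, test = "Pillai")  # multivariate tests (Pillai's trace)
summary.aov(m3)               # univariate follow-ups for each outcome
```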

Results

The descriptives and zero-order correlations between all continuous variables in the model are depicted in ESM 3, Table E1. The multivariate analysis showed that expectation-reality discrepancies (Pillai's trace = .10, F(6, 227) = 4.26, p < .001) were an overall significant predictor of changes in motivation for a major, whereas neither the feedback condition alone (feedback received vs. no feedback received; Pillai's trace = .03, F(6, 226) = 1.21, p = .292) nor the interaction between expectation-reality discrepancies and the experimental feedback condition (Pillai's trace = .05, F(6, 225) = 1.78, p = .105) contributed significantly to the prediction beyond the expectation-reality discrepancies. Accordingly, prospective students changed their motivation for a major in accordance with the size of their expectation-reality discrepancies.

The results of the hierarchical univariate moderated regression analyses are depicted in Table 1. As expected in our first hypothesis, larger expectation-reality discrepancies positively predicted larger changes in all outcomes, namely in expectancies for success, in intrinsic value, in utility value, in attainment value, in costs and in intention to choose a major.

Table 1 Hierarchical univariate moderated regression analyses predicting the absolute value of change in motivation for a study major with expectation-reality discrepancies, feedback condition (yes/no) and their interaction

Receiving feedback about the expectation-reality discrepancies after the assessment of expectations in the OSA (vs. receiving no feedback after the assessment) had no main effect on changes in motivation for a major, except for intention to choose a major, when the expectation-reality discrepancies were controlled for.

In line with our second hypothesis, our results showed that feedback in expectation tests moderated the relationship between expectation-reality discrepancies and changes in expectancies for success. The relationship was stronger for prospective students who received feedback on their expectation-reality discrepancies after the assessment of their expectations than for prospective students who did not receive feedback after the assessment. Contrary to our hypotheses, there were no significant moderation effects of the experimental feedback condition for changes in the value variables, in costs, or in intention to choose a major.

General discussion

In the presented study, we investigated the role of feedback in expectation tests for the relationships between expectation-reality discrepancies and changes in motivation for a major (expectancies for success, intrinsic value, attainment value, utility value, costs, intention to choose a major). We found that larger expectation-reality discrepancies were related to larger changes in expectancies for success, intrinsic value, utility value, attainment value, costs, and intention to choose a major. Additionally, our results showed that feedback in expectation tests moderated the relationship between expectation-reality discrepancies and changes in expectancies for success: the relationship was stronger for prospective students who received feedback on their expectation-reality discrepancies after the assessment of their expectations than for prospective students who did not receive feedback after the assessment. However, feedback did not strengthen the relationships between expectation-reality discrepancies and changes in the value variables, in costs, or in intention to choose a major.

Our results suggest that, for all outcomes, the extent of change in motivation for a major in expectation tests is related to the extent of expectation-reality discrepancies. For one outcome (expectancies for success), feedback not only strengthened the relationship between expectation-reality discrepancies and change in motivation for a major but even seems to be a critical driver: expectation-reality discrepancies only resulted in changes in motivation when feedback was provided in addition to the assessment of expectations. For other outcomes (intrinsic value and attainment value), however, our findings suggest that the assessment of prospective students' expectations about the content of the major (without feedback) already played a role in the relationship between expectation-reality discrepancies and changes in motivation. The assessment is therefore probably a strong starting point for reflective processes about the potential content of the major.

Theoretical implications

We found that larger expectation-reality discrepancies were related to larger changes in expectancies for success, in intrinsic value, in utility value, in attainment value, in costs, and in intention to choose a major. These findings are in line with our theoretical argument that, according to the theory of cognitive dissonance (Festinger, 1957), expectation-reality discrepancies have the potential to cause unpleasant cognitive dissonance between prospective students' new expectations of the major and their initial motivation for the major. In order to restore consonance, prospective students change their initial motivation for a major according to the extent of their expectation-reality discrepancies. Additionally, our results are in line with past empirical evidence showing that prospective students' expectation-reality discrepancies are related to motivation (Karst et al., 2017). Our research demonstrates that this finding also holds for other expectation tests in OSAs and provides a theoretical framework for the translation of expectation-reality discrepancies into changes in motivation for a major.

Additionally, we found that feedback in expectation tests moderated the relationship between expectation-reality discrepancies and changes in expectancies for success. The relationship was stronger for prospective students who received feedback on their expectation-reality discrepancies after the assessment of their expectations than for prospective students who did not receive feedback after the assessment. These findings support our theoretical assumption that feedback points out expectation-reality discrepancies, which are a critical starting point for the process of change in motivation for a major explained above. This finding is also in line with past research showing that a combination of assessment and feedback had stronger effects on learning than assessment alone (Fyfe & Rittle-Johnson, 2016). However, for most of the outcomes, feedback did not strengthen the relationships between expectation-reality discrepancies and changes in motivation. In summary, these results suggest that the effect of feedback extends to the context of choosing a major, and they broaden the theoretical framework for the relationships between expectation-reality discrepancies in expectation tests and changes in motivation for a major. However, the question arises as to why the feedback largely did not show the expected moderating effect beyond the expectation-reality discrepancies and why the amount of variance explained by the extent of the expectation-reality discrepancies (independent of whether feedback was received or not) was as large as or even larger than the variance explained by the moderation effect.

One explanation for this unexpected finding is that new information is absorbed and processed particularly well through active engagement with the content. Answering the assessment promotes such engagement, whereas the new information from the feedback about the fit of one's own expectations to the reality of the major is absorbed only passively. It can be assumed that active engagement with the content during the assessment leads to deeper processing of the information than the feedback does. Accordingly, the assessment may trigger cognitive dissonance more strongly, and earlier, than the feedback itself and thus have a greater influence on prospective students. Future research should therefore examine whether higher active cognitive engagement with the feedback (e.g., measured through time spent engaging with the feedback, or triggered through instructions or questions regarding the feedback) results in a greater influence on prospective students' change in motivation for a major.

This is especially likely in our sample because it consists mostly of people who have a high need for further information about the majors at hand and are still uncertain about their choice of major, and is thus a very attentive sample (see ESM 2, part 1 for more information on the sample description). While this limits the generalizability of the results to other, less attentive samples, this limitation has little practical impact because our sample represents a typical sample of OSA users, so the effects are typical of those for whom OSAs are important.

Additionally, it is also possible that processing the assessment even leads participants to actively avoid information such as that provided in the subsequent feedback. Despite voluntarily seeking out counseling and guidance services, such as the OSA examined in this study, and despite the desire to obtain an accurate assessment of the content of the major, individuals tend to want to protect their self-worth, which can lead, among other things, to avoidance of feedback and no reflection beyond that which already took place during the assessment (Behnke, 2016). This is especially likely for prospective students who perform poorly (Ashford & Cummings, 1983). In our case, this applies especially to participants who pay close attention to the assessment questions and already suspect discrepancies between their own expectations and the reality of the major during the assessment (e.g., participants had to answer many questions regarding their expectations about the use of English during their studies, but before the assessment they did not know that English was necessary to study sociology). If prospective students feel their options are limited (e.g., ‘I definitely need to study sociology to get my dream job’) or foresee high effort if they accept the feedback (for example, due to reorientation and searching for information on other majors), one possible response is to avoid further information, and accordingly further reflection about their expectation-reality discrepancies, by avoiding the feedback (Behnke, 2016). By avoiding feedback, no reflection based on the new information (beyond the information already obtained in the assessment) takes place that could lead to an additional adjustment of expectations and a further change in motivation for a major. This may explain why we found only small or no incremental variance explained by feedback beyond the variance explained by the expectation-reality discrepancies and why the feedback moderation effect was lacking for most outcomes.

Limitations and future research

A methodological limitation of our study is that we lost information by calculating absolute values both for the change in motivation for a major and for the expectation-reality discrepancies. However, in order to predict the direction of the change in motivation for a major, we would need to know whether prospective students' expectations were disappointed or exceeded. Disappointed or exceeded expectations should not be equated with expectation-reality discrepancies (Merkle et al., 2021): expectation-reality discrepancies only tell us whether prospective students expected to spend more or less time on the content of a major than is actually the case. To know whether these wrong expectations are disappointed or exceeded, one would need to take into account the value of the specific content for the specific student (e.g., less of the content I am interested in: disappointed expectations; less of the content I am not interested in: exceeded expectations), which should be examined in future studies (e.g., Karst et al., 2017; Merkle et al., 2021). Thus, for the present study, the only suitable method was to look at the absolute values, which lets us conclude that higher inaccuracy of prospective students' expectations is related to larger changes in their motivation for a major, without taking the directions of discrepancies or change into consideration. Incorporating these directions is an important next step for future research on OSA feedback. Additionally, we measured expectation-reality discrepancies by calculating the difference between participants' expectations and the study reality. This measure does not capture whether prospective students actually experienced these discrepancies, particularly during the assessment phase. For future studies, it would be valuable to additionally measure subjectively experienced expectation-reality discrepancies after the assessment as well as after the feedback, which would help to better disentangle the effects of the assessment and of the provided feedback.

Furthermore, we have to note that we conducted our research with an OSA that only included items on contents that are actually part of the majors at hand. In contrast to some OSAs (Hasenberg & Schmidt-Atzert, 2013; Karst et al., 2017), our tool thus did not include items on misconceptions. Investigating the impact of including such misconceptions would be particularly interesting, as they would make it more difficult for individuals to learn without proper feedback: the assessment alone is less informative about the study reality if it consists of items that do not align with this study reality. We would therefore expect stronger effects of feedback if the assessment contains misconceptions. In sum, we think that further research including such misconceptions could contribute to a better understanding of the relative importance of assessment and feedback for the impact of OSAs.

The theory of cognitive dissonance has been tested in a wide variety of contexts (Vaidis & Bran, 2020), and we therefore assume that the mechanism should generalize across different majors. However, different majors may be linked to different moderators that could strengthen or weaken the proposed mechanisms. For example, study choice motives have been shown to vary between majors (Janke, Messerer, Merkle & Krille, 2023), and one could imagine that different motivations for choosing a major strongly influence these processes: more intrinsically motivated individuals should change their motivation more in accordance with feedback about unexpected study content, whereas extrinsically motivated individuals should be less likely to do so. As our OSA was voluntary rather than mandatory, it is less likely that extrinsically motivated prospective students participated. Nonetheless, this question could be explored with mandatory OSAs in future research.

Additionally, it seems plausible that participants in OSAs for majors that closely align with school subjects (such as biology) have smaller expectation-reality discrepancies than participants in OSAs for majors that are not part of the standard school curriculum (such as law). This restriction of variance in expectation-reality discrepancies could lead to no or smaller relations between expectation-reality discrepancies and changes in motivation for majors that are similar to school subjects. The three majors in our study (Economic and Business Education, Sociology, and the Integrated LL.B. and State Examination Program in Law) are barely similar to school subjects taught in Germany, with Economic and Business Education being the most similar (compared to Law and Sociology). While we do not expect large variance restrictions due to this familiarity, exploratory analyses indeed revealed that participants in the Economic and Business Education expectation test showed significantly smaller expectation-reality discrepancies than participants in the OSA for the law major, as well as a descriptively smaller standard deviation, which supports our reasoning.

However, the current study covered only three different majors, and the sample sizes within the three majors are quite small; thus, we do not have enough power to test for differences in our effects among the three majors (see ESM 2, part 3 for power analyses within each major separately). Extending the findings to multiple majors with larger sample sizes would therefore be another important step to ensure the transferability of the given results.

Finally, the effect sizes of the found results were small, which raises doubts about the meaningfulness of these effects. However, considering that the present study was an ecologically valid field study and that many prospective students can benefit from these rather small effects without increasing costs for universities, the results retain their relevance due to their high cost-benefit potential. Nonetheless, future studies should examine in more detail which additional factors influence the change in motivation for a major in the context of participation in an OSA, e.g., other feedback scores such as interests, the complexity of the feedback, or participants' active cognitive engagement with the feedback (e.g., De Villiers, 2013).

Practical implications

Provided that future research supports these findings and remedies the limitations, important practical implications can be derived from the findings of the present study. University-specific and subject-specific expectation tests were used in the present paper. However, our findings can also be applied to other types of expectation tests (e.g., university- and/or subject-unspecific) as well as interest or skills tests that contain an assessment and a feedback component.

Our results show that both the feedback and the assessment itself can have an impact on the change in motivation of prospective students. Accordingly, we recommend that future OSAs focus on both parts. Since the assessment has already been shown to influence motivation for the choice of a major, high content validity of the assessment is important. This includes that the content of specific majors is fully covered in the assessment, that content is not over- or underrepresented, and that content is well-structured (Merkle et al., 2021). Since processes of change in motivation for a major are already triggered in the assessment, we also warn against including common misconceptions in OSAs, as these could lead to incorrect changes in motivation for a major. This is especially critical when prospective students do not actively cognitively engage with the feedback. The development should follow strict scientific standards and take into account the current state of research in the field of aptitude diagnostics. Professional expertise (i.e., the expertise of teachers, study administration, and students) should be included in addition to diagnostic expertise (i.e., the expertise of test constructors; Messerer et al., 2020).

In our paper, we computed the absolute value of expectation-reality discrepancies as an operationalization of these discrepancies. Since this operationalization only yielded inconsistent and small effects on changes in motivation for a major, we recommend additionally using other ways of computing the expectation-reality discrepancies and implementing those in new feedback designs. Such variants (e.g., the combination of expectation-reality discrepancies with interest; Merkle et al., 2021) could improve the feedback's influence on prospective students' change in motivation.

Conclusion

The goal of the present study was to find out what changes the minds of prospective students in the process of choosing a major. As expected, we found that expectation-reality discrepancies were related to changes in motivation for a major independent of whether feedback was received or not. Surprisingly, receiving feedback on expectation-reality discrepancies in OSAs for study orientation strengthened the relationship between assessed expectation-reality discrepancies and change in motivation for a major only for expectancies for success. Thus, the present study highlights the important role of both the feedback and the assessment of expectation-reality discrepancies in changing the minds of prospective students and shows that, in addition to the development of useful feedback procedures, the selection of content-valid items in OSAs is of central importance for their intended effectiveness.

Electronic supplementary material

The electronic supplementary material (ESM) is available with the online version of the article at https://doi.org/10.1024/1010-0652/a000379

References

  • Ashford, S. J. & Cummings, L. L. (1983). Feedback as an individual resource: Personal strategies of creating information. Organizational Behavior & Human Performance, 32, 370–398. https://doi.org/10.1016/0030-5073(83)90156-3

  • Behnke, K. (2016). Umgang mit Feedback im Kontext Schule: Erkenntnisse aus Analysen der externen Evaluation und des Referendariats [Handling feedback in the context of school: Findings from analyses of external evaluation and teacher training]. Springer. https://doi.org/10.1007/978-3-658-10223-4

  • Bürkle, H., Messerer, L. A. S., Karst, K. & Janke, S. (2022). Verfahrensdokumentation für den mehrstufigen Studienwahltest der Universität Mannheim [Procedural documentation for the multi-stage test for choosing a study major at the University of Mannheim]. Universität Mannheim. https://www.sowi.uni-mannheim.de/media/Lehrstuehle/sowi/Karst/smart2/DIN_Norm_sMArt_Final_2709.pdf

  • Butler, D. L. & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281. https://doi.org/10.3102/00346543065003245

  • De Villiers, R. (2013). 7 Principles of highly effective managerial feedback: Theory and practice in managerial development interventions. The International Journal of Management Education, 11(2), 66–74. https://doi.org/10.1016/j.ijme.2013.01.002

  • Eccles, J. S., Adler, T. F., Futterman, R., Goff, S. B., Kaczala, C. M., Meece, J. L. & Midgley, C. (1983). Expectancies, values, and academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motivation (pp. 75–146). W. H. Freeman.

  • Eccles, J. S. & Wigfield, A. (2002). Motivational beliefs, values and goals. Annual Review of Psychology, 53, 109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153

  • Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

  • Fyfe, E. R. & Rittle-Johnson, B. (2016). Feedback both helps and hinders learning: The causal role of prior knowledge. Journal of Educational Psychology, 108(1), 82–97. https://doi.org/10.1037/edu0000053

  • Guo, J., Parker, P. D., Marsh, H. W. & Morin, A. J. (2015). Achievement, motivation, and educational choices: A longitudinal study of expectancy and value using a multiplicative perspective. Developmental Psychology, 51(8), 1163–1176. https://doi.org/10.1037/a0039440

  • Hasenberg, S. & Schmidt-Atzert, L. (2013). Die Rolle von Erwartungen zu Studienbeginn: Wie bedeutsam sind realistische Erwartungen über Studieninhalte und Studienaufbau für die Studienzufriedenheit? [The role of expectations at the beginning of academic studies: How important are realistic expectations for students' satisfaction?]. Zeitschrift für Pädagogische Psychologie, 27(1–2), 87–93. https://doi.org/10.1024/1010-0652/a000091

  • Hasenberg, S. & Stoll, G. (2015). Erwartungschecks in Self-Assessments: Zur Erfassung und Korrektur von Studienerwartungen [Expectation checks in self-assessments: On the assessment and correction of study expectations]. Das Hochschulwesen, 63, 104–118.

  • Heublein, U., Ebert, J., Hutzsch, C., Isleib, S., König, R., Richter, J. & Woisch, A. (2017). Motive und Ursachen des Studienabbruchs an baden-württembergischen Hochschulen und beruflicher Verbleib der Studienabbrecherinnen und Studienabbrecher [Motives and causes of dropouts at Baden-Württemberg universities and careers of students who dropped out]. DZHW. https://www.dzhw.eu/pdf/21/BaWue_Bericht_gesamt.pdf

  • Heublein, U., Hutzsch, C., Schreiber, J., Sommer, D. & Besuch, G. (2010). Ursachen des Studienabbruchs in Bachelor- und in herkömmlichen Studiengängen [Causes of dropout in bachelor's and conventional degree programs]. HIS. https://www.dzhw.eu/pdf/21/studienabbruch_ursachen.pdf

  • Heublein, U. & Schmelzer, R. (2018). Die Entwicklung der Studienabbruchquoten an den deutschen Hochschulen: Berechnungen auf Basis des Absolventenjahrgangs 2016 [The development of dropout rates at German universities: Calculations based on the 2016 graduating class]. DZHW. https://www.dzhw.eu/pdf/21/studienabbruchquoten_absolventen_2016.pdf

  • Janke, S., Messerer, L. A. S., Merkle, B. & Krille, C. (2023). STUWA: Ein multifaktorielles Inventar zur Erfassung von Studienwahlmotivation [STUWA: A multi-factorial inventory to measure motivation for enrollment]. Zeitschrift für Pädagogische Psychologie, 37(3), 215–231. https://doi.org/10.1024/1010-0652/a000298

  • Karst, K., Ertelt, B.-J., Frey, A. & Dickhäuser, O. (2017). Studienorientierung durch Self-Assessments: Veränderung von Einstellungen zum Studienfach während der Bearbeitung eines Selbsttests [Academic orientation using self-assessments: Attitude change towards the subject of study while conducting an online self-assessment]. Journal for Educational Research Online, 9(2), 205–227. https://doi.org/10.25656/01:14935

  • Merkle, B., Schiltenwolf, M., Kiesel, A. & Dickhäuser, O. (2021). Entwicklung und Validierung eines Erwartungs- und Interessenstests (E × I – Test) zur Erkundung studienfachspezifischer Passung in einem Online-Self-Assessment [Development and validation of an Expectation-Interest Test (E × I Test) to explore fit for a specific major in an online self-assessment]. Zeitschrift für Empirische Hochschulforschung, 5(2), 162–183. https://doi.org/10.3224/zehf.5i2.05

  • Messerer, L., Bürkle, H., Karst, K. & Janke, S. (2020). Nutzung hochschulinterner Expertise zur Entwicklung von Online-Selbstreflexionstests für Studieninteressierte [Using university-internal expertise for the development of online self-assessments for prospective students]. Das Hochschulwesen, 68(3), 81–87.

  • R Core Team. (2021). R: A language and environment for statistical computing (Version 4.1.2) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/

  • Robinson, K. A., Lee, Y., Bovee, E. A., Perez, T., Walton, S. P., Briedis, D. & Linnenbrink-Garcia, L. (2019). Motivation in transition: Development and roles of expectancy, task values, and costs in early college engineering. Journal of Educational Psychology, 111(6), 1081–1102. https://doi.org/10.1037/edu0000331

  • Steinmayr, R. & Spinath, B. (2010). Konstruktion und erste Validierung einer Skala zur Erfassung subjektiver schulischer Werte (SESSW) [Construction and initial validation of a scale for the assessment of subjective school values]. Diagnostica, 56(4), 195–211. https://doi.org/10.1026/0012-1924/a000023

  • Vaidis, D. C. & Bran, A. (2020). Cognitive dissonance theory. In D. S. Dunn (Ed.), Oxford Bibliographies Online: Psychology. Oxford University Press. https://doi.org/10.1093/obo/9780199828340-0156

¹ Participants also indicated their enjoyment; however, this part of the assessment and feedback is not the focus of the present paper and will therefore not be described further. For more details, see Bürkle et al. (2022).

² The uneven distribution of participants across the groups could not be attributed to differential dropout but presumably resulted from the randomization mechanism assigning participants completely at random (rather than in equal proportions). A χ² test revealed that prospective students in EG2 did not differ significantly from those in EG1 in their likelihood of dropping out, χ²(1, N = 293) = 0.05, p = .820.

³ One person chose not to provide information on this question.