
The Day of the Week Effect on Subjective Well-Being in the European Social Survey

An Individual-Participant Meta-Analysis

Published Online: https://doi.org/10.1027/2151-2604/a000436

Abstract

In large-scale social surveys, respondents are typically interviewed on different days of the week. Because previous research established systematic daily fluctuations of people’s mood, it was hypothesized that subjective well-being ratings might be similarly affected by the day the interview takes place. Therefore, an individual-participant meta-analysis of 221 representative samples from the European Social Survey including 408,637 participants is presented. The random-effects meta-analysis found a negligible day of the week effect on life satisfaction and happiness ratings, even after accounting for selection and interviewer effects. Although significantly different ratings were observed on Sundays, the size of the obtained effects was trivial. These findings provide little evidence that the interview day has a meaningful impact on subjective well-being research in cross-sectional, large-scale studies.

Subjective well-being (SWB) refers to people’s cognitive and affective appraisals of their lives and represents a central determinant of one’s quality of life (Skevington & Böhnke, 2018). Individuals who generally perceive their lives favorably and frequently experience positive emotions such as joy and happiness have, for example, better health, higher income, and more satisfying interpersonal relationships (Diener et al., 2018). To monitor SWB in different populations and its development over time, it is routinely assessed in large-scale social and economic surveys. The measurement of SWB in these studies is guided by the assumption that SWB is a stable characteristic, at least in the medium run. Unfortunately, in empirical applications, SWB ratings can also be affected by the current situational context (e.g., Gnambs & Buntins, 2017; Lucas & Donnellan, 2007, 2012). Among other factors, it has been suggested that fluctuations in daily mood might affect SWB ratings (Eid & Diener, 2004; Schwarz & Strack, 1999). In particular, weekends have been suggested to be accompanied by elevated mood states, while more negative moods have been observed during the week (Areni et al., 2011; Helliwell & Wang, 2014, 2015; Tsai, 2019). If daily mood swings affect SWB ratings, they might systematically distort SWB assessments in representative social surveys in which respondents are interviewed on different days of the week (DOW). Previous findings on DOW effects on SWB have been rather heterogeneous, with some studies supporting such effects (Akay & Martinsson, 2009; Tumen & Zeydanli, 2014) and others not (Helliwell & Wang, 2014, 2015; Tsai, 2019). Therefore, an individual-participant meta-analysis is presented that evaluated DOW effects on SWB in representative samples of the European Social Survey (ESS; Schnaudt et al., 2014).

The Short-Term Stability of Subjective Well-Being

On the state-trait continuum of psychological constructs, SWB takes a position in the middle. SWB has a rather stable core exhibiting developmental changes across the life course (see López Ulloa et al., 2013, for a review). At the same time, SWB can also be affected by changing life circumstances. After a major life event such as marriage, childbirth, or the loss of a beloved person, people frequently report a pronounced increase or drop in SWB (Bleidorn et al., 2018; Denissen et al., 2019). Even momentary situational conditions can influence SWB ratings: For example, respondents reported higher SWB after they watched a win (as compared to a tie) in an important soccer game (Schwarz et al., 1987) or when interviewed on a sunny (as compared to a cloudy or rainy) day (Kämpfer & Mutz, 2013). Empirical studies estimated that about 10% to 20% of the observed variance in SWB ratings can be traced back to momentary situational influences (Lucas & Donnellan, 2007, 2012). For single-item measures of SWB, this proportion is even higher and can reach 25% (Gnambs & Buntins, 2017). The prevalent explanation for these short-term fluctuations in SWB is differences in the respondents’ positive and negative mood that affect the state component of SWB (Eid & Diener, 2004; Schwarz & Strack, 1999). In particular, the frequency (rather than the intensity) of experienced emotions affects SWB ratings (Diener et al., 1990). Thus, when people are asked to make SWB judgments, the balance of their positive and negative emotions influences their responses (Kuppens et al., 2008; Tončić & Anić, 2020).

Interestingly, mood follows a circaseptan rhythm (i.e., a 7-day cycle) with systematically varying patterns across the days of a week (Csikszentmihalyi & Hunter, 2003; Stone et al., 2012). Supposedly, mood is worst on Monday (colloquially termed “Blue Monday”), slightly improves during the following days, and sharply increases on Friday (“Thank God it’s Friday”) to reach its peak on Saturday, followed by a sharp decline on Sunday (“Sunday neurosis”; Areni & Burger, 2008). Prevalent explanations for this pattern refer to anticipatory effects for the upcoming week (Stone et al., 1985): Near the end of the week more pleasant events are expected (e.g., the opportunity to meet friends or engage in leisure activities), whereas on Sundays and Mondays people more strongly plan ahead for the upcoming (presumably unpleasant) workdays. Thus, current mood is determined by the activities expected in the near future, which results in the mood variations observed across the DOW. Empirical support for this pattern is mixed. While several studies corroborated a general weekend effect (Helliwell & Wang, 2014, 2015; Stone et al., 2012; Tsai, 2019), specific mood trends for Sunday or Friday were found by some authors (e.g., Csikszentmihalyi & Hunter, 2003; Stieger & Reips, 2019) but not by others (e.g., Stone et al., 2012). Regarding a putative Monday effect, a meta-analysis including 11 effect sizes reported a small (albeit significant) effect of Δ = −0.07 that was in line with the hypothesis of worse mood at the beginning of the week (Areni et al., 2011).

Far less research has examined whether these DOW effects on mood also extend to SWB. Using data from the German Socio-Economic Panel, Akay and Martinsson (2009) documented that respondents who were interviewed on a Sunday reported lower life satisfaction than those interviewed during the week. Similarly, Tumen and Zeydanli (2014) identified lower happiness ratings in the British Household Panel on Sundays and Mondays, even after correcting for selection effects introduced by the nonrandom assignment of respondents to the different days of the week. In contrast, other studies examining general weekend effects were unable to replicate these DOW effects on SWB (Helliwell & Wang, 2014, 2015; Tsai, 2019).

The Present Study

Despite the demonstrated connection between experienced emotions and SWB (Diener et al., 1990; Kuppens et al., 2008; Tončić & Anić, 2020), limited research has been devoted to the study of DOW effects on SWB, with the few available findings being highly inconsistent. Therefore, the present study presents an individual-participant meta-analysis (Debray et al., 2015) that examines DOW effects in multiple samples from over 30 countries in the ESS (Schnaudt et al., 2014). The focus on a single survey program adopting highly standardized assessment settings across all samples allows better generalizations of potential DOW effects because differences in measurement conditions do not bias the meta-analytic estimates. Moreover, the use of a well-defined and readily accessible population of samples prevents distortions resulting from publication bias.

The research is guided by four main hypotheses postulating that SWB ratings are significantly lower on a (Hypothesis 1, H1) Monday or (Hypothesis 2, H2) Sunday and significantly higher on a (Hypothesis 3, H3) Friday or (Hypothesis 4, H4) Saturday as compared to the weekly average SWB rating. Moreover, previous research (e.g., Akay & Martinsson, 2009; Helliwell & Wang, 2014, 2015; Stone et al., 1985) suggested that DOW effects might be limited to or more pronounced among respondents in paid work as compared to other groups (e.g., retired). The basic premise in these studies seems to be that work is frequently conceived as an unpleasant event giving rise to Blue Monday or Sunday Neurosis effects. However, this view might be challenged: Increasing evidence shows that work might also serve as a protective factor giving access to various latent benefits (Selenko et al., 2011); job-related stress might even be seen as a positive challenge for employees (Cavanaugh et al., 2000). Therefore, sensitivity analyses are conducted that explore the robustness of the central results for respondents in paid work, education, and retirement.

Method

Individual-Participant Data

The ESS (Schnaudt et al., 2014) is a biennial, cross-sectional, and representative survey measuring social and political attitudes, well-being, and living conditions in varying European countries. The first round was fielded in 2002, with the most recent available round from 2018. The nine rounds of the ESS include N = 440,583 participants in 232 samples. For the present analyses, five samples were excluded because they did not record the interview date or provide a unique interviewer identification number that would allow matching respondents to interviewers. Moreover, six samples from Israel were not considered because there, in contrast to the remaining countries, Saturday is considered a rest day and Sunday a regular workday. From the remaining samples, 1.6% of the participants were discarded because no information was available on the weekday of the interview or the interview was conducted on a public holiday. Information on public holidays in a country was taken from two public databases at https://date.nager.at and https://timeanddate.com. Thus, the individual-participant dataset analyzed in this study included 221 independent samples from 37 European countries with N = 408,637 respondents (54% female). The sample sizes varied between 103 and 3,008 with a median of 1,830. The mean age of the respondents was M = 48.26 years (SD = 18.57). Most respondents (96%) were citizens of their country of residence and were in paid work (49%), education (8%), or retirement (25%). All surveys were conducted by local survey institutes as personal, paper- or computer-assisted interviews using pretested questionnaires and standardized interview protocols. In total, these surveys employed 30,615 interviewers (69% female) who each conducted a median of 10 interviews. A summary of the key characteristics of each sample is given in the Electronic Supplementary Material (ESM 1).

Measures

The cognitive and hedonic components of SWB were measured with two single items that are frequently administered in large-scale social surveys (cf. Cheung & Lucas, 2014): Life satisfaction (“All things considered, how satisfied are you with your life as a whole nowadays?”) was rated on an 11-point scale from 0 = extremely dissatisfied to 10 = extremely satisfied, whereas happiness (“Taking all things together, how happy would you say you are?”) was evaluated on an 11-point scale from 0 = extremely unhappy to 10 = extremely happy. The two items correlated at r = .69 (p < .001) and resulted in means of M = 6.88 (SD = 2.29) and M = 7.24 (SD = 2.00), respectively.

The weekday of the interview (Monday to Sunday) was derived from the interview date. Moreover, the interview time was noted as a metric variable between 8 hr and 22 hr. For about 0.14% of the respondents, an interview time before 8 hr or after 22 hr was recorded. To avoid distorted results from extreme values, these times were winsorized. Finally, the quarter of the interview year was noted.
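For illustration, a minimal sketch of this winsorization step in R, assuming a numeric vector of recorded interview times (the variable name `interview_hour` is hypothetical); values outside the 8–22 hr window are simply clamped to the nearest boundary:

```r
# Clamp interview times to the 8-22 hr window (winsorization at fixed bounds).
# `interview_hour` is a hypothetical numeric vector of recorded start times.
winsorize_hours <- function(interview_hour, lower = 8, upper = 22) {
  pmin(pmax(interview_hour, lower), upper)
}

winsorize_hours(c(7.5, 9, 14, 23))  # returns 8, 9, 14, 22
```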

Several control variables were included to correct for potential selection effects: the respondent’s sex (0 = male, 1 = female), age (in years), number of years in education, self-reported religiousness (on an 11-point response scale from 0 = not at all to 10 = very), main activity during the last 7 days (0 = paid work, 1 = education, 2 = retired, 3 = other), total contracted hours of paid work each week, migration background (0 = without, 1 = with), and whether a respondent lived with a partner (0 = no, 1 = yes) or a child (0 = no, 1 = yes). Moreover, the two higher-order dimensions of Schwartz’s (1992) values model, “openness to change versus conservation” and “self-transcendence versus self-enhancement,” were included; they were measured with 21 items on 6-point response scales (1 = very much like me, 6 = not like me at all).

Meta-Analytic Procedure

Meta-Analytic Model

The individual-participant meta-analysis adopted a one-stage approach with maximum likelihood estimation that simultaneously analyzed all available samples in a single model while fully accounting for the between-sample heterogeneity (Debray et al., 2015). Because previous research showed pronounced interviewer effects on well-being ratings (Beullens et al., 2019; Beullens & Loosveldt, 2016), the meta-analysis specified a hierarchical random-effects structure with interviewers nested in samples which, in turn, were nested in countries. The respective statistical model can be summarized within the generalized linear mixed-effects framework as

y_csiv = β_0 + β_d x_csivd + u_c + u_s + u_i + u_id x_csivd + ε_csiv,  (1)

u_c ~ N(0, σ_c²), u_s ~ N(0, σ_s²), u_i ~ N(0, σ_i²), u_id ~ N(0, σ_id²), ε_csiv ~ N(0, σ_ε²),  (2)

with y_csiv representing the SWB rating (life satisfaction or happiness) for respondent v interviewed by interviewer i in sample s of country c, x_csivd giving the effect-coded variable indicating a specific DOW d, β_0 as the grand mean well-being score across all 7 days, β_d denoting the effect of a specific DOW d, u_c, u_s, u_i, and u_id as the random effects (for countries, samples, and interviewers), and ε_csiv reflecting the residual term. The effect-coding scheme for the variable indicating a specific DOW made it possible to examine whether well-being reports on a specific day differed from the grand mean rating across all 7 days (Wendorf, 2004). In line with previous research (Stone et al., 2012), DOW effects were scrutinized for four specific days: Monday, Friday, Saturday, and Sunday. Effect sizes were derived by z-standardization of the outcome y_csiv; thus, the parameter β_d can be interpreted as the standardized mean difference Δ between day d and the grand mean across all 7 days. DOW effects on SWB were examined separately for life satisfaction and happiness. Because preliminary analyses showed negligible random slope variances across samples and countries (see ESM 1), only random interviewer slopes u_id were modeled. The percentage of variability in the random slopes not caused by sampling error was quantified with a measure analogous to I² (Higgins & Thompson, 2002): For the effect of a specific day of the week β_d, let σ_d² and SE_d represent the random slope variance and the standard error, respectively. Then, the percentage of variability in β_d not caused by sampling error was calculated as σ_d² / (σ_d² + SE_d²), expressed as a percentage. This definition of the relative random variance is conceptually similar to I² in meta-analyses (Higgins & Thompson, 2002) and the intraclass correlation (ICC; Liljequist et al., 2019).
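A minimal sketch of how such a model can be specified with the lme4 package listed in the Statistical Software section; the data frame and column names (`dat`, `swb_z`, `dow`, `country`, `sample_id`, `interviewer`, `pweight`) are hypothetical placeholders, and the contrast setup shown is one possible way to obtain the effect coding described above, not necessarily the authors’ exact implementation:

```r
library(lme4)

# Effect coding: each day contrast compares that day with the grand mean
# across all 7 days (cf. Wendorf, 2004). Placing a mid-week day last makes
# Monday, Friday, Saturday, and Sunday each receive their own contrast.
dat$dow <- factor(dat$dow, levels = c("Mon", "Tue", "Thu", "Fri",
                                      "Sat", "Sun", "Wed"))
contrasts(dat$dow) <- contr.sum(7)

# One-stage IPD model: random intercepts for countries, samples within
# countries, and interviewers within samples; random DOW slopes only at the
# interviewer level, as described above. swb_z is the z-standardized outcome
# and pweight the post-stratification weight.
fit <- lmer(swb_z ~ dow +
              (1 | country) + (1 | country:sample_id) +
              (1 + dow | country:sample_id:interviewer),
            data = dat, weights = pweight, REML = FALSE)

# I2-analogous heterogeneity of a day effect: share of variability in beta_d
# not due to sampling error, sigma_d^2 / (sigma_d^2 + SE_d^2).
i2_analog <- function(sigma2_d, se_d) sigma2_d / (sigma2_d + se_d^2)
```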

Correction for Selection Effects

Because respondents were not randomly assigned to the different days of the week, potential selection effects were addressed following Heckman (1979). This involved two steps, the estimation of a selection model and of an analysis model: First, the selection model (step 1), in the form of a probit regression, estimated the respondent’s choice of a specific interview day (e.g., Monday) from the 11 control variables (see above), the time of the day, and the quarter of the year of the interview. As an identification constraint, the Heckman (1979) selection-correction approach requires at least one predictive variable in the selection model that is not associated with SWB and, thus, is not included in the analysis model (step 2). Following previous research (Tumen & Zeydanli, 2014), the interviewer was used as an exclusion restriction. Interviewers were expected to vary in their probability of conducting interviews on a certain day (e.g., Monday). This implies that, to some degree, the interviewer’s contact behavior before the actual interview determines the interview day but not the well-being score of a given respondent. Consequently, for each interviewer, the relative number of interviews conducted on a specific day was calculated (cf. Tumen & Zeydanli, 2014). This variable was included as an additional predictor in the selection equation. From these results, the inverse Mills (1926) ratio was calculated. Second, the control variables and the inverse Mills ratio were included in the mixed-effects regression of well-being on the DOW in Equation (1) to account for the selection effects. In this regression, the estimated effect for the DOW represents the impact of a specific day on SWB ratings corrected for biasing selection effects.
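A simplified two-step sketch of this correction for a single day (Sunday), assuming hypothetical column names (`sunday`, `interview_time`, `quarter`, `interviewer_share_sunday`, and control variables collected in `controls`); the actual procedure, which handles every examined day and the full covariate set, may differ in detail:

```r
# Step 1: probit selection model for being interviewed on a Sunday.
# The interviewer's share of Sunday interviews serves as the exclusion
# restriction; all variable names are hypothetical placeholders.
controls <- c("female", "age", "edu_years", "religiousness", "activity",
              "work_hours", "migration", "partner", "child",
              "open_vs_conservation", "self_trans_vs_enhancement")
sel_form <- reformulate(c(controls, "interview_time", "quarter",
                          "interviewer_share_sunday"), response = "sunday")
sel_fit  <- glm(sel_form, family = binomial(link = "probit"), data = dat)

# Inverse Mills ratio from the probit linear predictor (control-function
# form for a binary "treatment": phi/Phi if interviewed on Sunday,
# -phi/(1 - Phi) otherwise).
z <- predict(sel_fit, type = "link")
dat$imr <- ifelse(dat$sunday == 1,
                  dnorm(z) / pnorm(z),
                  -dnorm(z) / (1 - pnorm(z)))

# Step 2: add the control variables and the inverse Mills ratio to the
# mixed-effects outcome model from the previous sketch.
fit_corrected <- update(fit, . ~ . + female + age + edu_years + religiousness +
                          activity + work_hours + migration + partner + child +
                          open_vs_conservation + self_trans_vs_enhancement + imr)
```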

Weighting Procedure

The analyses applied the post-stratification weights provided in the ESS dataset to account for differences in sampling designs and non-response errors that might have resulted in a systematic underrepresentation of certain respondent groups. For three samples, the respective weights were not available; thus, unit weights (i.e., values of 1) were used.

Missing Values

For 0.53% and 0.67% of the respondents, life satisfaction or happiness ratings were not available. Moreover, some control variables had missing rates up to 4.03%. Because missingness was extremely rare for all variables, missing values were imputed with the mode (for categorical variables) or median (for metric variables).
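A small sketch of this simple imputation, assuming the respondent-level variables are held in a data frame `dat` (a hypothetical name); factors receive the mode, numeric variables the median:

```r
# Impute missing values: mode for categorical variables, median for metric ones.
impute_simple <- function(x) {
  if (is.numeric(x)) {
    x[is.na(x)] <- median(x, na.rm = TRUE)
  } else {
    x[is.na(x)] <- names(which.max(table(x)))  # most frequent category
  }
  x
}

dat[] <- lapply(dat, impute_simple)
```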

Evaluation of Effects

The pooled DOW effects are evaluated based on their significance using an α of 5%. Moreover, effect sizes are considered practically meaningful if they exceed |Δ| = 0.05. Empirical effect size distributions in various psychological fields (Bosco et al., 2015; Paterson et al., 2016) highlight that the average difference between two deciles of the distribution typically falls between about d = 0.04 and 0.08. For example, Paterson and colleagues (2016) reported average effect sizes at the 40th and 50th percentiles of d = 0.35 and 0.41, respectively. Therefore, DOW effects shifting an effect size by about one decile of these distributions were considered practically meaningful.

Statistical Software

The analyses were conducted in R version 4.0.3 (R Core Team, 2020) using the lme4 package version 1.1-25 (Bates et al., 2015), lmerTest package version 3.1-3 (Kuznetsova et al., 2017), and arm package version 1.11-2 (Gelman & Su, 2020). Moreover, data processing and preparation were supported by the tidyverse package version 1.3.0 (Wickham et al., 2019).

Open Practices

The survey material and raw data are freely available at https://europeansocialsurvey.org. The computer code and the results of the statistical analyses reported in this manuscript are provided at https://doi.org/10.23668/psycharchives.4369. Moreover, the checklist for the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (Stewart et al., 2015) is given in the ESM 2. The meta-analysis was not preregistered.

Results

The interview days were fairly evenly distributed from Monday to Saturday (13%–17%); in contrast, somewhat fewer interviews were conducted on Sundays (9%). The distribution of the life satisfaction scores in Figure 1 showed slightly lower scores on Saturday (M = 6.52, SD = 2.39) and Sunday (M = 6.24, SD = 2.45) as compared to the average across the entire week (M = 6.88, SD = 2.29). In contrast, slightly higher ratings were observed on Monday (M = 7.04, SD = 2.22). A similar pattern appeared for happiness (see Figure 1). However, these results might be misleading because potential differences between countries and interviewers were not taken into account. Therefore, the variance in well-being scores attributable to samples, interviewers, days of the week, and the time of the day was quantified using the ICC (Liljequist et al., 2019). These analyses highlighted that differences between interviewers (8.65% and 9.24%) and countries (15.87% and 11.65%) accounted for a substantial proportion of variance in life satisfaction and happiness ratings, whereas differences between samples did not (1.18% and 1.12%). Likewise, the day of the week (0.004% and 0.002%) and the time of the interview day (0.041% and 0.040%) were less relevant for the observed heterogeneity in well-being ratings.
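A sketch of one way to obtain such a variance decomposition from an intercept-only mixed model, reusing the hypothetical column names from the earlier sketches; treating the day of the week and the interview hour as additional grouping factors mirrors the ICC logic described above, although the authors’ exact specification may differ:

```r
library(lme4)

# Variance components of the z-standardized well-being ratings.
dat$hour_f <- factor(round(dat$interview_hour))
vc_fit <- lmer(swb_z ~ 1 + (1 | country) + (1 | country:sample_id) +
                 (1 | interviewer) + (1 | dow) + (1 | hour_f),
               data = dat)

# ICC per component: its variance as a share of the total variance (in %).
vc <- as.data.frame(VarCorr(vc_fit))
setNames(round(100 * vc$vcov / sum(vc$vcov), 3), vc$grp)
```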

Figure 1 Mean subjective well-being ratings on seven days of the week in the European Social Survey. Means and standard deviations are given at the top and sample sizes are at the bottom.

Unconditional Meta-Analyses

The results of the meta-analyses are summarized in Table 1. The fixed-effects model showed significant (p < .05) weekend effects for life satisfaction with Δ = −0.113 for Saturday and −0.172 for Sunday. Surprisingly, the Saturday effect fell in the opposite direction than hypothesized and showed lower life satisfaction as compared to the weekly average. In contrast, for Monday and Friday, the respective effect estimates were markedly smaller, |Δ| < 0.03. For happiness, a highly similar pattern was observed, with more pronounced weekend effects as compared to the other days. However, these results might be biased in the case of unmodeled between-sample or between-interviewer heterogeneity. Indeed, random-effects models showed a substantially better fit to the data than the fixed-effects model (see ESM 1). After accounting for these random variance components, the estimated DOW effects were drastically reduced and, in most cases, did not reach significance (see Table 1). The respective Sunday effects were significant at an α-level of 5% and fell at Δ = −0.010 for life satisfaction and −0.011 for happiness. Thus, the size of these effects did not support practically meaningful DOW effects. For the remaining days, no significant effects were found.

Table 1 Meta-analytic results for the day of the week effect on subjective well-being ratings

Meta-Analyses Correcting for Selection Effects

In the case of substantial selection effects, the previously reported results might be misleading. Indeed, probit regressions of the interview day on various covariates (see ESM 1) identified several variables explaining the choice of the interview day. For example, older respondents and retired interviewees were less likely to participate on the weekend, whereas those in education showed higher participation probabilities on Fridays and Saturdays. Therefore, the meta-analyses were repeated accounting for these selection effects (see Table 1). These analyses showed slightly larger effects of Sunday on life satisfaction and happiness ratings. However, the size of these effects remained negligible, Δ = −0.012 and −0.012. Moreover, after accounting for selection effects, Saturdays also exhibited significantly (p < .05) different happiness ratings. However, the respective effect size was even smaller than for Sunday (Δ = −0.006). Taken together, these analyses did not corroborate meaningfully different well-being ratings depending on the DOW.

Heterogeneity Analyses

The random-effects meta-analyses reported in Table 1 showed pronounced heterogeneity related to the person of the interviewer. The respective random variances varied between σi = 0.110 and 0.127, indicating different DOW effects depending on the interviewer. The respective empirical Bayes estimates (cf. Gelman & Hill, 2006) of the Sunday effects are given in Figure 2.¹ These show that 90% of the estimated random effects on life satisfaction and happiness fell between Δ = −0.06 and 0.06. Moreover, these estimates were highly uncertain and could not be distinguished from zero. Thus, even acknowledging the heterogeneity in the DOW effects would not result in substantial daily differences in SWB ratings.
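The interviewer-specific Sunday slopes underlying Figure 2 can be extracted as conditional modes (empirical Bayes estimates) from a fitted lme4 model, with approximate standard errors from the arm package; a sketch based on the model object `fit` and the contrast coding assumed in the earlier sketch (the column label `dow6` for the Sunday contrast is hypothetical and should be checked against the column names of the random-effects matrix):

```r
library(lme4)
library(arm)  # provides se.ranef() for random-effect standard errors

grp <- "country:sample_id:interviewer"
eb  <- ranef(fit)[[grp]]      # conditional modes (empirical Bayes estimates)
se  <- se.ranef(fit)[[grp]]   # approximate standard errors

sun_est <- eb[, "dow6"]       # Sunday contrast under the assumed coding
sun_se  <- se[, "dow6"]

# Plot a random subset of 100 interviewers with +/- 1 SE bars, as in Figure 2.
idx <- sample(nrow(eb), 100)
plot(seq_along(idx), sun_est[idx], pch = 16,
     ylim = range(c(sun_est[idx] - sun_se[idx], sun_est[idx] + sun_se[idx])),
     xlab = "Interviewer (random subset)",
     ylab = "Sunday effect (EB estimate)")
arrows(seq_along(idx), sun_est[idx] - sun_se[idx],
       seq_along(idx), sun_est[idx] + sun_se[idx],
       angle = 90, code = 3, length = 0.02)
```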

Figure 2 Empirical Bayes estimates (with standard errors) for interviewer effects of Sunday on subjective well-being. (A) Life Satisfaction; (B) Happiness. Effects are presented for 100 randomly drawn interviewers.

Finally, moderation analyses explored whether different effects emerged depending on the work status of the respondents. Therefore, interaction effects between the weekday and work status were added to the regression model in Equation (1). These analyses showed significantly (p < .05) different Saturday and Sunday effects, while no moderating influences were observed for Monday and Friday (see ESM 1). For example, respondents in paid work reported significantly lower life satisfaction (Δ = −0.011) on Sundays, while no significant (p > .05) effects were observed for retired respondents (Δ = −0.010) or those in education (Δ = −0.002). In contrast, happiness ratings given on Sundays were markedly lower among retired respondents (Δ = −0.015) as compared to those in paid work (Δ = −0.009) or education (Δ = 0.004). Overall, the size of these effects did not give rise to meaningful differences in well-being ratings depending on the day of the interview.

Discussion

Large-scale social surveys strive to implement highly standardized assessment conditions to avoid distorted measurements resulting from methodological artifacts in how psychological constructs are assessed. Because these surveys are voluntary, standardizing the exact timing of a respondent’s interview is rarely feasible. Rather, interviewees have considerable freedom in choosing when to participate. Because accumulating evidence indicated systematic differences in momentary mood depending on the DOW (Areni et al., 2011; Helliwell & Wang, 2014, 2015; Stone et al., 2012; Tsai, 2019), it has been suggested that SWB might be similarly affected by different interview days (Akay & Martinsson, 2009; Tumen & Zeydanli, 2014). Therefore, the present individual-participant meta-analysis explored DOW effects in the ESS, a repeated cross-sectional, cross-country survey including over 400,000 participants. Overall, these analyses showed negligible DOW effects on life satisfaction and happiness ratings. Although Sundays consistently resulted in significantly lower SWB ratings (H2), the respective effect sizes of about Δ = −0.01 did not suggest a practically meaningful impact. Thus, research on SWB is unlikely to be distorted simply because respondents were interviewed on different DOW. For the other examined DOW, no or inconsistent evidence emerged. In contrast to H4, Saturdays resulted in significantly lower happiness ratings, while Monday and Friday effects (H1 and H3) were observed for neither of the two measures. Sensitivity analyses revealed that work status moderated the DOW effects, with more pronounced effects for respondents in paid work as compared to those in education or retirement. However, even after accounting for work status, the effect sizes did not rise to a meaningful level and remained trivial. Taken together, these results do not support meaningful DOW effects on SWB ratings in large-scale social surveys.

Implications

The consequences of these findings for social science research are encouraging. The mere fact that respondents are interviewed on different days of the week does not seem to bias SWB ratings in large-scale surveys. Thus, the prevalent practice of (to some degree) letting participants choose their preferred DOW for an interview seems appropriate and should facilitate operational survey management. From a more theoretical standpoint, the near-null effects obtained in the present study might call into question the putative spill-over effects of daily mood on SWB (cf. Schwarz & Strack, 1999), despite the frequently observed circaseptan rhythm of mood (e.g., Csikszentmihalyi & Hunter, 2003; Stone et al., 2012). Indeed, recent replications of contextual influences on SWB (e.g., weather effects; Lucas & Lawless, 2013; Schmiedeberg & Schröder, 2014) or, more generally, of the effect of mood on SWB ratings (Jayawickreme et al., 2017; Yap et al., 2017) failed to establish pronounced transfer effects. Rather, SWB was largely unaffected by respondents’ momentary mood. These findings corroborate the interpretation of SWB as a relatively stable characteristic that is negligibly affected by transient factors at the time of the SWB judgment.

Limitations

The interpretation of the presented findings might be constrained by some weaknesses. First, the basic proposition of a DOW effect on SWB relies on daily variations in mood that supposedly affect SWB ratings (Schwarz & Strack, 1999). Because the present study did not include mood measures, this spill-over effect could not be examined empirically. Second, the administered SWB measures consisted of single items. Although SWB is typically measured in this way in large-scale surveys, single-item measures have a somewhat limited variance as compared to multi-item scales (Gnambs & Buntins, 2017). Thus, it might be speculated that DOW effects are more pronounced for longer scales. However, given the size of the effects observed in the present study, it remains doubtful whether effect sizes would increase to a non-trivial size. Finally, the meta-analysis relied on a single survey program. It cannot be excluded that study-specific idiosyncrasies, for example, regarding how the interviews were conducted, might have influenced the observed findings to some degree, which somewhat limits the generalizability of the results.

Conclusion

Systematic mood fluctuations across the days of the week suggested that SWB ratings in large-scale social surveys might also be affected by the day the interview takes place. An individual-participant meta-analysis of over 200 samples in the ESS found negligible DOW effects on SWB. Although significant Sunday effects were observed, the size of the obtained effects was trivial. These findings provide little evidence that DOW effects have a meaningful impact on SWB research.

1The study included a total of 30,615 interviewers. Plotting all interviewer effects in a single chart would result in a highly complex presentation that does not allow discerning meaningful patterns. Therefore, a random sample of 100 interviewers is plotted in Figure 2.

References

*References included in the meta-analysis

  • Akay, A., & Martinsson, P. (2009). Sundays are blue: Aren’t they? The day-of-the-week effect on subjective well-being and socio-economic status (No. 4563; IZA Discussion Paper). http://hdl.handle.net/10419/36331

  • Areni, C. S., & Burger, M. (2008). Memories of “bad” days are more biased than memories of “good” days: Past Saturdays vary, but past Mondays are always blue. Journal of Applied Social Psychology, 38(6), 1395–1415. https://doi.org/10.1111/j.1559-1816.2008.00353.x

  • Areni, C. S., Burger, M., & Zlatevska, N. (2011). Factors affecting the extent of Monday blues: Evidence from a meta-analysis. Psychological Reports, 109(3), 723–733. https://doi.org/10.2466/13.20.PR0.109.6.723-733

  • Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1). https://doi.org/10.18637/jss.v067.i01

  • Beullens, K., & Loosveldt, G. (2016). Interviewer effects in the European Social Survey. Survey Research Methods, 10, 103–118. https://doi.org/10.18148/SRM/2016.V10I2.6261

  • Beullens, K., Loosveldt, G., & Vandenplas, C. (2019). Interviewer effects among older respondents in the European Social Survey. International Journal of Public Opinion Research, 31(4), 609–625. https://doi.org/10.1093/ijpor/edy031

  • Bleidorn, W., Hopwood, C. J., & Lucas, R. E. (2018). Life events and personality trait change. Journal of Personality, 86(1), 83–96. https://doi.org/10.1111/jopy.12286

  • Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. (2015). Correlational effect size benchmarks. Journal of Applied Psychology, 100(2), 431–449. https://doi.org/10.1037/a0038047

  • Cavanaugh, M. A., Boswell, W. R., Roehling, M. V., & Boudreau, J. W. (2000). An empirical examination of self-reported work stress among US managers. Journal of Applied Psychology, 85(1), 65–74. https://doi.org/10.1037/0021-9010.85.1.65

  • Cheung, F., & Lucas, R. E. (2014). Assessing the validity of single-item life satisfaction measures: Results from three large samples. Quality of Life Research, 23(10), 2809–2818. https://doi.org/10.1007/s11136-014-0726-4

  • Csikszentmihalyi, M., & Hunter, J. (2003). Happiness in everyday life: The uses of experience sampling. Journal of Happiness Studies, 4(2), 185–199. https://doi.org/10.1023/A:1024409732742

  • Debray, T. P. A., Moons, K. G. M., van Valkenhoef, G., Efthimiou, O., Hummel, N., Groenwold, R. H. H., & Reitsma, J. B. (2015). Get real in individual participant data (IPD) meta-analysis: A review of the methodology. Research Synthesis Methods, 6(4), 293–309. https://doi.org/10.1002/jrsm.1160

  • Denissen, J. J. A., Luhmann, M., Chung, J. M., & Bleidorn, W. (2019). Transactions between life events and personality traits across the adult lifespan. Journal of Personality and Social Psychology. https://doi.org/10.1037/pspp0000196

  • Diener, E., Oishi, S., & Tay, L. (2018). Advances in subjective well-being research. Nature Human Behaviour, 2(4), 253–260. https://doi.org/10.1038/s41562-018-0307-6

  • Diener, E., Sandvik, E., & Pavot, W. G. (1990). Happiness is the frequency, not intensity, of positive versus negative affect. In F. Strack, M. Argyle, & N. Schwarz (Eds.), The social psychology of subjective well-being (pp. 119–139). Pergamon Press.

  • Eid, M., & Diener, E. (2004). Global judgments of subjective well-being: Situational variability and long-term stability. Social Indicators Research, 65(3), 245–277. https://doi.org/10.1023/B:SOCI.0000003801.89195.bc

  • *European Social Survey. (2002). ESS round 1: European Social Survey round 1 data (Data file edition 6.6). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS1-2002

  • *European Social Survey. (2004). ESS round 2: European Social Survey round 2 data (Data file edition 3.6). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS2-2004

  • *European Social Survey. (2006). ESS round 3: European Social Survey round 3 data (Data file edition 3.7). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS3-2006

  • *European Social Survey. (2008). ESS round 4: European Social Survey round 4 data (Data file edition 4.5). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS4-2008

  • *European Social Survey. (2010). ESS round 5: European Social Survey round 5 data (Data file edition 3.4). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS5-2010

  • *European Social Survey. (2012). ESS round 6: European Social Survey round 6 data (Data file edition 2.4). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS6-2012

  • *European Social Survey. (2014). ESS round 7: European Social Survey round 7 data (Data file edition 2.2). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS7-2014

  • *European Social Survey. (2016). ESS round 8: European Social Survey round 8 data (Data file edition 2.1). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS8-2016

  • *European Social Survey. (2018). ESS round 9: European Social Survey round 9 data (Data file edition 2.0). NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC. https://doi.org/10.21338/NSD-ESS9-2018

  • Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.

  • Gelman, A., & Su, Y.-S. (2020). arm: Data analysis using regression and multilevel/hierarchical models (Version 1.11-2) [Computer software]. https://CRAN.R-project.org/package=arm

  • Gnambs, T., & Buntins, K. (2017). The measurement of variability and change in life satisfaction: A comparison of single-item and multi-item instruments. European Journal of Psychological Assessment, 33(4), 224–238. https://doi.org/10.1027/1015-5759/a000414

  • Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47(1), 153–161. https://doi.org/10.2307/1912352

  • Helliwell, J. F., & Wang, S. (2014). Weekends and subjective well-being. Social Indicators Research, 116(2), 389–407. https://doi.org/10.1007/s11205-013-0306-y

  • Helliwell, J. F., & Wang, S. (2015). How was the weekend? How the social context underlies weekend effects in happiness and other emotions for US workers. PLoS One, 10(12), e0145123. https://doi.org/10.1371/journal.pone.0145123

  • Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21(11), 1539–1558. https://doi.org/10.1002/sim.1186

  • Jayawickreme, E., Tsukayama, E., & Kashdan, T. B. (2017). Examining the effect of affect on life satisfaction judgments: A within-person perspective. Journal of Research in Personality, 68, 32–37. https://doi.org/10.1016/j.jrp.2017.04.005

  • Kämpfer, S., & Mutz, M. (2013). On the sunny side of life: Sunshine effects on life satisfaction. Social Indicators Research, 110(2), 579–595. https://doi.org/10.1007/s11205-011-9945-z

  • Kuppens, P., Realo, A., & Diener, E. (2008). The role of positive and negative emotions in life satisfaction judgment across nations. Journal of Personality and Social Psychology, 95(1), 66–75. https://doi.org/10.1037/0022-3514.95.1.66

  • Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13). https://doi.org/10.18637/jss.v082.i13

  • Liljequist, D., Elfving, B., & Skavberg Roaldsen, K. (2019). Intraclass correlation – A discussion and demonstration of basic features. PLoS One, 14(7), e0219854. https://doi.org/10.1371/journal.pone.0219854

  • López Ulloa, B. F., Møller, V., & Sousa-Poza, A. (2013). How does subjective well-being evolve with age? A literature review. Journal of Population Ageing, 6(3), 227–246. https://doi.org/10.1007/s12062-013-9085-0

  • Lucas, R. E., & Donnellan, M. B. (2007). How stable is happiness? Using the STARTS model to estimate the stability of life satisfaction. Journal of Research in Personality, 41(5), 1091–1098. https://doi.org/10.1016/j.jrp.2006.11.005

  • Lucas, R. E., & Donnellan, M. B. (2012). Estimating the reliability of single-item life satisfaction measures: Results from four national panel studies. Social Indicators Research, 105(3), 323–331. https://doi.org/10.1007/s11205-011-9783-z

  • Lucas, R. E., & Lawless, N. M. (2013). Does life seem better on a sunny day? Examining the association between daily weather conditions and life satisfaction judgments. Journal of Personality and Social Psychology, 104(5), 872–884. https://doi.org/10.1037/a0032124

  • Mills, J. P. (1926). Table of the ratio: Area to bounding ordinate, for any portion of normal curve. Biometrika, 18(3–4), 395–400. https://doi.org/10.1093/biomet/18.3-4.395

  • Paterson, T. A., Harms, P. D., Steel, P., & Credé, M. (2016). An assessment of the magnitude of effect sizes: Evidence from 30 years of meta-analysis in management. Journal of Leadership & Organizational Studies, 23(1), 66–81. https://doi.org/10.1177/1548051815614321

  • R Core Team. (2020). R: A language and environment for statistical computing (Version 4.0.3) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org

  • Schmiedeberg, C., & Schröder, J. (2014). Does weather really influence the measurement of life satisfaction? Social Indicators Research, 117(2), 387–399. https://doi.org/10.1007/s11205-013-0350-7

  • Schnaudt, C., Weinhardt, M., Fitzgerald, R., & Liebig, S. (2014). The European Social Survey: Contents, design, and research potential. Schmollers Jahrbuch, 134(4), 487–506. https://doi.org/10.3790/schm.134.4.487

  • Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1–65). Elsevier. https://doi.org/10.1016/S0065-2601(08)60281-6

  • Schwarz, N., & Strack, F. (1999). Reports of subjective well-being: Judgmental processes and their methodological implications. In D. Kahneman, E. Diener, & N. Schwarz (Eds.), Well-being: The foundations of hedonic psychology (pp. 61–84). Russell Sage Foundation.

  • Schwarz, N., Strack, F., Kommer, D., & Wagner, D. (1987). Soccer, rooms, and the quality of your life: Mood effects on judgments of satisfaction with life in general and with specific domains. European Journal of Social Psychology, 17(1), 69–79. https://doi.org/10.1002/ejsp.2420170107

  • Selenko, E., Batinic, B., & Paul, K. (2011). Does latent deprivation lead to psychological distress? Investigating Jahoda’s model in a four-wave study. Journal of Occupational and Organizational Psychology, 84(4), 723–740. https://doi.org/10.1348/096317910X519360

  • Skevington, S. M., & Böhnke, J. R. (2018). How is subjective well-being related to quality of life? Do we need two concepts and both measures? Social Science & Medicine, 206, 22–30. https://doi.org/10.1016/j.socscimed.2018.04.005

  • Stewart, L. A., Clarke, M., Rovers, M., Riley, R. D., Simmonds, M., Stewart, G., & Tierney, J. F. (2015). Preferred Reporting Items for a Systematic Review and Meta-analysis of individual participant data: The PRISMA-IPD statement. Journal of the American Medical Association, 313(16), 1657–1665. https://doi.org/10.1001/jama.2015.3656

  • Stieger, S., & Reips, U.-D. (2019). Well-being, smartphone sensors, and data from open-access databases: A mobile experience sampling study. Field Methods, 31(3), 277–291. https://doi.org/10.1177/1525822X18824281

  • Stone, A. A., Hedges, S. M., Neale, J. M., & Satin, M. S. (1985). Prospective and cross-sectional mood reports offer no evidence of a “blue Monday” phenomenon. Journal of Personality and Social Psychology, 49(1), 129–134. https://doi.org/10.1037/0022-3514.49.1.129

  • Stone, A. A., Schneider, S., & Harter, J. K. (2012). Day-of-week mood patterns in the United States: On the existence of “Blue Monday”, “Thank God it’s Friday” and weekend effects. The Journal of Positive Psychology, 7(4), 306–314. https://doi.org/10.1080/17439760.2012.691980

  • Tončić, M., & Anić, P. (2020). Effects of momentary affect on satisfaction judgments: A between- and within-person longitudinal study. Journal of Individual Differences, 41(2), 61–67. https://doi.org/10.1027/1614-0001/a000304

  • Tsai, M.-C. (2019). The good, the bad, and the ordinary: The day-of-the-week effect on mood across the globe. Journal of Happiness Studies, 20(7), 2101–2124. https://doi.org/10.1007/s10902-018-0035-7

  • Tumen, S., & Zeydanli, T. (2014). Day-of-the-week effects in subjective well-being: Does selectivity matter? Social Indicators Research, 119(1), 139–162. https://doi.org/10.1007/s11205-013-0477-6

  • Wendorf, C. A. (2004). Primer on multiple regression coding: Common forms and the additional case of repeated contrasts. Understanding Statistics, 3(1), 47–57. https://doi.org/10.1207/s15328031us0301_3

  • Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T., Miller, E., Bache, S., Müller, K., Ooms, J., Robinson, D., Seidel, D., Spinu, V., … Yutani, H. (2019). Welcome to the Tidyverse. Journal of Open Source Software, 4(43), Article 1686. https://doi.org/10.21105/joss.01686

  • Yap, S. C. Y., Wortman, J., Anusic, I., Baker, S. G., Scherer, L. D., Donnellan, M. B., & Lucas, R. E. (2017). The effect of mood on judgments of subjective well-being: Nine tests of the judgment model. Journal of Personality and Social Psychology, 113(6), 939–961. https://doi.org/10.1037/pspp0000115

Timo Gnambs, Leibniz Institute for Educational Trajectories, Wilhelmsplatz 3, 96047 Bamberg, Germany