Open Access

The Effects of Clinical Meditation Programs on Stress and Well-Being

An Updated Rapid Review and Meta-Analysis of Randomized Controlled Trials (RCTs) With Active Comparison Groups

Published Online: https://doi.org/10.1027/2151-2604/a000510

Abstract

Many people suffer from chronic conditions, such as cancer, diabetes, or depression. The use and development of meditation interventions to offer complementary psychological treatment for such patients is increasing, as is criticism of research on this topic. Therefore, the aim of the present rapid review and meta-analysis is to investigate the effects of meditation interventions on perceived stress and well-being in randomized controlled trials of clinical populations. A search was conducted in MEDLINE, Web of Science, PsycInfo, CINAHL, PsycArticles, and PSYNDEX covering the period from July 2013 to April 13, 2021. Three-level random-effects models were estimated. Based on 316 effect sizes, small effects of meditation interventions were found (stress: g = 0.18; well-being: g = 0.25), largely paralleling the findings of a previous meta-analysis. An important limitation is the potentially high risk of bias of individual studies. Overall, meditation interventions appear to be beneficial for the complementary treatment of chronic clinical conditions.

Although there is a worldwide tendency for people to live longer, the rates of people who are burdened with noncommunicable illnesses, such as cancer, heart disease, diabetes, or mental disorders, at some point in their lives have remained constant in recent years (World Health Organization, 2021). A factor that has been hypothesized to negatively impact the development and course of many chronic conditions as well as a patient’s well-being and quality of life is chronic psychological stress (Cohen et al., 2012). Therefore, it can be important to complement medical treatment for patients suffering from chronic health conditions with psychological interventions targeting the negative effects of chronic psychological stress (Cohen et al., 2007).

Historically, meditation has been seen as a way to reduce chronic stress and increase well-being (Dahl et al., 2015), an aspect that was taken up and applied to the complementary treatment of clinical conditions in Western medicine by Kabat-Zinn (1982). There is a wide variety of meditation traditions offering many forms of meditation; in Western cultures, one of the most popular and well-known is mindfulness meditation (Lieberman, 2018). The program introduced by Kabat-Zinn, mindfulness-based stress reduction (MBSR), incorporates mindfulness meditation and has been shown to be effective in reducing symptoms in patients with chronic pain (Kabat-Zinn, 1982) and in promoting healing during phototherapy for psoriasis (Kabat-Zinn et al., 1998). In particular, MBSR teaches selected meditation techniques and how to apply them to the stress, pain, and symptoms experienced by patients. The goal of the program is not to change the biological causes of the conditions but how patients cope with them (Kabat-Zinn, 2003). Since then, a number of new programs incorporating mindfulness meditation practice (Bowen et al., 2021) or the idea of mindfulness (Hayes et al., 2009) have been brought forward, and some have been shown to be effective for particular conditions (Segal et al., 2010).

However, studies on mindfulness meditation interventions, in particular, have been heavily criticized for their lack of methodological quality (Farias et al., 2016; Goldberg et al., 2017; Rosenkranz et al., 2019). An important point is that randomized controlled trials (RCTs) employing appropriate active comparison groups, which control for the time and attention allocated to the intervention group, make up only a small proportion of studies on meditation interventions (Davidson & Kaszniak, 2015). It is important to perform meta-analyses of these high-quality RCTs because they represent the new gold standard for empirically supported treatments (Tolin et al., 2015). To address the criticism and determine whether it is justified, Goyal and colleagues (2014) used a meta-analytic approach to investigate the effectiveness of clinical meditation interventions in comparison to active conditions. Patients randomized to these active comparison conditions received another form of intervention, such as education or psychotherapy, to control for the time and attention employed. They found that mindfulness meditation programs significantly reduced anxiety (d = 0.38), depression (d = 0.30), and pain (d = 0.33) in patients with a medical or psychiatric condition in a sample of k = 47 RCTs. Generally, no differences in positive affect, quality of life (QoL), and stress-related behaviors, such as sleeping, eating, and substance use, were found between mindfulness- or mantra-based meditation groups and active comparison groups. Furthermore, no significant differences were found in comparison to established forms of treatment, such as cognitive behavioral therapy (Beck & Beck, 2011). Nevertheless, the authors concluded that the body of evidence was inconsistent, had questionable statistical power levels, and was at risk of bias, thereby calling for further and qualitatively better studies in the area.

Accordingly, one aim of the present rapid review and meta-analysis is to explore whether study quality has improved since 2013, the cutoff date used by Goyal and colleagues (2014) for including studies. In fact, since the publication of the systematic review by Goyal and colleagues (2014), a wealth of new systematic reviews on clinical meditation programs has been published. Mostly, these systematic reviews address narrower review questions, focusing on the effects of specific programs, such as MBSR (Lauche et al., 2013), or investigating specific conditions, such as depression (Strauss et al., 2014).

This rapid review aims to investigate a broader research question and replicate the findings of Goyal and colleagues (2014) to shed some light on the question of what effects clinical meditation programs have on psychological stress and well-being in people suffering from chronic noncommunicable diseases. In line with methodological recommendations for primary studies (e.g., Van Dam et al., 2018), the present rapid review and meta-analysis is focused on RCTs with active comparison groups of different types (placebo, active, and psychotherapy). The following hypothesis will be tested: Meditation interventions will result in lower levels of stress and higher levels of subjective well-being compared to any other intervention except already well-established interventions, such as psychotherapy.

In line with Goyal and colleagues (2014), the relationships might be moderated by (a) type of meditation practice, with higher effects expected for mindfulness meditation compared to mantra meditation; (b) comparison group type, with more pronounced effects expected in comparison with placebo and specific active comparison groups and no difference expected compared to established forms of psychotherapy; and (c) assessment time, with greater differences expected between meditation and comparison groups directly after the treatment than at later follow-up.

Method

Supplementary materials are available online in PsychArchives (see Seekircher, 2022a, 2022b, 2022c).

Inclusion and Exclusion Criteria

An overview of the inclusion criteria, ordered according to the participant, intervention, comparison, outcome, and study design framework (Centre for Reviews and Dissemination, 2008), can be found in Table 1. Included in the review were studies on adults with a medical or psychiatric diagnosis. Interventions were structured meditation programs with at least 4 h of instructed training and reference to home practice. Studies were excluded if the intervention they employed did not have meditation at the core of the program, for example, acceptance and commitment therapy, or if the meditation practice was movement-based, such as yoga or Tai chi.

Table 1 Study inclusion and exclusion criteria

Moreover, studies had to include an active comparison group designed to control exactly for the time and attention devoted to the meditation group. Additional requirements included acceptable reliability estimates and indications of construct validity for measures of psychological stress or subjective well-being (Johnston et al., 2022). For reliability, Cronbach’s α was required to be larger than 0.7 for any clinical population (Taber, 2018) to ensure that possible intervention effects are not obscured by measurement error. Evidence for construct validity was taken from an already demonstrated correlation of the measure with at least one other established measure of the same construct in the literature (r > 0.29; Schober et al., 2018; Waltz et al., 2004). Finally, only studies that reported the use of an RCT design and randomized at the level of the individual were included. Reports that did not contain sufficient information for analysis or to judge eligibility were excluded.

Information Sources

The specific search strategy records are included in Appendix A. Terms used in the search were meditation, meditat*, mindful*, transcendental meditation, mindfulness-based stress reduction (MBSR), mindfulness-based cognitive therapy (MBCT), vipassana, zen, and yogic. Terms were connected with the Boolean operator OR. The search was restricted to articles published from July 2013 until April 2021 and limited to articles written in English or German via filters. Furthermore, to restrict the search to RCTs, we applied a database filter or added search terms such as randomized controlled trial (RCT), random*, and random to the search with AND.

Studies were identified via a search of electronic databases in April 2021. To identify studies from the life sciences, the databases MEDLINE and CINAHL were searched. Moreover, searches in PsycInfo, PsycArticles, and PSYNDEX were conducted to locate psychological studies on the topic. Due to resource limitations, the search was not extended to all databases used by Goyal and colleagues (2014). Additionally, a search of Web of Science was undertaken to include studies from other disciplines as well as conference abstracts. Furthermore, study authors were contacted to obtain reports that were not accessible, or information that was not reported but was necessary to confirm inclusion, so that the corresponding studies could be included in the analysis. During the review process, 27 authors were contacted. Eight authors responded, and six provided full texts or data. As publication bias is an additional issue for interpreting meta-analytic results, we also searched for gray literature, that is, reports on research not published in traditional academic journals. To do this, a search of the platform opengrey.eu, which lists gray literature, was conducted.

Study Selection

All citations of studies identified in the search, as well as their abstracts if available, were exported to the reference management tool EndNote for the screening process. Study titles and abstracts were first screened for eligibility, and, if possible, full texts of eligible studies were obtained. In the next step, full texts were screened, and it was finally decided whether the study would be included in the analysis. Information relevant for analysis and some descriptive information were collected in a table for all included studies. Screening and later coding of reports were carried out by the first author (the screening manual and form are provided in Tables S1–S3 in the supplementary materials).

Data Collection

Data were collected on the level of (1) report (authors, year, publication type), (2) study (country, setting, time frame, number of participants, diagnosis, demographic characteristics of participants), (3) treatment and comparison (treatment and comparison type, short description, duration of the program, homework, harm), (4) outcome (name of measure or scale, construct measured, minimum and maximum values, meaning of high values), and (5) effect size data (post/follow-up, group sizes, means and SDs of the groups, any adjustments to the mean). Accordingly, a data collection form and a manual were developed and pilot-tested on some reports (the coding manual and data collection forms are provided in Tables S4 and S5 in the supplementary materials). A second coder with a PhD and experience in coding extracted information from 40 studies selected at random from the 77 included studies. The inter-rater reliability was computed using the α coefficient of reliability proposed by Krippendorff (1980). A coefficient of 1 indicates perfect reliability. If the agreement between coders is only as high as expected by chance, α is 0. A common requirement for acceptable inter-rater reliability is an α of at least 0.8 (Krippendorff, 2004).
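The agreement statistic can be illustrated with a short sketch. The review's coding was done in practice on real extraction forms; the following Python function (our own illustration, not part of the supplementary materials) covers only the simplest case of Krippendorff's α: two coders, nominal data, no missing values.

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, nominal data, no missing values."""
    assert len(coder1) == len(coder2)
    # Observed disagreement: each unit on which the coders disagree contributes
    # two ordered pairs to the off-diagonal of the coincidence matrix.
    d_observed = sum(2 for a, b in zip(coder1, coder2) if a != b)
    # Expected disagreement from the pooled category frequencies.
    freq = Counter(coder1) + Counter(coder2)
    n = sum(freq.values())  # total number of coded values (two per unit)
    d_expected = sum(freq[c] * (n - freq[c]) for c in freq) / (n - 1)
    if d_expected == 0:     # all coders used a single category throughout
        return 1.0
    return 1.0 - d_observed / d_expected
```

With perfect agreement the function returns 1, and agreement no better than chance yields values near 0, matching the interpretation given above.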

For most of the variables, an α higher than 0.9 was reached (details are shown in Appendix L). Disagreements were resolved, and for the few variables with α below 0.8, the first coder proved correct in most cases, supporting the high quality of the coding of all studies.

Risk of Bias in Individual Studies

The risk of bias was assessed via the Cochrane risk of bias 2 (RoB2) tool (version of 22 August 2019) for RCTs (Higgins et al., 2021; Sterne et al., 2019) as a proxy to determine the internal validity of individual studies. The tool encompasses five bias domains (randomization process, deviations from the intervention, missing data, measurement of the outcome, and reporting), each including several signaling questions for which judgments are recorded. A judgment of low risk of bias, some concerns, or high risk of bias can be determined for each domain as well as overall.

With the help of the tool, risk of bias was judged for each study or outcome in each domain by the first author (the results of these judgments are provided in ESM 4 in the supplementary material). As there are two versions of the tool, the version for the intention-to-treat effect was chosen because the effect of random assignment to intervention, rather than adherence to intervention, was of interest in this review.

Summary Measure

The data file and R code used for the analysis are included in the supplementary materials. Individual study effect sizes were calculated as standardized mean differences with a small-sample correction factor, that is, Hedges’ g, via the escalc function in the R (R Core Team, 2021) package metafor (Viechtbauer, 2010). This effect size measure was used because the outcome measures included in the review are continuous, and we expected that the outcomes would probably not be measured in the same way in all eligible studies. Positive average effect estimates indicate a benefit of the meditation group over the comparison group.
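The computation carried out by escalc can be sketched as follows. This Python version is purely illustrative (the review's actual analyses used R/metafor, which applies an exact correction factor; the common approximation is used here) and returns Hedges' g together with its usual large-sample sampling variance.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction.

    Returns (g, v): the corrected effect size and its large-sample variance.
    A positive g indicates that the first (e.g., meditation) group scored higher.
    """
    df = n1 + n2 - 2
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * df - 1)  # approximate small-sample correction factor
    g = j * d
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, v
```

For example, two groups of 50 with a raw standardized difference of 1 yield g ≈ 0.992, slightly shrunken toward zero by the correction.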

Methods of Synthesis

Effects across studies were combined in two meta-analytic models. Model 1 summarizes outcomes pertaining to measures of psychological stress, whereas Model 2 summarizes outcomes that measured subjective well-being. To combine the data, two random effects models were used to estimate the average relative effect of the meditation interventions. Random effects models were chosen because the true average effect of the meditation programs is expected to vary between studies depending on the implementation, the setting, the different populations addressed by the program, and possibly other, unknown factors (Borenstein et al., 2010).

If data to calculate the effect size of an outcome in an individual study were missing and it was impossible to estimate them from the reported information, the authors were contacted to obtain the missing information as recommended by Pigott and Polanin (2020). In case the authors did not provide the necessary data, the outcome or, in some cases, the whole study was excluded from the analysis. Complete case analysis was used on the basis of the assumption that the observed data can be considered a random sample of the studies originally identified and that the missing data are missing completely at random (Little & Rubin, 2019).

Multiplicity of effect sizes was an issue in our data set either because multiple measurement instruments measured the same construct or because an outcome domain was measured at multiple follow-up times. Dependency of effect estimates has to be accounted for to prevent underestimation of the standard error and, ultimately, an inflated Type I error rate (Park & Beretvas, 2019). Therefore, three-level random effects models were chosen (Van den Noortgate et al., 2015). Variance was assumed at three levels (Assink & Wibbelink, 2016): sampling variance (L1), variance between effect sizes within a study (L2), and variance between different studies (L3). To test the significance of heterogeneity within (L2) and between studies (L3), the full model was compared to reduced models without the respective variance component with a likelihood ratio test (LRT; Assink & Wibbelink, 2016).
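The way total variance is apportioned over the three levels (as reported later in the Results) can be sketched as follows. The level-1 component uses the "typical" within-study sampling variance of Higgins and Thompson, as in Assink and Wibbelink (2016); the function is an illustration rather than the review's actual R code.

```python
def variance_proportions(sampling_variances, tau2_within, tau2_between):
    """Proportion of total variance at each level of a three-level model.

    Returns (sampling L1, within-study L2, between-study L3) proportions.
    """
    k = len(sampling_variances)
    w = [1 / v for v in sampling_variances]
    # Higgins & Thompson's 'typical' within-study sampling variance.
    typical_v = (k - 1) * sum(w) / (sum(w)**2 - sum(wi**2 for wi in w))
    total = typical_v + tau2_within + tau2_between
    return (typical_v / total, tau2_within / total, tau2_between / total)
```

When all studies have the same sampling variance, the typical value equals that common variance, and the three proportions simply follow from the sizes of the variance components.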

Effect sizes were weighted with an inverse marginal variance–covariance matrix, and generalized least squares regression was used to estimate the summary effect (Viechtbauer, 2010). For each individual effect size, 95% confidence intervals were calculated to estimate the accuracy of the individual effects within studies. To address the distribution of the true effect sizes, prediction intervals for the summary estimates were calculated. Prediction intervals represent the expected range in which the effect size of a new study would fall (Borenstein et al., 2009, pp. 127–133). Thus, heterogeneity between the studies is also considered, and variation over different settings is reflected. It is recommended to report prediction intervals in meta-analyses (IntHout et al., 2016).
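As an illustration, a prediction interval can be computed from the summary effect, its standard error, and τ² under a normal approximation (some implementations use a t-quantile instead; in practice the standard error comes from the fitted model, and the value used below is back-computed from a reported confidence interval, which is an approximation on our part).

```python
import math

def prediction_interval(mu, se_mu, tau2, z=1.96):
    """95% prediction interval for the effect in a new study.

    Normal approximation: mu +/- z * sqrt(tau2 + se_mu^2)
    (Borenstein et al., 2009).
    """
    half_width = z * math.sqrt(tau2 + se_mu**2)
    return mu - half_width, mu + half_width
```

Plugging in a summary effect of 0.18 with τ² = 0.06 and a standard error of roughly 0.033 gives an interval of about [−0.30, 0.66], illustrating how between-study heterogeneity widens the interval far beyond the confidence interval of the mean.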

Model Fit

The models were scanned for outliers by means of standardized residuals, whereby effect sizes with absolute standardized deleted residuals larger than 1.96 were considered outliers (Viechtbauer & Cheung, 2010). Studies containing effect sizes that were outliers according to this definition were identified, and their coding was inspected to detect possible errors. Additionally, influential cases, that is, effect sizes with a proportionally large influence on the summary measure, were examined by means of Cook’s distance (Cook, 1977). Furthermore, values that appeared, upon visual inspection, to deviate from the rest also warranted further inspection. To this end, hat values (leverages) were estimated for each model, with values larger than 3 × (1/k) considered influential, where k indicates the number of effect sizes.
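In a simplified random-effects setting (a single level rather than the full three-level model, and plain rather than leave-one-out deleted residuals), the two flagging rules can be sketched as follows; the function name and thresholds mirror the description above but are our own illustration.

```python
import math

def flag_unusual(effects, sampling_variances, tau2):
    """Flag effect sizes as potential outliers (|standardized residual| > 1.96)
    or high-leverage points (hat value > 3/k) under a simple random-effects model.
    """
    k = len(effects)
    w = [1 / (v + tau2) for v in sampling_variances]
    mu = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    hats = [wi / sum(w) for wi in w]  # leverage of each effect size
    flags = []
    for y, v, h in zip(effects, sampling_variances, hats):
        z = (y - mu) / math.sqrt(v + tau2)  # standardized (non-deleted) residual
        flags.append(abs(z) > 1.96 or h > 3 / k)
    return flags
```

A single extreme effect among otherwise homogeneous estimates is flagged by the residual rule, while an unusually precise study would be flagged by the leverage rule.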

Metaregression

Confirmatory moderator analyses were planned to test the hypotheses that (a) effects are higher for mindfulness meditation compared to mantra meditation, (b) effects are more pronounced in comparison with placebo and specific active comparison groups, with no difference to established forms of psychotherapy, and (c) effects of the meditation intervention are higher when assessed directly at the end of the treatment than at later assessments. Mixed-effects three-level models were fitted for data on psychological stress and well-being. LRTs were used to compare model fit and identify significant moderators.

Sensitivity Analysis

To investigate the robustness of the results, sensitivity analyses were conducted with respect to effect estimates suspected to be outliers or influential because they clearly fell outside the pattern of the funnel plots. The analyses were conducted for Models 1 and 2. Furthermore, a sensitivity analysis was conducted for Model 2 comparing the three-level random effects model with a correlated effects model with small-sample corrections and robust variance estimation (RVE) of standard errors. Reasons for and a description of the RVE model can be found in Appendix B.

Publication Bias and Selective Reporting

Publication and selective reporting biases were addressed by means of contour-enhanced funnel plots (Peters et al., 2008). These were inspected visually and interpreted with respect to asymmetry, and possible reasons for asymmetry are discussed. Additionally, a rank correlation test was conducted, which is independent of the model used and currently the only way to formally test for publication bias in rma.mv models (Viechtbauer, 2010). For a comprehensive analysis of potential publication bias, we conducted trim-and-fill (Duval & Tweedie, 2000) as well as PET/PEESE (i.e., precision-effect test/precision-effect estimate with standard errors) analyses (Carter et al., 2019). These did not point to considerable bias in the results of the meta-analysis due to nonpublished results. Further details of these analyses can be found in the R script (supplementary material 6) and the Appendix (supplementary material 1).
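The PET part of the PET/PEESE analysis amounts to a weighted regression of the effect sizes on their standard errors, with the intercept estimating the effect adjusted for small-study effects; PEESE replaces the standard error with the sampling variance as predictor. A minimal Python sketch (function name our own; the review's actual analyses used R):

```python
def pet_intercept(effects, sampling_variances):
    """Precision-effect test (PET): inverse-variance weighted regression of
    effect sizes on their standard errors. The intercept estimates the effect
    at SE = 0, i.e., adjusted for small-study effects; the slope captures
    funnel-plot asymmetry.
    """
    se = [v**0.5 for v in sampling_variances]
    w = [1 / v for v in sampling_variances]
    # Closed-form weighted least squares for a two-parameter regression.
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, se))
    swy = sum(wi * y for wi, y in zip(w, effects))
    swxx = sum(wi * x * x for wi, x in zip(w, se))
    swxy = sum(wi * x * y for wi, x, y in zip(w, se, effects))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx**2)
    intercept = (swy - slope * swx) / sw
    return intercept, slope
```

If the observed effects do not depend on precision, the slope is near zero and the intercept recovers the unadjusted summary effect.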

We used R (R Core Team, 2021), Version 4.1.0 (2021-05-18), for all statistical analyses. The packages metafor (Viechtbauer & Viechtbauer, 2015) and orchaRd (Nakagawa et al., 2021), as well as multiple helper functions explicitly cited in Appendix N, were used.

Results

Study Selection

Of N = 6,721 individual citations whose abstracts were screened, full texts of N = 496 reports were obtained and assessed for eligibility. At this stage, records were excluded because of how meditation interventions were implemented, for example, length of instructed training, movement-based programs, or remote delivery of the program. Many records were excluded because the corresponding studies did not have a comparison group at all, the comparison group was not active, or it did not match the meditation group in time and attention. Finally, records were excluded at full text screening because they did not report outcomes relevant for the research question or outcomes were measured in a way that was not in line with the methodological requirements outlined in Table 1. After the screening process, N = 98 reports from k = 77 individually conducted studies were included in the analysis. See Figure 1 for a more detailed account of the number of reports included and excluded in each step of the process. A table with a sample of excluded studies and reasons for their exclusion can be found in Appendix C.

Figure 1 PRISMA 2009 flow diagram.

Study Characteristics and Results of Individual Studies

Characteristics of the individual studies can be found in Appendix D. Most trials employed meditation interventions with eight sessions of 2 h each in a group setting. Follow-up intervals ranged from 2 months up to 2 years. About half of the studies investigated medical patient populations, for example, patients with cancer, diabetes, or chronic illness. The other half investigated psychiatric patient populations, for example, patients with depression, anxiety disorders, or substance use disorder. Most trials were carried out in Europe, the United States, and the United Kingdom. Many trials were either pre-registered or retrospectively registered at ClinicalTrials.gov. Results of the individual studies, in the form of effect estimates and corresponding confidence intervals, are reported in Appendix D and visualized by means of an orchard and a caterpillar plot (Nakagawa et al., 2020).

Synthesis of Results

Psychological Stress

For outcomes on psychological stress, a three-level random effects model with 256 effect estimates from 72 studies was fitted to the data (Model 1). For this model, the absolute heterogeneity τ² = 0.06, the estimated variance of the underlying true outcomes (Harrer et al., 2021). Participants who were randomized to a meditation intervention group reported significantly lower levels of psychological stress after the intervention compared to participants who were randomized to an active comparison group (g = 0.18, 95% CI [0.12, 0.25]). Significant variability between effect sizes within studies (L2; LRT = 4.9, p = .03) as well as between studies (L3; LRT = 92.97, p < .0001) was found. As sampling variance accounts for only an estimated 47% of the total variance, heterogeneity can be regarded as substantial (Assink & Wibbelink, 2016). The amount of within-study variance (L2) was rather small (8%) compared to the amount of between-study variance (L3; 46%). Effects of future studies are expected to fall within the prediction interval [−0.3, 0.67] in 95% of cases. A visual summary of the results can be found in the orchard plot in Figure 2. An orchard plot is a scatterplot including information on individual effect sizes and their precision, the confidence intervals, and prediction intervals (Nakagawa et al., 2021).

Figure 2 Orchard plot of all stress outcomes.
Subjective Well-Being

A three-level random effects model with 60 effect estimates from 27 studies was fitted for subjective well-being outcomes (Model 2). For this model, the absolute heterogeneity τ² = 0.12. Participants randomized to meditation intervention groups reported significantly higher levels of subjective well-being compared to participants randomized to active comparison groups (g = 0.25, 95% CI [0.1, 0.4]) after the intervention. The full model containing all three levels was significantly better than the reduced model without between-study variance (L3; LRT = 12.2, p < .0005). The estimate of 70% for the proportion of between-study variance in relation to the overall heterogeneity also indicates that variability between studies is substantial. Effects of future studies are expected to fall within the prediction interval [−0.46, 0.95] in 95% of cases. However, the fit of the full model is not better than that of the reduced model with within-study variance fixed to 0 (LRT = 0, p = 1), suggesting that a three-level model does not fit the data well. To investigate how robust these results are to the model used in the analysis, an RVE model was estimated. Results of the RVE model are similar to those of the three-level model (g = 0.25, 95% CI [0.08, 0.42]). Confidence intervals for both estimates are rather wide, but both clearly indicate that participants receiving meditation interventions did not report less subjective well-being than participants in the comparison conditions. Results are visualized in a caterpillar plot in Figure 3. A caterpillar plot is similar to the typical forest plot (Hurley, 2020) but more adequate for meta-analyses with many effect sizes (Nakagawa et al., 2021).

Figure 3 Caterpillar plot of all well-being outcomes.

Moderator Analysis

Psychological Stress

The three-level mixed-effects model for the moderator analysis of psychological stress outcomes contains comparison group type as a categorical moderator and assessment time point as a continuous moderator. The estimated average effects for the three levels of the factor comparison group are shown in Table 2. For this model, the absolute heterogeneity τ² = 0.05. Participants in the meditation group reported significantly lower levels of psychological stress compared to participants in a placebo group (g = 0.26, 95% CI [0.17, 0.35]). This average effect was not significantly lower or higher in comparison with specific active comparison groups. However, the average effect of the meditation intervention was significantly lower in comparison with psychotherapeutic groups (g = 0.08, 95% CI [−0.03, 0.19]). F-tests confirmed that comparison group type is a significant factor as a whole (F(2, 250) = 3.64, p = .03). The difference between the effect relative to a specific active comparison group and the effect relative to psychotherapy is not significant (F(1, 250) = 2.95, p = .09).

Table 2 Estimated average effects for three-level mixed-effects model with comparison group type and assessment time point

Notably, the confidence interval for the comparison with psychotherapy includes negative values, but the lower bound is near zero and the interval is rather narrow, suggesting that participants who received meditation interventions did not report higher levels of psychological stress than participants receiving psychotherapy. In sum, there is a moderating effect of comparison group type (a) on the reduction in psychological stress of participants in the meditation groups compared to participants in the comparison condition, while no significant moderating effect was found for assessment time (c). Results are visualized in Figure 4.

Figure 4 Effects of comparison group type and assessment time point on stress.

There is still a significant amount of unexplained variance remaining between all effect sizes in the data set after comparison group type and assessment time point have been added to the model (QE(250) = 476.04, p < .001). Additionally, there is still significant variability between effect sizes within studies (L2; LRT = 5.16, p = .02) as well as between studies (L3; LRT = 74.60, p < .0001). In comparison to a model without moderators, between-study variance was reduced (40.5%), which indicates that comparison group type explains some variance on this level (L3). Another model additionally containing the categorical moderators diagnosis (mainly psychiatric vs. somatic) and stress operationalization (negative affect vs. general stress) was fitted. The estimates for both moderators were not significant in the model. Additionally, LRTs of the reduced models against the full model with all moderators were also not significant (diagnosis: LRT = 0.18, p = .67; outcome: LRT = 0.05, p = .82). This indicates that these moderators do not explain much of the heterogeneity.

Subjective Well-Being

The moderator analysis of the data on well-being outcomes was performed analogously to the analysis for stress outcomes. For this model, the absolute heterogeneity τ² = 0.13. Participants in the meditation group reported significantly higher levels of subjective well-being compared to participants in a placebo group (g = 0.27, 95% CI [0.02, 0.51]). This effect was not significantly different for the comparisons with active comparison and psychotherapy groups. An F-test did not confirm that comparison group type as a whole is significant (F(2, 56) = 0.06, p = .94).

In sum, there is a moderating effect of comparison group type (a) on an increase in subjective well-being of participants in the meditation groups compared to participants in the placebo condition, but not in comparison to active comparison and psychotherapeutic groups. Assessment time point was not found to be significantly moderating the outcome. Additionally, there was no significant moderating effect found for diagnosis or outcome operationalization, which were added together with comparison group type in another model fitted. The estimated average effects for the three levels of the factor comparison group and forest plots to visualize the data can be found in Appendix K.

Sensitivity Analyses

Psychological Stress

The weighted average effect estimate of the model was robust to the exclusion of six effect size estimates that were visually far from the rest in the funnel plot as well as potential outliers. The estimate and its 95% confidence interval did not change notably. Nevertheless, the results of the heterogeneity analysis were not robust and indicate that, for the data on psychological stress as well, the three-level model is possibly not the most suitable option for analysis. In the reduced model without potentially influential effect estimates, no significant variance between effect sizes within studies was found. An LRT comparing this model with a full model including all levels was not significant. Furthermore, the variance component for this level was estimated to be near 0. Both findings indicate that there is not much variance between effect sizes within studies. See Appendix E for the results of the sensitivity analyses.

Subjective Well-Being

To determine the robustness of the results of the correlated effects model with RVE, effect estimates that were located far from the general pattern in the funnel plot as well as possible outliers and influential cases were excluded from the analysis. In sum, six effect estimates were excluded. In comparison to the model including all effect estimates, the model without them has a much smaller weighted average effect estimate with a narrower confidence interval (g = 0.17, 95% CI [0.07, 0.27]). This indicates that the results of Model 2 are not robust and should be interpreted with caution. The results of the new model are robust against the selection of different values for ρ (analysis not shown here). See Appendix E for the results of the sensitivity analyses.

Comparison to Findings of Goyal and Colleagues (2014)

To allow a comparison with Goyal and colleagues (2014), separate analyses were carried out for anxiety and depression outcomes. For anxiety outcomes, there were 32 effect sizes from 28 individual studies at post and 25 effect sizes from 18 individual studies at follow-up, summarized in multilevel random effects models (post: g = 0.15, 95% CI [0.07, 0.23]; follow-up: g = 0.11, 95% CI [−0.01, 0.23]). For depression outcomes, there were 62 effect sizes from 50 individual studies at post and 45 effect sizes from 31 individual studies at follow-up, summarized in multilevel random effects models (post: g = 0.19, 95% CI [0.11, 0.28]; follow-up: g = 0.12, 95% CI [0.03, 0.21]).
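To make the pooling step concrete, the following Python sketch implements a simple single-level DerSimonian-Laird random-effects model. This is an illustrative simplification, not the review's actual multilevel rma.mv specification; it shows how a set of effect sizes and their sampling variances yields a weighted average estimate with a 95% confidence interval of the kind reported above.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling for k >= 2 effect sizes.

    Returns the weighted average effect and its 95% confidence interval.
    A single-level analogue of the multilevel models used in the review.
    """
    w = [1 / v for v in variances]               # inverse-variance weights
    sw = sum(w)
    fe = sum(wi * g for wi, g in zip(w, effects)) / sw  # fixed-effect mean
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (g - fe) ** 2 for wi, g in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # re-weight with tau^2 added to each sampling variance
    w_re = [1 / (v + tau2) for v in variances]
    mu = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

# Hypothetical effect sizes and variances; identical effects pool to
# exactly that value, with tau^2 estimated as 0.
mu, ci = random_effects_pool([0.2, 0.2, 0.2], [0.1, 0.1, 0.1])
```

The multilevel models used in the review additionally separate within-study from between-study heterogeneity, but the weighting logic is the same in spirit.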

Assessment of Internal Validity of Individual Studies

Risk of bias for studies and for individual outcomes within studies was judged with the help of the RoB2 tool (Sterne et al., 2019). Within both meta-analyses, the proportion of information judged to be at high overall risk of bias is high (100% for well-being, almost as much for stress). In many studies, the information needed to make judgments was not reported in sufficient detail, leading to at least some concern in the corresponding domain. Many trials reported a high rate of missing outcome data, a general problem in longitudinal intervention studies (National Research Council, 2010), and the handling of this problem was not always appropriate or transparently reported as recommended by the RoB2 tool (Sterne et al., 2019). Furthermore, most trials did not provide enough detail to soundly exclude the bias inherent in self-reported data, which led to at least some concern in this domain for all studies. In Appendix F, weighted summary bar plots visualize the risk of bias judgments for the information included in each model separately. Additionally, traffic light plots for all studies show the risk of bias judgment for each domain in detail.

Publication and Reporting Bias

The search for unpublished studies at opengrey.eu yielded no results because none of the entries found satisfied the inclusion criteria. Thus, the body of data in this review consists mainly of published results. As an exception, five studies for which effect size data were not published but were provided by the authors upon request could be included. A main problem was that effect size data for some unpublished studies were simply not available. An examination of contour-enhanced funnel plots showed most effect estimates lying in the area of nonsignificance for both models (Figure 5). Within the area of nonsignificance for Model 1, small gaps appear on the side of negative effects with moderate precision and in the area of positive effects with rather high precision. Overall, significant negative effect estimates are missing from the plots of both models. This could indicate selective reporting: researchers with conflicts of interest in a positive direction, trying to show benefits of meditation programs over other interventions, might tend to omit significant negative results from their reports. However, no conclusions about this speculation can be drawn from the type of evidence available here.

Figure 5 Contour-enhanced funnel plots for both models.

Rank correlation tests did not find high or significant correlations between the observed data and the corresponding sampling variances, so they give no indication of funnel plot asymmetry. In sum, some asymmetries were observed that could be due to heterogeneity, chance, or selective reporting bias. However, nothing conclusive can be said on this matter because visual inspection of funnel plots has limited validity (Peters et al., 2008), and there are currently limited possibilities to formally test funnel plot asymmetry for rma.mv models (Viechtbauer, 2010).
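The idea behind such a rank correlation test can be sketched in a few lines. The following is a simplified Begg-type check in Python, correlating effect estimates with their sampling variances via Kendall's tau; a tau near zero is consistent with a symmetric funnel. The data are hypothetical, and this is not the exact procedure used in the review.

```python
def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): proportion of concordant
    minus discordant pairs among all pairs of observations."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Begg-type asymmetry check on hypothetical effect sizes and variances:
# a strong positive tau would suggest small (imprecise) studies report
# systematically larger effects.
effects = [0.10, 0.25, 0.15, 0.30, 0.05]
variances = [0.04, 0.02, 0.10, 0.01, 0.08]
tau = kendall_tau(effects, variances)
```

In practice, the observed tau is compared against its null distribution to obtain a p value; the sketch shows only the correlation itself.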

Adverse and Harmful Effects

Most of the included studies (68%) did not mention adverse or harmful effects of the employed meditation intervention in their reports, stating neither whether such effects were assessed nor whether they occurred. Only one included study (Wells et al., 2021) reported assessing adverse events systematically. Of the 24 studies that did report some information on the topic, 13 reported that no adverse or harmful events occurred during the study. Only two studies reported adverse events that appear to be related to the meditation intervention (Cherkin et al., 2016; Senders et al., 2019). For example, several participants experienced increased pain in relation to yoga exercises included in the intervention of the study by Cherkin and colleagues (2016). An overview of all adverse events reported in the included studies is provided in Appendix G.

Discussion

In conclusion, patients with a clinical condition who were randomized to a meditation intervention reported less psychological stress and more subjective well-being after the intervention than patients randomized to a placebo or specific active comparison condition. No differences were found in comparison to psychotherapeutic interventions. This indicates that meditation programs may have specific effects as complementary treatments in chronic and long-term conditions over and above psychological placebo. Differences between the types of meditation programs (mantra vs. mindfulness) could not be investigated due to a lack of studies on mantra meditation. Furthermore, assessment time was not found to significantly moderate the relationships described above, indicating that these effects do not change much over time.

The findings on psychological stress are in line with those of Goyal and colleagues (2014), although the present review found smaller effect estimates. Unlike Goyal and colleagues (2014), the present rapid review also found a significant effect for subjective well-being. In further contrast, the difference in effect size between postassessment and follow-up could not be replicated in a moderator analysis with assessment time as a continuous variable. Overall, confidence intervals in the current meta-analysis are more precise. Goyal and colleagues (2014) did not use the small-sample correction (Hedges, 1981), which is more conservative and could explain some of the downward differences in effect estimates.
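The correction referenced here is a simple multiplicative factor. The following Python sketch, an illustration rather than the review's computation pipeline, shows how Hedges' (1981) correction shrinks Cohen's d toward zero, most noticeably in small trials:

```python
def hedges_g(d, n1, n2):
    """Hedges' (1981) small-sample correction for a standardized mean
    difference d computed from two groups of sizes n1 and n2.

    The correction factor J = 1 - 3 / (4*df - 1) is always below 1,
    so g is slightly smaller in magnitude than the uncorrected d,
    which makes pooled estimates more conservative.
    """
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

# With small groups the shrinkage is visible; with large groups
# g is essentially equal to d.
small = hedges_g(0.30, 10, 10)    # noticeably below 0.30
large = hedges_g(0.30, 500, 500)  # approximately 0.30
```

This illustrates why applying the correction, as the present review did, yields somewhat smaller estimates than an uncorrected analysis.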

The quality of the evidence is limited. First, the risk of bias of individual studies was judged to be predominantly high, mostly due to failure to report the information necessary to make adequate judgments. Generalizability of the findings is also limited. As only RCTs were included in the analyses, conclusions cannot be drawn about effectiveness under naturalistic conditions (Fortin et al., 2006). Furthermore, the majority of reports are from the United Kingdom or the United States, and the majority of participants included in the trials were White. As for the type of meditation employed, the overwhelming majority of studies conveyed mindfulness meditation practice and theory; hence, the findings extrapolate only to such interventions.

According to Cohen (1992), the effect size estimates found in this meta-analysis, which are around 0.2, are small. This is, for example, comparable in size to the effect of antidepressant medication over placebo on symptoms of major depression (d = 0.29; Munkholm et al., 2019). Compared to what is considered the overall effect of psychotherapy on various outcomes (d = 0.68; Smith & Glass, 1977), among them physiological stress but also self-esteem, the effects found in this analysis are rather small.

Conclusion

In conclusion, the present review provides further, tentative evidence that it might be beneficial for patients with clinical conditions to take part in meditation intervention groups to complement their medical and psychological treatment. Beneficial effects were found for psychological stress, which is suspected to be linked to disease development and course (Miller et al., 2009), and for subjective well-being. The programs can be delivered successfully in a group format, which makes them very accessible in a clinical context. For stress, the effects appear to be stable over time and can still be found at follow-up, suggesting that patients are able to apply what they learned in the meditation course in daily life even after the course has ended.

References

  • Assink, M., & Wibbelink, C. J. (2016). Fitting three-level meta-analytic models in R: A step-by-step tutorial. The Quantitative Methods for Psychology, 12(3), 154–174. 10.20982/tqmp.12.3.p154

  • Beck, J. S., & Beck, A. T. (2011). Cognitive behavior therapy: Basics and beyond (2nd ed.). Guilford Press.

  • Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2009). Introduction to meta-analysis. Wiley.

  • Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1(2), 97–111. 10.1002/jrsm.12

  • Bowen, S., Chawla, N., Grow, J., & Marlatt, G. A. (2021). Mindfulness-based relapse prevention for addictive behaviors. Guilford Publications.

  • Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115–144. 10.1177/2515245919847196

  • Centre for Reviews and Dissemination. (2008). Systematic reviews: CRD's guidance for undertaking reviews in health care. University of York.

  • Cherkin, D. C., Sherman, K. J., Balderson, B. H., Cook, A. J., Anderson, M. L., Hawkes, R. J., Hansen, K. E., & Turner, J. A. (2016). Effect of mindfulness-based stress reduction vs cognitive behavioral therapy or usual care on back pain and functional limitations in adults with chronic low back pain: A randomized clinical trial. JAMA: Journal of the American Medical Association, 315(12), 1240–1249. 10.1001/jama.2016.2323

  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. 10.1037/0033-2909.112.1.155

  • Cohen, S., Janicki-Deverts, D., Doyle, W. J., Miller, G. E., Frank, E., Rabin, B. S., & Turner, R. B. (2012). Chronic stress, glucocorticoid receptor resistance, inflammation, and disease risk. PNAS: Proceedings of the National Academy of Sciences of the United States of America, 109(16), 5995–5999. 10.1073/pnas.1118355109

  • Cohen, S., Janicki-Deverts, D., & Miller, G. E. (2007). Psychological stress and disease. JAMA: Journal of the American Medical Association, 298(14), 1685–1687. 10.1001/jama.298.14.1685

  • Cook, R. D. (1977). Detection of influential observation in linear regression. Technometrics, 19(1), 15–18. 10.2307/1268249

  • Dahl, C. J., Lutz, A., & Davidson, R. J. (2015). Reconstructing and deconstructing the self: Cognitive mechanisms in meditation practice. Trends in Cognitive Science, 19(9), 515–523. 10.1016/j.tics.2015.07.001

  • Davidson, R. J., & Kaszniak, A. W. (2015). Conceptual and methodological issues in research on mindfulness and meditation. American Psychologist, 70(7), 581–592. 10.1037/a0039512

  • Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. 10.1111/j.0006-341x.2000.00455.x

  • Farias, M., Wikholm, C., & Delmonte, R. (2016). What is mindfulness-based therapy good for? Lancet Psychiatry, 3(11), 1012–1013. 10.1016/s2215-0366(16)30211-5

  • Fortin, M., Dionne, J., Pinho, G., Gignac, J., Almirall, J., & Lapointe, L. (2006). Randomized controlled trials: Do they have external validity for patients with multiple comorbidities? Annals of Family Medicine, 4(2), 104–108. 10.1370/afm.516

  • Goldberg, S. B., Tucker, R. P., Greene, P. A., Simpson, T. L., Kearney, D. J., & Davidson, R. J. (2017). Is mindfulness research methodology improving over time? A systematic review. PLoS One, 12(10), Article e0187298. 10.1371/journal.pone.0187298

  • Goyal, M., Singh, S., Sibinga, E. M., Gould, N. F., Rowland-Seymour, A., Sharma, R., Berger, Z., Sleicher, D., Maron, D. D., Shihab, H. M., Ranasinghe, P. D., Linn, S., Saha, S., Bass, E. B., & Haythornthwaite, J. A. (2014). Meditation programs for psychological stress and well-being: A systematic review and meta-analysis. JAMA Internal Medicine, 174(3), 357–368. 10.1001/jamainternmed.2013.13018

  • Harrer, M., Cuijpers, P., Furukawa, T., & Ebert, D. (2021). Doing meta-analysis with R: A hands-on guide (1st ed.). Chapman and Hall/CRC. 10.1201/9781003107347

  • Hayes, S. C., Strosahl, K. D., & Wilson, K. G. (2009). Acceptance and commitment therapy. American Psychological Association.

  • Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128. 10.3102/10769986006002107

  • Higgins, J. P. T., Savović, J., Page, M. J., Elbers, R. G., & Sterne, J. A. C. (2021). Chapter 8: Assessing risk of bias in randomized trials. In J. P. T. Higgins, J. Thomas, J. Chandler, M. Cumpston, T. Li, M. J. Page, & V. Welch (Eds.), Cochrane handbook for systematic reviews of interventions (version 6.2, updated February 2021). Cochrane.

  • Hurley, J. C. (2020). Forrest plots or caterpillar plots? Journal of Clinical Epidemiology, 121, 109–110. 10.1016/j.jclinepi.2020.01.017

  • IntHout, J., Ioannidis, J. P. A., Rovers, M. M., & Goeman, J. J. (2016). Plea for routinely presenting prediction intervals in meta-analysis. BMJ Open, 6(7), Article e010247. 10.1136/bmjopen-2015-010247

  • Johnston, B., Patrick, D., Devji, T., Maxwell, L., Bingham, I. C., Beaton, D., Boers, M., Briel, M., Busse, J. W., Carrasco-Labra, A., Christensen, R., da Costa, B. R., El Dib, R., Lyddiatt, A., Ostelo, R. W., Shea, B., Singh, J., Terwee, C. B., Williamson, P. R., … Guyatt, G. (2022). Chapter 18: Patient-reported outcomes. In J. P. T. Higgins, J. Thomas, J. Chandler, M. Cumpston, T. Li, M. J. Page, & V. Welch (Eds.), Cochrane handbook for systematic reviews of interventions (version 6.3, updated February 2022). Cochrane. www.training.cochrane.org/handbook

  • Kabat-Zinn, J. (1982). An outpatient program in behavioral medicine for chronic pain patients based on the practice of mindfulness meditation: Theoretical considerations and preliminary results. General Hospital Psychiatry, 4(1), 33–47. 10.1016/0163-8343(82)90026-3

  • Kabat-Zinn, J. (2003). Mindfulness-based interventions in context: Past, present, and future. Clinical Psychology: Science and Practice, 10(2), 144–156. 10.1093/clipsy.bpg016

  • Kabat-Zinn, J., Wheeler, E., Light, T., Skillings, A., Scharf, M. J., Cropley, T. G., Hosmer, D., & Bernhard, J. D. (1998). Influence of a mindfulness meditation-based stress reduction intervention on rates of skin clearing in patients with moderate to severe psoriasis undergoing phototherapy (UVB) and photochemotherapy (PUVA). Psychosomatic Medicine, 60(5), 625–632. 10.1097/00006842-199809000-00020

  • Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Sage.

  • Krippendorff, K. (2004). Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3), 411–433. 10.1093/hcr/30.3.411

  • Lauche, R., Cramer, H., Dobos, G., Langhorst, J., & Schmidt, S. (2013). A systematic review and meta-analysis of mindfulness-based stress reduction for the fibromyalgia syndrome. Journal of Psychosomatic Research, 75(6), 500–510. 10.1016/j.jpsychores.2013.10.010

  • Lieberman, B. (2018, January 29). Peering into the meditating mind: Some people swear by it, but studies of mindfulness have a long way to go. Knowable Magazine. https://knowablemagazine.org/article/mind/2018/peering-meditating-mind

  • Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). Wiley.

  • Miller, G., Chen, E., & Cole, S. W. (2009). Health psychology: Developing biologically plausible models linking the social world and physical health. Annual Review of Psychology, 60, 501–524. 10.1146/annurev.psych.60.110707.163551

  • Munkholm, K., Paludan-Müller, A. S., & Boesen, K. (2019). Considering the methodological limitations in the evidence base of antidepressants for depression: A reanalysis of a network meta-analysis. BMJ Open, 9(6), Article e024886. 10.1136/bmjopen-2018-024886

  • Nakagawa, S., Lagisz, M., O’Dea, R., Rutkowska, J., Yang, Y., Noble, D., & Senior, A. (2020). orchaRd: An R package for drawing ‘orchard’ plots (and ‘caterpillars’ plots) from meta-analyses and meta-regressions with categorical moderators.

  • Nakagawa, S., Lagisz, M., O'Dea, R. E., Rutkowska, J., Yang, Y., Noble, D. W. A., & Senior, A. M. (2021). The orchard plot: Cultivating a forest plot for use in ecology, evolution, and beyond. Research Synthesis Methods, 12(1), 4–12. 10.1002/jrsm.1424

  • National Research Council. (2010). The prevention and treatment of missing data in clinical trials. The National Academies Press. 10.17226/12955

  • Park, S., & Beretvas, S. N. (2019). Synthesizing effects for multiple outcomes per study using robust variance estimation versus the three-level model. Behavior Research Methods, 51(1), 152–171. 10.3758/s13428-018-1156-y

  • Peters, J. L., Sutton, A. J., Jones, D. R., Abrams, K. R., & Rushton, L. (2008). Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. Journal of Clinical Epidemiology, 61(10), 991–996. 10.1016/j.jclinepi.2007.11.010

  • Pigott, T. D., & Polanin, J. R. (2020). Methodological guidance paper: High-quality meta-analysis in a systematic review. Review of Educational Research, 90(1), 24–46. 10.3102/0034654319877153

  • R Core Team. (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.r-project.org

  • Rosenkranz, M. A., Dunne, J. D., & Davidson, R. J. (2019). The next generation of mindfulness-based intervention research: What have we learned and where are we headed? Current Opinion in Psychology, 28, 179–183. 10.1016/j.copsyc.2018.12.022

  • Schober, P., Boer, C., & Schwarte, L. A. (2018). Correlation coefficients: Appropriate use and interpretation. Anesthesia & Analgesia, 126(5), 1763–1768. 10.1213/ane.0000000000002864

  • Seekircher, J. (2022a). Supplemental materials to “The effects of clinical meditation programs on stress and well-being: An updated rapid review and meta-analysis of randomized controlled trials (RCTs) with active comparison groups”. https://doi.org/10.23668/psycharchives.8408

  • Seekircher, J. (2022b). Supplemental materials to “The effects of clinical meditation programs on stress and well-being: An updated rapid review and meta-analysis of randomized controlled trials (RCTs) with active comparison groups”. https://doi.org/10.23668/psycharchives.8409

  • Seekircher, J. (2022c). Supplemental material to “The effects of clinical meditation programs on stress and well-being: An updated rapid review and meta-analysis of randomized controlled trials (RCTs) with active comparison groups”. https://doi.org/10.23668/psycharchives.8407

  • Segal, Z. V., Bieling, P., Young, T., MacQueen, G., Cooke, R., Martin, L., Bloch, R., & Levitan, R. D. (2010). Antidepressant monotherapy vs sequential pharmacotherapy and mindfulness-based cognitive therapy, or placebo, for relapse prophylaxis in recurrent depression. Archives of General Psychiatry, 67(12), 1256–1264. 10.1001/archgenpsychiatry.2010.168

  • Senders, A., Hanes, D., Bourdette, D., Carson, K., Marshall, L. M., & Shinto, L. (2019). Impact of mindfulness-based stress reduction for people with multiple sclerosis at 8 weeks and 12 months: A randomized clinical trial. Multiple Sclerosis, 25(8), 1178–1188. 10.1177/1352458518786650

  • Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. The American Psychologist, 32(9), 752–760. 10.1037/0003-066x.32.9.752

  • Sterne, J., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H. Y., Corbett, M. S., Eldridge, S. M., Emberson, J. R., Hernán, M. A., Hopewell, S., Hróbjartsson, A., Junqueira, D. R., Jüni, P., Kirkham, J. J., Lasserson, T., Li, T., … Higgins, J. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366, Article l4898. 10.1136/bmj.l4898

  • Strauss, C., Cavanagh, K., Oliver, A., & Pettman, D. (2014). Mindfulness-based interventions for people diagnosed with a current episode of an anxiety or depressive disorder: A meta-analysis of randomised controlled trials. PLoS One, 9(4), Article e96110. 10.1371/journal.pone.0096110

  • Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296. 10.1007/s11165-016-9602-2

  • Tolin, D. F., McKay, D., Forman, E. M., Klonsky, E. D., & Thombs, B. D. (2015). Empirically supported treatment: Recommendations for a new model. Clinical Psychology: Science and Practice, 22(4), 317–338. 10.1037/h0101729

  • Van Dam, N. T., van Vugt, M. K., Vago, D. R., Schmalzl, L., Saron, C. D., Olendzki, A., Meissner, T., Lazar, S. W., Kerr, C. E., Gorchov, J., Fox, K., Field, B. A., Britton, W. B., Brefczynski-Lewis, J. A., & Meyer, D. E. (2018). Mind the hype: A critical evaluation and prescriptive agenda for research on mindfulness and meditation. Perspectives on Psychological Science, 13(1), 36–61. 10.1177/1745691617709589

  • Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2015). Meta-analysis of multiple outcomes: A multilevel approach. Behavior Research Methods, 47(4), 1274–1294. 10.3758/s13428-014-0527-2

  • Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. 10.18637/jss.v036.i03

  • Viechtbauer, W., & Cheung, M. W. L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1(2), 112–125. 10.1002/jrsm.11

  • Viechtbauer, W. (2015). Package ‘metafor’. The Comprehensive R Archive Network. http://cran.r-project.org/web/packages/metafor/metafor.pdf

  • Waltz, C. F., Strickland, O., & Lenz, E. R. (2004). Measurement in nursing and health research. Springer Publishing Company.

  • Wells, R. E., O'Connell, N., Pierce, C. R., Estave, P., Penzien, D. B., Loder, E., Zeidan, F., & Houle, T. T. (2021). Effectiveness of mindfulness meditation vs headache education for adults with migraine: A randomized clinical trial. JAMA Internal Medicine, 181(3), 317–328. 10.1001/jamainternmed.2020.7090

  • World Health Organization. (2021). World health statistics 2021: Monitoring health for the SDGs, sustainable development goals. World Health Organization. https://apps.who.int/iris/handle/10665/342703