Open Access Review Article

Scientific Misconduct in Psychology

A Systematic Review of Prevalence Estimates and New Empirical Data

Published Online: https://doi.org/10.1027/2151-2604/a000356

Abstract

Spectacular cases of scientific misconduct have contributed to concerns about the validity of published results in psychology. In our systematic review, we identified 16 studies reporting prevalence estimates of scientific misconduct and questionable research practices (QRPs) in psychological research. Estimates from these studies varied widely owing to differences in methods and scope. Unlike for other disciplines, no reliable lower-bound prevalence estimate of scientific misconduct based on identified cases was available for psychology. We therefore conducted an additional empirical investigation based on retractions recorded in the database PsycINFO. Our analyses showed that 0.82 per 10,000 journal articles in psychology were retracted due to scientific misconduct, with a steep increase between the late 1990s and 2012. Articles retracted due to scientific misconduct were identified in 20 of 22 PsycINFO subfields. These results suggest that measures aiming to reduce scientific misconduct should be promoted equally across all psychological subfields.

Cases of scientific misconduct undermine the credibility of published results and ultimately reduce confidence in the value of scientific research as a whole (Fang, Steen, & Casadevall, 2012). The detection of some spectacular cases of scientific misconduct (e.g., the case of Diederik Stapel; Callaway, 2011) has contributed to concerns over the validity of published results in psychology, especially in social psychology (e.g., see Rovenpor & Gonzales, 2015). For instance, Carey (2011), referring to expert evaluation, stated in a New York Times article that “the [Stapel] case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability”. Similarly, some psychological researchers themselves seem unsettled about the credibility of their field. For example, Motyl et al. (2017, p. 10) found that their sample of social and personality psychology researchers had the impression that “the field overall might be pretty rotten”.

Scientific misconduct includes data fabrication, data falsification, plagiarism, and other serious and intentional practices that distort scientific results or convey incorrect information about contributions to research (e.g., undisclosed competing interests; Hofmann, Helgesson, Juth, & Holm, 2015; Resnik, Neal, Raymond, & Kissling, 2015). Honest errors or differences of opinion do not qualify as scientific misconduct (Office of Research Integrity, 2011; Office of Science and Technology Policy, 2000). Beyond its negative effect on the credibility of scientific research, scientific misconduct has numerous additional adverse effects. These include the misallocation of monetary investments (e.g., grant funding) and research capacity, misinformation of the public and policy makers, damage to the careers of colleagues and graduate students unknowingly involved in fraudulent projects, delays to scientific progress, and the costs associated with investigating misconduct cases (Michalek, Hutson, Wicher, & Trump, 2010; Stroebe, Postmes, & Spears, 2012).

While the toxic consequences of scientific misconduct are indisputable, the prevalence of these practices has been subject to debate (Gross, 2016; Marshall, 2000). This question is particularly relevant because reliable data on the occurrence of a phenomenon are crucial for understanding its causes and for developing prevention strategies. Many factors contributing to engagement in scientific misconduct have been discussed. These include the academic “publish-or-perish” culture (e.g., De Rond & Miller, 2005) and academic capitalism (Münch, 2014), which foster competitive and individualist norms (Louis, Anderson, & Rosenberg, 1995; Motyl et al., 2017). Many researchers experience considerable pressure to publish statistically significant and preferably surprising results in high-ranking journals in order to achieve tenure or promotion (Nosek, Spies, & Motyl, 2012), job security, or financial rewards (Franzoni, Scellato, & Stephan, 2011). There is some evidence that this pressure has increased over the last decades (e.g., Anderson, Ronning, De Vries, & Martinson, 2007).

Quantification of Scientific Misconduct

Three different approaches have been used to estimate the prevalence of scientific misconduct:

  1. In survey studies, researchers anonymously indicate their own involvement in scientific misconduct or estimate the involvement of their colleagues. A meta-analysis of survey studies (Fanelli, 2009) showed that, pooled across all scientific fields, a weighted average of 1.97% of scientists admitted to having fabricated, falsified, or modified data, while 14.12% reported believing that their colleagues had engaged in such practices. Survey studies on the prevalence of scientific misconduct have been criticized for providing varying estimates due to differences in item wording, survey distribution method, social desirability, and other factors (Fanelli, 2009; Fiedler & Schwarz, 2016).
  2. Through statistical (re)analyses of reported findings, researchers attempt to identify statistical inconsistencies in published studies (e.g., inconsistencies between a reported p value and its test statistic) indicating scientific misconduct or questionable research practices (QRPs; e.g., inappropriately “rounding down” p values just over .05; e.g., Nuijten, Hartgerink, van Assen, Epskamp, & Wicherts, 2016). Yet, a considerable proportion of statistical inconsistencies may be the result of inadvertent honest errors rather than scientific misconduct (Bakker & Wicherts, 2011). Thus, studies based on statistical (re)analyses might strongly overestimate the prevalence of scientific misconduct.
  3. The analysis of retracted articles and retraction notices has recently emerged as a major approach for investigating scientific misconduct (for a review, see Hesselmann, Graf, Schmidt, & Reinhart, 2017). Such analyses are mostly based on cases in which, after thorough investigation, scientific misconduct has been established. Yet, as scientific misconduct is often difficult to detect (Stroebe et al., 2012), this approach provides only a lower-bound estimate of its prevalence. Moreover, estimates derived from this approach depend on the quality of the monitoring systems implemented to detect scientific misconduct.

Taken together, the three approaches to quantifying scientific misconduct each possess unique strengths and weaknesses for investigating its prevalence, distribution, and development. Findings from all three approaches should therefore be integrated when assessing scientific misconduct in a field.

The Present Study

The aim of the present study was to examine the prevalence and development of scientific misconduct in psychology and its subfields. First, we conducted a systematic review of articles reporting quantitative prevalence estimates of scientific misconduct in psychology. Another concept linked to concerns about the validity of published psychological research is that of QRPs (e.g., Świątkowski & Dompnier, 2017). QRPs comprise practices that unambiguously qualify as scientific misconduct (e.g., falsifying data) as well as others that are less clear-cut (e.g., failing to report all of a study’s dependent measures; John, Loewenstein, & Prelec, 2012; Motyl et al., 2017; Stürmer, Oeberst, Trötschel, & Decker, 2017). Thus, there is some overlap between scientific misconduct and the behaviors subsumed under the term “QRPs”. Consequently, we also included prevalence estimates of QRPs in our review.

Second, we analyzed new empirical data on the prevalence and development of retractions due to scientific misconduct in psychology, accounting for the psychological subfields, their size, and the number of unique authors responsible for the misconduct. A preliminary version of our data set was reported by Margraf (2015); that work did not take into account retraction reasons (misconduct or not), psychological subfields, or responsible authors. Our data, scripts for data analysis, and materials (for the systematic review and the empirical study of article retractions) are accessible via the PsychArchives repository at https://doi.org/10.23668/psycharchives.872.

Method

Systematic Review

We searched the databases PsycINFO and Scopus with the search string “(prevalence OR incidence) AND (“scientific fraud” OR “research fraud” OR “scientific misconduct” OR “research misconduct” OR “scientific integrity” OR “data falsification” OR “data fabrication” OR plagiarism OR “research practices” OR “p-hacking” OR “HARKing” OR retract*)” in abstracts and titles (last update: June 2018). Results from Scopus were limited to the subject area “psychology”; no other limits were set. Additionally, we conducted an exploratory literature search by entering our keywords in Google Scholar and by following up references in the included studies. Our only inclusion criterion was that studies had to report quantitative prevalence estimates of scientific misconduct or QRPs in psychological research. In three studies, prevalence estimates of scientific misconduct were measured but not reported; we contacted the corresponding authors of these articles via e-mail and received the relevant prevalence estimates for one article (Sacco, Bruton, & Brown, 2018). Studies addressing scientific misconduct in non-psychological research fields and in students (i.e., plagiarism and cheating in course work) were excluded.
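For illustration only, the following is a minimal sketch of how the merging and deduplication of the two database exports could be reproduced. The file names and column labels (psycinfo.csv, scopus.csv, "Title", "DOI") are assumptions for the sketch, not the actual export format used; the screening itself was done manually as described above.

```python
import csv

def load_records(path):
    """Read a database export (assumed to be a CSV with 'Title' and 'DOI' columns)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def deduplicate(records):
    """Drop duplicate records, matching on DOI if present, otherwise on normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("DOI") or rec.get("Title", "")).strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

if __name__ == "__main__":
    # Hypothetical exports from the PsycINFO and Scopus searches
    records = load_records("psycinfo.csv") + load_records("scopus.csv")
    print(f"{len(deduplicate(records))} unique records to screen")
```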

Empirical Study

We used the search string “(retract*.ab. or retract*.ti.) and “01*”.pt.” (limit 1860–2017) to search PsycINFO for “retract*” in the titles and abstracts of journal contributions (last update: January 2018). All records stating that the respective article had been retracted, or reporting the retraction of a previously published article, were included in the analysis. Next, the original retraction notices were collected. Two independent raters categorized the retraction notices by reason for retraction (1. fraud, 2. plagiarism, 3. other misconduct, 4. author error, 5. publisher error, 6. other reason, 7. no explanation/justification). Categories 1, 2, and 3 were regarded as scientific misconduct. For cases of scientific misconduct in multi-authored papers, the responsible authors were identified based on the retraction notice; if there was no clear indication of which author was responsible, the entire author collective was counted as a single responsible author in the analyses. Coders were instructed to use the Retraction Watch Database (Center for Scientific Integrity, n.d.) to obtain additional information if needed. We used the articles’ content classification in PsycINFO to allocate the retracted articles to the respective psychological subfield. For the calculation of the prevalence rate, we divided the number of retracted articles or responsible authors by the size of the field (i.e., the number of records with document type “Journal Article” in the respective field).
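As an illustration of the rate calculation described above, here is a minimal sketch in Python. The category codes follow the coding scheme in the text, but the coded retractions and field sizes are hypothetical placeholders, not our actual data set (which is available via PsychArchives).

```python
# Categories 1 (fraud), 2 (plagiarism), and 3 (other misconduct) count as scientific misconduct
MISCONDUCT_CODES = {1, 2, 3}

# Hypothetical coded retractions: (PsycINFO subfield, retraction-reason code)
retractions = [
    ("Social Psychology", 1),
    ("Social Psychology", 4),      # author error -> not misconduct
    ("Consumer Psychology", 1),
]

# Hypothetical field sizes: number of records with document type "Journal Article"
field_sizes = {
    "Social Psychology": 120_000,
    "Consumer Psychology": 15_000,
}

def misconduct_rate_per_10k(subfield):
    """Retractions due to scientific misconduct per 10,000 journal articles in a subfield."""
    n_misconduct = sum(
        1 for field, code in retractions
        if field == subfield and code in MISCONDUCT_CODES
    )
    return 10_000 * n_misconduct / field_sizes[subfield]

for subfield in field_sizes:
    print(f"{subfield}: {misconduct_rate_per_10k(subfield):.2f} per 10,000 articles")
```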

Results

Systematic Review

The literature search yielded 136 results from PsycINFO and 56 results from Scopus, leaving 139 results after removing duplicates and retraction notices. In a first step, we screened titles and abstracts and excluded 121 articles because they did not measure quantitative prevalence estimates of scientific misconduct. The full texts of the remaining 18 articles were examined, resulting in the inclusion of four articles in the systematic review. The exploratory literature search and suggestions from the review process of this article yielded 12 additional relevant articles. The final database thus comprised 16 studies: six survey studies, nine studies with statistical (re)analyses, and one study analyzing retracted articles. Methods and prevalence estimates of scientific misconduct and QRPs from all included studies can be found in Table 1.

Table 1 Methods and prevalence estimates from all studies included in the systematic review

Empirical Study

Searching PsycINFO for “retract*” in titles and abstracts yielded 2,302 records, including 402 retractions. For 401 of these, the original retraction notices could be collected and were categorized by retraction reason by two independent raters. Interrater agreement (100 × [number of agreeing values/number of all coded values]) was 82.54%; discrepancies were resolved by consulting the original retraction notice and by discussion. Of the 401 retractions, 260 (64.84%) were attributable to scientific misconduct (29.18% fraud, 26.68% plagiarism, 8.98% other misconduct). The overall retraction rate (1860–2017) due to scientific misconduct was 0.82 journal articles per 10,000 journal articles in PsycINFO. The development of retractions due to scientific misconduct since 1982 is shown in Figure 1. The rate of articles retracted due to scientific misconduct in the psychological subfields can be found in Table 2.
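The interrater agreement reported above is simple percentage agreement. A minimal sketch of this computation is shown below; the example codes are made up for illustration and are not our actual ratings.

```python
def percent_agreement(rater_a, rater_b):
    """Percentage agreement: 100 * (number of agreeing values / number of all coded values)."""
    assert len(rater_a) == len(rater_b)
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * agreements / len(rater_a)

# Made-up retraction-reason codes (1 = fraud, 2 = plagiarism, ..., 7 = no explanation)
rater_a = [1, 2, 2, 4, 1, 7, 3, 2]
rater_b = [1, 2, 3, 4, 1, 7, 3, 1]
print(f"{percent_agreement(rater_a, rater_b):.2f}%")  # 75.00% for this toy example
```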

Figure 1 Development in number of journal articles retracted due to scientific misconduct per 10,000 published journal articles in PsycINFO from 1982 to 2017 by publication year of the retracted article.
Table 2 Number of article retractions due to scientific misconduct and number of authors responsible for scientific misconduct per 10,000 journal articles in PsycINFO subfields (1860–2017)

Discussion

Systematic Review

This is the first systematic review synthesizing studies that report quantitative prevalence estimates of scientific misconduct and QRPs in psychology. In survey studies, self-admission rates for data falsification ranged between 0.6% and 2.3%, whereas prevalence estimates for the involvement of other researchers in data falsification ranged between 9.3% and 18.7%. Self-admission rates were higher for other QRPs that may or may not qualify as scientific misconduct, such as inappropriately altering or “cooking” research data (e.g., 6%, Braun & Roussos, 2012) or “rounding down” p values just over .05 (e.g., 33%, Motyl et al., 2017). The prevalence definition applied in some of the survey studies (e.g., John et al., 2012) has been criticized because the percentage of researchers who admitted to having engaged in a QRP at least once was equated with the prevalence of the respective QRP (Fiedler & Schwarz, 2016). Also, the validity of researchers’ estimates of their colleagues’ involvement in QRPs is questionable (Agnoli, Wicherts, Veldkamp, Albiero, & Cubelli, 2017; Fiedler & Schwarz, 2016).

Studies reporting statistical (re)analyses found gross inconsistencies (i.e., a reported p value significant but the recomputed p value non-significant, or vice versa) in 12.4%–20.5% of the published studies. However, the proportion of studies in which inconsistencies are attributable to scientific misconduct, QRPs, or honest errors remains unclear. In the only study that investigated retractions (Grieneisen & Zhang, 2012), the number of analyzed retracted articles from psychology was low (n = 32 for psychology and n = 169 for neurosciences; numbers derived from the Supplementary Material). Also, the proportion of articles in psychology that were retracted due to scientific misconduct was not reported.

Taken together, the existing studies show that self-admission rates for scientific misconduct are lower than self-admission rates for other QRPs that are regarded as less severe (see Sacco et al., 2018). The self-admission rates for scientific misconduct were also considerably lower than prevalence estimates regarding the actions of other psychological researchers and lower than the percentage of gross statistical inconsistencies. Even among the survey studies, estimates varied strongly and might overestimate (e.g., because of difficulties in item interpretation; e.g., Motyl et al., 2017) or underestimate (e.g., because of social desirability; Edwards, 1957) the prevalence of scientific misconduct. Thus, additional empirical data were required to obtain a reliable lower-bound prevalence estimate of scientific misconduct in psychology.

Empirical Study

This study was the first empirical investigation analyzing a large number of psychological articles retracted due to scientific misconduct. Our analyses revealed that the percentage of retractions attributable to scientific misconduct (64.84% in PsycINFO) was similar to that in the biomedical and life-science literature (67.40% in PubMed; Fang et al., 2012) and to estimates derived from a variety of scientific disciplines and databases (47% “publishing misconduct” and 20% “research misconduct”; Grieneisen & Zhang, 2012). The overall rate of journal articles retracted due to scientific misconduct was somewhat higher in PsycINFO (0.82 per 10,000 journal articles) than in Medline (0.56 per 10,000 journal articles; Wager & Williams, 2011). Importantly, all comparisons with other disciplines should be interpreted with caution due to differences in methods and covered time periods. For example, Fang et al. (2012) consulted further information in addition to the retraction notices to classify reasons for retractions, whereas other authors did not (e.g., Wager & Williams, 2011).

With regard to the temporal development, there was a steep increase in retractions of journal articles in PsycINFO due to scientific misconduct between the late 1990s and 2012, whereas there were almost no such retractions before the late 1990s. Grieneisen and Zhang (2012) identified a similar trend in their study covering a wide range of scientific disciplines. This could be explained either by an increase in scientific misconduct or by changing mechanisms (e.g., plagiarism screening) and standards (e.g., journal policies) for detecting and retracting fraudulent articles. Fanelli (2013) argued that the increase in article retractions is attributable to improved detection and retraction systems: for instance, he found that the proportion of journals that retract articles has grown dramatically while the number of misconduct cases identified by the US Office of Research Integrity has not increased. Interestingly, the trend identified for article retractions in psychology was not found for gross statistical inconsistencies in published psychological articles, which are regarded as a potential indicator of scientific misconduct or QRPs (Nuijten et al., 2016). This finding supports Fanelli’s (2013) notion that the increase in article retractions is mostly attributable to improved detection and retraction systems (see also Gross, 2016). In recent years, the rate of articles retracted due to scientific misconduct appeared to decline, which is likely due to the time delay with which cases of scientific misconduct are usually detected (Fang et al., 2012).

In 20 out of 22 psychological subfields, there were articles retracted due to scientific misconduct. Based on the number of retracted journal articles, the highest prevalence was identified for Social Psychology; however, 80.65% of these cases were attributable to a single author (D. Stapel). Based on the number of different responsible authors, Consumer Psychology had the highest prevalence. This finding shows that the perception of some psychological subfields as being more fraudulent than others might be attributable to spectacular cases in which single authors were responsible for a large number of fraudulent studies (“repeat offenders”; Grieneisen & Zhang, 2012).

General Discussion and Limitations

Our systematic review showed that scientific misconduct in psychology, including data falsification, data fabrication, and other severe forms of misconduct, is relatively rare in comparison to other QRPs. As expected, our empirical study yielded a somewhat lower prevalence estimate of scientific misconduct than the survey studies, reflecting that scientific misconduct is not always detected. Yet, scientific misconduct was prevalent across a variety of geographic regions (Agnoli et al., 2017; Braun & Roussos, 2012; John et al., 2012) and in almost all psychological subfields.

Even single incidents of scientific misconduct can have immense effects (Michalek et al., 2010). Consequently, we believe it is important to promote measures that diminish the incentives and opportunities to engage in scientific misconduct equally across all psychological subfields. In our view, a promising approach lies in the advancement of open data and open materials (Tenopir et al., 2011) and in the improvement of systems for reporting suspected scientific misconduct (Crocker & Cooper, 2011). However, we do not believe that scientific misconduct can be entirely prevented through detection systems. Thus, fostering an ethical organizational culture that clearly communicates acceptable and unacceptable behavior in psychology departments and research groups (e.g., through reward systems; Kish-Gephart, Harrison, & Treviño, 2010) seems equally important.

Our study has, of course, some limitations. First, the number of studies in the systematic review was relatively low, and the heterogeneity of their methods did not allow a meta-analytic integration of the results. Similarly, the number of retracted articles in our empirical study was low for some subfields, so comparisons between subfields should be interpreted with caution. Second, retraction notices provide only a lower-bound estimate of the prevalence of scientific misconduct, as many cases may remain unnoticed. In this respect, the investigation of scientific misconduct resembles the calculation of crime rates, because only reported offenses enter the statistics (Bechtel & Pearson, 1985). Third, our empirical method was designed to quantify established cases of scientific misconduct; other, subtler but potentially equally damaging (Simmons, Nelson, & Simonsohn, 2011) QRPs were only covered in our systematic review.

Despite these constraints, the present study contributes to the understanding of scientific misconduct in psychology. It yielded reliable lower-bound estimates showing that scientific misconduct occurs across almost all psychological subfields. Also, the increasing retraction rate in comparison to the 1980s and 1990s shows that mechanisms exist that are generally able to detect scientific misconduct. Thus, initiatives to strengthen these systems (e.g., by increasing research transparency) should be promoted across all psychological subfields and not be restricted to fields with prominent cases of scientific misconduct.

References

*References marked with an asterisk were included in the systematic review.

  • *Agnoli, F., Wicherts, J. M., Veldkamp, C. L., Albiero, P., & Cubelli, R. (2017). Questionable research practices among Italian research psychologists. PLoS One, 12, e0172792. https://doi.org/10.1371/journal.pone.0172792

  • Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics, 13, 437–461. https://doi.org/10.1007/s11948-007-9042-5

  • *Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43, 666–678. https://doi.org/10.3758/s13428-011-0089-5

  • *Bakker, M., & Wicherts, J. M. (2014). Outlier removal and the relation with reporting errors and quality of psychological research. PLoS One, 9, e103360. https://doi.org/10.1371/journal.pone.0103360

  • Bechtel, H. K. Jr., & Pearson, W. Jr. (1985). Deviant scientists and scientific deviance. Deviant Behavior, 6, 237–252. https://doi.org/10.1080/01639625.1985.9967676

  • *Bosco, F. A., Aguinis, H., Field, J. G., Pierce, C. A., & Dalton, D. R. (2016). HARKing’s threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology, 69, 709–750. https://doi.org/10.1111/peps.12111

  • *Braun, M., & Roussos, A. J. (2012). Psychotherapy researchers: Reported misbehaviors and opinions. Journal of Empirical Research on Human Research Ethics, 7, 25–29. https://doi.org/10.1525/jer.2012.7.5.25

  • Callaway, E. (2011). Report finds massive fraud at Dutch universities. Nature, 479, 15. https://doi.org/10.1038/479015a

  • *Caperos, J. M., & Pardo Merino, A. (2013). Consistency errors in p-values reported in Spanish psychology journals. Psicothema, 25, 408–414. https://doi.org/10.7334/psicothema2012.207

  • Carey, B. (2011, November 2). Fraud case seen as a red flag for psychology research. The New York Times, p. A3.

  • Center for Scientific Integrity. (n.d.). Retraction Watch Database. Retrieved from http://retractiondatabase.org/RetractionSearch.aspx

  • *Cortina, J. M., Green, J. P., Keeler, K. R., & Vandenberg, R. J. (2017). Degrees of freedom in SEM: Are we testing the models that we claim to test? Organizational Research Methods, 20, 350–378. https://doi.org/10.1177/1094428116676345

  • Crocker, J., & Cooper, M. L. (2011). Addressing scientific fraud. Science, 334, 1182. https://doi.org/10.1126/science.1216775

  • De Rond, M., & Miller, A. N. (2005). Publish or perish: Bane or boon of academic life? Journal of Management Inquiry, 14, 321–329. https://doi.org/10.1177/1056492605276850

  • Edwards, A. L. (1957). The social desirability variable in personality assessment and research. Fort Worth, TX: Dryden Press.

  • Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One, 4, e5738. https://doi.org/10.1371/journal.pone.0005738

  • Fanelli, D. (2013). Why growing retractions are (mostly) a good sign. PLoS Medicine, 10, e1001563. https://doi.org/10.1371/journal.pmed.1001563

  • Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences of the United States of America, 109, 17028–17033. https://doi.org/10.1073/pnas.1212247109

  • Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science, 7, 45–52. https://doi.org/10.1177/1948550615612150

  • *Franco, A., Malhotra, N., & Simonovits, G. (2016). Underreporting in psychology experiments: Evidence from a study registry. Social Psychological and Personality Science, 7, 8–12. https://doi.org/10.1177/1948550615598377

  • Franzoni, C., Scellato, G., & Stephan, P. (2011). Changing incentives to publish. Science, 333, 702–703. https://doi.org/10.1126/science.1197286

  • *Grieneisen, M. L., & Zhang, M. (2012). A comprehensive survey of retracted articles from the scholarly literature. PLoS One, 7, e44118. https://doi.org/10.1371/journal.pone.0044118

  • Gross, C. (2016). Scientific misconduct. Annual Review of Psychology, 67, 693–711. https://doi.org/10.1146/annurev-psych-122414-033437

  • Hesselmann, F., Graf, V., Schmidt, M., & Reinhart, M. (2017). The visibility of scientific misconduct: A review of the literature on retracted journal articles. Current Sociology, 65, 814–845. https://doi.org/10.1177/0011392116663807

  • Hofmann, B., Helgesson, G., Juth, N., & Holm, S. (2015). Scientific dishonesty: A survey of doctoral students at the major medical faculties in Sweden and Norway. Journal of Empirical Research on Human Research Ethics, 10, 380–388. https://doi.org/10.1177/1556264615599686

  • *John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532. https://doi.org/10.1177/0956797611430953

  • Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95, 1–31. https://doi.org/10.1037/a0017103

  • Louis, K. S., Anderson, M. S., & Rosenberg, L. (1995). Academic misconduct and values: The department’s influence. The Review of Higher Education, 18, 393–422. https://doi.org/10.1353/rhe.1995.0007

  • Margraf, J. (2015). Zur Lage der Psychologie [On the state of psychology]. Psychologische Rundschau, 66, 1–30. https://doi.org/10.1026/0033-3042/a000247

  • Marshall, E. (2000). Scientific misconduct: How prevalent is fraud? That’s a million-dollar question. Science, 290, 1662–1663. https://doi.org/10.1126/science.290.5497.1662

  • *Mazzola, J. J., & Deuling, J. K. (2013). Forgetting what we learned as graduate students: HARKing and selective outcome reporting in I–O journal articles. Industrial and Organizational Psychology, 6, 279–284. https://doi.org/10.1111/iops.12049

  • Michalek, A. M., Hutson, A. D., Wicher, C. P., & Trump, D. L. (2010). The costs and underappreciated consequences of research misconduct: A case study. PLoS Medicine, 7, e1000318. https://doi.org/10.1371/journal.pmed.1000318

  • *Motyl, M., Demos, A. P., Carsel, T. S., Hanson, B. E., Melton, Z. J., Mueller, A. B., … Skitka, L. J. (2017). The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse? Journal of Personality and Social Psychology, 113, 34–58. https://doi.org/10.1037/pspa0000084

  • Münch, R. (2014). Academic capitalism: Universities in the global struggle for excellence. New York, NY: Routledge.

  • Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615–631. https://doi.org/10.1177/1745691612459058

  • *Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48, 1205–1226. https://doi.org/10.3758/s13428-015-0664-2

  • Office of Research Integrity. (2011). Definition of research misconduct. Rockville, MD: US Department of Health and Human Services. Retrieved from http://ori.hhs.gov/definition-misconduct

  • Office of Science and Technology Policy (OSTP). (2000). Federal policy on research misconduct. Federal Register, 65, 76260–76264. Retrieved from https://ori.hhs.gov/federal-research-misconduct-policy

  • Resnik, D. B., Neal, T., Raymond, A., & Kissling, G. E. (2015). Research misconduct definitions adopted by US research institutions. Accountability in Research, 22, 14–21. https://doi.org/10.1080/08989621.2014.891943

  • Rovenpor, D. R., & Gonzales, J. E. (2015). Replicability in psychological science: Challenges, opportunities, and how to stay up-to-date. Psychological Science Agenda, 29(1). Retrieved from www.apa.org/science/about/psa/2015/01/replicability.aspx

  • *Sacco, D. F., Bruton, S. V., & Brown, M. (2018). In defense of the questionable: Defining the basis of research scientists’ engagement in questionable research practices. Journal of Empirical Research on Human Research Ethics, 13, 101–110. https://doi.org/10.1177/1556264617743834

  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. https://doi.org/10.1177/0956797611417632

  • Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7, 670–688. https://doi.org/10.1177/1745691612460687

  • *Stürmer, S., Oeberst, A., Trötschel, R., & Decker, O. (2017). Early-career researchers’ perceptions of the prevalence of questionable research practices, potential causes, and open science. Social Psychology, 48, 365–371. https://doi.org/10.1027/1864-9335/a000324

  • Świątkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30, 111–124. https://doi.org/10.5334/irsp.66

  • Tenopir, C., Allard, S., Douglass, K., Aydinoglu, A. U., Wu, L., Read, E., … Frame, M. (2011). Data sharing by scientists: Practices and perceptions. PLoS One, 6, e21101. https://doi.org/10.1371/journal.pone.0021101

  • *Veldkamp, C. L., Nuijten, M. B., Dominguez-Alvarez, L., van Assen, M. A., & Wicherts, J. M. (2014). Statistical reporting errors and collaboration on statistical analyses in psychological science. PLoS One, 9, e114876. https://doi.org/10.1371/journal.pone.0114876

  • Wager, E., & Williams, P. (2011). Why and how do journals retract articles? An analysis of Medline retractions 1988–2008. Journal of Medical Ethics, 37, 567–570. https://doi.org/10.1136/jme.2010.040964

Armin Günther, Leibniz Institute for Psychology Information, Universitätsring 15, 54296 Trier, Germany,