Editorial

How to Make Sure Your Paper is Desk Rejected

A Practical Guide to Rejection in EJPA

Published Online: https://doi.org/10.1027/1015-5759/a000419

Desk Rejections – Really a Disaster for the Authors?

Virtually all established academic journals receive far more submissions than they can eventually publish, and this is also true for EJPA (Greiff, 2017). Consequently, the majority of papers are ultimately rejected, and only a small portion can be accepted for publication. Many journals report acceptance rates substantially below 20%; for EJPA, acceptance rates currently vary between 10% and 15%. To conserve the limited resources of authors, reviewers, and editors, an efficient way of dealing with this large number of submissions is the desk rejection, that is, a rejection based solely on editorial screening, without external peer review.

It is interesting that many authors consider the desk rejection to be the least favorable outcome in the academic publication process. In fact, one can find a number of editorials and articles that inform readers about the precautions they can take to avoid desk rejections and guidelines for how to convince editors that their manuscript should not be relegated to the sad pile of papers that were dismissed without peer review (e.g., Billsberry, 2014; Galvin, 2014; Sun & Linton, 2014).

Just in case the unsuspecting reader is reading along without paying full attention at this point, we need to ask you to put your tongue in your cheek and be sure to hold it there while reading the rest of this editorial. Okay, now you may proceed…1

Now, we feel that this long-standing implicit prejudice against the desk rejection has gone on for too long. Sadly, no one has tried to look at the desk rejection in a new light. What about its positive aspects? As no one has ever spoken out on behalf of desk rejections before, we feel that it is our duty to point out that they offer a number of distinct advantages to authors:

  • Desk rejections are quick and efficient, and thus you, as the author, do not have to wait for editors to find reviewers, for reviewers to do their reviews, for editors to review the reviews, and so forth. You are set free from this whole long tedious process.
  • Desk rejections do not burden you with the need to rewrite your paper on the basis of any comments given by external reviewers or by the editor. In fact, if you feel like it, you can just entirely dismiss any comments provided by anybody as long as they are related to a desk rejection. What freedom! You don’t have to swallow your pride and write a letter thanking reviewers for their ridiculous ideas.
  • Desk rejections keep you busy and help you avoid boredom because they allow you to spend the weekend searching for new outlets for your work, registering in a new submission system, and writing new and personalized cover letters. Can you possibly think of a better way to spend your weekend?
  • Desk rejections have the potential to increase your skill and adaptability in reformatting your paper to meet specific and idiosyncratic guidelines, a skill that is needed in a number of areas across life (sometimes even considered a 21st century skill).
  • Desk rejections fuel anger, anger fuels motivation, and who doesn’t want to be motivated?

Now, given the distinct advantages of desk rejections, we were surprised to find so much work focusing on how to avoid them, but absolutely nothing when we searched for guidelines on how to ensure that your paper is desk rejected. Thus, the aim of this editorial is to fill this gap and to provide authors who aim to achieve desk rejections with some guidelines and practical advice on how to maximize their chances of actually receiving a desk rejection from EJPA.

Seven Reasons and Seven Guidelines to Ensure Your Paper Will Be Desk Rejected

From our editorial perspective, there are seven partly distinct, partly overlapping core reasons for desk rejections in EJPA, resulting in seven practical recommendations. Usually, following one of them closely will suffice for the desired outcome, but to be extra sure that your paper will not be sent out for external review, the advanced user will combine several of them.

(1) Scope of EJPA

Guideline 1: Submit an article that is grossly outside the journal's scope

EJPA focuses on high-quality research in assessment and publishes articles that provide relevant information on both theoretical and applied developments in the field of psychological assessment and its disciplines (please see also the website http://www.hogrefe.com/j/ejpa for more details). Ignoring the journal's focus on assessment by submitting, for instance, a paper on the development of partnership quality during holiday time is a good start for ensuring a desk rejection. If you are an even bigger thrill-seeker, you may consider inflating the minor parts of your submission that are related to assessment (you know, the assessment-related parts that are a natural part of all empirical psychological research) to the largest extent possible. However, be careful when doing so. This approach should be reserved for advanced users only, as there is some risk that the handling editor might actually consider your paper eligible and refrain from desk rejecting it.
(2) Added Value and New Knowledge

Guideline 2a: Don't create substantial new knowledge

As an outlet in the field of psychological assessment, EJPA aims to publish papers that can be expected to have a strong and lasting impact on the community. As a good starting point on your way to a desk rejection, make sure to present research questions that are extremely specific and offer hardly any advancement of knowledge. To further ensure that this is adequately perceived by the editor, omit any mention of impact and implications and fail to provide any rationale for why your manuscript should be of interest to the assessment community.

Guideline 2b: Claim that you have created substantial new knowledge

A somewhat different but equally effective way to ensure a desk rejection is to simply claim that your study substantially adds new knowledge by, for instance, attesting to an instrument's reliability and validity without actually substantiating these claims. In many cases, the construct being assessed is new, and little prior systematic, empirical research exists. Ignoring this fact while providing some initial and tentative but (important!) very limited evidence, such as some kind of reliability (usually Cronbach's alpha) and some kind of CFA for dimensionality, while at the same time not providing any information on convergent and discriminant validity, will move your paper a long way toward a desk rejection. Please note that specific studies with tentative evidence, particularly if they are innovative, might sometimes enter the review process, so this guideline should be reserved for advanced users only.
(3) Use of Methods and Statistical Analyses

Guideline 3a: Use inappropriate and simplistic methods to analyze your data

As an alternative to the previous Guideline 2, try adopting statistical analyses that are simplistic and do not fit your research questions. Do not waste any time explaining how the analyses in your submission serve to answer your research questions. Because EJPA requires state-of-the-art methodological approaches and a fully appropriate and convincing application of these methods, skipping this part of your submission, combined with some vague and unclear descriptions of your analyses, is one of the best ways of absolutely guaranteeing a desk rejection.

Guideline 3b: Report obviously contradictory and inconsistent results

Guideline 3b can be combined perfectly with Guideline 3a, and together they make for a great desk rejection. For instance, EJPA has started (partly motivated by the replicability crisis currently penetrating psychological science) to implement some means of statistical prechecking, and a large number of hits is sure to delight any editor. To give just one example, many papers contain some kind of confirmatory factor analysis. Often, a visually attractive figure is added to the text; alternatively, the tested models are elaborately explained in the text. Both provide a really good basis for some random checking, for instance, of whether the correct degrees of freedom (df) are reported. Differences between the df reported in the manuscript and the df calculated for the described model are a sure-fire way to get desk rejected. Of note, about 50% of the papers that get rejected because of such inconsistencies are later resubmitted in corrected form.
(4) The Sample and the Data

Guideline 4: Create an implicit or explicit screw-up with the sample or the data set

Guideline 4 can be used either alone or in combination with the previous Guidelines 2 and 3. If you're a real pro, you wouldn't settle for only inappropriate methods or inconsistent reporting; you would combine them with a sample and a data set that are insufficient, ideally failing on all counts: sample size, representativeness of the target population, and the measures employed. For instance, if your submission is about the workplace, use a small student sample, and if you want to establish some form of construct validity, use measures that are problematic and for which little validity evidence exists (or even better: none at all). Obviously, there are a number of different ways this can be accomplished, relating either to the underlying data set or to the sample these data were collected on. In fact, several aspects of data quality and sample size have been discussed repeatedly in the literature (see, e.g., Bollen, 1990; MacCallum, Browne, & Sugawara, 1996; Reips, 2001), and there are a number of excellent ways to make sure your paper is desk rejected in this category by not following any of these suggestions and by failing to take any of them into consideration.
(5) Editorial Advice and Journal Policy

Guideline 5: Bluntly ignore any available information on journal policy

It is common for journals – among them EJPA – to now and then publish information on the type of research and content they are interested in. Some of this can be found on the journal's website. Editorials are also often used to communicate to readers some advice about policy and scope. Skimming some of the back issues from the journal to gather an idea of what is usually published can also be insightful. Our advice here is: Just don't bother. Don't familiarize yourself with EJPA and its contents, don't check out the website, and don't read any of the editorials. For example, according to its editorial policy, EJPA does not publish mere translations of existing measures into a new language unless the manuscript additionally provides an indication of measurement invariance with the original scale or, at the very least, strong additional validity insights that would be of interest to readers of all languages (Ziegler & Bensch, 2013). So, a great way to implement Guideline 5 is to submit a translation that meets none of the above and then go ahead and claim validity for the translated scale even though you don't report any empirical evidence for it. Another example is the ABC of test construction, which states that each paper should detail (1) what the instrument measures, (2) for what purpose, and (3) in which target population (Ziegler, 2014). So it's easy to get a desk rejection here: Just don't describe what the instrument measures, don't provide a purpose, and certainly don't mention the target population. If you want to have a lasting effect, write a follow-up letter to the editor after receiving the desk rejection in which you make clear that you really feel the journal needs to rethink its editorial policy and that they are outright stupid for not publishing translations. Most editors will appreciate being called stupid and are really likely to reverse their decisions when you do this.
(6) One Paper or Many Papers?

Guideline 6: Go for "salami slicing" and cut your data into as many papers as possible

Collecting data is hard work, and journal editors just might actually know this too, especially in the realm of assessment, where large samples of participants who work on many instruments are needed. Why not just go ahead and slice up the data into several papers? Have each paper take just a slightly different angle, for example, one paper can focus on test-retest reliability, another one on construct validity, and a third one on test-criterion validity. And hey, you can even submit all the papers to the same journal, and if you really want to make a splash, do so more or less simultaneously (let's say within a week or so). The advantage is that you can easily receive several desk rejections at once – a nice accomplishment in the world of "publish-or-perish."
(7) Formal Requirements and Manuscript Structure

Guideline 7: Be highly creative when it comes to formal style, language, and manuscript structure

We take a somewhat different approach for this final recommendation. This guideline is an excellent fallback option in case you find it difficult to relate to any of the previous guidelines. For example, if you are a rebel who hates guidelines in general, just don't follow any and go wild with your formatting, language, and manuscript structure. The choices are endless, so your inner creative artist can take control: Randomly change the size and type of font throughout the manuscript, integrate the Results and Discussion sections, or even better, entirely omit the Discussion (who needs discussions anyway?), use more manuscript pages for the references than for the actual text, ignore page and word limits, and don't check your final document before approval – there are just so many ways. We can assure you that many editors will look favorably upon such creativity when deciding which manuscripts should be exiled to the realm of desk rejections and which shouldn't.
If your paper has been, or will be, desk rejected on the basis of any of these seven reasons, try to see the desk rejection as a learning experience or, even better, as a kind of empirical proof of this theoretical paper.2
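The degrees-of-freedom check mentioned under Guideline 3b is simple arithmetic: for a confirmatory factor analysis, the model df equal the number of unique elements in the observed covariance matrix, p(p + 1)/2 for p indicators, minus the number of freely estimated parameters. The helper below is a hypothetical illustration for simple-structure models with equally sized, correlated factors whose variances are fixed to 1 (so the free parameters are p loadings, p residual variances, and the factor correlations) – a sketch of the calculation an editor might do by hand, not EJPA's actual screening procedure.

```python
def cfa_df(indicators_per_factor: int, n_factors: int = 1) -> int:
    """Degrees of freedom for a simple-structure CFA.

    Assumes each factor has the same number of indicators, every
    indicator loads on exactly one factor, factor variances are fixed
    to 1, and factors are allowed to correlate.
    """
    p = indicators_per_factor * n_factors      # observed variables
    unique_moments = p * (p + 1) // 2          # variances + covariances
    free_params = (
        p                                      # factor loadings
        + p                                    # residual variances
        + n_factors * (n_factors - 1) // 2     # factor correlations
    )
    return unique_moments - free_params


# One factor, 6 indicators: 21 unique moments - 12 parameters = 9 df
print(cfa_df(6))                 # → 9
# Two correlated factors, 3 indicators each: 21 - 13 = 8 df
print(cfa_df(3, n_factors=2))    # → 8
```

If the df reported in a manuscript for such a model differ from this count, either the model description or the reported fit statistics are inconsistent – exactly the kind of mismatch the prechecking described above is bound to flag.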

Conclusion

With this editorial, our goal was to fill a striking gap in the existing literature on important author information for academic publishing and to provide authors with some guidance on how to ensure desk rejections, guidance that applies to EJPA but also, in a more general way, to many academic journals. As the current and former editors of EJPA, we firmly believe that if you follow these guidelines closely and diligently, the desired outcome, a desk rejection from EJPA or another journal, is highly likely, if not to say almost certain.

If, however, your desire is to avoid getting your paper desk rejected and to see it enter the academic review and publication cycle, our advice is brief and depends on whether you want to take action beforehand or in retrospect.

For action in retrospect, you may simply consider rejecting the editorial decision of rejection by writing a rejection of the rejection to the editor. Chapman and Slade (2015) provide a great template that can be adapted to tackle all kinds of unwanted rejections. For action beforehand, we advise you to simply ignore all the well-intentioned guidelines provided in this editorial, or maybe even to work actively against them.

Whichever of the different piles you would like to see your manuscript land on, we wish you all the best for this enterprise and hope that the recommendations given here will guide you toward your chosen goals.

The authors thank Jane Zagorski for her excellent editorial support (as always!), but also for several invaluable suggestions with regard to the content of this editorial.

1Explicit disclaimer: The following passages are meant humorously. It was not our intention to aggravate or offend anyone. Please try to laugh, as science, and even desk rejections, should be at least some fun!

2The editors of this journal would gladly be willing to consider an empirical follow-up study on the theoretical propositions made here for publication as a guest editorial in EJPA.

References

  • Billsberry, J. (2014). Desk-rejects. 10 top tips to avoid the cull. Journal of Management Education, 38, 3–9. doi: 10.1177/1052562913517209

  • Bollen, K. A. (1990). Overall fit in covariance structure models: Two types of sample size effects. Psychological Bulletin, 107, 256–259. doi: 10.1037/0033-2909.107.2.256

  • Chapman, C. & Slade, T. (2015). Rejection of rejection. A novel approach to overcoming barriers to publication. British Medical Journal, 351. Online publication. doi: 10.1136/bmj.h6326

  • Galvin, P. (2014). The view from the “other side of the desk”. Journal of Management & Organization, 20, 711–714. doi: 10.1017/jmo.2014.69

  • Greiff, S. (2017). The field of psychological assessment. Where it stands and where it’s going. A personal analysis of foci, gaps, and implications for EJPA. European Journal of Psychological Assessment, 33, 1–4. doi: 10.1027/1015-5759/a000412

  • MacCallum, R. C., Browne, M. W. & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149. doi: 10.1037/1082-989X.1.2.130

  • Reips, U. D. (2001). The Web Experimental Psychology Lab. Five years of data collection on the Internet. Behavior Research Methods, Instruments, & Computers, 33, 201–211. doi: 10.3758/bf03195366

  • Sun, H. & Linton, J. D. (2014). Structuring papers for success. Making your paper more like a high impact journal than a desk reject. Technovation, 34, 571–573. doi: 10.1016/j.technovation.2014.07.008

  • Ziegler, M. (2014). Stop and state your intentions! Let’s not forget the ABC of test construction. European Journal of Psychological Assessment, 30, 239–242. doi: 10.1027/1015-5759/a000228

  • Ziegler, M. & Bensch, D. (2013). Lost in translation: Thoughts regarding the translation of existing psychological measures into other languages. European Journal of Psychological Assessment, 29, 81–83. doi: 10.1027/1015-5759/a000167

Samuel Greiff, Cognitive Science & Assessment, University of Luxembourg, 11, Porte des Sciences, 4366 Esch-sur-Alzette, Luxembourg
Matthias Ziegler, Institut für Psychologie, Humboldt Universität zu Berlin, Rudower Chaussee 18, 12489 Berlin, Germany