Editorial

Methodology Turns 10

The Future of Empirical Social Science Methodology

Published Online: https://doi.org/10.1027/1614-2241/a000092

Since its inception in February 2005, Methodology, the official journal of the European Association of Methodology, has been an online journal with a strong European vocation, dedicated “to promote research and the development of empirical research methods in the fields of behavioral, social, educational, health and economic sciences, as well as in the field of evaluation research” (Ato & Eid, 2005; Ato & Hox, 2009).

In 2009, the then editors of Methodology, Manuel Ato and Joop Hox, wrote an editorial looking back at the journal’s first 4 years (Ato & Hox, 2009). Now, 6 years later, two new editors, Peter Lugtig of Utrecht University and Nekane Balluerka of the University of the Basque Country, have taken over the editorship. As Methodology celebrates its 10th birthday, it is time to look both back and forward.

Just like the editors before us, we firmly believe that the methodology underpinning the social sciences has many similarities across disciplines. Any differences between the sciences mostly reflect differences in the types of research questions asked, in research designs and data analysis practices, and in historical developments. Accordingly, many of the articles published in Methodology have taken a cross-disciplinary perspective, and many developments in social science methodology cut across disciplines.

A large number of articles in Methodology have explored the methodology of the Generalized Linear Model (Schweizer, 2010; Voelkle & McKnight, 2012). Simulation studies have, for example, examined the properties of multilevel models (Bell, Morgan, Schoeneberger, Kromrey, & Ferron, 2014; Pacagnella, 2011). Latent variables, and how to model their measurement, are probably the most frequently covered topic, whether in the context of psychometrics (Balluerka, Plewis, Gorostiaga, & Padilla, 2014; Botella & Suero, 2012; Gonzalez-Betanzos & Abad, 2012) or of factor analysis to estimate equivalence across groups (Kankaraš & Moors, 2011; Lugtig, Boeije, & Lensvelt-Mulders, 2012; Steinmetz, 2013). Other articles have compared different approaches to dealing with violations of model assumptions (Blanca, Arnau, Lopez-Montiel, Bono, & Bendayan, 2013; Haupt, Lösel, & Stemmler, 2014; Schmider, Ziegler, Danay, Beyer, & Bühner, 2010; Wolff Smith & Beretvas, 2014).

Apart from methodological advances in data analysis, another important development in social science research methodology is the growing availability of software tools. Here, a number of articles have discussed how to implement new statistical modeling techniques in software packages (e.g., Flora, 2011; Grilli & Variale, 2014). We envisage that developments in this field will continue in the coming years. The growing range of statistical modeling possibilities will increase demand for articles explaining which model should be used when, and how. For this reason, future issues of Methodology will feature a section with tutorial articles that show how to implement statistical modeling techniques in widely used software packages. To ensure that such articles can be used by applied social science researchers, we will publish the data and code accompanying these articles.

It is our continuing goal to feature special thematic issues edited by prominent researchers in their fields. Since 2009, four special issues have been published. In 2009, Michael Eid and Fridtjof Nussbeck edited an issue celebrating the 50th anniversary of the multitrait-multimethod matrix (Eid & Nussbeck, 2009). In 2010, Elmar Schlueter and Peter Schmidt edited an issue on “survey experiments” (Schlueter & Schmidt, 2010), while in the same year Andries Van der Ark and Jeroen Vermunt edited an issue on “new developments in missing data analysis” (Van der Ark & Vermunt, 2010). In 2013, Gordon Willis and Hennie Boeije contributed a special issue on the “systematic reporting of questionnaire development and pretesting” (Willis & Boeije, 2013). Readers of Methodology who would like to edit a special issue, or who have suggestions for future special issues, are kindly invited to e-mail us their ideas.

As an online journal with four issues per volume and an annual print compendium, Methodology is available not only via Hogrefe’s PsyJOURNALS platform (www.psyjournals.com) but also via the American Psychological Association’s PsycARTICLES full-text database, which ensures a broad, appropriate audience and good accessibility for methodologists and applied researchers. We have also added an “advance articles” feature to the platform, so that articles are published online shortly after they have been accepted.

We thank all authors, members of the editorial board, reviewers, and the Hogrefe staff for their confidence in the editors. Our special thanks go to Michael Eid, Manuel Ato, Julio Sanchez-Meca, and Joop Hox, who each served as editor of Methodology for a number of years.

We also encourage all our readers to continue submitting papers to Methodology on topics of interest to the journal, including data analysis, research methodology, and psychometrics.

References

  • Ato, M., & Eid, M. (2005). Methodology: A European perspective. Methodology, 1, 1. doi: 10.1027/1614-1881.1.1.1

  • Ato, M., & Hox, J. (2009). Methodology – the first four years. Methodology, 5, 1–2. doi: 10.1027/1614-2241.5.1.1

  • Balluerka, N., Plewis, I., Gorostiaga, A., & Padilla, J. L. (2014). Examining sources of DIF in psychological and educational assessment using multilevel logistic regression. Methodology, 9, 71–79. doi: 10.1027/1614-2241/a000076

  • Bell, B. A., Morgan, G. B., Schoeneberger, J. A., Kromrey, J. D., & Ferron, J. M. (2014). How low can you go? Methodology, 10, 1–11. doi: 10.1027/1614-2241/a000062

  • Blanca, M. J., Arnau, J., Lopez-Montiel, D., Bono, R., & Bendayan, R. (2013). Skewness and kurtosis in real data samples. Methodology, 9, 78–84. doi: 10.1027/1614-2241/a000057

  • Botella, J., & Suero, M. (2012). Managing heterogeneity of variance in studies of reliability generalization with alpha coefficients. Methodology, 8, 71–80. doi: 10.1027/1614-2241/a000039

  • Eid, M., & Nussbeck, F. W. (2009). The multitrait-multimethod matrix at 50! Methodology, 5, 71. doi: 10.1027/1614-2241.5.3.71

  • Flora, D. B. (2011). Two-part modeling of semicontinuous longitudinal variables. Methodology, 8, 145–156. doi: 10.1027/1614-2241/a000032

  • Gonzalez-Betanzos, F., & Abad, F. J. (2012). The effects of purification and the evaluation of Differential Item Functioning with the likelihood ratio test. Methodology, 8, 134–145. doi: 10.1027/1614-2241/a000046

  • Grilli, L., & Variale, R. (2014). Specifying measurement error correlations in Latent Growth Models with multiple indicators. Methodology, 10, 117–125. doi: 10.1027/1614-2241/a000082

  • Haupt, H., Lösel, F., & Stemmler, M. (2014). Quantile regression analysis and other alternatives to ordinary least squares regression. Methodology, 10, 81–91. doi: 10.1027/1614-2241/a000077

  • Kankaraš, M., & Moors, G. (2011). Measurement equivalence and extreme response bias in the comparison of attitudes across Europe. Methodology, 7, 68–80. doi: 10.1027/1614-2241/a000024

  • Lugtig, P., Boeije, H., & Lensvelt-Mulders, G. J. L. M. (2012). Change? What change? An exploration of the use of mixed-methods research to understand longitudinal measurement variance. Methodology, 8, 115–123. doi: 10.1027/1614-2241/a000043

  • Pacagnella, O. (2011). Sample size and accuracy of estimates in multilevel models: New simulation results. Methodology, 7, 111–120. doi: 10.1027/1614-2241/a000029

  • Schlueter, E., & Schmidt, P. (2010). Special issue: Survey experiments. Methodology, 6, 93–95. doi: 10.1027/1614-2241/a000010

  • Schmider, E., Ziegler, M., Danay, E., Beyer, L., & Bühner, M. (2010). Is it really robust? Reinvestigating the robustness of ANOVA against violations of the normal distribution assumption. Methodology, 6, 147–151. doi: 10.1027/1614-2241/a000016

  • Schweizer, K. (2010). Improving the interpretability of the variances of latent variables by uniform and factor-specific standardizations of loadings. Methodology, 6, 152–159. doi: 10.1027/1614-2241/a000017

  • Steinmetz, H. (2013). Analyzing observed composite differences across groups: Is partial measurement invariance enough? Methodology, 9, 1–12. doi: 10.1027/1614-2241/a000049

  • Van der Ark, A., & Vermunt, J. (2010). New developments in missing data analysis. Methodology, 6, 1–2. doi: 10.1027/1614-2241/a000001

  • Voelkle, M. C., & McKnight, P. E. (2012). One size fits all? A Monte Carlo simulation on the relationship between repeated measures (M)ANOVA and Latent Curve Modeling. Methodology, 8, 23–38. doi: 10.1027/1614-2241/a000044

  • Willis, G., & Boeije, H. (2013). The survey field needs a framework for the systematic reporting of questionnaire development and pretesting. Methodology, 9, 85–86. doi: 10.1027/1614-2241/a000070

  • Wolff Smith, L. J., & Beretvas, S. N. (2014). The impact of using incorrect weights with the multiple membership random effects model. Methodology, 10, 31–42. doi: 10.1027/1614-2241/a000066

Peter Lugtig, Department of Methods and Statistics, Utrecht University, P.O. Box 80140, 3508 TC Utrecht, The Netherlands, +31 30 253-7982, +31 30 253-5797
Nekane Balluerka, Departamento de Psicología Social y Metodología de las Ciencias del Comportamiento, University of the Basque Country, 20018 San Sebastián, Spain, +34 943 018-339, +34 943 015-670