Original Article

Comparing the Performance of Agree/Disagree and Item-Specific Questions Across PCs and Smartphones

Published Online: https://doi.org/10.1027/1614-2241/a000151

Abstract. The use of agree/disagree (A/D) questions is a common technique for measuring attitudes. For instance, this question format is employed frequently in the Eurobarometer and the International Social Survey Programme (ISSP). Theoretical considerations, however, suggest that A/D questions require complex cognitive processing. Many survey researchers have therefore recommended item-specific (IS) questions, which appear to be less burdensome. Parallel to this methodological discussion runs the debate about the use of mobile devices for responding to surveys. Until now, however, evidence has been lacking on whether answering surveys on mobile devices affects the performance of established question formats. In this study, implemented in the Netquest panel in Spain (N = 1,476), we investigated the cognitive effort and response quality associated with A/D and IS questions across PCs and smartphones. For this purpose, we applied a split-ballot design defined by device type and question format. Our analyses revealed longer response times for IS questions than for A/D questions, irrespective of device type and scale length. The IS questions also produced better response quality than their A/D counterparts. All in all, the findings indicate that respondents answer IS questions more conscientiously than A/D questions.
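To make the experimental setup concrete, the following is a minimal sketch, not the authors' code, of a 2 × 2 split-ballot assignment by device type and question format with a comparison of median response times between the two formats. The cell-assignment scheme, the timing distributions, and all names are illustrative assumptions; only the direction of the format difference mirrors the finding reported in the abstract.

```python
# Illustrative sketch (not the study's actual code): a 2 x 2 split-ballot
# assignment by device type and question format, plus a comparison of
# median response times between the two formats. The timing distributions
# are invented; only the direction of the difference mirrors the abstract.
import random
import statistics

DEVICES = ("PC", "smartphone")                  # device-type factor
FORMATS = ("agree/disagree", "item-specific")   # question-format factor

rng = random.Random(42)  # fixed seed for reproducibility

# Randomly assign each of the N = 1,476 respondents to one of four cells.
respondents = [(rng.choice(DEVICES), rng.choice(FORMATS)) for _ in range(1476)]

def simulate_response_time(question_format: str) -> float:
    """Draw a hypothetical per-item response time in seconds."""
    base = 6.0 if question_format == "item-specific" else 5.0
    return max(0.5, rng.gauss(base, 1.5))  # truncate implausibly small values

# Collect simulated times per question format, pooling across devices.
times: dict[str, list[float]] = {fmt: [] for fmt in FORMATS}
for _device, question_format in respondents:
    times[question_format].append(simulate_response_time(question_format))

for fmt in FORMATS:
    print(f"{fmt}: median response time = "
          f"{statistics.median(times[fmt]):.2f} s (n = {len(times[fmt])})")
```

In the actual study, response times were presumably captured as paradata during survey completion rather than simulated; the sketch only illustrates the factorial structure of the design and the format-level comparison.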
