Open Access Original Article

The role of collaborative argumentation in future teachers' selection of online information

Published online: https://doi.org/10.1024/1010-0652/a000307

Abstract

(Future) teachers should acquire skills in sourcing science-related information online so that they can use evidence appropriately in their pedagogical practice. To use such evidence successfully, it is vital that teachers critically question their selection of online information. Based on findings from collaborative learning, we hypothesized that collaboration promotes teachers' critical elaboration of their selection of online educational information. Additionally, collaboration allows for social comparison and may thus impact teachers' self-efficacy in seeking information. In a 2 × 2 mixed-design study with the between-participants factor reasoning (individual vs. collaborative) and the within-participants factor self-reported information seeking self-efficacy (pre vs. post reasoning task), each of N = 83 future teachers individually sought online information regarding the educational use of mobile phones in classrooms. This constituted a realistic Internet search in a natural setting. Based on their particular searches, participants were asked to select the online sources they perceived as relevant for reasoning about whether mobile phones should be used in class. To foster reflection on how they selected information, participants were asked either to reason individually (individual group, n = 33) or to chat collaboratively (collaboration group, n = 50 in 25 dyads) about their selections. Participants in both groups reported higher information seeking self-efficacy after the reasoning task. Yet participants who collaboratively reflected on their selections showed elaborated reasoning behavior more frequently than did participants in the individual group. Nonetheless, participants in both groups referred to the criteria that guided their selection (i.e., criteria related to the information, the provider of information, or the media) with the same frequency. Considering the potential benefits and challenges of collaboration, we discuss the findings in terms of how to promote future teachers' ability to critically reflect on their selection of online educational information.

The role of collaborative argumentation among future teachers in the selection of online information

Summary. (Future) teachers should also ground their pedagogical practice in evidence from educational science. To do so, they must acquire skills for appropriately searching for science-related online information. To make meaningful use of evidence from online information, it is important to critically question one's selection of online information. Based on research on collaborative learning, we assumed that collaboration can foster critical, reflective engagement with the selection of online information. In addition, collaboration offers the opportunity for social comparison with others and may therefore influence teachers' perceived self-efficacy when searching for information. In a 2 × 2 mixed design with the between-participants factor reasoning (individual vs. collaborative) and the within-participants factor self-reported self-efficacy in searching for online information (pre vs. post reasoning task), N = 83 student teachers individually searched for online information on the topic of mobile phone use in class. Based on their search results, they selected the Web contents that they judged relevant for deciding on their didactic approach. The student teachers were then asked to discuss, either alone (individual group, n = 33) or together with another student teacher via chat (collaborative group, n = 50 in 25 dyads), which criteria had guided their selection of Web contents. Afterwards, student teachers in both conditions reported higher self-efficacy regarding the search for online information. However, the student teachers who exchanged views collaboratively justified their selection in a more argumentatively elaborated way. Furthermore, student teachers in both conditions referred with similar frequency to the respective criteria they used for their selection (i.e., criteria relating to the information, the authors, or the online media). The role of collaboration in sourcing online educational information is discussed with regard to fostering competent and critical reflection on possible courses of action.

Future teachers' sourcing of online educational information

Teachers should ground their professional practices in research knowledge in order to improve their teaching and thus students' learning (Bauer & Prenzel, 2012; Bromme, Prenzel & Jäger, 2014). Yet building on research knowledge typically requires complex processes, such as searching for evidence or evaluating its quality (Bromme et al., 2014; Rousseau & Gunia, 2016). (Future) teachers often report lacking sufficient time to search for the best evidence. They also frequently lack adequate skills to locate, critically evaluate, and effectively use educational research that might strengthen their educational practices (Duke & Ward, 2009; see also the "theory-practice gap": Bråten & Ferguson, 2015). Instead, they tend to prefer experiential knowledge (i.e., knowledge from teacher colleagues, from their own teaching experiences, or conventional wisdom). However, such experiential knowledge may conflict with knowledge derived through systematic research (Bråten & Ferguson, 2015; Williams & Coles, 2007).

Given these obstacles of skill and time in accessing original research, the Internet offers important opportunities for many teachers, who frequently use it to search for educational evidence (Bougatzeli, Douka, Bekos & Papadimitriou, 2017; Williams & Coles, 2007). One reason for this may be the abundance of educational information that is easily accessible online (e.g., in open-access education science journals or colleagues' blogs). This means that teachers can access online information easily and without investing much time (Williams & Coles, 2007). The Internet also allows in-service teachers to access science-related information long after their time at university. This becomes particularly important considering that the education science knowledge teachers acquired by the time they left university may become outdated as pedagogical research continues to evolve (Bromme et al., 2014).

However, research indicates that many (future) teachers also experience challenges when sourcing information from the Internet: They report frustration and worry about being unable to find accurate information or evaluate it appropriately (Chen, Chien & Kao, 2019; Iding, Crosby, Auernheimer & Klemm, 2008). Furthermore, different teachers often assign widely different accuracy ratings to the same website (Iding et al., 2008). Teachers also generally use only one search engine (i.e., Google; Bougatzeli et al., 2017), although the choice of search engine strongly shapes which information is retrieved.

Thus, to successfully synthesize scientific evidence from the vast amount of online educational information, (future) teachers must develop the respective competencies (see also the European Digital Competence Framework for Educators: Caena & Redecker, 2019). They must be able not just to evaluate whether diverse information is complete, correct, and appropriate for their specific teaching setting, but also to evaluate the information under the conditions of the Internet (such as the pre-filtering of information through a search engine's algorithm; see Guzman & Lewis, 2020). Essential elements of any successful sourcing of online information (that is, seeking, evaluating, and using online information) include – among others – 1) critical reflection on one's selection of online information and 2) one's self-efficacy in seeking (online) information (Andreassen & Bråten, 2013; Caena & Redecker, 2019; Hendriks et al., 2020).

By examining an authentic online sourcing scenario, this study investigates whether future teachers select science-related information, how they reason about their selection of online information, and how often they refer to the criteria that may have guided this selection. So far, relatively little is known about how (future) teachers source online educational information and whether they critically reflect on the criteria that may have guided their sourcing process. We therefore first review the literature on how people source online information in general, to draw conclusions about future teachers' sourcing of online information. We then describe why the ability to critically reflect on criteria that may have influenced this sourcing is crucial for synthesizing online scientific information into (future) teachers' professional practices. Furthermore, we describe the potential of collaboration to promote (future) teachers' critical reflection on such criteria, and the relevance of teachers' information seeking self-efficacy (ISSE). The present study investigated 1) whether future teachers actually select science-related information about an educational topic when searching for it online; 2) the effects of collaborative versus individual reasoning on critical reflection about this selection; and 3) potential effects on self-reported ISSE.

Criteria guiding the evaluation of online information

Empirical studies have identified several kinds of criteria that are generally used by information seekers, to evaluate whether they can rely on online information (Choi & Stvilia, 2015; Sundar, 2008). These criteria often relate to meta information – that is, information about the individuals and organizations that create and provide content, or information about when, where, in what context, and for what purpose the content was created and provided (also called source information: Bråten & Braasch, 2018). In online contexts, the criteria often refer to three levels: 1) the information itself (e.g., comprehensibility); 2) the providers of information (e.g., their expertise); and 3) the online media (here: any media digitally encodable and connected to the Internet).

Given the diverse and ever-changing manifestations of media offerings online, it seems worthwhile to also consider media affordances when aiming to understand how teachers may evaluate information found on the Internet. Media affordances represent the dynamic relations among users, how users perceive an online environment that is accompanied by certain tools, and how and why users typically interact with these tools (e.g., commenting or clicking 'like' below a video) (Evans, Pearce, Vitak & Treem, 2016). Hence, affordances – such as interactivity, navigability, or modality – do not just represent the tools available to users (e.g., a button on a website), but rather represent users' typical use of the online media. When considering the affordances of search engines and how teachers typically navigate them, it is notable that teachers often use only one search engine (Bougatzeli et al., 2017). Furthermore, teachers – like other users – may be attracted to the results shown at the top of the search results page and thus by a search engine's affordances (Haas & Unkel, 2017). They hence risk being guided by a search engine's algorithm, which pre-selects information. Table 1 lists exemplary studies on criteria at the levels of information, provider, and media. These studies indicate certain types of criteria as being influential when it comes to information seekers' evaluation of online information in general.

Table 1 Research examples alluding to the relevance of diverse criteria for credibility evaluations of online information

The benefits gained from relying on such criteria can be manifold. For example, if teachers cannot exclusively rely on their prior knowledge (as it might be out of date) or do not have the time to critically evaluate the information they have found, it may be helpful for them to draw on certain criteria for their sourcing. This allows them to effectively decide whether they can rely on the information – based, for instance, on its provider's expertise or the trustworthiness of a website (Bromme & Goldman, 2014; Choi & Stvilia, 2015). In this sense, drawing on such criteria can help teachers to decide which sources are more trustworthy or provide more credible information (Bråten & Braasch, 2018; Bromme & Goldman, 2014; Rousseau & Gunia, 2016).

The importance of critical reflection around any use of criteria

While drawing on certain criteria can be useful, it can also influence a teacher's evaluation of information in a dysfunctional way. Highly ranked search results can be misleading, for example, since the ranking of a piece of information within the search results does not necessarily indicate the quality of that information (Bougatzeli et al., 2017). Similarly, teachers' familiarity with a certain website may lead them to select information that only confirms their existing knowledge (Iding et al., 2008). The use of criteria thus poses some risks. Critical reflection – regarding how, when, and why we make use of certain criteria – is considered an essential competency that can help to source online information effectively and efficiently (Hendriks et al., 2020).

While studies on how to promote sourcing competencies indicate that individuals do use certain criteria – at first – when checking for trustworthiness, they generally do not critically reflect on the results of this 'trustworthiness check' (Brante, 2019). Researchers have, however, successfully conducted trainings to improve individuals' critical reflection on, and justification of, their use of certain criteria (Bråten, Brante & Strømsø, 2019; Pérez et al., 2018). These trainings have mainly addressed individual measures to foster sourcing competencies. Implementing collaborative learning might allow such interventions to foster additional, deeper critical reflection on one's sourcing activities. In the following, we outline the potential of collaborative argumentation and why it might help (future) teachers to reflect critically on their use of criteria when sourcing online information.

The potential of collaborative argumentation to foster critical reflection

Educational research on collaboration points to several benefits of learning with others. It helps individuals gain knowledge and skills (e.g., writing skills within a Wiki environment) and can positively affect their self-efficacy, especially at the tertiary education level (Chen, Wang, Kirschner & Tsai, 2018). In a study focusing on the potential of collaboration for synthesizing evidence, for example, future teachers who engaged in collaboration analyzed pedagogical problems in a more reflective and evidence-based manner (Csanadi, Kollar & Fischer, 2020). Collaborative engagement also seems to provide a promising setting for sharing, interpreting, and critically examining scientific information in online contexts (Hendriks et al., 2020). From an educational perspective, collaboration is thought of as a "co-elaboration of conceptual understanding and knowledge" (Baker, 2015, p. 4), accompanied by communicative activities – such as explaining and understanding ideas, representing knowledge and concepts, getting multiple perspectives from others, and arguing collaboratively (Asterhan & Schwarz, 2016; Chinn & Clark, 2013).

The latter, collaborative argumentation, occurs in dialogic debates in which two or more people engage – thereby usually exchanging statements and questions, making claims, supporting their claims with reasons and evidence, and critically questioning the others' arguments. This may result in agreement or disagreement among the learners (Chinn & Clark, 2013). The dialogic setting can thus stimulate the explicit articulation of dialog partners' ideas by asking them for clarification and critical elaboration on the reasons behind these ideas. It thereby promotes high-quality cognitive elaboration processes that are assumed to elicit changes in knowledge structures (Csanadi et al., 2020; Van Boxtel, van der Linden & Kanselaar, 2000). Accordingly, explicit forms of cognitive elaboration processes are often considered indicators for the depth of one's reflection processes (Csanadi et al., 2020; Felton & Kuhn, 2001; Thiebach, Mayweg-Paus & Jucks, 2016). In this vein, a critical reflection of teachers' own sourcing could be characterized by their manner of elaboration. That is, for instance, whether they give reasons for why certain criteria had guided the selection, whether they support their own or the others' perspectives with further explanation or arguments, and/or whether they report uncertainty about whether their criteria are appropriate.

A meta-analysis on studies within the field of argumentation-based computer-supported collaborative learning (ABCSCL) – as a means of learning and collaboratively debating with others, using a variety of technological and pedagogical strategies – indicates that engaging in collaborative argumentation can positively affect domain-related knowledge construction, acquisition of argumentation skills, and elaboration of materials (Noroozi, Weinberger, Biemans, Mulder & Chizari, 2012). Furthermore, among other functions such as finding consensus or justifying knowledge claims, an important function of argumentation is critical reflection on one's own reasoning (Hoffmann, 2016). The positive outcomes of collaborative argumentation may be a result of its allowing the partners to successfully integrate multiple perspectives (Veerman, Andriessen & Kanselaar, 2002); interact transactively by challenging the partners' knowledge and arguments (Felton & Kuhn, 2001; Thiebach et al., 2016); and use high-quality argumentation strategies, such as critically questioning one's own and others' arguments (Mayweg-Paus, Thiebach & Jucks, 2016). The aims of collaborative argumentation may be either to persuade or to reach consensus. Yet a consensus-oriented discourse may increase the above-mentioned positive outcomes as the partners may become more interactive, using the arguments originally introduced by their partners and critically challenging their own arguments (Felton, Garcia-Mila, Villarroel & Gilabert, 2015; Felton & Kuhn, 2001). Nonetheless, collaborative learning also introduces certain challenges – for example, that individuals learn at different rates. There is thus a risk that some partners may move more quickly through the collaborative phases, moving on to the next sub-task before everyone is ready (Mullins, Rummel & Spada, 2011).

Collaborative argumentation may thus enhance the individual learning process and has the potential to support teachers' critical reflection on both their sourcing of online information and their use of criteria that may have guided this sourcing. Collaboration may also impact another important element of successful sourcing of online information – namely, one's self-efficacy when seeking information.

Future teachers' ISSE

Individuals' ISSE is considered an essential part of the competencies related to successfully sourcing online information (Andreassen & Bråten, 2013; Caena & Redecker, 2019; Hendriks et al., 2020). Research indicates that a teacher's perceived ISSE is also related to his/her actual sourcing behavior: Education students with high ISSE, for example, used online library databases rather than Google to search for information (Tang & Tseng, 2013). Similarly, elementary teachers' information seeking standards (e.g., judging the accuracy of online information using multiple sources) were positively related to their confidence in using the Internet for advanced sourcing strategies (e.g., searching with keywords) (Wu & Wang, 2015).

According to Bandura (1997), self-efficacy refers to one's belief in one's own capabilities to organize and execute the courses of action required to attain particular goals. Thus, teachers' ISSE reflects their interpretation of their own competencies in sourcing online information (Kurbanoglu, 2003). A teacher's confidence in sourcing online information is likely influenced by his/her interpretation of his/her own performance, as well as by social comparisons to others' successes or failures (Bronstein, 2014). In collaborative settings, both partners' search approaches come into play as they jointly elaborate on their sourcing strategies. The use of one's own strategies may thus become more evident in comparison to those of others. However, there is not enough evidence to derive assumptions about whether collaboration – which allows for social comparison – may increase or decrease one's perception of one's own seeking skills. The exchange with others may lead to more awareness of one's own competencies, and hence to higher perceived self-efficacy; or it may instead reveal deficits in one's own search strategies and thus lead to lower perceived self-efficacy.

The present study

When (future) teachers search the Internet to inform themselves on educational topics, they should critically reflect upon their sourcing of online (scientific) information – including reflecting upon any criteria that may have guided their selection. In this study, we first investigated whether future teachers select science-related information about an exemplary educational topic (i.e., the use of students' mobile phones in classes) at all when searching for it online.

Since collaborative argumentation is accompanied by the underlying mechanisms of exchanging and critically questioning one's own and others' perspectives and arguments, it may support future teachers in critically reflecting upon their sourcing of online information. Hence, second, we investigated whether future teachers' critical reflection on how they selected online information differed according to whether they were asked to reason about their guiding criteria individually or collaboratively. In this context, we focus on elaboration in future teachers' argumentative reasoning behavior, since it serves as an indicator of cognitive reflection processes. Informed by the literature – which indicates that drawing on diverse criteria affects how people evaluate the credibility of information – this study further examined whether and how future teachers critically question their own use of different criteria when sourcing educational information online. We assumed that, in a collaborative reasoning task, future teachers' argumentative reasoning behavior would more frequently show elaborated reasoning and that they would more frequently refer to criteria that guided their choice of online information (i.e., criteria related to the online information, provider, and media; see Table 1) than in an individual reasoning task.

Collaboration may also impact a teacher's ISSE, as it allows for comparison with others and thus may influence a teacher's interpretation of his/her own searching competencies. Thus, third, we exploratively investigated whether collaborative or individual reasoning affected participants' self-reported ISSE. Since feeling confident in sourcing information online is likely influenced not just by how someone interprets his/her own performance, but also by social comparison to others' successes or failures in the search task, we assumed that participants who reasoned about their choice either collaboratively or individually would show a different self-reported ISSE after the reasoning task than before. However, we had no assumption as to whether collaborative or individual reasoning would increase the degree of difference (or whether they would lead to higher or lower self-reported ISSE) after the reasoning task.

Since we know from the literature that one's epistemic beliefs (i.e., one's beliefs about knowledge and knowing) affect the sourcing of information (Hendriks et al., 2020), we assessed participants' epistemic beliefs regarding educational knowledge from the Internet, to control for potential differences between the collaborative and individual settings.

Methods

Participants

Ninety-one future teachers participated voluntarily in the study and were reimbursed with 20 euros. Participants were studying secondary-school teaching at either the bachelor's or the master's level. We excluded data from six participants whose Internet connectivity failed during the investigation. We likewise excluded data from two participants whose actual chatting time during the common reasoning task deviated by more than one standard deviation from the overall mean chatting time among all participants in the discourse group (M = 23.77 min; SD = 7.47). Hence, we finally analyzed data from N = 83 participants (56 female, 1 diverse) aged 18–41 (M = 25.34, SD = 5.14), with n = 33 participants in the individual group (group_in) and n = 50 participants in the discourse group (group_coll; grouped in 25 dyads).

Of these 83 participants, 73 indicated German as their first language. At the time of the investigation, participants had been studying for an average of 3.83 years (SD = 3.10). Eighteen of the participants in the group_coll and 9 of the group_in were studying at the master's level; 31 and 25 of the participants, respectively, were female (differences between experimental conditions were not significant; study level: χ²(1) = 0.69, p = .406; gender: χ²(2) = 3.86, p = .145). The average duration of participation was 103.67 minutes (SD = 31.81) and did not differ between the experimental conditions, Wald χ²(1) = 2.54, p = .11.

Participants reported that they used a computer, notebook, or tablet for an average of 4.02 (SD = 2.47) hours per week (hrs/wk) and spent an average of 4.94 (SD = 2.96) hrs/wk on the Internet. Participants reportedly sought general online information for an average of 1.99 (SD = 1.51) hrs/wk and searched specifically for educational online information for an average of 1.54 (SD = 1.20) hrs/wk. Participants rated their self-perceived prior knowledge on the 'use of mobile phones in classes', based on four items, as neither very low nor very high (M = 2.50; SD = .60). With respect to prior opinions, participants were, based on one item, on average neither for nor against mobile phones in classes (all items ranged from 1 = 'I strongly disagree' to 5 = 'I strongly agree') (M = 2.96, SD = 1.03).

Design

In a 2 × 2 mixed design with the between-participants factor reasoning (individual vs. collaborative) and the within-participants factor self-reported ISSE (pre vs. post measure), all participants answered the questionnaire on self-perceived ISSE before and after the search and reasoning task. Participants were instructed to individually search online for pedagogical information on 'mobile phone use in classes'. Based on their search results, participants were asked to select four pieces of online Web content (in the following: Web items [WI]) that they perceived to be relevant for reasoning an opinion. Participants in the group_in individually gave reasons for their choice of WI, while participants in the group_coll engaged in a collaborative discourse via chat and reasoned about their four WI together. Each discourse partner contributed two of the selected WI. Participants were randomly assigned to one of the two experimental conditions. In the group_coll, they were randomly paired into 25 dyads.

Procedure

Participants performed the experiment on-site at the university and, hence, had access to the university network (including scientific sources such as scientific journal articles). Each participant sat individually in front of a computer. All participants worked at their own pace and were guided through the experiment by the online survey (unipark.com by Questback EFS Surveys), without verbal instructions from the examiner. From the beginning of the study, each computer displayed an open Web browser window (Mozilla Firefox) showing the same university website. Additionally, participants in the group_coll saw an open window of the chat application Discord (https://discordapp.com/).

Participants first answered items on demographic and preparatory variables. They were then asked to rate their self-perceived ISSE (pre-measure). Afterwards, participants received a fictional scenario: They were asked to imagine themselves as teachers searching for information on the topic of 'mobile phone use in classes'. All participants were instructed to use this search to prepare for a fictional school conference on the topic. The group_coll was further instructed to subsequently discuss these search results with a fellow teacher. After the individual search for educational information was finished (M = 25.05 min, SD = 6.10 in the group_in; M = 23.09 min, SD = 10.16 in the group_coll), participants were asked to give reasons – either individually or collaboratively – for how, and based on what criteria, they had selected their four online sources. Participants in the group_coll were also asked to jointly select the two most relevant WI. Similarly, participants in the group_in were asked to choose two of their initially selected WI as being the most relevant, which should help to increase their reasoning motivation (see electronic supplementary material [ESM] 1 for all experimental instructions). Participants in the group_coll communicated only via chat. Giving reasons for their choices took M = 20.47 minutes (SD = 7.54) in the individual group and M = 24.48 minutes (SD = 6.69) in the discourse group. Afterwards, all participants were asked to state their view on the topic 'mobile phone use in classes' and to rate their self-perceived ISSE again (post-measure). Finally – and only to enable possible future exploratory investigations – participants rated trust-related measures for each of their four initially selected WI, as well as attitudinal measures concerning their opinion on the topic after the reasoning task (see ESM 2). We had no assumptions regarding these measures. Each participant's computer screen was recorded, via screen video, throughout the entire study.

The educational topic

The use or ban of students' mobile phones in classes is a highly relevant educational topic – one that is regulated differently in schools throughout Europe and even within each country. In Germany, each of the federal states applies different regulations. Educational science research on this topic describes the advantages and disadvantages of using mobile phones in classrooms – regarding students' attention and learning outcomes, as well as students' social and digital competencies (Sung, Chang & Liu, 2016). Thus, regardless of whether a teacher supports or opposes mobile phone use in classes, a variety of educational evidence is available to support either view. In this study, participants were asked to search for pedagogical reasons for any use of – or for a ban on – students' mobile phones in the classroom. They were given no further instructions on specific aspects within this topic (that is, they were allowed to search for any opportunities or challenges mobile phones might present – for example, opportunities for students' learning or challenges regarding students' distraction) (see ESM 1).

Measurements

Science-relatedness of Web items

To investigate whether future teachers choose science-related WI when they search for online educational information, we determined the relative frequencies of science-related WI. WI were considered science-related if they were scientific journal articles, scientific reports, monographs, scientific blogs, school textbooks, or university theses. They were not considered science-related if they were – for instance – online news portals, information platforms, or blogs.
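To make this computation concrete, the following minimal Python sketch derives the relative frequency of science-related WI from a coded list; the category labels and data are hypothetical and do not reflect the study's actual coding export.

```python
# Minimal sketch (hypothetical labels/data): share of science-related Web items (WI).
SCIENCE_RELATED = {
    "journal_article", "scientific_report", "monograph",
    "scientific_blog", "school_textbook", "university_thesis",
}

selected_wi = ["news_portal", "journal_article", "blog", "scientific_report"]

n_science = sum(wi in SCIENCE_RELATED for wi in selected_wi)
rel_freq = n_science / len(selected_wi)
print(f"Science-related WI: {rel_freq:.0%}")  # 2 of 4 -> 50%
```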

Argumentative reasoning behavior

To assess the participants' reasoning behavior, we analyzed their communicative behavior in the reasoning task (i.e., individual and collaborative reasoning for selecting their online sources). Participants' reasoning behavior was divided into units of meaning, where each unit contained a participant's semantic description of a distinct theme or idea (Clarà & Mauri, 2010). To code the units of meaning (coding scheme described below), two raters who were blind to the hypothesis independently assessed the 33 individual and 25 collaborative texts that emerged from the reasoning task. The level of agreement between these independent raters, across all coding categories, ranged from Cohen's Kappa = .67 to 1.0. The two raters agreed perfectly (PA = 100%) on 21 of the 58 documents; that is, for 36.2% of the documents, agreement at the level of the units of meaning was complete.
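As an illustration of this reliability check, the sketch below computes Cohen's Kappa for two raters' parallel codes; the labels and data are invented for the example, and scikit-learn is only one of several libraries offering this metric.

```python
# Minimal sketch (invented data): inter-rater agreement via Cohen's Kappa.
from sklearn.metrics import cohen_kappa_score

# Parallel codes assigned by two independent raters to the same units of meaning
rater_1 = ["elaborated", "unelaborated", "elaborated", "elaborated", "unelaborated"]
rater_2 = ["elaborated", "unelaborated", "unelaborated", "elaborated", "unelaborated"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's Kappa = {kappa:.2f}")
```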

The coding scheme aimed to describe 1) how participants reasoned (i.e., unelaborated vs. elaborated) about their selection and 2) whether they referred to criteria that guided their selection. These criteria were derived from the literature regarding the influence of meta information on the sourcing of online information (see Table 1). The first coding category relates to criteria regarding the credibility of information, such as whether the information was considered appropriate or scientific (Zimmermann & Jucks, 2018). Participants often discussed whether the information contained 'pro' and 'con' arguments, whether it was recent, and/or whether it provided concrete recommendations for pedagogical actions, to guide their selection. We thus added these as criteria. Studies on whether one- or two-sided arguments guide trust-related judgements support our integration of this category, as two-sided arguments seem to be viewed as more trustworthy (Mayweg-Paus & Jucks, 2018). The second coding category relates to criteria associated with the epistemic trustworthiness of the provider of information, such as whether the provider was competent or benevolent (Hendriks, Kienhues & Bromme, 2015). The third category reflects criteria related to the credibility of the media from which the WI came, such as the media affordances (e.g., website design) (Sundar, 2008). In addition, we coded other aspects of participants' reasoning that did not relate to the WI, but rather reflected participants' 'personal experiences with mobile phone use in classes' or their 'previous knowledge on the topic'.

For each category, we further coded whether participants reasoned about the aspects in an elaborated or unelaborated way. Elaborated reasoning, here, is characterized not just by whether participants discussed the criteria, but also by whether they reflected and reasoned critically (i.e., the participant gave arguments for why the criteria had guided them to select the WI; supported their own or their discourse partner's perspective with further explanation or arguments; and/or reported uncertainty about whether their criteria were appropriate for selecting the WI). Conversely, unelaborated reasoning represents reasoning behavior that lacks critical reasoning (i.e., the participant mentioned the criteria but did not argue why they had helped them select the WI; supported neither their own nor their partner's perspective with further explanations or arguments; and/or did not explicitly report uncertainty about whether their criteria were appropriate for selecting the WI). ESM 3 summarizes the complete coding scheme, including examples of argumentative behavior derived from participants' reasoning.

Participants' self-reported ISSE

As an indicator of participants' reflections regarding their own information seeking competencies, we assessed their self-reported ISSE with items adapted from the Information Seeking Self-Efficacy Scale (IRSES) by Hinson, Distefano, and Daniel (2003). The scale incorporates three dimensions related to one's personal self-evaluation (e.g., ‘I know how to search for information that I am going to need’ [12 items]); one's comparison with others (e.g., ‘I know more about seeking information than most other people’ [4 items]); and one's physical state while seeking (e.g., ‘I like to search for information’ [5 items]). The internal consistency for the 21 items at the pre-measure was Cronbach's α = .94. At the post-measure, it was Cronbach's α = .93.
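For readers who want to reproduce such a reliability estimate, the following sketch implements the standard Cronbach's alpha formula; the response matrix consists of random placeholder data, not the study's responses.

```python
# Minimal sketch: Cronbach's alpha for a participants x items response matrix.
# The data here are random placeholders (83 participants, 21 five-point items).
import numpy as np

rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=(83, 21)).astype(float)

k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```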

Further variables

Finally, to control for any variance that may have been caused by participants' epistemological beliefs toward online information, we assessed participants' Internet-specific epistemological beliefs (ISEB). We based this assessment on the questionnaire by Bråten, Strømsø, and Samuelstuen (2005). The questionnaire addresses dimensions concerning Web-based knowledge (what one believes knowledge is like on the Web) and Web-based knowing (how one 'comes to know' on the Web). The 14 items yielded an internal consistency of Cronbach's α = .87.

In addition to the dependent measures described above, we assessed trust-related and attitudinal measures to exploratorily describe the features of the selected WI and their impact on participants' opinions: 1) self-perceived credibility of information, provider, and media; 2) opinion toward mobile phone usage; and 3) certainty of participants' opinion. Participants evaluated these variables only after the reasoning task, to ensure that the ratings did not influence it. We had no assumptions about differences among the experimental conditions for these items; they were measured solely for potential further exploration. We have thus included the item descriptions – along with the results, which indicate no differences among the experimental conditions – in ESM 2, for those who are interested.

Preparatory analyses

A multivariate ANOVA – with experimental condition as the independent variable and the demographic variables, as well as participants' ISEB, as dependent variables – yielded no significant differences (all F(1, 79) ≤ 2.41, p ≥ .13). We further tested whether the relative frequency of science-related WI – among all of the initially selected four WI – differed between the experimental groups, since differences in these relative frequencies may have affected the length and depth of participants' elaboration during the reasoning task (Salmerón, Fajardo & Gómez-Puerta, 2019). There were no significant differences between experimental conditions in the relative frequency of the science-related WI initially selected by participants, χ²(4) = 2.58, p = .63 (see ESM 4). Thus, we did not include these variables in our main analyses, as any difference between the experimental conditions cannot be attributed to them.
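A group comparison of this kind can be sketched as a chi-square test on a condition-by-count contingency table; the cell counts below are hypothetical, chosen only to make the snippet runnable.

```python
# Minimal sketch (hypothetical counts): chi-square test comparing the
# distribution of science-related WI (0-4 per participant) across conditions.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: individual vs. collaborative condition; columns: 0..4 science-related WI
table = np.array([
    [10, 9, 7, 5, 2],    # group_in (n = 33)
    [14, 13, 11, 8, 4],  # group_coll (n = 50)
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # dof = (2-1)*(5-1) = 4
```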

Main analyses

To account for variance in the dependent measures at the level of the discourse dyads, we first estimated a hierarchical model. The intra-class correlation at the dyadic level (ICC = .01) was not significant (F(1, 108) = 0.78, p = .85), meaning that only 1% of the variance was attributable to the dyads. Thus, three generalized linear models were conducted to test whether participants' pre- and post-ISSE and the relative frequencies of (un)elaborated reasoning regarding the types of criteria differed between the experimental conditions (Cress, 2008). We set the α level to α = .01.
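The dyad-level check can be sketched with a random-intercept model, from which the ICC follows as the dyad variance divided by the total variance; the column names and data file below are assumptions for illustration.

```python
# Minimal sketch (hypothetical columns/file): dyad-level variance check via a
# random-intercept model; ICC = dyad variance / (dyad variance + residual variance).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("isse_data.csv")  # columns: isse_post, condition, dyad (hypothetical)

model = smf.mixedlm("isse_post ~ condition", data=df, groups=df["dyad"]).fit()
dyad_var = model.cov_re.iloc[0, 0]          # variance of the dyad random intercept
icc = dyad_var / (dyad_var + model.scale)   # model.scale = residual variance
print(f"ICC = {icc:.2f}")
```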

Results

Results of science-relatedness of WI

Of the 232 selected WI (100 for the 50 participants in the group_coll and 132 for the 33 participants in the group_in), 111 WI were unique. While most of the WI were selected only once or twice, six WI were selected very often (see ESM 5). Of these 111 WI, only 38 (34.2%) were determined to be science-related.

Results of argumentative reasoning behavior

From participants' individual and collaborative argumentative reasoning behavior (ARB), we determined the relative frequencies of the reasoning criteria that guided participants' selection of WI, in relation to the overall frequencies of on-task units (i.e., comments made about the reasoning task). These relative frequencies thus refer to all task-related comments and not to the total number of comments made (there were also off-task comments, e.g., related to task management). As expected, participants in the group_coll more often expressed elaborated reasoning (M = 90.2%, SD = .10) and less often engaged in unelaborated reasoning (M = 9.7%, SD = .10) compared to the group_in (elaborated comments: M = 62.2%, SD = .26; unelaborated comments: M = 37.8%, SD = .26), both F(1, 56) = 26.01, p < .001, η² = .32. Unexpectedly, participants in the group_coll did not more often cite criteria related to 1) the information (M = 59.1%, SD = .21); 2) the provider (M = 4.2%, SD = .07); or 3) the media (M = 7.6%, SD = .08), compared to the group_in (comments related to information: M = 61.2%, SD = .20; provider: M = 4.3%, SD = .07; media: M = 14.3%, SD = .12); all F(1, 56) ≤ 5.69, p ≥ .02, η² ≤ .09. Furthermore, the group_coll significantly more often made comments related to the management and coordination of the task (i.e., relative frequencies of off-task units in relation to overall units) (M = 48.2%, SD = .13), compared to the group_in (M = 4.5%, SD = .07), F(1, 56) = 256.21, p < .001, η² = .82 (see Tables 2 and 3). Interestingly, only one participant mentioned that s/he used a specific search engine. Furthermore, none of the participants considered whether the WI was generated or filtered through algorithms. See ESM 3 for an extract of the individual and collaborative ARB.

Table 2 Multivariate ANOVA to test for differences between the experimental conditions, regarding the relative frequencies of unelaborated and elaborated reasoning of the criteria guiding the selection of WI
Table 3 Descriptive statistics of the relative frequencies of unelaborated and elaborated reasoning around the criteria guiding the selection of WI

Results of information seeking self-efficacy

A generalized linear model, including the between-participants factor reasoning and the within-participants factor time of measurement of ISSE, revealed a significant main effect of time but no significant main effect of experimental conditions (group_in: M_in = 3.52, SE_in = .01; group_coll: M_coll = 3.64, SE_coll = .08; F(1, 81) = 0.96, p = .33, η² = .01), as well as no interaction effect between time and experimental conditions (group_in: M_pre,in = 3.51, SD_pre,in = .61; M_post,in = 3.53, SD_post,in = .59; group_coll: M_pre,coll = 3.57, SD_pre,coll = .56; M_post,coll = 3.71, SD_post,coll = .52; F(1, 81) = 4.18, p = .04, η² = .05). Overall, participants reported higher ISSE after the reasoning task (M_pre = 3.55, SD_pre = .58; M_post = 3.64, SD_post = .55; F(1, 81) = 6.59, p = .01, η² = .08).
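As a reproducibility aid, a 2 × 2 mixed ANOVA of this form can be run as sketched below; the long-format column names and file are assumptions, and pingouin is only one of several packages supporting mixed designs.

```python
# Minimal sketch (hypothetical columns/file): 2 (reasoning condition, between) x
# 2 (time: pre/post, within) mixed ANOVA on self-reported ISSE.
import pandas as pd
import pingouin as pg

df = pd.read_csv("isse_long.csv")  # columns: id, condition, time, isse (hypothetical)

aov = pg.mixed_anova(data=df, dv="isse", within="time",
                     subject="id", between="condition")
print(aov[["Source", "F", "p-unc", "np2"]])  # F-tests with partial eta squared
```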

Discussion

Summary of findings

The findings of the present study shed light on how future teachers select online educational information when carrying out an authentic search. With respect to the types of educational information selected, participants' WI were often not science-related. This is in line with previous findings on teachers' preferences for sourcing evidence (Bråten & Ferguson, 2015; Duke & Ward, 2009; Williams & Coles, 2007). Regarding participants' critical reflection on their selection of educational information, this study investigated how they reasoned about their selection (elaborated vs. unelaborated) and how often they referred to the criteria that may have guided this selection (i.e., meta information about the information, the provider of information, and the media; see Table 1). As expected, participants gave elaborated responses more often and unelaborated responses less often when reasoning about their selections collaboratively (e.g., they more often supported their own or their partner's argument with further explanations or evidence). Accordingly, the setting in which future teachers collaboratively engaged in argumentation seemed to support their critical elaboration of how they selected online information. This is in line with research on collaborative learning and argumentation (Baker, 2015; Noroozi et al., 2012).

Yet, unexpectedly, participants in both the collaborative and the individual reasoning groups referred equally often to the three theoretically and empirically derived types of criteria (Table 1). In this context, criteria regarding the provider and the media itself were mentioned relatively rarely – even though research indicates that information seekers often use these criteria (Choi & Stvilia, 2015) and that drawing on associated cues related to the providers can facilitate the efficient selection of online information (Bromme & Goldman, 2014). With respect to cues related to the media, only one participant questioned his/her own use of search engines; no other participants mentioned criteria related to algorithm-based content or the type of search engine (Guzman & Lewis, 2020). This is in line with other findings indicating that teachers tend to use only one type of search engine (Bougatzeli et al., 2017).

Furthermore, participants in the two experimental conditions did not differ in terms of their self-reported ISSE. This was contrary to our assumption that the possibility to compare one's own search approach with others' could impact one's ISSE. Both groups reported higher ISSE scores afterwards, possibly in part because the combined seeking and reasoning task led participants in both groups to feel more competent in seeking information in general.

Limitations and future research

In this study, future teachers' self-reported confidence in seeking online information – as well as the elaboration in their argumentative reasoning behavior – may function only as indicators of how deeply they reflect on their sourcing of online educational information. Moreover, even though neither group was shown to frequently discuss criteria related to the provider or media, we do not know whether the participants reflected internally on these aspects without discussing them in the reasoning task.

Due to the sample size of N = 83 participants, determining the variances caused by the individuals within the dyadic levels is impossible. Yet achieving a sufficiently large group-level sample size to conduct multi-level analyses is challenging in general, for studies investigating collaborative learning processes (Cress, 2008). Future research could investigate the underlying collaborative processes – and, for instance, whether different kinds of paired dyads result in different qualities of argumentation (e.g., when both collaborators are experienced teachers versus student teachers, or when they have high versus low ISEB or ISSE).

While the topic of using mobile phones in class is highly relevant for (future) teachers – who must establish practices grounded in educational research – our study's focus on this topic means that the findings cannot necessarily be generalized. Therefore, future research could investigate whether (future) teachers' critical reasoning of how they select online information differs for other topics. Similarly, since the participants in this study were future teachers, the findings cannot be generalized completely towards in-service teachers and their sourcing of online information. Although participants in this study had access to scientific research articles on campus, they rarely selected science-related WI. Thus, future research – as well as teacher trainings on teachers' sourcing of online information – should consider the different circumstances under which in-service teachers, who lack such access to scientific articles, source scientific evidence.

Another limitation refers to the coding scheme that addresses future teachers' argumentative reasoning behavior when sourcing online educational information. This was determined with regard to 1) whether participants referred to types of criteria that have been empirically and theoretically found to impact information seekers' evaluation of online information; and 2) whether they referred to the criteria in either an elaborated or an unelaborated way. This coding scheme thus addresses critical reasoning – not just regarding the analysis of evidence within the information, but also regarding the criteria participants used to select online information, including criteria specific to online sources and media types. Nonetheless, the types of criteria addressed in the coding scheme are non-exhaustive. With respect to the coding of the reasoning behavior, it is important to note that our scheme focused solely on the structure of participants' reasoning behavior – for instance, whether and how participants provided reasons for their selection of information – without coding whether these reasons were adequate for their intended function (e.g., 'I chose this WI because the author is a prominent vs. an experienced teacher'). Thus, in this study, we coded participants' reasoning as critically elaborated if they gave any reason or argument for their selection that was plausible to them. This was done because the intention of this study was to better understand the role of collaborative argumentation in stimulating critical reflection on one's own preferred and established sourcing strategies. In this sense, we were interested in whether participants were able to provide any reasons and to elaborate on them, since such elaboration processes are essential for deeper reflection (e.g., Felton & Kuhn, 2001). Future research should build on these findings and investigate whether a given argument is not just plausible to the (future) teachers themselves, but also a promising strategy for selecting correct, complete, and appropriate online educational information (Bromme & Goldman, 2014).

Lastly – considering the complexity of the processes involved in using scientific knowledge, as well as in sourcing such knowledge on the Internet (Rousseau & Gunia, 2016) – this study focused on those processes related to critical reflection on sourcing after the participants had conducted their realistic searches. Future research may additionally investigate teachers' critical reflection prior to selecting information, or may explore their decisions not to select particular information found online.

Implications for teacher education

This study has potential implications for the development of trainings to support future teachers' sourcing of online educational information. Besides preparing future teachers to effectively use research evidence for professional practice in general, teacher education programs should focus explicitly on the characteristics of the Internet as a meaningful source of scientific information (Caena & Redecker, 2019; Duke & Ward, 2009; Rousseau & Gunia, 2016). Teachers need to know what information they can rely on and how to find it. Especially since the Internet serves as an easy, fast, and accessible resource that teachers can use for a lifetime – long after their university years – trainings should consider the research indicating the influence of certain criteria, that is, criteria related to the information itself, the provider of information, and the media. Effective training for educators may address, for example, not just the use of certain technologies and media (e.g., search engines and websites) but also how to critically reflect on the potential influence of the corresponding affordances (e.g., search engine algorithms that can lead to different search results). Teachers should become more aware of the various types of criteria that may influence their selection of online educational information (Choi & Stvilia, 2015). The ability to critically reflect on whether and how to draw on these criteria is, likewise, a crucial competency for sourcing evidence appropriately – given that, while often useful, criteria can also inadvertently guide teachers toward a rather biased evaluation of information (Bråten et al., 2019; Bromme & Goldman, 2014; Rousseau & Gunia, 2016).

Trainings that aim to encourage (future) teachers to critically reflect on any use of criteria may also benefit from adding a collaborative component (Bråten et al., 2019; Pérez et al., 2018). Collaborative argumentation may particularly help future teachers to discuss information (whether science-related or not), as well as to critically reason about whether and how they have selected online information. In this study, the group_coll more often referred to the criteria that had guided their selection using elaborated reasoning – even though they had not received previous training on key argumentative activities, such as critically questioning others (Mayweg-Paus et al., 2016; Noroozi et al., 2012). Apparently, the dialogic setting itself stimulates deeper elaboration, as a person is subject to the interlocutor's scrutiny of his/her preferred selection strategy. Future research could shed light on whether adding interventional trainings that promote the use of argumentative strategies more systematically (e.g., prompts on how to question others' arguments) could help future teachers to reflect more critically on whether their choices of online information are guided by criteria that they currently inadvertently overlook. In addition, future research may investigate whether the benefits of collaboration when reflecting on online sourcing occur only if teachers actually collaborate – or even when they simply imagine discussing the sourcing with another teacher. This might become relevant, since actual collaboration also poses challenges (e.g., the effort to manage the task). In this study, participants who worked collaboratively more often uttered phrases related to managing and coordinating the reasoning task. Collaboration therefore likely increased participants' investment of resources such as time, effort, and communication in managing the task (Mullins et al., 2011). This additional effort needs to be considered – particularly in terms of time constraints. In fact, it may be due to time constraints that teachers search for information on the Internet in the first place; they may thus have no extra time to discuss their search with a colleague.

Overall – while critical reflection when sourcing online information may not replace formal criteria for scientific practices (e.g., knowledge about how scientific evidence emerges) – collaborative argumentation appears to be a promising approach to increase critical reflection as an important element of any successful sourcing of educational information on the Internet. As the online context will continue to rapidly evolve over time, teachers are required to continuously reflect on and engage with their own strategies in this specific environment. In this sense, future research – as well as teacher trainings – could help encourage teachers to critically question both the background evidence and how they acquire the information. This should include considering the types of criteria that may guide this process in online contexts. Such critical questioning could also be promoted by collaborative learning components – allowing participants to reflect on their own, as well as on others' methods of choosing online content.

We thank Thu Trang Phi and Claudia Lefke for their help in data collection and processing.

References

  • Andreassen, R. & Bråten, I. (2013). Teachers' source evaluation self-efficacy predicts their use of relevant source features when evaluating the trustworthiness of web sources on special education. British Journal of Educational Technology , 44 (5), 821–836. https://doi.org/10.1111/j.1467–8535.2012.01366.x First citation in articleCrossrefGoogle Scholar

  • Asterhan, C. S. C. & Schwarz, B. B. (2016). Argumentation for learning: Well-trodden paths and unexplored territories. Educational Psychologist , 51 (2), 164–187. https://doi.org/10.1080/00461520.2016.1155458 First citation in articleCrossrefGoogle Scholar

  • Baker, M. J. (2015). Collaboration in collaborative learning. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems , 16 (3), 451–473. https://doi.org/10.1075/is.16.3.05bak First citation in articleCrossrefGoogle Scholar

  • Bandura, A. (1997). Self-efficacy: The exercise of control . New York: W.H. Freeman. https://doi.org/10.1891/0889–8391.13.2.158 First citation in articleGoogle Scholar

  • Bauer, J. & Prenzel, M. (2012). European teacher training reforms. Science, 336(6089), 1642–1643. https://doi.org/10.1126/science.1218387

  • Bougatzeli, E., Douka, M., Bekos, N. & Papadimitriou, E. (2017). Web literacy practices of teacher education students and in-service teachers in Greece: A descriptive study. Preschool and Primary Education, 5(1), 97. https://doi.org/10.12681/ppej.10336

  • Brante, E. W. (2019). A multiple-case study on students' sourcing activities in a group task. Cogent Education, 6(1), 1–13. https://doi.org/10.1080/2331186X.2019.1651441

  • Bråten, I. & Braasch, J. L. G. (2018). The role of conflict in multiple source use. In J. L. G. Braasch, I. Bråten & M. T. McCrudden (Eds.), Handbook of multiple source use (pp. 184–201). New York: Routledge. https://doi.org/10.4324/9781315627496

  • Bråten, I., Brante, E. W. & Strømsø, H. I. (2019). Teaching sourcing in upper secondary school: A comprehensive sourcing intervention with follow-up data. Reading Research Quarterly, 54(4), 481–505. https://doi.org/10.1002/rrq.253

  • Bråten, I. & Ferguson, L. E. (2015). Beliefs about sources of knowledge predict motivation for learning in teacher education. Teaching and Teacher Education, 50, 13–23. https://doi.org/10.1016/j.tate.2015.04.003

  • Bråten, I., Strømsø, H. I. & Samuelstuen, M. S. (2005). The relationship between Internet-specific epistemological beliefs and learning within Internet technologies. Journal of Educational Computing Research, 33, 141–171. https://doi.org/10.2190/E763-X0LN-6NMF-CB86

  • Bromme, R. & Goldman, S. R. (2014). The public's bounded understanding of science. Educational Psychologist, 49(2), 59–69. https://doi.org/10.1080/00461520.2014.921572

  • Bromme, R., Prenzel, M. & Jäger, M. (2014). Empirische Bildungsforschung und evidenzbasierte Bildungspolitik: Eine Analyse von Anforderungen an die Darstellung, Interpretation und Rezeption empirischer Befunde [Educational research and evidence-based educational policy: The challenge of exposing and of understanding educational research]. Zeitschrift für Erziehungswissenschaft, 17, 3–54. https://doi.org/10.1007/s11618-014-0514-5

  • Bronstein, J. (2014). The role of perceived self-efficacy in the information seeking behavior of library and information science students. Journal of Academic Librarianship, 40(2), 101–106. https://doi.org/10.1016/j.acalib.2014.01.010

  • Caena, F. & Redecker, C. (2019). Aligning teacher competence frameworks to 21st century challenges: The case for the European Digital Competence Framework for Educators (DIGCOMPEDU). European Journal of Education, 54(3), 356–369. https://doi.org/10.1111/ejed.12345

  • Chen, J., Wang, M., Kirschner, P. A. & Tsai, C. C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88(6), 799–843. https://doi.org/10.3102/0034654318791584

  • Chen, Y. J., Chien, H. M. & Kao, C. P. (2019). Online searching behaviours of preschool teachers: A comparison of pre-service and in-service teachers' evaluation standards and searching strategies. Asia-Pacific Journal of Teacher Education, 47(1), 66–80. https://doi.org/10.1080/1359866X.2018.1442556

  • Chinn, C. A. & Clark, D. B. (2013). Learning through collaborative argumentation. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan & A. M. O'Donnell (Eds.), The international handbook of collaborative learning (pp. 314–332). New York, NY: Taylor & Francis. https://doi.org/10.4324/9780203837290.ch18

  • Choi, W. & Stvilia, B. (2015). Web credibility assessment: Conceptualization, operationalization, variability, and models. Journal of the Association for Information Science and Technology, 66(12), 2399–2414. https://doi.org/10.1002/asi.23543

  • Clarà, M. & Mauri, T. (2010). Toward a dialectic relation between the results in CSCL: Three critical methodological aspects of content analysis schemes. International Journal of Computer-Supported Collaborative Learning, 5(1), 117–136. https://doi.org/10.1007/s11412-009-9078-4

  • Cress, U. (2008). The need for considering multilevel analysis in CSCL research – An appeal for the use of more advanced statistical methods. International Journal of Computer-Supported Collaborative Learning, 3(1), 69–84. https://doi.org/10.1007/s11412-007-9032-2

  • Csanadi, A., Kollar, I. & Fischer, F. (2020). Pre-service teachers' evidence-based reasoning during pedagogical problem-solving: Better together? European Journal of Psychology of Education. https://doi.org/10.1007/s10212-020-00467-4

  • Duke, T. S. & Ward, J. D. (2009). Preparing information literate teachers: A metasynthesis. Library and Information Science Research, 31(4), 247–256. https://doi.org/10.1016/j.lisr.2009.04.003

  • Evans, S. K., Pearce, K. E., Vitak, J. & Treem, J. W. (2016). Explicating affordances: A conceptual framework for understanding affordances in communication research. Journal of Computer-Mediated Communication, 22(1), 35–52. https://doi.org/10.1111/jcc4.12180

  • Felton, M., Garcia-Mila, M., Villarroel, C. & Gilabert, S. (2015). Arguing collaboratively: Argumentative discourse types and their potential for knowledge building. British Journal of Educational Psychology, 85(3), 372–386. https://doi.org/10.1111/bjep.12078

  • Felton, M. & Kuhn, D. (2001). The development of argumentive discourse skill. Discourse Processes, 32(2), 135–153. https://doi.org/10.1207/s15326950dp3202&3_03

  • Guzman, A. L. & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media and Society, 22(1), 70–86. https://doi.org/10.1177/1461444819858691

  • Haas, A. & Unkel, J. (2017). Ranking versus reputation: Perception and effects of search result credibility. Behaviour and Information Technology, 36(12), 1285–1298. https://doi.org/10.1080/0144929X.2017.1381166

  • Hendriks, F., Kienhues, D. & Bromme, R. (2015). Measuring laypeople's trust in experts in a digital age: The Muenster Epistemic Trustworthiness Inventory (METI). PLoS ONE, 10(10), e0139309. https://doi.org/10.1371/journal.pone.0139309

  • Hendriks, F., Mayweg-Paus, E., Felton, M., Iordanou, K., Jucks, R. & Zimmermann, M. (2020). Constraints and affordances of online engagement with scientific information – A literature review. Frontiers in Psychology, 11, 572744. https://doi.org/10.3389/fpsyg.2020.572744

  • Hinson, J., Distefano, C. & Daniel, C. (2003). The Internet Self-Perception Scale: Measuring elementary students' levels of self-efficacy regarding Internet use. Journal of Educational Computing Research, 29(2), 209–228. https://doi.org/10.2190/BWGN-84AE-9AR6-16DY

  • Hoffmann, M. H. (2016). Reflective argumentation: A cognitive function of arguing. Argumentation, 30, 365–397. https://doi.org/10.1007/s10503-015-9388-9

  • Iding, M. K., Crosby, M. E., Auernheimer, B. & Klemm, E. B. (2008). Web site credibility: Why do people believe what they believe? Instructional Science, 37(1), 43–63. https://doi.org/10.1007/s11251-008-9080-7

  • Iding, M. & Klemm, E. B. (2005). Pre-service teachers critically evaluate scientific information on the World Wide Web: What makes information believable? Computers in the Schools, 21, 7–18. https://doi.org/10.1300/J025v22n01_02

  • Kurbanoglu, S. S. (2003). Self-efficacy: A concept closely linked to information literacy and lifelong learning. Journal of Documentation, 59, 635–646. https://doi.org/10.1108/00220410310506295

  • Macedo-Rouet, M., Potocki, A., Scharrer, L., Ros, C., Stadtler, M., Salmerón, L., et al. (2019). How good is this page? Benefits and limits of prompting on adolescents' evaluation of web information quality. Reading Research Quarterly, 54(3), 299–321. https://doi.org/10.1002/rrq.241

  • Mayweg-Paus, E. & Jucks, R. (2018). Conflicting evidence or conflicting opinions? Two-sided expert discussions contribute to experts' trustworthiness. Journal of Language and Social Psychology, 37(2), 203–223. https://doi.org/10.1177/0261927X17716102

  • Mayweg-Paus, E., Thiebach, M. & Jucks, R. (2016). Let me critically question this! – Insights from a training study on the role of questioning on argumentative discourse. International Journal of Educational Research, 79, 195–210. https://doi.org/10.1016/j.ijer.2016.05.017

  • Mullins, D., Rummel, N. & Spada, H. (2011). Are two heads always better than one? Differential effects of collaboration on students' computer-supported learning in mathematics. International Journal of Computer-Supported Collaborative Learning, 6(3), 421–443. https://doi.org/10.1007/s11412-011-9122-z

  • Noroozi, O., Weinberger, A., Biemans, H. J. A., Mulder, M. & Chizari, M. (2012). Argumentation-Based Computer Supported Collaborative Learning (ABCSCL): A synthesis of 15 years of research. Educational Research Review, 7(2), 79–106. https://doi.org/10.1016/j.edurev.2011.11.006

  • Pérez, A., Potocki, A., Stadtler, M., Macedo-Rouet, M., Paul, J., Salmerón, L., et al. (2018). Fostering teenagers' assessment of information reliability: Effects of a classroom intervention focused on critical source dimensions. Learning and Instruction, 58, 53–64. https://doi.org/10.1016/j.learninstruc.2018.04.006

  • Rousseau, D. M. & Gunia, B. C. (2016). Evidence-based practice: The psychology of EBP implementation. Annual Review of Psychology, 67, 667–692. https://doi.org/10.1146/annurev-psych-122414-033336

  • Salmerón, L., Fajardo, I. & Gómez-Puerta, M. (2019). Selection and evaluation of Internet information by adults with intellectual disabilities. European Journal of Special Needs Education, 34(3), 272–284. https://doi.org/10.1080/08856257.2018.1468634

  • Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 72–100). Cambridge, MA: The MIT Press. https://doi.org/10.1162/dmal.9780262562324.073

  • Sung, Y. T., Chang, K. E. & Liu, T. C. (2016). The effects of integrating mobile devices with teaching and learning on students' learning performance: A meta-analysis and research synthesis. Computers and Education, 94, 252–275. https://doi.org/10.1016/j.compedu.2015.11.008

  • Tang, Y. & Tseng, H. W. (2013). Distance learners' self-efficacy and information literacy skills. Journal of Academic Librarianship, 39(6), 517–521. https://doi.org/10.1016/j.acalib.2013.08.008

  • Thiebach, M., Mayweg-Paus, E. & Jucks, R. (2016). Better to agree or disagree? The role of critical questioning and elaboration in argumentative discourse. Zeitschrift für Pädagogische Psychologie, 30(2–3), 133–149. https://doi.org/10.1024/1010-0652/a000174

  • Thomm, E. & Bromme, R. (2012). "It should at least seem scientific!" Textual features of "scientificness" and their impact on lay assessments of online information. Science Education, 96(2), 187–211. https://doi.org/10.1002/sce.20480

  • Van Boxtel, C., van der Linden, J. & Kanselaar, G. (2000). Collaborative learning tasks and the elaboration of conceptual knowledge. Learning and Instruction, 10(4), 311–330. https://doi.org/10.1016/S0959-4752(00)00002-5

  • Veerman, A., Andriessen, J. & Kanselaar, G. (2002). Collaborative argumentation in academic education. Instructional Science, 30(3), 155–186. https://doi.org/10.1023/A:1015100631027

  • Williams, D. & Coles, L. (2007). Evidence-based practice in teaching: An information perspective. Journal of Documentation, 63(6), 812–835. https://doi.org/10.1108/00220410710836376

  • Wu, Y.-T. & Wang, L.-J. (2015). The exploration of elementary school teachers' Internet self-efficacy and information commitments: A study in Taiwan. Journal of Educational Technology & Society, 18(1), 211–222. Retrieved from www.jstor.org/stable/jeductechsoci.18.1.211

  • Zimmermann, M. & Jucks, R. (2018). How experts' use of medical technical jargon in different types of online health forums affects perceived information credibility: Randomized experiment with laypersons. Journal of Medical Internet Research, 20(1), 1–13. https://doi.org/10.2196/jmir.8346

Dr. Maria Zimmermann, Humboldt University of Berlin, Department of Education Studies, Faculty of Humanities and Social Sciences | Digital Knowledge Management, Unter den Linden 6, 10099 Berlin, Germany.