Original Article (Open Access)

Forced Virtuality During COVID-19

A Multigroup Perspective on Technology Acceptance of Public Digital Services

Published Online: https://doi.org/10.1026/0932-4089/a000366

Abstract

Abstract. Social distancing received top priority during the COVID-19 crisis, resulting in new users of public digital services (PDS) with heterogeneous use motivation. While some users decided to use a PDS voluntarily and independently of COVID-19, others were forced to use PDS because of the COVID-19 lockdown. Based on technology acceptance models, we compared forced users (N1 = 346) and voluntary users (N2 = 315) using latent multigroup analysis. First-time users of a PDS (N = 661) participated in the survey after reporting a crime online to the police. Results showed that forced and voluntary users differed regarding key factors (performance expectancy, effort expectancy, behavioral intention) and antecedents (system trust, ICT self-concept) of technology acceptance with less positive values for forced users. Further, forced users had stronger needs for system trust and usefulness (performance expectancy) than voluntary users, revealing user group-specific predictive values. The lessons learned for PDS design and marketing beyond pandemic times are discussed.

As a measure against COVID-19, social distancing became a top priority during the pandemic, forcing many people to reduce physical contact and switch to digital channels. The use of public digital services (PDS) in policing, such as online crime reporting systems, increased by 147 % compared to pre-Corona times (BKA, 2020). Further, the COVID-19 lockdown led to the unique situation that, in addition to voluntary users independent of COVID-19, a previously unreachable user group was forced to use PDS to help fight the pandemic (forced users). This latter group would otherwise have used the analog service. Forced and voluntary users thus differ in their motivation. Knowing how these motivational differences, along with other person-related variables, explain PDS use and acceptance is important for providing customized support, addressing the needs of user groups both in and beyond pandemic times, and extending the use of PDS in policing.

Extended use of PDS in policing enables structural changes regarding crime management and the allocation of personnel or financial resources (Ganiron et al., 2019). However, little research exists on its acceptance (Iriberri et al., 2006). In other contexts (e. g., tax reporting), models of technology acceptance have proven highly applicable (e. g., Wu & Chen, 2005). Importantly, existing research shows that forced and voluntary users differ in the level and predictive values of acceptance factors (Venkatesh et al., 2003; Wu & Lederer, 2009). However, previous research focused primarily on optional versus mandatory digital systems at work, whereas PDS concern private life, and the COVID-19 lockdown is notable for its limited duration. In addition, research has concentrated on key factors of technology acceptance (i. e., performance expectancy, effort expectancy, behavioral intention) and less on its antecedents. Yet, including antecedents is important for examining modifiable reasons for differences in technology acceptance. System trust (“I rely on”) and ICT (information and communication technology) self-concept (“I can” and “I like”) are relevant antecedents of technology acceptance in the given context of first-time users of PDS in policing (i. e., no prior experience, sensitive data; Venkatesh et al., 2003; Wu & Chen, 2005) and against the background of voluntariness during COVID-19 (i. e., motivational differences; Ryan & Deci, 2000).

This article compares forced and voluntary first-time users of a PDS in policing, focusing on the role of system trust and ICT self-concept as antecedents as well as on key factors of technology acceptance. Our two research aims are, first, to examine group differences between forced and voluntary users using latent mean comparisons and, second, to examine user group-specific predictive values within our proposed research model using multigroup analysis of structural invariance (MASI).

In doing so, we extend theoretical knowledge on technology acceptance by considering a new contextual setting (i. e., PDS in policing), additional predictors of technology acceptance (i. e., the inclusion of antecedents), and a temporal perspective on the voluntariness of use motivation (i. e., time-limited compulsion). On a more practical level, this article documents a change in the use of PDS during the COVID-19 pandemic, quantifying changed user behavior and the related system evaluation. From these parameters, predictions about future PDS demand (e. g., frequency of use) can be made that impact work processes on the provider’s side (e. g., personnel resources, postprocessing). Further, we identify the characteristics of forced and voluntary users. Our insights into group differences (e. g., level of system trust) and user group-specific predictive values (e. g., the impact of system trust on performance expectancy) offer lessons for PDS design and marketing, such as creating user group-specific personas. If forced users continue to use PDS in the future, the health and economic crisis will at least have been a chance to effect lasting political and social participation in a digitalized society.

Research Context: PDS in Policing During COVID-19

In this article, we focus on first-time users of the PDS Onlinewache, a German online crime reporting system that allows citizens to report a theft, a fraud, a tip-off, or property damage online, whereas more serious crimes (e. g., sexual assault) are recorded face-to-face at a local police station only. Similar systems are available, for example, in the Netherlands and Australia. The central aim of such online crime reporting systems is to increase the rate of reported crimes by providing benefits to users (e. g., 24/7 accessibility). At the same time, PDS in policing allow the reallocation of resources and the optimization of subsequent processes (e. g., crime management) on the provider’s side (Ganiron et al., 2019; Iriberri et al., 2006). Despite these benefits, users do not always prefer the digital solution (Iriberri et al., 2006). The COVID-19 lockdown offered a unique research opportunity to investigate why and by whom, by including forced users.

With the COVID-19 lockdown, the overall role of PDS in policing changed: The formerly optional PDS temporarily became an essential means of reducing physical contact. This forced situation resulted in both increased use overall and an extended user group with heterogeneous use motivation (voluntary and forced users). Technology acceptance models offer a theoretical framework for comparing forced and voluntary users because they systematize key factors and antecedents of technology acceptance and specify voluntariness as a moderator that influences both the level and predictive values of acceptance factors (Venkatesh et al., 2003; Wu & Lederer, 2009).

This article compares forced and voluntary users regarding both their mean group differences (H1) and predictive values (H2) on selected key factors and antecedents of technology acceptance. First, however, we introduce relevant research variables and discuss the role of voluntariness.

Technology Acceptance

Key Factors: Behavioral Intention, Performance Expectancy, Effort Expectancy

Multiple models of technology acceptance exist (see Venkatesh et al., 2016). Their underlying concept is that individual reactions determine a behavioral intention, which in turn leads to actual behavior toward a system. We therefore take behavioral intention as a proxy for actual behavior. Behavioral intention can refer to the use of a system (intention to use) or the recommendation of a system to others (intention to recommend; Oliveira et al., 2016). Further, we focus on two key determinants of behavioral intention: performance expectancy and effort expectancy (Khechine et al., 2016). Performance expectancy describes “the degree to which an individual believes that using the system will help him or her to attain gains in job performance” (Venkatesh et al., 2003, p. 447) and includes the perceived usefulness and the individual (added) value of a system. Effort expectancy describes “the degree of ease associated with the use of the system” (p. 450) and is vital in the first-time use of a technology (Venkatesh et al., 2003), as was the case for new users during COVID-19. Empirical research shows that effort expectancy positively influences performance expectancy (Brandsma et al., 2020; Maillet et al., 2015): People evaluate a system as more useful if they perceive it as easy to use.

Antecedents: System Trust and ICT Self-Concept

Empirical research on PDS in policing is rare but points to the critical role of attitudes in crime reporting behavior, both in conventional analog (Boateng, 2018) and in virtual crime reporting settings (Hoefnagel et al., 2012). Attitudes, in turn, are person-related antecedents of performance expectancy and effort expectancy (Venkatesh et al., 2016). Further, motivational theories (e. g., self-determination theory, SDT; Ryan & Deci, 2000) highlight the interplay of voluntariness (i. e., autonomy) and attitudes (e. g., trust, self-concept). In this article, we focus on the antecedents system trust and ICT self-concept because of their relevance both in the context of PDS in policing and against the background of forced and voluntary users.

System trust can be defined as the willingness to depend on something and to be vulnerable in a situation characterized by uncertainty and risk (McKnight et al., 2011; Thielsch et al., 2018). Ample research confirms the beneficial effect of high system trust on technology acceptance, especially in online settings (e. g., Wu et al., 2011). A mixed picture exists (Venkatesh, Thong, & Xu, 2016) regarding the conceptualization of system trust as an exogenous (i. e., indirect effect on behavioral intention) or endogenous (i. e., direct effect on behavioral intention) variable among other technology acceptance factors. Here, we examine system trust as an exogenous variable (i. e., an antecedent of performance expectancy and effort expectancy with an indirect effect on behavioral intention) because well-established basic technology acceptance models do not conceptualize system trust as a direct determinant of behavioral intention (see Venkatesh et al., 2003). Empirical research supports this mediated effect on behavioral intention (e. g., Belanche et al., 2012; Casey & Wilson-Evered, 2012), although model extensions exist that report a direct effect of system trust on behavioral intention (e. g., Oh & Yoon, 2014; Oliveira et al., 2014). In the specific context of PDS in policing during COVID-19, we focus on system trust for two more reasons: First, system trust is essential for PDS with high personal involvement and the use of sensitive data (e. g., Carter & Bélanger, 2005), for example, when reporting a crime (e. g., a stolen bike); second, system trust is important in uncertain situations (McKnight et al., 2011), which was the case during the COVID-19 crisis, when many people used PDS for the first time (Richter & Mohr, 2020). First-time use in itself represents an uncertain situation (Venkatesh et al., 2016).

ICT self-concept is conceptualized as the mental representation of one’s competences and affective reactions when using digital systems (Peiffer et al., 2020; Zylka et al., 2015). It comprises an affect component (affect ICT self-concept, “I like”) and a competence component (competence ICT self-concept, “I can”). Both components are interrelated (e. g., Arens et al., 2011) and correlate with trust (e. g., Belanche et al., 2012). Technology acceptance models conceptualize competence self-perceptions and affective reactions toward a technology as exogenous antecedents of performance expectancy and effort expectancy, with an indirect effect on behavioral intention (for a synthesis, see Venkatesh et al., 2003). Competence self-perceptions more strongly predict effort expectancy than performance expectancy, whereas affective reactions (i. e., perceived enjoyment) more strongly predict perceived usefulness than perceived ease of use (e. g., Chang et al., 2017; Maillet et al., 2015; Rizun & Strzelecki, 2020). To date, research examining the role of self-perceptions in technology acceptance has focused on more specific constructs that refer to a particular application (i. e., system X) or a particular electronic device, mainly computers (e. g., Chang et al., 2017; Venkatesh, 2000). More general constructs, like the ICT self-concept, are prominent in educational science (e. g., Marsh et al., 2017; Zylka et al., 2015) but have rarely been integrated into technology acceptance research. In the given context of new PDS in policing during COVID-19, however, ICT self-concept is the more promising construct: First-time users may have general ICT-related experiences but no PDS-specific experience. Consequently, the broader ICT-related experiences that form the ICT self-concept should influence PDS acceptance. User group-specific knowledge about ICT self-concept as a potential hurdle to PDS use can help providers lower such hurdles via customized PDS design and marketing.

Moderator: Forced Versus Voluntary Users

During COVID-19, the voluntariness of use motivation differed between user groups. Voluntariness can be defined as the degree of free will involved in the adoption of PDS and is primarily influenced by factors in the environment (Wu & Lederer, 2009). It is a central moderator of technology acceptance, alongside age, gender, and experience with the system (Venkatesh et al., 2003).

Regarding the level of acceptance factors, previous research has shown that voluntary action is associated with more positive attitudes toward and evaluations of a system than forced action (Lakhal et al., 2013; Ryan & Deci, 2000). As system trust and ICT self-concept influence future PDS use, we assume that they also influenced first-time use motivation. Thus, we hypothesize that forced users score lower on these antecedents than voluntary users. Because system trust and ICT self-concept influence key factors of technology acceptance, we also assume lower scores for forced users than for voluntary users on these key factors. For mean group differences between forced and voluntary users, these assumptions formally result in the following:

Hypothesis 1 (H1): Forced users score lower on key factors and antecedents of technology acceptance than voluntary users.

Regarding the predictive values among acceptance factors, the picture is heterogeneous and complex. Voluntary and forced users differ in their motivation: Whereas a voluntary action (“free will”) is active, rather self-determined, and intrinsically motivated, a forced action is more reactive, less self-determined, and extrinsically motivated (Ryan & Deci, 2000). Thus, individual self-perceptions and evaluations of a system should determine behavioral intention especially in voluntary settings. In line with this assumption, a meta-analysis of 71 empirical studies by Wu and Lederer (2009) examined the role of voluntariness in technology acceptance and found that higher levels of voluntariness led to a greater impact of performance expectancy and effort expectancy on behavioral intention, whereas the relationship between performance expectancy and effort expectancy was not moderated by voluntariness. Venkatesh et al. (2003), in contrast, postulate no moderating effect of voluntariness on the predictive values of performance expectancy and effort expectancy on behavioral intention.

Our setting differs from existing research in two main aspects: First, the role of voluntariness has mostly been studied in the context of digital systems at work, not in the context of PDS in policing, which concern private life and are used irregularly and less frequently. Second, previous research focused on permanently mandatory versus optional digital systems instead of time-limited forced use because of the COVID-19 lockdown. Thus, in our setting, users differ in their first-time use motivation (forced vs. voluntary), yet both user groups indicate whether they would use the PDS again or recommend it to others in the future, irrespective of the pandemic (i. e., voluntary behavioral intention). This could imply that the effects are even stronger for forced first-time users, as forced users might experience more degrees of freedom to act in the future. Besides, the role of voluntariness regarding antecedents of technology acceptance remains largely unexplored. By definition, system trust is powerful in uncertain situations (e. g., McKnight et al., 2011), like the first-time use of a PDS. This uncertainty could be intensified by low controllability and pressure among forced compared to voluntary users, strengthening the predictive values of system trust on key factors of technology acceptance. Similarly, ICT self-concept may function as a more sensitive predictor of technology acceptance in cases of forced compared to voluntary PDS use. However, empirical evidence is too limited to postulate a directed hypothesis. Investigating user group-specific predictive values among acceptance factors, we thus hypothesize:

Hypothesis 2 (H2): The predictive values among key factors and antecedents of technology acceptance differ between forced and voluntary users.

Research Model

Based on the aforementioned literature, we propose the following underlying research model for PDS acceptance (see Figure 1, solid lines). In this article, we compare forced and voluntary first-time users of a PDS in policing regarding key factors and antecedents of technology acceptance. We focus on both mean group differences (H1, Figure 1, bold dashed lines) and user group-specific predictive values (H2, Figure 1, thin dashed lines).

Figure 1. The proposed research model of PDS acceptance in policing during COVID-19. Note. BI = behavioral intention; PE = performance expectancy; EE = effort expectancy; ST = system trust; ICT-SCA = affect ICT self-concept; ICT-SCC = competence ICT self-concept. Exogenous variables (ST, ICT-SCA/C): antecedents of technology acceptance; endogenous variables (BI, PE, EE): key factors of technology acceptance. Solid lines = underlying research model; bold dashed lines = H1; thin dashed lines = H2; no direct path between EE and BI is considered because of prior analysis.

Method

Design and Sample

The field survey was part of a project in cooperation with the German police. We used data from participants who used the PDS Onlinewache for the first time, either forced by COVID-19 (N = 346) or voluntarily, independent of COVID-19 (N = 315). A link to the survey was presented after the complete formal crime report had been transmitted to the police; no conclusions about the reported crime could be drawn from the survey data. Fieldwork took place in the spring of 2020, during the first COVID-19 lockdown, using Unipark (Questback, 2020). The data protection officers of the German police confirmed ethical and data protection compliance. Participation was voluntary, withdrawal was possible at any time, and no compensation was paid. The sample consisted of 661 participants (62.8 % male, 37.1 % female, 0.1 % no information) between 17 and 89 years of age (M = 44.69, SD = 15.13). The number of previously reported crimes varied from one to more than ten, with 356 participants (53.9 %) having reported only one crime before the survey.

Measures

Because the survey (administered in German) was part of a large cooperative project, we report only the measures relevant to this article:

COVID-19 use motivation served as the grouping variable. We asked participants: “Why did you choose the Onlinewache to report your crime? Did Corona influence your decision?” The answer options considered here were forced (“I first had contact with the local police. Because of Corona, they recommended the Onlinewache. Therefore, I used the PDS.”) and voluntary (“I would have used the Onlinewache independent of Corona. Corona is currently another argument, but not the main reason for my use of the Onlinewache.”).

Key factors of technology acceptance were measured with six items, mainly adapted from Venkatesh et al. (2003) and rated on a four-point Likert scale ranging from disagree (1) to agree (4). Item wording was slightly adapted to fit the research context. Two items assessed performance expectancy (“The Onlinewache is useful to report a crime.” and “The Onlinewache is a more suitable instrument to report this crime than the local police station.”; ω = .59). Two items measured effort expectancy (“The fill-in instructions of the Onlinewache are easy to understand.” and “It is easy to operate the Onlinewache.”; ω = .84). Two items operationalized behavioral intention (ω = .91), referring to the intention to use the PDS again (“I would use the Onlinewache to report the next crime.”) and the intention to recommend the PDS to others (“I would recommend the Onlinewache to other people.”). The latter item is based on Oliveira et al. (2016).

Antecedents of technology acceptance were measured with nine items, each rated on a four-point Likert scale ranging from disagree (1) to agree (4). Three items by Thielsch et al. (2018) measured system trust; item wording was slightly adapted to fit the research context (e. g., “I rely on the Onlinewache system.”; ω = .91). We assessed ICT self-concept with six items. Item formulation was based on the German version of the Self-Description Questionnaire I (SDQ I; Marsh, 1990), one of the best-known questionnaires for measuring domain-specific self-concepts (Byrne, 1996). Three items each measured the affect ICT self-concept (e. g., “In general, I like digital systems.”; ω = .91) and the competence ICT self-concept (e. g., “In general, I am good at using digital systems.”; ω = .91).
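As a reference for the ω values reported above: McDonald’s omega can be computed directly from a one-factor CFA solution of each scale. A standard formulation, assuming a factor variance fixed to 1 and uncorrelated residuals, is

$$\omega = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k} \theta_{ii}},$$

where λ_i denotes the unstandardized loading and θ_ii the residual variance of item i among the k items of a scale.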

We controlled for central moderators of technology acceptance (Venkatesh et al., 2003), namely age, gender, and crime reporting experience (“How many crimes have you reported in your life in total to date, local or online?”), each measured with a single item.

Data Analysis

Prior Analysis

The data analysis was conducted using SPSS 26 (IBM, 2019; manifest analyses) and Mplus 8 (Muthén & Muthén, 1998 – 2017; latent analyses). Before the main analysis, we first examined the item and scale characteristics. To assess internal consistency, we calculated McDonald’s omega (ω); values below .50 were considered unacceptable (Blanz, 2015). To examine group differences in age, gender, and crime reporting experience (control variables), we compared forced and voluntary users on these variables using t-tests. Second, we conducted confirmatory factor analysis (CFA) to ensure the research model’s applicability in the entire research sample and in the two user groups (Figure 1, solid lines). We used the robust maximum likelihood estimator (MLR) for its robustness against mild deviations from normality and full information maximum likelihood (FIML) estimation to handle missing data (0 – 1.8 %). To evaluate model fit, we used the well-established fit indices chi-square (χ²), the comparative fit index (CFI > .95), the root-mean-square error of approximation (RMSEA < .08), and the standardized root-mean-square residual (SRMR < .06; Dimitrov, 2010). Factor loadings above .40 were considered acceptable (Field, 2018). If necessary, the research model was adjusted.
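To make this setup concrete, a minimal Mplus input for the six-factor measurement model could look like the sketch below. The data file and item names (onlinewache.dat, pe1, …, scc3) are hypothetical placeholders, not the project’s actual variables:

```
TITLE:    Measurement model of PDS acceptance (illustrative sketch)
DATA:     FILE = onlinewache.dat;              ! hypothetical file name
VARIABLE: NAMES = group pe1 pe2 ee1 ee2 bi1 bi2
                  st1-st3 sca1-sca3 scc1-scc3; ! hypothetical item names
          USEVARIABLES = pe1-scc3;
          MISSING = ALL (-99);                 ! FIML is the ML default for missing data
ANALYSIS: ESTIMATOR = MLR;                     ! robust maximum likelihood
MODEL:    PE  BY pe1 pe2;                      ! performance expectancy
          EE  BY ee1 ee2;                      ! effort expectancy
          BI  BY bi1 bi2;                      ! behavioral intention
          ST  BY st1-st3;                      ! system trust
          SCA BY sca1-sca3;                    ! affect ICT self-concept
          SCC BY scc1-scc3;                    ! competence ICT self-concept
OUTPUT:   STDYX;                               ! standardized loadings (> .40 criterion)
```

Factors correlate freely under Mplus defaults; the structural paths of Figure 1 would be added as ON statements.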

Third, we tested for measurement invariance (MI) to ensure psychometric equivalence of the latent factors across the two user groups. We ran multigroup CFAs and tested increasing levels of MI (configural, metric, scalar) against each other, starting with the least constrained solution (step-up approach; Putnick & Bornstein, 2016). Only if scalar MI holds (equal factor pattern, loadings, and intercepts) is a comparison of latent means possible (Chen et al., 2019). Instead of testing MI for each latent construct separately, we ran separate multigroup CFAs for the exogenous antecedents and for the endogenous key factors to avoid identification problems. Further, we tested for metric MI in the final research model as a precondition for MASI. We followed the recommendations by Chen (2007), accepting the higher level of MI if the decrease in CFI was less than .01.
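Mplus 8 can fit the configural, metric, and scalar models in a single run via a convenience option (available for ML-based estimators since version 7.1). A sketch for the three antecedent factors, again with hypothetical file and variable names:

```
DATA:     FILE = onlinewache.dat;                  ! hypothetical file name
VARIABLE: NAMES = group st1-st3 sca1-sca3 scc1-scc3;
          USEVARIABLES = st1-scc3;
          GROUPING = group (1 = forced 2 = voluntary);
          MISSING = ALL (-99);
ANALYSIS: ESTIMATOR = MLR;
          MODEL = CONFIGURAL METRIC SCALAR;        ! step-up MI testing in one run
MODEL:    ST  BY st1-st3;                          ! system trust
          SCA BY sca1-sca3;                        ! affect ICT self-concept
          SCC BY scc1-scc3;                        ! competence ICT self-concept
! Decision rule: accept the more constrained level if the CFI decrease is < .01.
```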

Main Analysis

To investigate mean group differences between forced and voluntary users (H1), we compared their latent means (Figure 1, bold dashed lines). The analysis was conducted separately for antecedents and key factors, analogous to the MI testing. Following Chen et al. (2019), the baseline model was a full scalar invariance model. We constrained the latent means of forced users to zero, while the latent means of voluntary users were estimated freely. We used the critical ratio (CR; parameter estimate divided by its standard error) to assess latent mean differences and Cohen’s d to assess their effect size (small: |d| ≥ 0.20, medium: |d| ≥ 0.50, large: |d| ≥ 0.80; Cohen, 1988).
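Under the scalar model, this latent mean comparison can be specified with group-specific MODEL statements; a sketch building on the multigroup setup above (bracket statements denote means in Mplus):

```
MODEL:           ST  BY st1-st3;          ! loadings and item intercepts are equal
                 SCA BY sca1-sca3;        ! across groups by Mplus multigroup
                 SCC BY scc1-scc3;        ! default (scalar baseline)
MODEL forced:    [ST@0 SCA@0 SCC@0];      ! reference group: latent means fixed to zero
MODEL voluntary: [ST* SCA* SCC*];         ! latent means estimated freely; each
                                          ! estimate / SE yields the critical ratio
```

Cohen’s d can then be obtained by scaling the resulting latent mean difference by the pooled latent standard deviation.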

To investigate user group-specific predictive values within the research model (H2), we used MASI (Figure 1, thin dashed lines). Following the procedure applied by Deng et al. (2005) in the context of technology acceptance, the baseline model was a full metric invariance model. We then tested for structural invariance across user groups. If the predictive values were not invariant across the user groups (ΔSBχ², p < .05), we identified which paths differed. Following Koufteros and Marcoulides (2006), we compared two nested models at a time.
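Because MLR chi-square values cannot simply be subtracted, ΔSBχ² here refers to the Satorra-Bentler scaled difference test. With scaled statistics T, scaling correction factors c, and degrees of freedom d for the more constrained model (subscript 0) and the less constrained comparison model (subscript 1), the standard computation is

$$c_d = \frac{d_0\,c_0 - d_1\,c_1}{d_0 - d_1}, \qquad \Delta SB\chi^2 = \frac{T_0\,c_0 - T_1\,c_1}{c_d},$$

evaluated against a χ² distribution with d_0 − d_1 degrees of freedom.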

Results

Prior Analysis

The detailed results of the prior analyses are documented in the Electronic Supplementary Material 1 (ESM 1; Table E1: descriptive statistics and model applicability, Table E2: t-tests, Table E3: MI testing). First, we examined the item and scale characteristics. Internal consistencies were good to excellent (.84 < ω < .91), except for performance expectancy, which had low but not unacceptable internal consistency (ω = .59). Descriptive statistics of the hypothesis-relevant variables were rather high in the entire sample (2.87 < M < 3.63). Split by user group, forced users had lower descriptive values on these items than voluntary users (see ESM 1, Table E1). Regarding the control variables, t-tests showed that both user groups were comparable in age, t(659) = 1.21, p > .05, gender, t(658) = 0.74, p > .05, and crime reporting experience, t(657) = -0.52, p > .05 (see ESM 1, Table E2). Second, we examined the applicability of the proposed research model in the entire research sample and in each user group using CFAs. Model fit was very good in the entire sample (χ² = 257.86, p < .001, CFI = .962, RMSEA = .058, SRMR = .042) as well as in the two user groups, forced (χ² = 213.99, p < .001, CFI = .950, RMSEA = .070, SRMR = .049) and voluntary (χ² = 115.81, p < .01, CFI = .981, RMSEA = .038, SRMR = .041). All items loaded significantly on their respective factor, with loadings above .40 (see ESM 1, Table E1). Predictive values between the latent factors were mostly significant; however, effort expectancy did not predict behavioral intention in any model (p > .10). Therefore, we adjusted our research model and did not consider this path in subsequent analyses. Further, the competence ICT self-concept was not a significant predictor of effort expectancy among voluntary users, providing first descriptive evidence for user group-specific predictive values (H2).

Third, we tested for increasing levels of MI across forced (N = 346) and voluntary users (N = 315) using multigroup CFAs (see ESM 1, Table E3). The results supported scalar MI for the antecedents (ΔCFI = -.006) and the key factors (ΔCFI = -.004), so the comparison of latent means was permissible (H1). The results also supported metric MI (equal factor loadings) within the adjusted research model (ΔCFI = -.007), so structural invariance testing was permissible (H2). All changes in CFI were below the critical threshold of .01.

Main Analysis

Mean Group Differences (H1)

We hypothesized mean group differences between forced and voluntary users (H1). The latent results are displayed in Table 1; mean differences above zero indicate lower values for forced users compared to voluntary users. As expected, forced users evaluated key factors and antecedents of technology acceptance less positively than voluntary users. Forced users had lower values on performance expectancy (ΔM: 0.40, CR: 9.51, p < .001, large effect, d = 0.80), effort expectancy (ΔM: 0.19, CR: 3.20, p < .01, small effect, d = 0.25), and behavioral intention (ΔM: 0.51, CR: 8.90, p < .001, medium effect, d = 0.74) than voluntary users. Further, forced users reported lower values of system trust (ΔM: 0.30, CR: 6.10, p < .001, small effect, d = 0.49), affect ICT self-concept (ΔM: 0.25, CR: 4.87, p < .001, small effect, d = 0.39), and competence ICT self-concept (ΔM: 0.18, CR: 4.23, p < .001, small effect, d = 0.33) than voluntary users.

In conclusion, as expected, forced users evaluated key factors more negatively and scored lower on antecedents of technology acceptance than voluntary users (H1).

Table 1 Latent mean differences between forced and voluntary users on key factors and antecedents of technology acceptance

User Group-Specific Predictive Values (H2)

We hypothesized user group-specific predictive values between forced and voluntary users (H2). Table 2 shows the results of the structural invariance testing. Table 3 displays the standardized model results for forced and voluntary users based on a model with invariant factor loadings across the user groups; these estimates best reflect the true scores, unaffected by differences in factor loadings across the user groups (Deng et al., 2005).

The baseline (M1) was a model with equal factor loadings across the two user groups. Constraining all structural weights to be invariant across the user groups (M2) deteriorated the overall model fit below the critical CFI threshold of .95, and the difference in chi-square values (M2 – M1) was significant (ΔSBχ² = 53.21, df = 9, p < .001). Examining which paths differed across user groups revealed the following results.
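To illustrate how such nested comparisons can be set up: In Mplus, a parameter labeled in the overall MODEL command is held equal across groups, whereas unlabeled structural paths remain group-specific. A sketch for M3 (equality constraint on the PE → BI path), assuming the DATA, VARIABLE, and ANALYSIS setup of the earlier sketches; intercept handling is omitted for brevity (a metric-only baseline additionally frees the item intercepts in one group and fixes the latent means to zero in both):

```
MODEL:    PE  BY pe1 pe2;   EE  BY ee1 ee2;   BI  BY bi1 bi2;   ! loadings equal
          ST  BY st1-st3;   SCA BY sca1-sca3; SCC BY scc1-scc3; ! across groups
          PE ON EE ST SCA SCC;   ! structural paths: free across groups by default
          EE ON ST SCA SCC;
          BI ON PE (b1);         ! label (b1) holds the PE -> BI path equal across
                                 ! forced and voluntary users (model M3)
! M1 is the same input without the (b1) label; compare M3 against M1 with the
! Satorra-Bentler scaled chi-square difference test.
```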

Key factors. First, constraining the path between performance expectancy and behavioral intention to be equal across user groups led to a significant model deterioration (M3 – M1, ΔSBχ² = 4.38, df = 1, p < .05), leading to the rejection of structural invariance: The predictive value of performance expectancy on behavioral intention was stronger for forced users (β = .89) than for voluntary users (β = .74), though strong in both groups. Second, fixing the path between effort expectancy and performance expectancy to be invariant resulted in a marginally significant model deterioration (M4 – M1, ΔSBχ² = 3.09, df = 1, p < .10): Effort expectancy tended to be more important in predicting performance expectancy for forced (β = .38) than for voluntary users (β = .29).

Antecedents. First, constraining the path between system trust and performance expectancy to be equal led to a significant model deterioration (M5 – M1, ΔSBχ² = 7.23, df = 1, p < .01). Rejecting structural invariance, the predictive value of system trust on performance expectancy was stronger for forced (β = .54) than for voluntary users (β = .33). Second, fixing the path between the competence ICT self-concept and effort expectancy led to a marginally significant increase in the chi-square value (M8 – M1, ΔSBχ² = 3.32, df = 1, p < .10): Competence ICT self-concept tended to predict effort expectancy only for forced users (β = .27), not for voluntary users (β = .09, p = .12). Third, constraining the correlation between the competence and affect components of ICT self-concept resulted in a significant model deterioration (M11 – M1, ΔSBχ² = 11.63, df = 9, p < .001): The correlation between affect and competence ICT self-concept was lower for forced (r = .59) than for voluntary users (r = .77), though still strong. All further model comparisons were nonsignificant.

No single constrained path by itself pushed the overall model fit below the critical CFI threshold of .95; this happened only when all structural weights were constrained simultaneously (M2). Nevertheless, the standardized model results showed that the research model descriptively explained more variance in behavioral intention (R² = .79 vs. .55) and performance expectancy (R² = .70 vs. .46) for forced than for voluntary users, but comparable variance in effort expectancy (R² = .23 vs. .23).

In conclusion, as expected, structural invariance testing revealed user group-specific predictive values for forced and voluntary users (H2). The predictive values among antecedents and key factors of technology acceptance were similar in pattern but somewhat stronger for forced than for voluntary users. Performance expectancy and system trust played a more important role in predicting behavioral intention for forced than for voluntary users, intensifying the mean group differences between the two user groups through their stronger predictive values.

Table 2 Structural invariance testing between forced and voluntary users
Table 3 Standardized model results for forced and voluntary users based on a model with equal factor loadings across the two user groups

Discussion

In this study, we compared forced and voluntary first-time users of a PDS in policing, focusing on the role of key factors of technology acceptance as well as on their antecedents, system trust and ICT self-concept (affect and competence). We hypothesized both mean group differences (H1) and user group-specific predictive values (H2) between the two user groups. In summary, we found that forced and voluntary users, comparable in age, gender, and crime reporting experience, differed regarding key factors and antecedents of technology acceptance with less positive values for forced users than for voluntary users, as expected (H1). Further, as hypothesized, we found user group-specific predictive values between antecedents and key factors of technology acceptance (H2). More specifically, forced users showed stronger predictive values of system trust and performance expectancy on behavioral intention compared to voluntary users. This should be carefully discussed against the background of existing findings.

Unlike in the context of forced use of digital systems at work (Wu & Lederer, 2009), the COVID-19-induced forced use of PDS in policing intensified, rather than diminished, the impact of performance expectancy on behavioral intention. To reconcile these seemingly conflicting results, it seems important to differentiate between current and future use motivation. A crime report is a rare event for most people, indicated by the fact that over 50 % of the participants had reported only one crime to the police so far. Reporting another crime or recommending the PDS to others who experienced a crime probably concerns future times beyond the pandemic. Thus, forced users seem to experience a greater degree of freedom to act (i. e., a recovery of autonomy: from forced to voluntary use motivation) when thinking about future use behavior than voluntary users do, manifested in stronger predictive values. In contrast, the forced use of digital systems at work, for example, because a supervisor prescribes their use, constrains the degree of freedom to act beyond the initial use. Including this temporal perspective on the voluntariness of use motivation, our results (i. e., stronger effects for forced users) and existing findings from the work context (i. e., stronger effects for voluntary users; Wu & Lederer, 2009) can both be explained on the basis of SDT (Ryan & Deci, 2000): In both cases, the effects are stronger in situations with high self-perceived autonomy.

These stronger predictive values among acceptance factors for forced users amplify the existing level differences between the two user groups (e. g., forced users showed a lower level of system trust). Level differences were expected based on motivational theory (SDT; Ryan & Deci, 2000) as well as technology acceptance models (Venkatesh et al., 2003). Moreover, theory from social psychology can help explain these level differences. According to Festinger’s (1957) theory of cognitive dissonance, people strive for consistency and want to avoid dissonance. Strategies to reduce dissonance include removing dissonant cognitions (i. e., elimination) or adding new consonant cognitions (i. e., addition; cf. Raab et al., 2010). Thus, it seems plausible that forced users evaluate acceptance factors more negatively, whereas voluntary users rate acceptance factors more positively, to achieve consistency with their own initial use motivation.

These results offer new theoretical implications for technology acceptance research and practical lessons for PDS providers in pandemic times and beyond. The study’s limitations, in turn, provide fruitful starting points for future research.

Implications for Technology Acceptance Research

Forced and voluntary users differ in terms of antecedents and key factors of technology acceptance and their relations. These user group-specific characteristics and needs were previously overlooked in overall models. Thus, our results call for examining the voluntariness of use motivation as a situational moderator of technology acceptance, carefully differentiated by context (i. e., work or PDS) and temporal aspects like permanence (i. e., permanent or temporary pressure) and regularity (i. e., regular or nonregular PDS use). Methodologically, we highlight the need for structural invariance testing in technology acceptance research. So far, technology acceptance research has focused more on measurement invariance and less on structural invariance when comparing user groups. When structural invariance testing was applied (e. g., Deng et al., 2005), the focus was mainly on key factors of technology acceptance, not on its antecedents. Yet, the inclusion of antecedents adds value because differences in technology acceptance and the user group-specific reasons for these differences are examined together, offering starting points for interventions (e. g., fostering system trust).

On a more general level, our study shows that, despite the overall robustness of technology acceptance models, their replication in a specific context (i. e., PDS in policing) is not trivial. Effort expectancy had no direct effect on behavioral intention in either user group, which is inconsistent with theoretical models of technology acceptance (e. g., Venkatesh et al., 2003) but in line with individual findings on other digital services in banking (Tarhini et al., 2016) or medical contexts (e. g., Duyck et al., 2010). A plausible reason for our finding could be that using the PDS Onlinewache requires users merely to select suitable rubrics (e. g., data abuse) and enter crime details into a highly prestructured user interface. These actions require only basic digital competence (Carretero et al., 2017), so ease of use becomes less of a concern for users. Therefore, effort expectancy may not influence behavioral intention directly but rather act as a precondition, influencing behavioral intention indirectly via performance expectancy.

Replicating existing research, both antecedents, system trust and ICT self-concept, significantly predicted key factors of technology acceptance, highlighting ICT self-concept as a promising antecedent of technology acceptance because of its domain-specific and twofold nature (affect and competence). Self-concept has been extensively studied in educational science (Marsh et al., 2017) but hardly integrated into technology acceptance research. To date, technology acceptance research has been dominated by more specific computer-related or application-specific self-perceptions (Venkatesh et al., 2003). These constructs are less comparable across studies than the more global ICT self-concept and, at the same time, exclude other frequently used digital devices such as laptops, smartphones, or tablets. Also, the focus was on competence self-perceptions; rarely have affective reactions and competence self-perceptions been studied together and at the same (domain-specific) level (e. g., Chang et al., 2017), although our research shows that both components influence technology acceptance. In particular, the previously neglected affect ICT self-concept (“I like”) showed stable effects across user groups. Further, the results extend our knowledge of system trust. In line with previous findings on the acceptance of PDS involving sensitive data (e. g., Belanche et al., 2012), system trust is vital in policing, too. This adds to technology acceptance research, since PDS in policing are empirically underinvestigated (Iriberri et al., 2006). Also, our results reaffirm that system trust is a particularly powerful predictor of technology acceptance in situations with high uncertainty (McKnight et al., 2011; Venkatesh et al., 2016).

Lessons Learned for PDS in Pandemic Times and Beyond

The results reveal a change in user behavior during the COVID-19 pandemic and provide detailed insight into user group-specific PDS acceptance. Reliable prediction of PDS acceptance in policing is crucial because online crime reporting systems aim to increase the rate of reported crimes (Iriberri et al., 2006); a lack of technology acceptance would counteract this aim and impact public safety (e. g., fewer prosecuted crimes). However, field data on PDS in policing are rare, and the inclusion of forced users of an optional, not mandatory, PDS represents a unique research opportunity. Thus, from a provider’s view, our findings provide valuable implications for PDS design and marketing beyond pandemic times. We highlight five lessons learned:

  1. Forced first-time users plan to continue using PDS beyond COVID-19. More than 50 % of the participants would not have used the PDS in the absence of the pandemic (forced users), but they would now mostly recommend the PDS to others and use it for their next crime report. Although behavioral intention was lower for forced than for voluntary users, it was still high. Thus, in line with existing research (Richter & Mohr, 2020), our results support a COVID-19-driven shift from analog to digital solutions with the potential for the long-term digitalization of the public sector. Providers should be prepared for continued increased use of PDS in and beyond pandemic times.
  2. System trust is particularly important for the PDS acceptance of forced users. Forced users rely less on the PDS than voluntary users, and this lower trust impacts PDS acceptance more strongly than it does among voluntary users. To foster system trust, providers first need to consider the user’s disposition to trust technology: If users believe that technologies are generally “consistent, reliable, functional, and provide the help needed” (McKnight et al., 2011, p. 7), system trust is higher. Second, providers need to maximize the perceived trustworthiness of the specific PDS (McKnight et al., 2011). The perceived trustworthiness of a system often relies on exchanges with other people, written PDS evaluations, or the provider’s reputation. PDS marketing should highlight trustworthiness and make evaluations by prior PDS users publicly available to interested users. Further, Thielsch et al. (2018) offer practical guidance on supporting system trust and avoiding distrust via system quality (e. g., plausibility checks) as well as contextual or procedural elements (e. g., involving users in problem-solving). These elements also affect the perceived usefulness of a system (Venkatesh, Thong, & Xu, 2016).
  3. ICT self-concept influences PDS acceptance; here, especially low self-perceived competence can be a hurdle. The fact that competence ICT self-concept impacts effort expectancy only among forced users (who score lower on competence ICT self-concept than voluntary users) indicates that especially low self-perceived ICT competence influences PDS acceptance. The relevance for PDS design and marketing is further stressed by the likelihood that people with extremely low competence ICT self-concept refused to use the PDS despite COVID-19 and are therefore not included in this study. Support services for troubleshooting (e. g., chatbots, remote assistance) or for uncertainty in use (e. g., user-supporter tandems) could be integrated into PDS design and highlighted by PDS marketing. This makes PDS more attractive to people with low self-perceived ICT competence, as positive user experiences become more likely; success experiences, in turn, influence ICT self-concept positively (Bong & Skaalvik, 2003).
  4. The average PDS user is middle-aged and male, with little crime reporting experience. Personas describing the prototypical forced and voluntary user (Cooper & Reimann, 2003) are comparable in age, gender, and crime reporting experience. Consequently, no specific focus on age or gender is necessary to address forced users beyond pandemic times. However, our analysis showed that about two-thirds of the participants in both user groups were male. As with other digital applications (Davaki, 2018), women use PDS in policing less frequently than men. PDS should be equally attractive and available to all genders, which offers starting points for customized PDS marketing (e. g., targeted try-outs, slogans, flyers).
  5. PDS are perceived as useful if people assume early on that they are easy to use. The perceived ease of use is a precondition for the perceived usefulness of a PDS, which strongly determines future use and recommendation behavior in both user groups. Developers should ensure an intuitive interface design (e. g., symbols, system feedback) to foster perceived usefulness and include design elements that highlight the usefulness and benefits of the PDS (e. g., linked background information about the benefits of the PDS and the usefulness of individual pieces of information).

Limitations and Directions for Future Research

The unique field sample is a key strength of this study; however, this uniqueness comes at the cost of some limitations. First, method artifacts cannot be ruled out because of our one-shot postuse survey (Podsakoff et al., 2003). One critical argument could be that the user experience itself influenced system trust and ICT self-concept and thus explains their effects on key factors of technology acceptance. However, we replicated the underlying research model in an experimental pre-post setting among potential users of the Onlinewache, in which the antecedents were measured before and the key factors after the first-time use of the Onlinewache to report a fictitious crime. Second, we used behavioral intention as a proxy for actual behavior, as no follow-up survey (next crime report) or ad-hoc recommendation of the PDS (actual recommendation behavior) was feasible. Although behavioral intention is the most common dependent variable in technology acceptance research and is frequently used as a proxy for actual behavior (Khechine et al., 2016), there may be a gap between behavioral intention and actual behavior: Routines or contextual circumstances can prevent a behavioral intention from resulting in actual behavior (e. g., Liu et al., 2019). Nevertheless, the results provide a valuable basis for predicting user behavior, since behavioral intention is a precondition for actual behavior (Sheeran, 2002).

Third, this study covers only the front-end of the Onlinewache, focusing on the user’s side. However, the changed user behavior during COVID-19 simultaneously created evolving demands on the provider’s side regarding subsequent data processing and data management (back-end). While the front-end evaluation points to the success of the Onlinewache during COVID-19, our analyses say nothing about the back-end process on the provider’s side (i. e., the police). Recent studies point to increased stress because of extended ICT use at work during COVID-19 (Techniker Krankenkasse, 2020). To gain a holistic picture of the impact of COVID-19 on work processes, we plan to examine the back-end process of the Onlinewache in subsequent research projects.

Regarding future research, we see great potential in our proposed research model and the method applied (MASI). We assume stable effects in related contexts involving sensitive data (e. g., health reporting), but only replication studies can provide empirical evidence for this. Besides the COVID-19 use motivation, other heterogeneous use motivations (e. g., saving the climate, time, or money) are worth examining in the future. First evidence by Belanche et al. (2012) points to user group-specific needs when comparing the PDS acceptance of users with high versus low environmental concern or time-consciousness. To address the existing limitations of this study, we further recommend a longitudinal study design that includes actual behavior as a dependent variable. The effect of users’ experiences of success and failure on PDS acceptance and its antecedents is another topic for future research; besides field data, experimental studies would be beneficial here, as success and failure can be manipulated.

Conclusion

Every crisis can also be an opportunity, as crises create learning and space for change. The COVID-19 crisis led to sudden and forced changes toward digital solutions in professional and public life. The top priority during this pandemic was public health. The second priority, however, should be to make this time of change a turning point for the digitalization of work processes. To this end, this study teaches important lessons for research and practice in and beyond pandemic times. Forced users during COVID-19 generally plan to stay digital in the future, though users’ motivation to use a digital system makes a difference for technology acceptance. Researchers should uncover user group-specific needs and characteristics, and providers should consider this user group specificity in design and marketing so that PDS benefit heterogeneous users and providers alike.

This study was part of the research project “Onlinewache Rheinland-Pfalz”, a cooperative project between Trier University and the State Criminal Police Office of Rhineland-Palatinate. We would like to thank in particular Police Councillor Thomas Welsch and Detective Superintendent Patrick Knies for their support.

References

  • Arens, A. K., Yeung, A. S., Craven, R. G., & Hasselhorn, M. (2011). The twofold multidimensionality of academic self-concept: Domain specificity and separation between competence and affect components. Journal of Educational Psychology, 103 (4), 970 – 981. https://doi.org/10.1037/a0025047

  • Belanche, D., Casaló, L. V., & Flavián, C. (2012). Integrating trust and personal values into the technology acceptance model: The case of e-government services adoption. Cuadernos de Economía y Dirección de la Empresa, 15 (4), 192 – 204. https://doi.org/10.1016/j.cede.2012.04.004

  • BKA. (2020). Polizeiliche Kriminalstatistik (PKS) 2020 [Police crime statistics 2020; unpublished statistics].

  • Blanz, M. (2015). Forschungsmethoden und Statistik für die Soziale Arbeit: Grundlagen und Anwendungen [Research methods and statistics for social work: Basics and applications]. Kohlhammer Verlag.

  • Boateng, F. D. (2018). Crime reporting behavior: Do attitudes toward the police matter? Journal of Interpersonal Violence, 33 (18), 2891 – 2916. https://doi.org/10.1177/0886260516632356

  • Bong, M., & Skaalvik, E. M. (2003). Academic self-concept and self-efficacy: How different are they really? Educational Psychology Review, 15 (1), 1 – 40. https://doi.org/10.1023/A:1021302408382

  • Brandsma, T., Stoffers, J., & Schrijver, I. (2020). Advanced technology use by care professionals. International Journal of Environmental Research and Public Health, 17 (3), 742. https://doi.org/10.3390/ijerph17030742

  • Byrne, B. M. (1996). Academic self-concept: Its structure, measurement, and relation to academic achievement. In B. A. Bracken (Ed.), Handbook of self-concept: Developmental, social, and clinical considerations (pp. 287 – 316). Wiley.

  • Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1: The digital competence framework for citizens with eight proficiency levels and examples of use (EUR, Scientific and Technical Research Series). Luxembourg. Retrieved from http://publications.jrc.ec.europa.eu/repository/bitstream/JRC106281/web-digcomp2.1pdf_(online).pdf

  • Carter, L., & Bélanger, F. (2005). The utilization of e-government services: Citizen trust, innovation and acceptance factors. Information Systems Journal, 15 (1), 5 – 25. https://doi.org/10.1111/j.1365-2575.2005.00183.x

  • Casey, T., & Wilson-Evered, E. (2012). Predicting uptake of technology innovations in online family dispute resolution services: An application and extension of the UTAUT. Computers in Human Behavior, 28 (6), 2034 – 2045. https://doi.org/10.1016/j.chb.2012.05.022

  • Chang, C.-T., Hajiyev, J., & Su, C.-R. (2017). Examining the students’ behavioral intention to use e-learning in Azerbaijan? The general extended technology acceptance model for e-learning approach. Computers & Education, 111, 128 – 143. https://doi.org/10.1016/j.compedu.2017.04.010

  • Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14 (3), 464 – 504. https://doi.org/10.1080/10705510701301834

  • Chen, H., Dai, J., & Gao, Y. (2019). Measurement invariance and latent mean differences of the Chinese version physical activity self-efficacy scale across gender and education levels. Journal of Sport and Health Science, 8 (1), 46 – 54. https://doi.org/10.1016/j.jshs.2017.01.004

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Erlbaum.

  • Cooper, A., & Reimann, R. (2003). About face 2.0: The essentials of interaction design (2nd ed.). Wiley.

  • Davaki, K. (2018). The underlying causes of the digital gender gap and possible solutions for enhanced digital inclusion of women and girls: Study. European Parliament. https://doi.org/10.2861/98269

  • Deng, X., Doll, W. J., Hendrickson, A. R., & Scazzero, J. A. (2005). A multigroup analysis of structural invariance: An illustration using the technology acceptance model. Information & Management, 42 (5), 745 – 759. https://doi.org/10.1016/j.im.2004.08.001

  • Dimitrov, D. M. (2010). Testing for factorial invariance in the context of construct validation. Measurement and Evaluation in Counseling and Development, 43 (2), 121 – 149. https://doi.org/10.1177/0748175610373459

  • Duyck, P., Pynoo, B., Devolder, P., Voet, T., Adang, L., Ovaere, D., & Vercruysse, J. (2010). Monitoring the PACS implementation process in a large university hospital: Discrepancies between radiologists and physicians. Journal of Digital Imaging, 23 (1), 73 – 80. https://doi.org/10.1007/s10278-008-9163-7

  • Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

  • Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). Sage.

  • Ganiron, T. U., Chen, J. S., Cruz, R. D., & Pelacio, J. G. (2019). Development of an online crime management and reporting system. World Scientific News, 131, 164 – 180.

  • Hoefnagel, R., Oerlemans, L., & Goedee, J. (2012). Acceptance by the public of the virtual delivery of public services. Social Science Computer Review, 30 (3), 274 – 296. https://doi.org/10.1177/0894439311419807

  • IBM. (2019). IBM SPSS Statistics for Windows (Version 26.0) [Computer software]. IBM Corp. Retrieved from https://www.ibm.com/analytics/spss-statistics-software

  • Iriberri, A., Leroy, G., & Garrett, N. (2006). Reporting on-campus crime online: User intention to use. In IEEE (Ed.), Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS’06) (82a). https://doi.org/10.1109/HICSS.2006.416

  • Khechine, H., Lakhal, S., & Ndjambou, P. (2016). A meta-analysis of the UTAUT model: Eleven years later. Canadian Journal of Administrative Sciences / Revue Canadienne des Sciences de l’Administration, 33 (2), 138 – 152. https://doi.org/10.1002/CJAS.1381

  • Koufteros, X., & Marcoulides, G. A. (2006). Product development practices and performance: A structural equation modeling-based multigroup analysis. International Journal of Production Economics, 103 (1), 286 – 307. https://doi.org/10.1016/j.ijpe.2005.08.004

  • Lakhal, S., Khechine, H., & Pascot, D. (2013). Student behavioural intentions to use desktop video conferencing in a distance course: Integration of autonomy to the UTAUT model. Journal of Computing in Higher Education, 25 (2), 93 – 121. https://doi.org/10.1007/s12528-013-9069-3

  • Liu, H., Wang, L., & Koehler, M. J. (2019). Exploring the intention-behavior gap in the technology acceptance model: A mixed-methods study in the context of foreign-language teaching in China. British Journal of Educational Technology, 50 (5), 2536 – 2556. https://doi.org/10.1111/bjet.12824

  • Maillet, É., Mathieu, L., & Sicotte, C. (2015). Modeling factors explaining the acceptance, actual use and satisfaction of nurses using an electronic patient record in acute care settings: An extension of the UTAUT. International Journal of Medical Informatics, 84 (1), 36 – 47. https://doi.org/10.1016/j.ijmedinf.2014.09.004

  • Marsh, H. W. (1990). Self-Description Questionnaire I (SDQ I). Manual. MacArthur. First citation in articleGoogle Scholar

  • Marsh, H. W., Martin, A. J., Yeung, A. S., & Craven, R. G. (2017). Competence self-perceptions. In A. J. ElliotC. S. DweckD. S. Yeager (Eds.), Handbook of competence and motivation: Theory and application (pp. 85 – 115). Guilford. First citation in articleGoogle Scholar

  • McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2 (2), 1 – 25. https://doi.org/10.1145/1985347.1985353 First citation in articleCrossrefGoogle Scholar

  • Muthén, B. O., & Muthén, L. K. (1998 – 2017). MPlus (Version 8) [Computer software]. Author. Retrieved from https://www.statmodel.com/ First citation in articleGoogle Scholar

  • Oh, J.-C., & Yoon, S.-J. (2014). Predicting the use of online information services based on a modified UTAUT model. Behaviour & Information Technology, 33 (7), 716 – 729. https://doi.org/10.1080/0144929X.2013.872187 First citation in articleCrossrefGoogle Scholar

  • Oliveira, T., Faria, M., Thomas, M. A., & Popovič, A. (2014). Extending the understanding of mobile banking adoption: When UTAUT meets TTF and ITM. International Journal of Information Management, 34 (5), 689 – 703. https://doi.org/10.1016/j.ijinfomgt.2014.06.004 First citation in articleCrossrefGoogle Scholar

  • Oliveira, T., Thomas, M., Baptista, G., & Campos, F. (2016). Mobile payment: Understanding the determinants of customer adoption and intention to recommend the technology. Computers in Human Behavior, 61, 404 – 414. https://doi.org/10.1016/j.chb.2016.03.030 First citation in articleCrossrefGoogle Scholar

  • Peiffer, H., Schmidt, I., Ellwart, T., & Ulfert, A.-S. (2020). Digital competences in the workplace: Theory, terminology, and training. In E. WuttkeJ. SeifriedH. M. Niegemann (Eds.), Research in vocational education. Vocational education and training in the age of digitization: Challenges and opportunities (pp. 157 – 181). Barbara Budrich. First citation in articleCrossrefGoogle Scholar

  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88 (5), 879 – 903. https://doi.org/10.1037/0021-9010.88.5.879 First citation in articleCrossrefGoogle Scholar

  • Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review, 41, 71 – 90. https://doi.org/10.1016/j.dr.2016.06.004 First citation in articleCrossrefGoogle Scholar

  • Questback. (2020). Enterprise Feedback Suite Survey (Version Fall 2020) [Survey software]. Questback GmbH. Retrieved from https://www.unipark.com First citation in articleGoogle Scholar

  • Raab, G., Unger, A., & Unger, F. (2010). Die Theorie kognitiver Dissonanz [The Theory of Cognitive Dissonance]. In G. RaabA. UngerF. Unger (Eds.), Marktpsychologie (Vol. 2, pp. 42 – 64). Gabler. https://doi.org/10.1007/978-3-8349-6314-7_4 First citation in articleCrossrefGoogle Scholar

  • Richter, G., & Mohr, N. (June ). (2020). Digital sentiment survey Germany: Understanding the new digital user. McKinsey & Company. Retrieved from https://www.mckinsey.de/publikationen/digital-sentiment-survey-germany-2020 First citation in articleGoogle Scholar

  • Rizun, M., & Strzelecki, A. (2020). Students’ acceptance of the COVID-19 impact on shifting higher education to distance learning in Poland. International Journal of Environmental Research and Public Health, 17 (18), 6468 https://doi.org/10.3390/ijerph17186468 First citation in articleCrossrefGoogle Scholar

  • Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25 (1), 54 – 67. https://doi.org/10.1006/ceps.1999.1020 First citation in articleCrossrefGoogle Scholar

  • Sheeran, P. (2002). Intention-behavior relations: A conceptual and empirical review. European Review of Social Psychology, 12 (1), 1 – 36. https://doi.org/10.1080/14792772143000003 First citation in articleCrossrefGoogle Scholar

  • Tarhini, A., El-Masri, M., Ali, M., & Serrano, A. (2016). Extending the UTAUT model to understand the customers’ acceptance and use of internet banking in Lebanon: A structural equation modeling approach. Information Technology & People, 29 (4), 830 – 849. https://doi.org/10.1108/ITP-02-2014-0034 First citation in articleCrossrefGoogle Scholar

  • Techniker Krankenkasse. (2020, July 13). Corona-Stress: Jeder Zweite fühlt sich stark belastet [Corona-stress: Every second feels strongly burdened] [Press release]. Retrieved from https://www.tk.de/presse/themen/praevention/gesundheitsstudien/corona-stress-jeder-zweite-fuehlt-sich-stark-belastet-2088252 First citation in articleGoogle Scholar

  • Thielsch, M. T., Meeßen, S. M., & Hertel, G. (2018). Trust and distrust in information systems at the workplace. PeerJ, 6, e5483. https://doi.org/10.7717/peerj.5483 First citation in articleCrossrefGoogle Scholar

  • Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11 (4), 342 – 365. https://doi.org/10.1287/isre.11.4.342.11872 First citation in articleCrossrefGoogle Scholar

  • Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27 (3), 425 – 478. https://doi.org/10.2307/30036540 First citation in articleCrossrefGoogle Scholar

  • Venkatesh, V., Thong, J. Y. L., Chan, F. K. Y., & Hu, P. J. H. (2016). Managing citizens’ uncertainty in e-government services: The mediating and moderating roles of transparency and trust. Information Systems Research, 27 (1), 87 – 111. https://doi.org/10.1287/isre.2015.0612 First citation in articleCrossrefGoogle Scholar

  • Venkatesh, V., Thong, J. Y.L., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems, 17 (5), 328 – 376. https://doi.org/10.17705/1jais.00428 First citation in articleCrossrefGoogle Scholar

  • Wu, I.-L., & Chen, J.-L. (2005). An extension of trust and TAM model with TPB in the initial adoption of on-line tax: An empirical study. International Journal of Human-Computer Studies, 62 (6), 784 – 808. https://doi.org/10.1016/j.ijhcs.2005.03.003 First citation in articleCrossrefGoogle Scholar

  • Wu, J., & Lederer, A. (2009). A meta-analysis of the role of environment-based voluntariness in information technology acceptance. MIS Quarterly, 33 (2), 419 – 432. https://doi.org/10.2307/20650298 First citation in articleCrossrefGoogle Scholar

  • Wu, K., Zhao, Y., Zhu, Q., Tan, X., & Zheng, H. (2011). A meta-analysis of the impact of trust on technology acceptance model: Investigation of moderating influence of subject and context type. International Journal of Information Management, 31 (6), 572 – 581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004 First citation in articleCrossrefGoogle Scholar

  • Zylka, J., Christoph, G., Kroehne, U., Hartig, J., & Goldhammer, F. (2015). Moving beyond cognitive elements of ICT literacy: First evidence on the structure of ICT engagement. Computers in Human Behavior, 53, 149 – 160. https://doi.org/10.1016/j.chb.2015.07.008 First citation in articleCrossrefGoogle Scholar

1 ICT = information and communication technology.

2 Personas are used in user interface design; they are prototypical, vivid descriptions of users (Cooper & Reimann, 2003).

3 The Onlinewache (English: Online Police Station) is the online crime reporting system of Rhineland-Palatinate (RLP), Germany, available at https://www.polizei.rlp.de/de/onlinewache/

4 Available at https://www.politie.nl/en

5 Available at https://www.police.qld.gov.au/reporting

6 Apart from person-related antecedents, other antecedents, such as technology-related factors, also influence performance expectancy and effort expectancy (Venkatesh et al., 2016).

7 The filter question was “How many criminal charges have you reported online via a digital platform (nationwide)?”. Only respondents who answered “1” were included in this article.

8 McDonald’s omega.
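For reference, the standard formula (not a computation reported in this article): for a unidimensional congeneric measurement model, McDonald’s omega is obtained from the standardized factor loadings λᵢ and residual variances θᵢᵢ of the items as ω = (Σᵢ λᵢ)² / [(Σᵢ λᵢ)² + Σᵢ θᵢᵢ]. Higher values indicate that a larger share of the total scale variance is attributable to the common factor.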

9 In supplemental analyses, we tested alternative research models with a direct path (M1) and with both a direct and an indirect path (M2) between system trust and behavioral intention, to account for heterogeneous empirical evidence on the theoretical localization of system trust in technology acceptance models. M1 showed a worse model fit than the proposed research model (see Figure 1, solid lines). M2 showed a comparable model fit but lower parsimony than the proposed research model. Further, when examining M2 separately in the forced and voluntary user groups, the direct effect of system trust on behavioral intention was nonsignificant in both groups. Model fit information for these alternative models M1 and M2 is documented in ESM 2.
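To illustrate how such nested path specifications can be compared (the original analyses were run in Mplus 8; Muthén & Muthén, 1998 – 2017), the following minimal Python sketch uses the semopy package with lavaan-style model syntax. All item names, the data file, and the three-indicator measurement structure are illustrative assumptions, not the authors’ specification.

# Minimal sketch, not the authors' Mplus code: specifying the alternative
# model M2 (direct and indirect paths from system trust to behavioral
# intention). Item names (trust1..trust3, ict1..ict3, pe1..pe3, ee1..ee3,
# bi1..bi3) and the data file are hypothetical.
import pandas as pd
import semopy

data = pd.read_csv("survey_items.csv")  # item-level responses, one row per person

# Dropping "+ TRUST" from the BI equation yields the proposed
# indirect-only research model for comparison.
M2_DESC = """
TRUST =~ trust1 + trust2 + trust3
ICT =~ ict1 + ict2 + ict3
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
BI =~ bi1 + bi2 + bi3
PE ~ TRUST + ICT
EE ~ TRUST + ICT
BI ~ PE + EE + TRUST
"""

model = semopy.Model(M2_DESC)
model.fit(data)
print(semopy.calc_stats(model))  # fit indices (e.g., chi-square, CFI, RMSEA)

Estimating both specifications and comparing their fit and parsimony (e.g., AIC/BIC in the calc_stats output) parallels the reported comparison of the proposed model with M1 and M2.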

10 In a supplemental analysis, we replicated the adjusted research model in an experimental pre-post setting among potential users of the Onlinewache (N = 169, 50 % male, 50 % female, age: M = 44.73, SD = 20.44, range 18 – 84 years). The antecedents (system trust, ICT self-concept) were measured before and the key factors (performance expectancy, effort expectancy, behavioral intention) after the first-time use of the Onlinewache to report a fictitious crime. The results showed an acceptable model fit (χ² = 168.30, p < .001, CFI = .936, RMSEA = .080, SRMR = .049) and model results similar to those of the field survey.

11 See footnote 10.