Open Access Original Article

DCA-Transparency

Validation and Extension of a German Scale

Published Online: https://doi.org/10.1026/0932-4089/a000360

Abstract

Abstract. Although there is strong theoretical support that transparency in organizations fosters trust between employees and managers and increases job satisfaction (e. g., Albu & Flyverbom, 2019; Giri & Kumar, 2010), the empirical evidence for these relationships is inconsistent. This inconsistency might be explained by the use of specific, nongeneralizable scales (Schnackenberg & Tomlinson, 2016). Schnackenberg et al. (2020) therefore developed a multidimensional transparency scale consisting of three dimensions: disclosure, clarity, and accuracy (so-called DCA-transparency). This paper validates a German version of the scale and extends it conceptually and empirically by adding the two dimensions of timeliness and relevance. We conducted three quantitative studies to examine the factorial structure (N = 325), content validity (N1 = 133, N2 = 120), and usefulness (N = 376, with a representative longitudinal sample). The results support the accuracy and utility of the extended German DCA scale in organizational settings and its multidimensionality.

DCA-Transparency: German Validation and Extension

Summary. Organizational transparency is assumed to strengthen the formation of trust between employees and supervisors and to increase employees’ job satisfaction (e. g., Albu & Flyverbom, 2019; Giri & Kumar, 2010). Empirical tests of these relationships, however, have produced contradictory results. One reason may be that the specific scales used differ in essential respects (Schnackenberg & Tomlinson, 2016). Schnackenberg, Tomlinson, and Coen (2020) therefore developed a multidimensional questionnaire consisting of the dimensions disclosure, clarity, and accuracy (DCA-transparency). In this paper, the instrument is extended by the two dimensions of timeliness and relevance. Three quantitative studies examine its dimensionality (N = 325), its content validity (N1 = 133, N2 = 120), and the consequences of transparency in a representative longitudinal study (N = 376). The results confirm the validity of the instrument for the German-speaking area and underline the importance of a multidimensional understanding of organizational transparency.

Background

Transparency, which has been much discussed in organizational research and beyond (e. g., Rawlins, 2008; Schnackenberg & Tomlinson, 2016), refers to the perceived quality of disclosed information. High-quality information enables individuals to reduce the uncertainties attached to unknown situations (Venkatesh et al., 2016). The perception of transparency is therefore important for decision-making as well as for sensemaking under different circumstances (Giri & Kumar, 2010). In the organizational setting, for example, transparency is assumed to increase the job satisfaction of employees (Pirson & Malhotra, 2011; Schnackenberg & Tomlinson, 2016). Moreover, transparency is widely seen as an effective mechanism for managing stakeholders and has been studied mainly as an antecedent to trust (e. g., Albu & Flyverbom, 2019; Auger, 2014). Research in this field primarily builds on the integrative model of trust by Mayer et al. (1995), which states that trust is influenced by the perceived ability, benevolence, and integrity of the trustee (all three summarized under the term “trustworthiness”). Based on this model, transparency is seen as an antecedent to trustworthiness, which in turn is an antecedent of trust.

However, empirical findings on these relationships are surprisingly inconsistent. Looking specifically at the transparency-trust relationship, Jahansoozi (2006) and Rawlins (2008) found evidence for a positive relationship, while others, such as Pirson and Malhotra (2011), failed to confirm this result. Building on recent theoretical advancements (e. g., Albu & Flyverbom, 2019; Schnackenberg & Tomlinson, 2016), these mixed empirical findings might be explained by the heterogeneous operationalization of transparency measures, which differ in important aspects such as dimensionality, focus, and perspective. To gain a more detailed understanding, it is therefore necessary to operationalize the different facets of the construct and to measure them appropriately. To this end, Schnackenberg et al. (2020) developed an integrative multidimensional scale covering the disclosure, clarity, and accuracy of information (so-called DCA-transparency). The authors underline the utility of this approach by showing that the scale is suitable for explaining the relationship between transparency and trustworthiness.

The literature already contains some constructs related to transparency, like the openness or closure of organizations (Gebert et al., 1998), communication effort (Rawlins, 2008), or communication within organizations (Sperka, 1997). The newly presented DCA-transparency, however, has a narrower focus and examines the different dimensions of information quality in greater detail.

The present study empirically validates the measure in German and expands its scope by adding two dimensions, timeliness and information relevance, which were part of the theoretical model but not of the scale development (Schnackenberg et al., 2020). Further, by using the German version, we provide empirical evidence that DCA-transparency also works in a non-American cultural environment, adding to the notion that DCA-transparency has at least some degree of cross-cultural validity. Furthermore, we examine the mediating role of trustworthiness in the transparency-trust relationship and the impact of transparency on the job satisfaction of employees.

The paper is structured as follows: First, we explain and extend the transparency framework. Then, three quantitative studies explore the validity and utility of the German version of the scale. Finally, we derive theoretical and practical implications as well as directions for future research.

Conceptualization and Extension of DCA-Transparency

The concept of transparency has been widely discussed recently, and publications on the topic have increased dramatically. Applications range from information systems (e. g., Nicolaou et al., 2013), financial markets (e. g., Winkler, 2000), and communication research (e. g., Rawlins, 2008) to organizational behavior (e. g., Pirson & Malhotra, 2011). Although there is quite a long research tradition on communication quality and transparency, particularly in communication science (e. g., Auger, 2014; Holland et al., 2018; Rawlins, 2008), there was until recently considerable debate on the scope and hence on the exact operationalization of transparency. Further, the conceptual structure (uni- vs. multidimensionality), the theoretical rationale (latent vs. formative construct), and the focus (implementation- vs. theory-driven) remained largely unconsolidated (Schnackenberg & Tomlinson, 2016). Recently, however, the definition by Schnackenberg and Tomlinson (2016, p. 1788) of transparency as “the perceived quality of intentionally shared information from a sender” has been widely adopted. In this sense, transparency refers to the quality of information in any context and can be adapted to a variety of settings (e. g., communication between a manager and an employee). According to this definition, transparency is a matter of perceived degree rather than of the mere existence of information (Albu & Flyverbom, 2019).

While the field of transparency research has consolidated behind Schnackenberg and Tomlinson’s (2016) influential definition, an empirically validated measure following this theoretical rationale was long missing. According to recent research, transparency can be conceptualized as a function of disclosure, clarity, and accuracy. These dimensions cover related constructs used to describe transparency, such as observability, availability of information, understandability, comprehensibility, or reliability (see Schnackenberg & Tomlinson, 2016, for further details). Accordingly, disclosure is the perception that all relevant information is shared in a timely manner. Disclosure is often considered the most important aspect of transparency and a necessary but not sufficient condition (Albu & Flyverbom, 2019; Clark Williams, 2008; Pirson & Malhotra, 2011). Clarity indicates the perceived degree of comprehensibility of the shared information (Holland et al., 2018; Rawlins, 2008; Winkler, 2000); it can be interpreted as a measure of the congruence between the intended and the understood meaning of information. Accuracy is the perceived degree to which shared information reflects reality and is not (purposely) biased. Information is perceived as accurate if the receiver is convinced that the information is correct at the moment of its disclosure (Schnackenberg & Tomlinson, 2016).

We extend the present transparency operationalization by the timeliness and relevance of information disclosure. Both aspects are mentioned within the conceptual description of the disclosure dimension (sharing of relevant information in a timely manner) but do not explicitly appear in the scale items. Because the timeliness facet is not reflected in the items – although it is part of the theoretical definition of transparency – we included specific items and consequently a separate factor for timeliness. Information relevance is indirectly mentioned within the disclosure dimension, but because it is central to transparency, we argue that an explicit relevance dimension should be tested (Nicolaou & McKnight, 2006). By adding these dimensions, we aim to take all theoretical facets derived by Schnackenberg and Tomlinson (2016) into account. Arguably, relevance and timeliness in particular can be influenced distinctly; e. g., when information is delayed, the perception of timeliness may change while the other facets remain unaffected.

Therefore, we propose a narrower definition of disclosure to avoid overlaps between the definitions. Disclosure (in the narrow sense) refers to the perception that a sufficient quantity of information is shared; the timeliness of information sharing is the perceived degree to which the information is released at an appropriate point in time (Bandsuch et al., 2008; Williams, 2005). Bandsuch et al. (2008) point out that information has no value if it is disclosed too late. In this sense, transparency can be understood as a function of the information-sharing frequency (Albu & Flyverbom, 2019; Berglund, 2014). Information relevance indicates the perceived degree to which the disclosed information is important to the receiver (Xu et al., 2010). This dimension is incorporated in many transparency definitions, either directly (e. g., Rawlins, 2008; Schnackenberg & Tomlinson, 2016; Williams, 2005) or indirectly (e. g., Bushman et al., 2004; Kaptein, 2008). It is important because the disclosure of irrelevant information can even lower the perceived degree of transparency (Berglund, 2014).
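To make the resulting five-dimensional structure concrete, the following Python sketch shows one way the extended scale could be scored: item ratings are averaged per dimension, and the dimension means are averaged into a higher-order transparency score. The item labels, the 1 – 5 response format, and the function name are illustrative assumptions, not the published scoring key (see Appendix A1 for the actual items).

```python
import numpy as np
import pandas as pd

# Hypothetical assignment of the 20 items to the five dimensions (illustrative only).
DIMENSIONS = {
    "disclosure": ["D1", "D2", "D3", "D4"],
    "clarity":    ["C1", "C2", "C3", "C4"],
    "accuracy":   ["A1", "A2", "A3", "A4"],
    "timeliness": ["T1", "T2", "T3", "T4"],
    "relevance":  ["R1", "R2", "R3", "R4"],
}

def score_transparency(responses: pd.DataFrame) -> pd.DataFrame:
    """Average item ratings per dimension and form a higher-order mean score."""
    scores = pd.DataFrame(
        {dim: responses[items].mean(axis=1) for dim, items in DIMENSIONS.items()}
    )
    scores["transparency"] = scores.mean(axis=1)  # higher-order composite
    return scores

# Example with random Likert-type data (1-5) for 10 respondents.
rng = np.random.default_rng(0)
items = [i for cols in DIMENSIONS.values() for i in cols]
demo = pd.DataFrame(rng.integers(1, 6, size=(10, len(items))), columns=items)
print(score_transparency(demo).round(2))
```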

Method

The DCA scale by Schnackenberg et al. (2020) was translated into German and extended with items for the added dimensions. Study 1 explored the factorial structure of the construct, and Study 2 assessed the content validity using a quantitative approach. Finally, Study 3 analyzed trustworthiness, trust, and job satisfaction as outcomes of transparency through a longitudinal study.

Translation and Item Development

In order to convey the meaning of the original items, three qualified persons independently translated them, and a common translation was then selected. Finally, a fourth, bilingual person compared the German and English versions and confirmed their congruence. The items for timeliness and relevance were then developed and discussed, paying special attention to keeping their wording and length similar to those of the existing items (see Appendix A1 for the full list of items).

Study 1 – Factorial Structure

Study 1 examines whether the German scale has the same factor structure as the English version. We assumed that the added items for timeliness and relevance would form two separate factors. Because theorizing in the field of transparency is still developing, the timeliness and relevance items are new, and their interrelations with the other dimensions are empirically untested, we used exploratory factor analysis (EFA; Flake et al., 2017). In addition, we examined whether a higher-order factor, understood as transparency, can be retrieved by conducting confirmatory factor analysis (CFA).

Table 1 EFA pattern matrix

We recruited participants through personal contacts and respondent-driven sampling; only native German-speaking employees were permitted to participate in the survey. After the deletion of unusable cases, the final sample size was N = 325 (54/46 % f/m; mean age of 31 years).

Because we assumed the factors to be correlated, we used an oblique rotation method. Should the factors not be correlated, the oblique rotation would yield the same result as orthogonal rotation methods (Costello & Osborne, 2005; Worthington & Whittaker, 2006). Parallel analysis (Horn, 1965) indicated that five factors should be extracted. There are no major cross-loadings, and all items except A3 have their highest factor loading on their a priori factor. We chose an alternative translation for A3¹, which had been debated during the translation process, and reran the EFA (same specifications) with another independent sample (N = 322) utilizing the new A3 item. The results confirm that A3 loads on the accuracy factor (see Table 1). Thus, we found support both for the DCA factor structure and for the factorial distinctiveness of timeliness and relevance.
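As an illustration of this analysis step, the sketch below implements Horn’s (1965) parallel analysis with NumPy and runs an EFA with an oblique (oblimin) rotation via the factor_analyzer package. The package choice, the DataFrame `items` holding the 20 item responses, and all names are assumptions for illustration; the text does not state which software was used.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed tooling; any EFA package works

def parallel_analysis(data: pd.DataFrame, n_iter: int = 100, seed: int = 1) -> int:
    """Horn's (1965) parallel analysis: retain factors whose observed eigenvalues
    exceed the mean eigenvalues obtained from random data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data.values, rowvar=False))[::-1]
    rand_eig = np.zeros((n_iter, p))
    for i in range(n_iter):
        random_data = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))[::-1]
    return int(np.sum(obs_eig > rand_eig.mean(axis=0)))

def run_efa(items: pd.DataFrame) -> pd.DataFrame:
    """EFA with an oblique (oblimin) rotation, since the factors are expected to correlate."""
    n_factors = parallel_analysis(items)
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)
    return pd.DataFrame(fa.loadings_, index=items.columns)  # pattern matrix
```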

We then tested the hierarchical factor structure using CFA. In Model 1, all items load on a single factor; in each subsequent model, we tested a solution with one additional factor until we arrived at the assumed hierarchical five-factor solution, which shows the best model fit (Backhaus et al., 2015). The detailed results are displayed in Table 2.
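A minimal sketch of such a nested model comparison is shown below, using the semopy package and its lavaan-style model syntax; the tooling choice and the item names are assumptions for illustration rather than the software and labels actually used.

```python
import pandas as pd
import semopy  # assumed tooling with lavaan-style model syntax

# Hierarchical five-factor model: five first-order factors plus a
# second-order transparency factor (item names are placeholders).
FIVE_FACTOR_HIERARCHICAL = """
disclosure   =~ D1 + D2 + D3 + D4
clarity      =~ C1 + C2 + C3 + C4
accuracy     =~ A1 + A2 + A3 + A4
timeliness   =~ T1 + T2 + T3 + T4
relevance    =~ R1 + R2 + R3 + R4
transparency =~ disclosure + clarity + accuracy + timeliness + relevance
"""

# One-factor baseline: all 20 items load on a single transparency factor.
ONE_FACTOR = "transparency =~ " + " + ".join(
    f"{p}{i}" for p in ["D", "C", "A", "T", "R"] for i in range(1, 5)
)

def compare_models(data: pd.DataFrame) -> pd.DataFrame:
    """Fit competing factor models and collect fit indices for comparison."""
    fits = {}
    models = {"1-factor": ONE_FACTOR, "5-factor hierarchical": FIVE_FACTOR_HIERARCHICAL}
    for name, desc in models.items():
        model = semopy.Model(desc)
        model.fit(data)
        fits[name] = semopy.calc_stats(model).T.squeeze()
    return pd.DataFrame(fits)  # fit statistics (chi2, CFI, RMSEA, ...) per model
```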

Study 2 – Content Validity

The assessment of content validity is a pivotal but often neglected aspect of scale development (Colquitt et al., 2019). It indicates whether the chosen items cover all relevant aspects without including other constructs (DeVellis, 2017). To assess the content validity of the German scale, we used the quantitative method designed by Hinkin and Tracey (1999) and recently extended by Colquitt et al. (2019). One main advantage of this straightforward method is that it eliminates subjectivity in item retention while requiring only small sample sizes (Hinkin & Tracey, 1999).

Table 2 Comparison of model fits

In this approach, naïve judges (i. e., people not involved in the scale development) rate the correspondence between items and construct definitions on Likert scales. Based on these ratings, the fit – understood as definitional distinctiveness, i. e., the degree to which items correspond more strongly to one definition than to another – is assessed by performing an analysis of variance (ANOVA). In addition, an indicator proposed by Colquitt et al. (2019) is calculated to evaluate the magnitude of the fit. First, we examined the definitional distinctiveness of the transparency construct relative to related constructs (Study 2a). Second, we added further insights by examining the distinctiveness of the five transparency dimensions from each other (Study 2b).

Study 2a – Distinctiveness From Orbiting Scales

Following Colquitt et al. (2019), we selected two orbiting scales, ensuring that they are “at the same stage of ‘causal flow’” (Colquitt et al., 2019, p. 8) as the focal construct and are neither antecedents nor consequences of transparency. The first was the five-item scale for informational justice by Colquitt (2001), translated and validated by Maier et al. (2007). Informational justice is one factor of organizational justice and refers to how the communication of a decision-maker in an organization is perceived (i. e., as truthful, specific, and timely; Colquitt, 2001; Maier et al., 2007).

Another related instrument is the “questionnaire for the assessment of communication in organizations (KomminO)” developed by Sperka (1997). This instrument provides a holistic view of different aspects of organizational communication across different referents. The instrument comprises seven subscales, e. g., trustful communication or feedback, with 35 items in total. Based on the communication quality subscale of KomminO, Mohr et al. (2014) presented their “communication quality between employees and managers” scale. For our analysis, we chose Mohr et al.’s scale because it consists of only eight items, and the items are positively formulated like the items of the extended DCA-transparency scale. Notable similarities between these instruments can be found especially within the disclosure and clarity dimensions.

At the beginning of each page in the online survey, the participants were given the definition of one of the three constructs, followed by the items of all constructs. They were asked to rate the fit between the definition and each item on a 7-point Likert scale; this procedure was repeated for each construct. Thus, the participants evaluated the fit of all items to all construct definitions. To ensure data quality, we included attention tests, and participants failing these tests were excluded from the analysis. Since the sample does not need to be representative and the task requires high attention, we used the online crowdsourcing platform Prolific, where participants were recruited with a financial incentive for completing the survey. Overall, 150 individuals participated in the survey. After cleaning the dataset, N = 133 cases remained (45/55 % f/m; mean age of 32 years).

Based on Hinkin and Tracey (1999, p. 181), we conducted an ANOVA to compare “the item’s mean rating on one conceptual dimension to the item’s ratings on another comparative dimension.” According to this approach, content validity is given if the mean score of an item is significantly higher on the intended scale than on the orbiting scales. Since the ANOVA showed significant (p < .001) differences between the means, we conducted the post hoc Duncan test (Backhaus et al., 2015). The transparency items show the highest average rating for the transparency definition (M = 5.50, SD = 1.60) compared with the informational justice definition (M = 4.65, SD = 1.90) and the definition of communication quality between employees and managers (M = 4.93, SD = 1.85).
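The omnibus step of this procedure can be sketched in a few lines of Python. The ratings below are simulated stand-ins for the judges’ data, and the post hoc Duncan test reported above is not reproduced here.

```python
import numpy as np
from scipy import stats

# ratings[definition] holds each judge's fit rating of one transparency item
# to that construct definition (7-point scale); the values are simulated.
rng = np.random.default_rng(2)
ratings = {
    "transparency":          rng.integers(4, 8, size=133),
    "informational_justice": rng.integers(3, 7, size=133),
    "communication_quality": rng.integers(3, 7, size=133),
}

# One-way ANOVA: does the item's mean rating differ across construct definitions?
f_stat, p_value = stats.f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Content validity requires the highest mean on the intended (transparency) definition.
means = {name: r.mean() for name, r in ratings.items()}
print(means, "->", max(means, key=means.get))
```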

Colquitt et al. (2019) proposed to interpret the means further via an indicator they called “Hinkin-Tracey distinctiveness” (htd). In this case, taking into account the mean correlation (.87) between the orbiting scales and organizational transparency, the positive htd of .118 indicates that the definitional distinctiveness should be interpreted as moderate (Colquitt et al., 2019).
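The reported value can be reproduced from the means above if htd is read as the focal definition’s mean advantage over the orbiting definitions, scaled by the width of the 7-point scale (six intervals). This reading of the Colquitt et al. (2019) indicator is an assumption, but it matches the reported .118:

```python
# Assumed reading of the htd indicator: mean advantage of the focal definition
# over the orbiting definitions, scaled by the scale width (7 - 1 = 6 intervals).
m_transparency = 5.50              # mean rating on the transparency definition
m_orbiting = (4.65 + 4.93) / 2     # mean rating on the two orbiting definitions
htd = (m_transparency - m_orbiting) / 6
print(round(htd, 3))               # 0.118
```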

In order to assess convergent and discriminant validity, we also compared transparency to theoretically related and unrelated constructs. We used Need for Cognition as the unrelated scale, since it is a stable individual disposition rather than a perception and should thus differ from organizational transparency both conceptually and in content. We found high correlations (.65) between overall transparency and related constructs (e. g., communication quality) and almost no correlation (.067) between transparency and theoretically unrelated instruments such as Need for Cognition, providing evidence for both convergent and discriminant validity. We further investigated the incremental effect of transparency on different outcomes (e. g., trust, trustworthiness, and job satisfaction) over and above the related constructs and found significant results for all dependent variables, along with increases in explained variance and decreased coefficients for all related constructs. Table 1 and Table 2 show the correlation matrices, and Table 3 of the electronic supplement (ESM) shows the results of the incremental effects.
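A sketch of such an incremental validity check is given below: two nested OLS models are compared, and the change in R² is tested with an F-test. The statsmodels package and the column names are assumptions for illustration; the same pattern would also apply to testing timeliness and relevance over and above the three DCA dimensions.

```python
import pandas as pd
import statsmodels.api as sm

def incremental_validity(df: pd.DataFrame, outcome: str) -> dict:
    """Hierarchical regression: does transparency explain variance in the outcome
    beyond the related constructs? Column names are placeholders for illustration."""
    related = ["informational_justice", "communication_quality"]
    base = sm.OLS(df[outcome], sm.add_constant(df[related])).fit()
    full = sm.OLS(df[outcome], sm.add_constant(df[related + ["transparency"]])).fit()
    f_stat, p_value, _ = full.compare_f_test(base)  # F-test for the R² increase
    return {
        "R2_base": base.rsquared,
        "R2_full": full.rsquared,
        "delta_R2": full.rsquared - base.rsquared,
        "F": f_stat,
        "p": p_value,
    }

# Usage (hypothetical column names):
# incremental_validity(data, outcome="job_satisfaction")
```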

Study 2b – Distinctiveness of Transparency Dimensions

Study 2b tested the distinctiveness of the five transparency dimensions relative to each other. Again, we created an online survey. At the beginning of each page, the participants received the definition of one dimension followed by all transparency items; this was repeated until they had rated the fit of all items to all five transparency dimensions. The data collection process was identical to Study 2a. After cleaning the dataset, N = 120 cases remained (49/51 % f/m; mean age of 30 years). As in Study 2a, we conducted an ANOVA with the post hoc Duncan test and found that 90 % of the items (18 of the 20) were assigned to their intended dimension and to no other dimension (see Appendix A2). Only the average ratings of items D1 and D2 did not differ significantly between their intended disclosure dimension and the relevance dimension. This result is explicable, since we assumed that information relevance is indirectly incorporated in the disclosure items. Nevertheless, the other two disclosure items clearly refer to the disclosure dimension, and all relevance items are assigned to their intended definition. Therefore, the additional relevance dimension seems reasonable. The results also indicate the assumed distinctiveness of the timeliness dimension. Given an average correlation of .556 between the transparency dimensions and htds varying between .276 and .581, the definitional distinctiveness of each transparency dimension from the others should be interpreted as very strong (Colquitt et al., 2019).

Study 3 – Utility of the Extended DCA-Transparency Scale

Study 3 investigated criterion validity by testing the relationship between transparency and expected outcomes (Hinkin, 1995). It examined the transparency-trust relationship under consideration of trustworthiness as a mediator. Following the original authors’ call to investigate further possible outcomes of transparency, we also examined the relationship between transparency and employee job satisfaction (Schnackenberg et al., 2020).² In the integrative model of trust, trust itself is predicted by specific characteristics of a person (ability, benevolence, integrity), subsumed under trustworthiness. Trust is defined as the willingness of a party to be vulnerable based on positive expectations (Mayer et al., 1995). Ability covers “skills, competencies, and characteristics that enable a party to have influence within some specific domain” (Mayer et al., 1995, p. 717). Benevolence means the “extent to which a trustee is believed to want to do good to the trustor” (Mayer et al., 1995, p. 718), and integrity describes whether the trustee’s set of values is similar to the trustor’s and is morally acceptable. Therefore, “the principle of profit seeking at all costs” (Mayer et al., 1995, p. 719) would not be considered as having integrity (Mayer et al., 1995).

We also investigated the direct influence of transparency on trust. As stated above, much theory supports this relationship (e. g., Albu & Flyverbom, 2019; Norman et al., 2010). We expected that the global transparency construct as well as all individual dimensions have a positive effect on trust. Moreover, we assessed the relationship between the transparency dimensions and trustworthiness as a proximal predictor of trust. We assumed that all transparency dimensions are positively related to the global trustworthiness factor consisting of perceived ability, benevolence, and integrity. For instance, the transparent disclosure of information requires certain abilities to handle the data efficiently (Calvard, 2016). In addition, the selection of information follows specific principles; if relevant information is disclosed, employees and their managers are guided by similar principles, and the manager is therefore perceived as having integrity. Finally, transparent information sharing can be understood as benevolent since it is in the best interest of employees. In summary, transparent behavior serves the different dimensions of trustworthiness and is therefore expected to be positively correlated with the global trustworthiness factor. Since the positive correlation between trustworthiness and trust is already well documented in the literature (e. g., Mayer & Davis, 1999), we also expected this relationship in this study. If there is a relationship between transparency and trustworthiness, we further assumed that trustworthiness mediates the relationship between transparency and trust.

In addition to the relationships described above, we studied the relationship between transparency and job satisfaction. Job satisfaction is defined as “the pleasurable emotional state resulting from the appraisal of one’s job as achieving or facilitating the achievement of one’s job values” (Locke, 1969, p. 316). Giri and Kumar (2010) showed that open communication leads to higher job satisfaction and improved employee performance, which is in line with previous research showing a positive effect of employee participation in decision-making on job satisfaction (e. g., Miller & Monge, 1986; Spector, 1986; Wagner & LePine, 1999). Sharing transparent information is a necessary first step toward employee involvement. The disclosure of information facilitates the achievement of professional objectives and enables individuals to behave according to their job values. It can be assumed that this information also has to be clear, accurate, timely, and relevant to be considered valuable for decision-making. For this reason, we assumed that transparency as a global concept as well as all five transparency dimensions are positively correlated with the job satisfaction of employees. Figure 1 summarizes the relationships assessed in Study 3.

Figure 1. Relationships assessed in Study 3. Note. t1, t2, and t3 indicate the measurement points.

Measures and Data Collection

To reduce common-method bias, which might arise if dependent and independent variables are collected at the same time in the same survey, we used a longitudinal research design (Podsakoff et al., 2003). Data collection took place at three measurement points, two weeks apart. In the first wave, we collected demographics and administered the extended German version of the DCA scale; in the second wave, we assessed perceived trustworthiness (ability, benevolence, integrity) using the German version of the Mayer and Davis (1999) scale as translated by Dreiskämper et al. (2016). The third wave focused on the consequences of transparency: trust was again measured with a Mayer and Davis (1999) scale, and job satisfaction with Thompson and Phua’s (2012) scale. Since no German validation was available for these two scales, we adapted them using a backtranslation process. All constructs were measured on 5-point Likert scales, and the items were randomized within their constructs. The necessary sample size was calculated with an a priori power analysis (power: .8; significance level: .05; effect size: .25), yielding a minimum sample size of 298 participants. To collect the data, we created a study protocol explaining the research design and submitted it to the Leibniz Institute of Psychology, which accepted the research design and preregistered the study. Data collection then took place via a suitable panel provider. Because individuals might perceive transparency differently depending on their age, sex, and scope of employment (part- vs. full-time), we requested the sample to be representative of the German working population with regard to these characteristics; the distribution was adapted from the German Federal Statistical Office (Statistisches Bundesamt, 2019). Based on an expected dropout rate of 25 % between the measurement waves, we planned for 540 participants in the first wave to ensure the minimum sample size in the third wave. Since the actual dropout rate turned out to be smaller, 376 participants remained in the third wave, and the sample was representative of the German working population in terms of age, sex, and scope of employment.

Data Analysis and Results

The descriptive statistics and correlations are shown in Table 3. All variables except trust showed very high internal consistency (> .9). The trust scale had a very low internal consistency of .4. Subsequent inspection revealed a two-factor structure, with items 1 and 3 loading on one factor and items 2 and 4 loading on another. Because of the higher consistency of items 1 and 3 (.61) vs. items 2 and 4 (.51), we deleted the latter, so our final trust scale consisted of only two items. Unlike the trust scale, the translated job satisfaction scale showed high internal consistency.
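For reference, internal consistency of this kind can be computed directly from the item responses; the short function below implements Cronbach’s alpha from its standard formula, with hypothetical column names for the trust items.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# E.g., alpha of the shortened trust scale, assuming a DataFrame `trust`
# with one column per item (column names are placeholders):
# print(cronbach_alpha(trust[["trust_1", "trust_3"]]))
```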

Table 3 Descriptive statistics of all variables

We then used multiple regression to investigate the hypothesized relationships. The detailed results are presented in Table 4 and Table 5. In particular, Table 4 shows the direct effects of each transparency dimension – as well as of overall transparency – on trust and job satisfaction. Table 5 presents the mediation by overall trustworthiness of the effect of transparency on trust. Again, we were able to show the effect of each transparency dimension in addition to the effect of overall transparency.

Table 4 Main effects

Regarding the assumed relationships, we found a large effect of overall transparency on each trustworthiness dimension (0.70 – 0.81; p < .001). Regarding the separate trustworthiness dimensions, we found significant effects of disclosure and relevance on benevolence (but not of clarity, accuracy, or timeliness), while all transparency dimensions had significant effects on integrity. Further, we found a weaker but still meaningful effect of transparency on trust (0.25; p < .001 vs. 0.75 on average for trustworthiness), with significant effects of accuracy (0.27; p < .01) and clarity (0.25; p < .05) but not of disclosure, timeliness, or relevance. We also found significant effects of overall transparency (0.55; p < .001), disclosure (0.26; p < .01), timeliness (0.20; p < .01), and relevance (0.20; p < .01) on job satisfaction, but no significant effects of accuracy or clarity.

Additionally, we compared DCA-transparency with our more elaborate model including relevance and timeliness as additional dimensions. For all individual trustworthiness dimensions (but not for trust) we found a significant ΔR2 when using the additional two dimensions of timeliness and relevance in addition to DCA-transparency (see Table 4).

Concerning the mediation (see Table 5), we found, as presumed, that trustworthiness (fully) mediates the effect of transparency on trust. In more detail, clarity (0.15, p < .05) and accuracy (0.16, p < .01) had significant direct effects on trust, while for all other factors, including the overall transparency factor, there was no significant direct effect beyond trustworthiness. The total effects were meaningful though moderate, with coefficients ranging from 0.11 (p < .01) for timeliness, 0.12 (p < .01) for relevance, and 0.14 (p < .01) for disclosure to 0.25 (p < .001) for overall transparency and 0.29 (p < .001) each for clarity and accuracy. It is worth noting that the effects of both clarity and accuracy were higher than the effect of overall transparency.
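The text does not specify the exact estimation procedure for these mediation effects. As a generic illustration, the sketch below estimates a simple indirect effect (transparency → trustworthiness → trust) as the product of regression coefficients with a percentile bootstrap confidence interval, using statsmodels and placeholder column names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_indirect_effect(df: pd.DataFrame, x: str, m: str, y: str,
                              n_boot: int = 5000, seed: int = 3) -> dict:
    """Simple mediation x -> m -> y: indirect effect a*b with a percentile bootstrap CI."""
    rng = np.random.default_rng(seed)

    def ab(sample: pd.DataFrame) -> float:
        a = sm.OLS(sample[m], sm.add_constant(sample[x])).fit().params[x]       # x -> m
        b = sm.OLS(sample[y], sm.add_constant(sample[[x, m]])).fit().params[m]  # m -> y given x
        return a * b

    n = len(df)
    boot = np.array([ab(df.iloc[rng.integers(0, n, n)]) for _ in range(n_boot)])
    lower, upper = np.percentile(boot, [2.5, 97.5])
    return {"indirect": ab(df), "ci_95": (lower, upper)}

# Usage with placeholder column names:
# bootstrap_indirect_effect(data, x="transparency", m="trustworthiness", y="trust")
```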

Table 5 Mediation effects

Discussion

General Discussion of Results

In this study, we investigated the accuracy and appropriateness of the German extended DCA-transparency scale. We found support for both the accuracy and the utility of the construct structure.

In Study 1, we found that the German translation of the original items follows the three assumed factors of disclosure, accuracy, and clarity. Furthermore, the two theoretically derived additional dimensions of timeliness and relevance were also found to form distinct factors. We extended the original study by showing that transparency is best understood as a higher-order construct.

In Study 2, we found further quantitative evidence for content validity by showing the definitional distinctiveness of transparency. In addition, we found evidence that the transparency dimensions are distinct from one another. We concluded that the items cover all relevant aspects of the corresponding definitions without overlapping with closely related scales. Based on the indicator proposed by Colquitt et al. (2019), the strength of the definitional distinctiveness can be interpreted as moderate with respect to related scales and as very strong between the transparency dimensions.

In Study 3, we found support for the appropriateness of using transparency as an antecedent of trust(worthiness). Moreover, we found that the transparency dimensions differ in their direct relationships to trustworthiness and trust. While all dimensions have a significant indirect effect on trust via trustworthiness, the effects of clarity and accuracy are only partially mediated by trustworthiness. Thus, we were able to show that the effect of transparency on trust is (largely) mediated by trustworthiness. While disclosure, timeliness, and relevance have a significant effect on job satisfaction, the other dimensions do not. These results indicate the utility of – and the need for – a differentiated view of transparency.

Theoretical Implications

In this study, we found evidence for the utility of the translated and extended DCA-transparency scale. Besides extending the scale by two dimensions, we showed that the instrument also works in German, providing initial empirical evidence for its cross-cultural accuracy and appropriateness. Considering the transparency-trust relationship, we argue that the unclear empirical findings can be attributed to the use of unidimensional scales stressing the disclosure dimension. We found no significant effect of disclosure on trust when controlling for the other transparency dimensions; nevertheless, disclosure is indirectly connected to trust. This underlines the important role of trustworthiness as a mediator between transparency and the formation of trust. In addition to trust, we investigated job satisfaction as a second consequence of transparency, thus broadening the scope of applicability.

Extending the DCA scale is meaningful since these dimensions add to the understanding of transparency and are useful for explaining specific outcomes (such as job satisfaction), as we showed that the explained variance is significantly higher when including the two additional dimensions.

Practical Implications

Although this study focuses on scale validation, there are important practical implications as well. First, a mismatch between intended and perceived transparency between managers and their employees has been observed in some cases (Schnackenberg & Tomlinson, 2016). This may result from a limited understanding of transparency or from a focus that is too narrow, i. e., concentrating only on the disclosure facet while neglecting other important aspects. Therefore, supervisors must also consider the clarity, accuracy, timeliness, and relevance of disclosed information. By adding the timeliness and relevance dimensions, we made the transparency measure even better suited for deriving concrete practical measures to increase perceived transparency.

Second, it might not be possible to satisfy all transparency dimensions at one time. For instance, there could be a trade-off between the accuracy and timeliness of information. In such cases, managers must decide which dimensions to emphasize. By being aware of the different aspects and consequences of transparency, supervisors can weigh different possibilities and make informed decisions. It is also important to note that sharing information is a necessary first step toward greater participation and involvement of different stakeholder groups. Because we found a positive relationship between transparency and job satisfaction, the extended DCA-transparency scale might also serve to further investigate the role of participation in this process. It would be interesting to examine which dimensions are most important for improving employees’ perception of participation in decision processes.

Finally, supervisors who share information with their employees usually depend on the information of others as well. Therefore, the more transparently the entity as a whole acts, the better managers are able to share information transparently with their employees.

Further Research and Limitations

Future research should consider extending the English DCA scale by timeliness and relevance as well. Moreover, a short version of the extended DCA scale could be developed to further enhance its adoption in practical settings. In addition to employees as information recipients, other stakeholder groups might have different information needs, leading to different transparency perceptions (Rawlins, 2009; Wehmeier & Raaz, 2012). It is therefore desirable to review the scale in light of the needs of different stakeholder groups such as customers or shareholders. Because the primary focus of this work lies on transparency in organizational settings, the applicability of the developed scale in other contexts should be investigated. As transparency is a large-scale phenomenon examined in many contexts (Christensen & Cheney, 2015), the scale presented here seems useful beyond the organizational setting. Thus, in addition to referents such as “managers” or “top management,” a company’s internal communication may also be analyzed using the extended DCA scale. Since the items are worded relatively broadly and can be adapted to different contexts, applications in other settings are easily feasible. We also encourage research on additional outcome variables such as organizational citizenship behavior, stress, or intention to quit. Because our study was the first to investigate DCA-transparency outside the English-speaking cultural context, we encourage further empirical examinations in different contexts.

In this study, we followed the perspective of Schnackenberg and colleagues and conceptualized organizational transparency as an individual, nonsystemic perception of information quality. Future research should also investigate transparency from the perspective of organizational micropolitics (Neuberger, 2006). It would be insightful to see whether – and how far – information quality can be (and perhaps consciously is being) manipulated by information senders. It would be particularly interesting to see the extent to which a consensus on transparency perceptions can be reached when motivated stakeholders are involved. Such an approach could contribute to a more holistic understanding of transparency.

This work does not come without limitations. Regarding content validity, we did not consult experts when developing the German item pool, as was done in the original study. This might be a limitation, since we assume that the qualitative evidence for content validity found in the original study also holds true for the German version, although we have not explicitly tested this. Another limitation concerns the trust scale by Mayer and Davis (1999), which showed poor measurement quality. During the analysis, we decided to delete two items, improving Cronbach’s alpha to .6. Although this value still lies below the frequently mentioned threshold of .7, it is more acceptable than before, especially for a very short scale (Cortina, 1993). The well-documented positive relationship between trustworthiness (ability, benevolence, integrity) and trust (e. g., Mayer & Davis, 1999) is also visible in our data with the shortened trust scale. For a more detailed examination of the transparency-trust relationship, we nevertheless recommend that future research use different trust scales to confirm their appropriateness in this context. In contrast to the other constructs we used, no validated German version was available for trust, which again underlines the importance of scale validation.

Electronic Supplementary Material

The electronic supplementary material is available with the online version of the article at https://doi.org/10.1026/0932-4089/a000360

References

  • Albu, O. B., & Flyverbom, M. (2019). Organizational transparency: Conceptualizations, conditions, and consequences. Business & Society, 58 (2), 268 – 297. https://doi.org/10.1177/0007650316659851

  • Auger, G. A. (2014). Trust me, trust me not: An experimental analysis of the effect of transparency on organizations. Journal of Public Relations Research, 26 (4), 325 – 343. https://doi.org/10.1080/1062726X.2014.908722

  • Backhaus, K., Erichson, B., & Weiber, R. (2015). Fortgeschrittene Multivariate Analysemethoden: Eine anwendungsorientierte Einführung (3., überarb. und aktual. Aufl.) [Advanced multivariate analysis methods: An application-oriented introduction]. Springer Gabler. https://doi.org/10.1007/978-3-662-46087-0

  • Bandsuch, M., Pate, L., & Thies, J. (2008). Rebuilding stakeholder trust in business: An examination of principle-centered leadership and organizational transparency in corporate governance. Business and Society Review, 113 (1), 99 – 127. https://doi.org/10.1111/j.1467-8594.2008.00315.x

  • Berglund, T. (2014). Corporate governance and optimal transparency. In J. Forssbaeck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency (pp. 359 – 371). Oxford University Press.

  • Bushman, R. M., Piotroski, J. D., & Smith, A. J. (2004). What determines corporate transparency? Journal of Accounting Research, 42 (2), 207 – 252. https://doi.org/10.1111/j.1475-679X.2004.00136.x

  • Calvard, T. S. (2016). Big data, organizational learning, and sensemaking: Theorizing interpretive challenges under conditions of dynamic complexity. Management Learning, 47 (1), 65 – 82. https://doi.org/10.1177/1350507615592113

  • Christensen, L. T., & Cheney, G. (2015). Peering into transparency: Challenging ideals, proxies, and organizational practices. Communication Theory, 25 (1), 70 – 90. https://doi.org/10.1111/comt.12052

  • Clark Williams, C. (2008). Toward a taxonomy of corporate reporting strategies. Journal of Business Communication, 45 (3), 232 – 264. https://doi.org/10.1177/0021943608317520

  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86 (3), 386 – 400. https://doi.org/10.1037//0021-9010.86.3.386

  • Colquitt, J. A., Sabey, T. B., Rodell, J. B., & Hill, E. T. (2019). Content validation guidelines: Evaluation criteria for definitional correspondence and definitional distinctiveness. Journal of Applied Psychology, 104 (10), 1 – 23. https://doi.org/10.1037/apl0000406

  • Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78 (1), 98 – 104. https://doi.org/10.1037/0021-9010.78.1.98

  • Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10 (7), 86 – 99. https://doi.org/10.7275/jyj1-4868

  • DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.) (Applied Social Research Methods Series, Vol. 26). Sage.

  • Dreiskämper, D., Pöppel, K., & Strauß, B. (2016). Vertrauen ist gut … Entwicklung und Validierung eines Inventars zur Messung von Vertrauenswürdigkeit im Sport [Trust is good. Development and validation of an inventory for measuring trustworthiness in sport]. Zeitschrift für Sportpsychologie, 23 (1), 1 – 12. https://doi.org/10.1026/1612-5010/a000156

  • Flake, J. K., Pek, J., & Hehman, E. (2017). Construct validation in social and personality research. Social Psychological and Personality Science, 8 (4), 370 – 378. https://doi.org/10.1177/1948550617693063

  • Gebert, D., Boerner, S., & Matiaske, W. (1998). Offenheit und Geschlossenheit in Organisationen – Zur Validierung eines Meßinstruments (FOGO-Fragebogen) [Openness and closure in organizations – on the validation of a measurement instrument (FOGO questionnaire)]. Zeitschrift für Arbeits- und Organisationspsychologie, 42, 15 – 26.

  • Giri, V. N., & Kumar, P. B. (2010). Assessing the impact of organizational communication on job satisfaction and job performance. Psychological Studies, 55 (2), 137 – 143. https://doi.org/10.1007/s12646-010-0013-6

  • Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21 (5), 967 – 988. https://doi.org/10.1177/014920639502100509

  • Hinkin, T. R., & Tracey, J. B. (1999). An analysis of variance approach to content validation. Organizational Research Methods, 2 (2), 175 – 186. https://doi.org/10.1177/109442819922004

  • Holland, D., Krause, A., Provencher, J., & Seltzer, T. (2018). Transparency tested: The influence of message features on public perceptions of organizational transparency. Public Relations Review, 44 (2), 256 – 264. https://doi.org/10.1016/j.pubrev.2017.12.002

  • Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30 (2), 179 – 185. https://doi.org/10.1007/BF02289447

  • Jahansoozi, J. (2006). Organization-stakeholder relationships: Exploring trust and transparency. Journal of Management Development, 25 (10), 942 – 955. https://doi.org/10.1108/02621710610708577

  • Kaptein, M. (2008). Developing and testing a measure for the ethical culture of organizations: The corporate ethical virtues model. Journal of Organizational Behavior, 29 (7), 923 – 947. https://doi.org/10.1002/job.520

  • Locke, E. A. (1969). What is job satisfaction? Organizational Behavior and Human Performance, 4 (4), 309 – 336. https://doi.org/10.1016/0030-5073(69)90013-0

  • Maier, G. W., Streicher, B., Jonas, E., & Woschée, R. (2007). Gerechtigkeitseinschätzungen in Organisationen: Die Validität einer deutschsprachigen Fassung des Fragebogens von Colquitt (2001) [Assessment of justice in organizations: The validity of a German version of the questionnaire by Colquitt (2001)]. Diagnostica, 53 (2), 1 – 38. https://doi.org/10.1026/0012-1924.53.2.97

  • Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84 (1), 123 – 136. https://doi.org/10.1037/0021-9010.84.1.123

  • Mayer, R. C., Davis, J. H., & Schoorman, D. F. (1995). An integrative model of organizational trust. Academy of Management Review, 20 (3), 709 – 734. https://doi.org/10.5465/amr.1995.9508080335

  • Miller, K. I., & Monge, P. R. (1986). Participation, satisfaction, and productivity: A meta-analytic review. Academy of Management Journal, 29 (4), 727 – 753.

  • Mohr, G., Wolfram, H.‑J., Schyns, B., Paul, T., & Günster, A. C. (2014). Kommunikationsqualität Führungskräfte und MitarbeiterInnen [Communication quality managers and employees]. In Zusammenstellung sozialwissenschaftlicher Items und Skalen. Gesis. https://doi.org/10.6102/zis27

  • Neuberger, O. (2006). Mikropolitik: Stand der Forschung und Reflexion [Research in micropolitics: State of the art and critical reflections]. Zeitschrift für Arbeits- und Organisationspsychologie, 50 (4), 189 – 202. https://doi.org/10.1026/0932-4089.50.4.189

  • Nicolaou, A. I., Ibrahim, M., & van Heck, E. (2013). Information quality, trust, and risk perceptions in electronic data exchange. Decision Support Systems, 54 (2), 986 – 996. https://doi.org/10.1016/j.dss.2012.10.024

  • Nicolaou, A. I., & McKnight, D. H. (2006). Perceived information quality in data exchanges: Effects on risk, trust, and intention to use. Information Systems Research, 17 (4), 332 – 351. https://doi.org/10.1287/isre.1060.0103

  • Norman, S. M., Avolio, B. J., & Luthans, F. (2010). The impact of positivity and transparency on trust in leaders and their perceived effectiveness. The Leadership Quarterly, 21 (3), 350 – 364. https://doi.org/10.1016/j.leaqua.2010.03.002

  • Pirson, M., & Malhotra, D. K. (2011). Foundations of organizational trust: What matters to different stakeholders? Organization Science, 22 (4), 1087 – 1104. https://doi.org/10.1287/orsc.1100.0581

  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.‑Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88 (5), 879 – 903. https://doi.org/10.1037/0021-9010.88.5.879

  • Rawlins, B. R. (2008). Measuring the relationship between organizational transparency and employee trust. Public Relations Journal, 2 (2), 1 – 21.

  • Rawlins, B. R. (2009). Give the emperor a mirror: Toward developing a stakeholder measurement of organizational transparency. Journal of Public Relations Research, 21 (1), 71 – 99. https://doi.org/10.1080/10627260802153421

  • Schnackenberg, A., & Tomlinson, E. (2016). Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships. Journal of Management, 42 (7), 1784 – 1810. https://doi.org/10.1177/0149206314525202

  • Schnackenberg, A., Tomlinson, E., & Coen, C. (2020). The dimensional structure of transparency: A construct validation of transparency as disclosure, clarity and accuracy in organizations. Human Relations, 1 – 33. https://doi.org/10.1177/0018726720933317

  • Spector, P. E. (1986). Perceived control by employees: A meta-analysis of studies concerning autonomy and participation at work. Human Relations, 39 (11), 1005 – 1016. https://doi.org/10.1177/001872678603901104

  • Sperka, M. (1997). Zur Entwicklung eines „Fragebogens zur Erfassung der Kommunikation in Organisationen“ (KomminO) [On the development of a "Questionnaire for the Assessment of Communication in Organizations" (KomminO)]. Zeitschrift für Arbeits- und Organisationspsychologie, 41, 182 – 190.

  • Statistisches Bundesamt. (2019). Erwerbstätige und Erwerbstätigenquote nach Geschlecht und Alter 2008 und 2018: Ergebnis des Mikrozensus [Employed persons and employment rate by gender and age 2008 and 2018: Result of the microcensus]. https://www.destatis.de/DE/Themen/Arbeit/Arbeitsmarkt/Erwerbstaetigkeit/Tabellen/erwerbstaetige-erwerbstaetigenquote.html

  • Thompson, E. R., & Phua, F. T. T. (2012). A brief index of affective job satisfaction. Group & Organization Management, 37 (3), 275 – 307. https://doi.org/10.1177/1059601111434201

  • Venkatesh, V., Thong, J. Y. L., Chan, F. K. Y., & Hu, P. J. H. (2016). Managing citizens’ uncertainty in e-government services: The mediating and moderating roles of transparency and trust. Information Systems Research, 27 (1), 87 – 111. https://doi.org/10.1287/isre.2015.0612

  • Wagner III, J. A., & LePine, J. A. (1999). Effects of participation on performance and satisfaction: Additional meta-analytic evidence. Psychological Reports, 84 (3), 719 – 725.

  • Wehmeier, S., & Raaz, O. (2012). Transparency matters: The concept of organizational transparency in the academic discourse. Public Relations Inquiry, 1 (3), 337 – 366. https://doi.org/10.1177/2046147X12448580

  • Williams, C. C. (2005). Trust diffusion: The effect of interpersonal trust on structure, function, and organizational transparency. Business & Society, 44 (3), 357 – 368. https://doi.org/10.1177/0007650305275299

  • Winkler, B. (2000). Which kind of transparency? On the need for clarity in monetary policy-making. ECB Working Paper, 26, 1 – 34.

  • Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34 (6), 806 – 838. https://doi.org/10.1177/0011000006288127

  • Xu, Y., Kim, H.‑W., & Kankanhalli, A. (2010). Task and social information seeking: Whom do we prefer and whom do we approach? Journal of Management Information Systems, 27 (3), 211 – 240. https://doi.org/10.2753/MIS0742-1222270308

¹ See Appendix A1 for the final scale.

² We focused on the main constructs of interest here. There were further hypotheses in the preregistration stage (such as the moderating effect of propensity to trust on the trustworthiness-trust relationship), but they were mainly included for completeness.

Appendix

Table A1 List of items and their factor loadings
Table A2 Results of ANOVA from Study 2b