
J Bus Psychol (2011) 26:241–248


DOI 10.1007/s10869-011-9238-1

To Aggregate or Not to Aggregate: Steps for Developing and Validating Higher-Order Multidimensional Constructs

Russell E. Johnson · Christopher C. Rosen · Chu-Hsiang Chang

Published online: 1 July 2011


© Springer Science+Business Media, LLC 2011

Abstract The use of higher-order multidimensional constructs (i.e., latent constructs comprised of standalone variables) in the organizational psychology and behavior literatures is becoming commonplace. Despite their advantages (e.g., greater parsimony and bandwidth), the development and validation of such constructs often unfolds in an indiscriminate fashion. It is not surprising, then, that much debate has arisen regarding the viability of many higher-order constructs. In this article, we outline ten recommendations for improving the construct- and criterion-related validity of higher-order constructs. Chief among these recommendations are the need for researchers to specify precise theoretical and empirical inclusion criteria, to rule out alternative explanations for the emergence of a higher-order factor, and to assess incremental and relative importance. To illustrate how these recommendations play out, we apply them to core self-evaluation as an example. We believe that higher-order constructs may offer unique insight into work-relevant phenomena, provided they are established via rigorous means.

R. E. Johnson (✉)
Department of Management, Broad College of Business, Michigan State University, East Lansing, MI 48824, USA
e-mail: johnsonr@bus.msu.edu

C. C. Rosen
Department of Management, Sam M. Walton College of Business, University of Arkansas, Fayetteville, USA

C.-H. Chang
Department of Psychology, Michigan State University, East Lansing, MI 48824, USA

Keywords Higher-order constructs · Multidimensional constructs · Validation · Common method variance · Incremental and relative importance · Core self-evaluation

There has been an increase in the popularity of higher-order multidimensional constructs, which are second-order latent factors whose indicators are standalone constructs backed by their own history of theory and research (Edwards 2001; Law et al. 1998). Take, for example, the five-factor model of personality, which is comprised of the traits of neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness (Goldberg 1992; McCrae and Costa 1987). Researchers have begun to examine these traits as indicators of two higher-order factors: getting ahead (comprised of extraversion and openness to experience) and getting along (comprised of neuroticism, agreeableness, and conscientiousness; Digman 1997). In fact, it has been proposed that all five traits may even load on a single factor of personality (Rushton and Irwing 2008). Examples of higher-order constructs in the organizational sciences that have been previously examined include trait engagement (comprised of proactive personality, positive affectivity, and conscientiousness; Macey and Schneider 2008), core self-evaluation (CSE: comprised of self-esteem, generalized self-efficacy, neuroticism, and locus of control; Judge et al. 1997), overall job attitude (comprised of job satisfaction and organizational commitment; Harrison et al. 2006; Rosen et al. 2009), and overall fairness (comprised of distributive, procedural, and interactional justice; Ambrose and Schminke 2009). Other possibilities might include destructive personality (comprised of arrogance, narcissism, Machiavellianism, etc.), innovative personality (comprised of openness to experience, tolerance of ambiguity, positive affectivity, etc.), and overall work support (comprised of perceived organizational support, perceived leader support, team-member exchange, etc.) and commitment (comprised of commitment to one's organization, supervisor, coworkers, etc.).
Several reasons account for the increased interest in higher-order multidimensional constructs. One reason pertains to the bandwidth-fidelity tradeoff, or the idea that broader constructs are better predictors of criteria that span multiple domains and periods of time (Jenkins and Griffith 2004). If the goal is to predict broadly defined work attitudes and behaviors, then higher-order multidimensional constructs may prove quite valuable. Another reason is to overcome the jangle fallacy, which occurs when a single phenomenon is examined separately under the guise of two or more variables with different labels (Kelley 1927). If multiple variables are indicators of a unitary construct, then it is more parsimonious to examine the source construct rather than the individual (and redundant) indicators. A final reason for the popularity of higher-order multidimensional constructs is the ease of specifying and analyzing latent factors using contemporary structural equation modeling software. The mathematics and computational power needed to model higher-order constructs are no longer obstacles.
Despite the potential advantages of higher-order multidimensional constructs, our thesis is that the decision whether or not to combine constructs into higher-order factors requires careful theoretical consideration and empirical justification. Unfortunately, the validity of higher-order constructs does not typically receive the same level of scrutiny as the validity of standalone, unidimensional constructs. This oversight is worrisome because the validity evidence that exists for standalone constructs is no substitute for the validity of higher-order constructs (Johnson et al. in press-a). If the internal structure of a higher-order construct lacks validity, then any observed relations it has with correlate and outcome variables are moot. Thus, in this article, we present guidelines for developing and validating higher-order constructs that center around four themes: (a) providing theoretical and empirical inclusion criteria, (b) specifying the nature of the higher-order construct, (c) ruling out alternative explanations for the emergence of the higher-order factor, and (d) demonstrating incremental prediction. We discuss these themes next, and then conclude the article by working through an example where we highlight key decision challenges and how they might be resolved.

Specifying Theoretical and Empirical Inclusion Criteria for Higher-Order Constructs
Ideally, the theory from which a multidimensional construct is derived should be used to generate theoretical and empirical criteria that can be applied to identify constructs to be included as indicators (Johnson et al. in press-a). Either a deductive (i.e., top-down) or inductive (i.e., grounded) approach to construct development may be appropriate (Spector 1992), but what is important is that researchers develop and use sound theory to identify the higher-order construct. One alarming trend that we have observed is that many higher-order constructs lack underlying theory supporting their existence. In these cases, the reasoning appears to be that there is a set of variables that are moderately to highly correlated, therefore there must be a latent construct lurking beneath them. We do not find this logic compelling because there are many explanations for the presence of high intercorrelations (see our later section "Ruling Out Alternative Explanations for the Emergence of Higher-Order Constructs").
Sound theory not only helps higher-order constructs withstand scrutiny, but is also needed to specify a shared set of inclusion criteria that can be used to identify all relevant lower-level indicators. At the same time, these theoretical criteria should also be precise enough to exclude constructs from consideration. For example, the theoretical criteria for core self-evaluation are that traits be fundamental, broad, and evaluative (Judge et al. 1997), which support the inclusion of self-efficacy and the exclusion of conscientiousness. If the criteria are not comprehensive enough to identify all relevant indicators, then errors of omission may occur, whereas failure to provide sufficiently precise criteria results in errors of commission.
It is also necessary for researchers to rely on theory to develop empirical inclusion criteria. These empirical criteria, along with theoretical criteria, serve as a litmus test for assessing the status of variables as indicators of a higher-order construct. Consider, for example, the higher-order general factor of personality (Rushton and Irwing 2008). Because it reflects a fundamental disposition, high test-retest reliability might serve as one empirical criterion for assessing the appropriateness of individual indicators. Empirical criteria help to reduce ambiguity with regard to which indicators should be included when modeling higher-order constructs and they provide verifiable evidence supporting the structural validity of constructs (Johnson et al. in press-a).
Our review of extant higher-order constructs indicates that few researchers specify testable inclusion criteria. As a result, there has been disagreement regarding the indicators of various higher-order constructs, such as work engagement (Macey and Schneider 2008) and core self-evaluation (Johnson et al. 2008). Therefore, we call for more rigor in the development of higher-order constructs by adhering to the following three-step process: (1) rely on theory to develop criteria that can be used to identify potential indicators; (2) specify empirical criteria that can be used to evaluate the status of candidate indicators; and (3) collect data and actually test the indicators against the theoretical and empirical inclusion criteria. Constructs that meet these benchmarks can then be retained and used in future research to model the higher-order construct.

Specifying and Testing the Nature of Higher-Order Constructs
In addition to using theory to identify inclusion criteria for selecting suitable indicator variables, the nature of relations among the lower-level indicators and the higher-order construct must be clarified. Two types of higher-order constructs are common (see Edwards 2001; Law et al. 1998). For superordinate constructs, causality flows from the higher-order construct to its indicators, which are labeled as effects indicators. An example is general mental ability, which determines people's lower-level verbal and analytical abilities (Spearman 1904). For aggregate constructs, causality flows from the indicators, which are labeled as causal indicators, to the higher-order factor. An example is overall fairness, which reflects the sum of the lower-level indicators of outcome fairness, procedural fairness, and interactional fairness (Ambrose and Schminke 2009).
Superordinate and aggregate constructs differ conceptually and statistically (Bollen and Lennox 1991; Edwards 2001, 2011; MacKenzie et al. 2005), and these conceptual and statistical differences have implications for the selection of indicators and the assessment of reliability of higher-order multidimensional constructs. Specifically, a superordinate construct is the primary cause of its indicators, thus effects indicators are expected to demonstrate high intercorrelations. Similarly, excluding an effects indicator should not substantially alter the meaning of the higher-order construct because all indicators share the same underlying construct. Thus, as a general guideline, when researchers are dealing with superordinate constructs, they should identify indicators that share a high degree of overlap and are conceptually interchangeable. Moreover, as loadings between indicators and the higher-order construct increase, the intercorrelations among indicators also increase, resulting in higher internal consistency reliability (Edwards 2011). Together, these characteristics suggest that inclusion criteria for indicators of superordinate constructs might be moderate to high cutoffs for factor loadings (e.g., λ ≥ 0.70) and high internal consistencies among the indicator scores (e.g., α ≥ 0.80).
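To make these empirical benchmarks concrete, the internal-consistency criterion can be checked with a few lines of code. The following Python sketch is our illustration, not part of the original article: it computes Cronbach's alpha for a matrix of indicator scores, using simulated data as a hypothetical stand-in, so the result can be compared against the suggested α ≥ 0.80 cutoff.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a set of indicator scores.

    items: (n_observations, k_indicators) array, one column per indicator.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: three effects indicators intercorrelated at r = .70,
# roughly what a superordinate construct would produce
rng = np.random.default_rng(1)
cov = [[1.0, 0.7, 0.7], [0.7, 1.0, 0.7], [0.7, 0.7, 1.0]]
scores = rng.multivariate_normal(mean=[0, 0, 0], cov=cov, size=500)

print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.87, above the .80 cutoff
```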
In contrast, aggregate constructs are the sum of the shared and unique variance in their indicators; thus, causal indicators are not interchangeable, and high intercorrelations among indicators are neither necessary nor desirable (Edwards 2011). For aggregate constructs, researchers should therefore identify indicators that tap into unique dimensions of the focal construct and are minimally redundant (Diamantopoulos and Siguaw 2006). Because indicators of aggregate constructs generally share less overlap than those of superordinate constructs, internal consistency among indicators is not a concern for aggregate constructs. Moreover, because indicators of aggregate constructs are ideally heterogeneous, factor loadings (as well as the meaning of the overall construct) are not easily interpreted (Edwards 2011). Thus, internal consistency and factor loadings are not appropriate for assessing aggregate constructs. Rather, the usefulness of aggregate constructs is contingent on their success at capturing the majority of variance in their set of indicators. The proportion of variance extracted by aggregate constructs can be calculated via the adequacy coefficient (Edwards 2001). An adequacy coefficient of 0.70, for example, means that the aggregate construct captures 70% of the variance in the set of causal indicators.
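To illustrate the computation, a minimal Python sketch of the adequacy coefficient follows, averaging the squared correlations between each causal indicator and the composite, in line with Edwards (2001). The overall-fairness composite and the simulated indicator data are hypothetical stand-ins.

```python
import numpy as np

def adequacy_coefficient(indicators: np.ndarray, composite: np.ndarray) -> float:
    """Average squared correlation between each causal indicator and the
    aggregate composite (Edwards, 2001). A value of .70 means the composite
    captures 70% of the variance in the indicator set."""
    r = np.array([np.corrcoef(indicators[:, j], composite)[0, 1]
                  for j in range(indicators.shape[1])])
    return float(np.mean(r ** 2))

# Hypothetical example: a unit-weighted overall-fairness composite built
# from distributive, procedural, and interactional justice scores
rng = np.random.default_rng(7)
X = rng.standard_normal((300, 3))
overall_fairness = X.mean(axis=1)

print(f"adequacy = {adequacy_coefficient(X, overall_fairness):.2f}")
```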
Based on the previous discussion, it is necessary for researchers to specify whether higher-order constructs are superordinate or aggregate ones because the nature of higher-order constructs influences the inclusion criteria that are used to identify indicators. Relations of higher-order constructs with their antecedents, correlates, and outcomes may also differ when constructs are modeled as superordinate versus aggregate ones (Johnson et al. in press-a). It is not enough, however, to merely state the nature of a higher-order construct; this belief ought to be treated as a hypothesis that is subsequently tested. Empirical tests are available for evaluating the appropriateness of higher-order constructs with effects indicators versus causal indicators, such as confirmatory tetrad analysis (Bollen and Ting 2000) and proportionality constraint analysis (Franke et al. 2008). We therefore encourage researchers to present both theoretical and empirical evidence in support of the nature of their higher-order construct.
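For readers unfamiliar with tetrads, the following Python sketch (ours, and purely descriptive) computes the three classic tetrad differences for every quartet of indicators from a covariance matrix. Under a one-factor model with effects indicators these differences vanish in the population; the formal chi-square machinery of Bollen and Ting (2000) is omitted here.

```python
import numpy as np
from itertools import combinations

def tetrad_differences(cov: np.ndarray):
    """Compute the three classic tetrad differences for every quartet of
    indicators. Under a one-factor model with effects indicators, all
    tetrads vanish in the population; the asymptotic chi-square test of
    Bollen and Ting (2000) is omitted from this descriptive sketch."""
    results = {}
    for g, h, i, j in combinations(range(cov.shape[0]), 4):
        results[(g, h, i, j)] = (
            cov[g, h] * cov[i, j] - cov[g, i] * cov[h, j],
            cov[g, i] * cov[j, h] - cov[g, j] * cov[i, h],
            cov[g, j] * cov[h, i] - cov[g, h] * cov[j, i],
        )
    return results

# Hypothetical one-factor covariance structure with loadings .8, .7, .6, .5
lam = np.array([0.8, 0.7, 0.6, 0.5])
cov = np.outer(lam, lam) + np.diag(1 - lam ** 2)

print(tetrad_differences(cov))  # all three tetrads are ~0 for this structure
```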

Ruling Out Alternative Explanations for the Emergence of Higher-Order Constructs
After empirically verifying the appropriateness of both the indicators and the nature of the higher-order construct, the next step is to rule out competing explanations for the emergence of higher-order factors. One possible explanation is common method variance (CMV), which refers to shared systematic error variance among the indicators owing to similar measurement methods and sources (Johnson et al. in press-b). Sources of CMV include the use of survey items with similar wording and response formats, socially desirable responding, and transient factors like mood (Podsakoff et al. 2003). CMV is a concern as it can inflate observed relationships among variables (e.g., Crampton and Wagner 1994; Williams et al. 1989), including relations among the indicators of higher-order constructs. This can lead to the erroneous inclusion of indicators for superordinate constructs or the exclusion of suitable indicators for aggregate constructs. For example, Johnson et al. (in press-b) found that CMV accounted for a significant amount of variance in the loadings of self-esteem, self-efficacy, neuroticism, and locus of control on the higher-order core self-evaluation construct. In fact, locus of control no longer loaded on core self-evaluation after CMV was partialed out.
Researchers must therefore take steps to minimize the influence of CMV in their data when examining higher-order constructs by using statistical and procedural controls (Conway and Lance 2010; Podsakoff et al. 2003). Statistical controls deal with CMV during the data analysis stage by, for example, controlling for a theoretically unrelated marker variable (Williams et al. 2010), or controlling for measured (e.g., social desirability) or unmeasured sources of CMV (Richardson et al. 2009). Conversely, procedural controls deal with CMV during the data collection stage by, for example, creating methodological separation among the measures of focal variables (e.g., using different response formats, including filler scales), or by collecting data at different times or from different sources (Harrison et al. 1996; Johnson et al. in press-b). Given that statistical controls address the symptoms of CMV whereas procedural controls prevent CMV from contaminating the data in the first place, the latter method is preferable (Conway and Lance 2010). Based on initial findings (Johnson et al. in press-b), measuring the indicators of higher-order constructs at different times appears to be an effective procedural strategy for ruling out CMV.
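As a minimal illustration of a statistical control, the sketch below applies a simple marker-variable partial-correlation adjustment, a far simpler cousin of the comprehensive CFA marker technique reviewed by Williams et al. (2010). The correlation values are hypothetical, and the function is our own illustration rather than a procedure prescribed by the article.

```python
def marker_adjusted_r(r_xy: float, r_marker: float) -> float:
    """Partial out the correlation attributable to a theoretically
    unrelated marker variable (taken as an estimate of CMV) from an
    observed correlation: (r_xy - r_m) / (1 - r_m). This is the simple
    partial-correlation adjustment, not the full CFA marker technique
    of Williams et al. (2010)."""
    return (r_xy - r_marker) / (1.0 - r_marker)

# Hypothetical: observed r = .45 between two self-report indicators,
# while the marker correlates .15 with the substantive measures
print(f"adjusted r = {marker_adjusted_r(0.45, 0.15):.2f}")  # 0.35
```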
Model specification errors, such as unmeasured variables that influence focal constructs (Kline et al. 2002), may also be responsible for inflating the shared variance among indicators of presumed higher-order constructs. Returning to the example of CSE, all of its trait indicators (viz., self-esteem, self-efficacy, neuroticism, and locus of control) reflect to some degree success in achievement domains (Johnson et al. 2008). Thus, general cognitive ability and other precursors of success may account for some of the overlapping variance among the indicators. It is therefore important to demonstrate that higher-order factors still emerge when possible confounding variables are simultaneously modeled. Determining what variables are potentially relevant and ought to be controlled for when modeling higher-order constructs requires a deductive or top-down approach.

123

J Bus Psychol (2011) 26:241248

Demonstrating the Incremental Prediction of Higher-Order Constructs


It has been argued that higher-order constructs may provide a more parsimonious solution than considering individual dimensions separately (Law et al. 1998). Beyond parsimony, however, it is necessary to provide evidence supporting the validity of such constructs (Edwards 2001). Criterion-related validity is particularly relevant, as it involves testing hypotheses pertaining to how a construct relates to other variables (Spector 1992). To support the criterion-related validity of higher-order constructs, researchers must not only provide evidence that the construct relates to other variables of interest, but also demonstrate that the construct explains variance equal to or greater than what is explained by its indicators (Edwards 2001). One way of doing so is via a usefulness analysis, which involves testing whether the higher-order construct accounts for variance in outcomes incremental to its indicators. This analysis can be run using hierarchical regression, where the outcome is regressed on the indicators in an initial step, followed by the higher-order construct in a second step. An analogous approach using structural equation modeling would involve specifying a model in which there is a direct path from the higher-order construct to the outcome while simultaneously modeling direct paths from each indicator to the outcome. If the path from the higher-order construct to the outcome is significant after controlling for the effects of its indicators, then there is utility in considering the construct.
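The hierarchical-regression form of the usefulness analysis is easy to script. The Python sketch below is our illustration, assuming the pandas/statsmodels stack and hypothetical column names; note that the higher-order construct score must come from a latent-variable model (or otherwise not be an exact linear combination of the indicators), or the second step would be perfectly collinear.

```python
import pandas as pd
import statsmodels.api as sm

def usefulness_analysis(df: pd.DataFrame, indicators, construct, outcome):
    """Step 1: regress the outcome on the indicators alone.
    Step 2: add the higher-order construct score and F-test the change
    in R-squared (the 'usefulness' of the higher-order construct)."""
    y = df[outcome]
    step1 = sm.OLS(y, sm.add_constant(df[indicators])).fit()
    step2 = sm.OLS(y, sm.add_constant(df[indicators + [construct]])).fit()
    f_stat, p_value, _ = step2.compare_f_test(step1)  # test of delta R^2
    return {"R2_step1": step1.rsquared, "R2_step2": step2.rsquared,
            "delta_F": f_stat, "p": p_value}

# Hypothetical usage: CSE factor score added over its four trait indicators
# result = usefulness_analysis(
#     df, ["self_esteem", "self_efficacy", "neuroticism", "locus_of_control"],
#     construct="cse_factor_score", outcome="job_satisfaction")
```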
In addition to a usefulness analysis, which provides evidence of incremental importance, researchers should also consider whether the contribution of the higher-order construct to the total R² for outcomes exceeds that of its indicators. This is referred to as relative importance, which can be assessed using dominance analysis or relative weights analysis (Johnson 2000; LeBreton et al. 2007). Incremental importance and relative importance are not synonymous because a higher-order construct may explain little unique variance in an outcome yet make a large overall contribution to total R² relative to its indicators. Recommendations concerning when and how to assess relative importance are provided by Tonidandel and LeBreton (2011).
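To give a computational flavor of relative weights analysis, here is a compact Python sketch of Johnson's (2000) procedure operating on the predictor correlation matrix and the predictor-criterion correlations. The input values are hypothetical, and the significance tests of Tonidandel et al. (2009) are not reproduced.

```python
import numpy as np

def relative_weights(Rxx: np.ndarray, rxy: np.ndarray) -> np.ndarray:
    """Johnson's (2000) relative weights. Rxx is the predictor correlation
    matrix and rxy the vector of predictor-criterion correlations. The raw
    weights sum to the model R-squared; each weight is one predictor's
    share of the explained variance."""
    vals, vecs = np.linalg.eigh(Rxx)
    sqrt_Rxx = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # Rxx^(1/2)
    beta = np.linalg.solve(sqrt_Rxx, rxy)               # Rxx^(-1/2) rxy
    return (sqrt_Rxx ** 2) @ (beta ** 2)

# Hypothetical two-predictor example
Rxx = np.array([[1.0, 0.4], [0.4, 1.0]])
rxy = np.array([0.5, 0.3])
raw = relative_weights(Rxx, rxy)

print(raw, raw.sum())  # per-predictor weights and the total R-squared
```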
Note that our recommendation for researchers to provide evidence supporting the criterion-related validity of higher-order constructs requires the use of indirect measures. Indirect measures involve measuring the indicators separately and using these data to derive people's scores on the higher-order construct. Thus, indirect measures provide information about both the indicators and the higher-order construct, which is needed to assess incremental importance and relative importance (as well as determining whether the focal higher-order construct is a superordinate or aggregate one). An added benefit to using indirect measures is that they enable researchers to examine the extent to which CMV and unmeasured third variables influence the loadings of indicators on their higher-order constructs, as was discussed in the previous section. It is crucial to assess CMV at the indicator level rather than the level of the higher-order construct because CMV likely has congeneric effects, meaning that its effects are asymmetrical across indicators (see Johnson et al. in press-b).

Applying Our Guidelines to a Specific Example


Listed in Table 1 are step-by-step recommendations for developing and validating higher-order multidimensional constructs. In this section, we apply these recommendations to a specific example, core self-evaluation (CSE; Judge et al. 1997), to illustrate how our guidelines play out. We selected CSE as our example because, despite being a popular multidimensional construct, some issues concerning its validity have been noted (Chen in press; Johnson et al. in press-b; Johnson et al. 2008; Schmitt 2004). Many of these issues pertain to the themes discussed in this article (e.g., lack of empirical inclusion criteria, failure to rule out alternative explanations for the emergence of the higher-order CSE factor), which makes CSE a fitting example.
The first two recommendations involve specifying theoretical criteria to identify candidate indicators of the higher-order construct and specifying the nature of the construct. According to CSE theory (Chang et al. in press; Judge et al. 1997), traits that are evaluative (i.e., they involve evaluations of the self), fundamental (i.e., they are relatively stable and central to the self-concept), and broad in scope (i.e., they extend to all life domains) qualify as indicators of the CSE construct. At least seven candidate traits satisfy these inclusion criteria: self-esteem, generalized self-efficacy, emotional stability (or conversely, neuroticism), locus of control, optimism, and positive and negative affectivity (see Johnson et al. 2008; Judge et al. 1997). Furthermore, CSE is believed to be a superordinate construct, one that causes a person's standing on its trait indicators (Judge et al. 2003).
While theoretical criteria are useful for suggesting possible indicators, testable criteria are needed to definitively establish the set of indicator variables (Johnson et al. in press-a). The next step, then, is to specify empirical criteria for indicators of CSE. Given that it is a superordinate construct, trait indicators should show high loadings on the CSE construct. Thus, one criterion might be a factor loading cutoff of 0.70, which ensures that each trait indicator shares approximately 50% of its variance with the higher-order factor. This can be tested using confirmatory factor analysis by specifying a higher-order CSE factor on which all seven candidate traits load. Only those traits whose loadings meet or exceed the 0.70 cutoff would be retained.
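A sketch of this confirmatory step follows, assuming the semopy package for SEM in Python and hypothetical column names for the seven trait scale scores; in a full analysis each trait would itself be a first-order latent factor measured by its own items rather than a single observed score.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical column names for the seven candidate trait scale scores,
# all loading on a single CSE factor (lavaan-style syntax)
spec = """
CSE =~ self_esteem + gen_self_efficacy + emotional_stability + locus_of_control + optimism + positive_affect + negative_affect
"""

df = pd.read_csv("cse_traits.csv")   # hypothetical data file, one row per person
model = Model(spec)
model.fit(df)

print(model.inspect(std_est=True))   # standardized loadings vs. the .70 cutoff
print(calc_stats(model))             # fit indices for the measurement model
```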
The set of trait indicators should also be unidimensional and have high internal consistency because CSE is believed to be the primary source of the traits. The criterion of unidimensionality can be assessed by verifying that the fit of the measurement model with the second-order CSE factor and the first-order traits is acceptable and superior to the fit of alternative models with two or more second-order factors (Gerbing et al. 1994). The criterion of high internal consistency can be evaluated by calculating the composite latent variable reliability (CLVR) of the second-order CSE factor (Anderson and Gerbing 1988). Acceptable values of CLVR are 0.70 or higher. If the criteria for unidimensionality and internal consistency are satisfied, then the set of trait indicators can be retained. However, if one or both of these criteria are not met, then additional steps are needed to identify and remove problematic indicators. For example, if a subset of indicators loaded on a non-CSE second-order factor, then some or all of these indicators may need to be dropped.
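Once the second-order loadings are estimated, the CLVR computation itself is simple. Below is a short Python sketch (our illustration, with hypothetical standardized loadings, so each error variance is one minus the squared loading) of the usual composite reliability formula.

```python
import numpy as np

def composite_reliability(loadings, error_vars) -> float:
    """Composite latent variable reliability (Anderson & Gerbing, 1988):
    (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    num = np.asarray(loadings, dtype=float).sum() ** 2
    return num / (num + np.asarray(error_vars, dtype=float).sum())

# Hypothetical standardized loadings of first-order traits on the CSE factor
lam = [0.78, 0.81, 0.74, 0.62]
errs = [1 - l ** 2 for l in lam]

print(f"CLVR = {composite_reliability(lam, errs):.2f}")  # ~0.83, above .70
```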
Owing to the dispositional nature of CSE, a final empirical inclusion criterion might be high test-retest reliability. CSE was initially proposed to account for the dispositional source of job satisfaction (Judge et al. 1997), which suggests that people's levels on the candidate traits are relatively stable over time. A moderate-to-high test-retest cutoff of 0.70 might therefore be adopted, and only those traits that meet or exceed this cutoff would be retained.

Table 1 Steps for developing and validating higher-order multidimensional constructs

1. Use theory to develop criteria that can be used to identify potential indicators
2. Specify the nature of the higher-order construct: is it a superordinate construct or an aggregate construct?
3. Specify empirical criteria that can be used to evaluate the status of candidate indicators
4. Collect data on candidate indicators and evaluate them against the empirical inclusion criteria from the previous step
5. Treat the presumed nature of the higher-order construct (from step 2) as a hypothesis and empirically test it
6. Rule out CMV as the cause of the shared variance among the indicators and the emergence of the higher-order factor. Procedural controls for CMV are preferred
7. Rule out unmeasured causal variables as the source of the shared variance among the indicators and the emergence of a higher-order factor
8. Assess the incremental importance of the higher-order construct via a usefulness analysis
9. Assess the relative importance of the higher-order construct via a dominance or relative weights analysis
10. Use indirect approaches (i.e., collect indicator-level data) in order to measure and model higher-order constructs
The fourth step is to collect data on the candidate trait indicators and evaluate them against the empirical inclusion criteria discussed above (i.e., factor loading cutoffs ≥ 0.70, acceptable fit of the second-order factor model, CLVR ≥ 0.70, test-retest reliability ≥ 0.70). The critical issue here is to collect data from an appropriate sample. In the case of CSE, a fundamental personality variable, it would be acceptable to collect data from any of the usual participant sources (e.g., college students, full-time workers, etc.). More targeted samples may be required when other higher-order constructs are considered, however. For example, data on work engagement ought to be collected from people who are currently employed, whereas data on destructive leadership personality ought to be collected from employees in supervisory roles. Note also that data may need to be collected at multiple times, as is needed to evaluate the test-retest reliability of CSE's trait indicators.
The fifth step is to test the fit of CSE as a superordinate construct with effects indicators and rule out the possibility that it is an aggregate construct with causal indicators. This is an important step because misspecifying superordinate constructs as aggregate ones (and vice versa) alters results (MacKenzie et al. 2005; Taing et al. 2010). In the case of our example, model fit can be compared when CSE is specified as a superordinate versus aggregate construct, with the expectation that model fit should be superior in the former case. In addition, confirmatory tetrad analyses (see Bollen and Ting 2000) can be used to verify CSE as a superordinate construct, whereas proportionality constraint analyses (see Franke et al. 2008) can be used to rule out the possibility that CSE is an aggregate construct.
Although the data collected in the previous step can be used to test the nature of CSE, we recommend that researchers collect additional data for this fifth step. Collecting additional data has the advantage of enabling researchers to cross-validate their earlier findings.
Steps 6 and 7 involve ruling out alternative explanations for the emergence of the higher-order CSE factor. One threat to the validity of higher-order factors is CMV (Johnson et al. in press-b). We would therefore collect data on the trait indicators after employing one or more controls for CMV, such as measuring each trait at a different point in time and controlling for possible measured sources of CMV (e.g., social desirability and negative affectivity). Consistent with Conway and Lance (2010), procedural controls for CMV (as opposed to statistical controls) are preferred (see also Podsakoff et al. 2003). Besides CMV, there may be other (unmeasured) variables that are responsible for the overlapping variance among the trait indicators. In the case of CSE, for example, people with high cognitive ability may develop high levels of self-esteem, generalized self-efficacy, internal locus of control, etc., as a result. It is therefore important to demonstrate that the higher-order CSE factor still emerges when cognitive ability is included as a covariate.
After controlling for CMV and possible confounding variables (e.g., cognitive ability), the measurement model that includes the higher-order CSE factor should be reevaluated on the basis of the empirical inclusion criteria from steps 3 and 4. It may be the case that one or more indicators no longer satisfy the cutoff criteria once the effects of CMV and other confounding variables are taken into account. For example, Johnson et al. (in press-b) observed that locus of control no longer loaded on the higher-order CSE factor when the trait indicators were measured at different times. Based on this finding, locus of control ought to be excluded as an indicator of CSE. Note that because steps 6 and 7 involve more sophisticated data collection procedures (e.g., collecting data at different times or from different sources and measuring additional variables), different samples from the ones used in previous steps are needed (and recommended).
The previous steps all dealt with assessing and refining the structural validity of the higher-order CSE construct. In contrast, the final two steps concern the predictive validity of CSE, which requires data on not only the trait indicators but also outcome variables. In the case of CSE, outcomes that might be examined include job satisfaction and approach and avoidance work motives (Chang et al. in press; Ferris et al. 2011). Once data are collected, establishing the predictive validity of higher-order multidimensional constructs involves two steps (Johnson et al. in press-a). First, it is necessary to demonstrate that CSE accounts for variance in outcomes incremental to its trait indicators (or what LeBreton et al. 2007, call incremental importance). This can be done using structural equation modeling by specifying paths from the higher-order construct to the outcomes as well as paths from the trait indicators to the outcomes. If the paths from CSE to the outcomes are significant in the presence of indicator-outcome paths, then CSE has incremental importance. Alternatively, incremental importance can also be assessed via regression analyses. To do so, a latent factor score for CSE must first be computed; then each outcome can be regressed on the set of trait indicators and the CSE factor score. Support for incremental importance is found if CSE is significant when the trait indicators are simultaneously modeled.


The second step to establish predictive validity is estimating the proportion of explained variance in outcomes that is attributable to CSE vis-à-vis its trait indicators (or what LeBreton et al. 2007, call relative importance). This can be done by conducting a relative weights analysis (see Tonidandel and LeBreton 2011). Note that relative weights analysis involves manifest variables; therefore, a latent factor score for CSE must be computed. After calculating the proportion of variance in outcomes attributable to CSE, it must then be tested whether or not this proportion is statistically significant (Tonidandel et al. 2009). Taken together, demonstrating both incremental importance and relative importance is necessary to conclude that CSE has predictive validity.

Conclusion
In sum, we believe that higher-order constructs offer a number of advantages that help overcome problems owing to bandwidth-fidelity tradeoffs and the jangle fallacy. However, whether or not to combine standalone indicators into a higher-order construct is a complicated issue, one requiring much theoretical consideration and empirical evidence. We encourage scholars to follow the steps in Table 1 to demonstrate the appropriateness and validity of higher-order constructs.

References
Ambrose, M. L., & Schminke, M. (2009). The role of overall justice judgments in organizational justice research: A test of mediation. Journal of Applied Psychology, 94, 491–500.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411–423.
Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110, 305–314.
Bollen, K. A., & Ting, K.-F. (2000). A tetrad test for causal indicators. Psychological Methods, 5, 3–22.
Chang, C.-H., Ferris, D. L., Johnson, R. E., Rosen, C. C., & Tan, J. A. (in press). Core self-evaluations: A review and evaluation of the literature. Journal of Management (2012 Review Issue).
Chen, G. (in press). Evaluating the core: Critical assessment of core self-evaluations theory. Journal of Organizational Behavior.
Conway, J. M., & Lance, C. E. (2010). What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25, 325–334.
Crampton, S. M., & Wagner, J. A., III (1994). Percept-percept inflation in microorganizational research: An investigation of prevalence and effect. Journal of Applied Psychology, 79, 67–76.
Diamantopoulos, A., & Siguaw, J. A. (2006). Formative versus reflective indicators in organizational measure development: A comparison and empirical illustration. British Journal of Management, 17, 263–282.
Digman, J. M. (1997). Higher order factors of the Big Five. Journal of Personality and Social Psychology, 73, 1246–1256.
Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: An integrative analytical framework. Organizational Research Methods, 4, 144–192.
Edwards, J. R. (2011). The fallacy of formative measurement. Organizational Research Methods, 14, 370–388.
Ferris, D. L., Rosen, C. C., Johnson, R. E., Brown, D. J., Risavy, S., & Heller, D. (2011). Approach or avoidance (or both?): Integrating core self-evaluations within an approach/avoidance framework. Personnel Psychology, 64, 137–161.
Franke, G. R., Preacher, K. J., & Rigdon, E. E. (2008). Proportional structural effects of formative indicators. Journal of Business Research, 61, 1229–1237.
Gerbing, D. W., Hamilton, J. G., & Freeman, E. B. (1994). A large-scale second-order structural equation model of the influence of management participation on organizational planning benefits. Journal of Management, 20, 859–885.
Goldberg, L. R. (1992). The development of markers for the big-five factor structure. Journal of Personality and Social Psychology, 59, 1216–1229.
Harrison, D. A., McLaughlin, M. E., & Coalter, T. M. (1996). Context, cognition, and common method variance: Psychometric and verbal protocol evidence. Organizational Behavior and Human Decision Processes, 68, 246–261.
Harrison, D. A., Newman, D. A., & Roth, P. L. (2006). How important are job attitudes? Meta-analytic comparisons of integrative behavioral outcomes and time sequences. Academy of Management Journal, 49, 305–325.
Jenkins, M., & Griffith, R. (2004). Using personality constructs to predict performance: Narrow or broad bandwidth. Journal of Business and Psychology, 19, 255–269.
Johnson, J. W. (2000). A heuristic method for estimating the relative weight of predictor variables in multiple regression. Multivariate Behavioral Research, 35, 1–19.
Johnson, R. E., Rosen, C. C., & Levy, P. E. (2008). Getting to the core of core self-evaluations: A review and recommendations. Journal of Organizational Behavior, 29, 391–413.
Johnson, R. E., Rosen, C. C., Chang, C.-H., Djurdjevic, E., & Taing, M. U. (in press-a). Recommendations for improving the construct clarity of higher-order multidimensional constructs. Human Resource Management Review.
Johnson, R. E., Rosen, C. C., & Djurdjevic, E. (in press-b). Assessing the impact of common method variance on higher-order multidimensional constructs. Journal of Applied Psychology.
Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction: A core evaluations approach. Research in Organizational Behavior, 19, 151–188.
Judge, T. A., Erez, A., Bono, J. E., & Thoresen, C. J. (2003). The core self-evaluations scale: Development of a measure. Personnel Psychology, 56, 303–331.
Kelley, T. L. (1927). Interpretation of educational measurements. Yonkers, New York: World Book Company.
Kline, T. J. B., Sulsky, L. M., & Rever-Moriyama, S. D. (2002). Common method variance and specification errors: A practical approach to detection. Journal of Psychology, 134, 401–421.
Law, K. S., Wong, C. S., & Mobley, W. H. (1998). Toward a taxonomy of multidimensional constructs. Academy of Management Review, 23, 741–755.
LeBreton, J. M., Hargis, M. B., Griepentrog, B., Oswald, F. L., & Ployhart, R. E. (2007). A multidimensional approach for evaluating variables in organizational research and practice. Personnel Psychology, 60, 475–498.
Macey, W. H., & Schneider, B. (2008). The meaning of employee engagement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 3–30.
MacKenzie, S. B., Podsakoff, P. M., & Jarvis, C. B. (2005). The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. Journal of Applied Psychology, 90, 710–730.
McCrae, R. R., & Costa, P. T., Jr. (1987). Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52, 81–90.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903.
Richardson, H. A., Simmering, M. J., & Sturman, M. C. (2009). A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance. Organizational Research Methods, 12, 762–800.
Rosen, C. C., Chang, C. H., Johnson, R. E., & Levy, P. E. (2009). Perceptions of the organizational context and psychological contract breach: Assessing competing perspectives. Organizational Behavior and Human Decision Processes, 108, 202–217.
Rushton, J. P., & Irwing, P. (2008). A general factor of personality (GFP) from two meta-analyses of the Big Five: Digman (1997) and Mount, Barrick, Scullen, and Rounds (2005). Personality and Individual Differences, 45, 679–683.
Schmitt, N. (2004). Beyond the Big Five: Increases in understanding and practical utility. Human Performance, 17, 349–359.
Spearman, C. (1904). General intelligence objectively determined and measured. American Journal of Psychology, 15, 201–293.
Spector, P. E. (1992). Summated rating scales. Thousand Oaks, CA: Sage.
Taing, M. U., Johnson, R. E., & Jackson, E. M. (2010, August). On the nature of core self-evaluation: A formative or reflective construct? Paper presented at the 70th Academy of Management Annual Meeting, Montreal, Quebec.
Tonidandel, S., & LeBreton, J. M. (2011). Relative importance analysis: A useful supplement to regression analysis. Journal of Business and Psychology, 26, 1–9.
Tonidandel, S., LeBreton, J. M., & Johnson, J. W. (2009). Statistical significance tests for relative weights. Psychological Methods, 14, 387–399.
Williams, L. J., Cote, J. A., & Buckley, M. R. (1989). Lack of method variance in self-reported affect and perceptions at work: Reality or artifact? Journal of Applied Psychology, 74, 462–468.
Williams, L. J., Hartman, N., & Cavazotte, F. (2010). Method variance and marker variables: A review and comprehensive CFA marker technique. Organizational Research Methods, 13, 477–514.
