
AMS Rev (2013) 3:3–17
DOI 10.1007/s13162-013-0033-1

The MIMIC model and formative variables: problems and solutions


Nick Lee & John W. Cadogan & Laura Chamberlain

Received: 28 September 2012 / Accepted: 5 January 2013 / Published online: 24 January 2013
© Academy of Marketing Science 2013

N. Lee (*) · L. Chamberlain
Marketing, Aston Business School, Aston University, Birmingham B4 7ET, UK
e-mail: n.j.lee@aston.ac.uk
L. Chamberlain e-mail: l.m.chamberlain1@aston.ac.uk

J. W. Cadogan
Marketing, Business School, Loughborough University, Loughborough LE11 3TU, UK
e-mail: j.w.cadogan@lboro.ac.uk

Abstract The use of the multiple indicators, multiple causes model to operationalize formative variables (the formative MIMIC model) is advocated in the methodological literature. Yet, contrary to popular belief, the formative MIMIC model does not provide a valid method of integrating formative variables into empirical studies, and we recommend discarding it from formative models. Our arguments rest on the following observations. First, much formative variable literature appears to conceptualize a causal structure between the formative variable and its indicators which can be tested or estimated. We demonstrate that this assumption is illogical, that a formative variable is simply a researcher-defined composite of sub-dimensions, and that such tests and estimates are unnecessary. Second, despite this, researchers often use the formative MIMIC model as a means to include formative variables in their models and to estimate the magnitude of linkages between formative variables and their indicators. However, the formative MIMIC model cannot provide this information, since it is simply a model in which a common factor is predicted by some exogenous variables; the model does not integrate within it a formative variable. Empirical results from such studies need reassessing, since their interpretation may lead to inaccurate theoretical insights and the development of untested recommendations to managers.
Finally, the use of the formative MIMIC model can foster fuzzy conceptualizations of variables, particularly since it can erroneously encourage the view that a single focal variable is measured with formative and reflective indicators. We explain these interlinked arguments in more detail and provide a set of recommendations for researchers to consider when dealing with formative variables.

Keywords Formative variables · Measurement · Composites · Indicators · Theory · Causality · Ontology · Philosophy

The idea that some observable indicators "cause" or "form" (the terminology is often used interchangeably, e.g. Diamantopoulos 2008; Diamantopoulos and Winklhofer 2001; Edwards and Bagozzi 2000; Hardin et al. 2011; Jarvis et al. 2003) a variable is now well-accepted in mainstream marketing research and is known as the formative variable approach. The formative approach is in sharp contrast to the reflective measurement model, which posits that observable indicators are "reflective effects of latent variables" (emphasis added, Howell et al. 2007a, p. 205). The formative model is not new and was explicit in the measurement literature as early as 1964 (Blalock, reprinted in 1972) and can be implied from at least 1948 (MacCorquodale and Meehl 1948; Rozeboom 1956). Although formative variable approaches are presented in various ways in the literature (e.g. Bagozzi and Fornell 1982; Blalock 1971, 1975; Bollen 1984, 1989; Bollen and Lennox 1991; Hayduk 1987, 1996; MacCallum and Browne 1993), the current acceptance in mainstream business research is driven, in part, by Diamantopoulos and Winklhofer's (2001) now seminal Journal of Marketing Research article, as well as by the publication of an entire special issue of the Journal of Business Research on formative measurement (Diamantopoulos 2008) and special sections of Psychological


Methods (Howell et al. 2007a; Bollen 2007; Bagozzi 2007) and MIS Quarterly on the same (e.g. Diamantopoulos 2011; Bollen 2011). In this context, formative variable approaches are increasingly used in marketing, management, and other business research fields (Edwards 2011; Diamantopoulos and Siguaw 2006; Hardin and Marcoulides 2011; Howell et al. 2007a). While an appreciation of approaches to variable construction other than the traditional reflective view of constructs has the potential to make a considerable contribution to the quality of marketing research (Rigdon et al. 2011), there are significant areas of disquiet around the formative model (e.g. Borsboom et al. 2003; Cadogan and Lee 2013; Edwards 2011; Franke et al. 2008; Howell et al. 2007a; Lee and Cadogan 2013; Wilcox et al. 2008). In particular, Hardin and Marcoulides (2011, p. 2) point out that the sum total of the growing body of research on formative variables is "a series of disjointed and contradictory messages that are not only confusing to consumers of this research but also could threaten the advancement of knowledge." Such concerns are perhaps summed up best by Hardin et al. (2011, p. 282) when they say, "the application of causal indicators as formative measures has become a panacea without a clear set of theoretical foundations underlying their use." One area of particular concern is the general acceptance of the use of the multiple indicators, multiple causes (MIMIC) model as a way of operationalizing formative models in typical applications (hereafter, the formative MIMIC model). Methodological literature (e.g. Jarvis et al. 2003; MacKenzie et al. 2005) recommends the use of the formative MIMIC model to solve the inherent problems involved in using formative variables in empirical work. Not surprisingly, it is well established in the applied literature (e.g. Bello et al. 2010; Ernst et al. 2011; Gregoire and Fisher 2008). However, the current study argues that the MIMIC model is not appropriate for modeling formative variables. Consequently, the findings of research studies that rely on MIMIC models to assist in the use of formative variables may need to be reassessed and reinterpreted. In order to demonstrate the problems of the formative MIMIC model, we present a conceptual discussion of formative variables that clarifies a number of problematic contradictions and assumptions contained in the existing formative variable literature. In doing so, we build on recent studies critical of formative models (e.g. Bagozzi 2007; Edwards 2011; Howell et al. 2007a; Wilcox et al. 2008; Hardin et al. 2011; Hardin and Marcoulides 2011). However, unlike some (cf. Edwards 2011; Hardin and Marcoulides 2011), we stop short of recommending the abandonment of formative models. Rather, we provide a set of directions and recommendations for applied researchers who wish to employ variables conceptualized as formative in their research. In what follows, we explain the principles underpinning the reflective and

formative models and extend the discussion to include the formative MIMIC modeling approach. Building on issues relating to (a) the nature of formative variables, and in particular, whether the formative variable represents a real entity that is distinct from its defining indicators, (b) the causal linkage (or lack thereof) between a formative variable's indicators and the formative variable itself, and (c) notions of interpretational confounding, we explain why the MIMIC model does not provide a valid method of integrating formative variables into empirical studies. We also discuss the issue of the conceptual compression of constructs that formative MIMIC models encourage researchers to engage in, arguing that theory advancement and the accumulation of knowledge will benefit from recognizing the potential problems that such compression can bring to theory development. We conclude by reflecting on the practical implications of our assessment of the MIMIC model's place in formative variable models and recommend several avenues for further research.

Reflective and formative models and the MIMIC model

Figure 1 illustrates both the formative and reflective models of latent variables as they commonly appear in organizational and measurement literature. Figure 1a depicts the reflective model and can be represented by the equation

$$x_i = \lambda_i \xi + \delta_i \qquad (1)$$

where the subscript i represents the ith indicator, $\delta_i$ refers to the random error (or unique variance) for the ith indicator, and $\lambda_i$ indicates the loading of the ith indicator on the latent variable $\xi$, which is exogenous. Thus, each observable indicator $x_i$ is a function of the common latent variable $\xi$ and some unique error $\delta_i$. As Borsboom (2005) shows, this model is customary in psychological measurement and generalizes to models such as item response theory, common factor analysis, confirmatory factor analysis, and classical test theory (Howell et al. 2007a). There are at least two conceptualizations of the formative variable model represented in general terms by Fig. 1b. The specification advocated in most of the literature on formative variables (e.g. Bollen and Lennox 1991; Diamantopoulos 2006; Diamantopoulos and Winklhofer 2001; Jarvis et al. 2003) is

$$\eta = \gamma_1 x_1 + \gamma_2 x_2 + \ldots + \gamma_n x_n + \zeta \qquad (2)$$

where the latent variable $\eta$ is endogenous and considered to be a function of some observable indicators $x_1$ to $x_n$, with $\gamma_i$ referring to the contribution of $x_i$ to $\eta$. An error term $\zeta$ is also included at the construct level. Error is not included at the item level in this formative model and, while interpretation of the construct-level error term has been subject to some debate (Diamantopoulos 2006), it is generally recognized as representing the remaining causes of $\eta$ that are not captured by $x_1$ to $x_n$. A second specification (Fig. 1c) is identical to Eq. (2) except that the formative variable (now labeled a composite variable, C, and represented with a hexagon; see Grace and Bollen 2008) is not missing any additional causes beyond $x_1$ to $x_n$ (i.e., $\zeta = 0$):

$$C = w_1 x_1 + w_2 x_2 + \ldots + w_n x_n \qquad (3)$$

Fig. 1 Alternative latent variable models. a The reflective variable model. b The formative variable model. c The composite variable model

Both Eqs. (2) and (3) share the property of being empirically inestimable without an additional endogenous variable or variables (Bagozzi 2007), meaning that the $\gamma$ and w values, and thus $\eta$ and C, are dependent on which specific endogenous variables are used to estimate the model (Heise 1972). The specifications of Eqs. (1), (2), and (3), as covered above, also unavoidably imply conceptual differences. Most importantly, Eq. (1) expresses how the indicators x vary as a consequence of the variation in the latent variable $\xi$. Therefore, the implication is that $\xi$ is of independent existence (i.e., it can exist without the indicators). Conversely, Borsboom et al. (2003) state that the formative variable $\eta$ is dependent on the specific formative indicators used in the model and, thus, that it is impossible to make any claims that $\eta$ represents an independent entity (see also Howell et al. 2007b). This is not necessarily a problem per se, but it does have substantive consequences for how a formative variable can be modeled and interpreted (Borsboom 2005; Howell et al. 2007a). What is potentially problematic is that when researchers consider the meaning of formative models, they may visualize a model such as that presented in Fig. 1b and, drawing on their experiences of working with classical test theory models of latent variables, conclude that the $\gamma$ parameters are paths that imply a causal relationship between the formative indicators and the formative variable. These parameters, the researcher assumes, need to be estimated empirically from data. However, models such as those presented in Fig. 1b and c cannot be estimated as they stand, simply because they are under-identified. This is where MIMIC models can be employed, since MIMIC models are quite easy to identify empirically. Specifically, formative MIMIC models as presented in Fig. 2 contain a focal latent variable ($\eta$), some reflective indicators (the ys), and some formative indicators (the xs), and so allow for the inclusion of formative variables when estimating conceptual models (Jarvis et al. 2003; Kline 2006). Indeed, the formative MIMIC modeling approach is also presented as a way of validating/developing formative measures.

Fig. 2 The formative MIMIC model
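To make the contrast between Eqs. (1) and (3) concrete, the following short simulation sketch (ours, not part of the original article; written in Python with NumPy, with loading and weight values chosen arbitrarily for illustration) shows that the reflective model describes a data-generating process whose parameters must be estimated, whereas a composite is simply computed from its indicators using researcher-chosen weights:

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Reflective model (Eq. 1): an exogenous latent variable generates each indicator,
# so the loadings and error variances are unknown parameters to be estimated from data.
xi = rng.normal(size=n)                          # latent variable (unobserved in practice)
loadings = np.array([0.8, 0.7, 0.6])             # illustrative lambda values
x_reflective = np.outer(xi, loadings) + rng.normal(scale=0.5, size=(n, 3))

# Composite specification (Eq. 3): C is defined as a weighted sum of observed
# indicators. The weights are chosen by the researcher, there is no error term,
# and nothing is left to estimate: C is nothing over and above the x's.
x = rng.normal(size=(n, 3))                      # three observed indicators
w = np.array([1.0, 1.0, 1.0])                    # researcher-defined weights
C = x @ w                                        # the composite simply is this sum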


Despite its mainstream acceptance, the MIMIC model's use in modeling formative variables raises difficult issues. First among these is the fact that the MIMIC model suffers from interpretational confounding (Burt 1976), which Howell et al. (2007b, p. 245) consider to render formative models "ambiguous, at best… [which] can be a fatal flaw in theory testing." In simple terms, interpretational confounding means that the meaning of the formative variable one is interested in studying is determined by the endogenous variables that the formative variable is meant to predict, not the exogenous items that are supposed to measure it (Burt 1976; see Howell et al. 2007a for a full explication). Beyond the latter issue, however, the use of the formative MIMIC model is flawed because a fundamental assumption is incorrect. Specifically, the notion that there is a causal structure between the formative indicators and the focal variable is incorrect. Rather, the loadings implied by Figs. 1b and 2 are not causal paths; they are simply weights that indicate the contribution that a researcher decides an indicator makes to a formative variable. Indeed, the formative indicators and the formative focal variable are not distinct entities; they are the same entity, and so one cannot cause the other. Accordingly, it is up to the researcher to specify the weight that a formative indicator has when defining the formative variable. There is no need to try to estimate these contributions, since they are part of the construct definition and cannot be inferred by statistical means. Furthermore, the use of the formative MIMIC model brings with it deep logical errors. Specifically, contrary to conventional understanding, a MIMIC model does not provide a method of measuring a single focal variable simultaneously using formative and reflective indicators. Rather, the MIMIC model simply models a reflective latent variable with some exogenous predictors. The following discussions explain, in depth, the reasoning behind these conclusions.

The nature of the latent variable: application to the formative model

While the idea of latent variables is elemental to measurement theory, it has long been subject to disagreement about whether a latent variable is a proxy for a real entity that is unobservable, a convenient fiction to help us make sense of observable events such as behavior, or a simple numerical operation (Borsboom et al. 2003; see, e.g., Bagozzi 2007; Bollen 2002; Borsboom 2005; Borsboom and Mellenbergh 2002; Edwards and Bagozzi 2000; MacCorquodale and Meehl 1948; Rozeboom 1956 for a brief picture of the development of perspectives on this issue). The situation is not helped by the plethora of different terms used in the literature (cf. Bollen 2002; e.g. Bagozzi 1982; DeVellis 1991; Nunnally 1967). In fact, Bollen (2002, p. 608) argues

that we should ignore this metaphysical dilemma, as it "narrows the use of the concept of latent variables and raises metaphysical debates… it seems preferable to leave the real or hypothetical nature of latent variables as an open question that may well be unanswerable." This seems a strangely equivocal position, and Borsboom et al. (2003, p. 204, emphasis in original) counter that latent variable theory is not philosophically neutral and that the connection between mathematical definitions of latent variables and empirical data requires an ontological stance. Borsboom (2005, p. 58) provides a very clear definition in this regard when he states that the only possible stance which is consistent with both accepted theory of latent variables and current practice is the view that the latent variable is a real entity which is assumed to exist independent of measurement. This is termed entity realism, in which the theoretical entities that we propose as components of our theories actually correspond to entities in reality. Borsboom et al. (2003) give the example of electrons as theoretical entities (being as they are unobservable), and perhaps the Higgs boson would be another topical example in more recent times. Entity realism would maintain that the theoretical entity of an electron actually corresponds to a real particle, albeit one that is unobservable with our present capabilities. Entity realism would therefore maintain the same correspondence between the modeled entities in marketing theories and actual entities that exist in reality. Despite this, literature advocating and using formative variables rarely provides an explicit discussion of the ontological stance taken by the researcher in terms of their view of the formative variables they are studying. However, a number of authors critical of the formative approach contend that formative models fail to support a realist interpretation (e.g. Edwards 2011; Wilcox et al. 2008). Borsboom et al. (2003, pp. 208–209) explain this clearly in relation to socioeconomic status (SES), using measures of income, education, and place of residence as indicators, and conclude that variables of the formative kind "are not conceptualized as determining our measurements, but as a summary of these measurements… [and as such] nowhere has it been shown that SES exists independent of the measurements." This is problematic in light of current theory and practice regarding formative variables, much of which seems to imply that formative variables (e.g. $\eta$ or C in Fig. 1b and c) are different from the formative indicators (the xs). A particular problem concerns the nature of the relationship between the items and the corresponding formative latent variable.

Causality and the formative variable

Detailed discussion of causality is rare in formative literature (see Bagozzi 2007; Edwards and Bagozzi 2000 for


notable exceptions). However, the causal implications of the formative variable model are simply not consistent with the way the formative model seems to be used in practice, as well as with the assumptions tacit in the formative literature, which appear to be drawn primarily from those of reflective measurement (Borsboom et al. 2003; Hardin and Marcoulides 2011). Despite causality being "basic to human thought" (Pearl 2000, p. 331; see also Blalock 1972; Scriven 1966), the philosophy literature has seen considerable debate regarding what it is for one variable or event to cause another (Sosa and Tooley 1993; Pearl 2000). Nevertheless, whichever philosophical definition of causality is used, one must define when causal inferences from empirical data can be made. Bollen (1989) provides three causality conditions that must be present to infer some form of causal relationship between X and Y. Briefly, a causal link from X to Y can be inferred from a data set if a) Y is isolated from all other possible causes other than X, b) there is an empirical association, R, observed between X and Y, and c) X can be determined to come before Y. However, one other condition is explicit in the philosophical discourse (Sosa and Tooley 1993), and that is that both cause and effect must be separate material entities. Similarly, Edwards and Bagozzi (2000, pp. 157–158; see also Wilcox et al. 2008) note that the latent variable must be distinct from its measure to infer causality in a measurement model. With an entity realist view, this condition is not problematic. Specifically, if entities exist in reality (e.g. electrons, the Higgs boson, attitudes, etc.), it is easy to see that measurement items are separate entities from the latent variable they are purported to measure. To illustrate this in a practical sense, consider a construct such as role ambiguity, which can be measured using indicators such as "I feel certain about how much authority I have in my selling position" (MacKenzie et al. 1998). It would be usual to conceptualize a response to such an item to vary as a function of the unobservable role ambiguity phenomenon; salespeople who feel less ambiguous about their jobs will feel more affirmation towards this item than those who are more ambiguous. As such, role ambiguity occurs first and causes the response to the item (Borsboom et al. 2003). This conceptualization is accurately modeled using a reflective variable model. Alternatively, consider a variable such as Advertising Expenditure, one of Diamantopoulos and Winklhofer's (2001, p. 275) examples of a formatively-modeled variable. The measure of this consists of four items, each tapping a type of expenditure: Television (TV), Radio, Newspaper, and All Media. Consistent with Diamantopoulos and Winklhofer (2001) and Fornell and Bookstein's (1982) definition of a formative approach, Advertising Expenditure is defined as a composite of four different types of expenditure that are tapped. But this also means that it cannot logically

be a phenomenon that exists independent of the indicators. TV, Radio, Newspaper, and All Media expenditure do not cause Advertising Expenditure; they are Advertising Expenditure. The formative Advertising Expenditure latent variable does not possess the property of being distinct from its indicators and has no meaning over and above the four formative indicators. This lack of distinction between the indicators and the construct is explicit in the conceptual definition of formative measurement (e.g. Diamantopoulos and Winklhofer 2001) and, as such, we cannot justify a causal relationship between the indicator and the outcome (Borsboom et al. 2003).
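To make this point concrete, a minimal illustration follows (ours, not the article's; the spending figures are invented). Advertising Expenditure is obtained by an arithmetic definition, not by estimating any causal path:

# Hypothetical yearly spend (in thousands; figures invented for illustration)
tv, radio, newspaper, all_media = 120.0, 35.0, 20.0, 15.0

# Advertising Expenditure is not caused by these figures; it is their sum by definition.
advertising_expenditure = tv + radio + newspaper + all_media      # 190.0

# There is no causal path or loading to estimate: changing a weight changes the
# definition of the variable itself, not an empirical estimate of anything.
advertising_expenditure_alt = tv + 0.5 * radio + newspaper + all_media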

Inherent problems with operationalizing formative latent variables

The inappropriateness of the MIMIC model for operationalizing formative latent variables can be effectively shown with an example. To return to Advertising Expenditure, recall that it is defined by four formative indicators: TV, Radio, Newspaper, and All Media. Imagine we have an industry-sourced data set that includes TV, Radio, and Newspaper expenditure, but nothing else. Figure 3a shows our initial conceptual model of Advertising Expenditure, which contains three observed formative indicators (TV, Radio, Newspaper) and one unobserved formative indicator (All Media) that is represented by the error term, $\zeta_1$ (cf. Diamantopoulos 2006). As it stands, this model is not testable due to lack of identification. Thus, many theorists (e.g., Diamantopoulos and Winklhofer 2001) suggest adding two or more reflective indicators of the construct (y1 and y2) to enable identification, creating a MIMIC model. In our example (see Fig. 3b), the reflective indicators chosen are Market Share and Brand Image, both of which are likely to be shaped by the various Advertising Expenditure indicators. Now we have the tools to identify the latent variable model and provide estimates, including one for the error term, and it might be assumed by some that the error term returned ($\zeta_2$ in Fig. 3b) corresponds to All Media, the unmeasured component of the Advertising Expenditure variable ($\zeta_1$ in Fig. 3a). This assumption is a mistake. While the diagrammatic representation of the formative model of Advertising Expenditure (Fig. 3a) appears to imply that the xs are causes of the Advertising Expenditure construct, as we have pointed out, this is not the case; the indicators are simply the expenses that need to be aggregated to calculate Advertising Expenditure. The Fig. 3a diagram is merely a convenient way of showing what indicators define the formative Advertising Expenditure variable and of visually differentiating the formative variable from the reflective variable. In practice, though, researchers do erroneously try to estimate the parameters in formative variable models and, in order to identify the model, will add endogenous

Fig. 3 Alternative advertising expenditure models. a Advertising expenditure as a formative variable: initial definition. b MIMIC model that is estimated. c Advertising expenditure as a formative variable: modified definition
variables to the formative model (e.g., Fig. 3b). Unfortunately, in such cases, the meaning of the focal latent variable, $\eta_2$, is not theoretically grounded in the formative items but is empirically grounded in the covariance between Market Share and Brand Image (see Howell et al. 2007a; Lee and Cadogan 2013; Treiblmaier et al. 2011). The empirically-tested latent variable, $\eta_2$ in Fig. 3b, is, therefore, not the same conceptually as the theoretically-grounded $\eta_1$ variable in Fig. 3a; it is simply the common factor (shared variance) underlying the two reflective variables. In this case, we have a serious problem. If we draw conclusions from our empirically-identified model (Fig. 3b), we may make the mistake of thinking they can be applied to our conceptual model (Fig. 3a). This is not the case (Lee and Cadogan 2013) and, if we make this mistake, we are in real danger of drawing erroneous conclusions. To reinforce this point, it should be clear that the theoretical meaning of Advertising Expenditure has no actual


impact on the empirical value of the error term resulting from our model (Lee and Cadogan 2013). Imagine that we change the definition of Advertising Expenditure to add a fifth formative component (see Fig. 3c), corresponding to advertising through Paid Word of Mouth, without collecting new data. We still only have access to TV, Radio, and Newspaper expenditure data, but now the error term has changed ($\zeta_3 \neq \zeta_1$) and has two components, All Media and Paid Word of Mouth. If the MIMIC model for Advertising Expenditure (Fig. 3b) provides an accurate representation of the theoretical Advertising Expenditure variable, Fig. 3b's error term ($\zeta_2$) should change to include both All Media and Paid Word of Mouth, accommodating the change in the conceptualization of Advertising Expenditure. However, as one can anticipate, simply changing Advertising Expenditure's formative composition, but not adding new data, will not result in new estimates if Fig. 3b is rerun. The estimates will of course be identical, including the magnitude of $\zeta_2$. This is because the value of $\zeta_2$ returned is not dependent on the conceptual meaning that researchers ascribe to Advertising Expenditure. Rather, the $\zeta_2$ error term just represents unexplained variance from trying to predict the common factor of Market Share and Brand Image using TV, Radio, and Newspaper expenditures as exogenous predictors. This explains why Borsboom et al. (2003) consider that a MIMIC model is not a formative latent variable model but is rather a reflective latent variable model (with all the associated entity realism foundations) predicted by a set of causes.
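The logic can be illustrated with a deliberately simplified numerical sketch (ours, not the article's analysis). It does not estimate a full MIMIC model; instead it proxies the common factor of the two reflective indicators with their average and uses ordinary least squares, and all variable names and coefficients are invented for illustration. Redefining the construct changes nothing, whereas changing the endogenous side changes everything:

import numpy as np

rng = np.random.default_rng(1)
n = 500

# Observed expenditures (the only formative indicators for which we have data)
tv, radio, news = rng.normal(size=(3, n))

# Reflective outcome indicators, assumed here to be shaped by the expenditures
market_share = 0.5 * tv + 0.3 * radio + 0.2 * news + rng.normal(size=n)
brand_image = 0.4 * tv + 0.4 * radio + 0.1 * news + rng.normal(size=n)

X = np.column_stack([tv, radio, news])

def fit(common_factor_proxy):
    """Regress a proxy for the common factor of the y's on the x's and
    return the weights plus the residual ('zeta-like') variance."""
    coefs, *_ = np.linalg.lstsq(X, common_factor_proxy, rcond=None)
    resid = common_factor_proxy - X @ coefs
    return coefs, resid.var()

# Estimate the model once...
coefs_1, zeta_1 = fit((market_share + brand_image) / 2)

# ...then "redefine" Advertising Expenditure to also cover Paid Word of Mouth.
# No new data enters the estimation, so nothing can change:
coefs_2, zeta_2 = fit((market_share + brand_image) / 2)
assert np.allclose(coefs_1, coefs_2) and np.isclose(zeta_1, zeta_2)

# By contrast, swapping the endogenous (reflective) side changes the estimates,
# because they are grounded in the y's, not in the construct definition:
coefs_3, zeta_3 = fit(brand_image)
print(coefs_1, zeta_1)
print(coefs_3, zeta_3)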

Theoretical directions for formative latent variable models: divorcing causal terminology from formative models

Perhaps one reason the formative MIMIC model is so prevalent is the lack of clarity within the formative literature, with some authors using the term "formative" (e.g. Diamantopoulos and Winklhofer 2001; Fornell and Bookstein 1982) and others using the term "causal" to refer to what seems to be the same type of variable (e.g. Bollen 1989; Bollen and Lennox 1991; MacCallum and Browne 1993; see also Blalock 1972). Edwards and Bagozzi (2000, p. 162) go so far as to state that the formative model "specifies measures as correlated causes of a construct," while Baxter (2009, p. 1370) writes that "in formative scales… the indicators are independent causes of the construct being measured." Likewise, the literature critical of formative models is not exempt from this confusion (e.g. Hardin et al. 2011). Yet, as we know, there is a tautology in considering formative indicators as causes of their latent variable (rather, the formative items define their variable; they simply are the variable) and, as such, the terms "formative" and "causal" cannot logically be interchangeable.

A problem that some have identified is working out whether a formative indicator is really a formative indicator (i.e., a defining component of a variable) or rather if it is an exogenous cause of a variable (Cadogan and Lee 2013). To illustrate, we borrow Diamantopoulos and Winklhofer's (2001) example of Perceived Coercive Power. This latter latent variable is measured using six items, each referring to a buyer's perception of a supplier's ability to take a different type of action (Gaski and Nevin 1985), such as delay delivery or refuse to sell. It seems clear that these items are external causes of a distinct phenomenon of Perceived Coercive Power as experienced by the buyer rather than integral components of it. Thus, these measurement items are not necessarily providing information on whether the buyer really perceives the supplier to have coercive power. They are merely potential causes of such a perception. The buyer's perception of whether the supplier has a lot or a little coercive power is less complicated and is a simple unidimensional perception in this regard (one that could be assessed using reflective items). Thus, the relationship between the six items and the notion of buyers' perceptions of suppliers' coercive power satisfies a causal interpretation, since the phenomenon is independent of its indicators, and one would expect the indicators to occur prior to their effects. Furthermore, the variable Perceived Coercive Power satisfies an entity-realist standpoint. Interestingly, Gaski and Nevin's (1985) items could be argued to be formative components of a different variable, perhaps one capturing the supplier's Access to Coercive Tools, as opposed to causal indicators of the buyer's perceptions of the Coercive Power of their suppliers. Of course, these different variables may have separate antecedents and consequences. By way of illustration, Table 1 presents the two alternative variables and the three possible models discussed here. The previous discussion shows how conceptualizing a formative variable unavoidably defines it as nothing more than its indicators. It is important to realize that this does not run counter to statements such as that by Wilcox et al. (2008, p. 1220) that "constructs themselves, posited under a realist philosophy of science… are neither formative nor reflective." Rather, defining a variable as formative places it outside the entity realist stance (Borsboom et al. 2003), by defining it as representing nothing more than its indicators. In other words, the constructs discussed by Wilcox et al. (2008) and others could never be formative as they represent real entities, independent of their measurement. Once one posits an entity independent of the indicators, one is dealing with a different construct and, therefore, can include this construct in either reflective or causal models, as shown in Table 1. However, the construct cannot simultaneously be conceptualized or modeled as formative.


Table 1 Illustration of different latent variable types

Construct: Perceived coercive power (see Gaski and Nevin 1985)
Definition: How much capability the supplier has to get the buyer to do what they would not have done otherwise, by way of punishment (see Gaski and Nevin 1985)
Unit of analysis: Buyer (buyer's perception of supplier's ability)
Items (0 = No capability, 4 = Very much capability): 1. Delay delivery; 2. Delay warranty claims; 3. Take legal action; 4. Refuse to sell; 5. Charge high prices; 6. Deliver unwanted products
Relationship of items to latent variable: Items cause the latent variable to occur, i.e. the buyer's perception of the supplier's ability causes the buyer to perceive the supplier to have high coercive power. Whether the items are measures is questionable.
Ontological status of latent variable: Latent variable is a hypothetical real entity.

Construct: Access to coercive tools
Definition: Ability of supplier to take different actions in their dealings with a given firm
Unit of analysis: Supplier (e.g. supplier's rating of their own ability)
Items (0 = No capability, 4 = Very much capability): 1. Delay delivery; 2. Delay warranty claims; 3. Take legal action; 4. Refuse to sell; 5. Charge high prices; 6. Deliver unwanted products
Relationship of items to latent variable: Items form the latent variable, i.e. the supplier's coercive power is a composite of their ability to take each action.
Ontological status of latent variable: Latent variable is an abstraction of the empirical data.

Construct: Perceived coercive power (see Gaski and Nevin 1985)
Definition: How much capability the supplier has to get the buyer to do what they would not have done otherwise, by way of punishment (see Gaski and Nevin 1985)
Unit of analysis: Buyer (buyer's perception of supplier)
Items (1 = Strongly disagree, 5 = Strongly agree): 1. I believe this supplier has the ability to force my firm to go along with their plans whether we want to or not; 2. Managers at my firm do almost anything this supplier wants, even when we don't want to; 3. When this supplier says jump, we ask how high?; 4. We are afraid of the ability this supplier has to hurt us if we don't go along with their plans; (etc.)
Relationship of items to latent variable: Items are caused by the latent variable, i.e. the coercive power of the supplier influences the level of all the items.
Ontological status of latent variable: Latent variable is a hypothetical real entity.

Parts of this table adapted from Diamantopoulos and Winklhofer (2001)


Bollen and Bauldry (2011, p. 266; see also Bollen 2011) also distinguish between causal, effect (their term for reflective), and composite (their term for formative) models, stating that the different types of variables "have distinct theoretical interpretations, and careful consideration of the differences among them may inform the substantive understanding of the object under study." Bollen is not the first to make this distinction. Over 60 years ago, MacCorquodale and Meehl (1948, p. 103) conceptualized a distinction between abstractive and hypothetical variables which is essentially the same thing. An abstractive variable is defined as "simply a quantity obtained by a specified manipulation of the values of empirical variables; it will involve no hypothesis as to the existence of nonobserved entities." Such a variable does not represent a nonobservable phenomenon and is defined by its observable indicators (as in the Advertising Expenditure example) and is therefore appropriately constructed as a formative composite (e.g. Bollen 2011). A hypothetical variable, on the other hand, is defined as referring to "processes or entities that are not directly observed (although they need not be in principle unobservable)… the truth of the empirical laws is a necessary but not a sufficient condition for the truth of these conceptions" (MacCorquodale and Meehl 1948, p. 104). Clarifying which type of variable one is dealing with, for all variables in a given model, is vitally important to the validity of the model's representation of the underlying theory. However, such decisions can conceal traps for the unwary researcher.

Clarifying the constructs implied in a formative MIMIC model

Despite the logic of the above discussion, researchers in applied settings sometimes make the error of assuming that a MIMIC model allows them to essentially do two things at the same time; that is, to measure a single construct in two ways, both formatively and reflectively. Unfortunately, this is not the case, and the formative and reflective models imply two separate constructs. For example, Cadogan et al. (2008) provide an example of a situation where researchers have assumed that the focal construct in a MIMIC model ($\eta$ in Fig. 2) can be measured using both formative indicators (the xs) and reflective indicators (the ys). For instance, one of their focal constructs ($\eta_1$) is the quality of the firm's market information generation process, and the formative approach to its assessment involves measuring direct indicators of information generation quality, such as the speed of information generation, the quantity of information generated, etc., while the reflective measures ask respondents about the effectiveness and efficiency of the generation process and the firm's satisfaction with the generation process (see Fig. 4a).

Upon rumination, however, it seems obvious that assuming that the formative and reflective items measure the same thing is flawed. On the one hand, we have shown that the aggregate of the measured and unmeasured xs in a MIMIC model is not the same thing as the common factor underpinning the ys; this is evidenced by the fact that the formative variable in a MIMIC model can change (e.g., by changing the number of and/or breadth of conceptual coverage of the xs), but the conceptual meaning of the reflective construct measured by the ys will remain the same. As such, when the MIMIC model is used in formative variable models, the single focal construct might be better thought of as being two variables (see Fig. 4b): on the one hand, there is the aggregate of the formative indicators ($\eta_1$), the formative variable the researcher wishes to assess with the xs, and, on the other hand, there is the unidimensional reflective construct ($\eta_2$) that is the common factor underpinning the ys. In Cadogan et al.'s (2008) case, for instance, the $\eta_1$ construct (quality of the firm's market information generation process) could be considered different from, and perhaps causally linked to, the reflective construct ($\eta_2$) capturing managers' overall assessment of whether their firm's market information generation processes are of high or low quality. It is plain to see that $\eta_1$ and $\eta_2$ are different things; although they may be related (in the sense that there may be some kind of causal relationship between the two variables), they should not be confused as being the same variable. Indeed, even if $\eta_1$ is a cause of $\eta_2$ (as depicted in Fig. 4b), it is only one of several causes of $\eta_2$, indicating that $\eta_1$ and $\eta_2$ may have very different causal relationships with other variables and may not be good proxies for each other (certainly not good enough to act as surrogates for each other in causal models). Giving different constructs the same name is confusing and potentially dangerous for scientific progress. This is shown in the case of socio-economic status (SES), which is a single label that has been applied to multiple different variables. For example, Blalock (1975) conceptualizes SES in two ways. First, Blalock (1975, p. 365) defines what he calls the usual notion of socio-economic status as being an internal evaluation of others by a judge, where "objective properties… in part determine the subjective evaluations which confer status." SES here is a hypothetical entity, possibly influenced by causal indicators (interestingly, this is clearly not the usual notion of SES as it has developed since). Later, Blalock (1975, p. 365) conceptualizes a different construct as "the combination of positions or group or category memberships that [a person] occupies." Blalock (1975) also labels this second construct as SES, even though it is clearly a different variable from the initial usual notion of SES. In fact, Blalock's (1975) second definition appears consistent with an abstractive conceptualization, using a

Fig. 4 Multiple variables underpinning the MIMIC model. a The focal formative variable, $\eta_1$, in a formative MIMIC model. b The focal formative variable, $\eta_1$, and a conceptually distinct entity, $\eta_2$. c $\eta_1$ and $\eta_2$ in a causal model
composite of formative indicators (ironically, it appears this latter conception has become the usual notion of SES since Blalock's work). This is not a simple semantic difference. In fact, each conceptualization of SES refers to a different concept, despite the common name, and this difference has important implications for any associated theoretical models and empirical operationalizations. Wilcox et al. (2008) point out how reviews of the SES literature have shown inconsistencies that have led to non-comparable results across studies, which is a serious issue for scientific progress. Note that this is not necessarily due to inherent problems with the different variable models, but one of a) inconsistent conceptualization in the first place and b) inadequate information presented in studies about how that construct was conceptualized. Clear information in this regard would have allowed researchers to decide which type of SES they wished to incorporate in their theories, and forced them to justify why, allowing future work to build more consistently. In light of the above, one can sympathize with Howell et al.'s (2007a, 2007b; see also Wilcox et al. 2008) contention that formative indicators are not measures at all. Instead, abstractive (i.e., formative composite) variables are used in models because they provide some shorthand or convenient summary of the empirical data which is useful for some purpose, rather than to measure some underlying hypothetical entity. With the above in mind, one could therefore term this process of using formative variables "constructing a variable" rather than "measuring a construct." It can be argued that such abstractive formative composites are useful to the extent that they provide clear and consistent definitions and quantitative


operationalizations (see Lee and Cadogan 2013). In fact, such approaches are common in clinical research settings where, among other things, they are used to diagnose various conditions and assess key indicators of health (e.g., Torrance et al. 1996). Correspondingly, in a marketing context, there is surely no major problem with using a formative composite such as Advertising Expenditure as long as we are clear on how the composite is operationalized.

The practical implications of the clarification of the formative MIMIC model

Our clarification of the formative MIMIC model, as discussed above, is a useful one in a number of ways. In particular, it forces the researcher to think more clearly about exactly what their variables represent. Of relevance here is Bollen's (2011; Bollen and Bauldry 2011) aforementioned work on different types of indicators. Specifically, Bollen and Bauldry (2011) differentiate three different kinds of indicator: composite (i.e. formative), effect (i.e. reflective), and a new kind of indicator, the causal indicator. The distinction between causal and composite (formative) indicators appears to be founded on the idea that causal indicators have conceptual unity, whereas composite indicators can be an arbitrary combination of variables. However, Bollen and Bauldry (2011) state that the distinctions between these indicators can be blurred and that theoretical framing of the analysis is the most important way of determining the different variable types. Unfortunately, Bollen's (2002, 2011; Bollen and Bauldry 2011) work appears to place the definitional focus on the indicators rather than the latent variable and does not take an explicit ontological stance on the nature of latent variables. It is not clear whether, in a causal indicator model, the indicators (i.e., the xs) are separate entities from the focal variable ($\eta$), nor whether the variable is a real entity. Yet, if the ontological status of the focal variable is clarified (i.e., if the researcher specifies whether the variable is a real entity or not, and whether the xs are separate entities from the variable), it is hard to see where the difficulties in distinguishing the different types of indicators that are cited by Bollen and Bauldry (2011) could lie. In particular, invoking an entity-realism perspective for a given variable and a set of xs immediately rules out a formative (composite) approach and, as a result, any x identified must be an exogenous cause of $\eta$, or a covariate. However, if $\eta$ does not exist as a separate entity from the xs, then the model is a composite (formative) one. Of course, if one does clarify that $\eta$ does exist as a separate entity, this begs the question of why the causal items are necessary, and whether one is not better served by simply using the reflective model to develop a measure. In fact, it seems likely that the causal variable model offered

by Bollen and Bauldry (2011) would rarely offer any advantages over a reflective model, apart from the situation in which it is impossible to develop a reflective measurement model. Such situations could arise when one only has limited access to data or is using secondary data, where there are no logical reflective measures of the construct. Of course, this would bring up the potential for interpretational confounding again, and there is always the inherent question regarding the validity of proxy measures of this kind. When it comes to the question of how to model formative indicators without using a MIMIC model, Diamantopoulos et al. (2008) give some alternatives, some of which involve replacing formative indicators of the construct with endogenous reflectively-measured latent variables. Alternatively, Cadogan et al. (2008) offer the aforementioned example of modeling the quality of market information generation as a formative construct, taking into account complex relationships between the indicators and construct (e.g. interactions between indicators), an idea implied also by Blalock (1982). None of these approaches solves the conceptual problems raised presently, that the formative latent variable is not an independent entity. However, Howell et al. (2007a, pp. 214–215; see also Wilcox et al. 2008) suggest a number of other options for dealing with sets of formative indicators, including principal components analysis, weighted averages, and partial least squares. Ultimately, they conclude that modeling formative indicators as separate constructs is the most reasonable course of action. Bollen and Bauldry (2011, p. 268) suggest the use of the model shown in Fig. 5 for composite variables. This model shows the use of a separate composite for each outcome variable (yi) and avoids the assumption that a single formative composite completely mediates the entire effect of the indicators (the xs) on all of the outcomes (ys). The model also allows different weights to be used for each composite, optimizing the predictive ability of the model for each outcome. The literature on clinical measurement also offers at least one interesting alternative, given that it has dealt with similar issues for many years (Fayers and Hand 2002; e.g. Torrance et al. 1996). Indeed, the statement that "most symptoms… can combine and interact in various ways, and we are confronted by the problem of how to use them in a single

Fig. 5 Formative composite model (Bollen and Bauldry 2011, p. 268)


model" (Fayers and Hand 2002, p. 239) sums up exactly the problem outlined above concerning formative indicators. Fayers and Hand (2002, p. 246) come to the same conclusion as Howell et al. (2007a), that both simple summation and weighted sums are less easy to justify, and suggest a method based on the maximum of the observed variables, citing a model proposed by Blalock (1982) in which y (the value of the composite/formative variable of interest) is defined as a function of a number of x indicators as follows:

$$\log(1-y) = \sum_j b_j \log(1-x_j) \qquad (4)$$

Here, the value of y is high if any one of the x variables is large, which is likely to be more appropriate than a simple sum in many situations. However, the choice of the weights (the $b_j$) in this equation is vital and provides a significant part of the definition of the concept itself. Yet, as Fayers and Hand (2002, p. 247) state, "this is not a data analytic problem, but one of definition." In other words, the researcher must in some way define these weights and not leave it to the data analysis process, just as some commentators have suggested regarding the formative composite model (Lee and Cadogan 2013). There are various ways to engage with this problem. For example, one could ask members of the population to be measured to give their opinions of the importance of each component (e.g., through a preliminary survey), or academic expert analysis could be used. These methods share much with Rossiter's (2002, 2011) thoughts on content validity. Ryan and Farrar (2000) also suggest that conjoint analysis may be used to provide information on the relative values of different attributes. The information could then be used to weight various items using Eq. (4). All of these approaches are fundamentally different from those embedded in marketing research at present, which rely on the empirical data to either test hypotheses about the measure (reflective models, where such an approach is justifiable) or to define the variable itself (the current perspective on formative models, which can be approximated by a partial least squares approach). Health economics also offers the multiattribute utility function approach (e.g. Feeney 2006), which Fayers and Hand (2002) note is broadly analogous to Eq. (4), but more complex and flexible. In brief, this approach is in two stages. Subjects of interest fill in a standard questionnaire of the relevant items. An index is then computed from these scores using a multiattribute utility function (MAUF). In essence, this function weights the different items and combines them to create a single score, and the form of the function defines the interaction between the items, be it linear, multiplicative, or multi-linear (Feeney 2006). The function must be derived from a prior survey of the population to determine preference scores for each of the items.

The more complex the function and the larger the set of items, the more difficult and demanding this is; generally, around seven attributes are considered to be the maximum (Feeney 2006). Such approaches appear to be challenging and time consuming in the marketing context but may be appropriate in many situations. For example, service quality looks to be a concept which is ideally suited to such an approach. Nevertheless, in many situations where formative indicators are used in organizational research, complex approaches such as Eq. (4) or the MAUF would seem unnecessary. For example, the advertising expenditure example would seem to refer to a situation where a simple sum is appropriate. In most situations, given the definition used here, advertising expenditure is neither complex nor abstract; it is simply an amount of money, a concrete variable (Rossiter 2002). Thus, if no more detailed conceptualization is provided, the value of advertising expenditure is the simple sum of the indicators. Many of the alternative approaches to the treatment of formative indicators discussed above share the standpoint that it is fundamentally the researcher, sometimes using information from the research population, who determines the nature of a formative variable by conceptualizing its formative components and their relationship to the single variable score. In other words, when constructing a formative variable, the researcher is making a statement that "these components form this composite in this manner." Allowing the data to determine the nature of a formative composite latent variable in a flexible manner, as estimating a formative MIMIC model or even using partial least squares does (as in some formative literature, e.g. Diamantopoulos and Winklhofer 2001; Edwards and Bagozzi 2000), is fundamentally unsound (Lee and Cadogan 2013). Also bear in mind that if different researchers use different weights to create a formative variable score, even if they may use the same formative indicators and the same data, then the variables they create are not equivalent and cannot be compared or treated as though they have the same conceptual or empirical meaning. We would advocate that researchers who wish to employ variables conceptualized as formative in their research models avoid using the MIMIC model to operationalize formative variables. In fact, the MIMIC model is not needed if researchers follow the guidelines below:

(a) Use predefined weights for formative indicators that are explicitly part of the construct definition.
(b) Specify the weights using some explicit prior theory (e.g., the weights are all the same) or use some empirical method to determine the weights (e.g., a survey of key informants, Delphi method, or utility function methods).


(c) Use these weights to create a single composite score for the formative variable, using a standard algorithm that is also explicitly part of the construct definition.
(d) Use the single composite score to test theoretical models (e.g., to identify patterns of covariance between the composite score and other variables). The analysis method in this case could be anything capable of dealing with quantitative measurements (an illustrative sketch of steps (a) to (d) follows this list).
(e) Critically, bear in mind that the composite score is not a measure of a single real entity, and so any observed covariance between the formative composite and another variable runs the risk of being rather uninformative (see Fig. 6a), since the covariance may hide the true relationships between the indicators that create the composite score and the other variable (see Fig. 6b).
(f) Accordingly, theory development will be greatly enhanced if the formative indicators are modeled in an explicit causal model, whereby the potential interrelationships between the indicators themselves, and between the individual indicators and other variables in the model, are specified.
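The following minimal sketch (ours; the weights, simulated data, and the 0–1 rescaling used for the Eq. (4)-style combination are assumptions made purely for demonstration) illustrates steps (a) to (d): a composite is constructed from researcher-defined weights and then related to another variable.

import numpy as np

rng = np.random.default_rng(2)
n = 300

# (a)/(b) Weights fixed in advance as part of the construct definition, e.g. from
# theory, expert judgement, or a prior key-informant survey (values invented here).
weights = {"tv": 1.0, "radio": 1.0, "newspaper": 1.0}

# Simulated indicator data standing in for real observations
data = {k: np.abs(rng.normal(loc=100.0, scale=30.0, size=n)) for k in weights}

# (c) A single composite score computed with a standard, pre-specified algorithm
composite = sum(weights[k] * data[k] for k in weights)

# An alternative combination rule in the spirit of Eq. (4) (Blalock 1982; Fayers and
# Hand 2002): the composite is high if any single rescaled indicator is high. The
# rescaling to the 0-1 range is our own simplification for illustration.
rescaled = {k: (v - v.min()) / (v.max() - v.min() + 1e-9) for k, v in data.items()}
b = {k: 1.0 for k in rescaled}                      # researcher-defined b_j weights
y = 1 - np.exp(sum(b[k] * np.log(1 - 0.999 * rescaled[k]) for k in rescaled))

# (d) The composite score can then be related to other variables with any standard
# technique, e.g. a simple correlation with some outcome of interest.
outcome = rng.normal(size=n)                        # placeholder outcome variable
print(np.corrcoef(composite, outcome)[0, 1])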

Conclusions and questions for further research

This article addresses the problematic use of the MIMIC modeling approach when applied to formative variables and offers some possible solutions. We demonstrate that the MIMIC model is not doing its intended job, since it does not provide
Fig. 6 Alternative models of the relationship between items and constructs. a A formative composite variable, C, and its covariance with another variable: true relationships hidden. b Possible inter-relationships between the xs, and relationships between the xs and the other variable

a way for researchers to operationalize formative variables in empirical research models. We explain why this is so by drawing on various arguments, including those that look at the nature of the causal relationship between the formative indicators and the formative focal variable and by invoking the entity-realist position, which we show does not hold for formative variables. Formative latent variables are defined as abstractive composites of empirical variables, with no independent meaning of their own, and as such the idea of a causal relationship between an indicator and a formative variable is shown to be tautological. Ultimately, the onus is on the researcher, not the data, to define formatively-constructed latent variables by way of various modeling approaches such as the multiattribute utility function. We also highlight the dangers in visualizing formative variables using the MIMIC model approach; in some cases, it can encourage researchers to develop fuzzy or confused conceptual understanding of their variables. The arguments advanced in this paper have implications for marketing researchers who wish to use formative variable models in their work. First of all, researchers should avoid using the MIMIC model when they are dealing with formative variables. However, this decision will naturally require researchers to seriously consider whether the entity-realist position is appropriate for the variables they are dealing with. This fundamental conceptual task is likely to lead to some interesting conclusions regarding many of our foundational marketing variables, as well as when conceptualizing new ones. Further, when using formative composites, researchers will have to be very clear in their definitions of such variables in order that they



are to be of use across different contexts and studies. Formative modelers must also take care to understand explicitly the match between their conceptual definitions and their operational decisions (e.g., setting the beta weights when using Eq. [4]; an illustrative sketch follows at the end of this section), and the ideas presented herein should help many researchers begin to do so.

Borsboom et al. (2004) are right to point out the dangers of treating collections of distinct attributes as if they were actual real entities (by naming them and using them as constructs in models); yet Howell et al. (2007a) suggest that this may be justifiable if one can show that the component parts of the formative variable exhibit similar relationships to antecedents and consequences. Most important is that a formative variable should be constructed for a sound reason, over and above using the individual components. This reason should not be that it represents an entity separate from its component items (because it has been shown that, in such a case, it cannot by definition be formative), but that using a single variable makes logical sense and provides an important benefit to the researcher, outweighing any advantages one might gain by testing the individual components separately. The benefits of a single variable can include prediction, description, parsimony, interpretability, and thus the possible increased practical influence of a model. However, these benefits should always be balanced against the potential obscuring of individually important relationships. Even so, more work is clearly needed in this regard.

Finally, it should be clear that this paper does not argue that formative models are somehow inferior to reflective models. Instead, the formative model should be seen as a different way of thinking about a variable, one which may be completely appropriate in many situations. It seems that the theoretical idea of a latent variable as an abstraction of its empirical indicators (cf. MacCorquodale and Meehl 1948) has been generally ignored, even while the discussion of formative indicators has essentially implied exactly this type of latent variable. In part, this may be due to what Hayduk (1987, 1996; see also Rossiter 2002) refers to as the "entrenched factor analytic thought patterns" inherent in reflective measurement theories. Yet, as Blalock (1975, p. 372) stated over 30 years ago, "we should not confuse our own convenience with the theoretical adequacy of our underlying models." It is hoped that this paper will encourage marketing researchers to give more consideration to the conceptualization and empirical realization of formative variables, since such consideration will play an important role in assisting researchers in their efforts to develop coherent theories.
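To make the operational point about beta weights concrete, the following is a minimal sketch of a formative variable treated as a researcher-defined weighted composite of its components. It is an illustration only: the indicator names, weights, and data are hypothetical, and the sketch does not reproduce the exact form of Eq. [4]. The key feature is that the beta weights are fixed a priori by the researcher to match the conceptual definition, rather than estimated from the data.

```python
# Minimal sketch: a formative variable as a researcher-defined composite.
# All names, weights, and data below are hypothetical illustrations.
import numpy as np

# Hypothetical scores for five respondents on three component indicators (x1, x2, x3)
X = np.array([
    [4.0, 3.0, 5.0],
    [2.0, 4.0, 3.0],
    [5.0, 5.0, 4.0],
    [1.0, 2.0, 2.0],
    [3.0, 3.0, 3.0],
])

# Beta weights fixed a priori by the researcher to reflect the conceptual
# definition of the composite (they are not estimated from the data)
beta = np.array([0.5, 0.3, 0.2])

# The composite is simply the weighted sum of its components
eta = X @ beta

print(eta)  # one composite score per respondent
```

Because the weights are a definitional choice rather than an empirical result, two studies that fix different weights have, by construction, created different composites; this is why the match between conceptual definition and operational decisions deserves explicit attention.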

References
Bagozzi, R. P. (1982). The role of measurement in theory construction and hypothesis testing: Toward a holistic model. In C. Fornell (Ed.), A second generation of multivariate analysis. New York: Praeger.
Bagozzi, R. P. (2007). On the meaning of formative measurement and how it differs from reflective measurement: comment on Howell, Breivik and Wilcox. Psychological Methods, 12(2), 229–237.
Bagozzi, R. P., & Fornell, C. (1982). Theoretical concepts, measurements, and meaning. In C. Fornell (Ed.), A second generation of multivariate analysis. New York: Praeger.
Baxter, R. (2009). Reflective and formative metrics of relationship value: a commentary essay. Journal of Business Research, 62, 1370–1377.
Bello, D. C., Katsikeas, C. S., & Robson, M. J. (2010). Does accommodating a self-serving partner in an international marketing alliance pay off? Journal of Marketing, 74(November), 77–93.
Blalock, H. M. (1971). Causal models involving unmeasured variables in stimulus–response situations. In H. M. Blalock (Ed.), Causal models in the social sciences. Chicago: Aldine.
Blalock, H. M. (1972). Causal inferences in nonexperimental research. New York: W. W. Norton and Company.
Blalock, H. M. (1975). The confounding of measured and unmeasured variables. Sociological Methods and Research, 3(4), 355–383.
Blalock, H. M. (1982). Conceptualization and measurement in the social sciences. Beverly Hills: Sage.
Bollen, K. A. (1984). Multiple indicators: internal consistency or no necessary relationship. Quality and Quantity, 18, 377–385.
Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley and Sons.
Bollen, K. A. (2002). Latent variables in psychology and the social sciences. Annual Review of Psychology, 53, 605–634.
Bollen, K. A. (2007). Interpretational confounding is due to misspecification, not to type of indicator: comment on Howell, Breivik, and Wilcox. Psychological Methods, 12, 219–228.
Bollen, K. A. (2011). Evaluating effect, composite, and causal indicators in structural equation models. MIS Quarterly, 35(2), 359–372.
Bollen, K. A., & Bauldry, S. (2011). Three Cs in measurement models: causal indicators, composite indicators, and covariates. Psychological Methods, 16(3), 265–284.
Bollen, K. A., & Lennox, R. (1991). Conventional wisdom in measurement: a structural equations perspective. Psychological Bulletin, 110(2), 305–314.
Borsboom, D. (2005). Measuring the mind: Conceptual issues in contemporary psychometrics. Cambridge: Cambridge University Press.
Borsboom, D., & Mellenbergh, G. J. (2002). True scores, latent variables, and constructs: a comment on Schmidt and Hunter. Intelligence, 30, 503–514.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110(2), 203–219.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061–1071.
Burt, R. S. (1976). Interpretational confounding of unobserved variables in structural equation models. Sociological Methods and Research, 5(1), 3–52.
Cadogan, J. W., & Lee, N. J. (2013). Improper use of endogenous formative variables. Journal of Business Research, 66, 233–241.
Cadogan, J. W., Souchon, A. L., & Procter, D. B. (2008). The quality of market-oriented behaviors: formative index construction. Journal of Business Research, 61(12), 1263–1277.
DeVellis, R. F. (1991). Scale development: theory and applications. London: Sage.
Diamantopoulos, A. (2006). The error term in formative measurement models: interpretation and modeling implications. Journal of Modelling in Management, 1(1), 7–17.
Diamantopoulos, A. (2008). Advancing formative measurement models. Journal of Business Research, 61(12), 1201–1202.
Diamantopoulos, A. (2011). Incorporating formative measures into covariance-based structural equation models. MIS Quarterly, 35(2), 335–358.
Diamantopoulos, A., & Siguaw, J. (2006). Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. British Journal of Management, 17, 263–282.
Diamantopoulos, A., & Winklhofer, H. M. (2001). Index construction with formative indicators: an alternative to scale development. Journal of Marketing Research, 38, 269–277.
Diamantopoulos, A., Riefler, P., & Roth, K. P. (2008). Formative indicators: introduction to the special issue. Journal of Business Research, 61(12), 1203–1218.
Edwards, J. R. (2011). The fallacy of formative measurement. Organizational Research Methods, 14(2), 370–388.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5(2), 155–174.
Ernst, H., Hoyer, W. D., Krafft, M., & Krieger, K. (2011). Customer relationship management and company performance – the mediating role of new product performance. Journal of the Academy of Marketing Science, 39, 290–306.
Fayers, P. M., & Hand, D. J. (2002). Causal variables, indicator variables and measurement scales: an example from quality of life. Journal of the Royal Statistical Society A, 165(2), 233–261.
Feeney, D. (2006). The multiattribute utility approach to assessing health-related quality of life. In A. M. Jones (Ed.), The Elgar Companion to health economics. Cheltenham: Edward Elgar Publishing.
Fornell, C., & Bookstein, F. L. (1982). A comparative analysis of two structural equation models: LISREL and PLS applied to market data. In C. Fornell (Ed.), A second generation of multivariate analysis. New York: Praeger.
Franke, G., Preacher, K. J., & Rigdon, E. (2008). Proportional structural effects of formative indicators. Journal of Business Research, 61(12), 1229–1237.
Gaski, J. F., & Nevin, J. R. (1985). The differential effects of exercised and unexercised power sources in a marketing channel. Journal of Marketing Research, 22(2), 130–142.
Grace, J. B., & Bollen, K. A. (2008). Representing general theoretical concepts in structural equation models: the role of composite variables. Environmental and Ecological Statistics, 15(2), 191–213.
Gregoire, Y., & Fisher, R. J. (2008). Customer betrayal and retaliation: when your best customers become your worst enemies. Journal of the Academy of Marketing Science, 36, 247–261.
Hardin, A. M., & Marcoulides, G. A. (2011). A commentary on the use of formative measurement. Educational and Psychological Measurement, 71(5), 753–764.
Hardin, A. M., Chang, J. C.-J., Fuller, M. A., & Torkzadeh, G. (2011). Formative measurement and academic research: in search of measurement theory. Educational and Psychological Measurement, 71(2), 281–305.
Hayduk, L. A. (1987). LISREL: Essentials and advances. Baltimore: Johns Hopkins University Press.
Hayduk, L. A. (1996). LISREL: Issues, debates and strategies. Baltimore: Johns Hopkins University Press.
Heise, D. R. (1972). Employing nominal variables, induced variables, and block variables in path analyses. Sociological Methods and Research, 1(2), 147–173.
Howell, R. D., Breivik, E., & Wilcox, J. B. (2007a). Reconsidering formative measurement. Psychological Methods, 12(2), 205–218.
Howell, R. D., Breivik, E., & Wilcox, J. B. (2007b). Is formative measurement really measurement? Reply to Bollen and Bagozzi. Psychological Methods, 12(2), 238–245.
Jarvis, C. B., MacKenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research, 30(4), 199–218.
Kline, R. B. (2006). Reverse arrow dynamics: Formative measurement and feedback loops. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course. Greenwich: IAP.
Lee, N., & Cadogan, J. W. (2013). Problems with formative and higher-order reflective variables. Journal of Business Research, 66, 242–247.
MacCallum, R. C., & Browne, M. W. (1993). The use of causal indicators in covariance structure models: some practical issues. Psychological Bulletin, 114(3), 533–541.
MacCorquodale, K., & Meehl, P. E. (1948). On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55, 95–107.
MacKenzie, S. B., Podsakoff, P. M., & Ahearne, M. A. (1998). Some possible antecedents and consequences of in-role and extra-role salesperson performance. Journal of Marketing, 62(July), 87–96.
MacKenzie, S. B., Podsakoff, P. M., & Jarvis, C. B. (2005). The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. Journal of Applied Psychology, 90(4), 710–730.
Nunnally, J. C. (1967). Psychometric theory. New York: McGraw-Hill.
Pearl, J. (2000). Causality: Models, reasoning and inference. Cambridge: Cambridge University Press.
Rigdon, E. E., Preacher, K. J., Lee, N., Howell, R. D., Franke, G. R., & Borsboom, D. (2011). Avoiding measurement dogma: a response to Rossiter. European Journal of Marketing, 45(11/12), 1589–1600.
Rossiter, J. R. (2002). The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing, 19(4), 305–336.
Rossiter, J. R. (2011). Marketing measurement revolution: the C-OAR-SE method and why it must replace psychometrics. European Journal of Marketing, 45(11/12), 1561–1588.
Rozeboom, W. W. (1956). Mediation variables in scientific theory. Psychological Review, 53(4), 249–264.
Ryan, M., & Farrar, S. (2000). Using conjoint analysis to elicit preferences for health care. British Medical Journal, 320, 1530–1533.
Scriven, M. (1966). Causes, connections, and conditions in history. In W. H. Dray (Ed.), Philosophical analysis and history. New York: Harper and Row.
Sosa, E., & Tooley, M. (1993). Introduction. In E. Sosa & M. Tooley (Eds.), Causation. Oxford: Oxford University Press.
Torrance, G. W., Feeny, D. H., Furlong, W. J., Barr, R. D., Zhang, Y., & Wang, Q. (1996). A multiattribute utility function for a comprehensive health status classification system: Health Utilities Mark 2. Medical Care, 34(7), 702–722.
Treiblmaier, H., Bentler, P. M., & Mair, P. (2011). Formative constructs implemented via common factors. Structural Equation Modeling, 18(1), 1–17.
Wilcox, J. B., Howell, R. D., & Breivik, E. (2008). Questions about formative measurement. Journal of Business Research, 61(12), 1219–1228.
