
The Study of Cause and Consequence in the Social Sciences

Much of science is consumed with the study of cause and consequence, particularly the

fit between theories of causation and empirical observations of the world. The social


sciences have experienced a great deal of internal controversy about the feasibility of

studying cause and consequence. This controversy has been fueled by the sloppy use of

language to interpret results of empirical analyses, the failure to explicitly identify key

assumptions, and the inappropriate use of specific analytic techniques. We do not review

this controversy here. Instead we review some of the key analytic approaches to the study

of causation in the social sciences and highlight the ways in which mixed method data

collection can be used to advance causal reasoning.

Typically, social scientists cannot study causal relationships by randomly

assigning subjects to experimental conditions. Instead they must usually draw causal

inference from observing people in social settings. The difficulty in drawing conclusions

about causal effects from observational data is that the exposure of interest is usually

allocated in some systematic way – the exposed group differs in composition from the

unexposed group and these differences in composition are related to the outcome being

studied (Moffitt 2003, 2005; Winship and Morgan 1999). In other words, people are not

randomly assigned to the social condition we want to study. This inevitably leads to the

problem that there are alternative explanations for the associations between outcome and

hypothesized cause that we observe. Put yet another way, some factors we do not observe

may influence the associations we do observe. Sometimes special circumstances

allow the assignment of people to experimental conditions in the social world, usually

when some type of special government benefits or programs are to be distributed, and

those involved desire a “fair” system of distribution. But these circumstances are rare.
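
To make the problem concrete, the following minimal simulation (our own hypothetical illustration, not drawn from any study cited here) shows how systematic allocation of an exposure can produce an association with an outcome even when the exposure has no causal effect, and how random assignment removes that association.

```python
# Hypothetical illustration: an unobserved factor U raises both the chance of
# exposure X and the outcome Y, so X and Y are associated under systematic
# allocation even though X has no causal effect on Y in this simulation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)                                      # unobserved confounder
x_systematic = (u + rng.normal(size=n) > 0).astype(float)   # exposure depends on U
x_randomized = rng.integers(0, 2, size=n).astype(float)     # exposure assigned at random
y = 2.0 * u + rng.normal(size=n)                            # outcome depends only on U

def mean_difference(x, y):
    """Difference in mean outcome between exposed and unexposed groups."""
    return y[x == 1].mean() - y[x == 0].mean()

print("apparent effect, systematic allocation:", round(mean_difference(x_systematic, y), 2))
print("apparent effect, random assignment:    ", round(mean_difference(x_randomized, y), 2))
# The first difference is large; the second is close to zero.
```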

An entire literature is devoted to causal modeling in the social sciences. This

literature proposes various analytic techniques for estimating models of cause and

consequence, critiques those techniques, proposes superior alternatives, and occasionally

argues that social scientists should give up altogether the effort to study cause and

consequence (Abbott 1998; Bachrach and McNicoll 2003; Freedman 1991; Fricke 1997;

Heckman 1978, 2000; Marini and Singer 1988; Marsden 1991; Moffitt 2003, 2005;

Raftery 1998; Rubin 1974; Smith 2003; Snijders and Hagenaars 2001; Winship and

Morgan 1999). During the past 50 years, a series of methodological fads has emerged

and receded, each promising to solve the problem of studying causal relationships in the

social sciences. These include analytic techniques such as selection corrections,

instrumental variable approaches, and various model identification procedures. Analytic

attention to issues of causal inference is important, but it is rarely satisfactory. Analytic

approaches to studying social causes always suffer from either untestable assumptions, or

limitations of the information available to operationalize these approaches, or both

(Heckman 2000; Moffitt 2003, 2005).
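
As one example of why such techniques rest on untestable assumptions, consider the sketch below (our own hypothetical setup, not a specific model from the literature cited above). It applies the simplest instrumental-variable estimator, which recovers the causal effect only under the assumption that the instrument affects the outcome solely through the exposure; the data themselves cannot verify that assumption.

```python
# Hypothetical instrumental-variable (Wald) sketch: Z shifts the exposure X and,
# by assumption, affects Y only through X. If that exclusion restriction fails,
# the IV estimate is biased, and no test on these data alone can detect it.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

u = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                      # instrument (assumed valid)
x = 0.8 * z + u + rng.normal(size=n)        # exposure driven by Z and U
y = 0.5 * x + u + rng.normal(size=n)        # true causal effect of X on Y is 0.5

ols_slope = np.polyfit(x, y, 1)[0]                      # biased by the confounder U
iv_slope = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]      # Wald estimator

print("naive OLS estimate:", round(ols_slope, 2))       # noticeably above 0.5
print("IV estimate:       ", round(iv_slope, 2))        # near 0.5, given the assumptions
```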

Many innovative research designs hold tremendous promise for addressing causal

questions because of opportunities to exclude alternative explanations of observed

associations in the research design itself (Campbell and Stanley 1963; Cook and

Campbell 1979; Rosenbaum 1999, 2001). Studies of twins and other “natural”

experiments seem among the most promising of current design-based approaches,

because they are generally thought to have the benefit of removing the chance that


unobserved factors produce observed associations between the change and subsequent

behavior of the people affected (Card and Krueger 1995; Heckman and Smith 1995;

Rosenzweig and Wolpin 2000). That is, they exploit situations in which the exposure of

interest was allocated as though it were randomly assigned. However, these approaches

also have limitations. First, such situations are rare, and their rarity dramatically

limits the range of topics and settings that can be studied with this approach
(Moffitt 2003, 2005). Second, careful examination of exposures that seem to

be assigned as though at random often reveals that important unobserved circumstances may

still produce the observed empirical associations (Bound and Solon 1999; Heckman 2000;

Moffitt 2003, 2005; Winship and Morgan 1999).
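
A simple way to see both the appeal and the fragility of such designs is a within-pair comparison. The hypothetical simulation below (ours, not a reanalysis of any cited study) differences exposures and outcomes within twin pairs to remove a shared family factor; the resulting estimate is sound only if that shared factor is the sole source of confounding.

```python
# Hypothetical twin-design sketch: each pair shares an unobserved family factor F.
# Differencing within pairs removes F, but only if F is the only shared confounder.
import numpy as np

rng = np.random.default_rng(2)
pairs = 50_000

f = rng.normal(size=(pairs, 1))                              # shared family factor
x = f + rng.normal(size=(pairs, 2))                          # each twin's exposure
y = 0.3 * x + 1.5 * f + rng.normal(size=(pairs, 2))          # true effect of X on Y is 0.3

pooled_slope = np.polyfit(x.ravel(), y.ravel(), 1)[0]        # confounded by F

dx = x[:, 0] - x[:, 1]                                       # within-pair differences
dy = y[:, 0] - y[:, 1]                                       # cancel the shared factor F
within_slope = np.polyfit(dx, dy, 1)[0]

print("pooled estimate:     ", round(pooled_slope, 2))       # biased well above 0.3
print("within-pair estimate:", round(within_slope, 2))       # near 0.3
```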

Thus, none of the approaches currently available offers a panacea for the problem

of causal inference for all subjects, settings, or occasions. In fact, we argue there are no

ultimate solutions to the problem of causal inference in the social sciences. The approach

we advocate for the study of cause and consequence in the social sciences lies between

recognition of the flaws in all available observational approaches and abandonment of the

effort. We advocate taking full advantage of every reasonable approach available to build

a comprehensive set of evidence for the research question at hand.

We argue that advances in causal reasoning in the social sciences must come from

a constant interplay between theory and empirical evidence. In our view the careful social

scientist should try to address the problem of unobserved factors producing observed

associations by trying to account for as many factors as possible. This accounting


includes (1) theorizing and reasoning regarding possible unobserved factors that may be

responsible for observed associations, (2) constructing research designs to

eliminate key rival hypotheses, and (3) designing measurement strategies to observe as

many of the potential causes as possible. Although we believe no one approach will solve

the problem of causal inference, we believe these steps advance the social scientific study

of cause and consequence.

Our emphasis on measurement strategies for advancing causal reasoning is

founded on a specific epistemological view of causal reasoning in social science. In our

view, neither social scientific methods nor the data gathered and analyzed by social

scientists can demonstrate “proof” of cause. Rather, scientists hypothesize about cause in

the theoretical arguments they create. Empirical analyses of data about the social world

may be consistent or inconsistent with those arguments, but they cannot prove or

demonstrate causation (Marini and Singer 1988). Based on this perspective, the more
reliable evidence brought to bear on a theory, the better. Therefore, we espouse an

approach to the study of causation that uses multiple sources of evidence to test, or

challenge, causal theories – an approach that lies at the foundation of the methods we

advocate.

Furthermore, this view of causal inference leads us to emphasize the importance

of two specific aspects of measurement in the social sciences. These are the temporal

order among measures and the comprehensiveness of the measurement. Temporal

ordering of measures is important because most causal reasoning has temporal ordering


embedded in it. That is, when we reason that X causes Y, almost always we argue that X

occurs before Y. The most common exception to this temporal order is the argument that

anticipation of X in the future causes Y now. However, this exception simply identifies a

third component to the causal reasoning, Z, where Z is the anticipation of X. If it is

possible to observe Z, the temporal order of our reasoning is still quite clear: Z causes Y

and Z occurs before Y. This view of the relationship between causal reasoning and

temporal order is fundamental to the measurement strategies we describe in Chapters 5

and 6, and the idea that anticipation of events may itself act as a cause is fundamental to the

approaches we describe in Chapter 7.
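
As a small hypothetical illustration of building temporal order into measurement (our own sketch, not the designs described in Chapters 5 through 7), panel data allow the analyst to pair an outcome with hypothesized causes, including an anticipation measure Z, recorded in an earlier wave.

```python
# Hypothetical panel sketch: the anticipation measure Z recorded at wave t is
# paired with the outcome Y recorded at wave t + 1, so the measured cause
# precedes the measured outcome, matching the reasoning "Z occurs before Y".
import numpy as np

rng = np.random.default_rng(3)
n, waves = 10_000, 4

z = rng.normal(size=(n, waves))                             # anticipation of X at each wave
y_next = 0.4 * z[:, :-1] + rng.normal(size=(n, waves - 1))  # Y at wave t+1 depends on Z at wave t

slope = np.polyfit(z[:, :-1].ravel(), y_next.ravel(), 1)[0]
print("slope of Y(t+1) on Z(t):", round(slope, 2))          # near the true value 0.4
```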

The comprehensiveness of measurement is important for two reasons. First,

redundant measures from multiple sources can be used to help reduce the chances that

bias associated with the measurement is responsible for observed associations. We argue

that the best measurement strategies take advantage of multiple methods to allow the

strengths of some methods to compensate for the weaknesses of other methods, thereby

reducing the likelihood of replicating bias across measures (Rosenbaum 2001). The effort

to avoid replication of measurement bias is particularly important because each

approach to the study of a specific causal question may be threatened by bias. Replication

of investigations into a specific causal question using a variety of different approaches,

varying both research design and measurement strategy, holds great promise for

advancement of our understanding of the underlying causal relationship (Rosenbaum


2001).
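
A hypothetical sketch of this point (ours, not drawn from Rosenbaum): when both the exposure and the outcome are reported by the same survey respondent, a shared reporting bias can manufacture an association on its own, whereas measuring the outcome with a second, independent method breaks that shared bias.

```python
# Hypothetical same-source bias sketch: a respondent-level reporting bias B enters
# both survey measures, creating an association even though the true X and Y are
# unrelated. An outcome measured by an independent method does not share B.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

x_true = rng.normal(size=n)
y_true = rng.normal(size=n)                    # unrelated to x_true by construction
b = rng.normal(size=n)                         # shared reporting bias

x_survey = x_true + b
y_survey = y_true + b                          # same source, same bias
y_other_method = y_true + rng.normal(size=n)   # independent measurement error

print("slope, both measures from survey:   ", round(np.polyfit(x_survey, y_survey, 1)[0], 2))        # spurious, ~0.5
print("slope, outcome from a second method:", round(np.polyfit(x_survey, y_other_method, 1)[0], 2))  # ~0.0
```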


Second, comprehensive measurement can be used to document potential

mechanisms responsible for an observed association, lending further evidence to the

argument for a causal relationship. Measuring all potential causal factors identified by

theory is a fundamental tool for establishing empirical evidence of causal relationships in

the social world. Measures of the potential mechanisms responsible for producing an

association are particularly important to causal reasoning because empirical evidence of

such mechanisms is one tool for establishing a causal relationship (Rosenbaum 2001).

Again, mixed method data collection is a particularly useful tool in this endeavor.

Readers may associate empirical study of cause and consequence in the social world with

statistical analyses of survey data. However, just as we argue that no one data collection

method is sufficient for the study of cause and consequence, the leading survey

methodologists make the same argument about survey methods. Consider the following

quote from Groves et al.’s 2004 monograph on survey methodology:

Surveys are rather blunt instruments for information gathering. They are powerful in

producing statistical generalizations to large populations. They are weak in generating rich

understanding of the intricate mechanisms that affect human thought and behavior. Other

techniques are preferred for that purpose. (Groves et al. 2004, p. 378)

Surveys are powerful for providing evidence of associations, but they are less powerful

for discovering the mechanisms responsible for those associations. Less structured

methods, such as observation and unstructured interviewing, are more powerful for

discovering these mechanisms (Moffitt 2000; Sieber 1973). But once such potential

mechanisms are discovered, survey methods are again a powerful tool for establishing


associations between these potential mechanisms and the outcome of interest. As a result,

mixed method techniques that combine survey data collection with less structured
interviewing or observational data collection are extremely powerful for advancing causal

reasoning. We devote Chapters 3, 4, and 8 of this book to detailed consideration of these

techniques.
