
Outcome Research Design

A Five-Step Guide to
Conducting SEM Analysis
in Counseling Research

Counseling Outcome Research and Evaluation
3(1) 30-47
© The Author(s) 2012
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/2150137811434142
http://cor.sagepub.com

Stephanie A. Crockett1

Abstract
The use of structural equation modeling (SEM), a second-generation multivariate analysis technique that determines the degree to which a theoretical model is supported by the sample data, is becoming increasingly popular in counseling research. SEM tests models that include both observed and latent variables, allowing the counseling researcher to confirm the factor structure of a newly developed or existing psychological instrument and to examine the plausibility of complex, theoretical counseling models. This article provides counseling researchers and practitioners with an overview of SEM and presents five steps for conducting SEM analysis in counseling research.
Keywords
structural equation modeling, counseling research
Submitted 2 October 2011. Revised 4 December 2011. Accepted 5 December 2011.

In recent decades, the field of counseling has


made increased efforts to empirically know
how and what works in client treatment and
to build a scientific foundation that substantiates the efficacy of counseling practice in relation to client outcomes (Kaplan & Gladding,
2011; Ray et al., 2011). As we attend to what
works in counseling, it is critical that counseling researchers and practitioners employ clinical interventions and assessments that are
grounded in empirically verified counseling
theories and constructs. The validation of complex counseling theories and constructs
requires counseling researchers to employ
advanced statistical methods. Structural equation modeling (SEM) is one such advanced statistical method that allows for the testing of
multifaceted theories and constructs; and in the
social sciences, it is rapidly becoming the
favored method for determining the plausibility

of theoretical models (Martens, 2005; Quintana


& Maxwell, 1999; Schumacker & Lomax, 2010).
SEM is a collection of statistical techniques
that allows researchers to assess empirical
relationships among directly observed variables and underlying theoretical constructs
(i.e., latent variables; Raykov & Marcoulides,
2000). It is highly applicable within the field of
counseling as researchers often strive to validate
theoretical constructs and models. Specifically,
SEM can be used to confirm the factor structure
of a newly developed psychological instrument (Martens, 2005; Tomarken & Walker, 2005).

Department of Counseling, Oakland University, Rochester, MI, USA
Corresponding Author:
Stephanie A. Crockett, Oakland University, 2200 N. Squirrel Road, Rochester, MI 48309, USA
E-mail: crockett@oakland.edu

Counseling researchers may also wish to use
SEM to confirm the factor structure of an existing
psychological instrument with a new population
(Martens, 2005). SEM techniques can also be
employed to determine the plausibility of complex, theoretical counseling models. Further,
counseling researchers can use SEM to compare competing theoretical models in order to
determine which model is a better fit to the
empirical data (Chan, Lee, Lee, Kubota, &
Allen, 2007).
For these reasons, SEM is becoming increasingly popular in counseling research. For example, Bullock-Yowell, Peterson, Reardon,
Leierer, and Reed (2011) evaluated the cognitive
information processing theory using SEM to determine whether career thoughts mediate the relationship between career/life stress and level of career decidedness. Chao, Chu-Lien, and Sanjay (2011) employed SEM techniques to examine the role of ethnic identity, gender roles, and multicultural training in college counselors' multicultural counseling competence. Cochran, Wang, Stevenson, Johnson, and Crews (2010) sought to empirically verify Gottfredson's theory of Circumscription and Compromise, using SEM to test the relationship between adolescent occupational aspirations and midlife career success. Villodas, Villodas, and Roesch (2011) examined the factor structure of the Positive and Negative Affect Schedule (PANAS) for a multiethnic sample of adolescents using a confirmatory factor analysis (CFA). Tovar and Simon (2010) employed a CFA to validate the factor structure of the Sense of Belonging scales.
From the examples listed above, it is apparent that SEM plays a vital role in the advancement of counseling research and, as easy-to-use SEM computer programs such as AMOS (Arbuckle & Wothke, 1999) become readily accessible, it can be expected that SEM will be increasingly important in determining the efficacy of counseling services and treatment.
While SEM is widely used in social science
research (Chan et al., 2007; Quintana &
Maxwell, 1999), to date no tutorial articles have
been published to assist counseling researchers
and practitioners in the step-by-step application

of SEM techniques. This article strives to familiarize counseling researchers and practitioners
with the purpose and uses of SEM, as well as
provide an applied approach to conducting
SEM analysis. In particular, the article begins
with a general overview of SEM, including
key terms and definitions, a brief history of
SEM development, and the advantages and
limitations associated with the approach.
Readers will then learn how to conduct SEM
analysis in counseling research using a series
of five applicable stages.

Overview of SEM
SEM is a second-generation multivariate analysis technique that is used to determine the
extent to which an a priori theoretical model
is supported by the sample data (Raykov &
Marcoulides, 2000; Schumacker & Lomax,
2010). More specifically, SEM tests models
that specify how groups of variables define a
construct, as well as the relationships among
constructs. For example, consider a counseling
researcher who is interested in the impact of the
therapeutic working alliance, a construct that
cannot be directly measured, on the number
of counseling sessions a client attends. The
researcher could use SEM to determine
whether (a) variables such as agreement on
therapy tasks, agreement on therapy goals, and
the counselor-client emotional bond comprise the construct therapeutic working alliance, and (b) the therapeutic working alliance, as a whole, is predictive of the number of counseling sessions a client attends.
In essence, SEM uses hypothesis testing to
improve our understanding of the complex relationships that occur among observed variables
and latent constructs. Observed variables (i.e.,
indicator variables) are variables that can be
directly measured using tests, assessments, and
surveys, and are used to define a given latent
construct. Latent constructs cannot be directly
observed or measured and, as a result, must
be inferred from a set of observed variables.
In our example, agreement on therapy tasks,
agreement on therapy goals, and emotional
bond are observed variables that are directly

measured by the Working Alliance Inventory


(WAI; Horvath & Greenberg, 1986). The
inventory yields three separate subscale scores
that are then used to make inferences regarding
the overall working alliance between the
counselor and client. Therefore, it can be said
that agreement on therapy tasks, agreement on
therapy goals, and the emotional bond between
the counselor and client define the latent construct working alliance. The outcome variable,
number of counseling sessions attended, is also
an observed variable and, using SEM procedures, a researcher can test the hypothesized
relationship between the observed outcome
variable and the latent predictor variable.

The Development of SEM


SEM was derived from the evolution of three
particular types of models: regression, path,
and confirmatory factor (Schumacker &
Lomax, 2010). The first step toward SEM
development was linear regression modeling.
Linear regression modeling is concerned with
observed variables only and attempts to predict
a dependent, observed variable from one or
more independent, observed variables. Regression models use a correlation coefficient and
least squares criterion to estimate the parameters of the model by minimizing the sum
of squared differences between observed and
predicted scores of the dependent variable. Path
analysis, another precursor to SEM, is also concerned with observed variables, and predicts
relationships among observed variables by solving a series of concurrent regression equations. Path models permit the researcher to
test relationships among multiple independent
and dependent variables. Overall, path analysis
allows for the testing of more complex models
than linear regression analysis. The final model
that contributed to the development of SEM is
the confirmatory factor model. CFA assumes
that items on an inventory correlate with one
another and yield observed scores that measure
or define a construct. Confirmatory factor models seek to validate the existence of theoretical
constructs by empirically testing the relationships between observed and latent variables.

SEM models combine path and factor analytic models allowing for the incorporation of
both observed and latent variables into a model.
SEM procedures ultimately determine the plausibility of a theoretical model by comparing the estimated theoretical covariance matrix Σ to the observed covariance matrix S (i.e., the matrix derived from the sample data; Schumacker & Lomax, 2010). Many SEM software programs are currently available to researchers, including LISREL, AMOS, EQS, Mx, Mplus, Ramona, and SEPATH. Many of these programs allow researchers to statistically analyze raw data and provide procedures for managing missing data, outliers, and variable transformations. Programs such as AMOS and LISREL offer researchers the option to construct a path diagram that the software translates into the mathematical equations needed for analysis.

Advantages and Limitations of SEM Use


SEM techniques yield several advantages over
first-generation multivariate methods (Kline,
2010; Schumacker & Lomax, 2010). Most
importantly, SEM offers researchers an
enhanced understanding of the complex relationships that exist among theoretical constructs. As the counseling field continues to
explore increasingly complex phenomena, the theoretical models used to explain such phenomena are also increasing in complexity.
SEM techniques provide counseling researchers with a comprehensive method for specifying and empirically testing the plausibility of
complex theoretical models (Kelloway, 1998).
SEM also allows for the simultaneous analysis of direct and indirect effects with multiple
exogenous and endogenous variables (Stage,
Carter, & Nora, 2004). A direct effect occurs
when the exogenous (i.e., independent) variable influences an endogenous (i.e., dependent)
variable. An indirect effect, on the other hand,
occurs when the relationship between the exogenous and endogenous variable is mediated
by one or more intervening variables (Baron
& Kenny, 1986). While multiple regression
analysis can also be used to explore indirect

relationships among variables (see e.g., Baron


& Kenny, 1986), it assumes that no measurement error exists for the exogenous variables
(Raykov & Marcoulides, 2000). Such an
assumption rarely applies to actual practice.
Ignoring potential measurement error can
adversely impact the validity and reliability of
a study and, as a result, multiple regression
techniques may be highly susceptible to errors
in interpretation. SEM techniques, on the other
hand, overtly take into account the measurement error in the model's observed variables (Schumacker & Lomax, 2010).
In addition, SEM affords counseling
researchers the ability to test increasingly complex theoretical models. For example, SEM permits the same variable to be interpreted as both
an exogenous and endogenous variable (Stage
et al., 2004), and allows for an interaction term
to be included in the theoretical model in order
to test main and interaction (i.e., moderator)
effects (Schumacker & Lomax, 2010). These
techniques can also be used to compare alternative theoretical models in order to assess the
relative fit of each model, which decreases the
high frequency of model misspecification found
in regression analysis (Skosireva, 2010).
Finally, SEM provides a path diagram, or visual
representation of the hypothesized relationships
among variables, that can be directly translated
into the mathematical equations needed for
analysis (Raykov & Marcoulides, 2000; Stage
et al., 2004).
While SEM has several advantages over traditional, first-generation multivariate methods,
there are limitations associated with using this
technique. Similar to other multivariate statistical techniques, SEM examines the correlations
among variables, but cannot establish causal
effects. As a result, the successful application
of SEM techniques relies on the researcher's
theoretical knowledge of each variable (Stage
et al., 2004). SEM is also an inherently confirmatory technique and is most advantageous
when the researcher has an a priori theoretical
model to test. It is not an exploratory technique
and is ill suited for exploring and identifying
relationships among variables (Kelloway,
1998, p. 7).

Steps for Conducting SEM Analysis


Prior to discussing the steps for conducting an
SEM analysis, counseling researchers should
be reminded that SEM is a correlational
research technique and, as a result, the analysis is impacted by measurement scales,
restriction of range, outliers, linearity, and nonnormality (Schumacker & Lomax, 2010).
Counseling researchers should take the time
to thoroughly screen the data, attending to outliers and missing data, as well as issues related
to linearity and normality before running SEM
analysis. The actual SEM analysis consists of a
series of five sequential steps: model specification, model identification, model estimation,
model testing, and model modification (Bollen
& Long, 1993; see Table 1). The remainder of
this section discusses each of these steps at
length. To illustrate the application of SEM
procedures, an example theoretical model
based on a study by Crockett (2011) is used
throughout this section. The study examined
the impact of supervisor multicultural competence and the supervisory working alliance on
supervision outcomes in a sample of 221 counseling trainees enrolled in master's- and doctoral-level counseling programs across the United States.

Model Specification
Model specification is the first step of SEM
analysis and occurs prior to data collection and
analysis. It is often the most difficult step for
researchers as it involves the development of
a theoretical model using applicable, related
theory and research to determine variables of
interest and the relationships among them
(Cooley, 1978). It is critical that the hypothesized theoretical model be grounded in and
derived from the extant literature. The
researcher must be able to provide plausible
explanations for relationships included in the
model and a rationale for the overall specification of a model. The example theoretical
model attempts to specify the relationship
between supervisor multicultural competence
and supervisee outcomes. The model

Table 1. Steps for Conducting SEM Analysis

Model specification. This step involves the specification of a theoretical model that utilizes applicable, related theory and research to determine the latent and observed variables of interest and the relationships among them. In particular, the researcher must specify a measurement and a structural model. A path diagram can be constructed to visually represent the hypothesized relationships among variables in the theoretical model.

Model identification. This step helps the researcher to determine whether the specified model is capable of producing actual results that can be estimated in SEM analysis. Models must be identified and able to generate a unique solution and parameter estimates. O'Brien's (1994) criteria can be used to establish whether a measurement model is identified. To determine whether a structural model is identified, researchers can use Bollen's (1989) recursive rule and the t rule.

Model estimation. This step involves the use of an iterative procedure (i.e., fitting function) to generate the theoretical covariance matrix Σ, as well as minimize the differences between the estimated theoretical covariance matrix Σ and the observed covariance matrix S. Maximum likelihood (ML) and generalized least squares (GLS) are the most commonly used fitting functions.

Model testing. This step involves the analysis of both the measurement and structural models in order to determine (a) the global fit of the entire model, and (b) the fit of individual model parameters. Multiple indices of fit (i.e., absolute, comparative, and parsimonious) should be analyzed to determine the degree to which the theoretical model fits the sample data. The χ² difference test can also be used when working with nested models to compare the plausibility of the theoretical model to viable alternative models. It should be noted that the measurement model must yield a good fit to the data before the structural model can be analyzed.

Model modification. The final step involves using theory trimming or the addition of new parameters to attempt to improve the theoretical model's fit to the data. Researchers should be advised that model modification is an exploratory procedure and is based on the sample data instead of the extant literature. Respecified models will need to be cross-validated with a new sample.

hypothesizes: (a) that supervisor multicultural


competence directly impacts supervisee counseling self-efficacy (CSE) and (b) that supervisor multicultural competence indirectly
impacts supervisee CSE through the supervisory working alliance. That is, the supervisory
working alliance mediates the relationship
between the exogenous variable and endogenous variables. The hypothesized relationships
among the variables are depicted as:
Supervisor Multicultural Competence → Supervisee CSE
Supervisor Multicultural Competence → Supervisory Working Alliance → Supervisee CSE
Given that SEM models contain both observed
and latent variables, model specification is a

two-step building process (Anderson &


Gerbing, 1988). First, the measurement model
must be specified; this involves the identification of observed variables that comprise each
of the model's latent constructs. It is important
to note that the measurement model does not
specify directional relationships among the
latent variables. The measurement model in
the example includes three latent constructs.
The first latent variable, supervisory working
alliance, is estimated by the three observed
factors (i.e., task, goals, and bond subscales)
that comprise the underlying structure of the
WAI-Short Form (WAI-SF; Ladany, Mori, &
Mehr, 2007). The second latent variable,
supervisee CSE, is estimated by the five factors (i.e., microskills, counseling process, difficult client behaviors, cultural competence,

and counselor values/biases subscales) that comprise the underlying structure of the Counselor Self-Estimate Inventory (COSE; Larson et al., 1992). The measurement model for the example study can be expressed through a series of eight equations wherein the error term indicates the measurement error inherent in the observed variable:

Tasks = function of supervisory working alliance + error;
Goals = function of supervisory working alliance + error;
Bond = function of supervisory working alliance + error;
Microskills = function of supervisee CSE + error;
Counseling process = function of supervisee CSE + error;
Difficult client behaviors = function of supervisee CSE + error;
Cultural competence = function of supervisee CSE + error;
Counselor values/biases = function of supervisee CSE + error.

If the latent constructs in the measurement model are adequately measured by the observed variables, then researchers can specify the structural model. The structural model specifies relationships among the latent variables in the theoretical model. It is imperative that such relationships are indicated prior to model estimation and testing, as SEM is a confirmatory technique. The structural model in the example study identifies: (a) the hypothesized direct relationship between the exogenous variable, supervisor multicultural competence, and the latent endogenous variable, supervisee CSE; and (b) the hypothesized indirect relationship between the exogenous variable, supervisor multicultural competence, and the latent endogenous variable, supervisee CSE, through the latent mediator variable, supervisory working alliance. The structural model can also be illustrated through a series of equations; because the model includes a mediator variable, three equations are specified:

Supervisee CSE = structure coefficient1 × Supervisor Multicultural Competence + error;    (9)
Supervisee CSE = structure coefficient2 × Supervisory Working Alliance + error;    (10)
Supervisory Working Alliance = structure coefficient3 × Supervisor Multicultural Competence + error.    (11)

The structural equations specify the estimation of three structure coefficients (i.e., elements that comprise the estimated theoretical covariance matrix Σ). Each equation contains a prediction error, which specifies the degree of variance in the latent endogenous variable that is not accounted for by the other variables in the equation (Schumacker & Lomax, 2010). Finally, the equations specify the direction of the predicted relationships.
The hypothesized relationships among
observed and latent variables in a theoretical
model can also be illustrated through a path
diagram (i.e., a graphical representation of the
theoretical model). Such diagrams use a series
of conventional symbols to depict the relationships among model variables (see Figure 1). A
rectangle represents an observed variable,
whereas an oval denotes a latent variable. Unidirectional arrows indicate a hypothesized
relationship in which one variable influences
another. These arrows are often referred to as
model paths. Bidirectional, curved arrows are
used to denote covariance between two independent variables. Finally, the measurement
error for each observed, dependent variable

is symbolized by a circle that includes an error term and points toward the dependent variable. Figure 2 provides an example path diagram for the example model.

Figure 1. Hypothesized relationships among observed and latent variables in a theoretical model can be illustrated through a path diagram that uses a series of conventional symbols to depict the relationships among model variables. (Symbols shown in the figure: observed variable; latent variable; unidirectional, or recursive, relationship; nonrecursive relationship; covariance among two independent variables; measurement error for an observed variable.)

Model Identification
Model identification is a requirement for producing results that can be estimated in SEM analysis.
This step occurs prior to estimating model parameters (i.e., relationships among variables in the
model) and is concerned with whether a unique
solution to the model can be generated. For a
model to be considered identified, it must be
theoretically possible to establish a unique
estimate for each parameter (Kelloway, 1998;
Schumacker & Lomax, 2010) and is dependent
on the designation of model parameters as free
(i.e., a parameter that is unknown and needs to
be estimated), fixed (i.e., a parameter that is fixed
at a specific value, often a 0 or 1), or constrained
(i.e., a parameter that is unknown, but constrained to equal one or more other parameters).
For example, a theoretical model that asserts x + y = 20 has no sole solution; the value of x could be 10, 15, or 19. In order to find a unique solution for x, the value of y must be fixed. If the value of y is fixed at 15, then x has to be 5.
The measurement model must first be identified for the overall SEM to be identified. According to O'Brien (1994), the measurement model is
most likely identified when: (a) there are two or
more latent variables, each with at least three
indicators that load on it, the errors of these indicators are not correlated, and each indicator loads
on only one factor, or (b) there are two or more
latent variables, but there is a latent variable on
which only two indicators load, the errors of the
indicators are not correlated, each indicator loads
on only one factor, and the variances or covariances between factors is zero. To increase the
likelihood of identification in the structural
model, a causal path from each latent variable
to a corresponding observed variable must be fixed to a nonzero value (typically 1.0). This one fixed, nonzero loading is termed a reference variable and is often the
variable with the most reliable scores (Kline,
2010). CFA results (see section on model testing)
confirmed that the example measurement model
was identified as each latent variable had three or
more indicators that appropriately loaded on
each variable, the errors of the indicators were
not correlated, and each indicator in the model
only loaded on one factor. Additionally, the reference variable for each latent variable was identified in the CFA. The reference variables for supervisory working alliance and supervisee CSE were task and microskills, respectively.
Establishing that a structural model is identified can be extremely cumbersome and
involves highly complex mathematical calculations. As a result, Bollen (1989) outlined a
widely used set of rules for the identification
of structural models: the recursive rule and the
t rule. The recursive rule states that a structural
model should be recursive to be identified. A
structural model is recursive when all of the
relationships specified by the model are unidirectional (i.e., two variables are not reciprocally
related; Schumacker & Lomax, 2010). To satisfy the recursive rule: (a) the Ψ matrix (i.e., the errors in the structural equations) of a structural model must be diagonal, meaning that there are no correlated errors among the endogenous variables, and (b) the B matrix must be able to be arranged so that all free elements are in the lower triangle of the matrix, meaning that no reciprocal relationships or feedback loops exist among the endogenous variables (Bollen, 1989). A visual inspection of a model's path diagram, in conjunction with an examination of the Ψ and B matrices in the output of an SEM software program, allows the researcher to determine whether the model is recursive. An examination of Figure 2 indicates that the example model is recursive, as all relationships specified in the model are unidirectional.

Figure 2. This path diagram depicts the hypothesized direct and indirect relationships among supervisor multicultural competence, the supervisory working alliance, and supervisee counseling self-efficacy as specified by the example theoretical model.
The t rule asserts that, in an identified, recursive model, the number of parameters to be estimated is less than the number of nonredundant (i.e., unique)
elements in the sample covariance matrix S
(i.e., the true model generated from the data).
Simply stated, the structural model must have
more known pieces of information than
unknown pieces in order to find unique solutions. To determine whether this necessary condition is met, the number of knowns (i.e., the
number of unique elements in the covariance
matrix of the structural model) is calculated
using p(p + 1)/2, where p is equal to the
number of observed variables. The number of
unknowns is equal to the number of free parameters to be estimated in the model (i.e., the
relationships between the exogenous and endogenous variables, relationships between the
endogenous variables, factor loadings, errors in

the equations, variance/covariance of the exogenous variables). In our example theoretical


model, there are nine observed variables; therefore, the number of unique elements in the covariance matrix (i.e., the number of knowns) is
45. The number of free parameters (i.e., the
number of unknowns) to be estimated in the
model is 9. Given that the number of unique elements in the covariance matrix exceeded the
number of free parameters in the model, the
model is said to be overidentified. SEM models
can also be underidentified or just-identified.
Underidentified models do not provide enough
information for the model parameters to be distinctively estimated and, as a result, fail to yield
a unique solution. Just-identified models provide just
enough information for all of the model parameters to be uniquely estimated. Overidentified
and just-identified models are both considered to
be identified; however, an overidentified model
yields a number of possible solutions, whereas a
just-identified model produces only one solution. Given that the covariance matrix contains
many sources of error (e.g., sampling and measurement error), researchers (Kelloway, 1998)
suggest that an overidentified model is ideal.
In an overidentified model, the goal of SEM is
to select the solution that comes closest to
explaining the observed data (Kelloway, 1998).
Underidentified models, such as x + y = 20, can
easily become identified by imposing additional
constraints on model parameters.
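The counting logic of the t rule is easy to verify directly. The short sketch below is illustrative only; it uses the counts reported above for the example model, computes the number of knowns, p(p + 1)/2, and classifies the model as over-, just-, or underidentified.

def t_rule(n_observed, n_free_parameters):
    # Knowns: nonredundant elements of the sample covariance matrix S
    knowns = n_observed * (n_observed + 1) // 2
    if n_free_parameters < knowns:
        status = "overidentified"
    elif n_free_parameters == knowns:
        status = "just-identified"
    else:
        status = "underidentified"
    return knowns, status

# Example model: nine observed variables and nine free parameters (as reported above)
print(t_rule(9, 9))   # (45, 'overidentified')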

Model Estimation

Model estimation, the third step of SEM analysis, involves estimating the parameters of the
theoretical model in such a way that the theoretical parameter values yield a covariance
matrix as close as possible to the observed
covariance matrix S. SEM analysis programs
use an iterative procedure, often referred to as a fitting function, to minimize the differences between the estimated theoretical covariance matrix Σ and the observed covariance matrix S. Specifically, the iterative procedure attempts to improve the preliminary parameter estimates with subsequent calculation cycles. The final parameter estimates represent the best fit to the observed covariance matrix S.
Several fitting functions are available to
researchers (e.g., ordinary least squares [OLS],
generalized least squares [GLS], maximum likelihood [ML]). ML is the most widely used type
of estimation, followed by GLS (Kelloway,
1998). Although ML and GLS are comparable to the OLS estimation used in multiple regression, they are slightly different from, and have several advantages over, OLS estimation. In particular, ML and GLS (a) are not scale-dependent, (b) allow dichotomous exogenous variables (Skosireva, 2010), and (c) are consistent and asymptotically efficient in large samples (Bollen, 1989; Kelloway, 1998; Schumacker & Lomax, 2010). ML and GLS assume multivariate normality of the dependent variables and, unlike OLS, are full-information techniques, meaning that they estimate all model parameters simultaneously to produce a full estimation model. When the assumption of multivariate normality is violated, researchers may use an asymptotically distribution-free (ADF) estimator. ADF is not dependent on the underlying distribution of the data, but it does require a large sample size, as the estimator yields inaccurate chi-square (χ²) statistics for smaller sample sizes (Mueller, 1996). For more information on ADF, please see Raykov and Widaman (1995). In the example model, ML was employed by LISREL during the SEM analysis to minimize the differences between the estimated theoretical covariance matrix Σ and the observed covariance matrix S.
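The article does not reproduce the fitting function itself. For readers who want to see what the iterative procedure is minimizing, the sketch below implements the standard ML discrepancy function from the general SEM literature (e.g., Bollen, 1989), F_ML = ln|Σ| + tr(SΣ⁻¹) − ln|S| − p, where p is the number of observed variables; (N − 1) times the minimized value serves as the model χ² statistic.

import numpy as np

def ml_discrepancy(S, Sigma):
    # S and Sigma are p x p covariance matrices (NumPy arrays)
    # F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

# An SEM program iteratively adjusts the model parameters (and hence Sigma)
# until ml_discrepancy(S, Sigma) is as small as possible;
# (N - 1) * F_ML at the minimum is the model chi-square.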

Model Testing

As mentioned earlier, SEM allows for the simultaneous analysis of direct and indirect relationships among latent and observed variables;
however, many researchers (e.g., Anderson &
Gerbing, 1988; James, Mulaik, & Brett, 1982)
recommend a two-step approach to model testing. In particular, James, Mulaik, and Brett
(1982) argued that model testing involved the
analysis of two conceptually distinct models: the
measurement model and the structural model.
The researcher must first determine whether the
proposed measurement model holds, ensuring
that the chosen observed indicators for a latent
construct actually measure the construct. If the
chosen indicators for a construct do not accurately measure the construct, then the structural
model is meaningless (Joreskog & Sorbom,
1993). Accordingly, it is recommended that
researchers conduct a CFA of the measurement
model to determine whether the factor indicators
loaded on the latent variables in the direction
expected prior to testing the structural model.
A CFA of the example measurement model
was run prior to estimating the structural model
to ensure that all factors loaded on the latent
variables in the direction expected. Results
indicated an adequate fit of the CFA model,
χ²(19) = 44.72, p < .05; root mean square error of approximation (RMSEA) = .07; comparative fit index (CFI) = .97; parsimonious normed fit index (PNFI) = .65, to the data (see the information on model fit below). The standardized parameter estimates were significant at the p < .05 level and consistent with the specified hypotheses, loading in the appropriate direction. The individual parameters comprising the model were also analyzed. As predicted, the latent variable supervisory working alliance was significantly positively correlated with its factor indicators: the WAI-SF bond subscale (r = .83, p < .05), WAI-SF task subscale (r = .92, p < .05), and WAI-SF goal subscale (r = .82, p < .05). The latent variable supervisee CSE was also significantly positively correlated with its factor indicators: the COSE microskills subscale (r = .83, p < .05), COSE counseling process subscale (r = .79, p < .05), COSE difficult client behaviors subscale (r = .74, p < .05), COSE cultural competence subscale (r = .71, p < .05), and COSE counselor values/biases subscale (r = .32, p < .05).
After confirming the fit of the measurement
model, counseling researchers can analyze the
structural model to determine the extent to
which the model is supported by the sample
data. Model fit is concerned with the fit of individual model parameters as well as the overall,
global fit of the entire model. When examining
the fit of individual parameters, Schumacker
and Lomax (2010) recommend researchers
consider whether each parameter is significantly different from zero, runs in the expected direction, and can be meaningfully interpreted. The majority of statistical programs yield a critical value (i.e., parameter estimate/estimated standard error) for each model parameter. For example, LISREL reports a t value. If the critical value exceeds the expected value at a specific α level, then the parameter is said to be statistically significant. For a two-tailed test at α = .05, the expected value is equal to 1.96. In addition to being statistically significant, each parameter's sign should agree with the theoretical model. For example, if the theoretical model asserts that a high level of supervisor multicultural competence yields increased supervisee CSE, then a parameter estimate with a positive sign would support the hypothesis. Finally, the parameter estimates should remain within expected values (i.e., correlations should not exceed 1, variances should have positive values). Researchers should also consider the overall model fit, using global measures of fit to evaluate the degree of discrepancy between the theoretical covariance matrix Σ and the sample covariance matrix S.
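As a small illustration of the critical-value check described above (a sketch with hypothetical numbers, not output from LISREL or any other program), a parameter's critical ratio can be compared against the two-tailed cutoff for a chosen α:

from scipy.stats import norm

def parameter_is_significant(estimate, standard_error, alpha=0.05):
    # Critical value = parameter estimate / estimated standard error
    critical_ratio = estimate / standard_error
    cutoff = norm.ppf(1 - alpha / 2)   # 1.96 when alpha = .05
    return abs(critical_ratio) > cutoff

print(parameter_is_significant(0.78, 0.06))   # hypothetical estimate and SE -> True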
Global measures of model fit. There are several
global fit measures that can aid the researchers
in assessing whether the theoretical model adequately fits the sample data. Model fit indices
can be categorized into three distinctive types
of fitting functions: absolute, comparative, and
parsimonious. When the model-fit indices are
suitable, the theoretical model is supported by

the sample data. Model-fit indices that are not


acceptable indicate that the sample data do not
support the hypothesized model, requiring the
respecification of the theoretical model.
Absolute fit. Indices of absolute fit are concerned with the structural model's ability to reproduce the sample covariance matrix S. The test specifies how well the proposed interrelationships between the variables match the actual or observed interrelationships (Meyer, Gamst, & Guarino, 2006, p. 558). The χ² test, RMSEA, goodness-of-fit index (GFI), and adjusted goodness-of-fit index (AGFI) are commonly used absolute fit measures. The χ² test used in SEM today was derived from early attempts to specify a model in path analysis. A nonsignificant χ² value indicates that the theoretical model covariance matrix Σ and the sample covariance matrix S are similar. The χ² goodness-of-fit test is, however, sensitive to violations of the assumption of multivariate normality and to sample size. Multivariate nonnormality in the data can inflate χ² statistics. Additionally, the χ² goodness-of-fit test uses N to calculate model fit; therefore, as N increases, the χ² value also increases (Kelloway, 1998; Schumacker & Lomax, 2010). The χ² statistic, therefore, has a tendency to indicate a statistically significant probability level in sample sizes over 200 and a nonsignificant probability level in sample sizes under 100 (Schumacker & Lomax, 2010). Given that the χ² goodness-of-fit test is reliant on sample size to calculate model fit and is sensitive to violations of multivariate normality, researchers using larger sample sizes are at an increased risk of making a Type I error and concluding that a significant difference exists between the theoretical model covariance matrix Σ and the sample covariance matrix S, when in fact the two matrices are similar (Kelloway, 1998; Schumacker & Lomax, 2010). As a result, scholars have argued that multiple indices of overall model fit be used in conjunction with the χ² goodness-of-fit test (Joreskog & Sorbom, 1993; Kelloway, 1998; Lent, Lopez, Brown, & Gore, 1996; Schumacker & Lomax, 2010).

RMSEA is based on the analysis of residuals, using the square root of the mean-squared differences between the elements contained in the theoretical model covariance matrix Σ and the sample covariance matrix S. Index values range from 0.00 to 1.00, with lower values indicating a better fit to the data. Values lower than .10 are generally assumed to indicate an adequate fit to the data, with values lower than .05 indicating a very good fit to the data (Steiger, 1990). The GFI is based on the ratio of the sum of the squared differences to the observed variances, measuring the degree of variance and covariance in the sample covariance matrix S that is predicted by the theoretical model covariance matrix Σ. GFI values range from 0 to 1, with values higher than .90 indicating a good fit to the data. The AGFI essentially adjusts the GFI values to account for degrees of freedom in the model. Similar to the GFI, AGFI values range from 0 to 1, with values higher than .90 indicating a good fit to the data. Discrepancies between GFI and AGFI values typically indicate the presence of small, nonsignificant parameters in the theoretical model (Kelloway, 1998).
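The article reports RMSEA values but not the calculation behind them. A commonly used closed-form expression (drawn from the general SEM literature rather than from this article) recovers RMSEA from the model χ², its degrees of freedom, and the sample size; published values can differ slightly depending on the exact formula a program uses.

import math

def rmsea(chi_square, df, n):
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1)))
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Using the example model values reported later (chi2(25) = 51.25, N = 221)
print(round(rmsea(51.25, 25, 221), 3))   # approximately .069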
Comparative fit. Unlike absolute fit indices
which compare the theoretical model to a
model that fits the data perfectly, comparative
fit measures compare the model to a baseline
model that is said to fit the data poorly. Comparative fit indices are an incremental fit measure that determines the relative position of
model fit on a continuum that ranges from
worst fit (i.e., no relationships in the data) to
perfect fit. Commonly used comparative fit
indices include the normed fit index (NFI),
nonnormed fit index (NNFI), CFI, and relative
fit index (RFI). The NFI determines the percentage of improvement in the theoretical model's fit as compared to the baseline model. The fit index ranges from 0 to 1, with values greater than .90 indicating that the model has at least a 90% better fit than the baseline model.
It should be noted that in small sample sizes,
NFI has a tendency to underestimate model fit.
The NNFI attempts to correct for this error by
adjusting for the degrees of freedom in the
model. As a result, the index numbers can range

beyond 0 to 1; higher values indicate a better


model fit. The CFI is based on a noncentral
w2 distribution and index values range from 0
to 1, with values exceeding .90 indicating a
good fit. The RFI is very similar to the other
comparative fit indices described above in that
it ranges from 0 to 1, with values greater than
.90 indicating a good fit to the data.
Parsimonious fit. Parsimonious fit measures
can be likened to a cost-benefit analysis. These
indices are used to establish the impact of
adding additional parameters to the model.
Essentially, parsimonious fit measures determine whether the impact of adding additional
parameters on model fit is worth the decrease
in degrees of freedom. PNFI and parsimonious
goodness-of-fit index (PGFI) are commonly
used. Both indices range from 0 to 1. Unlike
other fit indices, PNFI and PGFI have no standard cutoff point for determining a good fit,
although some researchers (i.e., Meyer et al.,
2006; Mulaik et al., 1989) suggest that any
number above .50 indicates an acceptable
model. Instead, these indices are best used to compare two or more models; the model having
the highest PNFI or PGFI would be the most
parsimonious model.
SEM analysis was conducted on the example model to determine the fit of the individual
parameter estimates and the overall global fit of
the entire model. The χ² test for the model was significant, χ²(25) = 51.25, p < .05, but other fit indices (RMSEA = .06; CFI = .98) indicated the model was a good fit to the data. The path from supervisor multicultural competence to the supervisory working alliance was significant (β = .78, t = 13.06, p < .05). The path from the supervisory working alliance to supervisee CSE was also significant (β = .26, t = 2.07, p < .05). The path from supervisor multicultural competence to supervisee CSE, however, was not significant (β = .09, t = .79, ns). Given that
the path from supervisor multicultural competence to supervisee CSE was not significant, the
results suggest that the supervisory working alliance fully mediates the relationship between
supervisor multicultural competence and supervisee CSE.

Figure 3. This path diagram depicts the hypothesized relationship between supervisor multicultural competence and supervisee counseling self-efficacy as specified by the direct, alternative model.

Comparing nested models. Determining how


well the theoretical model fits the sample data
is a complicated and often convoluted process.
Researchers should use multiple fit indices to
obtain a broader understanding of model fit.
Kelloway (1998) further suggests that structural models also be tested against viable alternative models. That is, two or more plausible
models are compared to one another to determine which model best fits the sample data.
Researchers may not be able to demonstrate
that the theoretical model absolutely fits the
data, but it can be helpful to know that the
model fits better than an alternative model.
Numerous alternative models exist and many
of these models are sequentially nested within
the theoretical model. Nested alternative
models can be developed by fixing some of the
free parameters in the theoretical model to 0
(Kelloway, 1998).
Our example model, for instance, contains
two alternative models. The first alternative
model (i.e., the direct model) involves only the
direct relationship between supervisor multicultural competence and supervisee CSE. It
hypothesizes that supervisor multicultural competence directly impacts supervisee CSE by
fixing the indirect parameters from: (a) supervisor multicultural competence to supervisory
working alliance and (b) supervisory working
alliance to supervisee CSE to 0. The second
alternative model (i.e., the indirect model) is
only concerned with the indirect relationship

between supervisor multicultural competence


and supervisee CSE. This model hypothesizes
that supervisor multicultural competence only
impacts supervisee CSE indirectly through the
supervisory working alliance and fixes the
direct parameter from supervisor multicultural
competence to supervisee CSE to 0. Figures 3
and 4 illustrate the path diagrams for these two
alternative models.
Certainly, a theoretical model's fit to the data can be compared with an alternative model's fit using any of the fit indices above; however, when alternative models include nested relationships, they can be directly compared to the theoretical model using the χ² difference test. If the obtained χ² difference value is greater than the critical χ² value, a significant difference between the two models exists and the addition of parameters leads to a significant increase in model fit. If the obtained χ² difference value is less than the critical χ² value, there are no significant differences between the two models and the addition of parameters does not result in a significant increase in model fit. For additional reading concerning the χ² difference test, please see Kelloway (1998) or Mueller (1996).
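The χ² difference test itself takes only a few lines of code. The sketch below is illustrative only (scipy supplies the critical value); it reproduces the comparison between the example model and the direct alternative model reported in the following paragraphs.

from scipy.stats import chi2

def chi_square_difference(chi2_a, df_a, chi2_b, df_b, alpha=0.05):
    # Difference in chi-square and degrees of freedom between two nested models
    diff = abs(chi2_a - chi2_b)
    df_diff = abs(df_a - df_b)
    critical = chi2.ppf(1 - alpha, df_diff)
    return diff, df_diff, critical, diff > critical

# Example model: chi2(25) = 51.25; direct alternative model: chi2(9) = 29.01
print(chi_square_difference(51.25, 25, 29.01, 9))
# -> (22.24, 16, 26.296..., False): no significant difference between the two models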
SEM analysis was also conducted on the two
alternative models in order to determine
whether they yielded a better fit to the empirical data than the example model. The χ² test for the direct model was significant, χ²(9) = 29.01, p < .05, and other fit indices (RMSEA = .098; CFI = .97) indicated the direct model was only a satisfactory fit to the data. The path from supervisor multicultural competence to supervisee CSE was significant (β = .29, t = 4.16, p < .05). The χ² test for the indirect model was significant, χ²(26) = 53.96, p < .05, but the indirect model was a better fit to the data than the direct model. Other fit indices (RMSEA = .07; CFI = .98) indicated the indirect model was a good fit to the data. The indirect model was also more parsimonious (PNFI = .69) than the example model (PNFI = .67) and the direct model (PNFI = .57). The path from supervisor multicultural competence to the supervisory working alliance was significant, β = .79, t = 13.76, p < .05. The path from the supervisory working alliance to supervisee CSE was also significant, β = .32, t = 4.20, p < .05.

Figure 4. This path diagram depicts the hypothesized mediated relationship between supervisor multicultural competence and supervisee counseling self-efficacy through the supervisory working alliance as specified by the indirect, alternative model.
The degree of difference between the example model and the two alternative nested models was directly tested using the χ² difference test. The χ² difference between the example model and the direct model was 22.24 with 16 degrees of freedom. This obtained value was smaller than the critical value for χ² with 16 degrees of freedom, χ²(16) = 26.30, p = .05, meaning that there was no significant difference between the example model and the direct path model. The χ² difference between the example model and the indirect model, χ²(1) = 2.71, was also smaller than the critical value for χ² with 1 degree of freedom, χ²(1) = 3.84, p = .05, and indicated that there was also no significant difference between the example model and the indirect model. These results suggest that the addition of parameters in the example model did not result in a significant increase in model fit.
Overall, the results of this SEM analysis confirm the study's hypotheses: (a) supervisor multicultural competence directly impacts supervisee CSE and (b) the supervisory working alliance mediates the relationship between supervisor multicultural competence and supervisee CSE. While all three models (i.e., the example model and two alternative models) were a viable fit to the data, the indirect model was most parsimonious and yielded a slightly stronger fit to the data than the example model or the direct model. We can then conclude that the relationship between supervisor multicultural competence and supervisee CSE is fully mediated, meaning that the relationship between supervisor multicultural competence and supervisee CSE is no longer statistically significant when the mediator variable, the supervisory working alliance, is included in the model (Baron & Kenny, 1986). The supervisory working alliance accounts for 100% of the total effect.
Power and sample size. SEM techniques are
based on the assumption of large sample sizes
(Kelloway, 1998). In particular, the fitting
functions used to estimate model parameters
(e.g., ML, GLS) and tests of model fit (e.g.,
w2 test) require large sample sizes. The determination of power and sample size in SEM analysis is difficult given the complexity of
theoretical models. One of the earliest methods

for determining an appropriate sample size in


SEM was the Critical N (CN) statistic (Hoelter,
1983). CN is provided in the LISREL output or
can be calculated using the following formula:
CN = (χ²critical / Fmin) + 1.    (12)

A CN ≥ 200 is considered adequate (Anderson & Gerbing, 1984; Bentler & Chou, 1987; Marsh, Balla, & MacDonald, 1988). Many researchers continue to think of sample size as an absolute in SEM analysis and recommend that studies employing SEM techniques have a minimum of 200 participants, as any sample size of less than 200 may yield inaccurate parameter estimates (Marsh et al., 1988). In fact, a sample size of 200 has become the gold standard in SEM and is the median sample size used in published articles that employ SEM (Kline, 2010). When using ML methods to estimate model parameters, minimum sample size can be conceptualized in terms of the ratio of cases (N) to the number of parameters included in the theoretical model. Bentler and Chou (1987) recommended 5-10 participants per estimated parameter, whereas Mueller (1996) suggested a minimum sample size-to-parameter ratio of 10:1. Other researchers argue that a 10:1 ratio is less than adequate and propose that a ratio of 20:1 is ideal (Costello & Osborne, 2005; Kline, 2010).
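These sample size guidelines translate directly into quick planning calculations. The sketch below uses illustrative numbers only; it computes Hoelter's CN from Equation 12 and the minimum N implied by a chosen cases-to-parameters ratio.

from scipy.stats import chi2

def critical_n(df, f_min, alpha=0.05):
    # Equation 12: CN = (critical chi-square / minimum fit function value) + 1
    return chi2.ppf(1 - alpha, df) / f_min + 1

def minimum_sample_size(n_free_parameters, ratio=10):
    # Minimum N implied by a cases-to-parameters ratio (5:1 to 20:1 are cited above)
    return n_free_parameters * ratio

print(round(critical_n(df=25, f_min=0.23)))   # hypothetical minimized fit value -> about 165
print(minimum_sample_size(20, ratio=10))      # 20 free parameters at 10:1 -> 200 participants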
Power analysis can occur on two different
levels in SEM (Kline, 2010). Researchers can
estimate the statistical power needed to detect
the statistical significance of individual model
parameters. The best-known method for this
type of power analysis was developed by Saris
and Satorra (1993), but involves repeating the
analysis for every single model parameter for which a power estimate is desired. A second approach to power analysis occurs at the model level. MacCallum, Browne, and Sugawara (1996) developed a power analysis procedure based on RMSEA that determines the
probability of detecting a reasonably correct
model. For more information related to
power analysis in SEM please see Kline
(2010).

Model Modification
The final step of SEM involves model modification. In this step, researchers employ model
modification methods in an attempt to find a
model that better fits the data. First, researchers
need to perform a specification search that
involves eliminating nonsignificant parameters
from the theoretical model (i.e., theory trimming) and examining the model's standardized residual matrix (i.e., fitted residuals). The most commonly used procedure for eliminating parameters involves comparing the t statistic for each parameter to the tabled t value to determine statistical significance (Schumacker & Lomax, 2010). When examining the standardized residual matrix, researchers should look to see that all of the values are small in magnitude. Overall, large values in the matrix indicate a misspecification of the general model, while large values across a single variable point to the misspecification of that variable only (Bentler, 1989). While the preceding procedures can improve model fit, they remain highly controversial. Specification searches are exploratory in nature and are based on the sample data instead of previous theory and research; as a result, parameters eliminated from the theoretical model may reflect sample characteristics that do not generalize to the broader population (Kelloway, 1998). Additionally, model modification may lead to an inflation of Type I error rates and be misleading (Kelloway, 1998). For this reason, researchers should strive to balance the elimination of parameters from the model with improving the fit of the model.
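As a concrete illustration of theory trimming (again a sketch reusing the hypothetical semopy syntax and data file from the earlier specification example, not a procedure reported in the article), the nonsignificant direct path could be removed and the respecified model refit; as noted above, any such respecified model would still need to be cross-validated with a new sample.

import pandas as pd
import semopy

# Respecified (trimmed) model: the nonsignificant direct path from
# supervisor_mcc to supervisee_cse has been removed.
trimmed_desc = """
working_alliance =~ task + goal + bond
supervisee_cse =~ microskills + process + difficult + culture + values
working_alliance ~ supervisor_mcc
supervisee_cse ~ working_alliance
"""

data = pd.read_csv("supervision_data.csv")   # hypothetical data file
trimmed = semopy.Model(trimmed_desc)
trimmed.fit(data)
print(semopy.calc_stats(trimmed))            # compare fit indices with the original specification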

Conclusions
This article provided an introduction to
SEM techniques and described five applied
steps for conducting SEM analysis: model specification, identification, estimation, testing,
and modification. SEM has several advantages
over first-generation multivariate methods
such as multiple regression and can be used to verify complex relationships among observed
and latent variables in counseling research.
Counseling researchers can use this statistical

technique to empirically validate counseling


theory and assessments and, in turn, make meaningful scholarly contributions that advance
the practice of counseling. As a result, this article has numerous implications for counseling
researchers, counseling practitioners, and counselor educators.
The use of SEM in the social sciences,
and counseling research in particular, is
rapidly increasing (Martens, 2005; Quintana
& Maxwell, 1999; Schumacker & Lomax,
2010); however, many scholars assert that
social science researchers have a history of failing to engage in the best practices regarding
SEM analysis and that SEM procedures are
often misrepresented in research publications
(Martens, 2005; Quintana & Maxwell, 1999).
One explanation for this occurrence could be
the disconnect between the statistical journals
in which SEM methodology articles are typically published and the counseling journals that
most counseling researchers subscribe to and
read. This article seeks to bridge the gap
between SEM developments and counseling
research by offering an applied overview of
SEM procedures in a readily available and
widely read counseling journal. By increasing
the availability of information on SEM analysis
in the field of counseling, counseling researchers
are afforded the opportunity to gain a thorough
understanding of SEM techniques and best practices. It further stands to reason that counseling
researchers will be more likely to employ the
technique appropriately and produce more rigorous research in the area of theory and instrument
development. Tutorial articles, such as this one,
that provide step-by-step applicable approaches
to SEM can also boost counseling researchers'
confidence regarding their ability to successfully
utilize a complicated statistical technique and
increase the likelihood that they will employ
SEM analysis in future research endeavors.
Improvements in the application of a statistical technique may appear to influence the practice of counseling only indirectly. Counseling practitioners, however, are required to draw upon empirical research in order to determine the best course of client treatment (Ray et al., 2011).
As a result, improvements in the utility of SEM procedures directly impact the efficacy of counseling practices and client outcomes. With
the advent of managed care and evidence-based
practices, today's practicing counselors are often required to become research-practitioners. This tutorial article on SEM procedures
strives to facilitate the development of
research-practitioners by providing a basic yet thorough overview of SEM procedures, as well as the application of SEM procedures to counseling research and practice. Counseling practitioners who have knowledge of SEM are more likely to be able to (a) apply the findings of SEM research directly to their work with clients, (b) incorporate the use of SEM in their individual practice and research, and (c) produce SEM research collaboratively with counseling researchers (Stricker, 2002).
Practicing counselors may be interested in
research and aspire to be research-practitioners, but scholars (Bangert & Baumberger,
2005; Lundervold & Belwood, 2000) speculate that practitioners often lack knowledge
in conducting and/or understanding research.
Additionally, the lack of quality studies that
employ SEM techniques has been attributed
to a lack of advanced statistical training in
doctoral-level graduate education (Henson,
Hull, & Williams, 2010). As a result, counselor educators may wish to consider integrating more research courses into both
master's- and doctoral-level curricula. Counselor educators may wish to add an advanced statistical methods component to master's-level counseling research courses in which
counselor trainees obtain a general understanding of SEM and CFA procedures, as
well as learn how to interpret the results of
such studies and apply them to practice. Students at the master's level should also be
encouraged to, and may benefit from, taking
an introductory statistics course. Counselor
educators may consider adding advanced-level statistical courses that provide an in-depth overview of SEM into doctoral-level
curricula. Through such courses, counseling doctoral students may become more informed consumers of SEM research, better evaluate the quality of existing research, and
become interested in pursuing rigorous research agendas that advance the field of counseling.
In summary, counseling is a multifaceted phenomenon that involves highly complex relationships among a number of variables. As the
field develops sophisticated theories to explain
and measure these complex relationships, it is
no longer sufficient for counseling researchers
to rely on basic statistical methods that can
utilize only a limited number of variables. SEM
is a powerful statistical technique that offers
counseling researchers the ability to empirically confirm such sophisticated theoretical
counseling models and develop psychological
instruments. It is imperative, however, that
counseling researchers and practitioners have
knowledge of SEM procedures and understand
how to apply these procedures to counseling
research in order to advance clinical practice
and client outcomes.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of
interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author(s) received no financial support for
the research, authorship, and/or publication of
this article.
References
Anderson, J. C., & Gerbing, D. W. (1984). The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika, 49, 155-173.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411-423.
Arbuckle, J. L., & Wothke, W. (1999). AMOS 4.0 user's guide. Chicago, IL: SmallWaters.
Bangert, A. W., & Baumberger, J. P. (2005). Research and statistical techniques used in the Journal of Counseling and Development: 1990-2001. Journal of Counseling and Development, 83, 480-487.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182. doi:10.1037/0022-3514.51.6.1173
Bentler, P. M. (1989). EQS: A structural equations program manual. Los Angeles, CA: BMDP Statistical Software.
Bentler, P. M., & Chou, C. (1987). Practical issues in structural equation modeling. Sociological Methods and Research, 16, 78-117.
Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: John Wiley.
Bollen, K. A., & Long, J. S. (1993). Testing structural equation models. Beverly Hills, CA: Sage.
Bullock-Yowell, E., Peterson, G. W., Reardon, R. C., Leirer, S. J., & Reed, C. A. (2011). Relationships among career and life stress, negative career thoughts, and career decision state: A cognitive information processing perspective. Career Development Quarterly, 59, 302-314.
Chan, F., Lee, G. K., Lee, E. J., Kubota, C., & Allen, C. A. (2007). Structural equation modeling in rehabilitation counseling research. Rehabilitation Counseling Bulletin, 51, 53-66.
Chao, R., Chu-Lien, N., & Sanjay, R. (2011). The role of ethnic identity, gender roles, and multicultural training in college counselors' multicultural counseling competence: A mediation model. Journal of College Counseling, 14, 50-64.
Cochran, D. B., Wang, E. W., Stevenson, S. J., Johnson, L. E., & Crews, C. (2010). Adolescent occupational aspirations: Test of Gottfredson's theory of circumscription and compromise. Career Development Quarterly, 59, 412-427.
Cooley, W. W. (1978). Explanatory observational studies. Educational Researcher, 7, 9-15.
Costello, A. B., & Osborne, J. (2005). Best practices in factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10, 1-9.
Crockett, S. A. (2011). The role of supervisor-supervisee cultural differences, supervisor multicultural competence, and the supervisory working alliance in supervision outcomes: A moderated
mediation model (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database (UMI No. 3455280).
Henson, R., Hull, D., & Williams, C. (2010). Methodology in our education research culture: Toward a stronger collective quantitative proficiency. Educational Researcher, 39, 229-240. doi:10.3102/0013189X10365102
Hoelter, J. W. (1983). The analysis of covariance structures: Goodness-of-fit indices. Sociological Methods and Research, 11, 325-344.
Horvath, A. O., & Greenberg, L. S. (1986). The development of the Working Alliance Inventory. In L. S. Greenberg & W. M. Pinsof (Eds.), The psychotherapeutic process: A research handbook (pp. 529-556). New York, NY: Guilford.
James, L. R., Mulaik, S. A., & Brett, J. M. (1982). Causal analysis: Assumptions, models, and data. Beverly Hills, CA: Sage.
Joreskog, K. G., & Sorbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Chicago, IL: Scientific Software International.
Kaplan, D. M., & Gladding, S. T. (2011). A vision for the future of counseling: The 20/20 principles for unifying and strengthening the profession. Journal of Counseling and Development, 89, 367-372.
Kelloway, E. K. (1998). Using LISREL for structural equation modeling: A researcher's guide. Thousand Oaks, CA: Sage.
Kline, R. B. (2010). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford.
Ladany, N., Mori, Y., & Mehr, K. E. (2007). Trainee perceptions of effective and ineffective supervision interventions (Unpublished manuscript). Bethlehem, PA: Lehigh University.
Larson, L. M., Suzuki, L. A., Gillespie, K. N., Potenza, M. T., Bechtel, M. A., & Toulouse, A. L. (1992). Development and validation of the Counseling Self-Estimate Inventory. Journal of Counseling Psychology, 39, 105-130.
Lent, R. W., Lopez, F. G., Brown, S. D., & Gore, P. A. (1996). Latent structure of the sources of mathematics self-efficacy. Journal of Vocational Behavior, 49, 292-308.
Lundervold, D. A., & Belwood, M. F. (2000). The best kept secret in counseling: Single case (N = 1) experimental designs. Journal of Counseling and Development, 78, 92-102.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130-149.
Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 103, 391-410.
Martens, M. P. (2005). The use of structural equation modeling in counseling psychology research. The Counseling Psychologist, 33, 269-298. doi:10.1177/0011000004272260
Meyer, L. S., Gamst, G., & Guarino, A. J. (2006). Applied multivariate research: Design and interpretation. Thousand Oaks, CA: Sage.
Mueller, R. O. (1996). Basic principles of structural equation modeling: An introduction to LISREL and EQS. New York, NY: Springer.
Mulaik, S. A., James, L. R., Van Alstine, J., Bennett, N., Lindi, S., & Stilwell, C. D. (1989). Evaluation of goodness-of-fit indices for structural equation models. Psychological Bulletin, 105, 430-445. doi:10.1037/0033-2909.105.3.430
O'Brien, R. M. (1994). Identification of simple measurement models with multiple latent variables and correlated errors. In P. V. Marsden (Ed.), Sociological methodology (pp. 137-170). Oxford, England: Blackwell.
Quintana, S. M., & Maxwell, S. E. (1999). Implications of recent developments in structural equation modeling for counseling psychology. The Counseling Psychologist, 27, 485-527. doi:10.1177/0011000099274002
Ray, D. C., Hull, D. M., Thacker, A. J., Pace, L. S., Swan, K. L., Carlson, S. E., & Sullivan, J. M. (2011). Research in counseling: A 10-year review to inform practice. Journal of Counseling and Development, 89, 349-359.
Raykov, T., & Marcoulides, G. A. (2000). A first course in structural equation modeling. Mahwah, NJ: Lawrence Erlbaum.
Raykov, T., & Widaman, K. F. (1995). Issues in applied structural equation modeling research. Structural Equation Modeling, 2, 289-318. doi:10.1080/10705519509540017
Saris, W. E., & Satorra, A. (1993). Power evaluations in structural equation models. In K.
A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 181-204). Newbury Park, CA: Sage.
Schumacker, R. E., & Lomax, R. G. (2010). A beginner's guide to structural equation modeling (3rd ed.). Mahwah, NJ: Lawrence Erlbaum.
Skosireva, A. K. (2010). Acculturation, alienation, and HIV risk among the Russian-speaking drug users in Estonia (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database (UMI No. 3402554).
Stage, F. K., Carter, H. C., & Nora, A. (2004). Path analysis: An introduction and analysis of a decade of research. Journal of Educational Research, 98, 5-12.
Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25, 173-180.
Stricker, G. (2002). What is a scientist-practitioner anyway? Journal of Clinical Psychology, 58, 1277-1283. doi:10.1002/jclp.10111
Tomarken, A. J., & Waller, N. G. (2005). Structural equation modeling: Strengths, limitations, and
misconceptions. Annual Review of Clinical Psychology, 1, 31-65.


Tovar, E., & Simon, M. (2010). Factorial structure and invariance analysis of the Sense of Belonging Scales. Measurement and Evaluation in Counseling and Development, 43, 199-217.
Villodas, F., Villodas, M. T., & Roesch, S. (2011). Examining the factor structure of the Positive and Negative Affect Schedule (PANAS) in a multiethnic sample of adolescents. Measurement and Evaluation in Counseling and Development, 44, 193-203. doi:10.1177/0748175611414721

Bio
Stephanie A. Crockett, PhD, is an assistant professor in the Department of Counseling of the School of
Education and Human Services at Oakland University. Her primary research interests include the use
of SEM in counseling research, testing/assessment
and outcome research in career counseling, and multicultural competence in clinical supervision.
