
Military Psychology
2014, Vol. 26, No. 3, 221–251
© 2014 American Psychological Association
0899-5605/14/$12.00 http://dx.doi.org/10.1037/mil0000040

Military Enlistment Selection and Classification: Moving Forward

Michael G. Rumsey
Annandale, Virginia

Jane M. Arabian
Office of the Under Secretary of Defense for Personnel and Readiness, Washington, DC

This article places the articles included in this special issue within the larger context of the objectives of a selection and classification system. It examines the full range of individual differences and how, until relatively recently, a focus on training success has led to an emphasis on the cognitive subset of these differences within the military. It describes how, consistent with a greater recognition of the importance of job performance, the research described in this issue has opened the door to expanded coverage of both cognitive and noncognitive attributes. It summarizes the methodological advances that have contributed to the efficacy of new noncognitive measures. It explores how popular approaches to the measurement of classification efficiency have led to unwarranted pessimism regarding the classification potential of multiattribute measures and discusses research indicating how much greater classification efficiency is possible with existing measures. Finally, it examines potentially fruitful areas of future research to better meet military objectives. These include development of an approach to performance measurement and validity transportation that makes future military classification research feasible; exploration of currently untapped individual-difference domains, particularly those that might differentially predict across job groupings; exploration of the interaction between cognitive and noncognitive attributes; and exploration of alternative measurement techniques.

Keywords: selection, classification, testing, noncognitive, cognitive

Author Note: Michael G. Rumsey, Annandale, Virginia; Jane M. Arabian, Office of the Under Secretary of Defense for Personnel and Readiness, Washington, DC. The opinions expressed are those of the authors and not necessarily those of the U.S. Government, Department of Defense. Correspondence concerning this article should be addressed to Michael G. Rumsey, Annandale, VA 22003. E-mail: miker1998@aol.com

This special issue describes a number of important recent research developments regarding military personnel selection and classification. The purpose of this article is to place these developments in a broader perspective that will allow us to make some judgments regarding the contributions of these developments to the selection and classification system, and to suggest additional research to improve this system. Therefore, we begin with some discussion of the goals that the system is designed to achieve and the manner in which the current system operates.

Wise (1994) enumerated a number of possible goals for a selection and classification system. For the sake of simplicity, we will not consider all of these here, but will focus on a few. One is training success. Another is job performance. Job performance is a goal complicated by the multiple stages of an organizational career. Do we want those who will succeed in training, the first stage in a military career? Do we want those who will perform best in their first military tour? Or do we want to ensure we select those who have the capacity to grow into more senior positions?

To these goals, we add a third: continuance. The organization may want those individuals who will remain with it long enough to repay the investment in finding them and, at least in the military, training them. Again, success in achieving this goal, whether phrased in terms of attrition or retention, can be measured at multiple points in an individual's career. This overlaps one of the goals discussed by Wise (1994)—"qualified months of service" (p. 355)—but because that goal combined attrition
and job proficiency, which we interpret here as an aspect of job performance, we choose to focus on a simpler goal—continuance.

How does the organization best determine which individuals to select, and for which jobs, to meet its goals? Ultimately, this question is answered by administering a set of predictor measures and determining which have the most utility with respect to the criteria employed. However, the solution thus derived will be helpful only to the extent that the predictors and criteria are thoughtfully selected. That is to say, to what extent is the full universe of attributes, as well as the expected linkage between attributes and the organization's goals, considered in determining what predictor constructs to measure? How well do the criteria represent the organization's goals? Before the impact of the research included in this special issue can be evaluated, we need to explore the implications of these questions.

Universe of Attributes

We will use Gottfredson and Saklofske's (2009, p. 186) definition of abilities—"latent traits that we infer from regularities in behavior that we notice across time, place and circumstance"—as our definition of attributes, because this definition seems to apply equally to abilities and other types of attributes. There is a critical need for some working description of the full range of attributes in order to ensure that potentially important attributes will not be overlooked. The instances in which the full universe of human characteristics is ever considered prior to the development of a selection and classification system are likely few. In fact, this universe is ill-defined. A search for any comprehensive, authoritative taxonomy of human differences only reveals the fragmentation of the field of differential psychology, a fragmentation observed by Cronbach in 1957 and Snow in 1986 (Cronbach, 1957; Snow, 1986), and one that continues today. Snow (1986, p. 1029) described the elements of differential psychology study as follows:

Differential psychologists study intelligence; a variety of special abilities and talents; creativity; cognitive, motivational and learning styles and strategies; interests; values; attitudes; and all of human personality, both normal and abnormal. They also study physical, sensory, perceptual, and psychomotor skills, as well as biological variations.

We can view this list of construct categories as a starting point for defining the boundaries of the universe of those human attributes relevant for our purposes. Certain categories can be eliminated at the outset as falling outside the realm of psychological assessment, at least in the current military enlistment screening environment, including sensory skills and biological variations. This would seem to be true for physical assessments as well, but as we have seen in the work on the Tailored Adaptive Personality Assessment System (TAPAS), there is an aspect of physical fitness that can be measured indirectly on a psychological instrument. Some categories, as we shall see, can be combined, such as the perceptual and psychomotor categories with the cognitive category.

To meaningfully discuss these categories, we need to define them. Unfortunately, there is no single, universally accepted definition for any. However, we can, to some extent, extract from the literature working definitions to provide some sense of their generally understood scope and limitations.

The categories can be considered in terms of two groups—cognitive and noncognitive—although behaviors cannot easily be viewed as entirely a function of one or the other. To the noncognitive categories offered by Snow (1986), we will add an additional one: self-concept. Recognizing that the classification of attributes is an ongoing process with much left to be learned, and that no single classification will receive approval by a consensus of the scientific community, what follows is a tentative representation of the different major components of the cognitive and noncognitive groupings.

Cognitive Ability

Cognitive ability and intelligence are terms often used interchangeably. Intelligence encompasses so many diverse cognitive abilities that it is difficult to define. Sternberg and Detterman (1986) reported asking 24 prominent theorists for a definition and receiving 24 different answers. For the purpose of properly reflecting its scope, this description is particularly useful: "Intelligence is the acquired repertoire of all intellectual (cognitive) skills and knowledge
available to the person at a particular point in time" (Humphreys, 1994, p. 180).

Anastasi (1983, p. 181) noted the "rigidities and stereotypes" associated with the term "intelligence." Angoff (1988, p. 713) stated that

aptitude (i.e., academic aptitude) and intelligence are in fact thought by many to be largely innate (i.e., inherent in the individual at the time of birth) and to a considerable extent inherited, and therefore unchangeable both within a given lifetime and across generations.

It is difficult to even discuss intelligence because some will assume that the very use of the word implies the acceptance of such assumptions. For us, "intelligence" is a term of convenience, and we attach no special meaning to it beyond that contained in Humphreys's (1994) definition.

Modern comprehensive structural theories. Several attempts have been made to identify the structure of cognitive abilities through factor analyses of existing cognitive tests. A particularly comprehensive effort was conducted by Carroll (1993). Carroll's analyses generated a three-stratum model. The first stratum consists of many narrow factors. The second consists of eight broad factors: fluid intelligence, crystallized intelligence, general memory and learning, broad visual perception, broad auditory perception, broad retrieval ability, broad cognitive speediness, and processing speed. The third stratum consists simply of general cognitive ability.

Subsequently, researchers combined the Carroll (1993) model and an earlier model produced by Cattell and Horn (e.g., Horn & Cattell, 1966) with their own analyses to produce what became known as the Cattell-Horn-Carroll (CHC) model, "the cognitive taxonomy with the largest body of supporting evidence" (Newton & McGrew, 2010, p. 622). Of particular interest are fluid reasoning, the "use of deliberate and controlled mental operations, often in a flexible manner, to solve novel problems that cannot be performed automatically," and comprehension-knowledge, the "knowledge of the culture that is incorporated by individuals vis-à-vis a process of acculturation" (Newton & McGrew, 2010, p. 623). The CHC model consisted of more broad factors than Carroll's (1993) three-stratum model, including general (domain-specific knowledge), visual processing, auditory processing, short-term memory, long-term storage and retrieval, processing speed, reaction and decision speed, psychomotor speed, quantitative, reading and writing, psychomotor abilities, olfactory abilities, tactile abilities, and kinesthetic abilities.

Among the individual-difference categories identified by Snow (1986) and listed in a previous section of this article, the CHC model (Horn & Cattell, 1966) covers not only cognitive abilities and intelligence but also perceptual, psychomotor, and a number of special abilities. The status of another category, learning styles, can be compared with that of thinking styles, described by Sternberg and Grigorenko (1997, p. 700) as "not themselves abilities but rather preferred ways of using the abilities one has." One might think of learning styles as ways of not only using existing abilities but also strengthening them.

General versus specific abilities. As observed by Ackerman (1997, pp. 178–179), "The key aspect of the hierarchical theories of intelligence is that they represent a 'positive manifold'—that is, they reflect the pervasive positive correlations that exist among all ability measures." As noted earlier, Carroll's (1993) analyses generated a general factor, consistent with this positive manifold. Floyd, Bergeron, McCormack, Anderson, and Hargrove-Owens (2005, p. 330) observed that "the existence of a single higher order general factor (g) is the focus of much debate—even among the namesakes of the CHC theory." One group maintains that "this general factor represents well what is shared among the broad abilities" and "is the only cognitive ability tapped by all ability measures"; members of the other group "argue for a focus on the somewhat independent broad abilities" and "relegate g to a Protean and relatively meaningless conglomerate of more specific cognitive abilities" (Floyd et al., 2005, p. 330). Floyd, McGrew, Barry, Rafael, and Rogers (2009, p. 262) later observed,

A large body of research evidence and the model espoused by the CHC theory informs test users that most every cognitive ability test score—be it a composite or a subtest score—is reflective of some part general factor, some part broad ability or abilities, and some part narrow ability or abilities (in addition to random error).

This does indeed seem to be the position most supported by the evidence. However, it leaves
two important questions still to be resolved. The first is the relative contribution of the general factor to the total. Carroll (1992, p. 268) observed,

Many reports of factorial studies contain statements to the effect that the general factor contributes the largest amount of variance, say, 90%, to a test battery. This statistic is misleading. It can be shown that the proportion of variance that the general factor contributes to a particular measurement is often much less, perhaps no more than 50% on the average.

Many will argue with this estimate, but it serves as a useful starting point for discussion.

The second question concerns the meaning of the general factor. At one extreme are those who believe, as Angoff (1988) suggested, that it is something completely innate, inheritable, and unchangeable. At the other extreme are those who, as Reeve and Hakel (2002, p. 48) observed, believe "g is purely an arbitrary mathematical artifact." In fact, it seems best to acknowledge that at this point we do not know what g is, beyond the measured components that contribute to it. Hence, Gottfredson and Saklofske (2009, p. 188) observed, "Debates over how to define intelligence are now moot because the various empirical referents to which the term is commonly applied can be distinguished empirically and related within a common conceptual structure." Further, as Reeve and Hakel (2002, p. 53) commented, "We know that individual differences in g have extremely important consequences. However, our understanding of how g comes to yield such broad and strong impact is negligible."

Proposed multiple intelligences. Some have argued that Carroll's (1993) approach, leading to a single general factor of cognitive ability, is too narrow. Gardner (1983) advanced a theory of multiple intelligences, identifying eight separate abilities, including musical, interpersonal, and intrapersonal categories, as well as more conventional ones that could more easily fit into Carroll's schema, such as visual-spatial and verbal-linguistic.

Sternberg (2013) has differentiated analytic intelligence, the focus of academic tests, from practical intelligence, "the set of skills needed to solve everyday problems by utilizing knowledge gained from experience in order purposefully to adapt to, shape, and select environments" (p. 55). Nisbett et al. (2012) noted early objections to claims made for practical intelligence, but concluded that subsequent research showed that practical intelligence measures "could significantly improve the predictor power of intelligence tests" (p. 131).

Whether or not Nisbett et al.'s (2012) conclusion is correct, it is not entirely clear that the concept of practical intelligence, as another type of intelligence, is either necessary or useful. The Carroll model (Carroll, 1993) and CHC model (Newton & McGrew, 2010) would seem to be sufficiently broad to incorporate the skills involved in the kinds of tasks used by Sternberg (2013) to test his theory. Gottfredson (2003b) suggested that any unique variance contributed by these measures might be attributed to factors other than intelligence, noting that scoring was often based on the relationship of one's responses to those by a group of individuals identified as experts. For example, she noted that, on a test of herbal knowledge among Kenyan children, "one of the answers scored as correct on the inventory was to agree that the 'evil eye' is a likely cause of a baby's crying and stomachache" (p. 422).

Sternberg (2013) has also proposed the concept of creative intelligence. He linked this to fundamental executive processes, such as recognizing the existence of a problem, and suggested that creative intelligence is involved "when one applies the processes to relatively novel tasks and situations" (p. 48). Again, although his measures are novel, the Carroll model (Carroll, 1993) and CHC model (Newton & McGrew, 2010) appear to sufficiently capture this construct with fluid intelligence (Carroll model) or fluid reasoning (CHC model), and there seems to be no need to add a new type of intelligence.

Mayer, Caruso, Panter, and Salovey (2012) have proposed the concept of emotional intelligence (EI), which "concerns the ability to identify emotional information, to reason about emotions, and to use emotions to solve life problems" (p. 503). This represented a modification of an earlier definition that included monitoring one's own and others' feelings and emotions, as well as using emotional information to guide one's actions and feelings, and that Locke (2005) had characterized as "some combination of assorted habits, skills and/or stories rather than an issue of intelligence" (p. 426). The new version seemed to fit more with what Locke had characterized as "being intelligent about emotions" (p. 427). However, anticipating this change, Locke had indicated that EI would not then be a new intelligence, but "intelligence applied to a particular life domain" (p. 427).

MacCann, Joseph, Newman, and Roberts (2014, p. 358) then defined EI "as the ability to process and reason about emotion and is therefore measured by tasks that require that ability," and included it in a factor analysis with measures of other factors in the CHC taxonomy. Through these analyses, they identified EI as a new CHC group factor. Unfortunately, although they included 15 cognitive tests representing five group factors, they could not incorporate all the narrow abilities identified through many decades of research and hundreds of studies as a basis for CHC. Thus, their research failed to demonstrate that a new EI group factor was needed when the CHC structure already incorporated a narrow ability, Knowledge of Behavioral Content, defined as "knowledge or sensitivity to nonverbal human communication/interaction systems (beyond understanding sounds and words, e.g., facial expressions and gestures) that communicate feelings, emotions, and intentions, most likely in a culturally patterned style" (Newton & McGrew, 2010, p. 624).

Thus, there is, as yet, no compelling alternative to the treatment of attributes characterized as EI that is provided by the CHC theory of intelligence. CHC theory not only incorporates these attributes well but also provides a taxonomy sufficiently comprehensive to incorporate all the cognitive attributes thus far considered. CHC theory itself is a work in progress, as any broad taxonomy of attributes should be considered at this point.

Noncognitive Attributes

The individual-difference categories listed from Snow (1986) earlier include four that we would consider to be primarily of a noncognitive nature—personality, motivation, values, and attitudes. We now briefly examine the meaning of each of these constructs, as well as one that Snow did not list: self-concept.

Personality. Available definitions of personality give a sense of its nature but provide little guidance on how it might be distinguished from other attribute categories. One such definition, from Funder (2001, p. 198), notes that the "theoretical mission" of the study of personality "is to account for individuals' characteristic patterns of thought, emotion, and behavior together with the psychological mechanisms—hidden or not—behind those patterns." Another, from the American Psychiatric Association (2013), defines personality traits as "enduring patterns of perceiving, relating to, and thinking about the environment and oneself" (p. 826).

The most widely accepted model of personality, the Big Five (Extraversion, Agreeableness, Conscientiousness, Emotional Stability, Openness to Experience), is presented in the article by Stark et al. (2014, pp. 153–164) in this issue. Funder (2001) noted that the Big Five are not completely independent; they have all been found to be positively intercorrelated. Regarding whether they "subsume all there is to say about personality," Funder stated, "The answer is almost certainly no: Whereas almost any personality construct can be mapped onto the big five, you cannot derive every personality construct from the big five" (p. 200).

Motivation. Theorists have identified two aspects of internal motivation: traits and mechanisms. Atkinson, Bastian, Earl, and Litwin (1960, p. 27) described motive as a "relatively stable disposition of the personality," a concept distinguished from motivation, a term defined by Atkinson and Reitman (1956, p. 361) as "the aroused state of the person that exists when a motive has been engaged by the appropriate expectancy." To Barrick, Mount, and Judge (2001, pp. 24–25), the "proximal means by which personality affects performance has long been thought to be primarily through a person's motivation."

For the purpose of identifying attributes that might be used in a selection and classification instrument, we are more interested in the stable aspect of motivation than its transitory role in facilitating action. Atkinson and Reitman (1956, p. 361) defined motive as a "latent disposition to strive for a particular goal-state or aim, for example, achievement, affiliation, power." Barrick et al. (2001, p. 25) identified three "fundamental motivational constructs"—striving for status, striving for communion, and task orientation. Ackerman (1997) suggested that motivation was "more complicated than either a unitary construct . . . or even dual constructs" (p. 197), incorporating aspects
of personality, interest, and self-concept, but capturing all aspects of such an "amalgamation" would be an extraordinarily difficult requirement for a test developer, although it might emerge in a complex simulation.

Values. According to Rokeach (1971, p. 453), "a value refers to a desirable end state of existence (terminal value) or a desirable mode of behavior (instrumental value)." Rokeach and Ball-Rokeach (1989) noted that this approach combines influences by Williams (1968, 1979), who viewed values in terms of standards of conduct, and Kluckhohn (1951), who viewed values in terms of ends and means. It is difficult to accept this as a separate category divorced from personality, although values may be more context-specific than personality traits.

Conceptually related to values is integrity, although, as Rieke and Guastello (1995, p. 458) noted, "the construct of honesty or integrity remains vague and ill-defined after more than 50 years of research." Ones, Viswesvaran, and Schmidt (1993) related integrity to "broadly defined conscientiousness" (p. 680), although they later raised the possibility that "integrity tests might measure a construct broader than any one of the Big Five" (Ones, Viswesvaran, & Schmidt, 1995, p. 456). The concept of integrity seems to sometimes be presented simultaneously in terms of both an attribute and performance, further clouding its meaning.

Attitudes. Eagly and Chaiken (1993, p. 1) defined attitude as "a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor." Albarracin, Johnson, Zanna, and Kumkale (2005, p. 5) maintained that "the term attitudes is reserved for evaluative tendencies, which can both be inferred from and have an influence on belief, affect, and overt behavior." According to these authors, attitudes are influenced by "external information, the memory of past judgments, prior knowledge, and stored new judgments" (p. 6).

Although Guilford (1959) included attitudes as one of seven trait modalities, Ackerman (1997) suggested that most modern approaches to personality only pay limited attention to attitudes, with some exceptions (e.g., "test anxiety"). This is true in a sense, but an examination of many personality inventories would identify numerous items that incorporate attitudes. For examining individual differences, it is those attitudes that relate to stable personality traits or interests that would be of most relevance.

Interests. The article on interests in this special issue (Ingerick & Rumsey, 2014, pp. 165–181) defines work interests as

"relatively stable individual differences, grounded in an individual's identity, that encompass one's preferences for performing selected work activities or working in certain environments (or contexts) that purposefully influence work-related choices and behavior (e.g., occupational or job choice) through motivational processes (i.e., goal-setting and goal-striving)" (p. 166).

The article also notes that "work interests have a strong, but not determinate, dispositional component" (p. 166). It also describes the most widely accepted taxonomy of work interests, identified by Holland (1973, 1997).

Self-concept. Shavelson, Hubner, and Stanton (1976, p. 411) observed that, "in very broad terms, self-concept is a person's perception of himself. These perceptions are formed through his experience with his environment . . . and are influenced especially by environmental reinforcements and significant others." They identified seven features of this construct (p. 411): "Self-concept may be described as organized, multifaceted, hierarchical, stable, developmental, evaluative, differentiable." A hypothesized powerful aspect of self-concept advanced by Bandura (1994) is self-efficacy, "people's beliefs in their capability to exercise control over their own functioning and over environmental demands" (p. 261).

Summary. Even this brief discussion highlights some of the difficulties in developing a taxonomy of noncognitive attributes. Should attitudes, values, motivation, and self-concept be considered independent categories or aspects of personality? It appears that personality and interests are the most inclusive noncognitive categories, and that there is some overlap between personality and the other categories. Even if values and self-concept are not considered to be independent of personality, the question remains regarding how much attention to give them in the development of selection and classification measures. Although motivation traits may be encompassed by the personality category, motivational processes can be studied separately. At this time, although we might not consider all aspects of attitudes to be encompassed by the personality category, those aspects that are encompassed do seem to be relevant for selection and classification purposes.

Interactive Processes

A focus on individual traits masks their interaction with one another and the environment. However, a number of theorists have explicitly acknowledged these interactions, leading to some illuminating findings. Snow (1992, pp. 6–7) observed that "the definition of some particular aptitude has to be situation dependent."

Regarding multiple trait influences on behavior, Ackerman (1997, p. 173) noted,

It also seems reasonable to assert that human behavior is rarely, if ever, univocally determined by a single trait. Thus, an individual's performance on an intelligence test is partly determined by latent abilities and partly determined by the respondent's motivation; scores on a personality test are similarly partly determined by traits and partly by the understanding of the purpose of testing (whether psychiatric diagnosis or job selection, for example).

Ackerman has investigated interactions between such constructs as personality, self-concept, interests, and intelligence. He has noted a number of associations, for example, between personality and interests (Ackerman, 1997). He has also linked these constructs to motivation: "Analyses of motivational traits suggest that the locus of motivation is in an amalgamation of extant personality variables and also included in interest variables and self-concepts/self-estimates of ability" (Ackerman, 1997, p. 197). Kanfer, Wolf, Kantrowitz, and Ackerman (2010, p. 42), through analyses of a mix of measures from diverse fields, identified seven trait complexes, such as avoidance orientation and self-management.

Expected Linkages Between Attributes and Goals

From this universe of possibilities, how do we determine what to measure in our individual-difference measures? The simplest answer is that we want to measure individual differences that relate to future important outcomes. As Hogan, DeSoto, and Solano (1977, p. 256) noted, an "assumption underlying much current criticism of personality assessment is that personality scales measure personality traits."

how people will behave in important nontest situations and to forecast how people will be described by others who know them well" (p. 256).

Management of attrition, maximizing training performance, and maximizing job performance have all been considered as legitimate goals for the military's selection and classification system. If one leaves service prior to completion of the first term of obligated service, that individual's value to the military is diminished. If an individual does not have the skills to successfully complete training, he or she will not be able to contribute to military effectiveness. Finally, if the individual performs effectively in training, but not on the job, that person provides no value to the military.

For each goal, we develop hypotheses regarding future behavior based on information about both attributes and the environment in which behavior will take place. If the goal is to manage attrition, we consider what we know about the relationship between various attributes and continuance and apply it to the military environment in order to generate these hypotheses. If the goal is to predict training success, we consider what we know about attributes contributing to educational success in general, what we know about attributes contributing to the acquisition of knowledge in particular topic areas, and what we know about attributes contributing to successful graduation from training courses in the military environment.

If the outcome we want to predict is job performance, then it would seem that the first step for identifying the appropriate individual-difference measures is to understand the job we are trying to predict. As Messick (1995, p. 745) observed,

A key issue of the content aspect of construct validity is the specification of the boundaries of the construct domain to be assessed—that is, determining the knowledge, skills, attitudes, motives, and other attributes to be revealed by the assessment tasks. The boundaries and structure of the construct domain can be addressed by means of job analysis, task analysis, curriculum analysis, and especially, domain theory, in other words, scientific inquiry into the nature of the domain processes and the ways in which they combine to produce effects or outcomes.

Similarly, Carroll (1992, p. 269) stated, "In my view, use of cognitive ability tests must be
However, they are “intended rather to predict informed by knowledge of what kinds of abili-
ties are involved and what kinds of tasks they pertain to.”

The method advocated by Messick (1995, p. 745), of identifying “the knowledge, skills, attitudes, motives and other attributes to be revealed by the assessment tasks,” is a familiar one in the military domain. Knowledge, attitudes, motives are categories we have already considered, along with a number of other attributes. What we have not considered, at least by name, are skills, defined by Rosenbaum, Carlson, and Gilmore (2001, p. 454) as follows: “When we speak of a ‘skill’ we mean an ability that allows a goal to be achieved within some domain with increasing likelihood as a result of practice.”

In this definition, skill is differentiated from other abilities by the element of practice. However, none of the abilities we have considered are necessarily characterized as attributes that cannot be improved by practice. Let us now return to Humphreys’s (1994, p. 180) definition of intelligence: “Intelligence is the acquired repertoire of all intellectual (cognitive) skills and knowledge available to the person at a particular point in time.” We can speak of verbal skills or verbal ability, but identifying a verbal ability that cannot be improved by practice may be difficult. However, the term “skill” may be useful to refer to complex abilities whose applications are more limited and specialized, such as playing chess, firing a particular type of weapon, or shooting a basketball.

For some, the step of relating attributes to job content is, at least under some conditions, unnecessary. Murphy (2009, p. 454) argued that

    when the set of tests that is considered as possible predictors of performance are positively correlated with one another and with the criterion (i.e., they show positive manifold), content-oriented assessments of validity have very little to do with the question of whether or not test scores predict job performance.

Although Murphy (2009) offered forceful arguments for his position, it is dangerous to rely on it extensively as a justification for forsaking job analysis or abandoning the development of multiattribute tests. As we shall see, the implications Murphy associated with positive manifold are far from firmly established.

Translating Goals to Criteria

Attrition

The measures used to validate selection and classification tools are an imperfect representation of the organization’s goals. The better the match between criteria and goals, the more likely it is that the tools developed or adapted will fulfill the organization’s goals. Attrition is an example of a particularly strong match between the criterion and the goal. A case of attrition before completion of one’s obligated tour is a direct, quantifiable sample of behavior that represents a loss of the military’s investment in an individual. Attrition can be reduced to a binary event—attrit or not attrit—but more nuanced measures have been found useful in terms of identifying critical predictors. Primary predictors of attrition have been found to vary with respect to time served prior to attrition and also in terms of reasons for attrition, to the extent these reasons can be accurately determined.

Training Performance

Training success can be operationalized in many ways, but is generally viewed as content mastery, whether in terms of passing or failing a course, a final school grade, or some other indicator. A military member does not leave training until the course work is mastered. However, the reason for failure to complete training is a bit more complicated, as it could reflect failure to master content or failure to adapt to the military. Academic measures may be supplemented with rating measures to get a broader view of the student’s behavior, but such measures are not necessarily an indication of training success.

Job Performance

There are two issues that need special attention with respect to job performance measurement. The first concerns content. Job performance measures must be comprehensive and relevant if they are to be useful for validation of individual-difference predictors. The second concerns feasibility. How can the organization meet these content requirements in a cost-effective manner? These are the issues we now address.
Content. Measuring job performance is particularly challenging given the high level of complexity involved. Some have attacked this complexity by reducing the job to a large number of elements, or tasks, and developing measures that incorporate samples from this universe of tasks. However, job performance consists not only of proficiency to perform discrete tasks but also of such things as one’s ability to prioritize and integrate tasks over time, to understand when a task needs to be performed and when it does not, to effectively perform unanticipated tasks, to respond appropriately to new stimuli, to help others when needed, to conduct activities that cannot be reduced to task steps, and to be motivated to perform effectively. An alternative approach focuses on the critical behaviors that distinguish poor performers from effective performers, and generates ratings based on observations of such behaviors over time. This approach suffers from limitations associated with raters’ opportunity to observe, and subjectivity in terms of interpretations of what one has observed, but it does provide a perspective that the task-based approach does not.

Combining both approaches has led to new insights regarding the nature of performance. In the 1980s, the Department of Defense began oversight of the Joint-Service Job Performance Measurement Project when Congress directed the Department to determine the linkage between enlistment standards and job performance. The Army took this opportunity to cover the performance universe in a uniquely comprehensive manner. Its response to this mandate and to its own objective of examining new candidate measures for enlistment testing was called Project A, which used task-based measures such as hands-on tests and job knowledge tests, as well as less task-bound measures, such as administrative indices and ratings.

Project A revolutionized the study of job performance, not only in the military but also beyond. It spawned two significant models of performance. From its identification of five major performance dimensions—core technical proficiency, general soldiering proficiency, effort and leadership, personal discipline, and physical fitness and military bearing—later came the identification of a more generic model designed to cover the latent structure of all jobs in the Dictionary of Occupational Titles (J. P. Campbell, 1990; J. P. Campbell, McCloy, Oppler, & Sager, 1992). Core technical proficiency and general soldiering proficiency were linked with task-based hands-on and job knowledge tests in Project A, and have been identified as “can-do” performance dimensions; the other dimensions were linked with ratings and administrative measures, and have been referred to as “will-do” dimensions.

Borman and Motowidlo (1997), noting the prescribed nature of activities encompassed in the task-based dimensions and the more discretionary aspect of the other dimensions, combined the Project A findings with other research to develop a model that differed from the J. P. Campbell et al. (1992) model, primarily in how it treated the nontask portion of performance. The authors identified this as contextual performance, characterized by its “support of the broader organizational, social and psychological environment in which the technical core must function” (Motowidlo & Van Scotter, 1994, p. 476).

Both models suggest that different types of measures reflect different types of performance and relate to different kinds of predictors. Thus, any comprehensive selection and classification system needs to be based on a multidimensional view of performance that cannot currently be represented by any single criterion.

Although the overall system needs to be based on multiple criteria, the same is not necessarily true for any one component of that system. It depends on the purpose of the component. For example, a predictor of attrition does not necessarily also need to be a predictor of job performance. If it is, that is a welcome bonus.

Feasibility. Meeting the goal of comprehensive and relevant job performance measures is costly. The cost is magnified if one is to develop performance measures for each of the large number of entry level military jobs in each U.S. service—from approximately 70 to well over 100, depending on the service. For selection, one is concerned with those requirements that are common across jobs; for classification, differences between jobs become a concern. Development and administration of a fully comprehensive set of performance measures for each job in each military service is infeasible. When the Department of Defense was mandated by Congress to link enlistment standards to job
performance, and when this mandate required that the services provide greater resources for validation than were typically available, none of the services were able to develop performance measures for more than a limited sample of jobs.

The Army’s response to this “criterion problem” was to use synthetic validity to estimate the equations needed to predict performance in the jobs not sampled in Project A. Guion and Gottier (1965, p. 50) described the basic concept as follows:

    According to the synthetic validity idea, however, jobs can be analyzed into elements that are common to many dissimilar jobs. Tests can then be validated against performance in these job elements rather than against a single overall criterion. A valid test battery can then be synthesized for any job—even a unique one—by using those tests found valid for the specific elements required by the job.

Peterson, Wise, Arabian, and Hoffman (2001) compared results obtained using a synthetic approach incorporating estimates of validity from qualified subject matter experts with results obtained empirically in Project A. In “one of the largest synthetic validity studies ever conducted” (Scherbaum, 2005, p. 494), these researchers found that the synthetic validities their techniques generated closely approximated the empirical validities for individual jobs, but that the synthetic approach provided little discriminant validity across jobs. Although the results generally supported the potential utility of synthetic validation techniques, they suggested that further refinements were needed before these techniques could provide a firm foundation for personnel classification.

Department of Defense Testing: The Baseline

In order to properly evaluate the contributions of the articles in this special issue, it is necessary to understand the nature of the military enlistment selection and classification system when the research began. Determining exactly when to consider this starting point is a somewhat arbitrary exercise, both because each of the activities described represented, in some sense, a continuation of a historical process, and because, to the extent identifying a starting point is possible, it would be different for each line of research. We will use the year 2000 as the arbitrary start point, recognizing these uncertainties and limitations. In other words, the system in operation at that time is what we will call the baseline.

One can consider a selection and classification system to be the product of an organization’s goals and the prevailing views of those within the organization regarding how best to implement those goals, constrained by the resources and time required for implementation. Now let us consider what these factors produced with respect to the U.S. Department of Defense system.

We begin with a discussion of elements of the system that apply equally to the baseline and the present day. Given the number of individuals serving, the complexity of its job structure, its focus on promotion from within, and the duration of the required commitment both to and from those chosen, the U.S. military has a particularly high need for an effective entry selection and classification system. Accordingly, it has devoted considerable resources to the development of a testing system that could serve as a model for other large organizations to emulate for their nonsupervisory personnel.

The screening process actually begins with an interaction between a recruiter for a particular service—Army, Air Force, Navy, or Marine Corps—and the potential applicant, either at the recruiter’s or the potential applicant’s initiative. If the interaction proves promising to both parties, the recruiter conducts some initial screening, perhaps administering a brief aptitude test to get a preliminary indication of the prospect’s qualifications. The recruiter is also likely, at this point, to identify the prospect’s education status and general fitness. Clearly, recruiters are screening individuals both informally and intuitively before investing a significant amount of time and other resources in the formal enlistment process. If the prospect’s qualifications are promising, the recruiter will schedule the applicant for formal aptitude testing and further medical and background screening at a Military Entrance Processing Station (MEPS).

At the MEPS, the individual’s qualifications are weighed against the requirements of the recruiting service. If the individual is determined to be qualified for entry into that service, he or she meets with a guidance counselor to determine the job in which the applicant will be placed. The applicant will be offered jobs
among those for which he or she qualifies, based primarily on service needs and priorities, but also on the applicant’s stated interests and preferences. If the applicant finds none of the jobs offered acceptable, then others may be offered until a mutually acceptable fit is found (Schwartz & Mael, 1992). If a good fit is not identified, the applicant may walk away. Although this does not happen frequently, it is still undesirable from the perspective of the recruiter, the applicant, and the service.

The military has recognized that no single tool could identify the best person for a single job or the best match between multiple individuals and multiple jobs. Physical, medical, behavioral, and psychological screening conducted at the recruiting and testing stations all play a role in these decisions. In this article, we are concerned with the contribution made by psychological screening instruments.

Prior to the initiation of the research described in this special issue, educational background was the military’s primary tool for determining the likelihood that the individual would complete his or her contracted tour. We place this tool in the psychological screening category because it is believed to be a measure of individual adaptability and commitment to achieving a goal. The article in this special issue on attrition screening (White, Rumsey, Mullins, Nye, & LaPort, 2014, pp. 138–152) discusses the empirical basis for the use of educational background as a screening tool, as well as its limitations as a predictor, the difficulty of categorizing academic credentials, and the controversies elicited by prioritizing enlistment eligibility according to the type of secondary school credential earned by an applicant. We do not need to repeat that discussion here.

In addition, at the time that the research described in this issue was begun, the primary psychological tool for prediction of training and job success was the Armed Services Vocational Aptitude Battery (ASVAB). This test battery represents the culmination of decades of research. It was introduced in the 1970s and represents the combination of what were, until then, separate service screening batteries (Maier, 1993). It remains in use today with but few structural changes.

The ASVAB was essentially designed to determine whether the individual has the aptitudes to complete training, with less emphasis on whether the individual has the skills to perform effectively on the job. This emphasis on testing training-related aptitudes could be attributed to three interlinked factors: the military’s perception of its need to reduce cognitively linked training attrition, contemporary practice at the time of development of the ASVAB favoring academically linked testing, and the state of the art favoring this form of testing relative to other alternatives.

Over time, the view of ASVAB as principally a predictor of training success has given way to a more balanced view that elevates its relative importance as a predictor of job success. This shift was both reflected and accelerated by the congressional mandate in the early 1980s to determine the relation of the ASVAB to job performance, a mandate that clearly indicated where Congress’ priority lay. More fundamentally, the value of a selection and classification system that does not ultimately improve military performance would eventually and inevitably be questioned.

The operational ASVAB consists of two basic components: a general cognitive aptitude component designed for selection, and information tests designed for classification. The general cognitive component, consisting of verbal and math subtests, is known as the Armed Forces Qualification Test (AFQT). If the applicant does not achieve a qualifying score on the AFQT, he or she is not accepted into the military. In 1976, the AFQT developers were given the charge “to develop a global measure of cognitive ability using vocabulary, arithmetic and spatial items” (Ramsberger, 2012, p. 132). This was not based on a tailored job analysis, but rather likely based on the well-founded hypothesis that a global measure of cognitive ability would be an effective predictor of the desired outcome—success in training. The utility of the ASVAB for selection has been well established by studies demonstrating its validity for predicting training attrition (Welsh, Kucinkas, & Curran, 1990), and for predicting the can-do, or technical, aspect of job performance (J. P. Campbell & Knapp, 2001).

The AFQT subtests are also used, in combination with the information tests (general science, electronics, auto, shop, and mechanical comprehension), in developing service-specific composites that qualify applicants for jobs linked to those composites. When the
applicant meets the guidance counselor at the end of the screening process, he or she is only offered those jobs for which he or she has received a qualifying score on the corresponding composite.

These information tests were developed for classification purposes. To consider the extent to which they were informed by job content considerations, we must consider a history that began well before the first administration of the ASVAB in the 1970s. The ASVAB, a joint-service instrument, was built from service-specific classification instruments that preceded it. The Army, for example, built the Army Classification Battery, with such classification-oriented tests as Mechanical Aptitude, Shop Mechanics, Automotive Information, and Electrical Information. These tests were built based upon an understanding of the kinds of information required for success in certain jobs, but not upon a formal job analysis as we would understand it today.

Maier (1993, p. 5) described the process of identifying content for such tests as a matter of formulating “hypotheses about test content that may be related to performance in an occupational area. (Researchers typically observe and talk to workers in the area and visit training programs).”

Maier (1993) noted that certain occupational areas received “special attention.” For example, an attempt to refine the combat area in the Army Classification Battery, one of the service-specific classification batteries that fed into the ASVAB, involved first identifying those infantrymen who performed well in combat in Korea, and then determining what factors differentiated the superior performers from the others (Willemin & Karcher, 1958).

There is a perceived danger that information tests could introduce what Messick (1995, p. 742) referred to as “construct irrelevant variance.” Gottfredson (2003a, p. 355) observed that “tests of aptitude and ability are designed as well as possible to exclude items that are sensitive to differences in exposure and experience, so they avoid items that tap knowledge for specific cultural or academic domains.”

Differences in exposure and experience are typically viewed as construct irrelevant, as they are perceived as differences in opportunity, and thus not related to the ability one is trying to measure. However, ASVAB information tests were designed with a different purpose in mind. As Ramsberger (2012, p. 140) noted with respect to a new Electronics Information test developed for the Army Classification Battery in the 1950s, “the goal was to be able to identify a level of expertise that was obtained due to interest in the subject area, as opposed to formal electronics training.”

The utility of the ASVAB for classification, however, has been disputed. Murphy and Davidshofer (1991, p. 274) observed that “the Armed Services Vocational Aptitude Battery (ASVAB) composites do not provide information about different sets of aptitudes and that, in fact, any one of the composites could be substituted for the other with little loss of information.” Murphy (2009, p. 457) later elaborated,

    ASVAB subtests and composites are generally good predictors of performance, but the validity of these tests is not meaningfully different when there is a close match between the content of the test and the content of the job than when there is no match at all.

He cited a number of sources as support for this observation (Brown, Le, & Schmidt, 2006; Earles & Ree, 1992; Jones & Ree, 1998; Peterson et al., 2001). Similarly, Ree and Earles (1991, 1993) and Ree, Earles, and Teachout (1994) questioned the classification value of ASVAB. Ree and Earles (1991, p. 330), for example, found that although the incremental validity for specific ability, beyond general ability, was statistically significant, “the practical contribution of specific ability measures was trivial, adding an average of .01186 to predictive efficiency.”

The lack of a “practically” significant contribution of specific ability measures, in terms of incremental validity, to general ability measures is a common thread among those studies cited as evidence for ASVAB’s alleged limited or nonexistent classification value. But Zeidner, Johnson, and Scholarios (1997, p. 170) challenged the use of incremental validity as an indicator of classification efficiency:

    This approach . . . assumes that decisions to accept or reject are made separately for each job and draw from independent applicant groups. This is inconsistent with enlistment selection and classification procedures used by the military, where assignment for multiple jobs draws from a single group of applicants.

Zeidner and Johnson (1994), drawing upon work by Brogden (1951, 1959), proposed that
the appropriate metric under those circumstances was mean predicted performance (MPP). Brogden’s “allocation equation expresses MPP as a function of predictive validity, intercorrelations among the least-square estimates (LSEs) of job performance, and the number of job families” (Zeidner & Johnson, 1994, p. 379). MPP could only be estimated after individuals were assigned to jobs. To surmount this difficulty, Zeidner and Johnson’s approach, known as differential assignment theory, involved conducting simulations to derive the MPP value.

Personnel classification in the military services involves linking multiple jobs to multiple sets of ASVAB tests, known as aptitude area composites. One must achieve a minimum passing score on the composite for a particular job to qualify for training in that job. Zeidner, Johnson, Vladimirsky, and Weldon (2000) observed that the MPP for the Army ASVAB composites, as then constituted, was minimal. However, through simulation they determined that, with a better match of jobs to tests and a more precise weighting of tests in each composite, the contribution of classification to MPP was greater than the contribution of selection. Later, Zeidner, Scholarios, and Johnson (2003) cited numerous studies in support of this finding (Johnson & Zeidner, 1995; Johnson, Zeidner, & Leaman, 1992; Scholarios, Johnson, & Zeidner, 1994; Statman, 1993; Zeidner et al., 1997).

Zeidner and Johnson (1994) did not, within limits, dispute that the ASVAB exhibits positive manifold; rather, they found that substantial classification efficiency could be obtained from the ASVAB despite this positive manifold. They observed that

    there is a nontrivial degree of multidimensionality in the joint predictor-criterion space—despite the inevitable presence of a strong general cognitive ability factor, g, and the usual finding that the first principal component explains 70% to 85% of the total variance in the joint predictor-criterion space. (Zeidner & Johnson, 1994, p. 383)

Subsequently, the Army composites were reconfigured (Greenston, 2002, 2012) based on the research by Zeidner, Johnson, and their associates (e.g., Johnson et al., 1992; Zeidner & Johnson, 1994; Zeidner et al., 2000), although with fewer job families—linkages of job groupings to ASVAB test combinations—than these researchers recommended.

Bobko (1994, p. 448), commenting on one of Zeidner and Johnson’s (1994) publications, observed that “the authors provided convincing evidence that classification-efficient algorithms can lead to increased MPP.” There appears to be substantial classification potential in the ASVAB, and considerable work is still needed to better achieve this potential.

Moving Beyond the Baseline

Now that we have described the context, we examine how the activities described in this special issue have contributed to the development and implementation of tools for the military enlistment selection and classification system in the United States.

Cognitive Testing

Of the many perspectives one might take with respect to changing the cognitive tool used to screen applicants for the military, the ASVAB, three in particular are worth noting. One perspective views the ASVAB as a measure of general cognitive ability, and would focus on limiting it only to tests that best measure such ability. This view, which essentially takes no account of job content, is tantamount to saying that ASVAB has no classification utility, and is pessimistic about whether any cognitive test battery could be built to provide such utility. However, as previously noted, the research purported to support this view fails to fully account for the classification efficiency inherent in the current ASVAB and for the research conducted by Zeidner, Johnson, and their colleagues (Johnson & Zeidner, 1995; Johnson et al., 1992; Scholarios et al., 1994; Statman, 1993; Zeidner & Johnson, 1994; Zeidner et al., 1997; Zeidner et al., 2003; Zeidner et al., 2000).

The second perspective acknowledges that a battery of tests such as the ASVAB can assess multiple cognitive abilities, and suggests that its function should be to represent these abilities, or some subset of these abilities, as faithfully as possible. Roberts et al. (2000), for example, using evidence from factor analyses of ASVAB and other cognitive tests, argued that ASVAB is primarily a measure of “acculturated learning,” or crystallized intelligence, rather than general cognitive ability, and that future changes should
focus on more complete representation of other cognitive factors.

Including measures of more cognitive factors may well lead to a better overall assessment of intelligence. But will it lead to better prediction of performance, either in training or on the job, than an approach focused more directly on job content? Will it lead to a better basis for classification? Possibly so, but one cannot discern the basis for this expectation in the Roberts et al. (2000) article. This, like the first perspective, is focused more on construct measurement than on representing job content, but unlike the first perspective, attempts to more fully represent the universe of cognitive constructs, not just g.

The third perspective is most specifically focused on job content. It involves both job analysis and an identification of attributes hypothesized to relate to the job components emerging from the job analysis. Given this perspective, the ASVAB should be changed to reflect changes in job requirements. The advantage of a job representation orientation over the generic approach offered by the first perspective, and the more complex approach offered by the second, is that this is the only approach that specifically ties test content to the desired outcome.

The tests included in the ASVAB that are most job-content oriented are the information tests (also referred to as the technical tests). They are based on the premise that job-related knowledge is an indicator of potential success in similar jobs. If the individual has been motivated to obtain knowledge in a particular area, this is an indication that this is an area of interest for that individual. In addition, if the individual has demonstrated the capacity to learn general principles within a particular subject area, this may provide a foundation that will facilitate further learning. It is important in building such tests to distinguish between knowledge that would be obtained specifically through job experience and more general knowledge that is less job specific, to keep the focus primarily on aptitude.

The developers of the tests discussed in this special issue clearly favored the second and third perspectives over the first. The information technology test, in particular, was developed with job content representation in mind. It was not necessary to conduct a job analysis to determine that information technology (IT) has become an important occupational area for the military. The establishment of a new Cyber Command, the growth of cyber/IT career fields, the recommendation by the ASVAB Review Panel that an information/communications technology literacy test be developed, the recognition by the National Academy of Engineering and The National Research Council of the importance of technological literacy, and a number of other indicators, all discussed in the cyber/IT knowledge article in this issue (Trippe, Moriarty, Russell, Carretta, & Beatty, 2014, pp. 182–198), attest to the growing importance of this field. However, once IT was identified as an important content area, a systematic job-based analysis, consisting of a literature review, interviews with subject matter experts, and further subject matter expert review, helped define the content to be included in the cyber test.

Both coverage of multiple constructs and job content considerations have guided the research on the two other cognitive tests examined in this special issue. Coding Speed provides coverage of the perceptual speed and accuracy construct, but it also has a history of being perceived as an important predictor of success in clerical occupations. Assembling Objects provides coverage of the spatial domain, particularly the spatial visualization subdomain, that was viewed as likely to be an effective predictor of success "in MOS [military occupational specialties] that involve mechanical operations (e.g., inspect and troubleshoot mechanical [or] electric systems, construction . . . and drawing or using maps" (Toquam et al., 1987, p. 3-5).

Even more importantly, the results reported by articles in this issue indicate that all three tests have potential to add value to the current system. Both the Cyber and Coding Speed tests have been found to predict training and job performance success. The criteria were of a proficiency, or "can-do," nature but a basis was provided for expecting that Coding Speed might also predict "will-do" aspects of performance criteria. The Held, Carretta, and Rumsey (2014, pp. 199–220) article also provided evidence of the potential for these tests to improve both classification efficiency and assignment flexibility. In addition to these benefits, subgroup analyses suggested that Coding Speed and Assembling Objects tests could be useful in reducing adverse impact, particularly for females.

The evidence for the utility of the Cyber test is less advanced. The article by Trippe, Moriarty, Russell, Carretta, and Beatty (2014, pp. 182–198) shows incremental validity above the AFQT as a predictor of training success in selected courses. More information about its utility as a predictor of job success and as a classification tool would be useful, but the initial results are promising.

Noncognitive Measurement Advances

For much of the past century, the enlisted selection and classification system of the U.S. military could be characterized as being highly cognitively oriented. This orientation could be attributed to a variety of factors: an emphasis on completion of training as a primary organizational goal, a suspicion that valid noncognitive measures not compromised by faking could not be developed, and a perception that the incorporation of noncognitive measures into the current system would be impractical in terms of the time and resource requirements.

Gradually, the barriers to noncognitive testing are being challenged with increasing success. Much of this special issue bears witness to this evolution. Although success in training will always be an important objective to the military, the credibility of the idea that this objective must always outweigh job success has steadily eroded. The suspicion accorded noncognitive measurement has similarly eroded. This is due, in no small part, to research that advanced such measurement, particularly with respect to personality. As more attention has been paid both to the differentiation of separate factors and facets of personality from one another, as well as to the differentiation of the different outcomes linked to these elements, earlier discouragement regarding the predictive value of personality gave way to a nascent optimism, which has grown. The research reported in this issue has shown positive results for newly developed personality measures with respect to multiple critical outcomes.

Personality. The first such achievement was the development of fake-resistant measures. The use of a forced choice methodology was a key component of the Assessment of Individual Motivation (AIM), but this by itself was not sufficient to achieve the desired results. The recognition that the remaining problem was an item-specific one and the removal of items that remained susceptible to faking provided the necessary improvement. The TAPAS provided further refinement to the forced-choice procedure used in the AIM by equating response options in terms of social desirability.

A second achievement was the application of an adaptive procedure to a personality test, seen in both Navy Computerized Adaptive Personality Scales (NCAPS) and the TAPAS. Although the ASVAB has been administered adaptively for many years, the administration of an adaptive personality test in a military context is, so far as can be determined, unprecedented. By tailoring the items to the individual, the adaptive procedure permits accurate assessment in less time than would be possible with a more conventional procedure. Time is a precious commodity in enlistment screening, and the time savings feature of this measure enhances its acceptability to the military. In addition to time savings, adaptive testing adds a measure of test security because each examinee essentially gets a unique set of items. Finally, the TAPAS approach of using single stems as items allows the cost-effective development of huge item pools, helping to ensure the test will not quickly become obsolete because of test compromise.

The final issue that has hindered progress in noncognitive testing, particularly personality testing, is that of resources. Resources are critical at two stages—development and implementation. Development costs include those expended by the researchers and those expended by the research participants, who, in the military, are typically drawn from the sponsoring service. Resources tend to be tightly constrained, but their availability is partly a function of priorities. As military leaders facing recruiting shortfalls became frustrated by the difficulty of enlisting applicants who either could not qualify on the cognitively oriented ASVAB or lacked certain educational credentials, the urgency of incorporating noncognitive elements mounted. Resources previously believed to be unattainable suddenly became available for research to investigate new noncognitive measures. The time required for such investigations was reduced, as one of these measures quickly moved into an operational testing phase for the Army. Thus far, the perception of need has facilitated the process of implementation, but any change in the perception of the cost–benefit equation will affect
whether operational administration of personality tests will continue.

Thus far, research on personality testing in the military has focused more on selection than classification applications. There has been considerable attention to identification of personality constructs, but on constructs broadly expected to be relevant to success in the military, rather than to success in some particular set of jobs. Constructs included in the AIM were aligned both with the Big Five and those included in a predecessor instrument, the Assessment of Background and Life Experiences, which included dimensions identified by subject matter experts as likely to predict job performance in the Army. Decisions concerning dimensions to be included in the NCAPS were guided by expectations concerning which ones were important to training and job success in the Navy. The TAPAS originated with a fairly broad coverage of the Big Five dimensions and associated facets, but refinements were guided by validity data obtained in the Army training context.

The validity data thus far obtained suggests that personality measures can support multiple military objectives. The AIM and TAPAS have been found to have utility for predicting attrition and training performance. The TAPAS has been found to be predictive of both can-do and will-do training performance. NCAPS has shown promise for predicting will-do (ratings-based) job performance. The potential utility of these measures is still being investigated, and these initial findings may just provide a sampling of what may be possible. Of particular interest is what value these measures or others like them may have for improved classification.

Interests. Although personality measurement has been the primary area of focus in military noncognitive testing since at least 2000, another area that is beginning to receive renewed attention is that of interests. Interest measures have been shown to have validity for predicting important outcomes but have not received the same push for implementation as personality measures. Two main impediments have likely accounted for this. The first has to do with the lack of a coherent implementation plan. Should interest measures be used for selection or classification? If for selection, what would they add to cognitive and personality measures? Recently, a finding by Van Iddekinge, Putka, and Campbell (2011) that they do provide incremental validity to these two sets of measures gives an important boost to their prospects.

Interest measures would appear to be most promising for classification, to the extent that jobs can be successfully clustered based on common interests, that interests relevant for one occupational group can be sufficiently differentiated from those relevant to other groups, that a sufficiently large portion of extant jobs can be encompassed by the groups identified, and that the classification decisions suggested by interest measures can be meaningfully coordinated with those suggested by cognitive and personality measures. The resolution of these issues will help determine the extent to which the recently developed Job Opportunities in the Navy (JOIN) and Army's Work Preferences Assessment can be implemented as classification measures.

An impediment regarding implementation of interest measures has been that of resources. Within the Army, the TAPAS has been viewed as the most promising recent development and has had priority in terms of available resources. Within the Department of Defense generally, interest measures have had to compete with multiple candidate measures for needed administration time in an operational environment. Although these measures have not been neglected, they have had to wait their turn.

The Way Ahead

Five questions for future research will now be addressed: (a) To what extent should we consider assessment of attributes not being currently measured?; (b) To what extent should new assessment methods be considered?; (c) To what extent can we advance beyond existing stovepipes in our development of new predictor measures?; (d) What criterion issues need to be addressed in order to advance selection and classification research?; and (e) What lessons can we derive regarding implementation from the research completed?

New Content

Cognitive domain. Let us first consider the cognitive domain. The selection portion of the ASVAB, the AFQT, is, as noted before, a good
general predictor of training and job performance. However, in their technical review of the ASVAB, Drasgow, Embretson, Kyllonen, and Schmitt (2006) suggested that the validity of the ASVAB for predicting job performance and its generalizability over diverse populations could be improved by adding a nonverbal or low-verbal measure of reasoning. As noted by Held et al. (2014, pp. 199–220) in this issue, Assembling Objects "can be considered somewhat of a spatial ability test." Thus, aside from its classification potential, Assembling Objects might be considered as a component of the AFQT. Based on their analyses, Anderson et al. (2011, p. i) concluded that "adding the AO [Assembling Objects] subtest to the AFQT composite would likely increase the prediction of performance" and that "the revised AFQT composite would be fair and unbiased to minority groups." Thus, the addition of a nonverbal measure such as Assembling Objects to the AFQT deserves further investigation.

The greater challenge for the ASVAB concerns its classification function. The research discussed earlier by Zeidner, Johnson, and their colleagues (Johnson & Zeidner, 1995; Johnson et al., 1992; Scholarios et al., 1994; Statman, 1993; Zeidner & Johnson, 1994; Zeidner et al., 1997; Zeidner et al., 2000; Zeidner et al., 2003) suggests that this is an area that needs further investigation. One area of investigation concerns the optimal number of job families to use in deriving classification composites. Zeidner, Johnson, and associates (e.g., Johnson et al., 1992; Zeidner & Johnson, 1994; Zeidner et al., 2000) have argued that, within limits, classification efficiency increases as the number of families increases. However, Diaz, Ingerick, and Lightfoot (2004) rejected the contention that this increase in classification efficiency was sufficient in the case of the Army to increase the number from nine to 17 or more. Unfortunately, the performance data available to examine this question is becoming increasingly dated, and until more recent data can be collected and analyzed, no clear resolution to this debate can be expected.

The second basis for further investigation concerns the process used to identify the ASVAB subtests. The process might be described as a series of successive approximations to a moving target. This process was mindful of content validity, but because these subtests were often generated in an ad hoc fashion as the need arose, they cannot be characterized as an outcome of systematic analysis. Prior to 1976, each service developed its own subtests through its own iterative process. The subtests from each service's enlistment battery were combined somewhat hastily into a single test battery to meet an externally driven timetable. These subtests were selected to be applicable across the services to the extent possible, but through this homogenization are no longer tailored specifically and fully to the needs of each service.

Meanwhile, the job environment and the skills and knowledge needed to succeed in that environment have been changing. As Roberts et al. (2000, p. 83) noted, "new capabilities appear with every innovation (e.g., computer proficiency), while competencies that were once very important (e.g., spelling ability) are less so now." Similarly, Diaz et al. (2004, p. 34) observed "over the past dozen years, the technological complexity of jobs, such as Combat (CO), has grown substantially with the advent of new technologies and weaponry." During this time, research on the ASVAB has not stood still. We have seen, in this issue, developments in measuring IT knowledge, spatial ability, and perceptual speed and accuracy consistent with recommendations provided by the ASVAB Review Panel (Drasgow et al., 2006). Items in existing ASVAB subtests are substantially revised every few years. However, these changes have not been driven by a comprehensive review of job-specific requirements as a basis for examining whether new predictor constructs might enhance the ASVAB's classification efficiency. Such a review could provide new insights, particularly if it considers, as Messick (1995, p. 745) recommended, "domain theory, in other words, scientific inquiry into the nature of the domain processes and the ways in which they combine to produce effects or outcomes."

Noncognitive domain.

Personality. Much progress has been made in mapping the personality domain and in identifying those components related to important military criteria. Both the Army's TAPAS and the Navy's NCAPS, and in a more generic way, the Army's AIM, appear to provide a reasonably comprehensive representation of the full range of personality constructs. Funder's (2001, p. 200) comment about the Big Five is also a reasonable statement about these taxonomies—
almost every personality construct can be mapped onto them, but not every personality construct can be derived from them.

There will be constructs that resemble one or more elements in either the TAPAS or NCAPS taxonomy that bear different names. There will also be constructs that are offered as personality dimensions but that are actually something more complex. As noted earlier, Funder (2001, p. 198) stated that the "theoretical mission" of the study of personality "is to account for individuals' characteristic patterns of thought, emotion, and behavior together with the psychological mechanisms—hidden or not—behind those patterns." For some terms, it is not clear whether they are describing mechanisms behind behavior or the behavior itself. For example, two dimensions not directly mentioned in either the Army's or Navy's taxonomy are "hardiness" and "grit." Hardiness supports "learning from stressful circumstances" (Maddi, Matthews, Kelly, Villarreal, & White, 2012, p. 21). Grit involves "working strenuously toward challenges . . . over years despite failure, adversity, and plateaus in progress" (Duckworth, Peterson, Matthews, & Kelly, 2007, pp. 1087–1088). Although these terms suggest that there are psychological mechanisms behind the behaviors, they are defined, at least in part, in terms of the behaviors themselves. It would be of interest to administer the measures of grit and hardiness together with either the TAPAS or the NCAPS to see if they provide unique variance beyond these measures.

Similarly, two dimensions often associated with military activities, courage and adaptability, are difficult to define, and those definitions that do exist tend to focus on performance, such as adaptable performance or courageous performance, rather than on a single personal characteristic postulated to underlie such performance. For example, Pulakos, Arad, Donovan, and Plamondon (2000) identified several types of adaptable performance, and Rate, Clark, Lindsay, and Sternberg (2007) identified several types of actions that were viewed as representing courage.

The extent to which personality measures might be useful for classification remains an open question. In this special issue, Stark et al. (2014, pp. 153–164) note that, with respect to the personality tests discussed in their article, "the low to moderate trait score intercorrelations that have been observed for these forced choice tests raise intriguing possibilities for classification uses." However, Johnson and Zeidner (2006) cast doubt on the utility of "factor-based" personality measures. They suggested that data from an analysis of multiple measures administered in Project A indicated that the personality construct measures were less powerful in generating classification efficiency than several of the cognitive and interest tests from that project—none of the personality construct measures being among the top 10 based on the classification index used. Rather than theoretically based personality measures, they recommended empirically derived measures.

Just as with cognitive measures, a more job-content approach might produce more classification-efficient personality measures than a construct-oriented approach. The classification value of the Project A interest measures can be attributed, at least in part, to their representation of different job types, such as Combat and Vehicle Operator. Personality items might be chosen on the same basis. The strategy of combining personality and interest items for classification purposes, a strategy employed successfully in the development of the Army's Classification Battery in the 1950s (Willemin & Karcher, 1958), might even be considered.

In 1965, Guion and Gottier found "no generalizable evidence that personality measures can be recommended as good or practical tools for employee selection" (Guion & Gottier, 1965, p. 159). Much has happened since then, but it is not unreasonable to think the history of the development of effective tools for measuring personality is still in its early stages. The measures now being tested by the military are prime representatives of the current state of the art, but there is good reason to expect the technology to continue to advance and meet the challenges discussed later in the Measurement Methods section.

Other noncognitive categories. Although personality and interests occupy a large portion of the noncognitive domain, the potential value of other noncognitive categories, such as self-concept and values, should not be overlooked. Bandura (1994, pp. 269–270) summarized the research on self-efficacy as follows: "People who have a low sense of efficacy in a given domain of functioning shy away from difficult
tasks, which they tend to perceive as personal threats; have low aspirations and weak commitment to the goals they seek," among other deficiencies.

Although, as noted earlier, it is difficult to view values as a category entirely distinct from personality, the use of a measure of values in a selection and classification system offers both intriguing and challenging possibilities. Paullin et al. (2011) included both traditional values and Army values among the skills, abilities, and other characteristics examined in their analysis of Army officer jobs. Sternberg (2013) linked ethics, a construct closely related to values, to wisdom, a critical element in his model of leadership. Also linked to values is the construct of integrity, which Locke and Allison (2013, p. 63) judged as "critical to long-term business success." Neither values nor any of the related constructs are easy to define or measure, but the prediction of counterproductive behaviors, often viewed as emanating from a deficiency of certain highly prized values, is an important organizational objective for the military.

Measurement Methods

Cognitive domain. Regardless of whether one accepts Sternberg's (2013) view of multiple intelligences, further investigation is warranted regarding Sternberg's concept of tacit knowledge, and his approach to measuring such knowledge. In one sense, tacit knowledge items bear a resemblance to job knowledge items—both call on experience that might help the individual respond adaptively in a related situation on the job. Situational items have been explored in Army research, but the idea that they might have value for classification has not been systematically examined. Like information tests, tacit knowledge tests can be designed to reflect content more relevant to some types of jobs than others. In addition, Sternberg's approach to the measurement of creativity is novel and promising. Matthew and Stemler (2013) have conducted research exploring the link between pattern recognition and mental flexibility using a word recognition task. This research may provide an approach useful in the prediction of adaptable performance.

Noncognitive domain.

Personality. The success of the new personality measures discussed in this special issue is an event of tremendous historical, practical, and psychometric significance. Attempts to implement personality measures in the late 20th century failed because of a concern about faking. The new measures use sophisticated techniques to control faking and have been determined to be so useful in an operational context that military sponsors continue to endorse new applications for them.

To note that there has been enormous progress in the measurement of personality is not, however, to suggest that no further progress is possible. The validities for such measures are encouraging, particularly in areas in which cognitive measures have limited value, but they fail to match the validities for ASVAB in predicting training performance or performance on specific tasks. This may be, in part, because the areas for which they are most useful, for predicting attrition or predicting performance that is at least partially reflective of motivation, are subject to so many influences that there is a ceiling to how much variance can be accounted for by a personality measure.

However, the extent of unexplained variance in the prediction of these outcomes suggests that further efforts to improve the measurement of personality are warranted. The fundamental problem confronted by personality measures is that they are trying to extract true information from an individual who may be motivated to provide information that presents him or her in the best light, whether that presentation is true or not. The techniques employed do not remove this motivation; rather, they attempt to use psychometric techniques to limit its impact. In essence, they are trying to fool the individual into greater revelation than he or she desires. Ultimately, a picture of the individual emerges that is likely more realistic than if these techniques were not employed, but is likely less realistic than might be obtained if these techniques were not necessary and one could count on the individual to provide veridical information.

Is there a way to ensure that the individual provides such information? One strategy is to try to change the focus to behavior—to observe what individuals do rather than what they say they will do. One problem with this strategy is that performance is one step removed from personality; one can only infer, rather than observe, personality. Moving beyond this objection, let us examine how such a strategy might be employed. Biodata might be viewed as an attempt to infer personality from past behavior, although one is still dependent on the individual's willingness and ability to recount this information accurately. Verifiable biodata, perhaps combined with a requirement that one elaborate on one's responses, implicitly invokes the threat that the individual's recollection can be checked, although with large-scale testing, such a threat might not be credible and thus might not be an effective deterrent to faking. Schmitt and Kunce (2002) and Schmitt et al. (2003) did find that a requirement to elaborate on one's response, such as listing offices one held as a student, did lower scores on biodata items, a likely indication that responders were affected by the implicit threat, but found no evidence that elaboration improved validity.

Similarly, situational judgment tests have been built with the objective of inferring personality from one's choice of a behavioral response to a hypothetical situation (Motowidlo, Hooper, & Jackson, 2006; Sackett & Lievens, 2008). Another indirect approach is the conditional reasoning test (James, McIntyre, Glisson, Bowler, & Mitchell, 2004; James et al., 2005) that infers personality from one's response to apparently logical problems. These approaches are designed to reduce faking by, in essence, fooling the individual into thinking that something other than his or her personality is being measured. One problem is that, because of the introduction of judgment or reasoning into the measures, to some extent, something other than personality may in fact be being measured. Another problem with the conditional reasoning test is the extraordinary effort required to produce each item (Schmitt, 2004).

Kyllonen, Walters, and Kaufman (2005) reviewed a number of approaches to personality measurement that they grouped into a category they labeled objective tests. These included conditional reasoning tests and a variety of reaction-time methods, which involved inferences about an individual's traits based on the latency of his or her responses to stimuli involving these

would that remove the motivation to fake? Research by White, Young, Hunter, and Rumsey (2008), showing diminished validity for a personality measure when it moved from a research to an operational context, suggests that motivation is an important factor in faking. However, faking is a complex phenomenon, and, at this time, there is no clear basis for assuming that it would necessarily disappear in a classification context. Presumably, in an operational context, some degree of evaluation apprehension would be present that might affect individual responses. In addition, the perceived relative value of alternative jobs might affect how individuals view item response options that might be perceived as positively associated with one set of jobs versus another. Finally, the statement that test results will be used for only a particular purpose might not convince applicants that they do not need to manage the impression their responses might convey.

Interests. The measurement of work interests has followed two basic paths. One path involves the identification of an overall taxonomy of interests. This is best represented by Holland's (1973, 1997) six-factor (Realistic, Investigative, Artistic, Social, Enterprising, Conventional; collectively known as RIASEC) structural model of interests. Like the Big Five model of personality, it is a dominant model designed to encompass the entire universe of human interests as they pertain to work. Just as there are lower level facets that contribute to the Big Five, so there are lower level categories of interests that feed into the RIASEC dimensions. Just as the Big Five is a useful working model of current knowledge about personality subject to revision over time, so can the RIASEC structure be considered to represent the same stage of knowledge with respect to interests.

The second path develops a structure based on interests linked to particular jobs or job clusters. This path is represented by the occupational scales in the Air Force's Vocational Interest Career Examination.

At this point, there does not seem to be a
traits. They concluded that objective tests of strong basis for choosing one approach over
personality had “not met with tremendous suc- the other. The RIASEC approach was useful
cess” (Kyllonen et al., 2005, p. 170). in ensuring each RIASEC dimension was rep-
Concern about faking on personality mea- resented, at least initially, in the development
sures has generally focused on their use for of the Army Vocational Interest Career Ex-
selection. If an individual was informed that a amination. This provided the opportunity to
measure would only be used for classification, maximize dimensional differentiation. Also,
this more generic approach might help to avoid developing items too transparently linked to specific military jobs, which might be particularly vulnerable to faking. However, the occupational approach might help ensure better coverage of all occupational groups and avoid the inclusion of items that have no military job relevance. The best strategy might be to consider both the RIASEC dimensions and occupational categories in item development, but then work toward the development of an inventory that has a coherent dimensional structure, which, by default, could be the RIASEC structure.

Faking has not received the same level of attention in the development of interest inventories as in the development of personality measures. If an interest inventory is being developed for personnel classification, its most likely purpose, there is a different type of applicant motivation for completing it than for completing a selection instrument. The question becomes not whether the individual is selected, but whether the individual is placed in a job that best matches his or her interests, an outcome that both the individual and the military would generally view as desirable. However, as noted earlier with respect to personality measures, it is difficult to know how the information that a test will be used only for classification purposes will affect an individual's responses. Fortunately, the TAPAS approach does provide a methodology for developing fake-resistant interest inventories, using multidimensional pairwise preference items balanced in terms of social desirability and extremity.

An alternative to an interest inventory approach is the use of an individual's expressed interest in a particular job relative to the other alternatives. There is considerable evidence of validity for such an approach (Barge & Hough, 1988), but there are also a number of practical problems. The number of job choices in any service is great, and the individual may not know enough about many of them to make an informed judgment. The use of pictures or images, such as on the JOIN inventory, offers a means of increasing occupation-related knowledge and the potential feasibility of an expressed interest approach. The Army currently provides similar information, using video displays, through the Electronic Information Display System (Schwartz & Mael, 1992).

Beyond Existing Stovepipes

New domains. Anthony (2006, p. 47) observed, "A problem with factor analysis is that observation and measurement (data analysis) is a fundamental component of its process. Thus, cognitive processes and mental attributes that are not easily measured or observed are inevitably excluded from the process." This is equally true of cognitive and noncognitive variables. We have inferred traits from observed behaviors, but have not necessarily identified all attributes that may influence attrition, training performance, or job performance. As we become more sophisticated about both our observations and our inferences, we may find new domains of attributes or new ways of organizing attributes already identified.

The complexity of reality. Long ago, Cronbach (1956, p. 173) observed, "Assessment encounters trouble because it involves hazardous inferences." He elaborated, "Very little inference is involved when a test is a sample of the criterion or when an empirical key is developed . . . . But assessors attempt a maximum inference from tests."

A test that requires the individual to perform the activities he or she would perform on the job with as much job context present as possible would approach the reality of a job sample. However, it would be giving more of an advantage to those familiar with the job requirements than would be warranted for accurate prediction of future performance. Further, its expense would be prohibitive given the large number of individuals tested for enlistment in the military on an annual basis. A lower fidelity simulation such as a situational judgment test, in which an individual is presented with a written situation and several written alternatives from which to choose, is more realistic and can be developed to be minimally dependent on knowledge of the relevant job. The value of this approach for providing unique predictive information remains an open question despite considerable research already conducted on situational judgment tests.

Although the use of job samples as predictor variables presents both logistical and theoretical problems, an alternative approach that attempts
to better mimic the complexity of the criterion through the use of complex sets of predictors appears less problematic. Research by Ackerman and his associates (Ackerman, 1997; Kanfer et al., 2010) on interactive processes between multiple noncognitive variables offers interesting possibilities for further explorations. The role of motivation, not just as an individual-difference variable, but as a process, also deserves further examination.

The interaction between cognitive and noncognitive measures is also likely to be fertile ground for future research. We return to Ackerman's (1997, p. 173) observation, "It also seems reasonable to assert that human behavior is rarely, if ever, univocally determined by a single trait." Similarly, Frederiksen (1986, p. 445) noted,

    Twenty years ago John French (1965) demonstrated that a particular test may measure different abilities when given to different subjects. For a subject who is good at imagining geometrical figures rotated in space, a spatial test measures spatial ability; but for a subject who approaches the same problems analytically, the test measures reasoning ability.

There are many reasons for trying to link items to specific factors, including interpretability, comparability, and replicability of results, but the organization's ultimate goals are best advanced by tests that relate to important organizational outcomes. If this requires developing test items and tests that are multifaceted, then this is an approach that should be considered.

Criterion Issues

Recent developments. The availability of appropriate criteria is essential both to the maintenance and the improvement of a selection and classification system. This issue is most pressing with respect to job performance criteria. The impact of job and environmental changes on an existing system and the potential benefits that could be achieved with newly designed individual difference measures cannot be determined without relevant and psychometrically sound performance measures. Unfortunately, the pace of advancement of the science of performance measurement has generally tended to trail the advancement of the science of assessing individual attributes. Prior to this century, the last major military advancements to performance measurement were achieved in the context of the Joint-Service Job Performance Measurement project. These included the Army's development of a model of job performance, the Air Force Walk-Through Performance Testing involving, in addition to a hands-on component, an interview component in which the individual would explain how a task should be performed (Hedge & Teachout, 1992), and the Navy's development of a procedure for task sampling prior to the construction of task-based measures (Laabs & Baker, 1989).

More recently, both the Navy and the Army have been conducting research to develop improved performance measures. The Navy has been working on developing performance-based multimedia exams incorporating "audio, full-motion video, color, and graphical images" (Baisden, Schultz, & Lewis-Brown, 2004, p. 2). The Army conducted a project, known as PerformM21, examining means of developing "effective and affordable" performance competency measures that could be institutionally administered (Knapp & Campbell, 2004, 2006, 2011; Moriarty & Knapp, 2007). The project explored enhanced job knowledge tests that included nontraditional items and graphics, situational judgment tests, and simulations. More expensive methods, such as hands-on tests, were not included in the research. Relative to traditional job knowledge tests, these approaches had higher face validity and were well received by soldiers in pilot tests. Although cost estimates suggested that a core competency test including items that would be applicable across jobs would be feasible, they also suggested that testing tailored to individual jobs would not be.

The criterion problem today. Despite these advances, the services still have not solved the "criterion problem" discussed earlier. Without institutionally available comprehensive and appropriate performance measures, and without a methodology for developing such measures in a cost-effective manner, full-scale classification research is not feasible. Notably, the classification research conducted by Johnson, Zeidner, and associates (Johnson & Zeidner, 1995; Johnson et al., 1992; Scholarios et al., 1994; Statman, 1993; Zeidner & Johnson, 1994; Zeidner et al., 1997; Zeidner et al., 2000; Zeidner et al., 2003) was made possible because it drew on performance data collected when the Army had an institutionalized performance testing program for almost all jobs, known as Skill
Qualification Tests (R. Campbell, 2012). The Army has since terminated this testing program. Both the Navy and the Air Force conduct promotion tests (Keenan & Campbell, 2006), although they do not have a history of using these tests for ASVAB validation.

In 2006, the Army convened a Classification Research Panel to address the challenge of "collecting criterion data for a sufficient number of jobs to meet the Army's classification research needs" (J. P. Campbell et al., 2007, p. v). The panel's recommendations centered on a strategy for using a few jobs to represent a larger number of jobs. More systematic job analysis, job clustering, validity transportation, and a comprehensive job components database were all recommended to support this strategy. A taxonomy-based job analysis would allow for efficient identification of within-job and across-job components. Job clustering provides the basis for sampling jobs. Validity transportation executes the extension of validity information from a few jobs to many. A comprehensive job components database reduces the amount of information collection required from each successive job analysis and validation to the next.

One example of validity transportation noted was the synthetic approach by Peterson et al. (2001) described earlier in this article, but other approaches were identified as well, such as one employed by McCloy (1994) using hierarchical linear modeling, as well as combinations of approaches. Rather than expressing a preference for any one of these approaches, the panel recommended research be conducted comparing their effectiveness.

Synthetic validity remains a promising approach, but its success depends on three "critical operations" identified by Peterson et al. (2001, p. 412). The first is that "the taxonomy of relevant job components must be reasonably exhaustive of the job population under consideration" (p. 412). The Department of Labor's O*NET occupational analysis system (Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999) was identified as a taxonomy that might be considered (Campbell et al., 2007). However, an Army investigation of the applicability of the O*NET's "generalized work activities" to Army jobs identified 15 of 98 major Army duties that were either not covered or had "low partial coverage" (Russell et al., 2008). Thus, the Army developed its own prototype job analysis procedure, which it administered in a limited field test. Early findings were encouraging, but further testing is needed before operational use can be recommended (Ingerick et al., 2010).

The second critical operation identified by Peterson et al. (2001, p. 412) was that

    it must be possible to establish equations for predicting performance on each component, such that the prediction equation for a given component is independent of the particular job and there are reliable differences between the prediction equations for different components.

The authors suggested that current methods would benefit from improvement in this respect: "Absolute and discriminant prediction will suffer somewhat because the synthetic methods tend to weight the array of predictors more similarly across jobs than do the empirical estimation procedures" (p. 451). This may be the most formidable technical challenge to overcome to ensure that synthetic validity appropriately reflects the classification efficiency inherent in a given set of predictors.

Peterson et al.'s (2001, p. 412) third critical operation was that "synthetic validation models assume that overall job performance can be expressed as the weighted or unweighted sum of performance on the critical individual components." This will, of course, depend on a number of factors, such as whether the components are derived from the same measurement method, how the weights are either derived or determined not to be needed, whether the components interact with one another, and the subjectivity involved in the measurement method. The accuracy of the sum also depends on whether all components have been accounted for and whether environmental factors interact differently with a component in one job and the same component in another job. Whether or not the assumptions behind this third critical operation are problematic is a question whose answer may emerge over time.

Another strategy that might be considered to meet the goal of making job performance measurement more feasible involves reducing the number of job components that need to be measured. This strategy seems to conflict with the arguments posed earlier in this article for adequate coverage of all aspects of a job. However, for classification purposes, it is only necessary to identify those aspects of a job that differentiate it from other jobs in terms of the relevant predictors. The key is knowing which aspects those are. That may require a different kind of job analysis than is currently employed, one focused on fundamental connections between attributes and critical behaviors or behavior clusters within a particular environmental context. That would initially involve a protracted program of basic research, but could substantially simplify the processes of job analysis and performance measurement. Like the approach advocated by the Classification Research Panel (J. P. Campbell et al., 2007), one desirable outcome of this approach would be a taxonomy of job elements, but this taxonomy would be limited to the subset of elements relevant for a particular purpose.

As noted earlier in this article, it is essential that both can-do and will-do aspects of the job are addressed in order to provide full coverage of job performance requirements. Task-based systems, such as the now-retired Skill Qualification Test, provide coverage of only the can-do component. Thus, if such measures are employed, they would need to be supplemented by measures more suited to capturing the will-do aspects of performance. This would allow assessment of the classification potential of noncognitive measures. For this purpose, the Classification Research Panel (J. P. Campbell et al., 2007) recommended consideration of behaviorally anchored rating scales.

Implementation

It was noted in this article that one can consider a selection and classification system as the product of an organization's goals and the prevailing views of those within the organization regarding how best to implement those goals, constrained by the resources and time required for implementation. For decades after the initial implementation of the ASVAB and implementation of high school diploma status as major tools in the selection and classification process, these factors combined to keep these primary tools relatively unchanged. Emphasis on completion of training as a goal supported the cognitive orientation of the ASVAB, and the perception that high school graduation status was the best available predictor of attrition supported its continued use. Also, based on the available evidence, the information subtests in the ASVAB were collectively viewed as the best tool for classification purposes.

Now, however, the ground on which the current system rests is beginning to shift. A growing emphasis on job performance has heightened interest in noncognitive measures, just as these measures have become more capable of providing valid prediction in an operational setting. Rapid swings in the recruiting environment between supply shortages and surpluses have highlighted the need for increased flexibility in selection decisions. Changes in doctrine, equipment, threat, and mission have led to changes in jobs and job structures that challenge the efficacy of the current classification system, which has also been both challenged and defended on psychometric grounds.

One result of these trends in goals, and perceptions of desired means to reach these goals, has been an expansion of exploration of new measures—personality measures, interest measures, and cognitive measures. However, limitations in resources, particularly with respect to available testing time, mean that tradeoffs must be made and priorities must be established. Only a limited number of new tests can be added before testing time limits are reached. The tests that can be added will be those that are viewed as best meeting the military's goals. The new tests will be evaluated against not only one another but also existing tests, which may be replaced if they do not advance these goals as well as the new tests.

Both the Tier Two Attrition Screen (TTAS) and the TAPAS addressed the Army's need to meet enlistment quotas in a constrained recruiting environment while maintaining the highest possible enlistment standards. TTAS was directed at identifying those who had not graduated with a high school diploma whose attrition risk resembled that of high school diploma graduates. By enlisting such candidates, the Army could better meet recruiting mission requirements. The Army need coincided with the development of a noncognitive measure that could help identify such candidates and would not be undermined by faking. The convergence of the Army need and the research solution was fortuitous, but was only possible because the Army researchers continued to refine personality testing technology prior to the Army's recognition of the need for an additional attrition
screen, and because the Army sponsors recognized that the researchers could meet their need.

Pleased with the success of the TTAS for identifying promising applicants without high school diplomas, the Army personnel managers desired a screen that could help them identify promising graduates who otherwise might have been screened out based on their AFQT scores. Again, this converged with ongoing research that resulted in the implementation of TAPAS, initially for screening high school graduates, then later for screening those who had not achieved high school diploma graduation status.

What lessons regarding recommended future implementation strategies can be drawn from this research? The most obvious one is hardly a new insight: Implementation is most successful when the tool to be implemented best serves the organization's goals. The most elegantly developed and psychometrically sound instrument possible will not be implemented if it does not predict an organizationally relevant criterion. A related lesson is that researchers need to anticipate future organizational requirements before it is too late to provide assistance. Had the developers of TTAS and TAPAS not been able to capitalize on decades of military research on improved personality measures, the researchers would have had nothing to offer when the sponsors articulated their need.

Conclusion

As Kuhn (1970) observed in describing scientific paradigms, research does not always move in a linear, continuous fashion. There may be periods of gradual accumulation of knowledge interspersed with periods of explosive growth, often accompanied by new ways of looking at familiar problems. For decades, military selection and classification research was locked in a paradigm that focused on cognitive tests that predicted training performance, either through measurement of general cognitive ability or through knowledge of subject matter related to certain groups of jobs, and on educational status.

Then, in the 1980s, the paradigm began to shift. First, job performance became a prominent criterion. Major developments in modeling performance followed. The predictor space expanded to include personality, and research on interests, spatial ability, and psychomotor ability was reinvigorated.

From the seeds planted in the late 20th century have grown the green shoots of the early 21st century described in this special issue. We can aptly describe this ongoing process as a scientific revolution, in which previously intractable barriers to personality testing were overcome, as evidenced by the initial implementation of such testing for both improved performance and better attrition management. Major developments are underway to capitalize on the promise of interest assessment. The research on Assembling Objects, Coding Speed, and Cyber Knowledge could lead to significant changes to ASVAB after over 35 years of relative stability. Meanwhile, research by the Department of Defense continues to seek ways to enhance the efficiency and effectiveness of the enlistment screening process.

Although the new noncognitive measures are already having an impact on selection, we are only just beginning to understand how to fully realize the potential of both cognitive and noncognitive measures for classification. Until recently, attempts to measure classification efficiency were hampered by inadequate metrics. Thanks to the work of Zeidner, Johnson, and their associates (Johnson & Zeidner, 1995; Johnson et al., 1992; Scholarios et al., 1994; Statman, 1993; Zeidner & Johnson, 1994; Zeidner et al., 1997; Zeidner et al., 2000; Zeidner et al., 2003), more sophisticated measurement is now possible, although such measurement will not be feasible in the future unless more cost-efficient performance measurement techniques and more robust and generalizable job grouping approaches are developed than have yet been conceptualized. These challenges are daunting, but the remarkable progress that has already been made in the ongoing revolution in selection and classification science yields hope that they can be met.

References

Ackerman, P. L. (1997). Personality, self-concept, interest, and intelligence: Which construct doesn't fit? Journal of Personality, 65, 171–205. doi:10.1111/j.1467-6494.1997.tb00952.x

Albarracin, D., Johnson, B. T., Zanna, M. P., & Kumkale, G. T. (2005). Introduction and measures. In D. Albarracin, B. T. Johnson, & M. P. Zanna (Eds.), The handbook of attitudes (pp. 3–19). Mahwah, NJ: Erlbaum.

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th
ed.). Washington, DC: American Psychiatric Association.

Anastasi, A. (1983). Evolving trait concepts. American Psychologist, 38, 175–184. doi:10.1037/0003-066X.38.2.175

Anderson, L., Hoffman, R. R., Tate, B., Jenkins, J., Parish, C., Stachowski, A., & Dressel, J. D. (2011). Assessment of Assembling Objects (AO) for improving predictive performance of the Armed Forces Qualification Test (Tech. Rep. No. 1282). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Angoff, W. H. (1988). The nature-nurture debate, aptitudes, and group differences. American Psychologist, 43, 713–720. doi:10.1037/0003-066X.43.9.713

Anthony, M. (2006). The mechanization of mind: A deconstruction of two contemporary intelligence theorists. Futures Research Quarterly, 22, 43–61.

Atkinson, J. W., Bastian, J. R., Earl, R. W., & Litwin, G. H. (1960). The achievement motive, goal setting, and probability preferences. The Journal of Abnormal and Social Psychology, 60, 27–36. doi:10.1037/h0047990

Atkinson, J. W., & Reitman, W. R. (1956). Performance as a function of motive strength and expectancy of goal-attainment. The Journal of Abnormal and Social Psychology, 53, 361–366. doi:10.1037/h0043477

Baisden, A. G., Schultz, N., & Lewis-Brown, S. (2004, October). A new era in U.S. Navy testing: Multimedia Navy enlisted exam. Paper presented at the meeting of the International Military Testing Association, Brussels, Belgium.

Bandura, A. (1994). Regulative function of perceived self-efficacy. In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Selection and classification research: New directions (pp. 443–456). Hillsdale, NJ: Erlbaum.

Barge, B. N., & Hough, L. M. (1988). Utility of interest assessment for predicting job performance. In L. M. Hough (Ed.), Literature review: Utility of temperament, biodata, and interest assessment for predicting job performance (Research Note No. 88–02, pp. 131–188). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9, 9–30. doi:10.1111/1468-2389.00160

Bobko, P. (1994). Issues in operational selection and classification systems: Comments and commonalities. In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Selection and classification research: New directions (pp. 443–456). Hillsdale, NJ: Erlbaum.

Borman, W. C., & Motowidlo, S. J. (1997). Task performance and contextual performance: The meaning for personnel selection research. Human Performance, 10, 99–109. doi:10.1207/s15327043hup1002_3

Brogden, H. E. (1951). Increased efficiency of selection resulting from replacement of a single predictor with several differential predictors. Educational and Psychological Measurement, 11, 173–195. doi:10.1177/001316445101100201

Brogden, H. E. (1959). Efficiency of classification as a function of number of jobs, percent rejected, and the validity and intercorrelation of job performance estimates. Educational and Psychological Measurement, 19, 181–190. doi:10.1177/001316445901900204

Brown, K. G., Le, H., & Schmidt, F. L. (2006). Specific aptitude theory revisited: Is there incremental validity for training performance? International Journal of Selection and Assessment, 14, 87–100. doi:10.1111/j.1468-2389.2006.00336.x

Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 1, 2nd ed., pp. 687–702). Palo Alto, CA: Consulting Psychologists Press.

Campbell, J. P., & Knapp, D. J. (Eds.). (2001). Exploring the limits of personnel selection and classification. Mahwah, NJ: Erlbaum.

Campbell, J. P., McCloy, R. A., McPhail, S. M., Pearlman, K., Peterson, N. G., Rounds, J., & Ingerick, M. (2007). U.S. Army classification research panel: Conclusions and recommendations on classification research strategies (Study Rep. No. 2007–05). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1992). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco, CA: Jossey-Bass.

Campbell, R. (2012). Army competency testing: Skill Qualification Tests. In P. F. Ramsberger, N. R. Wooten, & M. G. Rumsey (Eds.), A history of the research into methods for selecting and classifying U.S. Army personnel. Lewiston, NY: Mellen Press.

Carroll, J. B. (1992). Cognitive abilities: The state of the art. Psychological Science, 3, 266–270. doi:10.1111/j.1467-9280.1992.tb00669.x

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York, NY: Cambridge University Press. doi:10.1017/CBO9780511571312
Cronbach, L. J. (1956). Assessment of individual differences. Annual Review of Psychology, 7, 173–196. doi:10.1146/annurev.ps.07.020156.001133

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684. doi:10.1037/h0043943

Diaz, T., Ingerick, M., & Lightfoot, M. A. (2004). Evaluation of alternative aptitude area (AA) composites and job families for Army classification (Study Rep. No. 2005–01). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Drasgow, F., Embretson, S. E., Kyllonen, P. C., & Schmitt, N. (2006). Technical review of the Armed Services Vocational Aptitude Battery (Final Rep. No. 06–25). Alexandria, VA: Human Resources Research Organization.

Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92, 1087–1101. doi:10.1037/0022-3514.92.6.1087

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt, Brace, Jovanovich.

Earles, J. A., & Ree, M. J. (1992). The predictive validity of the ASVAB for training grades. Educational and Psychological Measurement, 52, 721–725. doi:10.1177/0013164492052003022

Floyd, R. G., Bergeron, R., McCormack, A. C., Anderson, J. L., & Hargrove-Owens, G. L. (2005). Are Cattell-Horn-Carroll broad ability composites exchangeable across batteries? School Psychology Review, 34, 329–357.

Floyd, R. G., McGrew, K. S., Barry, A., Rafael, F. A., & Rogers, J. (2009). General and specific effects on Cattell-Horn-Carroll broad ability composites: Analysis of the Woodcock-Johnson III normative update CHC factor clusters across development. School Psychology Review, 38, 249–265.

Frederiksen, N. (1986). Toward a broader conception of human intelligence. American Psychologist, 41, 445–452. doi:10.1037/0003-066X.41.4.445

French, J. W. (1965). The relationship of problem-solving styles to the factor composition of tests. Educational and Psychological Measurement, 25, 9–28.

Funder, D. C. (2001). Personality. Annual Review of Psychology, 52, 197–221. doi:10.1146/annurev.psych.52.1.197

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.

Gottfredson, L. S. (2003a). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 31, 343–397. doi:10.1016/S0160-2896(02)00085-5

Gottfredson, L. S. (2003b). On Sternberg’s “Reply to Gottfredson.” Intelligence, 31, 415–424. doi:10.1016/S0160-2896(03)00024-2

Gottfredson, L. S., & Saklofske, D. H. (2009). Intelligence: Foundations and issues in assessment. Canadian Psychology, 50, 183–195. doi:10.1037/a0016641

Greenston, P. M. (2002). Proposed new Army aptitude area composites: A summary of research results (Study Rep. No. 2002–03). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Greenston, P. M. (2012). Classification research. In P. F. Ramsberger, N. R. Wooten, & M. G. Rumsey (Eds.), A history of the research into methods for selecting and classifying U.S. Army personnel (pp. 275–338). Lewiston, NY: Mellen Press.

Guilford, J. P. (1959). Personality. New York, NY: McGraw-Hill.

Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection. Personnel Psychology, 18, 135–164. doi:10.1111/j.1744-6570.1965.tb00273.x

Hedge, J. W., & Teachout, M. S. (1992). An interview approach to work sample criterion measurement. Journal of Applied Psychology, 77, 453–461. doi:10.1037/0021-9010.77.4.453

Held, J. D., Carretta, T. R., & Rumsey, M. G. (2014). Evaluation of tests of perceptual speed/accuracy and spatial ability for use in military occupational classification. Military Psychology, 26, 199–220. doi:10.1037/mil0000043

Hogan, R., DeSoto, C. B., & Solano, C. (1977). Traits, tests, and personality research. American Psychologist, 32, 255–264. doi:10.1037/0003-066X.32.4.255

Holland, J. L. (1973). Making vocational choices: A theory of careers. Englewood Cliffs, NJ: Prentice Hall.

Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57, 253–270.

Humphreys, L. G. (1994). Intelligence from the standpoint of a (pragmatic) behaviorist. Psychological Inquiry, 5, 179–192. doi:10.1207/s15327965pli0503_1

Ingerick, M., Oliver, J., Allen, M., Knapp, D., Hoffman, R., Greenston, P., & Owens, K. (2010). Prototype procedures to describe Army jobs (Research Rep. No. 1926). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Ingerick, M., & Rumsey, M. G. (2014). Taking the measure of work interests: Past, present, and future. Military Psychology, 26, 165–181. doi:10.1037/mil0000045

James, L. R., McIntyre, M. D., Glisson, C. A., Bowler, J., & Mitchell, T. R. (2004). The conditional reasoning measurement system for aggression: An overview. Human Performance, 17, 271–295. doi:10.1207/s15327043hup1703_2

James, L. R., McIntyre, M. D., Glisson, C. A., Green, P. D., Patton, T. W., LeBreton, J. M., . . . Williams, L. J. (2005). A conditional reasoning measure for aggression. Organizational Research Methods, 8, 69–99. doi:10.1177/1094428104272182

Johnson, C. D., & Zeidner, J. (1995). Differential assignment theory sourcebook (Research Note No. 95–43). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Johnson, C. D., & Zeidner, J. (2006). Evaluation of alternative aptitude area (AA) composites and job families for Army classification: A reply (Study Note No. 2006–03). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Johnson, C. D., Zeidner, J., & Leaman, J. A. (1992). Improving classification efficiency by restructuring Army job families (Tech. Rep. No. 947). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Jones, G. E., & Ree, M. J. (1998). Aptitude test score validity: No moderating effect due to job ability requirement differences. Educational and Psychological Measurement, 58, 284–294. doi:10.1177/0013164498058002011

Kanfer, R., Wolf, M. B., Kantrowitz, T. M., & Ackerman, P. L. (2010). Ability and trait complex predictors of academic and job performance: A person-situation approach. Applied Psychology: An International Review, 59, 40–69. doi:10.1111/j.1464-0597.2009.00415.x

Keenan, P. A., & Campbell, R. C. (2006). Development of a prototype self-assessment program in support of soldier competency assessment (Study Rep. No. 2006–01). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Kluckhohn, C. (1951). Values and value orientations in the theory of action. In T. Parsons & E. A. Shils (Eds.), Toward a general theory of action (pp. 388–433). Cambridge, MA: Harvard University Press.

Knapp, D. J., & Campbell, R. C. (2004). Army enlisted personnel competency assessment program Phase I (Vol. 1): Needs analysis (Tech. Rep. No. 1151). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Knapp, D. J., & Campbell, R. C. (Eds.). (2006). Army enlisted personnel competency assessment program: Phase II report (Tech. Rep. No. 1174). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Knapp, D. J., & Campbell, R. C. (Eds.). (2011). National Army enlisted assessment program: Cost analysis and summary (Research Note No. 2012–03). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Kuhn, T. S. (1970). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.

Kyllonen, P. C., Walters, A. M., & Kaufman, J. C. (2005). Noncognitive constructs and their assessment in graduate education: A review. Educational Assessment, 10, 153–184. doi:10.1207/s15326977ea1003_2

Laabs, G. L., & Baker, H. G. (1989). Selection of critical tasks for Navy job performance measures. Military Psychology, 1, 3–16. doi:10.1207/s15327876mp0101_1

Locke, E. A. (2005). Why emotional intelligence is an invalid concept. Journal of Organizational Behavior, 26, 425–431. doi:10.1002/job.318

Locke, E. A., & Allison, J. A. (2013). What makes great business leaders? In M. G. Rumsey (Ed.), The Oxford handbook of leadership (pp. 63–75). New York, NY: Oxford University Press.

MacCann, C., Joseph, D. L., Newman, D. A., & Roberts, R. D. (2014). Emotional intelligence is a second-stratum factor of intelligence: Evidence from hierarchical and bifactor models. Emotion, 14, 358–374. doi:10.1037/a0034755

Maddi, S. R., Matthews, M. D., Kelly, D. R., Villarreal, B., & White, M. (2012). The role of hardiness and grit in predicting performance and retention of USMA cadets. Military Psychology, 24, 19–28. doi:10.1080/08995605.2012.639672

Maier, M. H. (1993). Military aptitude testing: The past fifty years (Tech. Rep. No. 93–007). Monterey, CA: Personnel Testing Division, Defense Manpower Data Center.

Matthew, C. T., & Stemler, S. E. (2013). Assessing mental flexibility with a new word recognition test. Personality and Individual Differences, 55, 915–920. doi:10.1016/j.paid.2013.07.464

Mayer, J. D., Caruso, D. R., Panter, A. T., & Salovey, P. (2012). The growing significance of hot intelligence. American Psychologist, 67, 502–503. doi:10.1037/a0029456

McCloy, R. A. (1994). Predicting job performance scores for jobs without performance data. In B. F. Green & A. S. Mavor (Eds.), Modeling cost and performance for military enlistment (pp. 61–99). Washington, DC: National Academy Press.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749. doi:10.1037/0003-066X.50.9.741
Moriarty, K. O., & Knapp, D. J. (Eds.). (2007). Army enlisted personnel competency assessment program: Phase III pilot tests (Tech. Rep. No. 1198). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Motowidlo, S. J., Hooper, A. C., & Jackson, H. L. (2006). Implicit policies about relations between personality traits and behavioral effectiveness in situational judgment items. Journal of Applied Psychology, 91, 749–761. doi:10.1037/0021-9010.91.4.749

Motowidlo, S. J., & Van Scotter, J. R. (1994). Evidence that task performance should be distinguished from contextual performance. Journal of Applied Psychology, 79, 475–480. doi:10.1037/0021-9010.79.4.475

Murphy, K. R. (2009). Content validation is useful for many things, but validity isn’t one of them. Industrial and Organizational Psychology: Perspectives on Science and Practice, 2, 453–464. doi:10.1111/j.1754-9434.2009.01173.x

Murphy, K. R., & Davidshofer, C. O. (1991). Psychological testing: Principles and applications. Englewood Cliffs, NJ: Prentice Hall.

Newton, J. H., & McGrew, K. S. (2010). Introduction to the special issue: Current research in Cattell-Horn-Carroll-based assessment. Psychology in the Schools, 47, 621–634.

Nisbett, R. E., Aronson, J., Blair, C., Dickens, W., Flynn, J., Halpern, D. F., & Turkheimer, E. (2012). Intelligence: New findings and theoretical developments. American Psychologist, 67, 130–159. doi:10.1037/a0026699

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703. doi:10.1037/0021-9010.78.4.679

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1995). Integrity tests: Overlooked facts, resolved issues and remaining questions. American Psychologist, 50, 456–457. doi:10.1037/0003-066X.50.6.456

Paullin, C., Sinclair, A. L., Moriarty, K. O., Vasilopoulos, N. L., Campbell, R. C., & Russell, T. L. (2011). Army officer job analysis: Identifying performance requirements to inform officer selection and assignment (Tech. Rep. No. 1295). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (Eds.). (1999). An occupational information system for the 21st century: The development of O*NET. Washington, DC: American Psychological Association.

Peterson, N. G., Wise, L. L., Arabian, J., & Hoffman, R. G. (2001). Synthetic validation and validity generalization when empirical validation is not possible. In J. P. Campbell & D. J. Knapp (Eds.), Exploring the limits of personnel selection and classification (pp. 411–451). Mahwah, NJ: Erlbaum.

Pulakos, E. D., Arad, S., Donovan, M. A., & Plamondon, K. E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85, 612–624. doi:10.1037/0021-9010.85.4.612

Ramsberger, P. F. (2012). 1946–1973: A stable situation in Army selection and classification. In P. F. Ramsberger, N. R. Wooten, & M. G. Rumsey (Eds.), A history of the research into methods for selecting and classifying U.S. Army personnel (pp. 129–144). Lewiston, NY: Mellen Press.

Rate, C. R., Clark, J. A., Lindsay, D. R., & Sternberg, R. J. (2007). Implicit theories of courage. The Journal of Positive Psychology, 2, 80–98. doi:10.1080/17439760701228755

Ree, M. J., & Earles, J. A. (1991). Predicting training success: Not much more than g. Personnel Psychology, 44, 321–332. doi:10.1111/j.1744-6570.1991.tb00961.x

Ree, M. J., & Earles, J. A. (1993). g is to psychology what carbon is to chemistry: A reply to Sternberg and Wagner, McClelland, and Calfee. Current Directions in Psychological Science, 2, 11–12. doi:10.1111/1467-8721.ep10770509

Ree, M. J., Earles, J. A., & Teachout, M. S. (1994). Predicting job performance: Not much more than g. Journal of Applied Psychology, 79, 518–524. doi:10.1037/0021-9010.79.4.518

Reeve, C. L., & Hakel, M. D. (2002). Asking the right questions about g. Human Performance, 15, 47–74. doi:10.1080/08959285.2002.9668083

Rieke, M. L., & Guastello, S. J. (1995). Unsolved issues in honesty and integrity testing. American Psychologist, 50, 458–459. doi:10.1037/0003-066X.50.6.458

Roberts, R. D., Goff, G. N., Anjoul, F., Kyllonen, P. C., Pallier, G., & Stankov, L. (2000). The Armed Services Vocational Aptitude Battery: Little more than acculturated learning (Gc)!? Learning and Individual Differences, 12, 81–103. doi:10.1016/S1041-6080(00)00035-2

Rokeach, M. (1971). Long-range experimental modification of values, attitudes and behavior. American Psychologist, 26, 453–459. doi:10.1037/h0031450

Rokeach, M., & Rokeach, S. J. (1989). Stability and change in American value priorities, 1968–1981. American Psychologist, 44, 775–784. doi:10.1037/0003-066X.44.5.775

Rosenbaum, D. A., Carlson, R. A., & Gilmore, R. O. (2001). Acquisition of intellectual and perceptual-motor skills. Annual Review of Psychology, 52, 453–470. doi:10.1146/annurev.psych.52.1.453

Russell, T. L., Sinclair, A., Erdheim, J., Ingerick, M., Owens, K., Peterson, N., & Pearlman, K. (2008). Evaluating the O*NET occupational analysis system for Army competency development (Tech. Rep. No. 1237). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Sackett, P. R., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59, 419–450. doi:10.1146/annurev.psych.59.103006.093716

Scherbaum, C. A. (2005). Synthetic validity: Past, present and future. Personnel Psychology, 58, 481–515. doi:10.1111/j.1744-6570.2005.00547.x

Schmitt, N. (2004). Beyond the Big Five: Increases in understanding and practical utility. Human Performance, 17, 347–357. doi:10.1207/s15327043hup1703_5

Schmitt, N., & Kunce, C. (2002). The effects of required elaboration of answers to biodata questions. Personnel Psychology, 55, 569–587. doi:10.1111/j.1744-6570.2002.tb00121.x

Schmitt, N., Oswald, F. L., Kim, B. H., Gillespie, M. A., Ramsay, L. J., & Yoo, T. (2003). Impact of elaboration on socially desirable responding and the validity of biodata measures. Journal of Applied Psychology, 88, 979–988. doi:10.1037/0021-9010.88.6.979

Scholarios, D. M., Johnson, C. D., & Zeidner, J. (1994). Selecting predictors for maximizing the classification efficiency of a battery. Journal of Applied Psychology, 79, 412–424. doi:10.1037/0021-9010.79.3.412

Schwartz, A. C., & Mael, F. A. (1992). An introduction to the Army personnel system (Special Rep. No. S-19). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Shavelson, R. J., Hubner, J. J., & Stanton, G. C. (1976). Self-concept: Validation of construct interpretations. Review of Educational Research, 46, 407–441. doi:10.3102/00346543046003407

Snow, R. E. (1986). Individual differences and the design of educational programs. American Psychologist, 41, 1029–1039. doi:10.1037/0003-066X.41.10.1029

Snow, R. E. (1992). Aptitude theory: Yesterday, today, and tomorrow. Educational Psychologist, 27, 5–32. doi:10.1207/s15326985ep2701_3

Stark, S., Chernyshenko, O. S., Drasgow, F., Nye, C. D., White, L. A., Heffner, T., & Farmer, W. L. (2014). From ABLE to TAPAS: A new generation of personality tests to support military selection and classification decisions. Military Psychology, 26, 153–164. doi:10.1037/mil0000044

Statman, M. A. (1993, August). Developing optimal predictor equations for differential job assignment and vocational counseling. Paper presented at the annual meeting of the American Psychological Association, Washington, DC.

Sternberg, R. J. (2013). The WICS model of leadership. In M. G. Rumsey (Ed.), The Oxford handbook of leadership (pp. 63–75). New York, NY: Oxford University Press.

Sternberg, R. J., & Detterman, D. K. (1986). What is intelligence? Contemporary viewpoints on its nature and definition. Norwood, NJ: Ablex.

Sternberg, R. J., & Grigorenko, E. L. (1997). Are cognitive styles still in style? American Psychologist, 52, 700–712. doi:10.1037/0003-066X.52.7.700

Toquam, J. L., Dunnette, M. D., Corpe, V. A., Houston, J. S., Peterson, N. G., Russell, T. L., & Hanson, M. A. (1987). Cognitive paper-and-pencil measures: Pilot testing. In N. G. Peterson (Ed.), Development and field test of the Trial Battery for Project A (Tech. Rep. No. 739, pp. 3-1 to 3-36). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Trippe, D. M., Moriarty, K. O., Russell, T. L., Carretta, T. R., & Beatty, A. S. (2014). Development of a cyber/information technology knowledge test for military enlisted technical training qualification. Military Psychology, 26, 182–198. doi:10.1037/mil0000042

Van Iddekinge, C. H., Putka, D. J., & Campbell, J. P. (2011). Reconsidering vocational interests for personnel selection: The validity of an interest-based selection test in relation to job knowledge, job performance, and continuance intentions. Journal of Applied Psychology, 96, 13–33. doi:10.1037/a0021193

Welsh, J. R., Kucinkas, S. K., & Curran, L. T. (1990). Armed Services Vocational Aptitude Battery (ASVAB): Integrative review of validity studies (Tech. Rep. No. 2004–002). Seaside, CA: Defense Manpower Data Center.

White, L. A., Rumsey, M. G., Mullins, H. M., Nye, C. D., & LaPort, K. A. (2014). Toward a new attrition screening paradigm: Latest Army advances. Military Psychology, 26, 138–152. doi:10.1037/mil0000047

White, L. A., Young, M. C., Hunter, A. E., & Rumsey, M. G. (2008). Lessons learned in transitioning personality measures from research to operational settings. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 291–295. doi:10.1111/j.1754-9434.2008.00049.x

Willemin, L. P., & Karcher, K., Jr. (1958). Development of combat Aptitude Areas (Tech. Research Rep. No. 1110). Washington, DC: Department of the Army, Personnel Research and Procedures Division.

Williams, R. M. (1968). Values: I. The concept of values. In D. L. Sills (Ed.), International encyclopedia of the social sciences (Vol. 16, pp. 283–287). New York, NY: Macmillan.

Williams, R. M. (1979). Change and stability in values and value system: A sociological perspective. In M. Rokeach (Ed.), Understanding human values (pp. 15–46). New York, NY: Free Press.

Wise, L. L. (1994). Goals of the selection and classification decision. In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Selection and classification research: New directions (pp. 351–361). Hillsdale, NJ: Erlbaum.

Zeidner, J., & Johnson, C. D. (1994). Is personnel classification a concept whose time has passed? In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Selection and classification research: New directions (pp. 377–410). Hillsdale, NJ: Erlbaum.

Zeidner, J., Johnson, C. D., & Scholarios, D. (1997). Evaluating military selection and classification systems in the multiple job context. Military Psychology, 9, 169–186. doi:10.1207/s15327876mp0902_4

Zeidner, J., Johnson, C., Vladimirsky, Y., & Weldon, S. (2000). Specifications for an operational two-tiered classification system for the Army. Vol. 1: Report (Tech. Rep. No. 1108). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Zeidner, J., Scholarios, D., & Johnson, C. D. (2003). Evaluating job knowledge criterion components for use in classification research. Military Psychology, 15, 97–116. doi:10.1207/S15327876MP1502_1
