
PERSONALITY TESTING IN
LAW ENFORCEMENT
EMPLOYMENT SETTINGS
A Meta-Analytic Review
JORGE G. VARELA

Wilford Hall USAF Medical Center

MARCUS T. BOCCACCINI
FORREST SCOGIN
JAMIE STUMP
ALICIA CAPUTO
The University of Alabama

Meta-analysis was used to (a) assess the overall validity of personality measures as predictors of
law enforcement officer job performance, (b) investigate the moderating effects of study design
characteristics on this relation, and (c) compare effects for commonly used instruments in this
setting. Results revealed a modest but statistically significant relation between personality test
scores and officer performance. Prediction was strongest for the California Psychological Inventory and weaker for the Minnesota Multiphasic Personality Inventory and Inwald Personality
Inventory. Effect sizes were larger for studies examining current job performance, as opposed to
future job performance. Implications for using personality tests in the law enforcement officer
hiring process are discussed, and recommendations for future research are provided.
Keywords: meta-analysis; police; law enforcement; personality assessment

It is standard practice in most major law enforcement agencies to employ the services of mental health professionals to screen job candidates. The general approach followed in these evaluations is one of screening out unfit candidates, rather than selecting in preferred candidates. The typical psychological evaluation involves screening for both major mental illness and personality traits that may interfere

with law enforcement officer job performance. Such traits may include an inability to deal with stressful situations, being prone to violent outbursts, and potential for substance abuse. Screening out unfit job candidates is especially important in law enforcement settings because law enforcement officers are entrusted with the responsibility of protecting the public from harm. Often, this work is done in an environment in which public opinion of the police subculture is low and where the demands and stress of police work may be unappreciated. Psychological screening is one mechanism for identifying officer candidates who may be unable to uphold their responsibilities in this environment.
Many law enforcement agencies use personality measures as part
of their employee selection procedures. Ash, Slora, and Britton
(1990) surveyed 99 major metropolitan and U.S. state police departments. Of the 99 surveys distributed, 62 were returned. The researchers found that 42 (67.7%) of the departments that responded reported
using personality tests as part of their employee selection procedures.
Twenty-five (40.3%) of the departments reported using two or more
personality tests for each candidate. Although the frequent use of personality testing in officer selection procedures suggests that those
involved in the officer selection process believe these tests contribute
useful information, the extent to which personality tests are predictive
of officer performance is unclear. Numerous published and unpublished studies have examined the relation between personality measures and law enforcement officer performance. Most frequent in the empirical literature are studies attempting to identify specific personality test scales or groups of scales that are predictive of objective performance criteria, such as termination, absenteeism, tardiness, citizen complaints, and commendations, or subjective performance criteria, such as supervisor and peer ratings of performance. Despite the growing number of studies in this area, there is a lack of quantitative integration in this literature. Because researchers in this field have used different personality assessment instruments, different outcome measures, and different study designs, there is no clear consensus about what can be predicted from law enforcement officers' personality test scores. The present study uses meta-analysis to provide a clearer picture of the validity of personality measures in law enforcement settings.

AUTHOR NOTE: Jorge G. Varela is with the U.S. Air Force at Wilford Hall Medical Center in San Antonio, Texas. This research is based on Jorge G. Varela's doctoral dissertation at The University of Alabama. Forrest Scogin and Jamie Stump, Department of Psychology, The University of Alabama. Marcus T. Boccaccini, Ph.D., is now at Sam Houston State University, Texas. Alicia Caputo is now with the Department of Human Services in Alexandria, Virginia. Correspondence concerning this manuscript should be addressed to Forrest Scogin, Ph.D., Psychology Department, Box 870348, The University of Alabama, Tuscaloosa, AL 35487; telephone: 205-348-1924; fax: 205-348-8648; e-mail: Fscogin@gp.as.ua.edu.
VALIDITY OF PERSONALITY MEASURES IN EMPLOYMENT SETTINGS

Several published meta-analyses have examined the validity of personality testing in employment settings. Findings from these meta-analyses are reviewed here for two purposes. First, the research methodologies used in these meta-analyses provide a basis for the design of
this study. Second, three of the meta-analyses included some data
from law enforcement settings, and their results provide an estimate of
the effect sizes that might be expected from a meta-analysis based on a
more comprehensive review of the existing law enforcement officer
performance literature.
Schmitt, Gooding, Noe, and Kirsch (1984) conducted a meta-analysis of 99 employee selection studies published in the Journal of
Applied Psychology or Personnel Psychology between 1964 and
1982.1 These researchers examined the effectiveness of several different types of predictors (personality measures, aptitude assessments,
physical ability measures) across several occupational groups (professional, managerial, clerical, sales, skilled, and unskilled). Performance criteria included both subjective (performance ratings) and
objective (turnover, achievement/grades, status changes, and wages)
measures. Using only personality measures as predictors, an overall
mean correlation of .149 was observed. A mean correlation of .206
was observed for subjective performance criteria, and mean correlations ranged from .121 (turnover) to .152 (achievement/grades) for
objective performance criteria. Schmitt et al. also found that effect
sizes varied depending on study design characteristics. Studies using
a concurrent design (data from incumbents) or purely predictive
design (recruit data not used for hiring decisions) produced larger


mean correlations than studies using an employee selection design (recruit data used for hiring decisions). However, the extent to which
study design characteristics influenced the predictive validity of personality measures was not reported. In their analyses of study design
characteristics, the researchers combined correlation coefficients
across all available predictors (aptitude assessments, physical ability
measures, personality measures).
Tett, Jackson, Rothstein, and Reddon (1994) conducted a meta-analysis of 86 studies examining the relation between personality
measures and job performance (see also Tett, Jackson, & Rothstein,
1991). Eleven of these studies were of law enforcement officers or
recruits. The overall corrected sample-weighted mean correlation
between personality measures and job performance was .174. The
researchers compared correlations for studies that predicted specific
relations between personality measures and performance (confirmatory design) and studies without a clear rationale for expecting significant results (exploratory design) and found that prediction was significantly stronger for confirmatory studies (.238) than exploratory
studies (.035). Mean correlations were also significantly larger for
studies using job recruits (.267) compared to studies using only job
incumbents (.120) and for published studies (.215) compared to dissertations (.049). Although correlations were higher for subjective
performance measures (.186) than objective performance measures
(.103), this difference was not large enough to achieve statistical
significance.
Barrick and Mount (1991) conducted a meta-analysis to examine
the predictive validity of the Big Five personality traits across five occupational groups (professionals, police, managers, sales, and skilled or
semiskilled) for three job performance criteria (job proficiency, training proficiency, and personnel data). Findings were reported for each
of the Big Five personality dimensions for police officers, despite the
fact that this occupational group accounted for only 13% of the 162 samples included in the meta-analysis. Mean correlations for police officers, corrected for range restriction and measurement error in criterion and predictor variables, were .09 (extraversion), .10 (emotional stability), .10 (agreeableness), .22 (conscientiousness), and .00 (openness to experience). Salgado (1997) conducted a similar meta-analysis using samples from European countries and also reported


separate effects for law enforcement samples (k = 3 for most effects).


Mean correlations for police officers, corrected for range restriction
and measurement error in criterion and predictor variables, were .13
(extraversion), .15 (emotional stability), .09 (agreeableness), .24
(conscientiousness), and .12 (openness to experience).
In sum, meta-analyses of the validity of personality tests in employment settings have found that personality test scores have a consistent
but modest relation to job performance indices. Barrick and Mount's (1991) and Salgado's (1997) results suggest that similar effect sizes
might be observed in meta-analyses using only law enforcement
officer samples, but the small number of law enforcement studies
included in these meta-analyses precludes strong predictions. The
meta-analyses reviewed above also suggest that effect sizes can vary
as a result of study design characteristics. Tett et al. (1994) found that
prediction was better for confirmatory studies compared to exploratory studies and for job recruits compared to job incumbents. Findings from the Schmitt et al. (1984) meta-analysis suggest that prediction is strongest for studies using a concurrent or predictive design and
weakest for studies using an employee selection design. The question
remains as to whether these same relations exist in studies using personality measures to predict law enforcement officer performance.
UNPUBLISHED META-ANALYSIS OF PERSONALITY TESTS
IN LAW ENFORCEMENT SETTINGS

To our knowledge, there is at least one existing, but unpublished, meta-analysis of the validity of personality testing in law enforcement settings. O'Brien (1996) conducted a meta-analysis of 29 published studies (N = 4,466) examining the relation between personality measures and law enforcement officer performance. In this unpublished review, an overall validity coefficient of .25 was reported. O'Brien examined several moderator variables, including the personality instrument (Minnesota Multiphasic Personality Inventory [MMPI] vs. California Psychological Inventory [CPI]) and type of performance criteria that were used (subjective vs. objective; training vs. incumbent). When comparing the overall predictive validity of the MMPI and CPI, O'Brien used two different strategies. First, she examined effect sizes from studies in which a clinician's


interpretation of personality test data (sometimes accompanied by clinical interview data) was used to predict performance. Under these circumstances (k = 10), she found similar effect sizes for the MMPI (.46) and
the CPI (.32). Second, she examined the predictive validity of individual test scales. Based on the pattern of her results, she concluded that
prediction was stronger for CPI scales (17 of 22 mean validity coefficients were significantly different from 0) compared to MMPI scales
(1 of 13 mean validity coefficients was significantly different from 0).
With respect to performance criteria, subjective and objective criteria
were predicted equally well (.20 and .25, respectively), as were training criteria and actual job performance criteria (.19 and .27,
respectively).
Despite the existence of the O'Brien (1996) meta-analysis, there are several reasons why further integration of this literature is needed. First, O'Brien included only published findings in her meta-analysis. As a result, it is likely that her effect sizes are inflated, because journals tend to publish studies with significant findings. Indeed, Tett et al. (1994) found that correlations between personality measures and job performance indices were significantly larger in published studies compared to unpublished studies. Second, O'Brien's comparison of personality tests did not include the Inwald Personality Inventory (IPI; Inwald, Knatz, & Shusman, 1982), a measure specifically designed for screening law enforcement applicants. According to the publisher of the IPI, Hilson Research, Inc. (2000-2001), their instrument is used by more than 30% of the nation's state police departments. Finally, O'Brien's meta-analysis has not been published and, to our knowledge, has not been subjected to peer review.
PURPOSE OF THE CURRENT META-ANALYSIS

This article reports the results of a meta-analytic review of the validity of personality testing in law enforcement settings. Although
previous meta-analyses have included law enforcement samples (e.g.,
Barrick & Mount, 1991; Salgado, 1997; Tett et al., 1994), the number
of samples included were small given the size of the law enforcement
officer performance research literature. The current study is designed
to extend what is known about the relation between personality test
scores and law enforcement officer job performance by examining


effects from a substantially larger number of law enforcement samples (both published and unpublished) compared to previous meta-analyses. We report effects for the overall validity of personality tests
in this setting and examine the impact of several moderator variables,
including predictor type (MMPI, CPI, IPI), study design characteristics, sample characteristics, and publication status (published vs.
unpublished findings).
METHOD
CASE SELECTION

Data for this study were retrieved from scholarly journals, books,
conference presentations, dissertations, theses, and unpublished reports
from practitioners and test publishers. Several methods were used to
identify relevant studies. First, searches of PsycInfo, Dissertation
Abstracts International, and the National Criminal Justice Research
Service were conducted to identify all references to personality
assessment in law enforcement settings. Second, all volumes of the
following journals were hand searched: Journal of Applied Psychology, Personnel Psychology, Professional Psychology: Research and
Practice, Journal of Police Science and Administration, Law and
Human Behavior, Behavioral Sciences and the Law, Criminal Justice
and Behavior, Journal of Police and Criminal Psychology, and Journal of Personality Assessment. Third, reference lists of already identified sources were reviewed. Fourth, a request for data was placed in
the American Psychological Association Division 18 (Psychologists
in Public Service) newsletter, and requests for data were submitted to
the Internet discussion forums of the International Association of
Chiefs of Police–Psychological Services Section and Division 41 of the American Psychological Association (American Psychology–Law Society). Fifth, leading researchers in the field were contacted,
including Robin Inwald (Hilson Research), Robert Hogan (Hogan
Assessment Systems), George Hargrave, Dierdre Hiatt, Curt Bartol,
Larry Beutler, Stanley Azen, and Mark Axelberd. Using these methods, approximately 175 studies containing personality data were
identified.


Inclusionary and exclusionary criteria were established to identify data that were appropriate for the current meta-analysis. The following inclusionary criteria were used: (a) studies using police, correctional, government security, or other law enforcement personnel, (b)
studies linking personality test data with job performance, (c) studies
using either training performance data or job performance data, and
(d) studies using either objective (e.g., absenteeism, tardiness, citizen
complaints) or subjective (e.g., supervisor ratings, peer ratings) indices of performance. In addition, the following exclusionary criteria
were used: (a) studies reporting only the results of multiple-predictor
analyses (e.g., multiple regression), and (b) studies failing to report
data necessary for the computation of effect sizes. Of the initial 175
data sets that were identified, 78 met the study criteria and were
included in the meta-analysis. Many of the originally identified data
sets were excluded because they did not compare personality data to
job performance (e.g., reported personality data only). Others were
excluded because they only reported data for multiple-predictor analyses or because the personality tests used were actually measures of
cognitive ability or vocational interest.
Almost half of the usable data sets came from journal articles (k =
36)2, whereas others came from theses or dissertations (k = 27), government reports (k = 4), conference presentations (k = 4), researchers
(k = 3), books or book chapters (k = 2), and test manuals (k = 2). These
studies ranged in date from 1950 to 1999.
DATA CODING

Coders were used to extract relevant data and study design information from the identified studies. Coders were asked to identify data
that were appropriate for inclusion in the meta-analysis and to report
the number and type or types of statistical analyses used in each study.
Data were coded as being published if they were obtained from journal articles, books or book chapters, test manuals, or government
reports, and unpublished if they were obtained from theses, dissertations, conference presentations, or researchers. Officer characteristics, such as the average age and educational level of the sample, were
coded. Job performance predictors were identified, including the
names of personality tests and scales that were used. Measures of


officer performance were identified and coded as either objective (reprimands, complaints, suspensions, days of work missed) or subjective
(supervisor or peer ratings). Performance criteria were also identified
as reflecting either training performance or on-the-job performance.
The amount of time that elapsed between personality testing and collection of performance data (measurement interval) was recorded.
Finally, the design of each study was classified on each of the following characteristics: (a) Were the personality tests administered and
used as part of the hiring process (screening) or as part of a research
project only (analogue)? (b) Were the personality tests used to predict
future job performance (predictive design) or were they administered
to incumbent officers to examine the relation between personality
dimensions and current performance (concurrent design)? (c) Did the
authors select tests and/or test subscales based on a priori hypotheses
about the relations between specific personality measures and performance indices (confirmatory design), or did they select personality
measures without a clear rationale for expecting significant results
(exploratory design)?
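To make the coding scheme concrete, the sketch below shows the kind of record a coder might produce for each study. The field names and categories are ours, invented for illustration; they are not the authors' actual coding form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyCoding:
    """One coder's record for one study (illustrative fields only)."""
    source: str                    # e.g., "journal", "dissertation", "government report"
    published: bool                # journal/book/manual/report vs. thesis/presentation
    test: str                      # e.g., "MMPI", "CPI", "IPI"
    criterion_type: str            # "objective" or "subjective"
    criterion_stage: str           # "training" or "job"
    design_timing: str             # "predictive" or "concurrent"
    design_use: str                # "screening" or "analogue"
    design_rationale: str          # "confirmatory" or "exploratory"
    measurement_interval_months: Optional[float] = None
    mean_age: Optional[float] = None
    mean_education_years: Optional[float] = None

# A hypothetical incumbent-officer study using the CPI and supervisor ratings.
example = StudyCoding(
    source="journal", published=True, test="CPI",
    criterion_type="subjective", criterion_stage="job",
    design_timing="concurrent", design_use="analogue",
    design_rationale="exploratory", measurement_interval_months=0.0,
)
print(example.test, example.design_timing)
```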
The quality of each study was rated using a modified version of the
Instrument for Evaluating Experimental Research Reports (IEERR)
(Suydam, 1968). The IEERR was developed for rating the quality of
studies comparing the effectiveness of different educational programs. For this study, modifications to the IEERR were made to
accommodate the nature of the literature under review.3 Items on the
IEERR are summed to provide a single study quality rating for each
study.
Coders initially received 2 hours of training and instruction concerning the proper use of the coding manual and data collection forms.
After the initial training was completed, each of the five coders was
given a practice set of studies to code. A second training session was
then conducted to clarify questions about the coding manual and data
collection forms. After the second training session, the coders collected the data used in the meta-analysis.
CODER AGREEMENT

Each study was reviewed and coded by two coders. Discrepancies about the proper coding of study design characteristics were resolved


through critical discussion. The coders' ratings of study quality on the IEERR were found to be consistent (r = .89), and subsequent analyses
examining study quality were completed using the average rating for
each study.
STATISTICAL CONVERSIONS AND CORRECTIONS

Tests of significance were converted to r for use as a common effect size using computational formulas provided by Wolf (1986) and
Rosenthal (1994). In cases where only means and standard deviations
were reported, r values were estimated by first generating test statistics (e.g., t, F) and then converting these values to r.
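As a rough sketch of these conversions, the functions below implement the standard t-to-r and F-to-r formulas of the kind tabled by sources such as Wolf (1986) and Rosenthal (1994); the test statistics are made up for illustration.

```python
import math

def r_from_t(t: float, df: int) -> float:
    """Convert a t statistic with df degrees of freedom to r."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_from_f(f: float, df_error: int) -> float:
    """Convert an F statistic with 1 numerator df to r."""
    return math.sqrt(f / (f + df_error))

# Hypothetical comparison of retained vs. terminated officers, t(98) = 2.10.
print(round(r_from_t(2.10, 98), 3))       # ~0.208
print(round(r_from_f(2.10 ** 2, 98), 3))  # same value, since F = t**2 when the numerator df is 1
```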
An important consideration in the aggregation of correlational data
is the sign of the reported coefficients. Because the purpose of this
study was to determine the magnitude of the effect sizes associated
with prediction of job performance, irrespective of directionality,
absolute values of correlation coefficients were used. Tett, Jackson,
Rothstein, and Reddon (1999) have noted that the use of absolute values leads to an upward bias in mean validity coefficients. However,
these authors have also reported that the amount of upward bias is
minimal when sample sizes and the value of rho are large (e.g., bias of .01 when ρ = .15 and N = 100). Given such a small estimated bias in findings, we used absolute values of correlations without correcting
for upward bias.
For each variable examined in this meta-analytic review, observed
(uncorrected) and corrected validity coefficients were calculated.
Correlations were corrected for three types of study artifacts: attenuation due to unreliability of measures, attenuation or enhancement due
to range restriction, and attenuation due to dichotomization of outcome variables (discontinuity). Although these corrections typically
increase the size of correlation coefficients, they are undertaken to
provide estimates of outcome values if the studies had been conducted
without methodological flaws (Hunter & Schmidt, 1994).
The first correction made to the observed correlations was for
attenuation due to unreliability of predictor variables (Hunter &
Schmidt, 1994). Reliability estimates for predictor variables were
obtained from test manuals, published reliability studies, and study
authors. When more than one reliability estimate was located for a


particular test scale, the largest estimate was used because it led to the
smallest amount of correction. Test scales for which no reliability data
were available were left uncorrected.
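A minimal sketch of the predictor-unreliability correction, assuming the usual disattenuation formula in which the observed correlation is divided by the square root of the predictor's reliability; the reliability and validity values are invented.

```python
import math

def correct_for_predictor_unreliability(r_obs: float, rxx: float) -> float:
    """Disattenuate an observed validity coefficient for unreliability
    in the predictor (personality scale) only."""
    return r_obs / math.sqrt(rxx)

# Hypothetical scale with coefficient alpha = .80 and observed validity of .13.
print(round(correct_for_predictor_unreliability(0.13, 0.80), 3))  # ~0.145
```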
The second correction made to observed correlations was for attenuation or enhancement due to range departure of predictor variables.
When the standard deviation of a sample differs from the population
standard deviation, the observed correlation is distorted. When the
standard deviation of the sample is smaller than the population standard deviation, the observed correlation is attenuated, and, conversely, when the sample standard deviation is larger than the population standard deviation, the observed correlation is inflated. Range
restriction is common in personnel selection settings because the sample under investigation has typically passed an initial screening and
represents a small proportion of all applicants. In the current meta-analysis, observed correlations were corrected for range departure
using the procedures recommended by Hunter and Schmidt (1994).
The crucial determinant of the magnitude of this correction is the ratio
of the sample standard deviation to the population standard deviation.
Correction for range departure could only be completed for studies
that reported sample standard deviations. Estimates of population
standard deviation were found in test manuals, handbooks, and published research.
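A minimal sketch of this range-departure correction, assuming the standard formula for direct range restriction in which the key quantity is the ratio of the population standard deviation to the sample standard deviation; the values are illustrative only.

```python
import math

def correct_for_range_departure(r_obs: float, sd_sample: float, sd_pop: float) -> float:
    """Correct an observed correlation for direct range restriction or
    enhancement. u > 1 (restricted sample) increases r; u < 1 decreases it."""
    u = sd_pop / sd_sample
    return (u * r_obs) / math.sqrt((u ** 2 - 1) * r_obs ** 2 + 1)

# Hypothetical applicant sample with SD = 8 on a scale whose normative SD is 10.
print(round(correct_for_range_departure(0.13, sd_sample=8.0, sd_pop=10.0), 3))  # ~0.162
```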
The last correction made to observed single predictor correlations
was for attenuation due to dichotomization of performance variables.
The correction formula recommended by Hunter and Schmidt (1994)
was applied to studies reporting dichotomous variables.
According to the Hunter and Schmidt (1994) model of meta-analysis,
each of the three attenuating or enhancing factors is independent.
Thus, observed correlations were corrected for more than one factor
when applicable using the procedures recommended by Hunter and
Schmidt.
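The discontinuity correction and the chaining of independent corrections can be sketched as follows. The split proportion, reliability, and observed correlation are hypothetical, and the attenuation factor shown (the usual point-biserial adjustment) is a common implementation rather than a quotation of the authors' computations.

```python
from math import sqrt
from statistics import NormalDist

def correct_for_dichotomization(r_obs: float, p: float) -> float:
    """Undo the attenuation caused by splitting a continuous criterion into
    two groups (e.g., retained vs. terminated), where p is the proportion
    of cases falling in one group."""
    nd = NormalDist()
    attenuation = nd.pdf(nd.inv_cdf(p)) / sqrt(p * (1.0 - p))
    return r_obs / attenuation

# Chain two corrections on a hypothetical coefficient: a 70/30 split on a
# termination criterion, then predictor unreliability (alpha = .80).
r = 0.13
r = correct_for_dichotomization(r, p=0.30)
r = r / sqrt(0.80)
print(round(r, 3))  # ~0.192
```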
WITHIN-STUDY AGGREGATION

The number of validity coefficients reported in individual studies ranged from 1 to 1,222. Prior to within-study aggregation, the meta-analytic data set consisted of 3,954 correlations. Validity coefficients
were aggregated within studies so that findings would not be unduly


influenced by studies reporting numerous findings. Specifically, validity coefficients were aggregated according to categorical moderator
variables of interest. This aggregation procedure was used so that
the influence of moderator variables could be examined in the metaanalysis. If we had simply computed a single mean across all of the
findings in each study, the influence of moderator variables would
have been lost in the aggregation. For example, if a study used both
turnover and supervisor ratings as performance criteria, aggregating
across these criteria would make it impossible to examine differences
in the prediction of subjective and objective performance criteria.
The effects of the following categorical moderator variables were
examined (see Data Coding section above for variable descriptions):
predictor type (MMPI vs. CPI vs. IPI), nature of outcome measure
(subjective vs. objective, training vs. incumbent), study design (confirmatory vs. exploratory, concurrent vs. predictive, screening vs. analogue), and data source (published vs. unpublished).
Data aggregations within studies were made using sample-weighted
means so that coefficients based on larger samples were accorded
greater weight in the aggregation. Sample-weighted means were also
used to compute the average sampling error estimates (Hunter &
Schmidt, 1990).
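A small sketch of the sample-size weighting used in these aggregations; the coefficients and sample sizes are invented.

```python
def sample_weighted_mean(rs, ns):
    """Sample-weighted mean correlation: sum(N_i * r_i) / sum(N_i)."""
    return sum(n * r for r, n in zip(rs, ns)) / sum(ns)

# A hypothetical study reporting three coefficients for the same moderator
# subgroup (e.g., three subjective criteria rated by different supervisors).
print(round(sample_weighted_mean([0.10, 0.20, 0.15], [50, 200, 100]), 3))  # 0.171
```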
TESTING THE SITUATIONAL SPECIFICITY HYPOTHESIS

The variance of a population correlation is used to test the situational specificity hypothesis. This test is conducted to determine if the
observed mean correlation generalizes across samples. If the observed
mean correlation generalizes across samples, it is not situationally
specific. A common way of evaluating the situational specificity
hypothesis is to use the three-fourths rule. This rule states that when
less than 75% of the population correlation variance is accounted for
by sampling error variance, the role of moderator variables should be
examined (Hunter & Schmidt, 1990). In these situations, the variance
not accounted for by sampling error may be attributable to systematic
sources, such as study design characteristics or differences in the
characteristics of the samples being studied.
The amount of variance explained by sampling error is calculated
by dividing the sampling error of the population correlation by the


variance of the observed sample-weighted mean correlation. This value can exceed 100% because sampling error is estimated algebraically, whereas the variance associated with each mean correlation is
determined from the observed effect sizes (Hunter & Schmidt, 1990).
Sampling error was computed using the formula provided by Hunter
and Schmidt (1990). Sampling error estimates were not corrected for
variance due to study artifacts. This approach has been referred to as a "bare-bones" meta-analysis (Hunter & Schmidt, 1990, p. 156).
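As a rough sketch of these bare-bones computations, assuming the usual Hunter and Schmidt (1990) estimate of sampling error variance based on the average sample size; all input values are hypothetical.

```python
def bare_bones(rs, ns):
    """Return the sample-weighted mean r, the observed variance of the
    correlations, the estimated sampling error variance, and the percentage
    of observed variance attributable to sampling error."""
    total_n = sum(ns)
    mean_r = sum(n * r for r, n in zip(rs, ns)) / total_n
    obs_var = sum(n * (r - mean_r) ** 2 for r, n in zip(rs, ns)) / total_n
    avg_n = total_n / len(ns)
    samp_err_var = (1.0 - mean_r ** 2) ** 2 / (avg_n - 1.0)
    return mean_r, obs_var, samp_err_var, 100.0 * samp_err_var / obs_var

rs = [0.02, 0.10, 0.22, 0.35, 0.05]   # hypothetical validity coefficients
ns = [300, 250, 120, 90, 240]         # hypothetical sample sizes
mean_r, obs_var, sev, pct = bare_bones(rs, ns)
print(round(mean_r, 3), round(obs_var, 4), round(sev, 4), round(pct, 1))  # ~0.101 0.0099 0.0049 49.9
if pct < 75.0:
    print("Less than 75% explained by sampling error: examine moderators.")
```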
DETECTION OF MODERATOR VARIABLES

The influence of moderator variables was examined using Pearson correlations for continuous moderator variables and z-tests for categorical moderator variables (see Hunter & Schmidt, 1990, pp. 437-438). Uncorrected validity coefficients were used in all of the moderator analyses.
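A sketch of a two-group z test for a categorical moderator. The exact formula the authors used is not spelled out in the text, so the form below (the difference between subgroup mean correlations divided by a standard error built from each subgroup's observed variance and number of studies) is an assumption, although it is consistent with the z values reported in Tables 1 and 2.

```python
import math

def moderator_z(mean_r1, obs_var1, k1, mean_r2, obs_var2, k2):
    """z test comparing two subgroup mean correlations, treating each
    subgroup's observed variance divided by its number of studies as the
    squared standard error of that subgroup's mean."""
    se = math.sqrt(obs_var1 / k1 + obs_var2 / k2)
    return (mean_r1 - mean_r2) / se

# Using the concurrent (r = .199, var = .020, k = 21) and predictive
# (r = .125, var = .009, k = 59) rows of Table 1, this form yields the
# reported value of 2.23.
print(round(moderator_z(0.199, 0.020, 21, 0.125, 0.009, 59), 2))
```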
FINAL DATA SET CHARACTERISTICS

The meta-analyses reported in this article were conducted using validity coefficients from 78 studies with a combined total of 11,725
participants. After within-study aggregation, the meta-analytic data
set contained 168 validity coefficients.
RESULTS
PREDICTIVE VALIDITY ACROSS ALL PREDICTOR AND
OUTCOME VARIABLES

Meta-analysis results for all samples and all categorical moderator subgroups are provided in Table 1. The mean sample-weighted correlation across all predictors and outcomes was .134, which increased to .218 when corrected for study artifacts (see Table 1, row 1). The lower bound of the 95% confidence interval for the uncorrected coefficient was greater than 0, indicating a statistically significant relation between personality test scores and performance criteria. However, only 64% of the variance in the overall validity coefficient (uncorrected) was accounted for by sampling error, suggesting that other variables may be moderating the relation between personality test scores and law enforcement officer performance.


TABLE 1: Meta-Analysis Results for All Samples and Categorical Moderators

Moderator       k(a)     N(b)   N-Wtd. r(c)   Obs. Var.(r)(d)   Samp. Error Var.(e)   % Var.(r) Due to Samp. Error(f)   Lower CI (95%)(g)   Upper CI (95%)(h)   Corr. N-Wtd. r(i)   z-Score Diff.(j)
All samples      78    11,725      .134            .011                .007                        64                      .122                .157                 .218
CPI              13     2,049      .155            .006                .008                       100                      .141                .169                 .251               (k)
IPI              11     2,537      .100            .001                .004                       100                      .090                .110                 .196               (k)
MMPI             41     6,940      .108            .010                .005                        52                      .096                .120                 .206               (k)
Training         18     3,820      .112            .006                .006                       100                      .100                .124                 .188
Performance      72     9,747      .143            .013                .007                        59                      .129                .157                 .231               1.37
Subjective       53     5,962      .134            .009                .008                        89                      .118                .150                 .192
Objective        41     7,115      .138            .012                .006                        49                      .128                .148                 .239                .18
Predictive       59    10,185      .125            .009                .006                        68                      .115                .135                 .216
Concurrent       21     1,661      .199            .020                .013                        66                      .181                .217                 .228               2.23*
Screening        49     8,168      .118            .009                .006                        65                      .106                .130                 .213
Analogue         30     3,616      .162            .013                .009                        67                      .148                .176                 .226               1.77
Exploratory      64    10,373      .133            .010                .007                        63                      .121                .145                 .224
Confirmatory     15     1,447      .132            .013                .009                        68                      .116                .148                 .161                .03
Unpublished      34     5,203      .112            .012                .007                        59                      .075                .149                 .182
Published        44     6,522      .149            .009                .007                        75                      .121                .177                 .245               1.57

Note. CPI = California Psychological Inventory; IPI = Inwald Personality Inventory; MMPI = Minnesota Multiphasic Personality Inventory.
a. Number of studies providing data to the given aggregation.
b. Number of individual participants contributing data to the given aggregation.
c. Observed sample-weighted mean correlation.
d. Variance in the observed correlations.
e. Sampling error variance.
f. Proportion of variance in the observed correlations due to sampling error.
g. Lower limit of 95% confidence interval (CI) around the observed sample-weighted mean correlation.
h. Upper limit of 95% confidence interval around the observed sample-weighted mean correlation.
i. Corrected sample-weighted mean correlation (measurement error, range restriction, discontinuity).
j. Significance test of the difference between the uncorrected validity coefficient for the variable in this row and the row directly above it using a z score.
k. See Table 2 for significance tests comparing the MMPI, CPI, and IPI.
*p < .05, two-tailed.

TABLE 2: Fisher's z Score Values for Comparisons of the Minnesota Multiphasic Personality Inventory (MMPI), California Psychological Inventory (CPI), and Inwald Personality Inventory (IPI)

Instrument     CPI      MMPI
MMPI           1.77
IPI            2.34*     .44

Note. A positive z score indicates that the column variable had a higher validity coefficient than the row variable. A negative z score indicates that the row variable had a higher validity coefficient than the column variable. Validity coefficients are provided in Table 1.
*p < .05.

CATEGORICAL MODERATORS

Results for the categorical moderator subgroup analyses are provided in Table 1. Table 2 contains Z-score values for comparisons
between the mean correlations of the CPI, IPI, and MMPI reported in
Table 1. The lower bound values of the 95% confidence intervals were
greater than 0 for the uncorrected validity coefficients for all of the
subgroups, indicating statistically significant relations between personality measures and performance for each subgroup. Sampling
error accounted for at least 75% of the variance of the uncorrected correlations for only 5 of the 15 groupings, suggesting that other variables may be moderating these effects.
Moderator analyses indicated two statistically significant differences. First, correlations were larger for studies using concurrent as
opposed to predictive designs (see Table 1). Second, prediction was
strongest for the CPI and lower for both the IPI and MMPI (see Table
2). The difference in the mean correlations of the CPI and IPI was
large enough to reach statistical significance, whereas the difference
between the CPI and MMPI was nearly large enough to achieve statistical significance (Z = 1.77, p < .10).



TABLE 3: Correlations Between the Overall Validity Coefficient and Continuous Moderator Variables

Moderator                   k       r
Average age                 38    .145
Years of education          12    .283
Measurement interval(a)     49    .260
Year of study(b)            75    .099
Study quality               80    .173

a. Amount of time in months elapsed between collection of personality and performance data.
b. Unit of analysis was the 4-digit year.
p > .05 for all correlations.

CONTINUOUS MODERATORS

Table 3 contains the sample-weighted correlations between continuous moderator variables and the observed overall validity coefficients. None of the continuous moderator correlations was large
enough to reach statistical significance.
DISCUSSION

Findings from the current meta-analysis indicate that there is a modest but statistically significant relation between personality test
data and law enforcement officer job performance. The mean correlations observed in this meta-analysis are similar to those from previous
meta-analyses of personality testing in employment settings (Barrick
& Mount, 1991; Salgado, 1997; Schmitt et al., 1984; Tett et al., 1994).
Sampling error accounted for less than 75% of the variability in many
of the mean correlations, suggesting that moderator variables not
examined in this study moderate the relation between personality test
scores and officer performance (see Future Research section below).
Despite the modest effect sizes observed in the meta-analysis and
questions about the extent to which findings generalize across samples, several noteworthy findings emerged from the moderator analyses. First, prediction was strongest for the CPI and weaker for the IPI
and MMPI. These findings are consistent with O'Brien's (1996)


unpublished meta-analysis, in which it was found that 17 of 22 CPI scales were predictive of officer performance compared to only 1 of
13 MMPI scales. One possible reason for the superior performance of
the CPI in this setting is that the CPI is designed to be a measure of
normal personality traits (Gough, 1995), whereas the MMPI and IPI
are, for the most part, measures of psychopathology, deviant personality traits, and maladaptive behavior. In many law enforcement settings, applicants must survive a rigorous initial screening process
before they are asked to complete a personality measure. The screening process often includes civil service testing, background investigations, criminal history investigations, and evaluative interviews. Most
pathological job candidates are eliminated during this process. Personality measures designed to detect pathological personality traits
may be redundant when they are administered after the initial screening process. In contrast, personality measures that are designed to
assess normal personality traits, such as the CPI, may be more useful
in this context because they provide information that is not obtained
during the initial screening process. For instance, the CPI was
designed to provide information about consistent styles of interpersonal behavior. Because being a successful police officer requires
effective interpersonal skills (e.g., interacting with community
members, other officers, and supervisors), the CPI may be a useful
measure for predicting this important aspect of officer performance.
Second, mean correlations were larger for studies using a concurrent design compared to those using a predictive design. This finding
suggests that personality measures are somewhat better at predicting
current job performance than future job performance. Studies using a
concurrent design examine the relation between personality test data
and officer performance at the same point in time, whereas studies
using a predictive design attempt to link personality test data with
future performance. If an officer is experiencing noticeable psychological problems, it makes sense that these problems would affect his
or her current performance. However, current psychological problems
may not always affect future performance, which reduces the likelihood that measures of psychological functioning can be used to
predict future job performance.
Finally, there was no significant difference in mean effect sizes
from published and unpublished studies. One reason for conducting


this meta-analysis was that previous meta-analyses of personality testing in law enforcement settings have focused on data from published studies (e.g., Barrick & Mount, 1991; O'Brien, 1996; Salgado,
and unpublished studies. Because scholarly journals tend to publish
studies with significant findings, it is often expected that effects from
unpublished studies will be smaller than those from published studies.
Indeed, Tett et al. (1994) found in their meta-analysis that correlations
between personality measures and job performance indices were significantly larger in published studies. One possible explanation for the
lack of a significant difference in the current meta-analysis is that
many of the unpublished studies may have gone unpublished for reasons other than significance of their findings. Indeed, many of the 34
unpublished studies included in the meta-analysis did contain significant effects. Because many of the unpublished studies were master's
theses and doctoral dissertations, it is possible that they were never
published because graduate students conducting the research were not
interested in pursuing academic careers. It is also possible that they
may have been submitted for publication but were rejected because
they did not provide enough new information to warrant publication.
Irrespective of the reasons why these unpublished studies were never
published, the inclusion of so many unpublished studies in the current
meta-analysis suggests that the effects reported here are not likely to
be inflated due to the inclusion of only published research.
IMPLICATIONS FOR PRACTICE

Although effect sizes observed in this meta-analysis are modest, there are several reasons why it would be inappropriate at this point to
conclude that personality measures should not be used in the law
enforcement officer hiring process. First, the hiring process in law
enforcement settings is lengthy and complex, and job candidates are
typically evaluated on a number of different psychological and medical variables. Personality functioning is just one of these variables.
Each variable is intended to provide a unique piece of information
about a job candidate that can be combined with other pieces of information to provide an estimate of the candidates suitability for hire.
Support for using this type of multifaceted approach comes from


research studies showing that predictive validity can be enhanced when different types of psychological and medical variables are used
in combination to predict officer performance (e.g., Scogin,
Schumacher, Gardner, & Chaplin, 1995).
Second, it should not be expected that every personality test scale
can predict officer performance. It is likely that many personality test
scales are not predictive of officer performance and that practitioners
should only interpret the few scales that have a meaningful relation to
officer performance. This argument is supported by findings from
studies in which researchers have been able to use multiple-predictor
analyses (e.g., multiple regression, discriminant function analysis) to
identify optimally weighted combinations of personality test scales
that can be used to successfully predict officer performance. We have
identified 41 studies (published and unpublished) that have used multiple predictor analyses to predict law enforcement officer performance from personality test data. It would be inappropriate to make
conclusions about the predictive validity of personality tests in law
enforcement settings without considering the findings from these 41
studies. Table 4 contains a stem and leaf plot of the effect sizes from
the 128 multiple predictor analyses reported in these studies. Although
caution should be used when interpreting the values in Table 4
because they have not been corrected for capitalization on chance,
they clearly show that prediction of officer performance can be quite
good when an optimally weighted subset of personality scales is used.
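As a toy illustration of why an optimally weighted combination can outperform any single scale, the sketch below computes the multiple correlation R for two predictors directly from their zero-order correlations; the values are invented and are not taken from any of the 41 studies.

```python
import math

def multiple_r_two_predictors(r_y1: float, r_y2: float, r_12: float) -> float:
    """Multiple correlation between a criterion and the optimally weighted
    combination of two predictors, computed from zero-order correlations."""
    r_sq = (r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12 ** 2)
    return math.sqrt(r_sq)

# Two hypothetical scales that each correlate .20 with supervisor ratings
# but only .10 with each other.
print(round(multiple_r_two_predictors(0.20, 0.20, 0.10), 3))  # ~0.270
```

Because the two scales overlap only modestly, the combined R (about .27) exceeds either zero-order correlation (.20); as noted above, such multivariate values capitalize on chance and should be expected to shrink on cross-validation.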
Finally, the purpose of using personality measures in the law
enforcement officer hiring process is to reduce the likelihood that a
dangerous officer will be hired. Dangerous officers can be harmful to
members of the public, to fellow officers, and to the public's trust in
law enforcement agencies. Given the risks associated with allowing a
dangerous officer to be hired, the small overall predictive effects of
personality measures may be salient in some specific cases.
FUTURE RESEARCH

Findings from the current meta-analyses have implications for future research, both at the meta-analytic and individual study levels. At the meta-analytic level, the existing research literature can be used to examine the validity of individual personality test scales.



TABLE 4: Stem and Leaf Diagram of Effect Sizes From Multiple-Predictor Analyses Examining the Relation Between Personality Measures and Officer Performance

 5    .0 : 02344
12    .0 : 555677778999
13    .1 : 0000012344444
18    .1 : 555566666677777888
 9    .2 : 122224444
15    .2 : 556677778889999
10    .3 : 0000013334
12    .3 : 566667778899
 8    .4 : 00112334
 5    .4 : 55559
 6    .5 : 111122
 5    .5 : 55667
 4    .6 : 0001
 1    .6 : 6
 2    .7 : 02
 1    .7 : 7
 0    .8 :
 0    .8 :
 1    .9 : 3
 0    .9 :
 1   1.0 : 0

Note. Diagram includes 128 coefficients from 41 studies (published and unpublished). Coefficients are multiple correlations (R) from regression analyses and phi coefficients calculated from discriminant function analysis classification tables. A list of these studies is available from the first author (JGV).



Although we found that the overall validity coefficients associated
with personality measures were modest, there may be individual test
scales that are more strongly associated with officer performance.
Moreover, the existing research literature can be used to examine the
predictive validity of specific personality traits, such as those of the
five-factor model, in predicting specific components of law enforcement officer performance (e.g., Barrick & Mount, 1991; Salgado,
1997).
Finally, given the relatively superior performance of the CPI, future
individual studies should examine the predictive validity of other
existing measures of normative personality traits, such as the 16 Personality Factor Questionnaire, Personality Assessment Inventory, and
NEO Personality Inventory. There currently are an insufficient number


of existing studies to provide reliable estimates of the predictive validity of these instruments in law enforcement employment settings.
NOTES
1. Schmitt, Gooding, Noe, & Kirsch (1984) did not report how many, if any, of the 99 studies
included in their meta-analysis examined employee selection in law enforcement settings.
2. Data from some samples were reported in multiple journal articles. For the purpose of this
study, effects from different articles reporting on the same sample were grouped together as
being from the same sample.
3. The version of the revised Instrument for Evaluating Experimental Research Reports
(IEERR) that was used in this study is available from the first author.

REFERENCES
References marked with an asterisk indicate studies included in the meta-analysis.
*Anson, R. A., Mann, J. D., & Sherman, D. (1986). Niederhoffer's cynicism scale: Reliability
and beyond. Journal of Criminal Justice, 14, 295-305.
Ash, P., Slora, K. B., & Britton, C. F. (1990). Police agency officer selection practices. Journal of
Police Science and Administration, 17, 258-269.
*Azen, S. P., Snibbe, H. M., & Montgomery, H. R. (1973). A longitudinal predictive study of success and performance of law enforcement officers. Journal of Applied Psychology, 57, 190-192.
*Azen, S. P., Snibbe, H. M., Montgomery, H. R., Fabricatore, J., & Earle, H. H. (1974). Predictors of resignation and performance of law enforcement officers. American Journal of Community Psychology, 2, 79-86.
*Baehr, M. E., Furcon, J. E., & Froemel, E. C. (1968). Psychological assessment of patrolmen
qualifications in relation to field performance: The identification of predictors for overall
performance of patrolmen and the relation between predictors and specific patterns of
exceptional and marginal performance. Washington, DC: Government Printing Office.
*Band, S. R., & Manuele, C. A. (1987). Stress and police officer performance: An examination of
effective coping behavior. Journal of Police and Criminal Psychology, 3, 30-42.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.
*Bartol, C. R. (1982). Psychological characteristics of small-town police officers. Journal of
Police Science and Administration, 10, 58-63.
*Bartol, C. R. (1991). Predictive validation of the MMPI for small-town police officers who fail.
Professional Psychology: Research and Practice, 22, 127-132.
*Bartol, C. R., Bergen, G. T., Volckens, J. S., & Knoras, K. M. (1992). Women in small-town policing: Job performance and stress. Criminal Justice and Behavior, 19, 240-259.
*Benner, A. W. (1991). The changing cop: A longitudinal study of psychological testing within
law enforcement. Unpublished doctoral dissertation, Saybrook Institute, San Francisco.



*Beutler, L. E., Storm, A., Kirkish, P., Scogin, F., & Gaines, J. A. (1985). Parameters in the prediction of police officer performance. Professional Psychology: Research and Practice, 16,
324-335.
*Boyce, T. N. (1988). Psychological screening for high risk police specialization. Unpublished
doctoral dissertation, Georgia State University, Atlanta.
*Bozza, C. M. (1990). Improving the prediction of police officer performance from screening
information. Unpublished doctoral dissertation, United States International University, San
Diego.
*Bradford, A. C. (1991). Psychological screening for narcotics officers and detectives. Unpublished doctoral dissertation, Miami University, Oxford, OH.
*Cope, J. R. (1981). Personality characteristics of successful versus unsuccessful police officers.
Unpublished doctoral dissertation, Florida Institute of Technology, Melbourne.
*Corey, D. M. (1988). The psychological suitability of police officer candidates. Unpublished
doctoral dissertation, Fielding Institute, Santa Barbara, CA.
*Cortina, J. M., Doherty, M. L., Schmitt, N., Kaufman, G., & Smith, R. G. (1992). The Big
Five personality factors in the MMPI and IPI: Predictors of police performance. Personnel
Psychology, 45, 119-140.
*Costello, R. M., Schoenfeld, L. S., & Kobos, J. (1982). Police applicant screening: An analogue
study. Journal of Clinical Psychology, 38, 216-221.
*Daley, R. E. (1978). The relationship of personality variables to suitability for police work.
Unpublished doctoral dissertation, Florida Institute of Technology, Melbourne.
*Dean, D. (1974). The relationship between Eysenckian personality variables and ratings of job
performance and promotion potential of a group of police officers. Unpublished doctoral dissertation, Ball State University, Muncie, IN.
*DuBois, P. H., & Watson, R. I. (1950). The selection of patrolmen. Journal of Applied Psychology, 34, 80-95.
*Eisenberg, T., & Dowdle, M. (1981). Officer selection & performance study, San Jose, California police department. Los Gatos, CA: Personnel Performance.
*Geraghty, M. F. X. (1986). The CPI test as a predictor of law enforcement officer performance.
Unpublished doctoral dissertation, Florida Institute of Technology, Melbourne.
*Gottlieb, M. C., & Baker, C. F. (1974, May). Predicting police officer effectiveness. Paper presented at the annual meeting of the Southwestern Psychological Association, El Paso, TX.
Gough, H. G. (1995). California Psychological Inventory: Introduction to form 434. Palo Alto,
CA: Consulting Psychologists.
*Griffith, T. L. (1991). Correlates of police and correctional officer performance. Unpublished
doctoral dissertation, Florida State University, Tallahassee.
*Hargrave, G. E. (1985). Using the MMPI and CPI to screen law enforcement applicants: A
study of reliability and validity of clinicians' decisions. Journal of Police Science and Administration, 13, 221-224.
*Hargrave, G. E., & Hiatt, D. (1989). Use of the California psychological inventory in law
enforcement officer selection. Journal of Personality Assessment, 53, 267-277.
*Hargrave, G. E., Hiatt, D., & Gaffney, T. W. (1986). A comparison of MMPI and CPI test profiles for traffic officers and deputy sheriffs. Journal of Police Science and Administration, 14,
250-258.
*Hargrave, G. E., Hiatt, D., & Gaffney, T. W. (1988). F+4+9+Cn: An MMPI measure of aggression in law enforcement officers and applicants. Journal of Police Science and Administration, 16, 268-273.
*Henderson, N. D. (1979). Criterion-related validity of personality and aptitude scales: A comparison of validation results under voluntary and actual conditions. In C. D. Spielberger


(Ed.), Police selection and evaluation: Issues and techniques (pp. 179-195). New York:
Praeger.
*Hess, L. R. (1972). Police entry tests and their predictability of score in police academy and
subsequent job performance. Unpublished doctoral dissertation, Marquette University, Milwaukee, WI.
*Hiatt, D., & Hargrave, G. E. (1988). MMPI profiles of problem peace officers. Journal of Personality Assessment, 52, 722-731.
*Hiatt, D., & Hargrave, G. E. (1988). Predicting job performance problems with psychological
screening. Journal of Police Science and Administration, 16, 122-125.
Hilson Research, Inc. (2000-2001). Testing/assessment services for public safety & security
[Brochure]. Kew Gardens, NY: Author.
*Hogan, R. (1971). Personality characteristics of highly rated policemen. Personnel Psychology,
24, 679-686.
*Hogan, R., & Hogan, J. (1995). Sheriff deputies. Hogan Personality Inventory manual. Tulsa,
OK: Hogan Assessment Systems.
*Hogan, R., & Hogan, J. (1995). Validity of the Hogan Personality Inventory for selecting police
officers in (anonymous). Tulsa, OK: Hogan Assessment Systems.
*Hogan, R., & Hogan, J. (1995). Validity of the Hogan Personality Inventory for selecting police
officers in an Ohio municipality. Tulsa, OK: Hogan Assessment Systems.
*Hooke, J. F., & Krauss, H. H. (1971). Personality characteristics of successful sergeant applicants. Journal of Law, Criminology, and Police Science, 62, 104-106.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in
research findings. Newbury Park, CA: Sage.
Hunter, J. E., & Schmidt, F. L. (1994). Correcting for sources of artificial variation across studies.
In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 323-336). New
York: Russell Sage.
*Hwang, G. S. (1988). Validity of the California Psychological Inventory for police selection.
Unpublished master's thesis, North Texas State University, Denton.
*Inwald, R. E., & Brockwell, A. L. (1991). Predicting the performance of government security
personnel with the IPI and MMPI. Journal of Personality Assessment, 56, 522-535.
*Inwald, R. E., Flanagan, C. L., & Kaufman, J. C. (1991, August). Officer supervisory ratings
classifications. Paper presented at the annual convention of the American Psychological
Association, San Francisco.
*Inwald, R. E., Kaufman, J. C., & Solomon, R. (1991, August). IPI and HPP/SQ predictions of
peer ratings and class standings. Paper presented at the annual convention of the American
Psychological Association, San Francisco.
Inwald, R. E., Knatz, H., & Shusman, E. (1982). Inwald Personality Inventory manual. Kew Gardens, NY: Hilson Research.
*Inwald, R. E., & Patterson, T. (1990). Use of the IPI and HPP/SQ for predicting trainee performance in a government law enforcement agency. Kew Gardens, NY: Hilson Research.
*Inwald, R. E., & Sakales, S. R. (1982, August). Role of two personality screening measures to
identify on-the-job behavior problems of law enforcement officer recruits. Paper presented at
the annual convention of the American Psychological Association, Washington, DC.
*Inwald, R. E., & Shusman, E. J. (1984). The IPI and MMPI as predictors of academy performance for police recruits. Journal of Police Science and Administration, 12, 1-11.
*Inwald, R. E., & Shusman, E. J. (1984). Personality and performance sex differences of law
enforcement officer recruits. Journal of Police Science and Administration, 12, 339-347.
*Kleiman, L. S., & Gordon, M. E. (1986). An examination of the relationship between police
training academy performance and job performance. Journal of Police Science and Administration, 14, 293-299.
*Levine, M. (1979). Development of an MMPI subscale as an aid in police officer selection.
Unpublished doctoral dissertation, California School of Professional Psychology, Berkeley.
*Mandel, K. (1970). The predictive validity of on-the-job performance of policemen from
recruitment information. Unpublished doctoral dissertation, University of Utah, Ogden.
*Marsh, S. H. (1962, January). Validating the selection of deputy sheriffs. Public Personnel
Review, 41-44.
*Mass, G. (1979). Using judgment and personality measures to predict effectiveness in
policework: An exploratory study. Unpublished doctoral dissertation, Ohio State University,
Columbus.
*Matyas, G. S. (1980). The relationship of MMPI and biographical data to police selection and
police performance. Unpublished doctoral dissertation, University of Missouri-Columbia.
*McDonough, L. B., & Monahan, J. (1975). The quality of community caretakers: A study of
mental health screening in a sheriff's department. Community Mental Health Journal, 11,
33-43.
*Merian, E. M., Stefan, D., Schoenfeld, L. S., & Kobos, J. C. (1980). Screening of police applicants: A 5-item MMPI index. Psychological Reports, 47, 155-158.
*Mills, C. J., & Bohannon, W. E. (1980). Personality characteristics of effective state police officers. Journal of Applied Psychology, 65, 680-684.
*Mills, M. C. (1980). The MMPI and the prediction of police job performance. Unpublished doctoral dissertation, University of Southern California, Los Angeles.
*Mufson, D. W., & Mufson, M. A. (1998). Predicting police officer performance using the
Inwald Personality Inventory: An illustration from Appalachia. Professional Psychology:
Research and Practice, 29, 59-62.
*Neal, B. (1986). The K Scale (MMPI) and job performance. In J. T. Reese & H. A. Goldstein
(Eds.), Psychological services for law enforcement (pp. 83-90). Washington, DC: Government Printing Office.
O'Brien, S. G. (1996). The predictive validity of personality testing in police selection: A meta-analysis. Unpublished master's thesis, University of Guelph, Guelph, Canada.
*Pugh, G. (1985). The California Psychological Inventory and police selection. Journal of Police
Science and Administration, 13, 172-177.
*Rand, T. M., & Wagner, E. E. (1973). Correlations between Hand Test variables and patrolmen
performance. Perceptual and Motor Skills, 37, 477-478.
*Reming, G. C. (1988). Personality characteristics of supercops and habitual criminals. Journal
of Police Science and Administration, 16, 163-167.
*Roberg, R. R. (1978). An analysis of the relationships among higher education, belief systems,
and job performance of patrol officers. Journal of Police Science and Administration, 6, 336-344.
Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.),
The handbook of research synthesis (pp. 231-244). New York: Russell Sage.
*Rybicki, S. L., & Hogan, J. C. (1997). Validity of the Hogan Personality Inventory for selecting
deputy sheriff correctional officers. Tulsa, OK: Hogan Assessment Systems.
Salgado, J. F. (1997). The five factor model of personality and job performance in the European
Community. Journal of Applied Psychology, 82, 30-43.
*Sarchione, C. D., Cuttler, M. J., Muchinsky, P. M., & Nelson-Gray, R. O. (1998). Prediction of
dysfunctional job behaviors among law enforcement officers. Journal of Applied Psychology, 83, 904-912.
*Saxe, S. J., & Reiser, M. (1976). A comparison of three police applicant groups using the
MMPI. Journal of Police Science and Administration, 4, 419-425.
Schmitt, N., Gooding, R. Z., Noe, R. A., & Kirsch, M. (1984). Metaanalyses of validity studies
published between 1964 and 1982 and the investigation of study characteristics. Personnel
Psychology, 37, 407-422.
*Schoenfeld, L. S., & Kobos, J. C. (1980). Screening police applicants: A study of reliability
with the MMPI. Psychological Reports, 47, 419-425.
*Schuerger, J. M., Kochevar, K. F., & Reinwald, J. E. (1982). Male and female corrections officers: Personality and rated performance. Psychological Reports, 51, 223-228.
Scogin, F., Schumacher, J. E., Gardner, J., & Chaplin, W. (1995). Predictive validity of psychological testing in law enforcement settings. Professional Psychology: Research and Practice, 26, 68-71.
*Sendo, J. A. (1972). A study of the potential use of the Mann Attitude Inventory in the selection
of police recruits. Unpublished doctoral dissertation, Michigan State University, East
Lansing.
*Serko, B. A. (1981). Police selection: A predictive study. Unpublished doctoral dissertation,
Florida School of Professional Psychology, Tampa.
*Shaw, J. H. (1986). Effectiveness of the MMPI in differentiating ideal from undesirable police
officer applicants. In J. T. Reese & H. A. Goldstein (Eds.), Psychological services for law
enforcement (pp. 91-95). Washington, DC: Government Printing Office.
*Shusman, E. J., Inwald, R. E., & Landa, B. (1984). Correction officer job performance as predicted by the IPI and MMPI: A validation and cross-validation study. Criminal Justice and
Behavior, 11, 309-329.
*Sterrett, M. R. (1984). The utility of the Bipolar Psychological Inventory for predicting tenure of
law enforcement officers. Unpublished doctoral dissertation, Claremont Graduate University, Claremont, CA.
*Super, J. T. (1995). Psychological characteristics of successful SWAT/tactical response team
personnel. Journal of Police and Criminal Psychology, 10, 60-63.
Suydam, M. N. (1968). An instrument for evaluating experimental educational research reports.
The Journal of Educational Research, 61(3), 200-203.
*Sweda, M. G. (1988). The Iowa Law Enforcement Personnel Study: Prediction of law enforcement job performance from biographical and personality variables. Unpublished doctoral
dissertation, University of Iowa, Iowa City.
*Swope, M. R. (1989). Validating state police trooper career performance with the Sixteen Personality Factor Questionnaire. Unpublished doctoral dissertation, Wayne State University,
Detroit, MI.
*Talley, J. E., & Hinz, L. D. (1990). Performance prediction of public safety and law enforcement
personnel: A study in race and gender differences and MMPI subscales. Springfield, IL:
Charles C Thomas.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.
Tett, R. P., Jackson, D. N., Rothstein, M., & Reddon, J. R. (1994). Meta-analysis of personality-job performance relations: A reply to Ones, Mount, Barrick, and Hunter (1994). Personnel
Psychology, 47, 157-172.
Tett, R. P., Jackson, D. N., Rothstein, M., & Reddon, J. R. (1999). Meta-analysis of bidirectional
relations in personality-job performance research. Human Performance, 12, 1-29.
*Tomini, B. A. (1995). The person-job fit: Implications of selecting police personnel on the basis
of job dimensions, attitudes, and personality. Unpublished doctoral dissertation, University
of Windsor, Windsor, Canada.
*Topp, B. W., & Kardash, C. A. (1986). Personality, achievement, and attrition: Validation in a
multiple-jurisdiction police academy. Journal of Police Science and Administration, 14, 234-241.
*Uno, E. A. (1979). The prediction of job failure: A study of police using the MMPI. Unpublished
doctoral dissertation, California School of Professional Psychology, Berkeley.
*Ward, J. C. (1981). The predictive validity of personality and demographic variables in the
selection of law enforcement officers. Unpublished doctoral dissertation, University of South
Florida, Tampa.
*Weekes, E. M. (1994). The influence of personality dimensions and physical abilities on a pistol
shooting task. Unpublished doctoral dissertation, University of Houston, Houston, TX.
*Weiss, W. U., & Beuhler, K. (1995). The Psychopathic Deviate scale of the MMPI in police
selection. Journal of Police and Criminal Psychology, 10, 57-60.
*West, S. D. (1988). The validity of the MMPI in the selection of police officers. Unpublished
master's thesis, North Texas State University, Denton.
Wolf, F. M. (1986). Meta-analysis: Quantitative methods for research synthesis. Beverly Hills,
CA: Sage.