


The current issue and full text archive of this journal is available at http://www.emeraldinsight.com/0048-3486.htm

Personnel Review 32,1 22

Revised October 2001 Accepted August 2002

Evidence of differences in applicant pool quality

Department of Management, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
Keywords: Recruitment, Recruitment advertising, Employee attitudes, Job descriptions, Advertising effectiveness

Abstract: Despite general assumptions that recruitment is important to organizational success, little empirical evidence exists to confirm that different recruitment approaches lead to meaningful differences in attraction outcomes. This study begins to address this research need by examining the attraction outcomes of firms competing head-to-head for recruits for similar positions. Results of an analysis of 391 applicant pools representing 18 different job families suggest that applicant pool quality can vary substantially within and across job families. Utility estimates, based on the hiring of a single employee and using grade point average (GPA) as a measure of applicant quality, produced differences between applicant pools valued at as much as $15,000 for the hiring of a single individual. The average difference between the highest and lowest quality applicant pools across the 18 job families was $6,394.45 (SD = $3,533.20).

Mary L. Connerley, Kevin D. Carlson and Ross L. Mecham, III

Organizations engage in recruitment to attract potential employees and to get them to accept offers of employment. Recent recognition of the important role that recruitment plays in assuring organizational success (e.g. Barber, 1998; Grossman, 2000; Nakache, 1997; Pomeroy, 2000) presumes that the approach an organization takes to recruitment makes a difference. It assumes that recruitment outcomes depend on something more than the existing dynamics of labor supply and demand, and that engaging in superior recruitment tactics can produce meaningful differences in outcomes. However, because few organizations engage in formal recruitment evaluation efforts (e.g. Society for Human Resource Management/CCH survey (SHRM/CCH, 1999)), very little evidence exists to confirm whether meaningful differences in attraction outcomes exist across firms or across positions within firms. This paper addresses the need for research on this fundamental question. Using data from college recruitment at one university, we examine whether applicant quality differs across applicant pools for similar positions drawn from the same applicant population and estimate the value of those differences. Evidence of meaningful differences in attraction outcomes would highlight the need for more systematic research into the effects of recruiting tactics.
Personnel Review Vol. 32 No. 1, 2003, pp. 22-39. © MCB UP Limited, 0048-3486. DOI 10.1108/00483480310454709

The role of attraction in recruiting effectiveness

Attraction (i.e. getting individuals to apply for or otherwise place their names under consideration for an organization's positions) plays a critical role in recruitment and overall staffing effectiveness (Barber, 1998; Breaugh, 1992;

Rynes, 1991). Attraction is the first, and perhaps the most important, of three recruitment outcomes. The others are status maintenance (i.e. getting individuals to remain in the selection process long enough for the organization to determine whether to offer them a position) and job acceptance (i.e. getting individuals who are offered positions to accept them). Attraction establishes the pool of applicants from which new hires will eventually be chosen. If top candidates do not apply, an organization has no opportunity to hire them. Exceptional effort at maintaining the status of top applicants and getting them to accept offers is valuable, but this can only prevent the loss of current top candidates; it cannot make up for ineffective attraction (Carlson et al., 2002). Status maintenance and job acceptance outcomes have been examined in prior research (e.g. Barber et al., 1994; Maurer et al., 1992; Rynes et al., 1991; Schwab et al., 1987). No studies, though, have systematically examined differences in attraction outcomes across applicant pools. The lack of research data on attraction outcomes can be attributed to a general lack of recruitment assessment. According to a recent SHRM/CCH (1999) survey of 279 companies, a majority of organizations do not engage in formal recruitment evaluation. When evaluation does occur, the most commonly used measures are: time to fill a position, retention and turnover rates, cost per hire, number of applicants generated, the job performance of new hires after a specified period of time, and equal employment opportunity impact. Of these, only data on recruitment costs are readily available, often resulting in an information imbalance that allows cost data to dominate decision making (Grossman, 2000). The organizational benefits of better attraction are rarely assessed directly. Informal assessments appear to be the norm, often taking the form of coarse-grained assessments of new hire effectiveness (i.e.
are the new hires performing adequately?), or formal assessments of new hire performance after some probationary period (Breaugh, 1992; Phillips, 1998; Rynes, 1991). Post-hire outcomes in any form, though, are not suited to the evaluation of attraction activities (Williams et al., 1993). They are, first, separated from attraction by several intermediate processes; second, available only long after attraction activities have been completed; and third, subject to a number of influences not related to recruitment. As a result, they are a poor means of evaluating attraction outcomes, diagnosing attraction problems, or identifying areas of opportunity (e.g. Werbel and Landau, 1996; Williams et al., 1993). Another measure of attraction outcomes is the number of applicants generated. As shown in the responses to the SHRM/CCH (1999) survey, quantity of applicants generated is assessed by many organizations and is often used as an indirect assessment of hire quality. The underlying assumption is that by attracting large numbers of applicants the organization can be more selective, hiring only the highest quality applicants in each pool. However, since applicant quality is rarely measured, the relationship between greater quantity




of applicants and hire quality is unknown. Selection test validation research shows that substantial variation in applicant quality exists within applicant pools (Wonderlic, 1992), but it is unclear whether systematic differences in applicant quality exist across pools or whether those differences are meaningful.

Factors influencing attraction outcomes

Several factors are hypothesized to account for differences in attraction outcomes. The general effects that changes in labor supply and labor demand over time can have on attraction outcomes are clear and well established. In periods of low demand for workers and high unemployment, employers are able to attract large numbers of qualified candidates for open positions. Applicant pools are often large and filled with qualified applicants. Under these circumstances attraction techniques receive little attention. However, recent labor market conditions marked by increasing labor demand and limited supply have resulted in organizations being unable to attract or retain needed personnel. When labor supply is low relative to labor demand, attracting sufficient numbers of qualified applicants can be difficult, and effective attraction takes on increased importance. Thus, changes in labor supply and demand over time can influence applicant pool characteristics. Job characteristics alone can also influence attraction outcomes. Sackett and Ostgaard (1994) provide strong evidence that different jobs (i.e. groups of similarly titled positions with common or similar job responsibilities) can produce systematic differences in applicant characteristics. They reviewed Wonderlic Personnel Test scores for applicants to 80 different job families and found systematic differences in mean test scores across job families.
For example, they found that applicants for department head positions had higher mean test scores (M = 28.0, SD = 6.7, n = 614) than general office worker applicants (M = 22.8, SD = 7.0, n = 5,949), who in turn scored higher than general factory worker applicants (M = 17.5, SD = 8.1, n = 11,537). These data indicate that individuals with higher general mental ability scores were systematically attracted to different types of positions than were individuals with lower scores. This is consistent with research showing that applicants choose positions based on the type of work they will be performing (e.g. Jenkins, 1991; Knesper and Hirtle, 1981; Turban et al., 1998). Recruitment practices are also believed to produce differences in attraction outcomes. Recruitment practices are defined broadly to refer to those actions or decisions made by the organization to influence the attractiveness of an organization or position. This includes both how an organization attempts to attract applicants and the decisions it makes regarding the inducements attached to a position. Evidence exists that both types of factors can influence attraction. Mason and Belt (1986) found that greater specificity of applicant qualifications in a job ad had a significant negative effect on the

number of unqualified (based on GPA and major) individuals responding. Interestingly, their results showed that if the quantity of qualified applicants was the focus, an optimal ad should combine a specific job description with a vague job specification. However, if organizations wished to maximize the difference in response rate between unqualified and qualified applicants, then the optimal ad would include both a specific job description and a specific job specification. Further, research on job inducements attempts to identify those factors that individuals find most attractive. This research has identified pay level, benefits, location, culture, and opportunity for training and advancement as factors that may also influence who is attracted to given positions (e.g. Jurgensen, 1978; Turban et al., 1998). However, in the relatively long history of recruitment research, no study has directly compared attraction outcomes to determine whether the controllable factors hypothesized to influence them actually result in meaningful differences. For example, the literature contains no evidence that meaningful differences in attraction outcomes do in fact exist when organizations compete for recruits for similar jobs in the same labor pool. Thus, in the current study we attempt to determine whether controllable aspects of an organization's recruitment approach actually result in differences in attraction outcomes. To examine this question, we compare attraction outcomes for different organizations competing for applicants for similar positions from the same applicant populations. To control for the effects of differences in labor market factors, we compare attraction outcomes that occur in the same time period and in the same geographic location.
In addition, to control for differences in outcomes that could be attributed to gross differences in job characteristics, we control for job type. We expect that differences in attraction outcomes will exist under these conditions and that those differences will be meaningful.

Methods

This study examined applicants for on-campus interviews with corporate recruiters at a southeastern university from September 1999 to May 2000. Each applicant pool was generated from a job posting originated by a recruiting organization at the university's Career Services Center. Students were not restricted in the number of positions to which they could submit résumés, though no student submitted résumés to all positions in a job family. The sample of positions examined in this study included applicant pools for full-time, permanent positions with at least 50 applicants. By removing pools with fewer than 50 applicants we sought to eliminate any applicant pools where the small number of applicants could have been attributable to recruitment anomalies (i.e. late additions of recruiting schedules, inability to engage in normal recruitment efforts, lack of recruiter availability). This resulted in




comparisons of outcomes for organizations that were able to attract a reasonably large number of applicants, and resulted in selection ratios for initial screening of less than 0.30. While this eliminates data for some applicant pools, we believe it strengthens the design of this study by removing a potential source of extraneous variance. The resulting 425 jobs were sorted using a Q-sort methodology to construct families of highly similar positions. The Q-sort involved providing a decision maker with a listing of all job titles represented by the 425 job postings and allowing that decision maker to sort those job titles into related groups. The objective of the Q-sort was to create families of highly similar jobs with similar knowledge, skill and ability (KSA) requirements. Since the KSAs required for positions determine the relevant applicant population (i.e. all individuals qualified to hold that position), we attempted to form families of highly similar positions so that individuals who applied for one position in a job family should also have felt qualified to apply for any of the other positions in that job family. After sorting, 18 job families with at least eight positions (i.e. 391 applicant pools) were retained for analysis. Table I lists the job families and the position titles in each. Thus, we felt the positions within each job family were quite homogeneous with respect to the KSAs required. Further, at this point in the recruitment process applicants were unable (i.e. when titles are identical) or less likely (i.e. when titles are very similar) to differentiate between positions based on titles alone. That is, if applicants chose to apply to some positions in a job family and not others, they were unlikely to be able to do so based solely on position title.

Measures

Quantity. The number of individuals who submitted résumés for each position posting is the measure of applicant quantity.
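The pool-screening rule described above can be sketched as follows. This is our illustration, not the authors' code; the data values and names are hypothetical.

```python
# Hypothetical sketch of the screening rule: keep only postings that
# attracted at least 50 applicants, dropping likely recruitment anomalies.

MIN_APPLICANTS = 50

def screen_pools(pools):
    """Return the subset of applicant pools eligible for analysis.

    Each pool is a dict with a 'resumes' count; pools with fewer than
    MIN_APPLICANTS resumes are excluded.
    """
    return [p for p in pools if p["resumes"] >= MIN_APPLICANTS]

pools = [
    {"title": "Staff accountant", "resumes": 112},
    {"title": "Business analyst", "resumes": 37},   # dropped: under 50
    {"title": "Systems analyst", "resumes": 50},    # kept: at the cutoff
]
kept = screen_pools(pools)
print([p["title"] for p in kept])  # prints ['Staff accountant', 'Systems analyst']
```

The `resumes` count per posting is also the study's quantity measure, so the same records can feed both the screening step and the descriptive statistics.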
Initial analyses indicate that, on average, 118 applicants submitted résumés for each position, with a range from 50 to over 1,100 résumés (SD = 91.66).

Quality. College grade point average (GPA) has been associated with recruiters' initial screening decisions (e.g. Brown and Campion, 1994; Hutchinson and Brefka, 1997; Rynes et al., 1997; Thoms et al., 1999), though McKinney et al. (2001) found the association between GPA and screening decisions varied across recruiters (p = 0.07; SDp = 0.213). In a recent review of GPA validity data, Roth et al. (1996) reported the validity of GPA for predicting the job performance of college graduates, corrected for attenuation due to error of measurement in the criterion, as rc = 0.23 (SDrc = 0.06). Because it is the one résumé component with a reported validity for predicting job performance, we used GPA as an indicator of applicant quality in this study. Unfortunately, not all résumés included GPA information. In those instances where an overall GPA was not included but an in-major GPA was reported, the

Table I. Job families

Job family               Job titles
Accounting               Accountant, staff accountant, cost accountant
Analyst                  Analyst, process analyst, business analyst
Associate engineer       Associate engineer, assistant engineer, entry-level engineer
Consultant               Consultant, associate consultant, logistics consultant
Engineer                 Engineer, engineering specialist
Finance                  Financial management, financial analyst, loan originator
Hardware technician      Network hardware engineer, hardware engineer, hardware design engineer
Information systems      Information systems specialist, manager information systems, systems development associate
Information technology   Associate technology analysts, information technology specialist, information technologist
Management               Management associate, human resource manager, site manager
Mechanical engineer      Mechanical engineer, mechanical design engineer
Production engineer      Plant engineer, process engineer, physical engineer
Production/operations    Business operations, operations supervisor, operations services assistant
Programmer/analyst       Entry level programmer, programmer/analyst, systems analyst
Sales                    Entry level sales, marketing representative, outside sales representative
Software design          Software design, software developer, software assurance
Software technician      Software development engineer, software technical engineer
Technical staff          Technical staff, entry level position technical, technology analyst

Notes: Job families were developed by sorting job titles into similar groups; of the 425 job postings sorted, 391 were clustered into these 18 job families

in-major GPA was used. Out of a total set of 45,059 résumés (including multiple submissions), 9,583 reported no GPA data. Career services counselors at this university frequently encourage individuals with GPAs below 3.00 on a 4.00 scale not to include their GPA on their résumé. To enable us to calculate an overall average GPA for each applicant pool, we estimated a GPA value for those résumés that did not report one. Assuming that most individuals who did not report a GPA did so because it was low, we developed a substitute value using the mean of the lowest 25 percent of reported GPAs. This value (M = 2.76) was substituted for the missing GPA values. Prior to substituting for missing data, the overall average GPA across all résumés was M = 3.22 (SD = 0.40); after substitution it was M = 2.92 (SD = 0.47). The average GPA for all seniors at this institution is M = 2.80 (SD = 0.54). As seen in Table II, we developed two measures of applicant pool quality. "ALL" incorporates data from all applicants in each applicant pool. "TOP15" is based on data for the 15 individuals in each applicant pool with the highest reported GPAs. The number 15 was chosen because it represented the typical


Table II. ALL and TOP15 means and standard deviations by job family

                                             ALL                      TOP15
Job family              Jobs  Résumés   MN     SD    Rank        MN     SD    Rank
Software technician      33    3,732   3.32   0.48     1        3.95   0.09     1
Software design          22    2,665   3.27   0.48     2        3.92   0.12     2
Programmer/analyst       28    3,154   3.23   0.47     3        3.88   0.19     6
Analyst                  16    2,153   3.22   0.46     4        3.89   0.17     4
Hardware technician      15    1,231   3.20   0.49     5        3.83   0.17     9
Consultant               20    3,471   3.20   0.44     6        3.89   0.12     5
Technical staff          25    4,226   3.19   0.45     7        3.86   0.18     8
Information technology   16    2,197   3.18   0.45     8        3.91   0.11     3
Information systems      16    1,916   3.18   0.45     9        3.87   0.13     7
Mechanical engineer      17    1,128   3.17   0.50    10        3.78   0.25    13
Associate engineer       16    1,823   3.16   0.47    11        3.81   0.22    10
Production engineer      16    1,507   3.13   0.47    12        3.79   0.20    12
Finance                  30    3,071   3.13   0.42    13        3.76   0.18    15
Production                8      664   3.12   0.45    14        3.77   0.14    14
Management               27    3,599   3.10   0.44    15        3.80   0.21    11
Engineer                 40    3,765   3.10   0.49    16        3.76   0.27    16
Accounting               11      717   3.10   0.42    17        3.66   0.23    17
Sales                    35    4,040   3.05   0.41    18        3.65   0.26    18

Notes: MN refers to simple means; SD refers to standard deviations; the ALL and TOP15 means and standard deviations were calculated by first calculating the mean quality score (i.e. GPA) of all applicants or the top 15 applicants in each applicant pool, respectively, then calculating the simple mean and standard deviation of these values for all applicant pools in each job family

number of individuals who were selected for interviews. Even though some organizations populated more than one interview schedule from their applicant pool, we still used data from only 15 individuals in TOP15 calculations in order to maintain a consistent basis for comparison. Whereas ALL data include substituted values for missing GPA data, TOP15 data include only actual reported GPAs.

Data analysis strategy

Our analysis strategy examines differences in applicant quality across applicant pools within job families. We conduct the same analysis for each of the 18 job families. Figure 1 offers an overview of how these analyses are conducted. Each job family is associated with an applicant population. Every position in each job family is assumed to require similar KSAs and therefore to draw from a common applicant population. Recruitment efforts result in the generation of an applicant pool for each position (i.e. a subset of the individuals from the relevant applicant population who submitted résumés for that position opening). Figure 1 provides an example in which seven positions from one job family and their associated applicant pools are shown. As noted above, our measure of applicant quality is based on identifying the 15 individuals with the highest GPAs in each applicant pool. These individuals are represented as



Figure 1. Schematic view of the applicant pool comparisons reported in Table III


the shaded region in the upper tail of each applicant pool distribution. We averaged the 15 highest GPAs in each applicant pool and then standardized these scores using the mean and standard deviation for all seniors at the university. We used these values to rank the applicant pools in each job family. The average standardized score for each applicant pool is represented by the mark in the middle of each shaded region. In Table III we report three different comparisons based on these applicant quality ranks for each job family. First, we compared the highest ranked applicant pool to the applicant pool ranked second best. Next, we compared the highest ranked pool with the applicant pool that was mid-ranked for that job family. Finally, we compared the highest ranked applicant pool with the applicant pool that was lowest ranked for that job family.

Utility analysis

While GPA data can provide evidence of differences across applicant pools, the value of those differences cannot be evaluated from data in the GPA metric or from standardized scores. Utility analysis (UA) can be used to convert
Table III. Comparison of applicant pools within job families using TOP15 quality measure

                         Highest       2nd highest     Median         Lowest        Maximum
Job family               MN     SD     MN     SD       MN     SD      MN     SD     difference
Software technician      4.00   0.00   4.00   0.00     3.96   0.06    3.85   0.11     0.15
Software design          4.00   0.00   3.99   0.01     3.96   0.05    3.66   0.16     0.34
Programmer/analyst       3.99   0.02   3.98   0.02     3.93   0.07    3.23   0.33     0.76
Analyst                  3.99   0.01   3.99   0.02     3.95   0.06    3.49   0.32     0.50
Hardware technician      3.98   0.03   3.98   0.02     3.88   0.10    3.62   0.21     0.36
Consultant               4.00   0.00   4.00   0.00     3.93   0.06    3.63   0.14     0.37
Technical staff          4.00   0.00   4.00   0.00     3.90   0.10    3.46   0.23     0.54
Information technology   3.99   0.02   3.98   0.04     3.91   0.05    3.66   0.20     0.33
Information systems      3.98   0.04   3.97   0.04     3.90   0.10    3.73   0.18     0.25
Mechanical engineer      3.93   0.07   3.92   0.09     3.85   0.11    3.04   0.35     0.89
Associate engineer       3.99   0.02   3.97   0.04     3.88   0.05    3.41   0.25     0.58
Production engineer      3.93   0.06   3.92   0.07     3.84   0.14    3.43   0.33     0.50
Finance                  3.95   0.05   3.94   0.06     3.80   0.13    3.39   0.27     0.56
Production               3.88   0.09   3.86   0.10     3.72   0.12    3.67   0.10     0.21
Management               4.00   0.00   4.00   0.02     3.83   0.13    3.45   0.54     0.55
Engineer                 3.97   0.03   3.96   0.03     3.82   0.13    2.91   0.52     1.06
Accounting               3.81   0.22   3.80   0.14     3.58   0.19    3.41   0.25     0.40
Sales                    4.00   0.00   3.93   0.09     3.64   0.19    3.28   0.21     0.72
Overall                                                                               0.504 (0.238)

Notes: Highest refers to the applicant pool in each job family that had the highest average GPA among the top 15 applicants. 2nd highest refers to the applicant pool in each job family that had the second highest average GPA for the top 15 applicants. The median applicant pool is the applicant pool in each job family where the average GPA value was at the mid-point for the job family. Lowest refers to the applicant pool in each job family with the lowest average GPA for the top 15 candidates. MN refers to a mean; SD indicates a standard deviation

differences in standardized scores to a dollar value that can be evaluated directly. Similar uses of UA can be found in prior research. Boudreau and Rynes (1985) used UA to examine a hypothetical comparison of in-house recruiting versus the use of recruiting agencies. Murphy (1986) used UA to examine three scenarios to illustrate how applicants' rejection of offers may impact utility estimates. de Corte (1994) discusses how utility models can be applied both to traditional selection settings and to settings where a single cohort of new employees is hired. In the current study, UA calculations were performed using the following equation, modified from the utility equations presented in Boudreau (1991):

ΔU = r_xy × SD_y × ΔZ_x × T × N − C    (1)

where ΔU is the estimated difference in utility in dollars between the attraction outcomes for two different applicant pools. r_xy is the validity of the method used to estimate applicant quality. UA recognizes differences between true job performance (y) and our imperfect attempts to predict job performance levels (x); r_xy refers to the correlation between scores on a predictor measure (i.e. GPA in our example) and future true job performance. SD_y is the standard deviation of job performance in dollars. It permits a dollar value to be attached to differences in standardized predictor scores (i.e. Z_x). UA assumes applicants differ in their job performance potential, and SD_y is an estimate of the value of the difference in performance between individuals whose true job performance differs by one standard deviation. Thus, SD_y represents the value of true differences in job performance. ΔZ_x is the standardized difference in applicant quality measures between the applicant pools. T is the average expected tenure of the individuals hired for this position. N is the number of applicants to be hired. C is the difference in the costs incurred to attract the applicant pools being compared.
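As a sketch of how equation (1) behaves, the following Python fragment (our illustration, not the authors' code; the function and variable names are ours) reproduces one Table IV comparison using the parameter values reported in the study: r_xy = 0.23, SD_y = 40 percent of starting salary, T = 2, N = 1, C = 0, and GPA standardized against the university-wide senior mean of 2.80 and standard deviation of 0.54.

```python
# Illustrative sketch of equations (1) and (2); names are ours, not the authors'.

UNIV_MEAN_GPA = 2.80  # mean GPA of all seniors at the university
UNIV_SD_GPA = 0.54    # standard deviation of senior GPAs

def standardize(pool_mean_gpa):
    """Equation (2): standardize a pool's mean GPA against the senior population."""
    return (pool_mean_gpa - UNIV_MEAN_GPA) / UNIV_SD_GPA

def delta_utility(delta_z, salary, r_xy=0.23, tenure=2, n_hires=1, delta_cost=0.0):
    """Equation (1): dollar-valued utility difference between two applicant pools.

    SDy is taken as 40 percent of starting salary, following the study.
    """
    sd_y = 0.40 * salary
    return r_xy * sd_y * delta_z * tenure * n_hires - delta_cost

# Software design, highest vs second-highest pool (Table IV): a delta-Z of 0.01
# at a $34,000 starting salary.
u = delta_utility(delta_z=0.01, salary=34_000)
print(round(u, 2))  # prints 62.56
```

The same function applies to any pool pair: compute each pool's standardized TOP15 mean with `standardize`, take the difference as `delta_z`, and supply the job family's estimated starting salary.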
In the current analysis, common estimates of r_xy, T, N and C are used. Our objective was to use realistic, yet conservative, estimates in all analyses. r_xy = 0.23 is an estimate of the validity of GPA for predicting job performance, corrected for attenuation in the criterion measure (Roth et al., 1996). T = 2 signifies that a new hire is expected to remain in his/her initial position for two years. N = 1 indicates that one hire is projected to come from the applicant pool. Data on actual differences in recruiting activity across organizations or positions are not available for analysis in this study; as a result, costs are assumed to be equal for all positions, hence C = 0. These values are deemed appropriate for all job families. Estimates of SD_y are established for each job family. SD_y is set equal to 40 percent of salary, based on evidence from Schmidt et al. (1977), and estimates of average starting salaries are developed for each job family. It is assumed that SD_y differs across jobs according to both the amount of variability in outcomes that can occur (i.e. a function of the level of autonomy the individual has over




his/her choice of tactics for accomplishing outcomes) and level of responsibility (i.e. the impact that differences in performance can have). Using the same salary estimate for all positions within each job family leaves only ΔZ_x as a source of utility differences across applicant pools within each job family. Z_x values are calculated at the applicant pool level using equation (2). In these calculations we have chosen to standardize GPA data using the mean and standard deviation of GPAs for all senior students at the university:

Z_x = (mean GPA of ALL or TOP15 applicants − 2.80) / 0.54    (2)

Two options for standardizing GPA were available to us: first, standardizing within applicant pools and, second, standardizing all applicant pools based on university-level data. Ideally, we would have standardized quality scores using estimates of the distribution of quality scores in each applicant population. However, because a substantial number of applicants (around 20 percent) do not report GPA on their résumés, our ALL data do not accurately estimate the mean or standard deviation of these values. Substituting for the missing GPA data adjusts the mean to a more accurate value, but still results in an underestimate of the standard deviation. Since UA focuses on the differences between standardized scores, not the level of the scores, it is more critical to have an accurate estimate of the standard deviation. The university-level data for all seniors are based on complete information and provide the best available estimate of the standard deviation of grade point averages. Once Z_x scores are calculated, ΔZ_x is determined as the difference between the mean Z_x scores for the applicant pools being compared.

Results

Descriptive statistics for the 18 job families are reported in Table II. The first two numeric columns list the number of jobs and number of résumés in each job family.
The remaining columns list the means, standard deviations and ranks for the ALL and TOP15 measures. Job families are listed in descending order according to ALL scores. As expected, differences in both quality measures exist across job families. The software technician job family had the highest ranked applicant population (M = 3.32, SD = 0.48, based on our ALL measure). The sales job family produced the lowest ranked applicant population (M = 3.05, SD = 0.41). The rank ordering of job families differs for the two quality measures, suggesting that the choice of quality measure could affect decision making. Table III reports means and standard deviations for four specific applicant pools from each job family. These data are based on TOP15 quality assessments prior to standardization (i.e. they are reported in GPA units). The first four numeric columns of Table III report data for the applicant pool in each job family with the highest ranked mean, second highest ranked mean, median ranked mean, and lowest ranked mean, respectively. These data show that

substantial differences in the mean TOP15 GPA exist across applicant pools in most job families. Comparisons of applicant pools with the highest and lowest means within job families are reported in the right hand column of Table III. These values range from a high of 1.06 GPA units for general engineers (i.e. 3.97 versus 2.91) to a low of 0.15 GPA units for software technicians (i.e. 4.00 versus 3.85). The average difference across the 18 job families was 0.504 (SD = 0.238) GPA units. As noted earlier, Figure 1 shows a schematic view of the applicant pool comparisons reported in Table III. In comparisons of TOP15 measures for applicant pools with the highest versus median quality, the largest differences occurred in the job families of general management, 0.17 GPA units (i.e. 4.00 versus 3.83), accounting, 0.23 GPA units (i.e. 3.81 versus 3.58), and sales, 0.36 GPA units (i.e. 4.00 versus 3.64). The smallest differences were found for software technician and software design; both were 0.04 GPA units (4.00 versus 3.96). The average difference between highest and median quality applicant pools was M = 0.117 (SD = 0.079) GPA units. Comparisons of TOP15 measures for the two highest quality applicant pools indicate only small differences. The largest difference was for sales positions, 0.07 GPA units (i.e. 4.00 versus 3.93); differences in all other job families were 0.02 GPA units or less. As noted earlier, GPA data can provide indications of the magnitudes of differences, but not the value of those differences. In Table IV, UA was used to convert the differences in quality measures to a dollar metric. The first two columns of Table IV report job family and estimated starting salary. The remaining columns report the standardized difference in mean GPA scores and the associated difference in utility in dollars for each of the three applicant pool comparisons outlined in Figure 1, based on the GPA data reported in Table III (i.e.
highest to second, highest to median, highest to lowest). Comparisons of the highest to lowest quality applicant pools show substantial differences in utility, ranging from a low of $1,854.72 for software technicians and $2,187.76 for production managers to a high of $15,069.60 for general engineers and $13,880.96 for mechanical engineers. The average dollar difference for comparisons of highest and lowest quality applicant pools was $6,394.45 (SD = 3,533.20). The average dollar value across the 18 job families comparing highest to median quality applicant pools was $1,270.62 (SD = 726.88). Differences between the top two applicant pools were even smaller, averaging only $135.24 (SD = 163.76).

Discussion

The current study fills a void in the recruitment literature by demonstrating that meaningful dollar differences in attraction outcomes exist in actual recruitment settings when organizations recruit for similar positions from the same job family. Using the mean grade point average of the top 15 individuals in the applicant pool as the measure of applicant quality, the average difference
Table IV. Differences in applicant pool quality and their impact on utility

                                         High-2nd high        High-median          High-lowest
Job family               Salary ($)      Zx     $ Utility     Zx     $ Utility     Zx     $ Utility
Software technician      34,000          0.00        0.00     0.06      397.44     0.28    1,854.72
Software design          34,000          0.01       62.56     0.06      375.36     0.63    3,941.28
Programmer/analyst       36,000          0.01       66.24     0.11      728.64     1.41    9,339.84
Analyst                  37,000          0.01       68.08     0.07      476.56     0.93    6,331.44
Hardware technician      38,000          0.01       69.92     0.19    1,328.48     0.67    4,684.64
Consultant               40,000          0.00        0.00     0.13      956.80     0.67    4,931.20
Technical staff          34,000          0.00        0.00     0.18    1,126.08     1.00    6,256.00
Information technology   36,000          0.02      136.16     0.14      953.12     0.61    4,152.88
Information systems      36,000          0.01       66.24     0.14      927.36     0.46    3,047.04
Mechanical engineer      46,000          0.02      169.28     0.15    1,269.60     1.64   13,880.96
Associate engineer       42,000          0.04      309.12     0.19    1,468.32     1.06    8,194.18
Production engineer      40,000          0.03      220.80     0.14    1,030.40     0.94    6,918.40
Finance                  36,000          0.01       66.24     0.26    1,722.24     1.03    6,822.72
Production               29,000          0.04      213.44     0.31    1,654.16     0.41    2,187.76
Management               28,000          0.01       51.52     0.32    1,648.64     1.09    5,615.68
Engineer                 42,000          0.00        0.00     0.28    2,163.84     1.95   15,069.60
Accounting               36,000          0.04      264.96     0.18    1,192.32     0.75    4,968.00
Sales                    28,000          0.13      669.76     0.67    3,451.84     1.34    6,903.68
Average utility differences                        135.24             1,270.62             6,394.45

Notes: The utility equation used was drawn from Boudreau (1991): ΔU = rxy × SDy × Zx × T × N − C, where ΔU is the estimated difference in utility in dollars; rxy = 0.23 is the validity of GPA for predicting job performance (Roth et al., 1996); Zx = (mean GPA [ALL or TOP15 applicants] − 2.80)/0.54, as shown in equation (2), is the standardized difference in applicant quality measures between the applicant pools; T = 2 is the average number of years a new hire is expected to remain in his/her initial position; N = 1 indicates that one hire is projected to come from each applicant pool; and C = 0, since differences in recruiting activity were not known or assessed within job families.
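The utility calculation described in the table notes can be sketched in a few lines. The values for rxy, T, N and C are those stated in the notes; expressing SDy as 40 per cent of starting salary is our assumption (it reproduces the dollar figures in Table IV, but the article's SDy estimate is reported elsewhere), and the function name is illustrative.

```python
# Sketch of the Boudreau (1991) utility difference used in Table IV.
# rxy, T, N and C come from the table notes; taking SDy as 40 per cent
# of starting salary is an assumption on our part (it reproduces the
# tabled dollar values, but is not stated in the notes themselves).

def utility_difference(salary, delta_zx, rxy=0.23, t_years=2, n_hires=1, cost=0.0):
    """Dollar value of a standardized difference in applicant pool quality."""
    sd_y = 0.40 * salary  # assumed: SDy approximated as 40% of starting salary
    return rxy * sd_y * delta_zx * t_years * n_hires - cost

# Engineers: $42,000 salary, highest-to-lowest difference of 1.95 SD units
print(round(utility_difference(42_000, 1.95), 2))  # 15069.6, matching Table IV
```

With these inputs, the sketch recovers each dollar figure in the table from its salary and Zx columns, which is also a quick way to check the arithmetic when substituting alternative estimates of rxy, SDy, T, N, or C.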

across job families between the best and the poorest attraction outcomes was $6,394.45. The magnitude of the difference in attraction outcomes for firms in some job families was as large as $15,000. These values are substantial given that they refer to the attraction of a single individual. Given the lack of empirical evidence in the recruitment literature, demonstrating that differences in attraction outcomes can occur, and that those differences can be substantial, is an important finding. These findings also indicate that controllable factors other than gross differences in position descriptions are likely to influence applicant behavior and attraction outcomes. This supports general assumptions concerning the role of recruitment in staffing effectiveness in actual recruitment settings that had not been empirically demonstrated in prior research. Given that our results indicate that substantial opportunities for improvement exist in the attraction outcomes of many organizations, identifying which of these factors are potent influencers of attraction outcomes should be the focus of future research.

This study represents a first attempt to assess applicant pools in terms of their potential to impact future job performance and to estimate the dollar impact of those differences. Not surprisingly, the potential returns to efforts to improve attraction outcomes appear greatest for organizations currently experiencing the poorest attraction outcomes. The attraction outcomes of organizations attracting applicant pools of median quality for their job family trailed the highest attraction outcomes by a smaller average value, $1,270.62. In addition, given that there appear to be only slight differences in attraction outcomes at the top of the distribution, research that examines how high levels of attraction outcomes can be maintained most efficiently is likely to be of value. It is important to recognize that the utility differences reported here are underestimates. In an effort to develop a more severe test, we examined only those organizations that were able to attract ``sufficient'' numbers of applicants (i.e. 50 or more). Nearly half of the applicant pools for the recruiting season examined attracted fewer than 50 applicants. We eliminated these applicant pools from the analysis in order to avoid any anomalous recruitment outcomes that may have represented misapplication of the organization's recruitment tactics. Obviously, many of these applicant pools represent the outcomes of appropriate execution of the organization's planned recruitment efforts. Had we included all applicant pools in the analysis, it is very likely that some of them would have produced TOP15 quality scores lower than those currently included, which would have increased the differences in utility for the highest-to-lowest comparison in each job family. In addition to comparing quality across applicant pools in the same recruiting period, important information can be gained through comparisons of quality outcomes across recruiting seasons.
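The screening and scoring steps used throughout these analyses (drop pools attracting fewer than 50 applicants, then score each surviving pool by the mean GPA of its top 15 applicants) can be sketched as follows. The pool names and GPA values below are illustrative only, not data from the study.

```python
import random

def top15_quality(pools, min_applicants=50, top_n=15):
    """Mean GPA of the top_n applicants for each sufficiently large pool."""
    scores = {}
    for name, gpas in pools.items():
        if len(gpas) < min_applicants:
            continue  # dropped, as possibly reflecting mis-executed recruiting
        best = sorted(gpas, reverse=True)[:top_n]
        scores[name] = sum(best) / len(best)
    return scores

# Illustrative pools: firm_a attracts 120 applicants, firm_b only 30
random.seed(1)
pools = {
    "firm_a": [round(random.uniform(2.0, 4.0), 2) for _ in range(120)],
    "firm_b": [round(random.uniform(2.0, 4.0), 2) for _ in range(30)],
}
print(top15_quality(pools))  # firm_b is excluded by the 50-applicant screen
```

The size screen is deliberately conservative: as noted above, it removes the weakest-looking pools and therefore shrinks, rather than inflates, the highest-to-lowest utility differences.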
Quality scores for applicant pools for the 1999-2000 recruiting season were compared to a matching set of applicant pools for the same organizations for the previous year. ALL quality measures correlated r = 0.70. This suggests not only that differences in applicant quality exist, but that those differences appear to persist across college recruiting seasons. Future longitudinal research is needed to determine whether this consistency stems from job characteristics, organizational image, or consistent campus event participation by organizations. Thus, the next step will be disentangling the influence of recruiting activities from organization-related inducements. Finally, although the primary focus of this study was to examine whether meaningful differences exist across applicant pools within job families, these data can also be used to examine whether differences in applicant quality exist across job families. Finding differences in attraction outcomes across job families would demonstrate the comparability of these findings to others in the research literature. The second column from the right in Table II (ALL) reports mean overall GPAs by job family. An examination of the ALL means shows
that the maximum difference across job families was 0.27 GPA units, or 0.50 standard deviation units. This is less variability than was found by Sackett and Ostgaard (1994), where the difference between the highest and lowest means across job families was 1.41 standard deviations. However, the job families in this study represent a much smaller range of jobs than those considered by Sackett and Ostgaard. The job families here come from college recruiting and included neither the low-end (e.g. clerk) nor the higher-end positions (e.g. research scientist) included in Sackett and Ostgaard's analysis. Drawing a subset of Sackett and Ostgaard's jobs comparable to those found in our data (i.e. from purchasing agent to general office manager) produces a comparable standardized difference in quality ratings across job families (i.e. 0.09 versus 0.11 for comparable jobs in this study).

Limitations

In this study, we attempted to control for extraneous sources of variance so that any observed differences could be attributed solely to controllable influences on attraction outcomes. Three sources of variance were controlled: seasonal differences in labor market factors, gross differences in job descriptions, and differences due to failures to properly execute recruitment plans. We controlled labor market differences by using data from a single recruitment season and analyzing positions by job family. We used job titles to further control for differences in attraction that might occur due to perceived differences in the type of work. Finally, we removed small applicant pools (i.e. those with fewer than 50 applicants) to eliminate any outcomes due to poor recruiting execution. To the extent that other sources of variance not attributable to controllable factors exist that were not controlled in this study, alternative explanations for these findings may exist.
However, we believe that these differences are likely to be indicative of the differences in attraction outcomes that can be attributed to recruitment factors under the organization's control. We used GPA as our sole measure of applicant quality. This is not to suggest that a majority of organizations consider GPA an adequate or appropriate measure of applicant quality. Data on the validity of GPA for predicting job performance, though, suggest that it is at least a reasonable measure for this purpose (Roth et al., 1996), and GPA has been associated with screening decisions (e.g. Brown and Campion, 1994; Rynes et al., 1997; Thoms et al., 1999). In our study, it was also the only measure that was available for a majority of candidates and for which validity data existed. Moving forward with the evaluation of attraction outcomes will depend upon the development of other measures that can be used with initial applicant data to estimate future job performance (cf. Carlson et al., 2002). Evaluating the quality of an applicant pool requires an appropriate point of reference (i.e. what mean and standard deviation of applicant quality scores

will be used for standardization). In the current study, we standardized GPA data using the mean GPA value for all seniors at this university. An alternative strategy would have been to standardize GPA data within colleges or majors. We chose not to do this because these differences were not large for the primary majors represented in these data. However, standardization within common groups may be preferred when means and standard deviations differ substantially across groups. The Q-sort we conducted should not have a large impact on the outcomes, since it was narrow enough to focus on similar KSAs within a job family. If broader bands had been used, differences in applicant populations would have been introduced that would have made the outcome differences larger. Additionally, our utility analysis methods focused on the differences between standardized scores, not the actual level of the standardized scores; thus the mean of the GPA distribution was not as critical as having an appropriate estimate of the standard deviation. A final limitation involves our UA assumptions. When estimating the values input into the utility analysis other than Zx, we tried to use reasonable, yet conservative, values. We recognize that others may have chosen to use alternative values. We have provided complete information concerning the formulas and values used in our calculations so that interested readers can reconstruct the analyses based on different estimates of rxy, SDy, T, N, or C if they desire to do so. If organizations use UA to compare attraction outcomes, they will want to estimate values for each input based on their best estimates of the values appropriate for evaluating attraction outcomes for the position in question.
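The standardization choice discussed above can be illustrated with a short sketch. The university-wide reference values (mean 2.80, SD 0.54) are those given in the Table IV notes; the within-major reference values below are hypothetical, included only to show how the choice of reference group shifts a pool's standardized score.

```python
def standardize(mean_gpa, ref_mean=2.80, ref_sd=0.54):
    """Z-score a pool's mean GPA against a chosen reference distribution."""
    return (mean_gpa - ref_mean) / ref_sd

pool_mean = 3.40  # illustrative applicant pool mean GPA

z_overall = standardize(pool_mean)  # vs all seniors (Table IV notes values)
z_within = standardize(pool_mean, ref_mean=3.10, ref_sd=0.40)  # hypothetical major norms

print(round(z_overall, 2), round(z_within, 2))  # 1.11 0.75
```

Because the utility analysis operates on differences between standardized scores, a shift in the reference mean cancels out of within-family comparisons; only the standard deviation chosen for the denominator changes the magnitude of the resulting utility estimates.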
Conclusion

This study fills a void in the recruitment literature by demonstrating that meaningful differences in applicant attraction outcomes do exist across applicant pools for different organizations recruiting for similar positions from the same narrow applicant population. Using 391 applicant pools representing 18 different job families drawn from an entire year of college recruiting activity, we found differences in the valuation of attraction outcomes, based on the quality of the top candidates within each applicant pool, as great as $15,000 for a single hire in some job families. This suggests that substantial opportunity for improvement in recruitment outcomes exists for many organizations. Future research should focus on identifying which controllable factors are the most potent influencers of attraction outcomes.
References

Barber, A.E. (1998), Recruiting Employees: Individual and Organizational Perspectives, Sage Publications, Thousand Oaks, CA.
Barber, A.E., Daly, C.L., Giannantonio, C.M. and Phillips, J.M. (1994), ``Job search activities: an examination of changes over time'', Personnel Psychology, Vol. 47, pp. 739-66.
Boudreau, J. (1991), ``Utility analysis for decisions in human resource management'', in Dunnette, M.D. and Hough, L.M. (Eds), Handbook of Industrial and Organizational Psychology, Vol. 2, Consulting Psychologists Press, Palo Alto, CA, pp. 621-752.
Boudreau, J.W. and Rynes, S.L. (1985), ``Role of recruitment in staffing utility analysis'', Journal of Applied Psychology, Vol. 70, pp. 354-66.
Breaugh, J.A. (1992), Recruitment: Science and Practice, PWS-Kent, Boston, MA.
Brown, B.K. and Campion, M.A. (1994), ``Biodata phenomenology: recruiters' perceptions and use of biographical information in résumé screening'', Journal of Applied Psychology, Vol. 79, pp. 897-908.
Carlson, K.D., Connerley, M.L. and Mecham, R.L. III (2002), ``Recruitment evaluation: the case for assessing the quality of applicants attracted'', Personnel Psychology, Vol. 55 No. 2, pp. 461-90.
de Corte, W. (1994), ``Utility analysis for the one-cohort selection decision with a probationary period'', Journal of Applied Psychology, Vol. 79, pp. 402-11.
Grossman, R.J. (2000), ``Measuring up: appropriate metrics help HR prove its worth'', HR Magazine, Vol. 45 No. 1, pp. 28-35.
Hutchinson, K.L. and Brefka, D.S. (1997), ``Personnel administrators' preferences for résumé content: ten years after'', Business Communication Quarterly, Vol. 60, pp. 67-75.
Jenkins, M. (1991), ``The problems of recruitment: a local study'', British Journal of Occupational Therapy, Vol. 54 No. 12, pp. 449-52.
Jurgensen, C.E. (1978), ``Job preferences (what makes a job good or bad?)'', Journal of Applied Psychology, Vol. 63, pp. 267-76.
Knesper, D.J. and Hirtle, S.C. (1981), ``Strategies to attract psychiatrists to state mental hospital work'', Archives of General Psychiatry, Vol. 38 No. 10, pp. 1135-40.
Mason, N.A. and Belt, J.A. (1986), ``Effectiveness of specificity in recruitment advertising'', Journal of Management, Vol. 12, pp. 425-32.
Maurer, S.D., Howe, V. and Lee, T.W. (1992), ``Organizational recruiting as marketing management: an interdisciplinary study of engineering graduates'', Personnel Psychology, Vol. 45, pp. 807-33.
McKinney, A., Mecham, R., D'Angelo, N., Carlson, K.D. and Connerley, M.L. (2001), ``GPA and the likelihood of adverse impact in screening decisions'', paper presented at the 16th Annual Society for Industrial and Organizational Psychology Conference, San Diego, CA.
Murphy, K.R. (1986), ``When your top choice turns you down: effect of rejected offers on the utility of selection tests'', Psychological Bulletin, Vol. 99, pp. 133-8.
Nakache, P. (1997), ``Cisco's recruiting edge'', Fortune, Vol. 136 No. 6, 29 September, pp. 275-6.
Phillips, J.M. (1998), ``Effects of realistic job previews on multiple organizational outcomes: a meta-analysis'', Academy of Management Journal, Vol. 41, pp. 673-90.
Pomeroy, A. (2000), ``EMA keynote speaker says recruiters must declare war'', HR News, Vol. 19 No. 6, pp. 1, 16.
Roth, P.L., BeVier, C.A., Switzer, F.S. III and Schippmann, J.S. (1996), ``Meta-analyzing the relationship between grades and job performance'', Journal of Applied Psychology, Vol. 81, pp. 548-56.
Rynes, S.L. (1991), ``Recruitment, job-choice, and post-hire consequences'', in Dunnette, M.D. (Ed.), Handbook of Industrial and Organizational Psychology, 2nd ed., Sage Publications, Palo Alto, CA, pp. 399-444.
Rynes, S.L., Bretz, R.D. and Gerhart, B. (1991), ``The importance of recruitment in job choice: a different way of looking'', Personnel Psychology, Vol. 44, pp. 487-521.

Rynes, S.L., Orlitzky, M.O. and Bretz, R.D. (1997), ``Experienced hiring versus college recruiting: practices and emerging trends'', Personnel Psychology, Vol. 50, pp. 309-39.
Sackett, P.R. and Ostgaard, D.J. (1994), ``Job-specific applicant pools and national norms for cognitive ability tests: implications for range restriction corrections in validation research'', Journal of Applied Psychology, Vol. 79, pp. 680-4.
Schmidt, F.K., Hunter, J.E., McKenzie, R.C. and Muldrow, T.W. (1977), ``Impact of valid selection procedures on work-force productivity'', Journal of Applied Psychology, Vol. 64, pp. 609-26.
Schwab, D.P., Rynes, S.L. and Aldag, R.L. (1987), ``Theories and research on job search and choice'', in Rowland, K. and Ferris, G. (Eds), Research in Personnel and Human Resources Management, Vol. 5, JAI Press, Greenwich, CT, pp. 129-66.
SHRM/CCH (1999), in Kaylor, N. (Managing Ed.), Human Resources Management: Ideas and Trends in Personnel, Summer.
Thoms, P., McMasters, R., Roberts, M.R. and Dombkowski, D.A. (1999), ``Résumé characteristics as predictors of an invitation to interview'', Journal of Business and Psychology, Vol. 13, pp. 339-56.
Turban, D.B., Forret, M.L. and Hendrickson, C.L. (1998), ``Applicant attraction to firms: influences of organization reputation, job and organizational attributes, and recruiter behaviors'', Journal of Vocational Behavior, Vol. 52 No. 1, pp. 24-44.
Werbel, J.D. and Landau, J. (1996), ``The effectiveness of different recruitment sources: a mediating variable analysis'', Journal of Applied Social Psychology, Vol. 26, pp. 1337-50.
Williams, C.R., Labig, C.E. and Stone, T.H. (1993), ``Recruitment sources and posthire outcomes for job applicants and new hires: a test of two hypotheses'', Journal of Applied Psychology, Vol. 78, pp. 163-72.
Wonderlic, E.F. (1992), Wonderlic Personnel Test Manual, E.F. Wonderlic & Associates, Northfield, IL.