
pymetrics: Using Neuroscience and Data Science to Revolutionize Talent Management


pymetrics technology introduces two science-based improvements to help companies hire
smarter and help job seekers find career paths that capitalize on their strengths. These two
advances are: 1) neuroscience assessment and 2) data science analytics and algorithms.

Technological advances leveraged by pymetrics

1. Neuroscience assessment

pymetrics games are directly adapted from neuroscience research. They use people’s
behavior to assess cognitive, social, and personality traits. In research contexts, behavior-
based assessments have largely replaced self-report instruments wherever possible.
There is a growing body of scientific research pointing towards the use of behavior, rather
than self-report, as a better method of measurement.1 This is because behavior can be
directly measured, and this type of measure is free of some of the conscious and
unconscious bias that is inherent to self-report measures.

2. Data science techniques

Classical analytic methods have been disrupted by modern data science techniques such as
artificial intelligence and machine learning. These newer data-driven approaches are better
at dealing with complex data sets, better at capturing nonlinear relationships, and better at
predicting future outcomes. As a result, they are better suited to modeling real-life complex
problems.

According to a report by McKinsey & Company, a leading healthcare organization has used these data science techniques to generate more than $100 million in savings, while
simultaneously improving the engagement of its workforce.2 McKinsey has also reported
that banking organizations which have replaced classic statistical techniques with machine
learning techniques have experienced 10 percent increases in sales of new products, 20
percent savings in capital expenditures, 20 percent increases in cash collections and 20
percent declines in churn.3

1. Donaldson, S. I., & Grant-Vallone, E. J. (2002). Understanding self-report bias in organizational behavior research. Journal of Business and Psychology, 17(2), 245-260.
2. McKinsey Quarterly, "Power to the new people analytics," March 2015.
3. McKinsey Quarterly, "An executive's guide to machine learning," June 2015.
How pymetrics leverages these technologies

1. Data collection using neuroscience games.

pymetrics games are built on decades of well-established behavioral science research. We have patented a set of 12 neuroscience games that assess 50+ key cognitive, social, and emotional traits, and we continuously develop additional games and traits to meet the needs of our clients. Many competencies can be acquired on the job, but pymetrics focuses heavily on traits that are the hardest to train, such as flexibility, proclivity for risk, and decisiveness. In combination with the gamified assessment, pymetrics leverages 5 additional games to measure numerical and logical reasoning. Together, these games provide a snapshot of a person's unique characteristics.
Candidate Engagement. Traditional assessment tools, such as personality tests (e.g., the Hogan Personality Inventory) and assessment centers, can be long and cumbersome. pymetrics breaks this pattern by offering engaging online games that are quick to play and let test takers forget they are even being assessed for fit to a job. Our games are hosted on both web and mobile (iOS and Android) platforms.
Non-directional. pymetrics games are non-directional, meaning that unlike school GPA or
standardized tests, one end of the spectrum isn’t any better than the other. Instead, the
games measure traits where either end of the spectrum can be beneficial based on the
demands of a particular role.
Multi-trait assessment. Meta-analytic studies show that multi-trait assessments, i.e., assessments that measure more than one type of input (e.g., cognitive and personality traits), greatly outperform single-measure tests4 (e.g., measuring only personality). pymetrics is a multi-trait assessment.

2. Analytics using data science.

We utilize state-of-the-art data science techniques of the kind often used to recommend movies (Netflix), products (Amazon), and music (Pandora). pymetrics builds custom models for client companies, specific to each job opening, and based on these proprietary matching algorithms we recommend candidates for jobs. Job seekers play the pymetrics games, and a job fit score band for each individual is calculated by running the individual's game data through the job model custom-built for the client company (model-building details below).
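As a loose illustration of this scoring flow (this is not pymetrics' actual model: the trait names, weights, and band cut points below are invented), a game-derived trait vector can be run through a job-specific model and the resulting fit score mapped to a band:

```python
# Hypothetical sketch of the flow described above: trait vector -> custom
# job model -> fit score -> score band. All names and numbers are invented.

def fit_score(traits: dict, weights: dict) -> float:
    """Weighted sum of trait values; a stand-in for the proprietary model."""
    return sum(weights[t] * traits.get(t, 0.0) for t in weights)

def score_band(score: float) -> str:
    """Map a continuous fit score to a coarse band, as a client might see it."""
    if score >= 0.75:
        return "highly recommended"
    if score >= 0.50:
        return "recommended"
    return "not recommended"

# Example candidate: trait values normalized to [0, 1].
candidate = {"attention": 0.9, "risk_tolerance": 0.6, "planning": 0.8}
job_model = {"attention": 0.5, "risk_tolerance": 0.2, "planning": 0.3}

score = fit_score(candidate, job_model)  # 0.5*0.9 + 0.2*0.6 + 0.3*0.8 = 0.81
print(score_band(score))  # "highly recommended"
```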

4. Harvard Business Review, "The Problem with Using Personality Tests for Hiring," August 27, 2014.
Custom-built company models. Our job-specific models utilize data collected from current company employees. When a company looks to build a custom model for a job function within its firm, it identifies strong-performing incumbents and provides us with email access so we can invite them to play our games. The selected incumbents then play our games, and we compare their aggregate trait profiles against the aggregate trait profiles of a random, demographically matched subset (between 20,000 and 50,000 individuals) drawn from our database of over 100,000 people. All data collection is done anonymously using random IDs.
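A minimal sketch of this profile comparison, with entirely made-up trait names and values, might look like the following; the traits on which incumbents differ most from the baseline are candidates for the job model's signal:

```python
# Illustrative comparison of aggregate trait profiles: high-performing
# incumbents vs. a matched baseline sample. Data are invented.
from statistics import mean

incumbents = [
    {"attention": 0.82, "risk_tolerance": 0.40},
    {"attention": 0.78, "risk_tolerance": 0.35},
    {"attention": 0.85, "risk_tolerance": 0.45},
]
baseline = [
    {"attention": 0.60, "risk_tolerance": 0.55},
    {"attention": 0.58, "risk_tolerance": 0.50},
    {"attention": 0.65, "risk_tolerance": 0.52},
]

def profile(sample, traits):
    """Aggregate (mean) trait profile of a sample of people."""
    return {t: mean(person[t] for person in sample) for t in traits}

traits = ["attention", "risk_tolerance"]
inc_profile = profile(incumbents, traits)
base_profile = profile(baseline, traits)

# Per-trait gap between incumbents and baseline.
gaps = {t: inc_profile[t] - base_profile[t] for t in traits}
print(gaps)  # attention higher for incumbents, risk_tolerance lower
```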
Bias testing. It is well known that gender and ethnic bias during resume review is real and
pervasive. Findings on this issue have been extensively published.5 We firmly believe that
diversity is beneficial to a company, both economically and socially. The value of diversity in
the workplace is manifested in better business outcomes including problem-solving6,
decision-making, and sales and profits7.
In the pymetrics methodology, this is tackled at two different steps in the process. First, pymetrics utilizes assessments that peer-reviewed academic literature has shown to be generally unrelated to demographic variables; that is, our models are built on an unbiased foundation. Second, and unique to pymetrics, after a model is built the behavioral traits extracted from the games are stripped of gender- and ethnicity-biased components using statistical methods via Audit-AI, our open-source solution for detecting bias in machine learning models. If bias is detected, problematic traits are reweighted and/or removed from the model until parity across gender and ethnic groups is reached, as tested on an independent baseline sample with gender and ethnicity data; the result is the final model. As a consequence, pymetrics does not supply any models that show gender or ethnic bias.
Improved prediction. The science-based improvements of pymetrics (neuroscience assessment and data science techniques) are much more likely to yield more powerful predictions than typical psychometric assessments. Traditional assessment tools often show limited capacity to predict performance, capturing only between 9% (personality measures such as Conscientiousness) and 25% (cognitive ability) of the variance in performance.8
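The "percent of variance captured" figures follow from squaring a predictor's validity coefficient, i.e., its correlation r with performance. A quick illustration (the coefficients below are round illustrative values, not exact figures from Schmidt & Hunter, 1998):

```python
# Variance in performance explained by a predictor = r**2, where r is the
# predictor's validity coefficient. Coefficients here are illustrative.

def variance_explained(r: float) -> float:
    """Share of outcome variance captured by a predictor with validity r."""
    return r ** 2

print(f"{variance_explained(0.30):.0%}")  # a personality-scale-sized r -> 9%
print(f"{variance_explained(0.50):.0%}")  # a cognitive-ability-sized r -> 25%
```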
Employee retention. Given the proverbial war for talent, the increasing number of voluntary departures, and the performance and monetary costs of replacing employees, it is essential that companies think about retention even before an employee is hired. With pymetrics, companies receive recommendations on how likely candidates are to fit the trait profile of a top-performing incumbent, and therefore how well they will fit all aspects of the role. Research shows that when person-job fit is high, turnover is

5. Moss-Racusin et al., 2012; RAND Corporation, 2005; What Works: Gender Equality by Design, 2016.
6. Hong & Page, 2004; Page, 2007.
7. Hoogendoorn, Oosterbeek, & van Praag, 2013.
8. Schmidt & Hunter, 1998.
drastically reduced.9 Data to date show that pymetrics has helped several companies reduce turnover by up to 28%.
Employee internal mobility. pymetrics games can also be integrated directly into a company's careers marketplace to help employees seeking a change find a more suitable role within the same organization, and to advance talent management practices. Incorporating such a system helps employers maximize their talent and cultural investments through the proper distribution of internal talent. Perhaps more important, such a system can elevate a company's talent management practice through its capacity to (a) develop future leaders by matching them with new opportunities to learn and grow, and (b) streamline and enrich succession planning with respect to bench strength and insightful reporting.

Compliance

Uniform Guidelines
Each step of the pymetrics process has been implemented using methods in line with the
EEOC Uniform Guidelines on Employee Selection Procedures. This includes:
1. Validation. pymetrics uses several methods of validation:
a. Criterion related validation. This is the gold standard in validating assessment
tools under the Uniform Guidelines. pymetrics satisfies both concurrent and
predictive criterion related components.
i. Concurrent validity is satisfied during the model-building process using the k-fold cross-validation method. In this method, the original incumbent sample is randomly partitioned into k non-overlapping subsets; each of the k subsets is used in turn as the test set while the remaining k-1 subsets together form the training set. The average error across all k trials is then computed and used to estimate how accurately the predictive model will generalize to an independent data set.
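The k-fold procedure just described can be sketched as follows; the toy one-dimensional data and 1-nearest-neighbor "model" below merely stand in for pymetrics' proprietary pipeline:

```python
# Minimal k-fold cross-validation loop: partition, train on k-1 folds,
# test on the held-out fold, average the k error estimates.

def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous, non-overlapping folds."""
    fold_size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < rem else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def predict_1nn(train, x):
    """Label of the nearest training point (toy stand-in for a real model)."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def cross_validate(data, k):
    folds = k_fold_indices(len(data), k)
    errors = []
    for test_idx in folds:
        held_out = set(test_idx)
        train = [data[i] for i in range(len(data)) if i not in held_out]
        test = [data[i] for i in test_idx]
        wrong = sum(predict_1nn(train, x) != y for x, y in test)
        errors.append(wrong / len(test))
    return sum(errors) / k  # average error across all k held-out folds

# Toy, cleanly separable data: x < 5 labeled 0, x >= 5 labeled 1.
data = [(x, int(x >= 5)) for x in range(10)]
print(cross_validate(data, k=5))  # 0.0 on this separable toy data
```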
ii. Predictive validity is satisfied over time for each client and is studied under a lens tailored to the context. Most commonly, we examine the relationship between pymetrics results and job-relevant KPI data among those hired with pymetrics scores (when pymetrics is used as one data point and there is therefore variance in pymetrics results among hires), and/or improvements in aggregate-level performance
9. Kristof-Brown, Zimmerman, & Johnson, 2005.
between work groups (pre-post analysis and/or comparison of work groups using and not using pymetrics). pymetrics' IO Psychology team is happy to work with clients to determine the most appropriate data and analysis when considering the local predictive validity of the clients' model(s). To date, clients have shown high predictive validity with regard to various performance dimensions (e.g., an associated 139% increase in median sales vs. target) and turnover (associated with up to a 28% increase in retention).
2. Construct validation. A common method used to satisfy construct validity is to use an
already established and reliable measure of that same construct. The pymetrics game
battery is adapted from well-established peer-reviewed academic research in the fields
of Neuroscience, Cognitive Psychology, Social Psychology, and Behavioral Economics.
This research thoroughly evidences that the games indeed measure the traits they purport to measure, and relevant research is available upon request.
3. Content validation. pymetrics has procedures in place for conducting a structured job analysis questionnaire (pyJAQ), which provides a rational mapping of a client's job requirements to the traits extracted through the pymetrics assessment, another source of validity evidence. The pyJAQ is a survey measuring the Knowledge, Skills, Abilities, and Work Activities important in a role, and is based on the Occupational Information Network, which was developed under the sponsorship of the U.S. Department of Labor/Employment and Training Administration (USDOL/ETA) through a grant to the North Carolina Department of Commerce. Work by these groups of experts has validated the items within the pyJAQ for use in distinguishing between hundreds of jobs.

4. Reliability. Each of our neuroscience games evidences strong reliability, both from the academic research pertaining to their use in neuroscience contexts and from pymetrics' own test-retest and split-half reliability analyses. This is perhaps unsurprising given their standard use in clinical environments to identify relatively stable differences between groups of individuals. pymetrics has conducted reliability studies (internal consistency and test-retest) to ensure the games are dependable, repeatable, and yield consistent information.
5. Fairness. Fairness is a core value at pymetrics and as such, we proactively work to debias
all selection models. The bias-testing process is described above.
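As one illustration of the reliability analyses mentioned under point 4, a split-half estimate with the Spearman-Brown correction can be computed as below; the item responses are invented, and this is a generic textbook procedure rather than pymetrics' exact analysis:

```python
# Split-half reliability: split items into odd/even halves, correlate the
# half scores across test takers, then apply the Spearman-Brown formula to
# step the correlation up to full-test length. Response data are invented.
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def split_half_reliability(responses):
    """responses: list of per-person item scores (equal length per person)."""
    odd = [sum(r[0::2]) for r in responses]   # items 1, 3, 5, ...
    even = [sum(r[1::2]) for r in responses]  # items 2, 4, 6, ...
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown correction to full length

responses = [
    [1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
]
print(round(split_half_reliability(responses), 2))
```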

Accommodating disabled individuals. In accordance with the Americans with Disabilities Act (1990), pymetrics currently offers accommodations for color-blindness, learning disabilities, and attention deficit hyperactivity disorder (ADHD). pymetrics is also actively working to widen the range of accommodated disabilities.

Accommodating international participants. Beyond all the aforementioned processes in place to enhance fairness within our selection tool, we also offer our games in multiple languages, including Arabic, Chinese (Simplified), Dutch, English, French, German, Greek, Indonesian, Italian, Japanese, Portuguese (Brazilian), Portuguese (Portugal), Spanish (LATAM), Spanish (Spain), Turkish, and Vietnamese. Immense effort, including forward and backward translation, went into ensuring that these versions of our games demonstrate the same validity as the original English version. pymetrics is also actively working on growing the number of supported native languages.

Data protection. pymetrics is ISO27001 and Privacy Shield certified, and adheres to the
GDPR principles. Data is hosted in AWS and stored in physically secured, geographically
distributed data centers; encryption controls are in place.

pymetrics offerings

Selection. Companies run their new applicants through our assessments, which include a set of 12 award-winning, neuroscience-based games that assess a candidate's fit to a role based on their cognitive, social, and emotional traits. An additional 5 games measure numerical and logical reasoning. Based on these assessments, we return score bands for the applicants derived from the custom job model. These score bands are used as a data point or cut-off in the hiring process. We highly recommend building specific job profiles (using the methodology outlined above) and giving candidates score bands for all of the job profiles.

Sorting. Instead of requiring candidates to choose a specific role to apply for upfront, our
Sorting feature allows candidates to go through the pymetrics games first in order to guide
which roles they might apply to based on their fit, demystifying the application process for
candidates and diversifying the roles for which applicants apply.

Internal Mobility. Employees of a company are put through our assessments. They are then sorted into various positions within the company according to their fit with various jobs, as determined by custom job model algorithms.

Marketplace. Our goal is to help everyone find their place in the world of work. When a
candidate is not a fit to the role they apply for, they can be invited to our marketplace to find
other opportunities where they are a better fit.
Client Dashboard. The pymetrics platform includes a client dashboard presenting the trait 'fingerprint' of the job in question, top-fit candidates for that job, the custom model's accuracy and cross-validation results, and proof of model fairness with respect to ethnicity/race and gender. Our platform offers myriad advantages, including allowing companies to discover quality, diverse candidates through an easy-to-use interface.

Workforce Insights. Based on the comprehensive data pymetrics gathers when building
models, we are able to produce unique and actionable insights for your team based on the
trait patterns we find in your existing employees, such as fluidity between roles based on
traits.

Evidence of our validity and effectiveness: Case Studies

We have empirical evidence supporting the use of pymetrics for selection across a variety of
contexts. The sources of evidence fall into the following three categories:
1. Predictive criterion-related validity
2. Construct Validity
3. Fairness/Adverse Impact

1. Case Study for Predictive Criterion-Related Validity


A global consulting firm contracted with pymetrics to build a custom model for selecting candidates into its intense 2.5-month training program, after which high-performing trainees were given permanent offers. The performance of those the client invited into the training program was then evaluated by the firm. The training survival rate (i.e., the ratio of trainees offered full-time roles to the total number of trainees) was 75% prior to pymetrics; after employing pymetrics, the survival rate jumped to 91%, a roughly 60% reduction in trainee attrition (from 25% to 9%).

2. Case Study for Construct Validity


2.1 Case Study for Increased Efficiency.
In assessing applicants for a leading global consulting firm, we examined how using the pymetrics methodology in place of resume review altogether would impact yield. Using only pymetrics, and no resume, resulted in a 2x yield: from 8.5% with resume review only to 16.7% with pymetrics only.

Interview decided by    Selected to interview    Offers    Yield
Resume review only      94                       8         8.5%
pymetrics only          30                       5         16.7%

*All interviews were conducted blind as to whether a candidate was a resume pass-on or a pymetrics pass-on.

2.2 Case Study for Reducing Loss of Talent.

A global management consulting firm asked us to recommend 10 applicants for interview from a group of 141 candidates who had been rejected through the standard resume-review assessment. Of the 10 we recommended based on the pymetrics methodology, 1 received an offer (10% yield), matching the company's usual offer-to-successful-interview rate.

3. Case Study for Fairness — Increased Diversity


3.1 Multinational Financial Services Corporation.
pymetrics sourced candidates for a multinational financial services corporation. The pymetrics recommendation pool consisted of 43% women, as opposed to the firm's traditional turnout of 31% women. The use of pymetrics increased the number of women in first-round interviews by 19%, and in the overall candidate pool it increased the number of female matches by 30%.
3.2 Global Professional Services Company.
pymetrics sourced applicants for a global professional services company.
pymetrics’ methodology recommended approximately 48% women and 52%
men. Our recommendations led to a gender balanced hiring outcome — of the
total extended offers, 44% were women and 56% were men.
3.3 Global Financial Institution.
pymetrics helped a global financial institution achieve a 50/50 gender balance for
two straight years, in a role that was traditionally 80% male and 20% female.

Summary

We have provided evidence that our tool can be utilized for four distinct purposes. First and foremost, we guarantee that our models will help build a more diverse workforce by carefully detecting and removing bias from the selection process. Second, every candidate who plays our games is provided with an engaging assessment experience, an insightful feedback report, and potentially a strong career match. Third, we raise efficiency and effectiveness in prediction by increasing yield, reducing missed talent, and expanding the reach of your recruiting team. Lastly, we help companies fight attrition and retain employees by modeling different roles within the company and matching current employees who may be considering leaving with a new internal position.
