A hiring test that doesn't predict performance is an expensive way to filter candidates. Concurrent validity is how you figure out whether a test actually works before you bet hiring decisions on it. The method is simple: give the test to current employees, compare the scores to their performance ratings or output metrics, and look at the correlation. If the correlation is strong, candidates who score well on the test are likely to perform well in the role. If the correlation is weak, the test isn't measuring what matters. The EEOC pays close attention to this evidence in adverse-impact discrimination cases.
How Concurrent Validity Studies Are Run

The study has three components. First, identify current employees in the target role with known performance data. Second, administer the test to that group under the same conditions it would be given to candidates. Third, calculate the statistical relationship between test scores and performance. The most commonly used statistic is a correlation coefficient (Pearson's r or a point-biserial), though regression-based approaches are common in larger samples.
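As a rough illustration of the third step, here is a minimal Python sketch that correlates employee test scores with performance data. The arrays, values, and choice of SciPy are illustrative assumptions, not part of any specific study design:

```python
# Minimal sketch: correlate employee test scores with performance ratings.
# All values are hypothetical, one row per current employee in the target role.
import numpy as np
from scipy.stats import pearsonr, pointbiserialr

test_scores = np.array([62, 74, 81, 58, 90, 77, 69, 85, 73, 66])
perf_ratings = np.array([3.1, 3.8, 4.2, 2.9, 4.6, 3.9, 3.4, 4.4, 3.6, 3.2])

# Continuous criterion (e.g., manager ratings): Pearson's r.
r, p_value = pearsonr(test_scores, perf_ratings)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")

# Binary criterion (e.g., met quota vs. did not): point-biserial correlation.
met_quota = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1, 0])
r_pb, p_pb = pointbiserialr(met_quota, test_scores)
print(f"Point-biserial r = {r_pb:.2f} (p = {p_pb:.3f})")
```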
A coefficient of 0.30 is the rough floor for legally defensible use in hiring. Values between 0.30 and 0.50 are considered moderate; above 0.50 is rare but strong.
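Whether an observed coefficient near that floor means much depends heavily on sample size. One common way to check, shown in the sketch below, is a confidence interval via the Fisher z-transformation; this is an illustrative approach, not a requirement of the guidelines, and the numbers are made up:

```python
# Sketch: 95% confidence interval for an observed validity coefficient
# using the Fisher z-transformation. Sample sizes are illustrative.
import numpy as np
from scipy.stats import norm

def r_confidence_interval(r, n, confidence=0.95):
    """Return (lower, upper) bounds on a correlation coefficient."""
    z = np.arctanh(r)                      # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)              # standard error in z space
    z_crit = norm.ppf(0.5 + confidence / 2)
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)        # back-transform to the r scale

# The same r = 0.30 is far less convincing with 40 employees than with 300.
for n in (40, 300):
    lo, hi = r_confidence_interval(0.30, n)
    print(f"n = {n}: r = 0.30, 95% CI [{lo:.2f}, {hi:.2f}]")
```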
Concurrent Validity vs. Predictive Validity

Predictive validity does the same thing as concurrent validity, but across time. The test goes to candidates, you hire a batch of them regardless of score, and you measure their performance 6 to 12 months later. The approach is more rigorous but much harder to run, because you have to hire people based on something other than the test you're validating. Most employers use concurrent validity because it's faster and doesn't require making hiring decisions around a test that hasn't been validated yet.
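The mechanics of the predictive design are the same correlation, just computed after a time lag. A hypothetical sketch, assuming hire-time scores and later review ratings live in two tables (the table and column names are illustrative):

```python
# Sketch of the predictive design: test scores captured at hiring time are
# joined to performance ratings collected 6-12 months later.
import pandas as pd
from scipy.stats import pearsonr

scores_at_hire = pd.DataFrame({
    "employee_id": [101, 102, 103, 104, 105, 106],
    "test_score":  [71, 64, 88, 59, 80, 75],
})
later_reviews = pd.DataFrame({
    "employee_id": [101, 102, 103, 104, 105, 106],
    "rating_6mo":  [3.5, 3.1, 4.4, 2.8, 4.0, 3.7],
})

merged = scores_at_hire.merge(later_reviews, on="employee_id")
r, p = pearsonr(merged["test_score"], merged["rating_6mo"])
print(f"Predictive validity: r = {r:.2f} (p = {p:.3f}, n = {len(merged)})")
```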
Which Validity Method Does the EEOC Prefer?

The Uniform Guidelines on Employee Selection Procedures accept three methods: criterion-related validity (which includes both concurrent and predictive), content validity, and construct validity. None is formally preferred, though criterion-related studies with good statistical power tend to hold up best under legal challenge. The guidelines are codified at 29 CFR Part 1607.
Where Concurrent Validity Breaks Down

There are three common failure modes. Range restriction: your current employees all passed the old hiring bar, so their test scores cluster at the top of the scale and the correlation looks artificially low. Criterion contamination: the performance review ratings or turnover metrics you're using are biased, so the test is being validated against a noisy target. Sample size: fewer than 50 test takers makes the results statistically unreliable.
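Range restriction can be partially corrected statistically. The sketch below applies Thorndike's Case II correction, assuming you can estimate the standard deviation of scores in the broader applicant pool; the figures are invented for illustration:

```python
# Sketch: correcting an observed validity coefficient for range restriction
# using Thorndike's Case II formula. The SD values are illustrative.
import math

def correct_for_range_restriction(r_obs, sd_unrestricted, sd_restricted):
    """Estimate the validity coefficient in the unrestricted (applicant) pool."""
    u = sd_unrestricted / sd_restricted
    return (r_obs * u) / math.sqrt(1 - r_obs**2 + (r_obs**2) * (u**2))

# Observed r of 0.22 among incumbents whose scores are tightly clustered
# (SD 6) relative to the applicant pool (SD 10).
r_corrected = correct_for_range_restriction(0.22, sd_unrestricted=10, sd_restricted=6)
print(f"Corrected r = {r_corrected:.2f}")
```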
Building a Concurrent Validity Study That Holds Up Under Legal Review

Document everything. The EEOC and courts look closely at the study design, sample, performance metrics, and statistical methods. Work with an industrial-organizational psychologist (or a qualified internal PhD) for anything involving hundreds of candidates. Run the study separately for each protected group where sample size allows, and report the results transparently. The test can't just work on average; it must not disproportionately screen out protected groups relative to its ability to predict performance.
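As a rough illustration of the per-group analysis, the sketch below computes the validity coefficient separately for each subgroup. The group labels and data are hypothetical and far too small for a real study, which would need adequate samples per group and review by a qualified psychologist:

```python
# Sketch: compute the validity coefficient separately for each subgroup.
# Group labels and values are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "test_score": [70, 82, 64, 77, 68, 79, 61, 74],
    "rating":     [3.4, 4.1, 3.0, 3.8, 3.3, 3.9, 2.9, 3.6],
})

for group, sub in df.groupby("group"):
    r, p = pearsonr(sub["test_score"], sub["rating"])
    print(f"Group {group}: r = {r:.2f} (p = {p:.3f}, n = {len(sub)})")
```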
The Uniform Guidelines on Employee Selection Procedures are at ecfr.gov/current/title-29/subtitle-B/chapter-XIV/part-1607, and the EEOC guidance on employment tests and selection procedures explains the validation framework. The U.S. Department of Labor's Employment and Training Administration publishes assessment validation guidance for the public workforce system at dol.gov/agencies/eta.