AI Hiring Fail: When Overfitting Leads to Bias

Kareem Saleh

Just heard a wild one!

A company launched a new AI hiring tool.

Its top predictor of employee success?

Being named Jared and having played high school lacrosse. 🤯

What does that tell us about the AI?

For starters, “Jared” and “lacrosse” probably aren’t causally linked to job performance.

This is classic overfitting—where the model mistakes quirky patterns in the training data for real insight.

It’s not finding signal; it’s chasing noise.

It also points to a likely problem with the training data—maybe a small, homogeneous group of past hires.
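For intuition, here's a minimal, hypothetical sketch (Python with scikit-learn) of how a small, homogeneous pool of past hires lets a model memorize irrelevant features like a first name or a high school sport. The feature names and data below are made up for illustration, not drawn from any real hiring tool:

```python
# Sketch of overfitting on a tiny, homogeneous training set.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 60  # small pool of past hires

# Hypothetical features: [is_named_jared, played_lacrosse, years_experience]
X = np.column_stack([
    rng.integers(0, 2, n),   # is_named_jared (irrelevant)
    rng.integers(0, 2, n),   # played_lacrosse (irrelevant)
    rng.normal(5, 2, n),     # years_experience (the real signal)
])
# "Success" actually depends only on experience, plus noise.
y = (X[:, 2] + rng.normal(0, 1, n) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# An unconstrained tree memorizes quirks of the small training set,
# including the irrelevant "Jared" and "lacrosse" columns...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:  ", model.score(X_train, y_train))  # near-perfect
print("holdout accuracy:", model.score(X_test, y_test))    # noticeably lower
```

The gap between training accuracy and holdout accuracy is the telltale sign: the model looks brilliant on the people it has already seen and much less impressive on anyone new.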

At best, this is a funny story.

At worst, it’s a biased model that could reinforce stereotypes and lock out qualified candidates.

I’m all for data-driven insights, but this is a hilarious (and slightly terrifying) reminder that AI isn’t infallible.

That’s why at FairPlay, we build bias detection solutions to ensure AI decisions are based on merit, not stereotypes.
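To be clear, the snippet below isn't FairPlay's methodology; it's just a toy illustration of one standard fairness check, the adverse impact ratio behind the EEOC's four-fifths rule, applied to hypothetical selection decisions:

```python
# Toy adverse impact ratio check (the "four-fifths rule"): compare each
# group's selection rate to the most favored group's rate.
# Group labels and decisions below are hypothetical.
def adverse_impact_ratios(decisions: dict) -> dict:
    """decisions maps group name -> list of 0/1 hiring decisions."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

decisions = {
    "jareds_with_lacrosse": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% selected
    "everyone_else":        [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% selected
}
for group, ratio in adverse_impact_ratios(decisions).items():
    flag = "" if ratio >= 0.8 else "  <- below the 4/5 threshold"
    print(f"{group}: impact ratio = {ratio:.2f}{flag}")
```

A ratio below 0.8 for any group is the classic red flag that a selection process, human or algorithmic, deserves a closer look.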

Because let’s face it: we can’t all be Jareds with lacrosse sticks, and we shouldn’t have to be to succeed!
