If you feed an AI your company's promotion data from the last 10 years, will it predict future success, or just replicate past biases?

Be careful: AI trained on biased data doesn’t reduce risk — it multiplies it.

A powerful example is COMPAS, the algorithm used in the U.S. justice system to predict reoffending risk. It was meant to be fair and data-driven. Instead, it consistently flagged Black defendants as “high risk” at nearly twice the rate of white defendants.

The algorithm didn’t find criminal patterns — it found patterns of systemic injustice and called them “truth.”

The reason this happens is simple: an AI optimizes to find patterns, not to be fair. If you give it data reflecting systemic injustice, the AI won't correct it; it will learn it, scale it, and call it "efficiency."
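This mechanism is easy to demonstrate. Below is a deliberately simplified Python sketch using synthetic, hypothetical promotion data (not any real dataset or our actual system): a "model" that simply optimizes for matching historical outcomes ends up reproducing the bias baked into those outcomes.

```python
import random

random.seed(0)

# Synthetic "historical promotion" data: two groups with identical
# skill distributions, but past evaluators gave group A a 0.2 bias bonus.
# (Purely illustrative numbers -- not drawn from any real organization.)
data = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    skill = random.random()              # true ability, same for both groups
    bias = 0.2 if group == "A" else 0.0
    promoted = (skill + bias) > 0.6      # biased historical label
    data.append((group, promoted))

# A naive "model" that learns the historical promotion rate per group.
# It maximizes agreement with past decisions -- and so inherits the bias.
def rate(group):
    outcomes = [promoted for g, promoted in data if g == group]
    return sum(outcomes) / len(outcomes)

print(f"Learned promotion rate, group A: {rate('A'):.2f}")
print(f"Learned promotion rate, group B: {rate('B'):.2f}")
```

Even though both groups are equally skilled by construction, the learned rates diverge sharply: the model has no way to distinguish "signal" from inherited evaluator bias.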

At Board&Leaders, we’ve chosen a different path.

We eliminate this problem at the source by not relying on subjective historical data. Instead, our proprietary AI operates on a validated, bias-resistant framework rooted in neuroscience: the Consistency model developed by Klaus Grawe. We combine this foundation with objective biometrics and cognitive responses to assess potential, independently of past performance or evaluator bias.

This allows us to understand a person’s potential with no dependency on an interviewer's opinion or flawed historical records.

In the end, the COMPAS case teaches us a fundamental lesson. In human decisions, the quality of your data isn't just a technical issue—it's a moral imperative.

And we chose to start from a clean source.

Learn more about the COMPAS case and the risk of multiplying bias through AI:

“Machine Bias” – ProPublica (2016): ProPublica found that Black defendants were nearly twice as likely to be incorrectly flagged as high risk compared to white defendants.

AI Bias: How It Impacts AI Systems – Tredence (2025): Tredence explains how AI bias can lead to unfair hiring, lending, and customer engagement decisions, and explores the types of bias in AI systems, real-world examples, and strategies for building ethical, unbiased AI.

Serious Games and Virtual Reality for Mental Health: A Review of Recent Developments – Giglioli et al. (2021): Virtual behavioral simulations grounded in the Consistency Model offer a low-bias alternative to traditional tools by providing more ecologically valid environments and directly measuring responses to real-world psychological triggers.

Related Blogs

The Cost of a Bad Hire (David Robles Fosg)


Look beyond the Interview Brain: Measure System 1 Thinking
