The legal landscape

  • The EEOC has stated that Title VII applies to algorithmic hiring tools. If a vendor's AI causes disparate impact, the employer is liable, not just the vendor.
  • NYC Local Law 144 requires bias audits of automated employment decision tools (AEDTs) before use, plus annual re-audits and candidate notification.
  • The EU AI Act classifies hiring AI as "high-risk," requiring conformity assessments.
  • Several states (Illinois, California) have AI-in-hiring laws phasing in across 2025–2026.

What a bias audit looks like

  1. Run the AI on a representative dataset.
  2. Compute selection rate by protected class (gender, race, age, disability).
  3. Apply the 4/5ths rule: any group whose selection rate is less than 80% of the highest group's rate indicates potential disparate impact (see the sketch after this list).
  4. Document the findings, then either fix the model or stop using it.
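Here is a minimal Python sketch of steps 2–3. The group names, records, and helper name are illustrative, not taken from any vendor's report format.

    from collections import Counter

    # Hypothetical outcome records: (protected-class group, was_selected).
    records = [
        ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def four_fifths_check(records, threshold=0.8):
        """Flag any group whose selection rate is below 80% of the top group's rate."""
        totals, selected = Counter(), Counter()
        for group, was_selected in records:
            totals[group] += 1
            selected[group] += was_selected  # bool counts as 0 or 1
        rates = {g: selected[g] / totals[g] for g in totals}
        top = max(rates.values())
        return {g: {"selection_rate": round(r, 3),
                    "impact_ratio": round(r / top, 3),
                    "flagged": r / top < threshold}
                for g, r in rates.items()}

    print(four_fifths_check(records))
    # group_a: rate 0.75, ratio 1.0, not flagged
    # group_b: rate 0.25, ratio 0.333, flagged -> potential disparate impact

The impact ratio (a group's rate divided by the top group's rate) is the core per-category number a bias audit reports.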

Practical guidance

  • Vendor due diligence: Get the independent bias-audit report from your AI vendor. Don't accept "we audit internally" as an answer.
  • Don't accept a black box. If you can't explain why the AI rejected a candidate, you can't defend the decision.
  • Keep a human in the loop. AI surfaces candidates; humans decide. This single principle eliminates much of the legal exposure.
  • Document overrides. When a recruiter goes against the AI ranking, capture the reason. A pattern of overrides correlated with protected class is a red flag (see the sketch below).
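
A sketch of what override tracking could look like, assuming a simple log of recruiter decisions; the field names, groups, and reasons are hypothetical.

    from collections import Counter

    # Hypothetical override log; one entry per recruiter decision.
    override_log = [
        {"group": "group_a", "overridden": False, "reason": None},
        {"group": "group_a", "overridden": True,  "reason": "missing required certification"},
        {"group": "group_b", "overridden": True,  "reason": "hiring manager preference"},
        {"group": "group_b", "overridden": True,  "reason": "culture fit"},
    ]

    def override_rate_by_group(log):
        """Override rate per protected-class group; a persistent gap is the red flag."""
        totals, overrides = Counter(), Counter()
        for entry in log:
            totals[entry["group"]] += 1
            overrides[entry["group"]] += entry["overridden"]
        return {g: round(overrides[g] / totals[g], 3) for g in totals}

    print(override_rate_by_group(override_log))
    # {'group_a': 0.5, 'group_b': 1.0} -- a gap this large, sustained over real
    # hiring volumes, is exactly the pattern worth escalating

Capturing the free-text reason alongside the group makes the log useful twice: it supports the individual decision if challenged, and it lets you run this kind of aggregate check periodically.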