
By 2025, Artificial Intelligence finally hit its stride. But while economists debate whether we are riding an AI bubble, one thing is certain: the widespread adoption of AI has introduced a new, complex form of liability—algorithmic discrimination.
Last year brought several high-profile cases that highlight the risks of turning hiring over to machines.
The Cases
In May 2025, a federal court in California allowed a nationwide collective action against Workday (Mobley v. Workday) to proceed, alleging age-based algorithmic discrimination. The plaintiffs claim that Workday's AI screening tools systematically disadvantaged older applicants. According to the lawsuit, the software rejected hundreds of applications within hours of submission, often at odd times when no human would plausibly have been reviewing them, and the plaintiffs argue that no legitimate reason existed for many of these rejections. Estimates suggest that the issue affected 1.1 billion applications. If the allegations are proven, the case could affect millions of workers in the United States, with massive ramifications for the HR tech industry.
The case of Harper v. Sirius XM Radio, LLC, filed in August 2025, revealed a similar pattern. In Harper, the plaintiff alleged that the company's AI model used historical hiring data to perpetuate deeply rooted biases. Anyone who uses social media understands the phenomenon: watch one video on Instagram, and your feed is suddenly flooded with similar or identical content. The AI creates an echo chamber.
Similarly, the plaintiff in Harper, after being rejected for nearly 150 positions, alleged that the company's AI platform used metrics such as education, home zip code, and employment history as proxies for race in order to keep the workforce homogeneous. If proven, this would be a classic example of "disparate impact" discrimination: a facially neutral practice applied in a way that produces disproportionately adverse results for a protected group.
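Disparate impact is often screened for quantitatively. One common heuristic is the EEOC's "four-fifths rule": if one group's selection rate falls below 80 percent of the highest group's rate, the practice warrants scrutiny. The sketch below illustrates that arithmetic with hypothetical numbers; it is not a legal test, and the group labels and counts are invented for illustration.

```python
# Four-fifths (80%) rule check for disparate impact.
# All selection counts below are hypothetical, for illustration only.

def selection_rate(selected, applied):
    """Fraction of applicants who passed the screen."""
    return selected / applied

def four_fifths_check(groups):
    """Compare each group's selection rate to the best-performing group.

    Returns {group: (impact_ratio, flagged)}, where flagged means the
    ratio fell below the 0.8 threshold and the screen deserves scrutiny.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: (r / best, r / best < 0.8) for g, r in rates.items()}

# Hypothetical screening outcomes: {group: (selected, applied)}
outcomes = {
    "group_a": (48, 100),   # 48% selection rate
    "group_b": (30, 100),   # 30% selection rate
}

for group, (ratio, flagged) in four_fifths_check(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f}, flagged={flagged}")
```

Here group_b's rate is 30/48 ≈ 0.63 of group_a's, below the 0.8 threshold, so the screen would be flagged for closer review even though no rule explicitly mentions any protected characteristic.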
Dozens of similar cases materialized in 2025, nearly all concerning the hiring process. However, as AI becomes ubiquitous, we can expect these allegations to broaden. Employers are increasingly using AI for employee management, including productivity monitoring and annual reviews, opening new fronts for potential litigation.
The “Why” and the “How”
Why does algorithmic discrimination occur? The answer is both simple and mysterious.
The simple part is the source material. AI programs are trained on human-created content, which is inherently fallible and often prejudicial. Studies have confirmed that AI can mimic discriminatory patterns that even "reinforcement learning from human feedback" cannot fully stamp out. These discriminatory outcomes have been documented in employment, housing, rental applications, medical diagnoses, and even bicycle purchases, where AI platforms disadvantaged people based on whether a user's name sounded Black or white.
The mystery lies in the specific application. Like humans, AI is unpredictable. Even top data scientists do not fully understand why AI makes a specific decision in a specific moment. It’s often a black box. Did the technology discriminate? Or did it act appropriately? If HR departments cannot explain how the technology works, they will struggle to defend against its demons in court.
Protecting Your Organization
If HR departments and business leaders want to avoid becoming a headline like Workday or Sirius XM, they must take proactive steps:
- Change Your Expectations: Stop assuming AI is neutral. Assume that it may discriminate and plan accordingly. Carefully vet AI platforms and scrutinize vendor methodologies. If a vendor cannot produce its latest bias audit, move on. Demand clear explanations of how candidates are assessed.
- Be Ready to Listen and Act: Every business should regularly conduct bias audits of its AI practices (ideally under privilege with legal counsel). Establish systems where applicants or employees who feel unfairly rejected by AI can appeal to a human for review. Simply trusting the machine is not enough, and “the algorithm did it” will never be a valid legal defense.
- Watch the Indemnification Clauses: In a perfect world, your AI vendor would indemnify you for damages caused by their faulty code. However, indemnification is becoming rare now that vendors like Workday face direct liability. Review your contracts closely. If the platform fails a bias audit right after a hiring round, will you be protected?
- Prompt Carefully: Even the best AI system can discriminate if prompted poorly. Be hyper-aware of the language in your job postings. Avoid adjectives like “energetic,” “enthusiastic,” or “dynamic,” as an AI recruiting assistant may interpret these as a command to filter for younger employees.
- Check Your Insurance: Does your current policy cover algorithmic discrimination? Whether it is a general employment policy or specific Employment Practices Liability Insurance, every business using AI must review its coverage gaps.
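The "appeal to a human" recommendation above can be sketched in miniature: route every AI rejection into a queue that a person must work through, and capture the tool's stated rationale so the human reviewer has something to evaluate. The data model and names here are hypothetical illustrations, not any vendor's actual API.

```python
# Sketch of a human-appeal queue for AI screening rejections.
# The Application fields and AppealQueue design are hypothetical.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Application:
    applicant_id: str
    ai_decision: str           # "advance" or "reject"
    ai_rationale: str = ""     # explanation captured from the screening tool

@dataclass
class AppealQueue:
    pending: deque = field(default_factory=deque)

    def request_review(self, app: Application):
        # Only AI rejections are appealable; advances need no override.
        if app.ai_decision == "reject":
            self.pending.append(app)

    def next_for_human(self):
        """Hand the oldest appealed rejection to a human reviewer."""
        return self.pending.popleft() if self.pending else None

queue = AppealQueue()
queue.request_review(Application("A-101", "reject", "low keyword match"))
queue.request_review(Application("A-102", "advance"))
case = queue.next_for_human()
print(case.applicant_id)  # → A-101
```

The point of the design is the paper trail: if litigation ever asks "did a human ever look at this rejection, and why did the machine reject it?", the queue and the stored rationale are the answer.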
AI is a powerful engine for efficiency, but it lacks a moral compass. It does not know the law; it only knows patterns. If you hand the keys to your hiring process over to an algorithm without keeping a hand on the wheel, do not be surprised when it drives you straight into a courtroom.
Brian Bouchard is an attorney at Sheehan Phinney in Manchester, practicing labor and employment, business litigation, and construction litigation. He can be reached at bbouchard@sheehan.com. For more information, visit Sheehan.com.