When the ATS Becomes the Defendant
What the Workday Lawsuit Signals for Algorithmic Hiring Risk
October 2025 | The Wildfire Group AI Hiring Risk Advisory & Talent Strategy – Insights & Briefings
Executive Summary
Most employment discrimination cases focus on employer behavior. The Mobley v. Workday, Inc. lawsuit introduces a materially different risk: legal scrutiny of the algorithmic hiring system itself.
In granting conditional certification under the Age Discrimination in Employment Act, a federal court signaled that algorithmic decision systems used at scale may be evaluated not merely as tools, but as mechanisms capable of producing unlawful disparate impact.
This briefing examines what the case reveals about emerging legal exposure in AI-mediated hiring and outlines why algorithmic governance, auditability, and accountability have become non-negotiable for employers relying on automated screening and ranking systems.
The Algorithm on Trial
The plaintiff in Mobley v. Workday does not allege intentional discrimination by the vendor. Instead, the claim centers on disparate impact by proxy.
AI hiring systems are trained to identify patterns, not protected classes. However, when models rely on variables correlated with age, such as graduation dates, employment gaps, or inferred career trajectories, they can systematically disadvantage older workers even in the absence of explicit age-based rules.
Under U.S. employment law, intent is not required. If a process produces a statistically significant adverse impact on a protected group, the burden shifts to the employer to demonstrate that the process is job-related and consistent with business necessity.
That standard, long applied to human decision-making, is now being tested against algorithmic systems.
When Compliance Moves Into the Codebase
Historically, compliance programs in hiring focused on training, documentation, and human behavior. The Workday case underscores a shift already underway:
Compliance risk now resides inside the hiring infrastructure itself.
This raises unresolved but unavoidable questions:
When a third-party system screens or ranks candidates, who is accountable for outcomes?
At what point does software function less like a tool and more like a decision-making actor?
How defensible are employer decisions when the rationale is embedded in opaque models?
Regardless of how liability is ultimately allocated between vendors and clients, employers should assume that shared responsibility will be the default position.
What Talent and Legal Leaders Should Understand Now
1. Vendor responsibility does not eliminate employer exposure
Even if a vendor is deemed an agent, employers remain accountable for employment outcomes. Contractual indemnities may shift costs, but they do not prevent claims, investigations, or reputational damage.
2. Algorithmic hiring audits are no longer optional
Routine analysis of pass-through rates, demographic impact, and decision logic must become standard governance practice. Claims of ignorance will not withstand legal or public scrutiny.
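To make the audit concept concrete, the sketch below computes group-level pass-through rates and applies the EEOC's "four-fifths rule" as a first-pass screening heuristic. The data, group labels, and thresholds here are hypothetical illustrations, not Workday figures or legal advice; a real audit would involve counsel and formal statistical testing alongside this kind of ratio check.

```python
# Minimal sketch of a pass-through (selection-rate) audit using the
# EEOC "four-fifths rule" as a screening heuristic. Group names and
# counts are hypothetical; real audits require proper statistical
# testing (e.g., a two-proportion z-test) and legal review.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (advanced, screened).
    Returns dict mapping group -> pass-through rate."""
    return {g: advanced / screened for g, (advanced, screened) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag any group whose selection rate falls below 80% of the
    highest group's rate (the classic adverse-impact threshold)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {
            "rate": round(r, 3),
            "impact_ratio": round(r / best, 3),
            "flagged": (r / best) < 0.8,
        }
        for g, r in rates.items()
    }

# Hypothetical screening outcomes: (candidates advanced, candidates screened)
screens = {
    "under_40": (450, 1000),    # 45% pass-through
    "40_and_over": (300, 1000), # 30% pass-through
}

report = four_fifths_check(screens)
# "40_and_over" impact ratio: 0.30 / 0.45 ≈ 0.667, below 0.8 -> flagged
```

Running this routinely against each stage of an automated funnel (screen, rank, interview invitation) is what "auditable" means in practice: the flagged ratio is the trigger for deeper validation work, not a legal conclusion in itself.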
3. Transparency is becoming a baseline requirement
Organizations will increasingly be expected to understand how systems are trained, what data they rely on, and how models are validated over time. Black-box decision systems are rapidly becoming indefensible.
4. Scale amplifies risk
Workday has acknowledged processing over a billion job applications during the relevant period. At that scale, even modest bias effects can become systemic civil rights issues rather than isolated operational failures.
A Systems Risk Perspective
Automated hiring systems were once treated as back-office utilities. That assumption no longer holds.
When systems filter, rank, or reject candidates at scale, they effectively participate in decision-making. Courts are beginning to recognize this reality, and governance models must adapt accordingly.
Organizations should be asking, now rather than in discovery:
Can we explain how candidates are screened and ranked?
Have we tested outcomes for disparate impact across protected classes?
Do we know where human oversight begins and ends?
Could we defend these decisions if challenged?
An applicant tracking system that cannot be explained, audited, or governed is no longer an efficiency tool. It is a liability surface.
What Comes Next
Automation in hiring will continue. The question is not whether AI will be used, but whether it will be governed.
The Workday litigation represents more than a single case. It signals a broader shift toward treating algorithmic hiring systems as accountable infrastructure rather than neutral technology.
Organizations that invest now in algorithmic literacy, oversight, and defensible governance frameworks will be better positioned to manage legal, regulatory, and reputational risk. Those that do not may find themselves explaining system behavior after harm has already occurred.
Source
This briefing is adapted from analysis originally authored by Keri Tietjen Smith and revised for Wildfire Group’s Insights & Briefings to reflect an institutional risk and governance perspective.

