AI Screening America’s Workers: The Law Is Waking Up
Editor’s note: This article was originally published on LinkedIn by Keri Tietjen Smith and is republished here for Wildfire clients and partners.
WASHINGTON (Feb. 17, 2026) — A familiar dismissal shows up whenever someone raises concerns about artificial intelligence in hiring.
If the risk were real, skeptics say, the Equal Employment Opportunity Commission would be all over it. And even if it were real, President Donald Trump can just ignore the EEOC anyway.
That story is comforting, and it’s wrong in ways that matter.
The legal pressure tied to hiring technology isn’t waiting for a single dramatic “federal crackdown” moment. It’s building through a layered system that includes charge volume, investigations, settlements, private lawsuits and state and local rules that force companies to explain what their systems did and why. Even if one channel slows, the rest of the system doesn’t stop.
Start with the plainest signal, the one that doesn’t trend on social media: discrimination charges.
The EEOC reported 88,531 new discrimination charges in fiscal year 2024, an increase of more than 9% over fiscal 2023. The agency ended that year with 52,080 charges pending, a slight increase over the prior year, which indicates the pipeline wasn't just receiving more claims; it was carrying more of them forward.
In fiscal 2025, the EEOC described an intake system still operating at industrial scale, receiving about 270,000 inquiries a year that result in roughly 90,000 new charges.
That’s not an enforcement system asleep at the wheel. It’s a system under constant load.
Now add what companies and vendors are underestimating: private litigation around applicant tracking systems and AI hiring tools is expanding through quieter, more flexible legal theories than classic discrimination claims. The most visible “AI hiring” lawsuit, Mobley v. Workday, is testing whether a major HR platform can be pulled into anti-discrimination liability when its tools function like a screening gate.
But the bigger early-2026 shift is this: plaintiffs aren’t only suing over discrimination. They’re suing over transparency, notice and dispute rights, arguing that AI systems generate “reports” and “scores” about people without telling them and without giving them a meaningful way to correct errors.
That shift became harder to ignore on Jan. 21, 2026, when Reuters reported a proposed class action against Eightfold AI alleging the company helped employers “secretly” score job seekers, violating the federal Fair Credit Reporting Act and California consumer protection laws.
So the pushback misses the landscape. The risk signal isn’t only EEOC lawsuits. It’s the overall volume of disputes entering the system, the legal theories being tested in court, and the fact that state and local rules increasingly force disclosure and documentation, whether Washington is interested or not.
Why EEOC lawsuits aren’t the right dashboard
When people say “the EEOC isn’t paying attention,” they often mean “I don’t see lots of EEOC-filed lawsuits.”
That’s a bad metric.
EEOC lawsuits are just one output of a larger enforcement ecosystem. Most of the system is intake, investigation, findings, conciliation and resolutions that never become a headline.
The EEOC’s own performance reporting emphasizes outcomes well beyond courtroom battles. In fiscal 2024, the agency said it recovered nearly $700 million for victims of discrimination through administrative enforcement and litigation combined. That money comes from settlements and resolutions, not only trials.
Even more important, the legal system is designed so private litigation can proceed even when the EEOC doesn’t sue.
The EEOC tells the public plainly that once it issues a Notice of Right to Sue, a person generally has 90 days to file a lawsuit in court. The point is simple: fewer EEOC lawsuits do not mean fewer lawsuits overall. They can mean the agency is narrowing its priorities, resolving cases through other mechanisms, or simply working through volume. Meanwhile, the private pipeline continues.
That matters for hiring technology because the funnel itself is a perfect target for scalable claims.
The funnel is the target, not just the employer
AI hiring tools sit at the top of the pipeline, touching huge populations. They screen, rank, score and route candidates before a human ever speaks to them. Standardized workflows applied to thousands of people are exactly what class actions and systemic claims are built for.
When plaintiffs’ lawyers look at a modern hiring stack, they don’t only ask whether an algorithm is biased. They ask whether the process is lawful, documented, disclosed and accessible.
Did the system create an automated score that influenced selection? Did the candidate know? Was there a meaningful path to dispute inaccurate data? Could a candidate request an accommodation inside the process, or did the workflow block it? Did the system collect biometric identifiers like face geometry or voiceprints?
Those are process questions, and process questions are easier to litigate than intent. That’s why the risk can feel “quiet” and still be severe.
Workday: the bellwether case for platform-layer liability
The headline case in the AI hiring space remains Mobley v. Workday because it tests a core assumption inside HR technology: that liability sits with the employer, not the platform.
In July 2024, Reuters reported a federal judge in California allowed key claims to proceed in a proposed class action alleging Workday’s AI-powered hiring software discriminated against applicants based on race, age and disability. Reuters reported the judge said Workday could be considered an “employer” under certain anti-discrimination laws because it performed key hiring functions on behalf of client companies, even though it did not directly hire the applicants.
Earlier, Reuters reported the EEOC filed an amicus brief supporting the idea that Workday could be covered under anti-discrimination theories tied to its employment gatekeeping functions.
Workday has denied wrongdoing and said it would contest the claims. The final outcome matters, but the strategic impact is already here: courts are willing to entertain the idea that a vendor platform can be pulled into the legal frame if it functions like part of the selection mechanism.
That changes the exposure calculus for everyone.
When vendors can plausibly be defendants, litigation becomes more complex, discovery becomes broader, and employers have a harder time claiming the “tool” was independent of their decisions. Meanwhile, vendors can no longer count on being shielded as neutral infrastructure.
Eightfold: the early 2026 shift to “secret scoring” and information-rights claims
If Workday represents the discrimination-coverage battle, the Eightfold lawsuit represents the next wave: treating AI-generated applicant scoring as a regulated information product.
Reuters reported on Jan. 21, 2026, that job seekers sued Eightfold AI, alleging it violated the Fair Credit Reporting Act and California consumer protection laws by generating secret reports that evaluated applicants without their knowledge or consent and without giving them an opportunity to dispute inaccuracies.
This theory matters because it expands risk beyond discrimination arguments, which can be statistically and technically complex. Information-rights claims can be framed around notice, authorization, access and dispute mechanisms.
In plain English: a company can be sued not only because its system was biased, but because the system built a shadow profile of an applicant and never disclosed it.
That expands legal exposure even for companies that believe their systems are “fair.” Courts will ultimately decide how far these theories go, but the lawsuit itself is a signal that the plaintiffs’ bar has found a new hook in the hiring funnel.
Biometric privacy: big exposure, even without proving discrimination
Some of the fastest-scaling liability theories in hiring don’t require proving discrimination at all.
Illinois’ Biometric Information Privacy Act, commonly called BIPA, has been used to sue over the collection and use of biometric identifiers such as facial geometry and voiceprints. In the hiring context, that can arise when vendors analyze video interview footage or voice data.
Bloomberg Law reported on Jan. 21, 2026, that job seekers dropped a lawsuit against HireVue over alleged collection of facial and voice information during automated video interviews. Even a case that ends quietly makes one point unavoidable: biometric analysis in hiring isn’t a hypothetical issue; it has already been in court.
BIPA’s design is what makes it dangerous for employers and vendors. It can impose statutory damages and has supported class-action litigation, which means a misstep can scale quickly even when there’s no allegation of discriminatory intent.
Disability discrimination: the EEOC has warned about AI screening risks for years
One reason it’s inaccurate to say the EEOC “isn’t paying attention” is that the agency has been issuing guidance and warnings about automated tools for years, especially related to disability.
In May 2022, the EEOC and the Justice Department warned employers that AI and algorithmic tools can unlawfully screen out people with disabilities in violation of the Americans with Disabilities Act. The EEOC maintains an “Artificial Intelligence and the ADA” resource hub that includes its technical assistance on AI tools used to assess job applicants and employees.
In 2023, the EEOC issued technical assistance discussing how employers should assess adverse impact in software, algorithms and AI used in employment selection procedures under Title VII.
This guidance doesn’t mean every employer is violating the law. It does mean the federal government has been on record that AI tools in hiring can create liability, and that employers remain responsible for their selection procedures even when a vendor supplies the tool.
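The arithmetic at the center of this guidance is deliberately simple, which is part of why it scales so well in disputes. The traditional rule of thumb from the federal Uniform Guidelines, the “four-fifths rule,” compares each group’s selection rate to the most-selected group’s rate and flags ratios below 0.8. A minimal sketch of that calculation, using invented numbers rather than any real audit data, looks like this:

```python
# Illustrative four-fifths rule check. All counts are hypothetical;
# a real adverse impact analysis uses actual applicant and selection
# data for each demographic group, plus statistical significance tests.

applicants = {"group_a": 400, "group_b": 250}  # candidates screened
selected = {"group_a": 120, "group_b": 45}     # candidates advanced

# Selection rate per group, and the highest rate as the benchmark.
rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

In this made-up example, group_b’s impact ratio works out to 0.60, well below the 0.8 threshold, so the tool would warrant scrutiny. New York City’s bias audits, discussed next, publish a closely related statistic, the impact ratio, broken out by demographic category.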
New York City’s AEDT law: documentation and notice are now enforceable requirements
Not all of the pressure is federal.
New York City’s Automated Employment Decision Tools law, known as Local Law 144, requires employers and employment agencies to have a bias audit conducted no more than one year before using an automated tool, make certain information publicly available and provide specific notices to candidates.
A December 2025 audit from New York’s state comptroller summarized the law’s core obligations: bias audits, public posting of results, and candidate notice about AEDT use and data collection.
Whether Local Law 144 is perfect is beside the point. It’s the direction of travel. More jurisdictions are moving toward rules that force companies to document and disclose their use of automated tools. “We didn’t know what the vendor did” isn’t a defense that gets stronger over time.
The politics claim: Trump can influence the EEOC, but he can’t switch off the legal system
It’s true that a president can influence an agency’s leadership and priorities.
Reuters reported in January 2025 that Trump fired Democratic EEOC commissioners and the agency’s general counsel in a move described as unprecedented and likely to prompt legal challenges. Reuters later reported the Trump administration argued those firings were legal and sought to dismiss a lawsuit brought by former commissioner Jocelyn Samuels challenging her removal.
Reuters also reported that the EEOC signaled a more aggressive posture toward certain DEI programs, warning in March 2025 that some common DEI practices could violate federal anti-discrimination law. In December 2025, Reuters reported EEOC Chair Andrea Lucas said corporate America would face a “DEI reckoning” in 2026 as the agency scrutinized workplace initiatives that consider protected characteristics in employment decisions.
Those are significant policy shifts. They influence what the EEOC prioritizes, what guidance it issues and what cases it pursues.
But none of that equals “Trump can ignore the EEOC.” That phrase collapses under basic civics.
A president can’t repeal Title VII, the ADA, or the ADEA by preference. Courts remain open. Statutes remain enforceable. Private plaintiffs can still sue, and state and local rules can still bite. Even if a federal agency shifts priorities, litigation can accelerate through private claims and local enforcement.
There’s also a second point most people miss: in early 2026, the EEOC moved to centralize litigation authority at the commission level, which makes the system less “one-person-driven,” not more.
On Jan. 21, 2026, the EEOC voted to modify its delegation of litigation authority, adopting a resolution that requires commission approval of almost all litigation and returns to the EEOC chair and commissioners the authority to approve or disapprove new and intervening cases.
That’s an internal governance change, and it matters. It undercuts the simplistic idea that a president can personally “turn off” the enforcement system.
An important twist: disparate impact and the legal vacuum problem
There’s another political and legal dynamic that matters specifically for AI.
Disparate impact, the legal concept that targets seemingly neutral policies that disproportionately harm certain groups, has been one of the most important tools for addressing systemic discrimination, including in hiring systems.
In October 2025, the Associated Press reported that the EEOC would stop investigating workplace discrimination complaints based on disparate impact, following an executive order discouraging the use of disparate impact in civil rights enforcement. Critics told AP that this shift could weaken protections and place more burden on individuals to pursue claims in court, even as AI-driven hiring systems become more common.
If disparate impact enforcement narrows at the federal level, that doesn’t make risk disappear. It reshapes it.
It can push more disputes into private litigation and into state and local venues, where plaintiffs and regulators may fill the gap with different legal theories, including consumer protection, disability accommodation failures, biometric privacy, and local compliance rules. The litigation shifts, it doesn’t vanish.
What’s actually happening in early 2026
Put the record together and a clearer picture forms.
The EEOC is processing high volumes of charges and inquiries. Courts are testing novel theories that could expand vendor liability in AI hiring systems, as in the Workday litigation. Plaintiffs are bringing transparency and consumer-law claims tied to “secret scoring,” as in the Eightfold lawsuit. Biometric privacy theories have already reached the hiring context, including suits involving video interview platforms. And cities like New York are imposing compliance duties that require bias audits, public summaries, and candidate notices.
This is not a fringe issue that nobody is watching. It’s an issue that many employers and vendors are under-documenting.
That’s the real risk signal. The companies that get burned in 2026 and beyond won’t necessarily be the ones that used automation. They’ll be the ones that can’t explain it.
When a hiring system is automated at scale and opaque to applicants, it creates a predictable collision with law. People will demand to know why they were screened out. Regulators will demand to know whether the process excluded protected groups or blocked accommodations. Courts will demand to know whether an AI-generated score is effectively a “report” about a person. And when employers can’t answer, litigation fills the vacuum.
In 2026, that vacuum is getting smaller.
Keri Tietjen Smith draws on organizational I/O psychology and years of talent acquisition, recruitment and operations experience, from startups to Fortune 50 companies, to advise clients on AI policy, governance and accountability in AI-influenced hiring and workforce decision systems. Her background includes a B.S. in psychology and certifications in Human Design and AI governance from ASU and Oxford University; she is currently pursuing an M.S. in AI Management and Policy at Purdue University.
She is Executive Director of Talent Systems Infrastructure at Wildfire Group AI Hiring Risk Advisory & Talent Strategy, where she advises organizations and policymakers on hiring systems risk, compliance, and the downstream labor impacts of automation. Her work examines workers’ rights, litigation as a driver of AI governance, and the policy gaps emerging as employment decisions become increasingly automated.
Wildfire Group Talent Design and AI Risk Advisory
Sources
EEOC, “2024 Annual Performance Report” (FY 2024 charge volume and outcomes).
EEOC, “Fiscal Year 2025 Agency Financial Report” (inquiries and charge estimates).
Bloomberg Law, report on HireVue biometric data collection allegations in hiring (Jan. 21, 2026).
Associated Press, report on EEOC and disparate impact investigations (Oct. 2025).

