Workday’s AI hiring tools screened over a billion job applications. A nationwide collective action now alleges those tools discriminated against applicants over 40. Workday argued it merely provides the platform and does not make hiring decisions. The court disagreed. If your SaaS product scores, ranks, or filters people in any context that touches a protected class, this case applies to you.
Derek Mobley is an African American man over the age of 40 who suffers from anxiety and depression. He holds a bachelor’s degree in finance from Morehouse College and has worked in financial, IT help-desk, and customer service roles, including positions at Hewlett Packard Enterprise, the Internal Revenue Service, and AT&T. Since 2017, he has applied for over 100 jobs with companies that use Workday’s applicant screening platform. He was rejected every time.
In some instances, Mobley received rejection emails within minutes of submitting his application. In one documented case, he received a rejection at 1:50 a.m., less than an hour after applying at 12:55 a.m. The speed of the rejections suggested automated screening rather than human review.
On February 21, 2023, Mobley filed a lawsuit in the U.S. District Court for the Northern District of California alleging that Workday’s AI-powered applicant screening tools discriminated against him on the basis of race, age, and disability. The case has since become the first large-scale legal challenge to AI-driven hiring tools in the United States, and its procedural trajectory offers a roadmap for how courts are likely to treat SaaS vendors whose products make or influence consequential decisions about people.
What Workday’s Tools Do
Workday provides human resource management services on a subscription basis to businesses across industries. More than 10,000 companies use Workday as their applicant tracking system. The platform collects, processes, and screens job applications through AI and machine learning tools that score, rank, and sort candidates.
Workday’s website states that it can reduce time to hire by automatically moving candidates forward or removing them from the recruiting process. Its Candidate Skills Match feature compares skills extracted from resumes to job requirements and assigns matching scores. In April 2024, Workday acquired HiredScore, which adds two additional AI features: Spotlight (matching candidates to job requisitions based on title, location, experience, and education) and Fetch (surfacing internal employees or previously unsuccessful candidates for alternate roles).
The core allegation is that these tools do not simply implement employer criteria mechanically. They participate in the decision-making process by recommending some candidates and rejecting others, and the algorithms that drive those recommendations reflect and amplify biases present in the training data. Mobley alleges that the tools use proxy indicators (such as schools attended, employment history patterns, and credential types) that correlate with race, age, and disability status, producing disparate impact even without discriminatory intent.
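To see how a facially neutral variable can produce that kind of skew, consider a deliberately simplified sketch. Nothing here reflects Workday's actual algorithms; the data, the graduation-year cutoff, and the scoring rule are invented for illustration. The point is that a screen that never reads age can still sort applicants by age through a correlated proxy.

```python
# Hypothetical illustration: a screening rule that never sees age can still
# produce an age-based disparate impact through a correlated proxy feature.
# All data and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Applicant:
    age: int            # protected characteristic -- never used by the screen
    grad_year: int      # facially neutral proxy that correlates with age
    skill_score: float  # job-relevant signal

def screen(applicant: Applicant) -> bool:
    """Toy screening rule: prefers recent graduates with decent skill scores.
    'grad_year >= 2015' looks neutral but tracks age almost perfectly."""
    return applicant.grad_year >= 2015 and applicant.skill_score >= 0.5

applicants = [
    Applicant(age=28, grad_year=2019, skill_score=0.7),
    Applicant(age=31, grad_year=2016, skill_score=0.6),
    Applicant(age=45, grad_year=2002, skill_score=0.8),  # rejected despite higher skill
    Applicant(age=52, grad_year=1995, skill_score=0.9),  # rejected despite higher skill
]

for a in applicants:
    print(f"age {a.age}: {'advance' if screen(a) else 'reject'}")
# Both applicants over 40 are rejected even though age was never an input.
```

This is the structure of a disparate impact claim: no discriminatory rule, no discriminatory intent, yet a systematically worse outcome for a protected group.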
The “We Just Provide the Tool” Defense
Workday’s primary argument was straightforward: it is a software vendor, not an employer. It does not make hiring decisions. The companies that subscribe to Workday’s platform make their own hiring choices. Workday provides the tool. The employer uses it.
The court rejected this defense.
In July 2024, Judge Rita Lin denied Workday’s motion to dismiss, finding that Mobley had plausibly alleged Workday could be held liable as an “agent” of the employers who use its tools. The anti-discrimination statutes under which Mobley sued (Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA)) prohibit discrimination not only by employers but also by their agents. The court found that Workday’s tools are sufficiently involved in the hiring process (screening applications, scoring candidates, and recommending or rejecting applicants) that the company could be treated as acting on behalf of the employers.
This is the ruling that makes the case relevant beyond employment law. The court held that a SaaS vendor whose product plays a meaningful role in a decision-making process can be treated as an agent of the customer making the decision. Workday does not hire anyone. It provides software that employers use to decide whom to hire. But because that software participates in the decision (not just facilitates it), the vendor shares the legal exposure.
The Equal Employment Opportunity Commission (EEOC), the federal agency responsible for enforcing workplace anti-discrimination laws, reinforced this theory in April 2024 by filing an amicus brief supporting Mobley’s position. The brief stated that algorithmic hiring tools can violate anti-discrimination laws even without explicit intent, and that the vendors who build and sell those tools can be held accountable alongside the employers who deploy them.
Collective Certification: The Scale of the Exposure
On May 16, 2025, Judge Lin granted preliminary certification of a nationwide collective action under the ADEA. The collective comprises all individuals aged 40 and over who, from September 24, 2020, through the present, applied for job opportunities using Workday’s platform and were denied employment recommendations.
To appreciate the scale: Workday disclosed in its filings that 1.1 billion applications were rejected through its system during the relevant period. The collective could potentially include hundreds of millions of applicants. Judge Lin addressed Workday’s argument that the collective was unmanageably large by stating that allegedly widespread discrimination is not a basis for denying notice. The court suggested that if traditional notice methods proved impractical, notice could be issued via social media or Workday’s own platform.
On July 29, 2025, the court expanded the scope of the collective to include applicants processed through HiredScore AI features. Workday had argued that HiredScore was acquired after Mobley’s original complaint was filed and that its AI operates differently from Workday’s native screening tools. The court rejected both arguments, finding that HiredScore is part of Workday’s job application platform and that material differences in scoring algorithms would be addressed at the decertification stage, not as a bar to initial certification. Workday was ordered to provide a list of customers who enabled HiredScore AI features by August 20, 2025.
The court approved a notice plan on December 2, 2025. Opt-in notices are being issued to affected applicants.
It is important to note what has not happened. This is a collective certification, not a merits ruling. The court has not found that Workday’s tools actually discriminate. It has found that Mobley’s allegations are sufficient to proceed and that the proposed collective members are similarly situated. Workday can still move to decertify the collective at a later stage. Discovery is underway. Expert testimony on algorithmic bias will follow. A decision on the merits could arrive in 2026, but the case may also settle before then.
Why This Matters for SaaS Founders
The Workday case is about AI hiring tools. Most SaaS founders are not building applicant tracking systems. But the legal principle the case establishes (that a SaaS vendor can be liable as an agent of its customer when the vendor’s product participates meaningfully in a consequential decision) applies far beyond employment.
If your product scores, ranks, filters, or recommends in any context that produces outcomes with legal significance for individuals, you are in Workday’s position. The specific domain varies, but the pattern is the same.
Lending and credit. If your SaaS product scores loan applicants, recommends approval or denial, or ranks borrowers by risk, the Equal Credit Opportunity Act applies, along with the Fair Housing Act for housing-related credit. Disparate impact on the basis of race, national origin, sex, or age creates exposure for both the lender and the vendor whose algorithm drove the decision.
Insurance underwriting. Insurance is inherently about risk differentiation, and actuarially justified distinctions are legal. But most states prohibit the use of race and national origin as rating factors, and an increasing number restrict the use of facially neutral variables that serve as proxies for prohibited characteristics. If your product scores insurance applicants or recommends coverage decisions using AI, the state insurance regulatory framework applies, and the rules vary by state and line of insurance. Colorado’s AI Act, effective mid-2026, specifically covers AI systems used in insurance decisions.
Tenant screening. If your product evaluates rental applicants, assigns risk scores, or recommends approval or denial, the Fair Housing Act applies. The U.S. Department of Housing and Urban Development (HUD) has taken the position that algorithmic screening tools that produce discriminatory outcomes violate the Act regardless of intent.
Content moderation and platform access. If your product determines who can access a service, whose content is surfaced or suppressed, or who is flagged for review, the decisions may implicate civil rights protections depending on the context.
In each of these domains, the Workday defense (“we just provide the tool, the customer makes the decision”) failed. The court looked at what the tool actually does, not how the vendor describes its role. If the tool participates in the decision by scoring, ranking, or filtering, the vendor is not merely facilitating. It is acting as an agent of the decision-maker.
What Your Legal Stack Needs to Address
Acceptable use restrictions on high-risk automated decisions. Your AUP should restrict the use of your AI features for fully automated decisions that produce legal or similarly significant effects on individuals without appropriate human review. This does not mean prohibiting all automated decision-making. It means defining boundaries. If your product scores candidates, applicants, or claimants, your terms should require the customer to maintain human oversight of final decisions and prohibit the customer from using your tool as the sole basis for adverse actions against individuals. This limits your exposure to claims that your tool “made” the decision, and it creates a contractual basis for shifting liability to the customer if they automate beyond what your terms authorize.
Indemnification for algorithmic bias claims. Standard SaaS indemnification covers IP infringement. It typically does not cover discrimination claims arising from how the customer uses your product. If your tool participates in consequential decisions about people, your indemnification structure needs to address this exposure. The allocation depends on who controls the inputs (the customer provides the job criteria, the training data, the decision thresholds) and who controls the algorithm (you do). A reasonable allocation: the customer indemnifies you for claims arising from their criteria and their use of the tool outside your documented guidelines; you indemnify the customer for claims arising from defects in the algorithm itself, including bias in model design or training data you control.
Liability cap carve-outs for discrimination claims. Your limitation of liability clause likely caps total liability at 12 months of fees with a consequential damages exclusion. If your product faces a discrimination class action, the damages will dwarf your annual contract value. Consider whether algorithmic bias claims should be carved out from the general cap (like IP infringement and confidentiality breaches typically are) or addressed through separate insurance coverage.
Bias testing and documentation. The EEOC’s amicus brief in this case stated that algorithmic hiring tools can violate anti-discrimination laws even without intent. If your product makes or influences consequential decisions about people, you should be testing for disparate impact across protected classes and documenting the results. This is not just good practice. It is becoming a legal requirement. Colorado’s AI Act, effective mid-2026, mandates impact assessments for high-risk AI systems used in employment, lending, insurance, and housing. California and Illinois have enacted their own requirements. The documentation you create through bias testing is both a compliance obligation and a defense asset if your product is challenged.
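As an illustration of what a basic disparate impact test can look like, here is a minimal sketch built around the EEOC’s four-fifths rule, under which a group’s selection rate below 80 percent of the most-favored group’s rate is commonly treated as evidence of adverse impact. The group labels, audit sample, and data shape are assumptions; in practice you would run this against real decision logs joined to demographic data collected for audit purposes.

```python
# Minimal disparate impact check using the EEOC "four-fifths rule":
# a group's selection rate below 80% of the most-favored group's rate is
# commonly treated as evidence of adverse impact. The group labels and
# audit sample below are assumptions -- adapt to your own data pipeline.

from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from an audit sample."""
    applied: Counter[str] = Counter()
    selected: Counter[str] = Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_report(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Impact ratio of each group's rate against the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit sample: (age_band, advanced_past_screen)
sample = (
    [("under_40", True)] * 62 + [("under_40", False)] * 38
    + [("40_plus", True)] * 31 + [("40_plus", False)] * 69
)

for group, ratio in four_fifths_report(sample).items():
    flag = "ADVERSE IMPACT?" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
# Here 40_plus advances at 31% vs 62% for under_40: a 0.50 impact ratio,
# well below the 0.8 line.
```

The four-fifths rule is a screening heuristic, not a legal conclusion; statistically significant disparities can matter even above the 0.8 threshold. But running and archiving tests like this, on a regular cadence and across each protected class your product could plausibly affect, is the kind of documentation regulators and courts increasingly expect.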
Transparency about how your AI works. Workday’s tools operate as a black box from the applicant’s perspective. The applicant submits an application, the algorithm processes it, and a recommendation is generated. The applicant never knows what factors influenced the outcome. As regulatory frameworks mature (the EU AI Act, Colorado AI Act, and California’s automated decision-making regulations all require varying degrees of transparency), your ability to explain how your AI reaches its conclusions becomes a contractual and regulatory obligation, not just a product feature.
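What counts as adequate explanation is still being defined regulation by regulation, but at a minimum it means capturing, at decision time, what the system weighed. Here is a hypothetical sketch of such a decision record; every field name and threshold is illustrative, not drawn from any statute or from Workday’s product.

```python
# Hypothetical shape of a per-decision audit record that supports later
# explanation of an automated recommendation. All field names and the
# review threshold are illustrative assumptions.

import json
from datetime import datetime, timezone

def decision_record(applicant_id: str, score: float,
                    factor_weights: dict[str, float],
                    model_version: str) -> str:
    """Serialize what the system knew and weighed at decision time."""
    return json.dumps({
        "applicant_id": applicant_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "score": score,
        # Factors sorted by contribution magnitude, so a later inquiry can
        # be answered with what actually drove the recommendation.
        "factors": sorted(factor_weights.items(), key=lambda kv: -abs(kv[1])),
        "human_review_required": score < 0.6,  # illustrative threshold
    })

print(decision_record("app-123", 0.42,
                      {"skills_match": 0.30, "experience_gap": -0.25}, "v2.1"))
```

Records like this serve double duty: they satisfy transparency obligations as they come into force, and they give you evidence of what your system actually did if a decision is later challenged.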
The Through-Line
The first three posts in this series covered privacy and data handling: wiretapping claims against AI voice assistants, fabricated consent in medical records, and BAA breaches by health-tech vendors. This post covers a different category of risk: what happens when your AI product’s outputs affect people in ways that implicate anti-discrimination law.
The common thread is the same. In every case, the SaaS vendor argued that it merely provides a tool. In every case, the court looked at what the tool actually does and found the vendor’s involvement sufficient to create liability. ConverseNow was not just a tape recorder. Sharp’s AI scribe was not just a note-taker. Verily was not just a data processor. And Workday is not just a software platform. Each product participated in a consequential process, and that participation is what the courts are holding vendors accountable for.
For SaaS founders, the principle is consistent across all four cases: if your product does something consequential with customer data or to end users, your contracts need to account for the specific ways that activity creates legal exposure. The “we just provide the tool” defense is not working.
This is the fourth post in the AI Privacy Litigation series. Previous: The Verily HIPAA Whistleblower Case: What Happens When a SaaS Vendor Breaches Its BAA. Next: FTC AI-Washing Enforcement: What SaaS Founders Get Wrong About Marketing AI Features.
For the contractual framework, see the AI-Enabled SaaS series, particularly AI-Specific Acceptable Use and AI Outputs: IP Ownership, Accuracy Warranties, and the Marketing Claims Problem.
No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them. No Boiler is not a law firm, and communications with us do not create an attorney-client relationship or carry any expectation of confidentiality. Use of our platform and content is governed by our Terms of Service and Privacy Policy.