AI Governance, Ethics and Leadership
AI Governance, Ethics and Leadership Podcast
AI Governance in Action: Workday's Bias in Hiring Problem
What Happened, Why AI Bias Is Costly to Fix, and Relevant Questions to Properly Vet AI-Driven HRIS Systems

In today’s deep dive, we explore the bias-in-hiring allegations against Workday.

  • About Workday

  • A timeline of bias-in-hiring complaints

  • What likely happened (from an engineering leader’s perspective)

  • How in-house ethics/governance teams should have helped

  • How to properly vet AI-driven HRIS systems

About Workday: A System Built for Scale - But at What Cost?

Founded in 2005, Workday is a major force in human capital management (HCM) and enterprise resource planning (ERP) software. With approximately 11,000 corporate clients, including many Fortune 500 companies, its AI-driven Human Resource Information System (HRIS) influences hiring decisions across industries.

At this scale, any algorithmic bias within Workday’s AI-powered hiring tools isn’t a small glitch - it’s a systemic governance failure affecting millions of job seekers. Workday’s recent class-action lawsuit over alleged hiring discrimination raises urgent concerns about AI fairness, accountability, and the unseen risks embedded within these systems.

Image generated with Canva AI

A Timeline of Bias in Hiring Complaints Against Workday

🗓 February 2023 - First major lawsuit filed. Derek Mobley sues Workday in the U.S. District Court for the Northern District of California, alleging that Workday’s AI-powered hiring tools discriminated based on race, age, and disability, leading to over 100 job rejections.

🗓 July 2024 - Federal judge allows lawsuit to proceed. A judge acknowledges that Workday’s AI may have contributed to discriminatory hiring outcomes, allowing Mobley’s case to move forward.

🗓 May 2025 - Nationwide collective action expands. More plaintiffs join the lawsuit, arguing that Workday’s AI unfairly penalized older candidates through automated scoring and ranking systems, making it a landmark AI bias case.

🗓 May 2025 - A Workday spokesperson stated: “We continue to believe this case is without merit. We’re confident that once Workday is permitted to defend itself with the facts, the plaintiff’s claims will be dismissed.”

At this point, Workday’s AI hiring processes are no longer just HR automation tools; they are embedded decision-makers shaping employment opportunities at scale. With roughly 11,000 corporate clients, a substantial share of the working population has likely applied for a role through Workday at some point. The question isn’t just whether Workday’s algorithms were flawed, but whether its governance structures failed to intervene before harm was done.

How AI Models Reinforce Hiring Bias

Most people assume AI makes hiring decisions based on merit - but AI doesn’t "think" like humans. It detects patterns and repeats them without understanding context. If hiring data historically discriminated against older candidates, Workday’s AI wouldn’t question that pattern; it would simply learn it and replicate it.

Here's how bias emerges in AI-driven hiring:

1️⃣ Learning from Existing Data. AI models train on past hiring decisions, absorbing historical patterns - even biased ones.

2️⃣ Pattern Matching Without Judgment. If past data shows that older candidates were rejected more often, the AI assumes that’s the correct hiring standard, reinforcing discrimination without explicit intent.

3️⃣ Scaling the Bias Across Thousands of Clients. Workday’s AI doesn’t just impact one company’s hiring - it influences decisions across 11,000 organizations. If bias isn’t corrected early, it spreads at scale, affecting millions of job seekers.

The result? AI bias isn’t intentional - it’s just math gone unchecked. And that’s why the businesses that build and deploy these models are accountable for them and liable for their failures.
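To make those three steps concrete, here is a minimal Python sketch of how a naive scoring rule absorbs a biased pattern. The data and the threshold "model" are entirely synthetic and illustrative - this is not Workday’s system, just the failure mode in miniature:

```python
# Minimal sketch of bias absorption: a score "learned" from historical
# decisions replicates whatever pattern the data contains.
# All data here is synthetic and purely illustrative.

# Historical decisions: (age, hired) pairs in which older candidates
# were rejected more often -- the biased pattern the model will learn.
history = [(28, 1), (31, 1), (35, 1), (29, 1), (42, 0),
           (45, 0), (51, 0), (38, 1), (47, 0), (55, 0)]

def learn_threshold(data):
    """'Train' by finding the age boundary that separates past hires
    from past rejections -- with no notion of why that boundary exists."""
    hired_ages = [age for age, hired in data if hired]
    rejected_ages = [age for age, hired in data if not hired]
    # Midpoint between the oldest hire and the youngest rejection.
    return (max(hired_ages) + min(rejected_ages)) / 2

def score(age, threshold):
    """Replicates the historical pattern without judgment about fairness."""
    return 1 if age < threshold else 0

cutoff = learn_threshold(history)
print(cutoff)             # 40.0 with this synthetic data
print(score(39, cutoff))  # 1 -- advanced
print(score(57, cutoff))  # 0 -- rejected, regardless of qualifications
```

A real screening model is vastly more complex, but the dynamic is the same: nothing in the training objective distinguishes a legitimate signal from a discriminatory one.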

The Congressional Case Study: AI Bias Would Disqualify 90% of U.S. Lawmakers

Let’s take a real-world example to illustrate the impact of the alleged hiring bias: Congress.

🔹 The median age of House members is 57.5 years, while in the Senate it’s 64.7 years, and roughly 90% of Congress members are over 40 - the same age group allegedly penalized by Workday’s hiring AI.

🔹 If Workday’s HRIS system behaved as alleged and were used to vet lawmakers for new jobs, most of Congress would be automatically rejected.

This example makes a critical point:

Age doesn’t determine capability. Congress members manage high-stakes policy, negotiate billion-dollar budgets, and navigate crises, yet under Workday’s hiring AI, they’d be deemed less desirable candidates.

AI bias in hiring isn’t just unfair, it’s deeply flawed logic.

To reinforce the point further, let’s apply this alleged hiring bias to Workday’s own leadership team.

Based on Workday’s C-level leadership team and their estimated ages, approximately 85% of them would likely struggle to be hired through Workday’s own AI-driven HRIS system, if the bias allegations against candidates over 40 are valid.

Breakdown:

  • Estimated total executives listed: 13

  • Executives over 40: 11

  • Executives under 40 or borderline: 2

  • Percentage likely impacted by AI bias: ~85%

This further emphasizes the irony of Workday’s hiring system: if it were used to screen its own leadership team, most of them wouldn’t make the cut. That statistic is a compelling way to highlight the dangers of age bias in AI-driven hiring decisions.

Why Fixing AI Hiring Bias Is Hard and Expensive

As someone who has spent over ten years in the tech industry and over ten years as an engineering leader, I know firsthand that most companies assume AI hiring bias can be fixed easily - but they’re wrong. The process is costly, time-intensive, and politically complicated. I suspect Workday’s engineering leaders called attention to this risk but failed to communicate why resolving AI hiring bias is hard. Let me break it down:

🔴 Biased Training Data Must Be Scrubbed. You see, AI learns from past hiring decisions - so fixing bias means rewriting its historical knowledge, which requires:

  • Sourcing unbiased datasets, which is expensive and labor-intensive.

  • Retraining the entire model (terabytes of data), requiring computational power and expert intervention.

  • Ongoing monitoring requiring even more team expertise and compute.

🔴 Bias Audits Are Expensive and Ongoing. Fixing AI bias isn’t a one-time hotfix. It requires constant oversight, independent audits, and legal compliance updates. Large-scale AI governance audits can cost millions on a quarterly basis.

🔴 Many Companies Prefer to Hide the Problem. Rather than fixing AI bias, most companies are focused on launch timelines and maximizing shareholder value, so they bury the issue and hope no one notices. Workday now faces legal consequences because it didn’t address these concerns before lawsuits forced it to act.

The reality? Fixing AI bias is about providing your clients an HRIS system they can trust, and about protecting competitive advantage by setting an example for the industry.
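The ongoing-audit point above can be made concrete with one standard metric: the disparate impact ratio behind the EEOC’s "four-fifths rule." A minimal sketch, with purely illustrative counts (not figures from the Workday case):

```python
# Sketch of the "four-fifths rule" disparate impact check, a common
# metric in hiring bias audits (EEOC Uniform Guidelines).
# All counts below are illustrative, not real case data.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def disparate_impact_ratio(protected_rate, reference_rate):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 are commonly treated as evidence of
    adverse impact under the four-fifths rule."""
    return protected_rate / reference_rate

# Illustrative counts: candidates over 40 vs. under 40.
over_40 = selection_rate(selected=30, applicants=200)   # 0.15
under_40 = selection_rate(selected=90, applicants=300)  # 0.30

ratio = disparate_impact_ratio(over_40, under_40)
print(round(ratio, 2))  # 0.5
print(ratio < 0.8)      # True -> flags potential adverse impact
```

A real audit would compute this per role, per client, and per protected class, on every model release - which is exactly why auditing is ongoing and expensive rather than a one-time task.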

How In-House Ethics & Governance Teams Should Have Helped - But Couldn’t

I looked into it, and Workday does have a dedicated Responsible AI team tasked with preventing algorithmic bias. In theory, these teams should have been empowered to intervene early, catching and correcting hiring bias before lawsuits emerged.

So why did bias persist? My guess is that the team was not empowered to make binding decisions. Here’s what I’ve witnessed at tech companies:

Weak Internal Oversight Power – Responsible AI teams and ethics boards exist, but even with executive buy-in, their influence is limited.

Governance teams shouldn’t just exist, they should be empowered to act.

How to Properly Vet AI-Driven HRIS Systems

To avoid Workday’s mistakes, procurement teams must hold themselves accountable for asking the tough questions before adopting AI-driven hiring tools:

What data is the AI trained on? Is that data monitored for bias thresholds?

Has the AI model been independently audited for bias? What were the results?

Can hiring managers override AI decisions?

How often is the model retrained and its training data refreshed?

What legal safeguards prevent discrimination lawsuits?
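The override question above can be made testable: require the vendor to log every AI recommendation alongside the final human decision, so override rates can be audited. A hypothetical sketch (the record shape and names are my own, not any vendor’s actual API):

```python
# Hypothetical sketch: log every AI recommendation next to the final
# human decision so override rates can be audited later.
# The record structure and all example data are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    ai_recommendation: str  # "advance" or "reject"
    final_decision: str     # what the human reviewer actually decided

def override_rate(log):
    """Share of decisions where a human overrode the AI recommendation."""
    overrides = sum(1 for d in log if d.final_decision != d.ai_recommendation)
    return overrides / len(log)

log = [
    Decision("c1", "reject", "reject"),
    Decision("c2", "reject", "advance"),  # human override
    Decision("c3", "advance", "advance"),
    Decision("c4", "reject", "reject"),
]
print(override_rate(log))  # 0.25
```

An override rate that sits near zero across thousands of decisions is a red flag that the AI, not the hiring manager, is the de facto decision-maker.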

If 2025 proves nothing else, it will show that AI governance isn’t about fixing bias after lawsuits, it’s about preventing real harm before it happens.

5 Tough Questions for Workday Leadership

If Workday wants to prove its AI hiring tools are ethical, its leadership must answer these questions:

How does Workday ensure that its AI-driven hiring tools don’t reinforce historical bias?

Has Workday ever conducted independent bias audits, and will they release the findings?

What role do Workday’s ethics and governance teams play in monitoring AI hiring decisions?

When bias concerns were raised internally, how did Workday respond?

What steps is Workday taking to course-correct after the bias allegations?

If Workday can’t answer these questions transparently, it’s not fixing the problem - it’s just managing public fallout.

What’s Next for Workday? Predictions on AI Governance & HR Tech

Image generated with Canva AI

Workday is now at a critical leadership crossroads. Here’s my take on what might happen next:

🔹 Workday may change their spokesperson and adjust their messaging.

🔹 Clients may distance themselves from Workday, releasing their own statements or joining the lawsuit if they feel it best serves their business.

🔹 Competitors will exploit Workday’s scandal to win over its clients.

🔹 Clients may face fallout from job applicants who feel they were unfairly rejected from roles.

🔹 A potential rebrand of Workday to signal reform and distance from past failures.

Regardless of what Workday decides to do, this lawsuit will shape AI hiring regulations from this moment onward.

Final Thoughts: AI Governance Must Be Proactive, Not Reactive

Workday’s hiring bias allegations are a wake-up call for AI governance. HR tech companies must be held accountable. AI governance isn’t an optional add-on (if there’s room in the budget); it’s essential. Companies that fail to act today will pay for that negligence tomorrow.

Data Sources

  • Lawsuit Claims Discrimination by Workday’s Hiring Tech Prevented People Over 40 from Getting Hired – CNN https://www.wfft.com/news/lawsuit-claims-discrimination-by-workday-s-hiring-tech-prevented-people-over-40-from-getting-hired/article_ffcc8a8b-0900-57f5-9c24-a2135d0fe3bc.html

  • Workday Faces Lawsuit Over Alleged AI Hiring Tool Discrimination – KOAT (CNN Affiliate) https://www.koat.com/article/workday-discrimination-lawsuit-ai-hiring-tools/64853267

  • Responsible AI Governance at Workday – Workday Blog https://blog.workday.com/en-us/responsible-ai-governance-workday.html

  • Responsible AI: Ensuring Trust and Leadership in Innovation – Workday https://www.workday.com/en-us/artificial-intelligence/responsible-ai.html

  • How Workday Has Built a Governance Regime for Responsible AI – TechUK https://www.techuk.org/resource/how-workday-has-built-a-governance-regime-for-responsible-ai.html

  • Workday Announces Fiscal 2024 Fourth Quarter and Full-Year Financial Results – Workday Newsroom https://newsroom.workday.com/2024-02-26-Workday-Announces-Fiscal-2024-Fourth-Quarter-and-Full-Year-Financial-Results

  • Quarterly Results | Investor Relations | Workday https://investor.workday.com/quarterly-results

  • Federal Court Allows Collective Action Lawsuit Over Alleged AI Hiring Bias – Holland & Knight https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged

  • Discrimination Lawsuit Over Workday’s AI Hiring Tools Can Proceed as Class Action – JD Supra https://www.jdsupra.com/legalnews/discrimination-lawsuit-over-workday-s-9232140/

  • Workday Faces Lawsuit Over Alleged AI Hiring Tool Discrimination – KCRA https://www.kcra.com/article/workday-discrimination-lawsuit-ai-hiring-tools/64853267
