A $10 Billion AI Recruiting Platform Just Got Breached, and Your Candidate Data Might Be Next
Today’s digest carries one story that should stop every HR leader mid-scroll: a massive AI recruiting data breach at one of the industry’s most prominent hiring startups. Mercor, valued at $10 billion and used by OpenAI, Anthropic, and Meta, confirmed that hackers stole up to 4TB of candidate data through a supply chain attack on the popular open-source LiteLLM library. If you’re using any AI recruiting tool that connects to third-party LLM providers, this one is personal. Meanwhile, Salesforce turned Slack’s bot into something much closer to an autonomous work agent, SHRM picked its next batch of HR tech startups to fund, and Tennessee became the latest state to draw legal lines around AI in sensitive domains.
Mercor AI Recruiting Data Breach Exposes 4TB of Candidate Information
Mercor, a three-year-old AI recruiting startup valued at $10 billion, confirmed it was hit by a cyberattack linked to a compromised open-source library called LiteLLM. The attack was a supply chain compromise: a hacking group called TeamPCP used stolen maintainer credentials to publish two malicious versions of the LiteLLM Python package (versions 1.82.7 and 1.82.8) on PyPI. The poisoned code was live for roughly 40 minutes before being detected and removed. (Source: TechCrunch)
That 40-minute window was enough. LiteLLM is downloaded millions of times per day by developers who use it to plug applications into AI services from OpenAI, Anthropic, and others. The malicious code harvested credentials from what may be more than 1,000 SaaS environments. Extortion group Lapsus$ then claimed it had specifically targeted Mercor and obtained 4TB of data, including candidate profiles, personally identifiable information, employer data, video interviews, source code, and VPN credentials. Lapsus$ is reportedly auctioning the stolen data. (Source: Fortune)
Here is why this matters if you run an HR team. Mercor’s clients include some of the biggest names in AI. The company recruits experts across medicine, law, and technical fields to provide training data for AI models. If your organization has candidates in Mercor’s pipeline, or if you use AI-powered recruiting tools that depend on open-source libraries, the attack surface just expanded significantly. An AI recruiting data breach at this scale affects not just one company. It ripples through every organization whose candidate data sat in that system.
What to do now: Audit your recruiting tech stack’s third-party dependencies. Ask your ATS and AI hiring tool vendors whether they use LiteLLM or similar proxy libraries, and what their response to this incident has been. If you collect video interviews or detailed candidate profiles through any AI tool, confirm that data is encrypted at rest and that your vendor has a breach notification protocol. This is a good week to review your full-cycle recruiting process with security in mind.
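If your own engineering team wants a concrete starting point for that dependency audit, a quick check is to compare any installed LiteLLM version against the two compromised releases named in the reporting above. The sketch below is illustrative, not an official remediation script; the function names are ours, and only the version numbers (1.82.7 and 1.82.8) come from the coverage of the incident:

```python
from importlib import metadata

# LiteLLM releases reported as compromised on PyPI (per the reporting above)
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """Return True if the given LiteLLM version string is a known-bad release."""
    return version in COMPROMISED_VERSIONS

def audit_environment(package: str = "litellm") -> str:
    """Check the current Python environment for a compromised install."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return f"{package} is not installed in this environment"
    if is_compromised(installed):
        return f"WARNING: {package} {installed} is a compromised release; rotate all credentials"
    return f"{package} {installed} is not a known-compromised release"

if __name__ == "__main__":
    print(audit_environment())
```

Note that a version check like this only covers environments you control; for vendor-hosted tools, the questions above about LiteLLM usage and breach notification are the ones that matter.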
Salesforce Turns Slackbot Into an Agentic AI Work Assistant
Salesforce unveiled 30 new AI features for Slack at a San Francisco event on March 31, and the centerpiece is hard to ignore: Slackbot is now an agentic AI assistant. It can transcribe and summarize meetings, create action items, update CRM records, and even monitor your desktop activity to understand context across deals, calendars, and conversations. The new Slackbot also functions as an MCP (Model Context Protocol) client, meaning it can connect to and coordinate with external services including Agentforce, Salesforce’s AI agent platform. (Source: TechCrunch)
The most interesting addition for teams managing AI agents in HR workflows is “reusable AI skills,” which let users define specific tasks for Slackbot that can then be applied across different contexts. Build a skill once for onboarding check-ins, and it works for exit interview follow-ups too. Salesforce’s interim Slack CEO Rob Seaman framed the release as a step toward making Slack an “agentic operating system,” a single surface where workers interact with AI agents, enterprise apps, and each other.
If your team lives in Slack, this is worth testing. But the desktop monitoring capability raises questions about employee privacy, especially for remote teams. Check with your legal and compliance team before turning that on.
SHRM Labs Funds AI-Powered HR Startups in 2026 Accelerator
SHRM Labs announced its 2026 WorkplaceTech Accelerator cohort, and the picks signal where the HR tech investment thesis is heading. Two standouts: Clearbrief, an AI platform that brings legal-grade accuracy to HR investigations inside Microsoft Word, and Mento, an AI-led leadership coaching platform that matches leaders with coaches using a proprietary LLM trained on over 100 matching variables. Each startup receives a $200K SAFE Note investment with milestone-based follow-on funding. (Source: SHRM)
The Clearbrief pick is especially telling. HR investigations (think workplace harassment claims, compliance audits, and internal disputes) are document-heavy and error-prone. An AI tool that auto-links evidence to claims inside Word could save HR teams dozens of hours per investigation. Mento addresses a different pain point: leadership development at scale without blowing the coaching budget. If you’re evaluating AI tools for your HR stack, keep both on your radar. The cohort will showcase at SHRM Annual Conference & Expo 2026.
Tennessee Bans AI Systems From Posing as Mental Health Professionals
Tennessee Governor Bill Lee signed SB 1580, making it illegal to develop or deploy an AI system that represents itself as a qualified mental health professional. The bill passed the Senate 32-0 and the House 94-0, bipartisan unanimity that is rare for AI legislation. It takes effect July 1, 2026. Nebraska is advancing similar legislation with LB 1185. (Source: Transparency Coalition)
This matters beyond mental health apps. The bill sets a precedent for how states define the line between AI assistance and professional representation. If your company offers AI-powered employee assistance programs, wellness chatbots, or mental health benefits through AI vendors, check whether those tools could be interpreted as “representing themselves” as qualified professionals. Tennessee’s law applies to anyone who develops or deploys such systems, not just companies headquartered there. HR compliance teams in multi-state organizations should flag this for legal review now, before the July effective date.
Quick Hits
- Disney goes all-in on generative AI. The Walt Disney Company has moved past pilot programs to officially embed generative AI across its entire operating structure, a signal that enterprise AI adoption is no longer experimental at the largest companies. (Crescendo AI)
- Anthropic accidentally leaks Claude Code source code. An npm packaging error exposed nearly 2,000 files and 500,000 lines of Claude Code’s internal source, including unreleased features and model performance data. Anthropic confirmed it was human error, not a hack, and said no customer data was involved. Still, coming the same week as the Mercor breach, it shows that operational security gaps exist even at safety-focused AI labs. (Axios)
- AI industry spends $100M+ on 2026 midterms. AI-related political groups are pouring at least $100 million into the 2026 midterm elections as government regulation looms. (ABC News)
Today’s Mercor breach is a reminder that the AI skills gap in HR isn’t just about knowing how to use AI tools. It’s about knowing how to vet them. If your recruiting stack touches candidate PII and connects to third-party AI services, treat security audits with the same urgency as compliance audits. The cost of an AI recruiting data breach (measured in candidate trust, regulatory exposure, and legal liability) is only going up.
Frequently Asked Questions
How does an AI recruiting data breach affect job candidates?
Candidates whose data is held by AI recruiting platforms face exposure of personally identifiable information, video interviews, resumes, and assessment results. In the Mercor breach, Lapsus$ claims to have obtained 4TB of such data. Affected individuals may face identity theft, unauthorized use of their professional profiles, and loss of control over sensitive career information shared during hiring processes.
What is agentic AI in workplace tools like Slack?
Agentic AI refers to AI systems that can take independent actions on your behalf, rather than just answering questions. Salesforce’s updated Slackbot can now transcribe meetings, create action items, update CRM records, and connect to external services via MCP. For HR teams, this means AI assistants that can handle onboarding workflows, schedule interviews, and flag compliance deadlines without manual intervention.
Are US states passing laws to regulate AI in healthcare and HR?
Yes, and the pace is accelerating. Tennessee’s SB 1580 bans AI from representing itself as a mental health professional, effective July 2026. Nebraska is advancing similar legislation. These laws signal a broader trend of state-level AI regulation that directly affects companies offering AI-powered employee wellness programs, recruiting tools, and HR chatbots. Multi-state employers should monitor compliance requirements in every jurisdiction where they operate.
This is not to be considered tax, legal, financial, or HR advice. Regulations change over time, so please consult a lawyer, accountant, or labor law expert for specific guidance.
