Four months from today, every company using AI to screen candidates in the EU faces a legal reckoning. The AI hiring compliance deadline under the EU AI Act is August 2, 2026, and HR teams still don’t know if that date will hold. Meanwhile, Salesforce just turned Slackbot into an autonomous agent that follows you across your desktop, and Google shipped open-source models built for agentic workflows. Here’s what matters this week.
The AI Hiring Compliance Deadline Just Got Complicated
If your company uses AI to screen resumes, rank candidates, or assess performance, you’re operating a “high-risk AI system” under the EU AI Act. The original enforcement date for these systems is August 2, 2026. That means bias audits, technical documentation, human oversight mechanisms, and Data Protection Impact Assessments are all due in four months. (Source: HR-ON)
But here’s the twist. The European Commission’s Digital Omnibus proposal, introduced in November 2025, would push that AI hiring compliance deadline back to December 2, 2027, for standalone high-risk systems. The rationale: the technical standards needed for compliance aren’t ready yet. The European Parliament’s lead committee has backed the delay, with December 2, 2027 as a fixed backstop. (Source: European Parliament Legislative Train)
The problem is that the Digital Omnibus hasn’t been adopted yet. Final trilogues aren’t expected before mid-2026. So if you’re an HR leader at a company with EU-based candidates, you have two bad options: prepare for August 2026 and potentially over-invest in compliance that gets delayed, or bet on the delay and risk fines of up to €15 million or 3% of global annual turnover if the original date holds. (Source: Business & Human Rights Resource Centre)
What to do now: Don’t wait for political clarity. Start documenting how your AI recruitment tools make decisions. Map which systems qualify as high-risk. Build your audit trail. If the deadline holds, you’re ready. If it moves, you’ve built better governance anyway. That work is never wasted.
Salesforce Turns Slackbot Into an Always-On AI Agent
Salesforce announced 30 new AI features for Slackbot on March 31, transforming the familiar chat assistant into something much more ambitious: an autonomous work agent. Powered by Anthropic’s Claude model, the updated Slackbot can now listen to meetings on Zoom, Google Meet, or Slack Huddles, summarize decisions, and create action items the moment a call ends. (Source: TechCrunch)
The bigger shift is desktop integration. Slackbot now operates outside the Slack interface, following you across your computer. It reads your deals, conversations, calendar, and habits to proactively suggest actions. It can also update CRM records automatically when a deal or contact comes up in a channel. (Source: SiliconANGLE)
For HR teams, this matters on two levels. First, if you run onboarding or employee communications through Slack, an AI agent that can autonomously schedule, summarize, and follow up changes how you staff those workflows. Second, the always-on desktop tracking raises serious questions for any company with an AI tools governance policy. If your employees start using an agent that reads everything on their screen, your data privacy framework needs to account for that.
Google Ships Gemma 4: Agentic Open Models for Edge Devices
Google released Gemma 4 on April 2, its most capable family of open-source AI models to date. The lineup spans four sizes, from 2 billion to 31 billion parameters, all released under the Apache 2.0 license. The 31B model already ranks third on open LLM benchmarks. (Source: Google Blog)
What makes Gemma 4 different from prior open releases is the focus on agentic workflows. These models are designed to take multi-step actions, not just answer questions. They’re also optimized for on-device use, with up to 4x speed improvements and 60% less battery consumption. They run on phones, Raspberry Pi hardware, and NVIDIA Jetson boards. (Source: tbreak)
For HR tech vendors and startups building AI agents for HR, this is a practical inflection point. Running a capable model on-device means you can process sensitive employee data without sending it to a cloud API. That changes the compliance math entirely, especially as the AI hiring compliance deadline conversation heats up across jurisdictions.
Quick Hits
- Anthropic tests Conway, an always-on agent platform. Conway is a persistent Claude environment with extensions, webhooks, and browser control. It stays active 24/7 and handles multi-step tasks autonomously, signaling a shift from chatbots to agents that act without waiting for a prompt. (Source: Dataconomy)
- Noah Labs gets FDA designation for voice-based heart failure detection. Their AI, Vox, can detect heart failure from a five-second voice recording. Workplace wellness programs take note. (Source: Crescendo AI)
- Global AI venture funding hit $300B in Q1 2026. That’s an all-time high, with 80% of total venture capital going directly to AI companies. (Source: devFlokers)
If you’re evaluating how AI regulation and autonomous agents will affect your HR stack, the gap between what these tools can do and what compliance frameworks allow is widening fast. Asanify’s AI skills gap analysis is a good starting point for assessing where your team stands on readiness.
FAQ: AI Hiring Compliance and Workplace Automation
What is the AI hiring compliance deadline under the EU AI Act?
The current legally binding deadline is August 2, 2026. Any AI system used to screen, rank, or evaluate job candidates in the EU is classified as high-risk and must meet documentation, bias audit, and human oversight requirements by that date. The European Commission’s Digital Omnibus proposal may push this to December 2027, but that change hasn’t been formally adopted yet.
How does the Salesforce Slackbot update affect HR teams?
Salesforce’s 30 new AI features turn Slackbot into an autonomous agent that can summarize meetings, update CRM records, and track activity across your desktop. HR teams using Slack for onboarding or internal communications should evaluate how always-on desktop AI monitoring fits within their data privacy and employee surveillance policies.
Can open-source AI models like Gemma 4 be used for HR applications?
Yes. Google’s Gemma 4 models run on-device under an open Apache 2.0 license, which means HR tech vendors can process sensitive employee data locally without sending it to external cloud APIs. This is especially relevant for companies navigating the AI hiring compliance deadline, since on-device processing simplifies data residency and privacy requirements.
Not to be considered as tax, legal, financial or HR advice. Regulations change over time so please consult a lawyer, accountant or Labour Law expert for specific guidance.
