AI News Digest, April 8: Open Weights Agentic AI Lands, India Rewrites the Rules
Editor’s note, April 8, 2026. Three stories today tell one bigger story. Open weights agentic AI is moving out of cloud datacenters and onto laptops and phones, India’s data protection rules finally went from paper to enforceable, and one of India’s high courts told its judges to keep AI out of actual decision-making. Each lands on a different HR desk for a different reason, but the common thread is the same question you should be asking your team this week. Who is accountable when a model acts on its own inside your workflow?
Google’s Gemma 4 Drops and the On-Device Open Weights Agentic AI Era Starts
Google released Gemma 4 on April 2, a four-model family built from the same research stack as Gemini 3 and shipped under a permissive Apache 2.0 license. The lineup spans an “Effective 2B” edge model, a 4B variant, a 26B mixture-of-experts model that activates only 3.8B parameters per token, and a 31B dense model. It supports more than 140 languages, native function calling, and audio and video inputs, and it is live on Hugging Face, Kaggle, Ollama, and Google AI Studio (Engadget).
If you run HR or ops at a company between 50 and 2,000 people, this matters more than the headline benchmarks. Open weights agentic AI means you can run a capable tool-calling model on a workstation or a company laptop, without sending employee data to a third-party API. A lot of the grunt work that costs your team hours each week, such as resume parsing, interview note summarization, policy Q&A, and expense categorization, can now run behind your firewall with no per-seat fee. The Apache 2.0 license also means your legal team does not need to renegotiate terms every quarter when a vendor updates its acceptable use policy.
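Because Gemma 4 ships on Ollama, local inference is just an HTTP call to your own machine. The sketch below builds the request payload without sending it; the endpoint path follows Ollama's standard generate API, but the model tag `gemma4` is an assumption for illustration, so substitute whatever tag `ollama list` shows after you pull the model.

```python
import json

# Ollama serves a local HTTP endpoint; nothing here leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_summarize_request(interview_notes: str) -> dict:
    """Build the JSON payload for a local, non-streaming summarization call."""
    return {
        "model": "gemma4",  # assumed tag for illustration; use your pulled model's tag
        "prompt": (
            "Summarize these interview notes in three bullet points:\n"
            + interview_notes
        ),
        "stream": False,  # ask for one complete response instead of a token stream
    }

payload = build_summarize_request(
    "Candidate walked through a payroll migration project at her last role..."
)
print(json.dumps(payload, indent=2))
```

Sending it is a single `requests.post(OLLAMA_URL, json=payload)`; the point is that the URL is localhost, so the interview notes never touch a third-party API.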
The catch is accountability. An agentic model that books travel, adjusts benefits selections, or files a payroll correction is taking action, not answering a question. AI agents for HR need logging, rollback, and a human checkpoint on anything that touches comp, termination, or personal data. Start by listing the three HR workflows that eat up the most of your team's time each week, then pilot an agent on the lowest-risk one. Do not wire it to your HRIS on day one.
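The checkpoint-and-log pattern can be sketched in a few lines. This is a minimal hypothetical shape, not a production design, and the workflow names are made up for illustration: every proposed action is logged, and anything flagged sensitive stays blocked until a human explicitly approves it.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    """One proposed action from an HR agent, held for review before execution."""
    workflow: str       # e.g. "payroll_correction" (illustrative name)
    description: str    # human-readable summary of what the agent wants to do
    sensitive: bool     # touches comp, termination, or personal data
    approved: bool = False
    log: list = field(default_factory=list)

    def request_approval(self, reviewer_ok: bool) -> bool:
        # Sensitive actions require an explicit human yes; low-risk actions
        # pass through automatically, but every decision is logged either way.
        self.approved = reviewer_ok if self.sensitive else True
        self.log.append({
            "ts": time.time(),
            "workflow": self.workflow,
            "action": self.description,
            "sensitive": self.sensitive,
            "approved": self.approved,
        })
        return self.approved

action = AgentAction(
    "payroll_correction", "Adjust March overtime for employee E-1042", sensitive=True
)
approved = action.request_approval(reviewer_ok=False)  # blocked without a human yes
```

The log entries give you the audit trail, and rollback becomes a matter of replaying them in reverse against whatever system the agent touched.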
India’s DPDP Rules Go Live, and Most Enterprises Aren’t Ready
India’s Digital Personal Data Protection Rules were finalized in November 2025. The Data Protection Board is now operational, the Consent Manager framework is scheduled for November 13, 2026, and full compliance is required by May 13, 2027 (Rainmaker). A recent readiness survey cited by Storyboard18 found that 68% of companies with Indian operations admit they do not fully understand Phase 1 obligations.
If you employ people in India, or even run payroll for Indian contractors, you are a Data Fiduciary under this law. That means consent capture for every HR workflow, a published grievance officer, breach notification within stipulated timelines, and stricter rules for processing children’s and employees’ personal data. The open weights agentic AI trend from the top story actually helps here. On-device inference keeps employee PII on infrastructure you control, which is a cleaner compliance story than shipping records to a US-hosted chatbot. 2026 is your build year. Review your data processing terms, map every HR system that touches Indian employee data, and name an owner for each data flow.
Gujarat High Court Draws a Hard Line on AI in Judicial Work
On April 4 the Gujarat High Court issued a policy barring judges and court staff from using AI for any form of judicial decision-making, reasoning, order drafting, judgment preparation, or bail and sentencing analysis. AI is still allowed for research and precedent lookup, but judges are personally liable for any AI-assisted output they sign off on (Bar & Bench, LiveLaw).
This is not just a courtroom story. It is a preview of how Indian institutions will treat AI in any high-stakes decision, including hiring and performance management. Expect the same “personal liability on the user” logic to show up in labor tribunals and employment cases the next time an algorithmic hiring decision lands in front of a judge. If your ATS scores or rejects candidates with a model, you want an audit trail that shows a human reviewed the output, the criteria are documented, and adverse impact testing is in place. AI in HR recruitment works only when the accountability chain is clear.
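One common screen for the adverse impact testing mentioned above is the four-fifths (80%) rule: the selection rate for any group should be at least 80% of the highest group's rate. A minimal sketch, assuming you can pull simple per-group applicant and selection counts from your ATS (the group labels here are placeholders), and treating this as a rough statistical screen rather than a full analysis:

```python
def four_fifths_check(selected: dict, applied: dict) -> tuple[float, bool]:
    """Apply the four-fifths (80%) rule to selection rates by group.

    `selected` and `applied` map group label -> counts. Returns the ratio of
    the lowest selection rate to the highest, and whether it clears 0.8.
    """
    rates = {g: selected[g] / applied[g] for g in applied if applied[g]}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= 0.8

ratio, ok = four_fifths_check({"A": 40, "B": 24}, {"A": 100, "B": 100})
# rates: A = 0.40, B = 0.24, so ratio = 0.6 and the 80% screen fails
```

A failing ratio does not itself prove anything; it tells you which model-assisted screening stage needs a documented human review before the next hiring cycle.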
A Frontier Lab Pitches a Robot Tax, Public Wealth Fund, and 32-Hour Workweek
OpenAI published a 13-page policy document on April 6 titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” It calls for a state-backed public wealth fund that pays dividends to citizens (modeled on Alaska’s Permanent Fund), a shift in the tax base from labor to capital, a “robot tax” pilot, and a subsidized 32-hour workweek with no pay cut (TechCrunch, Computerworld).
None of this becomes policy overnight. But the fact that a frontier lab is publicly lobbying for a shorter workweek puts the topic squarely on the founder’s desk, not the policy wonk’s. If you run a 50-person startup, your employees will read this and ask whether your company’s workweek policy is evolving. Have a point of view ready. A credible answer sounds like “we will pilot compressed schedules on teams where the output is measurable” or “we are watching the research, here is what would need to be true for us to change.” A non-answer looks worse this quarter than last.
Quick Hits
- Eclipse, the Cerebras backer, closed a $1.3B fund for robotics, AI infrastructure, and defense startups, part of a broader Q1 2026 venture surge (Reuters).
- Anthropic signed a multi-gigawatt TPU deal with Google and Broadcom, with capacity coming online from 2027 (Reuters Tech).
- OpenAI launched an external Safety Fellowship running September 2026 to February 2027 (OpenAI).
What Open Weights Agentic AI Means for Your Week
The short version. Open weights agentic AI is now a realistic build option for HR teams, not just a research curiosity, but only if your governance catches up. Pick one HR workflow that is high-effort and low-risk. Assign a human checkpoint. Log every action. If you operate in India, turn your DPDP review into a weekly standing item until Phase 1 is cleared. Reference architectures for AI payroll automation and AI in Human Resource Management are worth a look if you want a starting point that keeps compliance front and center.
FAQ
Q: What is open weights agentic AI, and why does it matter for HR?
Open weights agentic AI refers to models whose parameters are publicly downloadable and that can take actions through tool calling, not just generate text. For HR teams it matters because on-device and on-premises deployment keeps employee data out of third-party APIs, which simplifies compliance with rules like India’s DPDP and the EU AI Act.
Q: Do India’s new DPDP rules apply to foreign companies hiring Indian contractors?
Yes. Any organization processing personal data of individuals in India, including employees and contractors, is treated as a Data Fiduciary under the DPDP Act and must comply with consent, grievance handling, and breach notification obligations. The Data Protection Board is already active, and full compliance is required by May 13, 2027.
Q: Should we tell candidates when AI is part of our hiring process?
Yes, and increasingly you have to. Regulators in the EU and India are moving toward mandatory disclosure and human-review requirements for automated hiring tools. The cleanest policy is a short notice in your application flow explaining where AI is used, what decisions it informs, and how a candidate can request human review.
Not to be considered as tax, legal, financial, or HR advice. Regulations change over time, so please consult a lawyer, accountant, or labour law expert for specific guidance.
