India just put AI governance on the same table where it argues about jobs, industrial policy, and national security. A new inter-ministerial panel pulls together the IT minister, the Chief Economic Adviser, NITI Aayog, and the National Security Council Secretariat, with a labour-impact mandate baked in. On the other side of the ledger, a four-month-old lab that nobody had heard of a week ago just raised $500 million from GV and Nvidia to try to automate AI research itself. That’s the split screen today: governments racing to catch up, and labs racing to make yesterday’s model stack obsolete before regulators even finish a first draft. If you care about India AI governance, enterprise HR, or where the next agent platform lives, this one’s worth 10 minutes.
India AI Governance Gets a Cabinet-Level War Room With a Jobs Mandate
India’s government constituted the AI Governance and Economic Group (AIGEG), a high-level inter-ministerial body chaired by Union IT Minister Ashwini Vaishnaw, with Minister of State Jitin Prasada as vice chair (IndiaAI / MeitY). The core group pulls in the Principal Scientific Adviser, the Chief Economic Adviser, and the CEO of NITI Aayog, with secretaries from MeitY, Telecommunications, Economic Affairs, and Science & Technology, plus the National Security Council Secretariat. A parallel Technology and Policy Expert Committee will feed it advisory input on emerging tech and risk.
What makes this different from every other “AI committee” is the explicit labour-market mandate. AIGEG is tasked with assessing which job profiles AI adoption will hit first, mapping geographic concentration, and developing mitigation and transition plans that account for informality, skill diversity, and regional variation (Storyboard18). In other words, it isn’t just deciding whether to allow model X or regulate dataset Y. It’s being asked to pre-load an economic response to AI-driven workforce change.
If you’re building or hiring in India, this changes the planning horizon. A 100-person startup in Bengaluru running AI pilots across customer support and sales can no longer assume the only AI rules in play are the DPDP Act and whatever spills over from the EU AI Act. An AIGEG-shaped framework will likely touch sectoral rules first, labour codes next, and compliance reporting after that. The smart move this quarter: map which of your current AI tools sit in “high-impact” categories (hiring, promotion, termination, credit, healthcare triage), and document how humans stay in the loop. If India takes even a lighter-touch, EU-style approach, you’ll want that paper trail ready.
What to do: Ask your ops lead which AI workflows would need a human-sign-off audit trail if a sector regulator asked tomorrow. If the answer is “I’m not sure,” that’s the first thing to fix. Teams hiring AI engineers in India should add governance-readiness to the job description, not treat it as a lawyer problem.
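For teams that want that paper trail started now, here is a minimal sketch of a human-sign-off audit record. Everything here is illustrative: the field names, the hypothetical tool name, and the JSON Lines file are assumptions, not any regulator’s schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One auditable record: an AI recommendation plus the human sign-off."""
    workflow: str              # e.g. "resume_screening"
    model: str                 # which model or tool produced the recommendation
    ai_recommendation: str
    human_reviewer: str
    human_decision: str        # "approved", "overridden", or "escalated"
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    # Append-only JSON Lines file: simple, greppable, easy to export
    # when a sector regulator asks for evidence of human oversight.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

record = AIDecisionRecord(
    workflow="resume_screening",
    model="vendor-screening-v2",  # hypothetical tool name
    ai_recommendation="reject",
    human_reviewer="ops.lead@example.com",
    human_decision="overridden",
    rationale="Candidate meets must-have criteria the model underweighted.",
)
log_decision(record)
```

The design choice worth copying even if you discard the rest: log the human decision and rationale next to the AI recommendation, in the same record, so the oversight trail exists by construction rather than by later reconstruction.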
A Four-Month-Old Lab Raises $500M to Automate AI Research
Recursive Superintelligence, a lab incorporated on December 31, 2025 with roughly 20 people, raised more than $500 million from GV and Nvidia at a $4 billion valuation, with the round reportedly oversubscribed to near $1 billion (The Decoder). The founding team is notable: Richard Socher (former Salesforce chief scientist), Tim Rocktäschel (ex-director at Google DeepMind, UCL professor), Josh Tobin, Jeff Clune, and Tim Shi, most with OpenAI or DeepMind backgrounds. The pitch: build systems that improve themselves across evaluation, data selection, training, and research direction, without a human in the loop at each step.
So what? Two things matter for founders. First, frontier AI capital just got an order of magnitude more comfortable with zero-revenue, pre-product bets, which means the competitive floor for anyone building on top of these labs keeps moving. Second, if “self-improving” actually compounds even 1.5x per cycle, your API provider’s roadmap gets harder to predict. Plan on a 12-month horizon for anything you’re locking into a specific model family, not 36.
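The back-of-envelope math behind that horizon advice is worth seeing. Both the 1.5x gain per cycle and the quarterly cycle length are illustrative assumptions, not anything the lab has claimed:

```python
def capability_after(cycles: int, gain_per_cycle: float = 1.5) -> float:
    """Relative capability after n self-improvement cycles,
    assuming each cycle multiplies capability by gain_per_cycle."""
    return gain_per_cycle ** cycles

# If one cycle takes roughly a quarter:
# a 12-month lock-in spans 4 cycles, a 36-month lock-in spans 12.
print(capability_after(4))   # 1.5**4  = 5.0625  (~5x over 12 months)
print(capability_after(12))  # 1.5**12 ≈ 129.75  (~130x over 36 months)
```

Exponential error bars are the point: a ~5x shift is something a roadmap can absorb; a ~130x shift is not a roadmap at all. Hence 12 months, not 36.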
Workday Ships 300+ Agent Skills Into Its HR and Finance Stack
Workday’s returning co-founder-CEO Aneel Bhusri went public with the next wave of Sana from Workday, including a Self-Service Agent that ships with 300+ prebuilt skills across pay, time, absence, and expense, plus Sana Enterprise extending agents into Gmail, Outlook, Salesforce, ServiceNow, Slack, and Jira (Workday newsroom; TechTarget). Notably, these agents are bundled into existing subscriptions via Flex Credits, not a separate SKU.
If your people-ops team still routes every payroll correction and leave query through a ticket queue, that queue is the target. With AI agents for HR moving from pilot to default-on inside the big HCM stacks, your own tool stack is due a second-order rebuild too. Budget a two-quarter review of which HR tickets humans still need to own, and which an agent can close without anyone touching them.
Gemini 3 Deep Think Starts Flagging Logical Errors in Peer-Reviewed Math Papers
Google DeepMind’s upgraded Gemini 3 Deep Think reasoning mode has started doing something uncomfortable for academia: identifying subtle logical flaws in mathematics papers that already cleared human peer review (Google). Rutgers mathematician Lisa Carbone reportedly used Deep Think to surface one such error. DeepMind paired it with Aletheia, a verifier-equipped agent that has autonomously tackled open conjectures (DeepMind).
Why this matters outside math: any workflow where a human reviewer catches “logic errors” (legal briefs, compliance policies, engineering specs, payroll audit logic) is now a candidate for a reasoning-model second pass. Treat the analogy with caution, though. DeepMind itself flags that Aletheia still makes more errors than human experts. But if you’re running a 200-person ops team, a model that re-reads its own memos and flags the weird one before it goes out is a real line item.
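The shape of that second-pass gate is provider-agnostic. In this sketch, `call_model` is a stand-in for whatever SDK you actually use (it returns a canned response here so the example runs end to end), and the JSON response format is an assumption, not any vendor’s API:

```python
import json

def call_model(prompt: str) -> str:
    # Stand-in for a real provider SDK call; swap in your client here.
    # Returns a canned reply so this sketch is runnable as-is.
    return json.dumps({"issues": ["Step 3 assumes X without establishing it."]})

def second_pass_review(document: str) -> list[str]:
    """Ask a reasoning model to flag logical issues before a doc ships.
    Returns the flagged issues; an empty list means nothing was flagged."""
    prompt = (
        "Review the following document for logical errors, unsupported "
        "claims, and internal contradictions. Respond as JSON: "
        '{"issues": [...]}\n\n' + document
    )
    reply = json.loads(call_model(prompt))
    return reply.get("issues", [])

issues = second_pass_review("Draft compliance memo ...")
if issues:
    print("Hold for human review:", issues)
```

Note the fail-safe direction: flagged documents get held for a human, never auto-corrected. Given that these models still err more often than experts, treat the flags as candidates for review, not verdicts.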
Quick Hits
- Z.ai (formerly Zhipu) shipped GLM-5.1, a 744B-parameter open-source MoE under MIT license. It scored 58.4 on SWE-Bench Pro, edging past GPT-5.4 and Claude Opus 4.6, and was trained entirely on Huawei Ascend 910B chips with zero Nvidia hardware (VentureBeat).
- Nvidia’s Agent Toolkit launched at GTC 2026 with 17 enterprise adopters including Adobe, Salesforce, SAP, ServiceNow, Palantir, and Red Hat. The stack bundles the OpenShell runtime, an AI-Q deep-research blueprint, and Nemotron models as open plumbing for autonomous enterprise agents (VentureBeat).
- South Africa gazetted its Draft National AI Policy on April 10. Comms Minister Solly Malatsi opened a 60-day public comment window closing June 10, picking a sector-specific, multi-regulator model instead of a single AI authority (Baker McKenzie).
What the India AI Governance Shift Means for Your Team
The thread running through today’s stories is institutional catch-up. India AI governance now has a named cabinet-level venue. Enterprise HR stacks are shipping agents as default. Research itself is starting to be audited by models. If you’re running a distributed team with India payroll, EU hiring, and a US cap table, none of this is optional to track. Asanify’s global HRMS and payroll platform already handles multi-country compliance and integrates with the AI agent layer your ops team is likely piloting, so you’re not stitching governance and automation after the fact.
India AI Governance FAQ
Q: What is India’s new AI Governance and Economic Group?
A: It’s a high-level inter-ministerial body chaired by IT Minister Ashwini Vaishnaw, with Jitin Prasada as vice chair. It pulls in the Principal Scientific Adviser, Chief Economic Adviser, NITI Aayog’s CEO, and secretaries from MeitY, Telecom, Economic Affairs, Science & Technology, plus the NSC Secretariat, with a mandate covering cross-sectoral AI policy and labour-market impact of AI adoption.
Q: Will India AI governance affect how my company hires AI engineers in India?
A: Yes, over time. The group’s remit includes mapping which job profiles AI hits first and building transition plans, so future sectoral rules on hiring AI, performance AI, and agent-based automation are likely. Companies closing the AI skills gap and documenting human oversight of AI-driven hiring decisions will be better placed when rules arrive.
Q: How does India AI governance compare to the EU AI Act?
A: India looks closer to a coordinating-body model than the EU’s horizontal law-and-penalty model. The AIGEG is designed to align ministries and regulators around shared principles, with sector-specific rules expected to follow, rather than one binding statute covering all high-risk AI uses. The direction is similar, the instrument is different.
Not to be considered tax, legal, financial, or HR advice. Regulations change over time, so please consult a lawyer, accountant, or labour-law expert for specific guidance.
