From the FDA’s internal AI rollout, to the UK and NVIDIA’s sandbox, to OpenAI’s threat disruption report, the AI governance race abroad is gaining momentum. AI governance has good weeks and not-so-great weeks, but the developments I’ve curated this week feel uplifting. Although AI governance continues to lack teeth in the USA, the progress being made in the UK, Canada, and India is encouraging.
This week, our Exploitation vs Accountability Index hit its highest score yet: 70.6. Keep reading to find out why.
🏅 1. OpenAI Publishes Threat Report on Malicious AI Use
Score: 75 / 100 Category: Accountability - Platform Governance + Threat Disruption
What happened: OpenAI released its latest threat intelligence report, Disrupting Malicious Uses of AI. The report details 10 real-world campaigns in which threat actors used large language models (LLMs) for cybercrime, social engineering, and influence operations. OpenAI outlines how it detected and disrupted these activities—ranging from fake résumé scams to malware development and AI-generated propaganda.
Why it matters: This is one of the most detailed public disclosures of AI abuse at scale—and how a major model provider is actively countering it. The report marks a shift from passive content moderation to proactive threat intelligence, with OpenAI collaborating across cloud providers, law enforcement, and civil society.
Governance takeaway: Model providers are becoming de facto security actors. Expect growing pressure for transparency, red teaming, and cross-platform coordination as AI misuse becomes a national and geopolitical risk vector.
🥈 2. India Launches National AI Ethics Consultation
Score: 60 / 100 Category: Accountability - Public Engagement + Policy Formation
What happened: UNESCO and India’s Ministry of Electronics and IT (MeitY), in partnership with Ikigai Law, hosted the final stakeholder consultation on AI ethics in New Delhi. The event brought together over 200 experts from government, academia, and civil society to shape India’s evolving framework for responsible AI. The consultation emphasized fairness, transparency, and indigenous innovation as pillars of India’s AI governance strategy.
Why it matters: India is signaling that AI ethics is a public good, not just a corporate checkbox. By crowdsourcing its policy, it’s building legitimacy and local relevance—especially important in a country with deep linguistic, economic, and digital divides.
Governance takeaway: This is a model for inclusive AI policymaking. The USA should take notes on how to democratize governance without diluting rigor.
🥉 3. UK + NVIDIA Unveil AI Certification Sandbox
Score: 85 / 100 Category: Accountability - Innovation + Regulatory Experimentation
What happened: The UK’s Financial Conduct Authority (FCA) announced a new “Supercharged Sandbox” in partnership with NVIDIA. The initiative allows financial services firms to test and certify AI systems in a controlled environment, with access to advanced compute resources, synthetic datasets, and regulatory guidance. The program is open for applications through August and begins formal testing in October 2025.
Why it matters: This is a smart middle path—regulatory realism in action. It acknowledges that rigid rules can stifle progress, but unregulated deployment is a risk multiplier. The sandbox lets the UK test governance mechanisms before codifying them.
Governance takeaway: Other nations should watch this closely and take note. Sandboxes are poised to become the proving grounds for AI assurance frameworks.
🏅 4. PwC Canada Launches AI Assurance Pilot
Score: 78 / 100 Category: Accountability - Transparency + Public Sector Oversight
What happened: PwC Canada launched Assurance for AI, a first-of-its-kind service to audit and validate AI systems across sectors. While not a government program, it signals growing demand for independent oversight and could shape future public-sector standards.
Why it matters: This is one of the first industry-led AI audit services of its kind. It sets a precedent for independent accountability and could inform public-sector procurement standards globally.
Governance takeaway: Audits are no longer optional. Expect AI assurance to become a procurement prerequisite in the next 12–18 months.
🏅 5. FDA Deploys ELSA, Its First Agency-Wide AI Tool
Score: 55 / 100 Category: Gray Area - Institutional Modernization + Operational AI Integration
What happened: The U.S. Food and Drug Administration (FDA) officially launched ELSA, a generative AI tool designed to accelerate internal workflows across scientific review, inspection targeting, and regulatory documentation. Built in a secure GovCloud environment, ELSA is already being used to summarize adverse events, compare drug labels, and generate code for nonclinical database development.
Why it matters: This marks the first agency-wide deployment of a large language model within a major U.S. federal regulator. While not a governance framework per se, it signals a shift from AI oversight to AI-enabled oversight. This is a subtle but powerful evolution in how institutions govern with AI.
Governance takeaway: The FDA is operationalizing AI oversight. Expect other agencies to follow suit, raising new questions about internal AI assurance, transparency, and workforce readiness.
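For readers wondering where the headline 70.6 comes from: it matches the simple unweighted mean of the five story scores above. A minimal sketch, assuming equal weighting (the newsletter does not state its formula, so this is an inference):

```python
# Hypothetical reconstruction of the Exploitation vs Accountability Index,
# assuming it is the unweighted mean of this week's five story scores.
scores = {
    "OpenAI threat report": 75,
    "India AI ethics consultation": 60,
    "UK + NVIDIA sandbox": 85,
    "PwC Canada assurance pilot": 78,
    "FDA ELSA deployment": 55,
}

index = sum(scores.values()) / len(scores)
print(f"Exploitation vs Accountability Index: {index:.1f}")  # 70.6
```

Under this equal-weighting assumption, a single high-scoring story (like the 85 for the UK sandbox) can pull the whole index up noticeably.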
🧠 Final Thought
This week’s developments signal momentum and a refreshing shift from policy talk to institutional action. From internal AI deployments to sandbox-based experimentation and platform threat reporting, the governance stack is being built while the plane is in flight.
What’s next? Governance will be judged not by principles adopted, but by tools deployed, risks averted, and trust earned in real time.
What Did You Think About This Week’s Developments?
👇 Leave me your spiciest take (for or against).
👇 Share this article with team members and colleagues alike.
👇 Subscribe to AI Governance, Ethics and Leadership (if you haven’t yet). It’s time.
References
https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-june-2025/
https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf
https://www.unesco.org/en/articles/ethics-action-india-continues-its-journey-towards-ethical-ai-final-ram-consultation-new-delhi
https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people
https://www.cantechletter.com/newswires/pwc-canada-launches-first-to-market-solution-to-provide-assurance-for-ai/
https://www.fca.org.uk/news/press-releases/fca-allows-firms-experiment-ai-alongside-nvidia
https://www.cnbc.com/2025/06/09/uks-fca-teams-up-with-nvidia-to-let-banks-experiment-with-ai.html