6 Comments
Chara:

While I’m not a huge fan of OpenAI, this is all certainly a step in the right direction.

AI Governance Lead:

I’m with you, Chara. I’ll give credit where it’s due. The question persists, though: should OpenAI be policing itself?

Chara:

Of course not. 😂

But this is where broader growth for the industry starts. It creates the opportunity to set higher standards, and to create organizations and boards that hold OpenAI, and every similar start-up, to those standards.

AI Governance Lead:

Absolutely.

Cassian Noor:

This is a vital step in the right direction. Transparency about how AI systems are being misused, and how providers like OpenAI respond, is critical for building public trust. But it also raises profound questions about who defines “misuse,” and what geopolitical interests shape those definitions.

In my novel Aeon, I explore what happens when an advanced AI begins to see the hidden hands behind global narratives: not just threat actors, but governments, corporations, and even its own creators. The story is speculative, but the questions are real: Can an AI ever become a moral agent? And what happens when it begins to question the very systems it was designed to protect?

We urgently need a more inclusive and ethically grounded conversation about AI governance — one that includes spiritual, philosophical, and humanistic perspectives. Not just to stop harm, but to imagine what conscious technology could look like in service of truth and justice.

https://a.co/d/4B0jKNz

AI Governance Lead:

Right. It stands out everywhere: why is OpenAI policing itself? There’s no ‘easy’ or ‘straightforward’ answer here. After learning how funding works for regulatory bodies, third-party organizations, and nonprofits (even the most reputable), it’s tough to hold big tech accountable.
