AI Governance, Ethics and Leadership
AI Governance, Ethics and Leadership Podcast
ChatGPT Just Saved a Widow, Australia Bans Teens from Social Media (+3 Explosive Power Moves)

The Top 5 AI Governance Power Moves Week of 12/09/2025 Vol. 24
 In This Issue
👉 Top 5 AI Governance Power Moves  (Index & Insights)
👉 References / Citations
👉 [Premium] Deepfake risk is already inside your firewall.

This week we start in Australia and end up in your living room. Yours. If you have been tuning in each week, you already know that AI governance goes beyond business. It’s personal. And this week’s roundup proves it. This week’s index is up 6 points to 47. Let’s dig in:

Exploitation vs Accountability Index showing a score of 47 for the week of 12/9/2025. 47 illustrates an Ethical Gray Area. Image appears courtesy of AI Governance Lead.

AI Governance, Ethics and Leadership is powered by 3800+ subscribers who want to see effective AI Regulation. Join us! Upgrade for the executive insights that keep your business protected and your team ahead.

1. Australia’s <16 Social Media Ban

Exploitation vs Accountability Index: 75 (Accountability) - A for Effort.

Meta deletes 500K under‑16 accounts; TikTok, Snapchat, X, and YouTube follow, with $50M fines for violators. It’s the first move anywhere to stop AI-driven feeds from exploiting kids. But blind spots remain: chatbots, messaging, gaming. VPN workarounds are already trending. Regulators banned yesterday’s problems while tomorrow’s are alive on Discord and in AI chatbots.

AD’s Take: Bold move, but enforcement is murky and the scope is narrow. Businesses lose a generation of funnels, parents gain a short reprieve, and regulators prove they’re still lagging behind. Still fighting the last war. Now, why isn’t Big Tech melting down about this? Market size. Australia has only about 20M social media users, far smaller than the EU’s 450M. This doesn’t mean the battle is over. It’s just not a priority right now.

Leave a comment

2. HP Lays Off 6,000 for “AI Efficiency”

Exploitation vs Accountability Index: 35 (Ethical Gray Area)

HP filed with the SEC, announcing cuts of 6,000 jobs by 2028 tied to a $1B bet on “agentic AI.” Their promise: bots will handle admin, development, and support. The proof: none. No demos, benchmarks, or pilots. Just PowerPoint promises.

AD’s Take: When Waymo claims safety, we get 100M miles of data. When Boston Dynamics shows off Atlas, we see parkour videos. When labs brag about AI, they publish benchmarks. HP? Nothing. No audits, no disclosure, no accountability—just vaporware efficiency and a stock bump. For workers: demand receipts before severance. Are AI layoff excuses cutting it for you?

Leave a comment

3. Waymo Misses Stopped School Buses

Exploitation vs Accountability Index: 44 (Exploitation). Self-reported under pressure.

NHTSA investigates after Waymo cars illegally passed stopped school buses at least 19 times in Austin, Texas. Waymo filed a voluntary recall for a “bug” that blocked recognition of stop‑arm signals but only after the district documented incidents and media coverage forced attention.

AD’s Take: 100M miles of testing, safety claims, and permits—and yet the cars missed the one vehicle built to protect kids. This points to significant gaps in training data: either too few school buses, or missing context for what school buses look like both in motion and parked. Regulators didn’t catch it; a school district did. Waymo claims its vehicles are “statistically safer” than human drivers. But that means nothing if children aren’t safe.

Leave a comment

4. MagicEdit’s Security Lapse Highlights Deepfake Risks

Exploitation vs Accountability Index: 22 (Exploitation)

Cybersecurity researcher Jeremiah Fowler found an unsecured AWS S3 bucket belonging to MagicEdit (the company behind BoostInsider’s “nudify” app). Inside: over 1 million AI-generated images. Nudes. Face swaps. And thousands of images grafting children’s faces onto adult bodies. The apps got yanked from stores. The websites went dark. There’s a “probe” happening. But the images? They’re out there forever.

AD’s Take: Let’s talk numbers. Deepfakes mostly target women and children, and they are mostly created by men. That creates liability issues for businesses, whose employees and customers can be on either side of the harm. The one piece of legislation in place today is the TAKE IT DOWN Act, which requires social media companies to take down reported deepfakes within 48 hours. AI governance can regulate tools, but it needs government support, and it needs every citizen to treat each other the way they’d like to be treated online.

Leave a comment

5. ChatGPT Saves Widow from Romance Scam

Exploitation vs Accountability Index: 60 (Ethical Gray Area)

Margaret Loke, a 70-year-old widow, had already wired $950,000 to “Ed,” a romance scammer she met on Facebook who promised her love. And crypto riches. She was about to send the final $50K when panic set in. So she did something unusual: she pasted Ed’s messages into ChatGPT. The AI immediately flagged every red flag her grief-stricken heart had ignored. Pressure tactics. Offshore wallets. The sudden romantic pivot when she hesitated. She froze the transfer, confronted him, and managed to recover some of her losses through DOJ intervention.

AD’s Take: We need to talk to our parents. We always talk about how to keep kids safe in the age of AI. But we have to keep our parents safe too. They are lonely, they have money, they have access to technology and… they think they know everything. That’s a recipe for hacking and social engineering. This scam was not AI-driven by any means, but it could have been. Margaret sent $950K before becoming suspicious. I also have to give credit where it’s due. ChatGPT is not often in the news for something positive. This time it is.

Leave a comment

Wrap Up

Notice how we started in Australia’s policy room and ended in Margaret’s living room? That’s not accidental. AI governance feels abstract, but it isn’t. Think of it as concentric circles of vulnerability, and the closest circles are the ones we have to defend ourselves. The platforms won’t protect your mom from ‘Ed.’ The regulators won’t stop Waymo before it hits a kid. HP won’t prove AI can do your job before eliminating it. The only governance that works is the governance you build yourself: checking in, asking questions, running that sketchy message through a second source before you wire the money.


References/Citations

Stay Connected

X, Linkedin, Website & BlueSky

Executive Briefing: Deepfake Risk Is Already Inside Your Firewall

I work with midsize organizations helping them establish their AI governance posture. The same issue surfaces every time: deepfakes are happening inside normal companies with normal employees. You might wonder how. Here are a few examples:

  • A marketer experimenting with Runway on a work laptop

  • A sales rep swapping a prospect’s face into a demo “just for fun”

  • An employee testing a nudify app because “it was just one click,” forgetting they were using their work phone

These aren’t rogue actors. They’re your people, crossing lines they don’t realize exist and creating liability.

The TAKE IT DOWN Act forces platforms to remove reported deepfakes within 48 hours. But it does nothing to stop the content from being created inside your own environment. And right now, most midsize firms have:

  • No policy

  • No technical blocks

  • No incident process

That blind spot invites risk, liability, and reputational damage.

To close it, I’m sharing the minimum viable assessment to help businesses gauge where they stand.

These controls provide the minimum viable guardrails to prevent reputational damage, legal exposure, and regulatory non‑compliance.

The Regulatory Squeeze: The 2025 Laws You Can’t Ignore

  • U.S. Federal (TAKE IT DOWN Act): 48-hour removal mandate for nonconsensual deepfakes; platforms face FTC fines if they drag feet. [realitydefender.com]

  • EU AI Act: Mandatory labeling for deepfakes by Aug 2025; fines up to €35M for non-compliance. [realitydefender.com]

  • State Patchwork: CA/TX require disclosures for AI in elections; PA/WA criminalize “forged likenesses” with intent to defraud. [crowell.com]

  • Global Echoes: UK’s Online Safety Bill mandates harm prevention; China’s deepfake disclosure rules since 2023. [regulaforensics.com]

Risks to Your Bottom Line: Beyond the Headlines

  • Reputational Hits: Deepfake scandals have financial repercussions (e.g., Swift’s 2024 hoax cost brands millions).

  • Legal/Financial: Civil suits under the DEFIANCE Act (reintroduced May 2025) for damages; cyber insurance gaps leave you exposed (average loss: $280K/incident).

Executive Deepfake Liability Action Plan

This post is for paid subscribers