
What 40 Years in AI Teaches Us About AI Governance and Safety

There's a question that keeps surfacing in every serious AI conversation I've had this year - and it's not about benchmarks, model size, or inference cost. It's this: Are we building something the world can actually sustain?

I sat down in Singapore with Dr. James Ong, one of the most thoughtful voices in AI governance across Asia-Pacific, and that question ran underneath almost everything we discussed. James has been in the AI space since 1986. He's seen a full boom-and-bust cycle play out - and has spent the last decade thinking about how we avoid repeating it. When someone with that vantage point says "I've seen this before," it's worth listening closely.

The AI Industry Has Crashed Before - Are the Warning Signs Back?

James’ first observation isn’t about technology - it’s about history, and it comes as a warning. He was in an AI research lab in 1986. He watched the investment euphoria build - and then saw the field collapse under the weight of promises the technology wasn’t ready to deliver. The AI winters of the '80s and '90s didn't just slow progress - they erased momentum that took decades to rebuild. Today, looking at circular investment deals and self-reinforcing valuations, he recognizes the pattern immediately.

Is the industry genuinely at risk of repeating that cycle?

“We want to make sure we don’t repeat the same mistake,” he said. This isn’t a general caution - it’s a reading of history from someone who lived through the last correction and has spent the past decade working to prevent the next. That conviction led him to establish the Artificial Intelligence International Institute (AIII) and co-author “AI for Humanity: Building a Sustainable AI for the Future.” His argument is simple but non-negotiable: AI must be sustainable across three dimensions - technology, governance, and commercialization. Not sequentially. All at once.

Because the last time the industry prioritized momentum over foundations, it didn’t just slow down - it lost a generation of progress.

AI for Humanity: Beyond Safety Guardrails

So what does "AI for Humanity" actually mean in practice - is it just safety guardrails rebranded?

James is specific about what it means operationally. He's been advocating for AI to be formally recognized as UN Sustainable Development Goal 18 - anchoring AI development to the same anti-poverty, health, and equity targets the world has already agreed upon. He made that case at a UN University conference in Macau, and has taken the same message to Davos, the World AI Conference in Shanghai, and South by Southwest in Austin. Across wildly different cultural and political contexts, the concern is identical: something important is being decided right now, without enough of the right people in the room.

For developers, the point is simple - the systems you build are part of this governance story, whether you intend it or not.

Agentic AI Risks: The Design Problem Most Teams Are Ignoring

This is where the conversation got most technically concrete.

Traditional risk management in finance - fraud detection, payment verification - is largely solved territory. The real exposure is agentic AI: systems that don't just analyze and surface recommendations, but act. Commit resources. Execute financial transactions on a user's behalf, sometimes without an explicit approval gate.

How much decision-making authority should an AI agent actually hold?

"You may end up spending your money or making financial commitments without knowing," James said plainly. That's not a theoretical edge case. It's a product category being actively shipped right now.

For those building agent-based systems, the design question this raises is: where exactly is your intervention point? What does the human-in-the-loop look like when decisions happen in milliseconds? This is an architecture problem before it's a policy problem, and it deserves to be in the spec from day one - not added as a compliance checkbox before launch.
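One way to make that intervention point concrete is a simple approval gate between the agent's proposed action and its execution. This is a minimal sketch, not a definitive pattern - the `ProposedAction` and `ApprovalGate` names and the dollar threshold are hypothetical, purely for illustration of where the human sits in the loop:

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    cost_usd: float

@dataclass
class ApprovalGate:
    # Actions at or below this cost execute automatically;
    # anything above it is held for explicit human approval.
    auto_approve_limit_usd: float
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, action: ProposedAction) -> str:
        if action.cost_usd <= self.auto_approve_limit_usd:
            self.executed.append(action)
            return "executed"
        self.pending.append(action)
        return "pending_approval"

    def approve(self, action: ProposedAction) -> None:
        # The explicit human intervention point: nothing above the
        # threshold runs until a person signs off.
        self.pending.remove(action)
        self.executed.append(action)

gate = ApprovalGate(auto_approve_limit_usd=50.0)
print(gate.submit(ProposedAction("renew $12 subscription", 12.0)))  # executed
print(gate.submit(ProposedAction("book $900 flight", 900.0)))       # pending_approval
```

The key design choice is that the gate lives in the architecture, not in a policy document: the agent physically cannot complete a high-stakes action without passing through `approve()`.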

Voice AI Has a Trust Deficit - And Developers Own Part of It

How far has deepfake audio actually eroded real-world trust in voice?

"I've reached a point where I don't receive phone calls anymore," James said. For someone with 40 years in this field, that's a striking statement. Verification is now a prerequisite, not a courtesy. For developers shipping voice-first agents, this is the environment they're entering. Trust is not the default state. It has to be earned and architecturally reinforced - through authentication layers, clear disclosure, and verification steps at the right moments.

He also flagged something actionable for agent builders: persistent memory has already crossed into uncomfortable territory for sophisticated users. His ask was simple - a zero-memory mode, a clean-slate option. That's not an edge case. That's a user control feature your power users will eventually demand.
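A zero-memory mode is straightforward to support if it's designed in from the start. Here is a minimal sketch under stated assumptions - the `AgentMemory` class and its methods are hypothetical names, not any particular framework's API:

```python
class AgentMemory:
    """Toy persistent-memory store with an opt-in clean-slate mode."""

    def __init__(self, zero_memory: bool = False):
        # zero_memory=True is the "clean slate" James asked for:
        # the agent never writes anything between turns.
        self.zero_memory = zero_memory
        self._store: dict = {}

    def remember(self, key: str, value: str) -> None:
        if self.zero_memory:
            return  # zero-memory mode: silently discard everything
        self._store[key] = value

    def recall(self, key: str):
        return self._store.get(key)

    def wipe(self) -> None:
        # A one-click erase for users who enabled memory and change their mind.
        self._store.clear()
```

The point of the sketch is the control surface, not the storage: users get both a mode that never records and a wipe that genuinely deletes, rather than a buried settings toggle.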

Developers’ Responsibility in Building Ethical AI

James closed with a message aimed squarely at developers, not policymakers.

"Whatever you do may sound small, but you can actually make a difference in changing the direction of AI towards AI for humanity."

To make a meaningful difference in the direction of AI, he urges three things: build explicit human intervention points into every agentic system before it ships; give users genuine, accessible control over their data and memory; and actively contribute to AI governance frameworks - including supporting AI for Humanity as UN Sustainable Development Goal 18.

The decisions that shape this cycle are being made right now - in architecture reviews, product specs, and engineering calls. Small actions, taken consistently across thousands of teams, compound into structural change that no policy body can mandate alone.

Forty years of pattern recognition is hard to argue with. There's a lot more where this came from - watch the full conversation.

Check out the full episode here: https://www.youtube.com/watch?v=JBJKm5CSRuE&t=2s
