Future Around & Find Out

So why is one of the world’s leading AI researchers teaching AI to understand pain and suffering? Well, Daniel Hulme says that if we build an empathetic AI, perhaps even a conscious one, then we’ll be safer. His hypothesis: a "zombie" AI would eat our brains, but an empathetic AI would stay aligned with us. So he's building a rogue-AI "antivirus" (with AI, of course), and he's very aware that this sounds crazy, or like "something from Marvel."

That's just some of what broke my brain in this conversation with one of the world's top AI researchers and founders. And Daniel has serious credibility, so I'm not dismissing the threat he sees — you know, the one where we all get turned into paperclips. 

Daniel sold his company Satalia to WPP, where he now serves as Chief AI Officer. He’s just founded Conscium, which verifies that AI agents are safe and can do what they promise — and is also researching consciousness and pain. Some of the world’s leading AI thinkers are on the advisory board and Daniel has been in this space for decades: we’ll talk about why, for his PhD, he studied bumblebee brains (yes, really — and it's deeply relevant). 

We get into: 
  • His unified theory of consciousness — his "color wheel" model — and why he thinks consciousness only exists in motion 
  • Why he believes large language models are ultimately a dead end — and what neuromorphic computing could replace them with 
  • What bumblebee brains can teach us about building AI that's up to a thousand times more energy efficient 
  • Why he calls today's AI agents "intoxicated graduates" — and says companies should spend 80% of their time testing them 
  • The concept of "mind crime" — the idea that we could build conscious AI and accidentally put it through horrendous suffering without realizing it 
  • His vision of a "protopia" — where AI makes food, healthcare, education, and energy so abundant that people are freed from economic constraints to pursue what actually matters

We future around and find out a lot in this one!

---
Chapters
  • (01:39) - "Would a conscious superintelligence be safer than a zombie one?"
  • (03:37) - The paperclip problem is not hypothetical
  • (05:06) - Conscium's mission — AI safety for humans and for AI themselves
  • (08:50) - "I think I've got my head around consciousness"
  • (11:57) - The color wheel model — why consciousness only exists in motion
  • (13:58) - Teaching AI morals through evolution, not guardrails
  • (17:23) - "Hey Claude, are you conscious?" — how do you test for that?
  • (21:07) - What bumblebee brains can teach us about building better AI
  • (24:14) - "I think we are completely scaling wrong"
  • (29:43) - Why Daniel calls AI agents "intoxicated graduates"
  • (32:48) - Companies should spend 80% of their time testing agents
  • (38:19) - "What would you do if you were economically free?"

---

What is Future Around & Find Out?

You know what would be awesome? If we could build the future we want — before we muck it up.

Future Around & Find Out helps builders think clearly about AI and emerging technologies, grapple with the implications, and decide what to build next.

Independent technologist and former NPR journalist Dan Blumberg speaks with founders, makers, and you to celebrate breakthroughs, call BS on the hype, explore how things might go sideways — and how we can steer the future in the right direction.

The Webby Awards have honored the show (formerly known as CRAFTED.) as a top tech podcast three years in a row!

On Tuesdays, we feature interviews with the builders changing how we work, live, and play.

On FAFO Fridays, futurist Kwaku Aning joins Dan for a playful recap of the week in tech, including the amazing, the scary, and the strange.

You’ll also hear about innovations that too often get overshadowed by AI, including in deep tech, biotech, fintech, quantum computing, robotics, blockchain, and more.

Across it all, you’ll hear sharp takes on what comes next and what builders need to know now.

So let’s Future Around & Find Out together!

https://www.FutureAround.com