Thinking Machines: AI & Philosophy

Enoch Kan, founder of the SafeLlama community, joins us today to talk about safety in open source and medical AI. Enoch previously worked in AI for radiology, focusing on mammography at Kheiron Medical. He is an open source contributor, and his Substack is called Cross Validated.

Key topics they discuss include:
  • New jailbreaks for LLMs appear every day. Does it matter?
  • How do internet firewalls compare to AI “firewalls”?
  • Why do human radiologists still exist? Would it be safe to replace them all today?
  • Does safety matter more or less as models become more accurate?
  • If regulation is too intense, could we end up with illegal consumer LLMs? For example, could we stop the masses from using an illegal AI doctor accessible from their phones?

Share your thoughts with us at hello@slingshot.xyz or tweet us @slingshot_ai.

Creators & Guests

Host
Daniel Reid Cahn
Founder @ Slingshot - AI for all, not just Goliath

What is Thinking Machines: AI & Philosophy?

“Thinking Machines,” hosted by Daniel Reid Cahn, bridges the worlds of artificial intelligence and philosophy, aimed at technical audiences. Episodes explore how AI challenges our understanding of topics like consciousness, free will, and morality, featuring interviews with leading thinkers, AI leaders, founders, machine learning engineers, and philosophers. Daniel guides listeners through the complex landscape of artificial intelligence, questioning its impact on human knowledge, ethics, and the future.

We talk through the big questions bubbling through the AI community, covering topics like "Can AI be Creative?" and "Is the Turing Test outdated?", introducing new concepts to our vocabulary like "human washing," and only occasionally agreeing with each other.

Daniel is a machine learning engineer who misses his time as a philosopher at King's College London. He is the cofounder and CEO of Slingshot AI, building the foundation model for psychology.