Thinking Machines: AI & Philosophy

"Understanding what's going on in a model is important to fine-tune it for specific tasks and to build trust."

Bhavna Gopal is a PhD candidate at Duke and a research intern at Slingshot, with prior experience at Apple, Amazon, and Vellum.

We discuss
  • How adversarial robustness research impacts the field of AI explainability.
  • How to evaluate a model's ability to generalize.
  • Which adversarial attacks we should be concerned about with LLMs.

Creators & Guests

Host
Daniel Reid Cahn
Founder @ Slingshot - AI for all, not just Goliath

What is Thinking Machines: AI & Philosophy?

“Thinking Machines,” hosted by Daniel Reid Cahn, bridges the worlds of artificial intelligence and philosophy, aimed at technical audiences. Episodes explore how AI challenges our understanding of topics like consciousness, free will, and morality, featuring interviews with leading thinkers, AI leaders, founders, machine learning engineers, and philosophers. Daniel guides listeners through the complex landscape of artificial intelligence, questioning its impact on human knowledge, ethics, and the future.

We talk through the big questions that are bubbling through the AI community, covering topics like "Can AI be Creative?" and "Is the Turing Test outdated?", introduce new concepts to our vocabulary like "human washing," and only occasionally agree with each other.

Daniel is a machine learning engineer who misses his time as a philosopher at King's College London. He is the co-founder and CEO of Slingshot AI, building the foundation model for psychology.