Into AI Safety

Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Together with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.
00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letters
26:15 - EU AI Act
35:07 - Effective accelerationism
40:50 - Divide and conquer dynamics
45:40 - AI "art"
53:09 - Outro
Links to all articles and papers mentioned throughout the episode can be found below, in order of their appearance.

StakeOut.AI
AI Governance Scorecard (go to Pg. 3)
Pause AI
Regulations.gov
USCO StakeOut.AI Comment
OMB StakeOut.AI Comment

AI Treaty open letter
TAISC
Alpaca: A Strong, Replicable Instruction-Following Model
References on EU AI Act and Cedric O
Tweet from Cedric O
EU policymakers enter the last mile for Artificial Intelligence rulebook
AI Act: EU Parliament’s legal office gives damning opinion on high-risk classification ‘filters’
EU’s AI Act negotiations hit the brakes over foundation models
The EU AI Act needs Foundation Model Regulation
BigTech’s Efforts to Derail the AI Act

Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
Divide-and-Conquer Dynamics in AI-Driven Disempowerment


Creators & Guests

Editor: Chase Precopia

Guest: Dr. Peter S. Park
AI Existential Safety Postdoctoral Fellow @MIT, @Tegmark Lab. @Harvard PhD '23, @Princeton '17. Alum of @JoHenrich Lab. Studies cognition (both human and AI).

What is Into AI Safety?

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved in the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence," or "AI".

For better formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io
For even more content and community engagement, head over to my Patreon at https://www.patreon.com/IntoAISafety