Into AI Safety

Show Notes

As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.
As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and one other cofounder, whose name has been withheld due to the requirements of her current position. The non-profit had a simple but important mission: make the adoption of AI technology go well for humanity. Unfortunately, StakeOut.AI had to dissolve in late February of 2024 because no grantmaker would fund it. Although it is certainly disappointing that the organization is no longer operating, all three cofounders continue to contribute positively toward improving our world in their current roles.
If you would like to look further into Dr. Park's work, visit his website, Google Scholar, or follow him on Twitter.
00:00:54 ❙ Intro
00:02:41 ❙ Rapid development
00:08:25 ❙ Provable safety, safety factors, & CSAM
00:18:50 ❙ Litigation
00:23:06 ❙ Open/Closed Source
00:38:52 ❙ AIxBio
00:47:50 ❙ Scientific rigor in AI
00:56:22 ❙ AI deception
01:02:45 ❙ No takesies-backsies
01:08:22 ❙ StakeOut.AI's start
01:12:53 ❙ Sustainability & Agency
01:18:21 ❙ "I'm sold, next steps?" -you
01:23:53 ❙ Lessons from the amazing Spiderman
01:33:15 ❙ "I'm ready to switch careers, next steps?" -you
01:40:00 ❙ The most important question
01:41:11 ❙ Outro
Links to all articles/papers mentioned throughout the episode can be found below, in order of their appearance.
StakeOut.AI
Pause AI
AI Governance Scorecard (go to Pg. 3)
CIVITAI
Article on CIVITAI and CSAM
Senate Hearing: Protecting Children Online
PBS Newshour Coverage
The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work
Open Source/Weights/Release/Interpretation
Open Source Initiative
History of the OSI
Meta’s LLaMa 2 license is not Open Source
Is Llama 2 open source? No – and perhaps we need a new definition of open…
Apache License, Version 2.0
3Blue1Brown: Neural Networks
Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators
The online table
Signal
Bloomz model on HuggingFace
Mistral website
NASA Tragedies
Challenger disaster on Wikipedia
Columbia disaster on Wikipedia
AIxBio Risk
Dual use of artificial-intelligence-powered drug discovery
Can large language models democratize access to dual-use biotechnology?
Open-Sourcing Highly Capable Foundation Models (sadly, I can't rename the article...)
Propaganda or Science: Open Source AI and Bioterrorism Risk
Exaggerating the risks (Part 15: Biorisk from LLMs)
Will releasing the weights of future large language models grant widespread access to pandemic agents?
On the Societal Impact of Open Foundation Models
Policy brief
Apart Research
Science
Cicero
Human-level play in the game of Diplomacy by combining language models with strategic reasoning
Cicero webpage
AI Deception: A Survey of Examples, Risks, and Potential Solutions
Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
AI Safety Camp
Into AI Safety Patreon

Creators & Guests

Host
Jacob Haimes
Host of the podcast and all-around great dude.
Editor
Chase Precopia
Guest
Dr. Peter S. Park
AI Existential Safety Postdoctoral Fellow @MIT, @Tegmark Lab. @Harvard PhD '23, @Princeton '17. Alum of @JoHenrich Lab. Studies cognition (both human and AI).

What is Into AI Safety?

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved in the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence," or "AI."

For better formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io
For even more content and community engagement, head over to my Patreon at https://www.patreon.com/IntoAISafety