Into AI Safety

Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path into the field and the application process for research/mentorship programs.
Join the Mech Interp Discord server and attend reading groups at 11:00am on Wednesdays (Mountain Time)!
Check out Alice's website.
Links to all articles and papers mentioned throughout the episode can be found below, in order of their appearance.

EleutherAI
Join the public EleutherAI discord server

Distill
Effective Altruism (EA)
MATS Retrospective Summer 2023 post
Ambitious Mechanistic Interpretability AISC research plan by Alice Rigg
SPAR
Stability AI
During their most recent fundraising round, Stability AI had a valuation of $4B (Bloomberg)

Mech Interp Discord Server



Creators & Guests

Host
Jacob Haimes
Host of the podcast and all-around great dude.
Editor
Chase Precopia

What is Into AI Safety?

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved in the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence," or "AI."

For better formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io
For even more content and community engagement, head over to my Patreon at https://www.patreon.com/IntoAISafety