We might need to shut it all down: AI governance seems more important than ever, and technical research is being challenged.
Opportunities
- Join us for the interpretability hackathon with Neel Nanda in a couple of weeks: https://itch.io/jam/interpretability-hackathon
- Come along to the launch event of the newly founded European Network for AI Safety, a decentralized organization coordinating AI safety work across Europe: https://forms.gle/RiJ7A5YuAk1BjbDM7
- AI100 essay writing competition: https://ai100.stanford.edu/prize-competition
- Join an information security course taught by a former Google security officer: https://forum.effectivealtruism.org/posts/zxrBi4tzKwq2eNYKm/ea-infosec-skill-up-in-or-make-a-transition-to-infosec-via
Sources
- AI governance ideathon: https://itch.io/jam/ai-gov/results
- Pause AGI development: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
- Gary Marcus on reasons for signing: https://garymarcus.substack.com/p/the-open-letter-controversy
- Stop AGI development: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
- Zvi's overview of AI tools out of the box: https://thezvi.substack.com/p/gpt-4-plugs-in
- LangChain: https://pub.towardsai.net/inside-langchain-the-open-source-llm-framework-everyone-is-talking-about-22f69e4bf808
- Zapier Natural Language Actions: https://nla.zapier.com/get-started/
- GPT-4 Plugins: https://openai.com/blog/chatgpt-plugins
- Eliezer on podcasts: https://www.youtube.com/watch?v=AaTRHFaaPG8
- Complaint against OpenAI filed with the FTC: https://www.theverge.com/2023/3/30/23662101/ftc-openai-investigation-request-caidp-gpt-text-generation-bias
- ARC Evals: https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/
- Programming on GitHub: https://twitter.com/mckaywrigley/status/1641204093074145281
What is ML Safety Report?
A weekly podcast updating you on the latest research in AI and machine learning safety from organizations such as DeepMind, Anthropic, and MIRI.