ML Safety Report

In this week's newsletter, we explore the alignment of modern large models and examine criticisms of extreme AI risk arguments.

Show Notes

Opportunities

Join the Alignment Jam hackathon this weekend to get hands-on experience with ML safety research: https://ais.pub/scale

Sources

The article on superintelligence discussed in this episode: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4222347

What is ML Safety Report?

A weekly podcast covering the latest research in AI and machine learning safety from organizations such as DeepMind, Anthropic, and MIRI.