Future of Life Institute Podcast

On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI's global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps.  

You can learn more about Jeffrey's work at: https://jeffreyjding.github.io  

Timestamps:  

00:00:00 Preview and introduction  

00:01:36 A US-China AI arms race?  

00:10:58 Attitudes to AI safety in China  

00:17:53 Diffusion of AI  

00:25:13 Innovation without diffusion  

00:34:29 AI development concentration  

00:41:40 Learning from the history of technology  

00:47:48 Translating Chinese AI writings  

00:55:36 Automating translation of AI writings


What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.