In AI We Trust?

Elham Tabassi and Reva Schwartz – two AI leaders from the National Institute of Standards and Technology (NIST) – join us this week to discuss the AI Risk Management Framework (#AIRMF), released on January 26th thanks to the herculean efforts of our guests. Tune in to find out why Miriam Vogel and Kay Firth-Butterfield believe the AI RMF will be game-changing. Learn the purpose behind the AI RMF; the emblematic 18-month, multi-stakeholder, transparent process used to design it; how they made it ‘evergreen’ at a time when AI progress is moving at lightning speed; and much more.

Show Notes

Materials mentioned in this episode:

AI Risk Management Framework (NIST)

NIST AI Risk Management Framework Playbook (NIST)

Perspectives about the NIST Artificial Intelligence Risk Management Framework (NIST)

What is In AI We Trust??

In AI We Trust? is a podcast with Miriam Vogel of EqualAI and Kay Firth-Butterfield of the Centre for Trustworthy Technology that surveys the global landscape for inspiration and lessons in developing responsible, trustworthy artificial intelligence. Each episode aims to answer a ‘big question’ in ethical AI with prominent lawmakers, leading thinkers, and internationally renowned authors.