TalkRL: Reinforcement Learning Interviews


Summary

Antonin Raffin and Ashley Hill discuss the past, present, and future of Stable Baselines, State Representation Learning, the S-RL Toolbox, RL on real robots, big compute for RL, and much more!

Show Notes

Antonin Raffin is a researcher at the German Aerospace Center (DLR) in Munich, working at the Institute of Robotics and Mechatronics. His research focuses on using machine learning to control real robots (because simulation is not enough), with a particular interest in reinforcement learning.

Ashley Hill is doing his thesis on improving control algorithms using machine learning for real-time gain tuning.
He works mainly with neuroevolution, genetic algorithms, and of course reinforcement learning, applied to mobile robots. He holds a master's degree in machine learning and a bachelor's degree in computer science from the Université Paris-Saclay.


Featured References

stable-baselines on GitHub
Ashley Hill and Antonin Raffin, primary authors. (A minimal usage sketch follows this reference list.)

S-RL Toolbox
Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat

Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics
Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat
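
For listeners new to the library, here is a minimal sketch of typical Stable Baselines usage (the TensorFlow-based library discussed in the episode). The environment, algorithm, and hyperparameters below are illustrative choices, not something prescribed in the episode.

import gym
from stable_baselines import PPO2

# Create a Gym environment and train a PPO agent on it.
env = gym.make("CartPole-v1")
model = PPO2("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10000)

# Run the trained policy.
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()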


What is TalkRL: Reinforcement Learning Interviews?

TalkRL podcast is All Reinforcement Learning, All the time. In-depth interviews with brilliant people at the forefront of RL research and practice. Guests from places like MILA, MIT, DeepMind, Google Brain, Brown, Caltech, and more. Hosted by Robin Ranjit Singh Chauhan. Technical content.