Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?
Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.
Note: Katja's organisation AI Impacts is currently hiring part- and full-time researchers.
There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.
But there are also many things we’re able to predict confidently today -- like the climate of Oxford in five years -- that we no longer give ourselves much credit for.
Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.
Links to learn more, summary and full transcript.
One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability?
And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?
A significant historical example was the development of nuclear weapons. Over thousands of years, the efficacy of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.
In today’s interview we also discuss:
* Why is AI Impacts one of the most important projects in the world?
* How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
* How does writing an academic paper differ from posting a summary online?
* When will unguided machines be able to produce better and cheaper work than humans for every possible task?
* What’s one of the most likely jobs to be automated soon?
* Are people always just predicting the same timelines for new technologies?
* How do AGI researchers differ from other AI researchers in their predictions?
* What are attitudes to safety research like within ML? Are there regional differences?
* How much should we believe experts generally?
* How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
* How quickly has the processing capacity for machine learning problems been increasing?
* What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
* What should we expect from a post-AI economy dominated by machines?
* How much influence can people ever have on events 20 years in the future? Are there any examples of people really trying to do this?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours podcast is produced by Keiran Harris.
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Hosted by Rob Wiblin and Luisa Rodriguez.