
#1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All
Published: October 25, 2025
Duration: 1:37:08
Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute.
Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there's a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it's too late?
Expect to learn the problem with building superhuman AI, why an AI would have goals we haven't programmed into it, if there is such a thing...