
"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov
Published: March 25, 2026
Duration: 11:51
I think the community underinvests in exploring extremely-low-competence AGI/ASI failure modes, and I will explain why.
Humanity's Response to the AGI Threat May Be Extremely Incompetent
There is a sufficient level of civilizational insanity overall, and the AI field itself has an empirical track record that speaks eloquently about its safety culture. For example:
At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally...