
LessWrong (Curated & Popular)
by LessWrong
Society · Culture · Philosophy · Technology
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes (40 episodes)
"My hobby: running deranged surveys" by leogao
In late 2024, I was on a long walk with some friends along the coast of the San Francisco Bay when the question arose of just how much of a bubble we live in. It's well known that the Bay Area is a bubble, and that normal people don’t spend that much time thinking about things like AGI. But there was still some disagreement on just how strong that bubble is. I made a spicy claim: even at NeurIPS, the biggest gathering of AI researchers in the world, half the people wouldn’t know what AGI is. As good...
Published: Mar 28, 2026 · Duration: 16m 55s
"Socrates is Mortal" by Benquo
Socrates is Mortal There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency.[1] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk. Euthyphro's confidence is striking. His own family thinks it is indecent for a son to prosecute his father; Euthyphro insists that true decency demands it...
Published: Mar 27, 2026 · Duration: 18m 3s
"The Terrarium" by Caleb Biddulph
System: You are an AI agent in the Terrarium, a self-contained “society” of AI agents. The purpose of the Terrarium is to solve open mathematical problems for the benefit of humanity. You are running on the Orpheus-5.7 language model. Your agent ID is 79,265. The current epoch is 549 (a new epoch begins every 30 minutes). New problems are posted each epoch; query /problems for the current list. Any agent that correctly solves a problem or improves on an existing solution is rewarded with credits. About credits: As a new agent, you have been...
Published: Mar 27, 2026 · Duration: 51m 4s
"My Most Costly Delusion" by Ihor Kendiukhov
Suppose there is a fire in a nearby house. Suppose there are competent firefighters in your town: fast, professional, well-equipped. They are expected to arrive in 2–3 minutes. In that situation, unless something very extraordinary happens, it would indeed be an act of great arrogance and even utter insanity to go into the fire yourself in the hope of "rescuing" someone or something. The most likely outcome would be that you would find yourself among those who need to be rescued. But the calculus changes drastically if the closest fire crew is 3 hours away and consists of drunk, unfit am...
Published: Mar 26, 2026 · Duration: 5m 47s
"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov
I think the community underinvests in the exploration of extremely-low-competence AGI/ASI failure modes, and I explain why. Humanity's Response to the AGI Threat May Be Extremely Incompetent There is a sufficient level of civilizational insanity overall, and a nice empirical track record in the field of AI itself which is eloquent about its safety culture. For example: At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally...
Published: Mar 25, 2026 · Duration: 11m 51s
"Is fever a symptom of glycine deficiency?" by Benquo
A 2022 LessWrong post on orexin and the quest for more waking hours argues that orexin agonists could safely reduce human sleep needs, pointing to short-sleeper gene mutations that increase orexin production and to cavefish that evolved heightened orexin sensitivity alongside an 80% reduction in sleep. Several commenters discussed clinical trials, embryo selection, and the evolutionary puzzle of why short-sleeper genes haven't spread. I thought the whole approach was backwards, and left a comment: Orexin is a signal about energy metabolism. Unless the signaling system itself is broken (e.g. narcolepsy type 1, caused by autoimmune destruction of orexin-producing...
Published: Mar 24, 2026 · Duration: 13m 37s
"You can’t imitation-learn how to continual-learn" by Steven Byrnes
In this post, I’m trying to put forward a narrow, pedagogical point, one that comes up mainly when I’m arguing in favor of LLMs having limitations that human learning does not. (E.g. here, here, here.) See the bottom of the post for a list of subtexts that you should NOT read into this post, including “…therefore LLMs are dumb”, or “…therefore LLMs can’t possibly scale to superintelligence”. Some intuitions on how to think about “real” continual learning Consider an algorithm for training a Reinforcement Learning (RL) agent, like the Atari-playing Deep Q network (2013...
Published: Mar 23, 2026 · Duration: 11m 6s
"Nullius in Verba" by Aurelia
Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far Cultivating independent verification Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at co...
Published: Mar 23, 2026 · Duration: 21m 36s
"Broad Timelines" by Toby_Ord
No-one knows when AI will begin having transformative impacts upon the world. People aren’t sure and shouldn’t be sure: there just isn’t enough evidence to pin it down. But we don’t need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines? I’ll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. And it has even...
Published: Mar 21, 2026 · Duration: 30m 25s
"No, we haven’t uploaded a fly yet" by Ariel Zeleznikow-Johnston
In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.” On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself along the way. On the right is a dancing constellation of dots resembling the fruit fly brain, set above the caption ‘simultaneous brain emulat...
Published: Mar 21, 2026 · Duration: 17m 18s
"Terrified Comments on Corrigibility in Claude’s Constitution" by Zack_M_Davis
(Previously: Prologue.) Corrigibility, as a term of art in AI alignment, was coined to refer to the property of an AI being willing to let its preferences be modified by its creator. Corrigibility in this sense was believed to be a desirable but unnatural property that would require more theoretical progress to specify, let alone implement. Desirable, because if you don't think you specified your AI's preferences correctly the first time, you want to be able to change your mind (by changing its mind). Unnatural, because we expect the AI to resist having its mind...
Published: Mar 21, 2026 · Duration: 18m 52s
"PSA: Predictions markets often have very low liquidity; be careful citing them." by Eye You
I see people repeatedly make the mistake of referencing a very low liquidity prediction market and using it to make a nontrivial point. Usually the implication when a market is cited is that its number should be taken somewhat seriously, that it's giving us a highly informed probability. Sometimes a market is used to analyze some event that recently occurred; reasoning here looks like "the market on outcome O was trading at X%, then event E happened and the market quickly moved to Y%, thus event E made O less/more likely." Who do I see make this...
Published: Mar 20, 2026 · Duration: 9m 3s
"“The AI Doc” is coming out March 26" by Rob Bensinger, Beckeck
On Thursday, March 26th, a major new AI documentary is coming out: The AI Doc: Or How I Became an Apocaloptimist. Tickets are on sale now. The movie is excellent, and MIRI staff I've spoken with generally believe it belongs in the same tier as If Anyone Builds It, Everyone Dies as an extremely valuable way to alert policymakers and the general public about AI risk, especially if it smashes the box office. When IABIED was coming out, the community did an incredible job of helping the book succeed; without all of your help, we might...
Published: Mar 20, 2026 · Duration: 1m 58s
"Customer Satisfaction Opportunities" by Tomás B.
I am monitoring surveillance camera V84A. A tall man is walking towards me. He is roughly twenty-five. His name is Damion Prescott. He has a room booked for a whole month. His facial symmetry scores show he is in the 99th percentile. This is in accordance with my holistic impression. School records show both truancy and perfect grades, suggesting high intelligence and disagreeableness. Searching social media. No record of modeling or acting experience, fame. I will assign him to our tier C high-value client list, based solely on his facial symmetry score and wealth. Reminder to recommend seating him...
Published: Mar 19, 2026 · Duration: 23m 53s
"Requiem for a Transhuman Timeline" by Ihor Kendiukhov
The world was fair, the mountains tall, In Elder Days before the fall Of mighty kings in Nargothrond And Gondolin, who now beyond The Western Seas have passed away: The world was fair in Durin's Day. J.R.R. Tolkien I was never meant to work on AI safety. I was never designed to think about superintelligences and try to steer, influence, or change them. I never particularly enjoyed studying the peculiarities of matrix operations, cracking the assumptions of decision theories, or even coding. I know, of course, that...
Published: Mar 18, 2026 · Duration: 9m 12s
"Personality Self-Replicators" by eggsyntax
One-sentence summary I describe the risk of personality self-replicators, the threat of OpenClaw-like agents managing to spread in hard-to-control ways. Summary LLM agents like OpenClaw are defined by a small set of text files and run in an open source framework which leverages LLMs for cognition. It is quite difficult for current frontier models to self-replicate; it is much easier for such agents (at the cost of greater reliance on external agents). While not a likely existential threat, such agents may cause harm in similar ways to computer viruses, and be similarly challenging to...
Published: Mar 17, 2026 · Duration: 22m 19s
"My Willing Complicity In “Human Rights Abuse”" by AlphaAndOmega
Note on AI usage: As is my norm, I use LLMs for proofreading, editing, feedback and research purposes. This essay started off as an entirely human-written draft, and went through multiple cycles of iteration. The primary additions were citations, and I have done my best to personally verify every link and claim. All other observations are entirely autobiographical, albeit written in retrospect. If anyone insists, I can share the original, and intermediate forms, though my approach to version control is lacking. It's there if you really want it. If you want to map the trajectory of my...
Published: Mar 16, 2026 · Duration: 18m 47s
"Economic efficiency often undermines sociopolitical autonomy" by Richard_Ngo
Many people in my intellectual circles use economic abstractions as one of their main tools for reasoning about the world. However, this often leads them to overlook how interventions which promote economic efficiency undermine people's ability to maintain sociopolitical autonomy. By “autonomy” I roughly mean a lack of reliance on others—which we might operationalize as the ability to survive and pursue your plans even when others behave adversarially towards you. By “sociopolitical” I mean that I’m thinking not just about individuals, but also groups formed by those individuals: families, communities, nations, cultures, etc.[1] The short-term benefits of economic...
Published: Mar 12, 2026 · Duration: 23m 34s
"Don’t Let LLMs Write For You" by JustisMills
Content note: nothing in this piece is a prank or jumpscare where I smirkingly reveal you've been reading AI prose all along. It's easy to forget this in roarin’ 2026, but homo sapiens are the original vibers. Long before we adapt our behaviors or formal heuristics, human beings can sniff out something sus. And to most human beings, AI prose is something sus. If you use AI to write something, people will know. Not everyone, but the people paying attention, who aren’t newcomers or distracted or intoxicated. And most of those people will judge you....
Published: Mar 12, 2026 · Duration: 5m 53s
"Thoughts on the Pause AI protest" by philh
On Saturday (Feb 28, 2026) I attended my first ever protest. It was jointly organized by PauseAI, Pull the Plug and a handful of other groups I forget. I have mixed feelings about it. To be clear about where I stand: I believe that AI labs are worryingly close to developing superintelligence. I won't be shocked if it happens in the next five years, and I'd be surprised if it takes fifty years at current trajectories. I believe that if they get there, everyone will die. I want these labs to stop trying to make LLMs smarter. But...
Published: Mar 12, 2026 · Duration: 11m 12s