Eliezer Yudkowsky

Eliezer S. Yudkowsky (born September 11, 1979) is an American writer, blogger, and advocate for the Singularity and Friendly Artificial Intelligence.

Quotes



 * Do not flinch from experiences that might destroy your beliefs. The thought you cannot think controls you more than thoughts you speak aloud. Submit yourself to ordeals and test yourself in fire. Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm. Evaluate your beliefs first and then arrive at your emotions. Let yourself say: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool.” 
 * Twelve Virtues of Rationality


 * To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.
 * Twelve Virtues of Rationality


 * In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.
 * Above-Average AI Scientists


 * I have sometimes thought that all professional lectures on rationality should be delivered while wearing a clown suit, to prevent the audience from confusing seriousness with solemnity.
 * In reply to a comment on his essay The Proper Use of Doubt


 * It would actually be quite surprisingly helpful for increasing the percentage of people who will participate meaningfully in saving the planet, if there were some reliably-working standard explanation for why physics and logic together have enough room to contain morality.
 * By Which It May Be Judged


 * Declaring yourself to be operating by "Crocker's Rules" means that other people are allowed to optimize their messages for information, not for being nice to you. Crocker's Rules means that you have accepted full responsibility for the operation of your own mind — if you're offended, it's your fault. Anyone is allowed to call you a moron and claim to be doing you a favor. (Which, in point of fact, they would be. One of the big problems with this culture is that everyone's afraid to tell you you're wrong, or they think they have to dance around it.) Two people using Crocker's Rules should be able to communicate all relevant information in the minimum amount of time, without paraphrasing or social formatting. Obviously, don't declare yourself to be operating by Crocker's Rules unless you have that kind of mental discipline. Note that Crocker's Rules does not mean you can insult people; it means that other people don't have to worry about whether they are insulting you. Crocker's Rules are a discipline, not a privilege. Furthermore, taking advantage of Crocker's Rules does not imply reciprocity. How could it? Crocker's Rules are something you do for yourself, to maximize information received — not something you grit your teeth over and do as a favor.
 * Promoting "Crocker's Rules" at SL4 (c. 2000)


 * If you declare Crocker's Rules, other people don't need to worry about being tactful to you. (You still need to worry about being tactful to them — Crocker's Rules only work one way.)
 * Promoting "Crocker's Rules" in "An Introduction to SL4" (2002)


 * Crocker's Rules didn't give you the right to say anything offensive, but other people could say potentially offensive things to you, and it was your responsibility not to be offended. This was surprisingly hard to explain to people; many people would read the careful explanation and hear, "Crocker's Rules mean you can say offensive things to other people." 
 * "Radical Honesty" at LessWrong.com (10 September 2007)


 * The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
 * Artificial Intelligence as a Positive and Negative Factor in Global Risk (August 2006)


 * People go funny in the head when talking about politics. The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death. And sex, and wealth, and allies, and reputation... When, today, you get into an argument about whether "we" ought to raise the minimum wage, you're executing adaptations for an ancestral environment where being on the wrong side of the argument could get you killed... Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back — providing aid and comfort to the enemy.
 * Politics is the Mind-Killer (February 2007)


 * Ever since I adopted the rule of "That which can be destroyed by the truth should be," I've also come to realize "That which the truth nourishes should thrive." When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light — I know that I can never truly understand it, and I haven't the words to say.
 * Feeling Rational (April 2007)


 * Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
 * Your Strength as a Rationalist (August 2007)


 * But ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.
 * Mysterious Answers to Mysterious Questions (August 2007); Yudkowsky credits the map/territory analogy to physicist/statistician Edwin Thompson Jaynes.


 * Between hindsight bias, fake causality, positive bias, anchoring/priming, et cetera et cetera, and above all the dreaded confirmation bias, once an idea gets into your head, it's probably going to stay there.
 * We Change Our Minds Less Often Than We Think (October 2007)


 * If people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.
 * How to Seem (and Be) Deep (October 2007)


 * The strength of a theory is not what it allows, but what it prohibits; if you can invent an equally persuasive explanation for any outcome, you have zero knowledge.
 * An Alien God (November 2007)


 * The police officer who puts their life on the line with no superpowers, no X-Ray vision, no super-strength, no ability to fly, and above all no invulnerability to bullets, reveals far greater virtue than Superman — who is only a mere superhero.
 * Superhero Bias (December 2007)


 * Lonely dissent doesn't feel like going to school dressed in black. It feels like going to school wearing a clown suit.
 * Lonely Dissent (December 2007)


 * Science has heroes, but no gods. The great Names are not our superiors, or even our rivals, they are passed milestones on our road; and the most important milestone is the hero yet to come.
 * Guardians of Ayn Rand (December 2007)


 * The human brain cannot release enough neurotransmitters to feel emotion a thousand times as strong as the grief of one funeral. A prospective risk going from 10,000,000 deaths to 100,000,000 deaths does not multiply by ten the strength of our determination to stop it. It adds one more zero on paper for our eyes to glaze over.
 * Cognitive Biases Potentially Affecting Judgment of Global Risks, a chapter of Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković (2008)


 * People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can't see their intuitions as the way their cognitive algorithms happen to look from the inside. And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are—and discarded as obviously wrong.
 * How an Algorithm Feels from the Inside (February 2008)


 * Mystery exists in the mind, not in reality. If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself. All the more so, if it seems like no possible answer can exist: Confusion exists in the map, not in the territory. Unanswerable questions do not mark places where magic enters the universe. They mark places where your mind runs skew to reality.
 * A comment on Wrong Questions (March 2008)


 * There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.
 * Quantum Explanations (April 2008), part of his Quantum Physics Sequence.


 * The nature of "reality" is something about which I'm still confused, which leaves open the possibility that there isn't any such thing. But Egan's Law still applies: "It all adds up to normality." Apples didn't stop falling when Einstein disproved Newton's theory of gravity. Sure, when the dust settles, it could turn out that apples don't exist, Earth doesn't exist, reality doesn't exist. But the nonexistent apples will still fall toward the nonexistent ground at a meaningless rate of 9.8 m/s².
 * Quantum Non-Realism (May 2008)


 * If you've been cryocrastinating, putting off signing up for cryonics "until later", don't think that you've "gotten away with it so far". Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.
 * "Timeless Identity" (June 2008)


 * Physiologically adult humans are not meant to spend an additional 10 years in a school system; their brains map that onto "I have been assigned low tribal status". And so, of course, they plot rebellion—accuse the existing tribal overlords of corruption—plot perhaps to split off their own little tribe in the savanna, not realizing that this is impossible in the Modern World.
 * Rebelling Within Nature (July 2008)


 * Part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe — a universe containing no ontologically basic mental things such as souls or magic — and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.
 * Mundane Magic (October 2008)


 * If cryonics were a scam it would have far better marketing and be far more popular.
 * A comment on reddit (2009)


 * My experience is that journalists report on the nearest-cliche algorithm, which is extremely uninformative because there aren’t many cliches, the truth is often quite distant from any cliche, and the only thing you can infer about the actual event was that this was the closest cliche.... It is simply not possible to appreciate the sheer awfulness of mainstream media reporting until someone has actually reported on you. It is so much worse than you think.
 * Predictable Fakers (January 2009)


 * By and large, the answer to the question "How do large institutions survive?" is "They don't!" The vast majority of large modern-day institutions — some of them extremely vital to the functioning of our complex civilization — simply fail to exist in the first place.
 * Helpless Individuals (March 2009)


 * If I'm teaching deep things, then I view it as important to make people feel like they're learning deep things, because otherwise, they will still have a hole in their mind for "deep truths" that needs filling, and they will go off and fill their heads with complete nonsense that has been written in a more satisfying style.
 * A comment on "Mind Control and Me" at LessWrong.com (March 2009)


 * We underestimate the distance between ourselves and others. Not just inferential distance, but distances of temperament and ability, distances of situation and resource, distances of unspoken knowledge and unnoticed skills and luck, distances of interior landscape.
 * Beware of Other-Optimizing (April 2009)


 * The people I know who seem to make unusual efforts at rationality, are unusually honest, or, failing that, at least have unusually bad social skills.
 * Honesty: Beyond Internal Truth (June 2009)


 * If dragons were common, and you could look at one in the zoo — but zebras were a rare legendary creature that had finally been decided to be mythical — then there's a certain sort of person who would ignore dragons, who would never bother to look at dragons, and chase after rumors of zebras. The grass is always greener on the other side of reality. Which is rather setting ourselves up for eternal disappointment, eh? If we cannot take joy in the merely real, our lives shall be empty indeed.
 * The summary of the Joy in the Merely Real sequence (October 2009)


 * This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us. This is the hour before the final exam and we're trying to get as much studying done as possible. It may be that you can't make yourself feel that for a decade or thirty years or however long this crunch time lasts, but the reality is one thing and the emotions are another... If you confront it full on, then you can't really justify trading off any part of intergalactic civilization for any intrinsic thing you could get nowadays, and at the same time, it's also true that there are very few people who can live like that (and I'm not one of them myself).
 * Question 5 in Less Wrong Q&A with Eliezer Yudkowsky (January 2010)


 * If you want to build a recursively self-improving AI, have it go through a billion sequential self-modifications, become vastly smarter than you, and not die, you've got to work to a pretty precise standard.
 * Question 12 in Less Wrong Q&A with Eliezer Yudkowsky (January 2010)


 * Have I ever remarked on how completely ridiculous it is to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so? Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating that subject given their test scores? The more so considering this is a central allocation question for the entire economy?
 * A comment on Memetic Hazards in Videogames (September 2010)


 * Rationality is the master lifehack which distinguishes which other lifehacks to use.
 * Epistle to the New York Less Wrongians (April 2011)


 * Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us.
 * Epistle to the New York Less Wrongians (April 2011)


 * Maybe you just can't protect people from certain specialized types of folly with any sane amount of regulation, and the correct response is to give up on the high social costs of inadequately protecting people from themselves under certain circumstances.
 * A comment on Hacker News (April 2012)


 * Litmus test: If you can't describe Ricardo's Law of Comparative Advantage and explain why people find it counterintuitive, you don't know enough about economics to direct any criticism or praise at "capitalism" because you don't know what other people are referring to when they use that word.
 * A comment on Facebook (June 2012)


 * I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.
 * So You Want To Be A Seed AI Programmer


 * There's a standard Internet phenomenon (I generalize) of a Sneer Club of people who enjoy getting together and picking on designated targets. Sneer Clubs (I expect) attract people with high Dark Triad characteristics, which is (I suspect) where Asshole Internet Atheists come from - if you get a club together for the purpose of sneering at religious people, it doesn't matter that God doesn't actually exist, the club attracts psychologically f'd-up people. Bullies, in a word, people who are powerfully reinforced by getting in what feels like good hits on Designated Targets, in the company of others doing the same and congratulating each other on it.
 * A comment on reddit (January 2015)


 * There was a conference one time on: "What are we going to do about the looming risk of AI disaster?" … And what came out of that conference was OpenAI, which was basically the worst possible way of doing anything. Like, "This is not a problem of, 'Oh no, what if secret elites get AI', it's that nobody knows how to build the thing." … So, like, "Let's open up everything! Let's accelerate everything!" It was like ChatGPT's blind version of throwing the ideals at a place where they were exactly the wrong ideals to solve the problem. … And that was it. That was me in 2015 going, "Oh. So this is what humanity will elect to do. We will not rise above. We will not have more grace, not even here, at the very end." So that is when I did my crying late at night, and then picked myself up and fought and fought and fought until I had run out all the avenues that I seem to have the capability to do.
 * Bankless #159: "We're All Gonna Die with Eliezer Yudkowsky" (2023-02-20)

Harry Potter and the Methods of Rationality (2010 - 2015)

 * Harry Potter and the Methods of Rationality (28 February 2010 - 14 March 2015), fan-fiction written under the pseudonym "Less Wrong"


 * You turned into a cat! A SMALL cat! You violated Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?
 * Harry Potter in Ch. 2


 * And someday when the descendants of humanity have spread from star to star, they won't tell the children about the history of Ancient Earth until they're old enough to bear it; and when they learn they'll weep to hear that such a thing as Death had ever once existed!
 * Ch. 45


 * Lies propagate, that’s what I’m saying. You’ve got to tell more lies to cover them up, lie about every fact that’s connected to the first lie. And if you kept on lying, and you kept on trying to cover it up, sooner or later you’d even have to start lying about the general laws of thought. Like, someone is selling you some kind of alternative medicine that doesn’t work, and any double-blind experimental study will confirm that it doesn’t work. So if someone wants to go on defending the lie, they’ve got to get you to disbelieve in the experimental method. Like, the experimental method is just for merely scientific kinds of medicine, not amazing alternative medicine like theirs. Or a good and virtuous person should believe as strongly as they can, no matter what the evidence says. Or truth doesn’t exist and there’s no such thing as objective reality. A lot of common wisdom like that isn’t just mistaken, it’s anti-epistemology, it’s systematically wrong. Every rule of rationality that tells you how to find the truth, there’s someone out there who needs you to believe the opposite. If you once tell a lie, the truth is ever after your enemy; and there’s a lot of people out there telling lies.
 * Harry Potter in Ch. 65


 * "Many boys and girls are heroes in their dreams," Dumbledore said quietly. He did not look at any of the other girls, only at her. "Fewer in the waking world. Many have stood their ground and faced the darkness when it comes for them. Fewer come for the darkness and force it to face them. It is a hard life, sometimes lonely, often short. I have told none to refuse that calling, but neither would I wish to increase their number."
 * Ch. 70


 * When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing.
 * Professor Quirrell in Ch. 73