Rationality

Eliezer Yudkowsky

Walking through all of that, from a dozen different angles, can sometimes convey a glimpse of the central rhythm.


You should not ignore something just because you can’t define it.


Which is to say: Adding detail can make a scenario SOUND MORE PLAUSIBLE, even though the event necessarily BECOMES LESS PROBABLE.


If so, then, hypothetically speaking, we might find futurists spinning unconscionably plausible and detailed future histories, or find people swallowing huge packages of unsupported claims bundled with a few strong-sounding assertions at the center.


They would need to notice the conjunction of two entire details, and be shocked by the audacity of anyone asking them to endorse such an insanely complicated prediction. And they would need to penalize the probability substantially—a factor of four, at least, according to the experimental details.
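
To make the arithmetic concrete, here is a minimal Python sketch of the conjunction rule; the probabilities are invented for illustration, not taken from the experiments cited above:

```python
# A minimal sketch of the conjunction rule: however plausible the added
# detail feels, the joint event cannot be more probable than either part.
# All probabilities below are illustrative placeholders.

p_arms_race = 0.3          # P(A): an arms race occurs
p_china_refuses = 0.2      # P(B | A): China in particular refuses the agreement

p_joint = p_arms_race * p_china_refuses   # P(A and B) = P(A) * P(B | A)

assert p_joint <= p_arms_race             # the detailed scenario is strictly less probable
print(f"P(arms race)           = {p_arms_race:.2f}")
print(f"P(arms race AND China) = {p_joint:.2f}")
print(f"penalty factor         = {p_arms_race / p_joint:.1f}x")
```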


You have to disentangle the details. You have to hold up every one independently, and ask, “How do we know this detail?”


Someone sketches out a picture of humanity’s descent into nanotechnological warfare, where China refuses to abide by an international control agreement, followed by an arms race . . . Wait a minute—how do you know it will be China?


More generally, this phenomenon is known as the “planning fallacy.” The planning fallacy is that people think they can plan, ha ha.


Asking subjects for their predictions based on realistic “best guess” scenarios; and
Asking subjects for their hoped-for “best case” scenarios . . .
. . . produced indistinguishable results.


Reality, it turns out, usually delivers results somewhat worse than the “worst case.”


Conversely, if you say something blatantly obvious and the other person doesn’t see it, they’re the idiot, or they’re being deliberately obstinate to annoy you.


A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.


Mice can see, but they can’t understand seeing. You can understand seeing, and because of that, you can do things that mice cannot do. Take a moment to marvel at this, for it is indeed marvelous.


It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.


It is even better to ask: what experience must not happen to you? Do you believe that élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief?


As Robin Hanson describes it, the ability to have potentially divisive conversations is a limited resource. If you can think of ways to pull the rope sideways, you are justified in expending your limited resources on relatively less common issues where marginal discussion offers relatively higher marginal payoffs.


At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of Artificial Intelligence.


I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:


I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them.


What are you to do? You certainly can’t use “probabilities.” We all know from school that “probabilities” are little numbers that appear next to a word problem, and there aren’t any little numbers here. Worse, you feel uncertain. You don’t remember feeling uncertain while you were manipulating the little numbers in word problems. College classes teaching math are nice clean places, therefore math itself can’t apply to life situations that aren’t nice and clean. You wouldn’t want to inappropriately transfer thinking skills from one context to another. Clearly, this is not a matter for “probabilities.”


What is evidence? It is an event entangled, by links of cause and effect, with whatever you want to know about.


This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.


If your retina ended up in the same state regardless of what light entered it, you would be blind. Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally—no matter what you see, no matter what you think. Your brain is supposed to end up in the same state regardless. Hence the phrase, “blind faith.” If what you believe doesn’t depend on what you see, you’ve been blinded as effectively as by poking out your eyeballs.
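
A minimal Bayes’s-rule sketch of the same point, with made-up numbers: an observation moves your belief only if it is more likely under one hypothesis than under the other; if your brain would end up in the same state regardless, the posterior equals the prior:

```python
# If P(E|H) == P(E|~H), the evidence carries no information and the
# posterior equals the prior -- the probabilistic form of blind faith.
# All numbers are illustrative.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) by Bayes's rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
print(posterior(prior, 0.9, 0.1))  # entangled with reality: belief moves to 0.9
print(posterior(prior, 0.5, 0.5))  # same state regardless: belief stays at 0.5
```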


If your eyes and brain work correctly, your beliefs will end up entangled with the facts. Rational thought produces beliefs which are themselves evidence.


Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.


Like a court system, science as a social process is made up of fallible humans. We want a protected pool of beliefs that are especially reliable. And we want social rules that encourage the generation of such knowledge. So we impose special, strong, additional standards before we canonize rational knowledge as “scientific knowledge,” adding it to the protected belief pool.


I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.


Hunches can be mysterious to the huncher, but they can’t violate the laws of physics.


“Witch,” itself, is a label for some extraordinary assertions—just because we all know what it means doesn’t mean the concept is simple.


The real sneakiness was concealed in the word “it” of “A witch did it.” A witch did what?


In the school system, it’s all about verbal behavior, whether written on paper or spoken aloud. Verbal behavior gets you a gold star or a failing grade. Part of unlearning this bad habit is becoming consciously aware of the difference between an explanation and a password.


I notice that I am confused.


I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they’ll suddenly decide it’s “pseudoscience.”


It’s not that they think they have a theory of intelligence which lets them calculate a theoretical upper bound on the power of an optimization process. Rather, they associate strongly superhuman AI to the literary genre of apocalyptic literature; whereas an AI running a small corporation associates to the literary genre of Wired magazine.


They aren’t speaking from within a model of cognition. They don’t realize they need a model. They don’t realize that science is about models.


Is there any idea in science that you are proud of believing, though you do not use the belief professionally?


This was an earlier age of science. For a long time, no one realized there was a problem. Fake explanations don’t feel fake. That’s what makes them dangerous.


Speaking of “hindsight bias” is just the nontechnical way of saying that humans do not rigorously separate forward and backward messages, allowing forward messages to be contaminated by backward ones.
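
A toy illustration of that forward/backward distinction, assuming a simple two-node cause-and-effect model; the probabilities are invented, and the code framing is mine rather than the author’s:

```python
# A toy model (cause -> outcome). The forward message is the prediction
# available before the outcome is known; the backward message is the
# inference available only afterward. All numbers are made up.

p_cause = 0.3                 # prior P(cause)
p_out_given_cause = 0.8       # P(outcome | cause)
p_out_given_no_cause = 0.1    # P(outcome | no cause)

# Forward: what you could honestly have predicted in advance.
p_outcome = p_out_given_cause * p_cause + p_out_given_no_cause * (1 - p_cause)

# Backward: what you can infer once the outcome is already known.
p_cause_given_outcome = p_out_given_cause * p_cause / p_outcome

print(f"forward  P(outcome)       = {p_outcome:.2f}")            # ~0.31: genuinely uncertain
print(f"backward P(cause|outcome) = {p_cause_given_outcome:.2f}")  # ~0.77
# Hindsight bias: quoting a backward number as what you "would have
# predicted" forward -- contaminating one message with the other.
```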


Jonathan Wallace suggested that “God!” functions as a semantic stopsign —that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point.


Or suppose that someone says “Mexican-Americans are plotting to remove all the oxygen in Earth’s atmosphere.” You’d probably ask, “Why would they do that? Don’t Mexican-Americans have to breathe too? Do Mexican-Americans even function as a unified conspiracy?” If you don’t ask these obvious next questions when someone says, “Corporations are plotting to remove Earth’s oxygen,” then “Corporations!” functions for you as a semantic stopsign.


the junk food of curiosity.


Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic”—as in, “X magically does Y”—to remind ourselves that here was an unsolved problem, a gap in our understanding.


I have been writing for quite some time now on the notion that the strength of a hypothesis is what it can’t explain, not what it can—if you are equally good at explaining any outcome, you have zero knowledge. So to spot an explanation that isn’t helpful, it’s not enough to think of what it does explain very well—you also have to search for results it couldn’t explain, and this is the true strength of the theory.
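
One way to make “zero knowledge” concrete is to score hypotheses by the log probability they assigned to what actually happened; a sketch with invented numbers:

```python
import math

# A hypothesis that "explains" every outcome equally well (uniform over
# four outcomes) carries zero knowledge; a sharp hypothesis is
# distinguishable precisely because there are outcomes it nearly rules out.
# The outcomes and probabilities are illustrative.

outcomes = ["A", "B", "C", "D"]
vague = {o: 0.25 for o in outcomes}                    # equally good at explaining anything
sharp = {"A": 0.85, "B": 0.05, "C": 0.05, "D": 0.05}   # sticks its neck out for A

observed = "A"
for name, h in [("vague", vague), ("sharp", sharp)]:
    print(name, math.log2(h[observed]))  # vague: -2.0 bits; sharp: ~-0.23 bits

# Had "B" happened instead, the sharp hypothesis would lose heavily
# (log2(0.05) ~ -4.3 bits). That exposure to refutation is its strength.
```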


The way Traditional Rationality is designed, it would have been acceptable for me to spend thirty years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera. This is enough to let the Ratchet of Science click forward, but it’s a little harsh on the people who waste thirty years of their lives.


In our ancestral environment, there were no movies; what you saw with your own eyes was true. Is it any wonder that fictions we see in lifelike moving pictures have too great an impact on us? Conversely, things that really happened, we encounter as ink on paper; they happened, but we never saw them happen. We don’t remember them happening to us.


The inverse error is to treat history as mere story, process it with the same part of your mind that handles the novels you read.


I also realized that if I had actually experienced the past—if I had lived through past scientific revolutions myself, rather than reading about them in history books—I probably would not have made the same mistake again. I would not have come up with another mysterious answer; the first thousand lessons would have hammered home the moral.


Why should I remember the Wright Brothers’ first flight? I was not there. But as a rationalist, could I dare to not remember, when the event actually happened? Is there so much difference between seeing an event through your eyes—which is actually a causal chain involving reflected photons, not a direct connection—and seeing an event through a history book? Photons and history books both descend by causal chains from the event itself.


I had to overcome the false amnesia of being born at a particular time.


Do you know how your knees work? Do you know how your shoes were made? Do you know why your computer monitor glows? Do you know why water is wet? The world around you is full of puzzles.


That which you cannot make yourself, you cannot remake when the situation calls for it.


When you contain the source of a thought, that thought can change along with you as you acquire new knowledge and new skills. When you contain the source of a thought, it becomes truly a part of you and grows along with you.


Strive to make yourself the source of every thought worth thinking.


Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well.


If confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally.


“I taught you everything you know, but I haven’t taught you everything I know,” I say.


Mark frowns, puzzled. “That makes no sense. It doesn’t resolve the essential chicken-and-egg dilemma.”
“Sure it does. The bucket method works whether or not you believe in it.”
“That’s absurd!” sputters Mark. “I don’t believe in magic that works whether or not you believe in it!”
“I said that too,” chimes in Autrey. “Apparently I was wrong.”


“What’s this so-called ‘reality’ business? I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.”


I award you 0.12 units of fitness.


Mark smiles condescendingly. “Believe me, Autrey, you’re not the first person to think of such a simple question. There’s no point in presenting it to us as a triumphant refutation.”


What should I believe? As it turns out, that question has a right answer.


When something we care about is threatened—our world-view, our in-group, our social standing, or anything else—our thoughts and perceptions rally to their defense.


You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases. You just are cognitive biases.


Beware when you find yourself arguing that a policy is defensible rather than optimal; or that it has some benefit compared to the null action, rather than the best benefit of any action.


It’s amazing how many Noble Liars and their ilk are eager to embrace ethical violations—with all due bewailing of their agonies of conscience—when they haven’t spent even five minutes by the clock looking for an alternative. There are some mental searches that we secretly wish would fail; and when the prospect of success is uncomfortable, people take the earliest possible excuse to give up.


We can use words to describe numbers that small, but not feelings—a feeling that small doesn’t exist . . .


It’s a difference of life-gestalt that isn’t easy to describe in words at all, let alone quickly.


Real tough-mindedness is saying, “Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.”


Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.
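
For what such a cost-benefit calculation looks like in its barest form, here is a sketch; every figure below is hypothetical, chosen only to show the shape of the arithmetic:

```python
# A bare-bones expected-value comparison of the kind the passage gestures at.
# All numbers are hypothetical placeholders, not real policy estimates.

value_of_statistical_life = 10_000_000   # dollars; a placeholder figure
p_death_per_year = 1e-7                  # added annual risk per person from keeping shops open
population = 1_000_000
annual_benefit = 50_000_000              # economic value of keeping the shops open

expected_cost = value_of_statistical_life * p_death_per_year * population
print(f"expected cost: ${expected_cost:,.0f}")   # $1,000,000
print(f"benefit:       ${annual_benefit:,.0f}")  # $50,000,000
print("keep shops open" if annual_benefit > expected_cost else "close the shops")
```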


To suggest otherwise is to shoulder a burden of improbability.


Someone once said, “Not all conservatives are stupid, but most stupid people are conservatives.” If you cannot place yourself in a state of mind where this statement, true or false, seems completely irrelevant as a critique of conservatism, you are not ready to think rationally about politics.


In Hansonian terms: Your instinctive willingness to believe something will change along with your willingness to affiliate with people who are known for believing it—quite apart from whether the belief is actually true. Some people may be reluctant to believe that God does not exist, not because there is evidence that God does exist, but rather because they are reluctant to affiliate with Richard Dawkins or those darned “strident” atheists who go around publicly saying “God does not exist.”


But if you can’t say “Oops” and give up when it looks like something isn’t working, you have no choice but to keep shooting yourself in the foot. You have to keep reloading the shotgun and you have to keep pulling the trigger. You know people like this. And somewhere, someplace in your life you’d rather not think about, you are people like this.


When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.
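
A small simulation of that claim: suppose your data source silently drops half of all tails before you see them; then gathering more flips only tightens your estimate around the wrong answer. The selection mechanism here is invented for illustration:

```python
import random

# Estimate a fair coin's bias through a biased channel: half of all tails
# are silently discarded before they reach us. More data does not fix the
# error; it only makes the wrong estimate more precise.

random.seed(0)
TRUE_P_HEADS = 0.5

def biased_sample() -> bool:
    """Draw one observed flip; tails survive the channel only half the time."""
    while True:
        flip = random.random() < TRUE_P_HEADS   # True = heads
        if flip or random.random() < 0.5:       # drop half the tails
            return flip

for n in [10, 1_000, 100_000]:
    draws = [biased_sample() for _ in range(n)]
    print(f"n={n:>6}: estimated P(heads) = {sum(draws) / n:.3f}")
# Converges to 2/3, not the true 0.5: more data, more confidently wrong.
```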


People seem to make a leap from “This is ‘bounded’” to “The bound must be a reasonable-looking quantity on the scale I’m used to.” The power output of a supernova is “bounded,” but I wouldn’t advise trying to shield yourself from one with a flame-retardant Nomex jumpsuit.
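
The supernova example in rough numbers, using standard order-of-magnitude figures (about 10^44 joules of light and kinetic energy per core-collapse supernova, and about 3.8 × 10^26 watts of solar output):

```python
# Rough order-of-magnitude arithmetic: "bounded" is not "small."
# Figures are standard textbook values, rounded.

supernova_energy_j = 1e44     # ~light + kinetic energy of one supernova
sun_power_w = 3.8e26          # solar luminosity
seconds_per_year = 3.15e7

years_of_sunlight = supernova_energy_j / (sun_power_w * seconds_per_year)
print(f"one supernova ~ {years_of_sunlight:.1e} years of total solar output")
# ~8e9 years -- on the order of the Sun's entire lifetime, released in one
# event. Bounded, yes; survivable in a Nomex jumpsuit, no.
```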