Intuition Pumps And Other Tools for Thinking

Daniel C. Dennett

You can’t do much carpentry with your bare hands and you can’t do much thinking with your bare brain. —BO DAHLBOM


Like all artisans, a blacksmith needs tools, but—according to an old (indeed almost extinct) observation—blacksmiths are unique in that they make their own tools.


Labels. Sometimes just creating a vivid name for something helps you keep track of it while you turn it around in your mind trying to understand it. Among the most useful labels, as we shall see, are warning labels or alarms, which alert us to likely sources of error.


This self-conscious wariness with which we should approach any intuition pump is itself an important tool for thinking, the philosophers’ favorite tactic: “going meta”— thinking about thinking, talking about talking, reasoning about reasoning.


This whole book is, of course, an example of going meta: exploring how to think carefully about methods of thinking carefully (about methods of thinking carefully, etc.).


None of the tools on Doug’s list are his inventions, but he has contributed some fine specimens to my kit, such as jootsing and sphexishness.


I have always figured that if I can’t explain something I’m doing to a group of bright undergraduates, I don’t really understand it myself, and that challenge has shaped everything I have written.


Graduate students are often too eager to prove to each other and to themselves that they are savvy operators, wielding the jargon of their trade with deft assurance, baffling outsiders (that’s how they assure themselves that what they are doing requires expertise), and showing off their ability to pick their way through the most tortuous (and torturous) technical arguments without getting lost. Philosophy written for one’s advanced graduate students and fellow experts is typically all but unreadable—and hence largely unread.


We ask our graduate students to prove they can do it in their dissertations, and some never outgrow the habit, unfortunately.


If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of results. —RICHARD FEYNMAN


The smartest or luckiest of the scientists sometimes manage to avoid the pitfalls quite adroitly (perhaps they are “natural born philosophers”—or are as smart as they think they are), but they are the rare exceptions.


Sometimes you don’t just want to risk making mistakes; you actually want to make them—if only to give you something clear and detailed to fix.


Philosophy—in every field of inquiry—is what you have to do until you figure out what questions you should have been asking in the first place.


Gore Vidal once said, “It is not enough to succeed. Others must fail.”


Evolution is one of the central themes of this book, as of all my books, for the simple reason that it is the central, enabling process not only of life but also of knowledge and learning and understanding.


The chief trick to making good mistakes is not to hide them—especially not from yourself. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are.


We have all heard the forlorn refrain “Well, it seemed like a good idea at the time!” This phrase has come to stand for the rueful reflection of an idiot, a sign of stupidity, but in fact we should appreciate it as a pillar of wisdom. Any being, any agent, who can truly say, “Well, it seemed like a good idea at the time!” is standing on the threshold of brilliance.


But that is not enough: you should actively seek out opportunities to make grand mistakes, just so you can then recover from them.


The good thing about long division was that it always worked, even if you were maximally stupid in making your first choice, in which case it just took a little longer.
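
(A minimal sketch of the point, my own illustration rather than anything from the book: in the Python below, each quotient digit starts from the maximally stupid first guess of zero and is simply bumped up until it fits. The procedure still terminates with the right answer; the bad guesses only cost extra steps.)

```python
def long_division(dividend: int, divisor: int):
    """Schoolbook long division, one quotient digit at a time."""
    assert divisor > 0
    quotient_digits = []
    remainder = 0
    for ch in str(dividend):
        remainder = remainder * 10 + int(ch)   # bring down the next digit
        digit = 0                              # the maximally stupid first choice
        while (digit + 1) * divisor <= remainder:
            digit += 1                         # correct the guess, one step at a time
        quotient_digits.append(str(digit))
        remainder -= digit * divisor
    return int("".join(quotient_digits)), remainder

print(long_division(9876, 42))   # (235, 6), since 42 * 235 + 6 == 9876
```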


Here is what you do: You start by telling the audience you are going to perform a trick, and without telling them what trick you are doing, you go for the one-in-a-thousand effect. It almost never works, of course, so you glide seamlessly into a second try—for an effect that works about one time in a hundred, perhaps—and when it too fails (as it almost always will), you slide gracefully into effect number 3, which works only about one time in ten, so you’d better be ready with effect number 4, which works half the time (let’s say).


“Impossible! How on earth could you have known which was my card?” Aha! You didn’t know, but you had a cute way of taking a hopeful stab in the dark that paid off. By hiding all the “mistake” cases from view—the trials that didn’t pan out—you create a “miracle.”
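
(To see why this pyramid pays off, it helps to run the numbers. The simulation below is my own illustration, assuming the success rates mentioned in the passage: roughly 1 in 1,000, 1 in 100, 1 in 10, and 1 in 2 for the four effects.)

```python
import random
from collections import Counter

ODDS = [1/1000, 1/100, 1/10, 1/2]   # success rates assumed from the passage

def perform_trick(rng):
    """Return which effect (1-4) finally works, or 0 if even the fallback fails."""
    for i, p in enumerate(ODDS, start=1):
        if rng.random() < p:
            return i
    return 0

rng = random.Random(1)
tally = Counter(perform_trick(rng) for _ in range(100_000))
for level in (1, 2, 3, 4, 0):
    label = f"effect {level}" if level else "no effect landed"
    print(f"{label}: {tally[level] / 100_000:.2%}")
# The rare "one-in-a-thousand" miracle does happen now and then, and when it
# does, the audience never sees the safety net beneath it.
```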


I am amazed at how many really smart people don’t understand that you can make big mistakes in public and emerge none the worse for it. I know distinguished researchers who will go to preposterous lengths to avoid having to acknowledge that they were wrong about something. They have never noticed, apparently, that the earth does not swallow people up when they say, “Oops, you’re right. I guess I made a mistake.” Actually, people love it when somebody admits to making a mistake. All kinds of people love pointing out mistakes.


Of course, in general, people do not enjoy correcting the stupid mistakes of others. You have to have something worth correcting, something original to be right or wrong about, something that requires constructing the sort of pyramid of risky thinking we saw in the card magician’s tricks.


the delicious phrase “by parody of reasoning,” a handy name, I think, for misbegotten reductio ad absurdum arguments, which are all too common in the rough-and-tumble of scientific and philosophical controversy.


“I have to admit,” I said, “that the views you are criticizing are simply preposterous,” and Noam grinned affirmatively, “but then what I want to know is why you’re wasting your time and ours criticizing such junk.” It was a pretty effective pail of cold water.


Just how charitable are you supposed to be when criticizing the views of an opponent? If there are obvious contradictions in the opponent’s case, then of course you should point them out, forcefully. If there are somewhat hidden contradictions, you should carefully expose them to view—and then dump on them.


How to compose a successful critical commentary:

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.


Sturgeon’s Law is usually put a little less decorously: Ninety percent of everything is crap.


Now, in order not to waste your time and try our patience, make sure you concentrate on the best stuff you can find, the flagship examples extolled by the leaders of the field, the prize-winning entries, not the dregs.


One of the least impressive attempts to apply Occam’s Razor to a gnarly problem


parsimonious,


The molecular biologist Sidney Brenner recently invented a delicious play on Occam’s Razor, introducing the new term Occam’s Broom, to describe the process in which inconvenient facts are whisked under the rug by intellectually dishonest champions of one theory or another.


Conspiracy theorists are masters of Occam’s Broom, and an instructive exercise on the Internet is to look up a new conspiracy theory, to see if you (a nonexpert on the topic) can find the flaws, before looking elsewhere on the web for the expert rebuttals.


But this gracious disposition to assume more understanding than is apt to be present in one’s distinguished audience has an unfortunate by-product: experts often talk past each other.


But there is an indirect and quite effective cure: have all experts present their views to a small audience of curious nonexperts (here at Tufts I have the advantage of bright undergraduates) while the other experts listen in from the sidelines. They don’t have to eavesdrop; this isn’t a devious suggestion. On the contrary, everybody can and should be fully informed that the point of the exercise is to make it comfortable for participants to speak in terms that everybody will understand.


jootsing, which stands for “jumping out of the system.”


It helps to know the tradition if you want to subvert it. That’s why so few dabblers or novices succeed in coming up with anything truly creative.


Advising somebody to make progress by jootsing is rather like advising an investor to buy low and sell high. Yes, of course, that’s the idea, but how do you manage to do it?


Sometimes there are clues. Several of the great instances of jootsing have involved abandoning some well-regarded thing that turned out not to exist after all. Phlogiston was supposed to be an element in fire, and caloric was the invisible, self-repellent fluid or gas that was supposed to be the chief ingredient in heat, but these were dropped, and so was the ether as a medium in which light traveled the way sound travels through air and water.


I have also argued that if you think it is simply obvious that free will and determinism are incompatible, you’re making a big mistake.


But some ratherings are little more than sleight of hand, due to the fact that the word “rather” implies—without argument—that there is an important incompatibility between the claims flanking it.


a fine example of rathering by Gould in the course of his account of punctuated equilibrium:

Change does not usually occur by imperceptibly gradual alteration of entire species but rather [my italics] by isolation of small populations and their geologically instantaneous transformation into new species. [1992b, p. 12]

This passage invites us to believe that evolutionary change could not be both “geologically instantaneous” and “imperceptibly gradual” at the same time.


Religion is not the opiate of the masses, as Marx said; it is rather a deep and consoling sign of humanity’s recognition of the inevitability of death.


Remember: not all “rather”s are ratherings; some are legitimate.


A variation on rathering used frequently by Gould may be called piling on:

We talk about the “march from monad to man” (old-style language again) as though evolution followed continuous pathways of progress along unbroken lineages. Nothing could be further from reality.


But, to use Gould’s own phrase, “Nothing could be further from reality.”


look for “surely” in the document, and check each occurrence. Not always, not even most of the time, but often the word “surely” is as good as a blinking light locating a weak point in the argument, a warning label about a likely boom crutch.


Why? Because it marks the very edge of what the author is actually sure about and hopes readers will also be sure about.


Here is a good habit to develop: Whenever you see a rhetorical question, try— silently, to yourself—to give it an unobvious answer. If you find a good one, surprise your interlocutor by answering the question.


A deepity is a proposition that seems both important and true—and profound—but that achieves this effect by being ambiguous. On one reading it is manifestly false, but it would be earth-shaking if it were true; on the other reading it is true but trivial.


Here is an example. (Better sit down: this is heavy stuff.)

Love is just a word.


People should care. What could be more important, in the end, than these questions: What in the world are we, and what should we do about it? So watch your step. There is treacherous footing ahead, and the maps are unreliable.


If I try to inform you that salmon in the wild don’t wear hearing aids, you will tell me that this is not news to you, but when did you learn it? You weren’t born knowing it, it was not part of any curriculum at school, and it is extremely unlikely that you ever framed a sentence in your mind to this effect.


In other words, you assume the computer is a good chess player, or at least not an idiotic, self-destructive chess player. You treat it, in other words, as if it were a human being with a mind.


At the dawn of the computer age, Alan Turing, who deserves credit as the inventor of the computer if anybody does, saw this prospect. He could start with mindless bits of mechanism, without a shred of mentality in them, and organize them into more competent mechanisms, which in turn could be organized into still more competent mechanisms, and so forth without apparent limit.


What we might call the sorta operator is, in cognitive science, the parallel of Darwin’s gradualism in evolutionary processes (more on this in part VI). Before there were bacteria, there were sorta bacteria, and before there were mammals, there were sorta mammals, and before there were dogs, there were sorta dogs, and so on.


In his excellent book on Indian street magic, Net of Magic: Wonders and Deceptions in India, Lee Siegel (1991) writes,

“I’m writing a book on magic,” I explain, and I’m asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and supernatural powers. “No,” I answer: “Conjuring tricks, not real magic.” Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.


any time we can make a computer do something that has seemed miraculous, we have a proof that it can be done without wonder tissue.


It seems to be a step in the right direction, but until the details are provided about how it works, and how it evolved in the first place, declaring that there is such a language of thought is just renaming the problem without solving it.


Computers are like that, only instead of having a dozen different things they can be made to do, they can do kazillions of different things. And instead of having to plug in a different attachment for each task, you open a different program—a very long string of zeroes and ones—which changes all the necessary internal switches to just the right settings to accomplish the job.


Consider a chess-playing program written in Common Lisp (a high-level computer language) running on Windows 7 (an operating system) running on a PC. This is a PC pretending to be a Windows machine pretending to be a Lisp machine pretending to be a chess-playing machine.
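
(The stack of pretenses is easy to reproduce in miniature. The toy below is my own sketch, not Dennett’s: the hardware runs an operating system, which runs the Python interpreter, which here pretends to be a tiny stack machine, which in turn pretends to be a calculator. Each layer is just the layer beneath it, suitably organized.)

```python
def run(program):
    """A tiny virtual stack machine: one more layer of pretending on top of Python."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "print":
            print(stack[-1])
    return stack

# The "calculator machine" is nothing but a program for the stack machine.
calculator = [("push", 6), ("push", 7), ("mul",), ("print",)]
run(calculator)   # prints 42
```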


(1) Substrate neutrality: The procedure for long division works equally well with pencil or pen, paper or parchment, neon lights or skywriting, using any symbol system you like. The power of the procedure is due to its logical structure, not the causal powers of the materials used in the instantiation, just so long as those causal powers permit the prescribed steps to be followed exactly.


It is often wise to study a dead-simple example in some detail, to get a secure grip on our concepts before tackling a mind-buster. (In the field of artificial intelligence, these are nicely known as toy problems. First you solve the toy problem before tackling the gnarly great real-world problem.) So this is a story—made up, for simplicity’s sake, but otherwise realistic—about how human elevator operators got replaced by computer chips.


Bear in mind, however, that nobody in cognitive science has developed a working model of the language of thought either, or has even tried very hard. It’s a very, very difficult problem. I would like to encourage an open mind on this score.


(Look how many times I’ve used the sorta operator in this paragraph, so that I can use the intentional stance when giving you the specs of the two-bitser. Try to rewrite the paragraph without using the intentional stance and you will appreciate how efficient it is, and how well-nigh indispensable the sorta operator can be for such purposes.)


Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine—physically impossible.


It cannot have escaped philosophers’ attention that our fellow academics in other fields—especially in the sciences—often have difficulty suppressing their incredulous amusement when such topics as Twin Earth and Swampman are posed for apparently serious consideration. Are the scientists just being philistines, betraying their tin ears for the subtleties of philosophical investigation, or have the philosophers lost their grip on reality?


Suppose you discovered a thing that attracted iron but was not M-aligned (like standard magnets). Would you call it a magnet? Or: Suppose you discovered a thing that was M-aligned but did not attract iron. Would you call it a magnet? The physicists would reply that if they were confronted with either of these imaginary objects, they would have much more important things to worry about than what to call them.


What is of interest, however, is the real covariance of “structural” and “behavioral” factors. If they find violations of the regularities, they adjust their science accordingly, letting the terms fall where they may.


“No,” says the philosopher. “It’s not a false dichotomy! For the sake of argument we’re suspending the laws of physics. Didn’t Galileo do the same when he banished friction from his thought experiment?” Yes, but a general rule of thumb emerges from the comparison: the utility of a thought experiment is inversely proportional to the size of its departures from reality.


Experience teaches, however, that there is no such thing as a thought experiment so clearly presented that no philosopher can misinterpret it.


The idea of natural selection is not very complex, but it is so powerful that some people cannot bear to contemplate it, and they desperately avert their attention as if it were a horrible dose of foul-tasting medicine.


For centuries “the arts and humanities” have been considered not just separate from the sciences but somehow protected from the invasive examinations science engages in, but this traditional isolation is not the best way to preserve what we love. Trying to hide our treasures behind a veil of mystery prevents us from finding a proper anchoring for them in the physical world. It is a common-enough mistake, especially in philosophy.


But this policy typically burdens the defenders with a brittle, extravagant (implausible, indefensible) set of dogmas that cannot be defended rationally—and hence must be defended, in the end, with desperate clawing and shouting. In philosophy, this strategic choice often shows up as absolutism of one kind or another: the sanctity of (human) life is infinite; at the core of great art lies divine and inexplicable genius; consciousness is a problem too hard for us mere mortals to understand;


and—one of my favorite targets—what I call hysterical realism: there are always deeper facts that settle the puzzle cases of meaning. These facts are real, really real, even if we are systematically unable to discover them. This is a tempting idea, in part because it appeals to our sense of proper human modesty.


Genes, like words but unlike sentences, are used over and over again in different contexts. A better analogy for a gene than either a word or a sentence is a toolbox subroutine in a computer. . . .


And just as most of the Library of Babel is gibberish, most of the places in Design Space are filled with junk, things that can’t do anything well at all. If you are like me, you can imagine just three dimensions at a time, but the more you play around with the idea in your imagination, the easier it gets to think of the familiar three dimensions as standing in for many. (This is a thinking tool that improves with practice.)


Every now and then a novelty arises—by mutation or experimentation or accident—that is an improvement, and it gets copied and copied and copied. Failed experiments go extinct. Again, publish or perish.


This move is contentious among biologists, for reasons I think I understand, and deplore.


Darwin has offered us an account of the crudest, most rudimentary, stupidest imaginable lifting process—the wedge or inclined plane of natural selection. By taking tiny—the tiniest possible—steps, this process can gradually, over eons, traverse these huge distances.


At no point would anything miraculous—from on high—be needed. Each step has been accomplished by brute, mechanical, algorithmic climbing, from the base already built by the efforts of earlier climbing.


The eukaryotic revolution opened up huge regions of Design Space, but it did not happen in order to make all these designs accessible. Cranes must be “paid for” locally, in terms of the immediate benefits they convey to those that have the design innovations. But once established they can have profound further effects.


(computers were not invented in order to make word-processing and the Internet possible, but once the space of possible computer applications was rendered accessible, design processes went into overdrive creating all the “species” we now rely on every day.)


Matt Ridley’s book The Red Queen: Sex and the Evolution of Human Nature (1993).


Why do we send our children to school, and why do we emphasize “concepts” over “rote learning”? Because we think that the best route to competence, in any sphere of activity, is comprehension. Don’t settle for being a mindless drudge! Understand the principles of whatever we’re doing so we can do it better!


There are reasons for the structures and shapes of the termite castle, but they are not represented by any of the termites. There is no Architect Termite who planned the structure, nor do any individual termites have the slightest clue about why they build the way they do. Competence without comprehension.


It should be clear that the soundness of this explanation (which may not yet be established) does not depend on any hypothesis suggesting that locusts understand arithmetic, let alone prime numbers. Nor does it depend on the process of natural selection understanding prime numbers. The mindless, uncomprehending process of natural selection can exploit this important property of some numbers without having to understand it at all.
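
(The hypothesis Dennett is alluding to, for the prime-numbered life cycles of periodical “locusts,” is that a 13- or 17-year emergence cycle coincides as rarely as arithmetic allows with the shorter cycles of predators and competitors. The few lines below are my own illustration of that assumed explanation.)

```python
from math import lcm

predator_cycles = [2, 3, 4, 5, 6]   # assumed shorter cycles, in years

for prey_cycle in (12, 13, 16, 17):
    # Years between coincidences of prey emergence and each predator cycle.
    gaps = [lcm(prey_cycle, p) for p in predator_cycles]
    print(prey_cycle, gaps)
# Prime cycles (13, 17) collide with each shorter cycle only once every
# 13*p or 17*p years; composite cycles (12, 16) collide far more often.
```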


In each instance of button-pressing, the scientists understood exactly how each step in the computing and transmitting process worked, but they couldn’t explain the generalization. You do need a semantic interpretation to explain why the regularity exists. In other words, the “macro-causal” level at which the explanation is expressed does not “reduce” to the “micro-causal” level.


Philosophers, however, tend to be tidy, fussy users of words. Ever since Socrates persisted in demanding to be told precisely what marked the defining features of virtue, knowledge, courage, and the like, philosophers have been tempted by the idea of stopping a threatened infinite regress like this one by identifying something that is—must be—the regress-stopper:


This does not particularly worry biologists; they have learned not to fret over definitions or essences, since the processes that create all the intermediate cases are well understood.


A curious fact about every individual organism—you, or me, or your dog, or your geranium, for instance—is that it is a potential founder of a new species, the first of a long line of whatchamacallits, but it will be hundreds or thousands of generations before whatchamacallits stand out from the crowd enough to be recognized as a species, so the coronation would have to occur long after you, or I, or your dog, or your geranium had returned to dust.


The secret ingredient of improvement everywhere in life is always the same: practice, practice, practice.


It may seem that I am begging the question in favor of a computational, AI approach by describing the work done by Kasparov’s brain in this way, but the work has to be done somehow, and no other way of getting the work done has ever been articulated. It won’t do to say that Kasparov uses “insight” or “intuition,” since that just means that Kasparov himself has no privileged access, no insight, into how the good results come to him.


In his book Le Ton Beau de Marot, Doug Hofstadter (1997) draws attention to the role of what he calls spontaneous intrusions into a creative process. In the real world, almost everything that happens leaves a wake, makes shadows, has an aroma, makes noise, and this provides a bounty of opportunities for spontaneous intrusions. It is also precisely what is in short supply in a virtual world.


The exploitation of accidents is the key to creativity, whether what is being made is a new genome, a new behavior, or a new melody.


“What good do you think religions provide? They must be good for something, since apparently every human culture has religion in some form or other.” Well, every human culture has the common cold too. What is it good for? It’s good for itself.


I am not confident that I have succeeded in conceiving of something until I have manipulated the relevant ideas for some time, testing out implications in my mind, doing exercises, in effect, until I get fluent in handling the tools involved.


Many cognitive scientists have made the charitable assumption that philosophers must know what they are talking about when they use this special term, and have added the term to their working vocabulary, but this is a tactical mistake. Controversy still rages on what qualia are and aren’t, quite independently from any empirical issues.


(When Europe switched to the euro, people who were used to conceiving of prices in terms of francs and marks and lire and the like went through an awkward period when they could no longer rely on “translations” into their home-grown versions of “real money.” See Dehaene and Marques, 2002, for a pioneering exploration of this phenomenon.)


There are, of course, complications that I will not dwell on, since I want to use this particular bit of imagination-stretching cognitive neuroscience to open our minds to yet another possibility, not yet found but imaginable.


Until one makes decisions about such questions of definition, the term is not just vague or fuzzy; it is hopelessly ambiguous, equivocating between two (or more) fundamentally different ideas.


You also can learn a lot about the difficulties of interdisciplinary communication, with very confident people furiously talking past each other, or participating in academic tag team wrestling of the highest caliber.


Could I be sure that I wasn’t just falling for his rhetoric? I hoped there weren’t other physicists who would want to drag me back through the technicalities, showing me that I had been taken in by this authoritative dismissal. I liked his conclusion so much I didn’t have any stomach for the details. Same copout.


Sherlock Holmes, as described by Arthur Conan Doyle in the Sherlock Holmes mystery stories, has many properties, but where Conan Doyle was silent, there is no fact of the matter. We can extrapolate a bit: The author never mentions Sherlock having a third nostril, so we are entitled to assume that he didn’t (Lewis, 1978).


What then is a center of narrative gravity? It is also a theorist’s fiction, posited in order to unify and make sense of an otherwise bafflingly complex collection of actions, utterances, fidgets, complaints, promises, and so forth, that make up a person. It is the organizer of the personal level of explanation. Your hand didn’t sign the contract; you did. Your mouth didn’t tell the lie; you did. Your brain doesn’t remember Paris; you do. You are the “owner of record” of the living body we recognize as you.


(As we say, it’s your body to do with what you like.) In the same way that we can simplify all the gravitational attractions between all the parts of the world and an obelisk standing on the ground by boiling it down to two points, the center of the earth and the center of gravity of the obelisk, we can simplify all the interactions—the handshakes, the spoken words, the ink scrawls, and much more—between two selves, the seller and the buyer, who have just completed a transaction.


This center of narrative gravity may not be a mysterious nugget of mind stuff, but if it is just an abstraction, can it be studied scientifically? Yes, it can.


Obviously the key difference between experiments with rocks, roses, and rats on the one hand, and experiments with awake, cooperative human subjects on the other, is that the latter can communicate in language and hence can collaborate with experimenters.


These methods, correctly understood and followed, obviate the need for any radical or revolutionary “first-person” science of consciousness, and leave no residual phenomena of consciousness inaccessible to controlled scientific study.


What kind of things are beliefs and desires? We may stay maximally noncommittal about this—pending the confirmation of theory—by treating beliefs and their contents or objects as theorists’ fictions or abstractions similar to centers of mass, the equator, and parallelograms of forces.


Mermaid-sightings are real events, however misdescribed, whereas mermaids don’t exist. Similarly, a catalogue of beliefs about experience is not the same as a catalogue of experiences themselves.


And if you, the subject, believe that there are still ineffable residues unconveyed after exhausting such methods, you can tell this to the heterophenomenologists, who can add that belief to the list of beliefs in your primary data: S claims that he has ineffable beliefs about X.


All the physical information there is to obtain? How much is that? Is that like having all the money in the world? What would that be like? It’s not easy to imagine, and nothing less than all will serve to make the thought experiment’s intended point. It must include all the information about all the variation in responses in all the brains, including her own, especially including all the emotional or affective reactions to all the colors under all conditions.


If Jackson had stipulated that Mary had the God-like property of being “physically omniscient”—not just about color but about every physical fact at every level from the quark to the galaxy—many if not all readers would resist, saying that imagining such a feat is just too fantastical to take seriously. But stipulating that Mary knows merely all the physical facts about color vision is not substantially less fantastical.


A besetting problem for the scientific study of consciousness has been the fact that everybody is an expert! Not really, of course, but just about everybody who has reflected for more than a few minutes on the topic seems to think the deliverances of those reflections are as authoritative as the results of any high-tech experiment or any mountain of statistics.


But you can misremember, misinterpret, misdescribe your own most intimate experiences, covertly driven by some persuasive but unreliable bit of ideology.


Here is a simple demonstration you can perform at home that may surprise you. Sit in front of a mirror so you can monitor your own compliance with the directions, which are to stare intently into your own eyes, fixating on them as a target instead of letting your eyes get attracted to peripheral goings-on. Now, without looking, take a card from the middle of a well-shuffled deck of cards and hold it, face-side toward you, at arm’s length just outside the boundaries of your peripheral vision. Wiggle the card. You will know you are doing it, but you won’t see it, of course. Start moving the card into your field of view, wiggling it as you do so. First you can see motion (the wiggling) but no color! You can’t tell whether it’s a red card or a black card or a face card, and you certainly can’t identify its number. As you move it more and more centrally, you will be amazed at how close to straight ahead it has to be for you to identify its color, or the fact that it is a face card or not. As the card gets closer and closer to your fixation point, you must concentrate on not cheating, stealing a glance at the card as it moves in. When you are finally able to identify the card, it is almost directly in front of you.


The armchair theories of philosophers who ignore this moral are negligible at best and more often deeply confused and confusing. What you “learn” about your consciousness “through introspection” is a minor but powerfully misleading portion of what we can learn about your consciousness by adopting the heterophenomenological framework and studying consciousness systematically.


Artificial intelligence (AI) has its own simple cases, known as “toy problems,” which, as the name suggests, are deliberately oversimplified versions of “serious” real-world problems.


In other words, the Life world is a toy world that perfectly instantiates the determinism made famous by the early-nineteenth-century French scientist Pierre Laplace: given the state description of this world at an instant, we observers can perfectly predict the future instants by the simple application of our one law of physics.
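
(The single law is easy to state and to run: a cell is ON at the next instant just in case it has exactly three ON neighbors, or it is ON now and has exactly two. The sketch below, my own minimal implementation, applies that law to a glider; given the configuration at one instant, every later instant follows mechanically.)

```python
from collections import Counter

def step(live_cells):
    """One tick of Conway's Life: the entire 'physics' of the Life world."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # ON next instant iff 3 ON neighbors, or ON now with exactly 2.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
world = glider
for _ in range(4):
    world = step(world)

# Four ticks later the same glider reappears, shifted one cell diagonally;
# nothing but the single law and the initial configuration produced it.
print(world == {(x + 1, y + 1) for (x, y) in glider})   # True
```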


Notice that something curious happens to our “ontology”—our catalogue of what exists—as we move between levels. At the physical level there is no motion, only ON and OFF, and the only individual things that exist, cells, are defined by their fixed spatial location. At the design level we suddenly have the motion of persisting objects; it is one and the same glider


In other words, by the time you have built up enough pieces into something that can reproduce itself (in a two-dimensional world), it is roughly as much larger than its smallest bits as an organism is larger than its atoms. You probably can’t do it with anything much less complicated, though this has not been strictly proved.


Nobody had to design or invent the glider; it was discovered to be implied by the physics of the Life world. But that, of course, is actually true of everything in the Life world. Nothing happens in the Life world that isn’t strictly implied—logically deducible by straightforward theorem-proving—by the physics and the initial configuration of cells.


A poker face is not just for poker.


(It’s costly to prepare another agent’s environment, so your opponents won’t try to anticipate you unless they have very good evidence of what you will do.)


Contrary to ancient ideology, we don’t want our free choices to be utterly uncaused. What we all want, and should want, is that when we act, we act based on good information about the best options available to us.


If only the environment will cause us to have lots of relevant true beliefs about what’s out there, and also cause us to act on the most judicious assessment of that evidence we could achieve!


Fairness does not consist in everybody winning.


Let’s check this one out. How could it be the case that neither A nor B had a clear claim on truth? Well, what if ownership in Caesar’s day was either vague or ill defined so that Caesar only sorta owned some of his gold—perhaps


The digitization prevents the propagation of the individuality of the two CDs to later versions, and ultimately to the digital-to-analog conversion that drives the speakers or ear buds.
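
(A minimal sketch of why, entirely my own illustration: each generation of copying passes through a noisy channel, but re-quantizing to the nearest of the two digital levels snaps the signal back, so the hundredth copy is bit-for-bit identical to the master. Skip the quantization and the individuality, that is, the accumulated noise, propagates.)

```python
import random

rng = random.Random(0)
original = [1, 0, 1, 1, 0, 0, 1, 0]            # the "master" recording, as bits

def copy_analog(signal, noise=0.05):
    """Each copy inherits the previous copy's noise and adds its own."""
    return [s + rng.gauss(0, noise) for s in signal]

def copy_digital(signal, noise=0.05):
    """Same noisy channel, but re-quantized to 0/1 after every copy."""
    return [round(s + rng.gauss(0, noise)) for s in signal]

analog = [float(b) for b in original]
digital = [float(b) for b in original]
for _ in range(100):                            # one hundred generations of copying
    analog = copy_analog(analog)
    digital = copy_digital(digital)

print([round(x, 2) for x in analog])            # drifted: the noise has piled up
print(digital == original)                      # True: individuality never propagates
```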


(Before there were computers you could buy a book that was nothing but a table of random numbers to use in your research, page after page of digits that had been scrupulously generated in such a way as to pass all the tests of randomness that mathematicians had devised.)


it is “mathematically compressible” in the sense that this infinitely long sequence can be captured in a finitely specified mechanism that will crank it out.
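
(“Finitely specified mechanism” can be taken quite literally. The generator below is an assumed illustration, a standard linear congruential generator rather than anything from the book: a few constants and one line of arithmetic implicitly contain an endless, random-looking stream of digits.)

```python
def compressed_sequence(seed=42):
    """A finitely specified mechanism that cranks out an unbounded stream of digits."""
    state = seed
    while True:
        # Standard linear congruential generator (Numerical Recipes constants).
        state = (1664525 * state + 1013904223) % 2**32
        yield (state >> 16) % 10   # draw the digit from high-order bits

gen = compressed_sequence()
print([next(gen) for _ in range(20)])   # looks random, yet is perfectly reproducible
```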


If determinism is true, are there ever any real choices?


Here is where psychologist Donald Hebb’s dictum comes in handy: If it isn’t worth doing, it isn’t worth doing well.


Probably every philosopher can readily think of an ongoing controversy in philosophy whose participants would be out of work if Hebb’s dictum were ruthlessly applied, but we no doubt disagree on just which cottage industries should be shut down.


One good test to make sure a philosophical project is not just exploring the higher-order truths of chmess is to see if people aside from philosophers actually play the game. Can anybody outside of academic philosophy be made to care about whether Jones’s counterexample works against Smith’s principle?


Another such test is to try to teach the stuff to uninitiated undergraduates. If they don’t “get it,” you really should consider the hypothesis that you’re following a self-supporting community of experts into an artifactual trap.


(And remember, too, that if Goofmaker hadn’t made his thesis a little too bold, he never would have attracted all the attention in the first place; the temptation to be provocative is not restricted to graduate students on the lookout for a splashy entrance into the field.)


So if Sturgeon’s Law holds for philosophy as it does for everything else, what, in my view, is the good stuff? First of all, the classics really are classics for a good reason. From Plato to Russell, the standard fare in history-of-philosophy courses holds up well even after centuries of examination, and the best of the secondary literature about this primary literature is also very valuable. You will get something—a lot, really—out of reading Aristotle or Kant or Nietzsche on your own, without any background.


A good library has all the good books. A great library has all the books.


After all, Leibniz didn’t write the Monadology to be an exemplary work of seventeenth-century rationalism; he wrote it to get at the truth.


In the end, you’re not taking any philosopher seriously until you ask whether or not what they say is right. Philosophy students—and professors—sometimes forget this, and concentrate on pigeonholing and engaging in “compare and contrast,” as we say in examination questions. Whole philosophy departments sometimes fall into this vision of their goal. That’s not philosophy; that’s just philosophy appreciation.


Respect the philosopher you are reading by asking yourself, about every sentence and paragraph, “Do I believe this, and if not, why not?”


Several times in my career I have relied on the judgment of a colleague who told me not to bother with X’s work because it was foolish junk, only to learn some time later that I had been misled into ignoring a thinker with valuable ideas whose contribution to my own thinking was delayed by the bum steer.


“It’s inconceivable!” That’s what some people declare when they confront the “mystery” of consciousness, or the claim that life arose on this planet more than three billion years ago without any helping hand from an Intelligent Designer, for instance. When I hear this, I am always tempted to say, “Well of course it’s inconceivable to you. You left your thinking tools behind and you’re hardly trying.”


Even schoolchildren have little difficulty conceiving of DNA today, and it’s not because they are more brilliant than Bateson was. It’s because in the last century we have devised and refined the thinking tools that make it a snap.


Of course some people really don’t want to conceive of these things. They want to protect the mysteries from even an attempt at explanation, for fear that an explanation might make the treasures disappear.


When other people start getting inquisitive, they find that “God works in mysterious ways” is a convenient anti-thinking tool. By hinting that the questioner is arrogant and overreaching, it can quench curiosity in an instant.


I think we should stop treating this “pious” observation as any kind of wisdom and recognize it as the transparently defensive propaganda that it is. A positive response might be, “Oh good! I love a mystery. Let’s see if we can solve this one, too. Do you have any ideas?”


Conceiving of something new is hard work, not just a matter of framing some idea in your mind, giving it a quick once-over and then endorsing it. What is inconceivable to us now may prove to be obviously conceivable when we’ve done some more work on it.