On this line of thinking, you don’t get any information about who has better epistemic standards merely by observing that someone disagrees with you. After all, the other side observes just the same fact of disagreement.
If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.”
Theoretically, a liquid market should be just exploitable enough to pay competent professionals the same hourly rate as their next-best opportunity.
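A minimal sketch of that equilibrium condition, with all numbers invented for illustration: professionals keep entering a liquid market while exploiting it beats their next-best hourly rate, so at equilibrium the remaining edge per hour roughly equals that rate.

```python
# Illustrative only: entry continues while exploiting the mispricing still beats
# the professional's next-best hourly rate.

def remaining_edge(total_mispricing, traders, hours_each=10):
    """Edge captured per hour once `traders` professionals split the mispricing."""
    return total_mispricing / (traders * hours_each)

NEXT_BEST_RATE = 200       # $/hour a competent professional could earn elsewhere
TOTAL_MISPRICING = 40_000  # total exploitable dollars in some pricing error

traders = 1
while remaining_edge(TOTAL_MISPRICING, traders + 1) >= NEXT_BEST_RATE:
    traders += 1           # entry continues while it still beats the alternative

print(traders)                                    # 20 professionals enter
print(remaining_edge(TOTAL_MISPRICING, traders))  # 200.0 $/hour: the next-best rate
```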
We might call this argument “Chesterton’s Absence of a Fence.” The thought being: I shouldn’t build a fence here, because if it were a good idea to have a fence here, someone would already have built it.
If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency.
So a frothy housing market may see many overpriced houses, but few underpriced ones. Thus it will be easy to lose money in this market by buying stupidly, and much harder to make money by buying cleverly.
To modify an old aphorism: usually, when things suck, it’s because they suck in a way that’s a Nash equilibrium.
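A minimal sketch of "it sucks, but it's a Nash equilibrium," with invented payoffs: both players using the Bad standard is worse for everyone than both using Good, yet neither can gain by switching alone, so nobody switches.

```python
# payoffs[(my_move, their_move)] = my payoff (numbers invented for illustration)
payoffs = {
    ("bad", "bad"):   1,   # the status quo everyone complains about
    ("bad", "good"):  0,   # sticking with Bad while the other switches
    ("good", "bad"):  0,   # switching alone while the other sticks with Bad
    ("good", "good"): 3,   # the better world nobody can reach unilaterally
}

def is_nash(profile):
    """True if no player can improve their own payoff by deviating alone."""
    a, b = profile
    return (payoffs[(a, b)] >= max(payoffs[(x, b)] for x in ("bad", "good"))
            and payoffs[(b, a)] >= max(payoffs[(y, a)] for y in ("bad", "good")))

print(is_nash(("bad", "bad")))    # True: the sucky equilibrium is stable
print(is_nash(("good", "good")))  # True: the better equilibrium is also stable...
print(is_nash(("good", "bad")))   # False: ...but you can't get there one player at a time
```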
In the same way that inefficient markets tend systematically to be inexploitable, grossly inadequate systems tend systematically to be unfixable by individual non-billionaires.
Adequacy arguments are ubiquitous, and they’re much more common in everyday reasoning than arguments about efficiency or exploitability.
Am I running out and trying to get a SAD researcher interested in my anecdotal data? No, because when something like this doesn’t get done, there’s usually a deeper reason than “nobody thought of it.”
Where reward doesn’t follow success, or where not everyone can individually pick up the reward, institutions and countries and whole civilizations can fail at what is usually imagined to be their tasks.
Systems tend to be inexploitable with respect to the resources that large ecosystems of competent agents are trying their hardest to pursue, like fame and money, regardless of how adequate or inadequate they are.
The academic and medical system probably isn’t that easy to exploit in dollars or esteem, but so far it does look like maybe the system is exploitable in SAD innovations, due to being inadequate to the task of converting dollars, esteem, researcher hours, etc. into new SAD cures at a reasonable rate—inadequate, for example, at investigating some SAD cures that Randall Munroe would have considered obvious, or at doing the basic investigative experiments that I would have considered obvious. And when the world is like that, it’s possible to cure someone’s crippling SAD by thinking carefully about the problem yourself, even if your civilization doesn’t have a mainstream answer.
In our world, there are a lot of people screaming, “Pay attention to this thing I’m indignant about over here!” In fact, there are enough people screaming that there’s an inexploitable market in indignation.
Moving from bad equilibria to better equilibria is the whole point of having a civilization in the first place.
To sum up, academic science is embedded in a big enough system with enough separate decisionmakers creating incentives for other decisionmakers that it almost always takes the path of least resistance.
occupational licensing works to the benefit of professionals at the expense of consumers,
I’m afraid that our civilization doesn’t have a sufficiently stirring and narratively satisfying conception of the valor of “testing things” that our people would be massively alarmed by its impossibility.
You can’t just not kill babies and expect to get away with it.
Consider me impressed that your planet managed to reach this level of dysfunction without actually physically bursting into flames.
It’s called a Keynesian beauty contest, where everyone tries to pick the contestant they expect everyone else to pick.
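A minimal sketch of that dynamic, with invented judges and names: each judge votes not for their own favorite but for whoever they expect the other judges to pick, so the winner is the consensus about the consensus rather than anyone's actual favorite.

```python
from collections import Counter

judges = [
    # (own favorite, whom they expect everyone else to pick) -- data invented
    ("Alice", "Carol"),
    ("Bob",   "Carol"),
    ("Carol", "Carol"),
    ("Alice", "Carol"),
    ("Bob",   "Carol"),
]

honest_votes  = Counter(own for own, _ in judges)
contest_votes = Counter(expected for _, expected in judges)

print(honest_votes.most_common(1))   # [('Alice', 2)] -- an actual favorite (tied with Bob)
print(contest_votes.most_common(1))  # [('Carol', 5)] -- whom everyone expects everyone to pick
```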
So if you did do this to yourselves, all by yourselves with no external empire to prevent you from doing anything differently by force of arms, then why can’t you just vote to change the voting rules? No, never mind “voting”—why can’t you all just get together and change everything, period?
I suspect some of the dynamics in entrepreneur-land are there because many venture capitalists run into entrepreneurs that are smarter than them, but who still have bad startups. A venture capitalist who believes clever-sounding arguments will soon be talked into wasting a lot of money. So venture capitalists learn to distrust clever-sounding arguments because they can’t distinguish lies from truth, when they’re up against entrepreneurs who are smarter than them.
Singapore might be the best-governed country in the world, and their history is approximately, “Lee Kuan Yew gained very strong individual power over a small country, and unlike the hundreds of times in the history of Earth when that went horribly wrong, Lee Kuan Yew happened to know some economics.”
Forgive me for resorting to Occam’s Razor,
What broke the silence about artificial general intelligence (AGI) in 2014 wasn’t Stephen Hawking writing a careful, well-considered essay about how this was a real issue. The silence only broke when Elon Musk tweeted about Nick Bostrom’s Superintelligence, and then made an off-the-cuff remark about how AGI was “summoning the demon.” Why did that heave a rock through the Overton window, when Stephen Hawking couldn’t? Because Stephen Hawking sounded like he was trying hard to appear sober and serious, which signals that this is a subject you have to be careful not to gaffe about. And then Elon Musk was like, “Whoa, look at that apocalypse over there!!” After which there was the equivalent of journalists trying to pile on, shouting, “A gaffe! A gaffe! A… gaffe?” and finding out that, in light of recent news stories about AI and in light of Elon Musk’s good reputation, people weren’t backing them up on that gaffe thing.
We’ve just been walking through a handful of lay economic concepts here, the kind whose structure I can explain in a few thousand words. If you truly perceived the world through the eyes of a conventional cynical economist, then the horrors, the abominations, the low-hanging fruits you saw unpicked would annihilate your very soul.
in the early days when Omegaven was just plain illegal to sell across state lines, some parents would drive for hours, every month, to buy Omegaven from the Boston Children’s Hospital to take back to their home state. I, for one, would call that an extraordinary effort. Those parents went far outside their routine, beyond what the System would demand of them, beyond what the world was set up to support them doing by default. Most people won’t make an effort that far outside their usual habits even if their own personal lives are at stake.
A lot of the times we put on our inadequacy-detecting goggles, we’re deciding whether to trust some aspect of society to be more competent than ourselves.
I think a formative moment for any rationalist—our “Uncle Ben shot by the mugger” moment, if you will—is the moment you go “holy shit, everyone in the world is fucking insane.” […] Now, there are basically two ways you can respond to this. First, you can say “holy shit, everyone in the world is fucking insane. Therefore, if I adopt the radical new policy of not being fucking insane, I can pick up these giant piles of utility everyone is leaving on the ground, and then I win.” […] This is the strategy of discovering a hot new stock tip, investing all your money, winning big, and retiring to Maui. Second, you can say “holy shit, everyone in the world is fucking insane. However, none of them seem to realize that they’re insane. By extension, I am probably insane. I should take careful steps to minimize the damage I do.” […] This is the strategy of discovering a hot new stock tip, realizing that most stock tips are bogus, and not going bankrupt.
Good reasoners don’t believe that there are goblins in their closets. The ultimate reason for this isn’t that goblin-belief is archaic, outmoded, associated with people lost in fantasy worlds, too much like wishful thinking, et cetera. It’s just that we opened up our closets and looked and we didn’t see any goblins.
This is a central disagreement I have with modest epistemology: modest people end up believing that they live in an inexploitable world because they’re trying to avoid acting like an arrogant kind of person. Under modest epistemology, you’re not supposed to adapt rapidly and without hesitation to the realities of the situation as you observe them, because that would mean trusting yourself to assess adequacy levels; but you can’t trust yourself, because Dunning-Kruger, et cetera.
But in real life, inside a civilization that is often tremendously broken on a systemic level, finding a contrarian expert seeming to shine against an untrustworthy background is nowhere remotely near as difficult as becoming that expert yourself. It’s the difference between picking which of four runners is most likely to win a fifty-kilometer race, and winning a fifty-kilometer race yourself.
you can often know things that the average authority doesn’t know… but not because you figured it out yourself, in almost every case.
Above all, reaching the true frontier requires picking your battles.
To win, choose winnable battles; await the rare anomalous case of, “Oh wait, that could work.”
Instead the retail dietary options for epileptic children involved mostly soybean oil, of which it has been said, “Why not just shoot them?”
So a realistic lifetime of trying to adapt yourself to a broken civilization looks like: 0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” A few people, but not many, will answer “Yes” to enough instances of this question to count on the fingers of both hands. Moving on to your toes indicates that you are a crackpot.
Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself (i.e., without trying to solve the further problems of widespread adoption), after a few weeks’ research and a bunch of testing and occasionally asking for help?”
Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify.
Worrying about how one data point is “just an anecdote” can make sense if you’ve already collected thirty data points. On the other hand, when you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people’s not-quite-similar experiences, and then you collide directly with reality for the first time, one data point is huge.
Oh, and bet. Bet on everything. Bet real money. It helps a lot with learning. I once bet $25 at even odds against the eventual discovery of the Higgs boson—after 90% of the possible mass range had been experimentally eliminated, because I had the impression from reading diatribes against string theory that modern theoretical physics might not be solid enough to predict a qualitatively new kind of particle with prior odds greater than 9:1. When the Higgs boson was discovered inside the remaining 10% interval of possible energies, I said, “Gosh, I guess they can predict that sort of thing with prior probability greater than 90%,” updated strongly in favor of the credibility of things like dark matter and dark energy, and then didn’t make any more bets like that.
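A minimal sketch of the update described in that bet, with likelihoods invented for illustration: compare "theoretical physics can predict a new particle with better than 9:1 reliability" against "it can't," given the observation that the Higgs turned up in the remaining mass window.

```python
# Numbers are illustrative, not from the text.

def posterior(prior_good, p_obs_given_good, p_obs_given_weak):
    """Posterior probability of the 'physics is reliable here' hypothesis."""
    joint_good = prior_good * p_obs_given_good
    joint_weak = (1 - prior_good) * p_obs_given_weak
    return joint_good / (joint_good + joint_weak)

# Before: roughly even odds on the two hypotheses (hence the even-odds bet).
# If physics is reliable, discovery in the remaining window was ~95% likely;
# if it isn't, finding the particle in the last 10% of the range was a long shot.
print(round(posterior(prior_good=0.5,
                      p_obs_given_good=0.95,
                      p_obs_given_weak=0.10), 2))  # ~0.9: update hard toward reliability
```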
Run experiments; place bets; say oops. Anything less is an act of self-sabotage.
the impression I got was that the message of “distrust theorizing” had become so strong that Founder 1 had stopped trying to model users in detail and thought it was futile to make an advance prediction. But if you can’t model users in detail, you can’t think in terms of workflows and tasks that users are trying to accomplish, or at what point you become visibly the best tool the user has ever encountered to accomplish some particular workflow (the minimum viable product). The alternative, from what I could see, was to think in terms of “features” and that as soon as possible you would show the product to the user and see if they wanted that subset of features.
I think it felt immodest to him to claim that his company could grow to a given level; so he thought only in terms of things he knew he could try, forward-chaining from where he was rather than backward-chaining from where he wanted to go, because that way he didn’t need to immodestly think about succeeding at a particular level, or endorse an inside view of a particular pathway.
you’re… not quite doomed per se, but from the perspective of somebody like me, there will be ten of you with bad ideas for every one of you that happens to have a good idea.
the deep causal models of the world that allowed humans to plot the trajectory of the first moon rocket before launch, for example, or that allow us to verify that a computer chip will work before it’s ever manufactured.
I mean, the psychiatric patient wouldn’t say that, the same way that a crackpot wouldn’t actually give a long explanation of why they’re allowed to use the inside view. But they could, and according to modesty, That’s Terrible.
This fits into a very common pattern of advice I’ve found myself giving, along the lines of, “Don’t assume you can’t do something when it’s very cheap to try testing your ability to do it,” or, “Don’t assume other people will evaluate you lowly when it’s cheap to test that belief.”
My more recent experience seems more like 90% telling people to be less underconfident, to reach higher, to be more ambitious, to test themselves, and maybe 10% cautioning people against overconfidence.
“If you can’t remember any time in the last six months when you failed, you aren’t trying to do difficult enough things.”
If you’ve never wasted an effort, you’re filtering on far too high a required probability of success.
many people’s emotional makeup is such that they experience what I would consider an excess fear—a fear disproportionate to the non-emotional consequences—of trying something and failing. A fear so strong that you become a nurse instead of a physicist because that is something you are certain you can do. Anything you might not be able to do is crossed off the list instantly. In fact, it was probably never generated as a policy option in the first place. Even when the correct course is obviously to just try the job interview and see what happens, the test will be put off indefinitely if failure feels possible.
This is one of the emotions that I think might be at work in recommendations to take an outside view on your chances of success in some endeavor. If you only try the things that are allowed for your “reference class,” you’re supposed to be safe—in a certain social sense. You may fail, but you can justify the attempt to others by noting that many others have succeeded on similar tasks. On the other hand, if you try something more ambitious, you could fail and have everyone think you were stupid to try.
As I’ve increasingly noticed of late, and contrary to beliefs earlier in my career about the psychological unity of humankind, not all human beings have all the human emotions. The logic of sexual reproduction makes it unlikely that anyone will have a new complex piece of mental machinery that nobody else has… but absences of complex machinery aren’t just possible; they’re amazingly common.
If you’re fully asexual, then you haven’t felt the emotion others call “sexual desire”… but you can feel friendship, the warmth of cuddling, and in most cases you can experience orgasm. If you’re not around people who talk explicitly about the possibility of asexuality, you might not even realize you’re asexual and that there is a distinct “sexual attraction” emotion you are missing, just like some people with congenital anosmia never realize that they don’t have a sense of smell.
It took me a long time to understand that trying to do interesting things in the future is a status violation because your current status right now determines what kinds of images you are allowed to associate with yourself, and if your status is low, then many people will intuitively perceive an unpleasant violation of the social order should you associate with yourself an image of possible future success above some level. Only people who already have something like an aura of pre-importance are allowed to try to do important things. Publicly setting out to do valuable and important things eventually is above the status you already have now, and will generate an immediate system-1 slapdown reaction.
subconscious influences and emotional temptations are a problem, but you can often beat those if your explicit verbal reasoning is good.
Somehow, someone is going to horribly misuse all the advice that is contained within this book. Nothing I know how to say will prevent this, and all I can do is advise you not to shoot your own foot off;
pay more attention to observation than to theory in cases where you’re lucky enough to have both and they happen to conflict; put yourself and your skills on trial in every accessible instance where you’re likely to get an answer within the next minute or the next week;
update hard on single pieces of evidence if you don’t already have twenty others.
You might remember the Free Energy Fallacy, and that it’s much easier to save yourself than your country.
I don’t have good, repeatable exercises for training your skill in this field, and that’s one reason I worry about the results. But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning.
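One minimal way to make that calibration check concrete, with the log format and example entries being my own invention: record a stated probability and an outcome for each bet, then compare how often you were right inside each confidence bucket.

```python
from collections import defaultdict

bets = [
    # (stated probability that the claim is true, did it turn out true?) -- invented data
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True),
]

buckets = defaultdict(list)
for p, outcome in bets:
    buckets[round(p, 1)].append(outcome)

for p in sorted(buckets):
    hits = buckets[p]
    print(f"said {p:.0%}: right {sum(hits)}/{len(hits)} of the time "
          f"({sum(hits)/len(hits):.0%})")
# If the right-hand percentages run consistently below the stated ones, you're
# overconfident; consistently above, underconfident.
```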
The policy of saying only what will do no harm is a policy of total silence for anyone who’s even slightly imaginative
For yourself, dear reader, try not to be part of the harm. And if you end up doing something that hurts you: stop doing it.
The world isn’t mysteriously doomed to its current level of inadequacy. Incentive structures have parts, and can be reengineered in some cases, worked around in others.
Better, I think, to not worry quite so much about how lowly or impressive you are. Better to meditate on the details of what you can do, what there is to be done, and how one might do it.
Trying to defend against that hypothetical crackpot will not lead us to devise a good system of thought.
If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.