You Are Not a Gadget

Jaron Lanier

It would be hard for anyone, let alone a technologist, to get up in the morning without the faith that the future can be better than the past.


Science removes ideas from play empirically, for good reason. Lock-in, however, removes design options based on what is easiest to program, what is politically feasible, what is fashionable, or what is created by chance.


We ought to at least try to avoid this particularly tricky example of impending lock-in. Lock-in makes us forget the lost freedoms we had in the digital past. That can make it harder to see the freedoms we have in the digital present.


They find it incredible that I perceive a commonality in the membership of the tribe. To them, the systems Linux and UNIX are completely different, for instance, while to me they are coincident dots on a vast canvas of possibilities, even if much of the canvas is all but forgotten by now.


Then there are those recently conceived elements of the future of human experience, like the already locked-in idea of the file, that are as fundamental as the air we breathe. The file will henceforth be one of the basic underlying elements of the human story, like genes.


Emphasizing the crowd means deemphasizing individual humans in the design of society, and when you ask people not to be people, they revert to bad moblike behaviors. This leads not only to empowered trolls, but to a generally unfriendly and unconstructive online world.


Spirituality is committing suicide. Consciousness is attempting to will itself out of existence.


Love the English language (or hate it).


But in the case of digital creative materials, like MIDI, UNIX, or even the World Wide Web, it’s a good idea to be skeptical. These designs came together very recently, and there’s a haphazard, accidental quality to them.


What Do You Do When the Techies Are Crazier Than the Luddites?


The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new super-consciousness takes over the Earth. The Rapture and the Singularity have one thing in common: they can never be verified by the living.


The antihuman approach to computation is one of the most baseless ideas in human history. A computer isn’t even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don’t mean anything without a cultured person to interpret them.


This is not solipsism. You can believe that your mind makes up the world, but a bullet will still kill you. A virtual bullet, however, doesn’t even exist unless there is a person to recognize it as a representation of a bullet. Guns are real in a way that computers are not.


Since implementation speaks louder than words, ideas can be spread in the designs of software.


If you believe the distinction between the roles of people and computers is starting to dissolve, you might express that—as some friends of mine at Microsoft once did—by designing features for a word processor that are supposed to know what you want, such as when you want to start an outline within your document. You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline. While I am all for the automation of petty tasks, this is different.


But what if information is inanimate? What if it’s even less than inanimate, a mere artifact of human thought? What if only humans are real, and information is not?


In order to understand how someone could have come up with that plan, you have to remember that before computers came along, the steam engine was a preferred metaphor for understanding human nature. All that sexual pressure was building up and causing the machine to malfunction, so the opposite essence, the female kind, ought to balance it out and reduce the pressure. This story should serve as a cautionary tale. The common use of computers, as we understand them today, as sources for models and metaphors of ourselves is probably about as reliable as the use of the steam engine was back then.


But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart.


We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.


Will trendy cloud-based economics, science, or cultural processes outpace old-fashioned approaches that demand human understanding? No, because it is only encounters with human understanding that allow the contents of the cloud to exist.


If the Deep Blue team had not been as good at the software problem, a computer would still have become the world champion at some later date, thanks to sheer brawn.


People already tend to defer to computers, blaming themselves when a digital gadget or online service is hard to use.


I like the term “empathy” because it has spiritual overtones. A term like “sympathy” or “allegiance” might be more precise, but I want the chosen term to be slightly mystical, to suggest that we might not be able to fully understand what goes on between us and others, that we should leave open the possibility that the relationship can’t be represented in a digital database.


Empathy inflation can also lead to the lesser, but still substantial, evils of incompetence, trivialization, dishonesty, and narcissism. You cannot live, for example, without killing bacteria. Wouldn’t you be projecting your own fantasies on single-cell organisms that would be indifferent to them at best? Doesn’t it really become about you instead of the cause at that point?


Do you think the bacteria you saved are morally equivalent to former slaves—and if you do, haven’t you diminished the status of those human beings?


If we are only able to be approximately moral, that doesn’t mean we should give up trying to be moral at all.


This takes time; in the real world the universe probably wouldn’t support conditions for life long enough for you to make a purchase. But this is a thought experiment, so don’t be picky.


For instance, you can propose that consciousness is an illusion, but by definition consciousness is the one thing that isn’t reduced if it is an illusion.


I claim that there is one measurable difference between a zombie and a person: a zombie has a different philosophy. Therefore, zombies can only be detected if they happen to be professional philosophers. A philosopher like Daniel Dennett is obviously a zombie.