Autoethics: Self Enslavement

January 11, 2012

Let’s do something a little different today.

Santino2012 is turning out to be a significantly different guy than Santino<=[2010,2011]. He’s sober, serious, and actively working on bettering himself. I found a beautiful autobiography yesterday and decided it was time to update my ancient About Me field on Facebook. Here’s what I came up with:

I am a programmer, a student, a rationalist, an engineer, a scientist, a game designer, an atheist, a mathematician, a singer, a liberal, a philosopher, a lover of music, a teacher, an actor, and a writer. I believe in free speech, in free thought, in education, in socialism, in truth, in equal rights for everyone, in justice, in humanity, and I believe in myself, because when I don’t succeed, I learn, and when I learn, I succeed. Life isn’t always easy, but it’s always good.

Look! That’s me in three sentences. How about that for information density? In defense of some of these claims, I’m going to get down and dirty.

And we’re going to do some philosophy. Ready? Good. Let’s go!

In the movie The Matrix, there exists a means of rapidly downloading information and skills into a brain. Let’s imagine such a device exists (but set aside the movie’s preconceptions; we’re going to work this through together, and the reference is just a quick introduction).

I’m loath to use the phrase “it seems evident”, because in my experience such a phrase very often leads to poor philosophizing, but my main point doesn’t really depend on it, so… it seems evident that the brain functions in an analog fashion: that if we wanted to implant memories (and, more so, skills) into one, it wouldn’t be as easy as copying bytes across a serial line. No, I’d imagine (with my current understanding of neuroscience) that you’d instead have to teach the brain, and let it serialize the ideas itself.

What do I mean? That the only way to transfer information is to provide input to the brain and let it do what it will with the signal. Everyday life certainly works this way: we interpret language and actions, and everyone gets a slightly different idea out of the same input signals.

So if one were tasked with building a device for rapidly transferring information into brains, it stands to reason that at least SOMEONE (maybe this someone is me?) would come up with the solution of placing people in very specific, very focused scenarios, where the intended learning goal is easily grasped and difficult to interpret in a significantly different manner.

If one could present enough of these micro-scenarios, presumably one could reliably insert complicated concepts into a brain. And if it could be done quickly enough, it would presumably be useful à la The Matrix. Necessarily, memories of the individual lessons would have to be negligible, or coming out of such a state of idea-transfer would be extremely confusing.

This is where things start to get interesting. If the individual lessons aren’t remembered (only the overarching concepts), then we can treat the information receiver as two people - one during the lessons, and one after. Both selves are continuous with the self before idea-transfer occurred, which gives us a tree:

original----learner (during transfer)
        \---learned (after transfer)

Because the learner stops existing at the point of finishing learning, and the learned has no memory of this event, they are, for all intents and purposes, different people. Yet they are still the same person!
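That branching structure is easy to make concrete in code. Here’s a minimal sketch (all names are mine, purely illustrative): each self records which self it is continuous with, and we can check that learner and learned both trace back to the original without either one tracing back to the other.

```python
# Illustrative sketch of the branching-selves tree. The class and names
# are hypothetical, invented just to make the continuity relation concrete.

class Self:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # the self this one is directly continuous with

    def ancestors(self):
        """Walk up the continuity chain, collecting ancestor names."""
        node, chain = self, []
        while node.parent is not None:
            node = node.parent
            chain.append(node.name)
        return chain

original = Self("original")
learner = Self("learner", parent=original)   # exists only during transfer
learned = Self("learned", parent=original)   # exists only after transfer

# Both branches are continuous with the original self...
print(learner.ancestors())  # ['original']
print(learned.ancestors())  # ['original']

# ...but neither appears in the other's chain: the learned self carries
# none of the learner's memories, only the distilled concepts.
print("learner" in learned.ancestors())  # False
```

The key point the code makes visible: the only shared node is the root. The learned self is downstream of the original, not of the learner, which is exactly why the two branches can be treated as different people.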

It’s not a big leap to assume that large numbers of highly focused lessons wouldn’t be very fun. To satisfy the criterion of singular interpretability, there wouldn’t be a lot of room for self-expression, friendship, or amusement (unless, of course, those are the goals of the lessons).

Here we come to the crux of the problem. The learned agent is subjecting the learner agent to tedium and hardship, and receiving all of the benefits of the learner’s labor for free. That sounds a lot like slavery to me. Remember, these two agents are different people, but one is exploiting the other.

It’s a question of what I’ll coin “autoethics”: ethics pertaining only to the self. Am I entitled to enslave myself (albeit an entirely SEPARATE self)? Is this an OK thing to do?

I’m pretty sure it’s not.

Think about it, and let me know. I’m very interested to hear some different takes on this.