The Curious Case of Lecture Crashing

June 1, 2014

EDIT 2014-06-12: This post turned out significantly worse than I was hoping – the point is not to beat up on arts students but instead to show how we can reason better about complicated-seeming things. Apparently there is more inferential distance to be covered before jumping to my main points. Let’s consider this a failed experiment, but leave it up for historical reasons.

{% previous painting-with-paint chunking-conceptual-legos %}

Lecture Crashers

So, Friday was a little weird. I had an hour of class as early in the morning as you might care to imagine, and a lunch date in the early afternoon. Understandably this left me with more time than I knew what to do with. I was planning on finding an eigenlocation on campus to get some homework done, but on my search for such a spot, I came across a lecture hall that I’ve always wanted to have a lecture in.

A thought crossed my mind.

I opened the door, and lo and behold, a lecture was actively being taught inside. This might have been my only chance to get a lecture in the room, so I went in and sat down. A helpful young woman beside whom I had decided to sit informed me of the arbitrary sequence of letters and numbers which make up a class code. I was unable to decipher the code, and I was very unable to decipher the topic of the lecture at hand, but it seemed like neuroscience to me.

Cool! This was neat, and very, very different from my usual mathematics lectures. I was getting excited, and then the bell rang and everyone got up and left. Dang it! But I wasn’t going to give up that easily. I had become Vince Vaughn in Wedding Crashers, and I’d be damned if I weren’t going to get into trouble with nymphomaniacal redheaded vixens.

Yes, they do call me “Sandy ‘Fantastic Metaphors’ Maguire” for a reason. Thanks for asking!

So I wandered around through the math and science buildings that could generally be described as my haunt. I popped into lecture after lecture, and realized that I knew (or could derive) everything I walked in on. Maybe they only teach baby courses during the summer at the University of Waterloo, or maybe paying attention to four years’ worth of heavy mathematics has paid for itself, but whatever the reason, I wasn’t going to find much of a challenge in these buildings.

Traveling onwards, with gusto, I headed for the arts quad. I’ve taken exactly two arts courses in the last four years, and hated both of them. But regardless of my opinions on the subject matter, people seem to pay money to take these courses. Other people spend their entire lives studying the subjects. In the eternal hunt for universal human experiences, this was clearly an area ripe for searching.

One of the rooms had some interesting looking people, evidently waiting for their lecture to begin. I snuck in and quietly asked what lecture this was – pretending that maybe I was just confused and couldn’t keep track of what day it was. Another course code was rattled off at me. Having learned my lesson from the last time, I asked if there was a colloquial name for the course.

There was. “Ethics”. Oh, this was going to be fun.

I followed up my questions by asking one of the attendees whether my presence would be noticed if I weren’t actually supposed to be there. She said no, and smiled when I told her about my lecture crashing plans.

She lied. The professor took attendance, but graciously didn’t say anything. Maybe she didn’t want to start any trouble. Maybe it wasn’t worth her time. Maybe she didn’t care. Maybe it wouldn’t have been ethical to kick me out?

Who can say.

Misguided Students of the Human Condition

Many things about the lecture set off my internal WTF-detector. Attendance was probably the least of them. The other students had name-tags! My co-conspirator – the one who said I wouldn’t be found out – was obviously completely clueless, also crashing the lecture, or having a little fun at my expense. But that’s okay.

The second-most disturbing vibe of the lecture was that it didn’t seem to be much of a lecture whatsoever. The professor would bring up points and ask the class what they thought about them. Discussion would occur. Why was this a disturbing vibe? I can have conversations with people who don’t know what they’re talking about on my own, for less than $1400 per semester. That’s just good economics! The cynic in me decided this was just a good tactic for padding out fluffy lectures, so we can’t deduct too many points for solid strategy.

However, this completely pales in comparison with the most disturbing aspect of the lecture. The professor seemingly didn’t have any opinions of her own. She would pose questions in the form of “here’s an ethical debate. Thoughts?”, and would reply to the students’ responses with “that’s a great point! It really draws out the underlying question, doesn’t it?”.

Maybe this is what is meant by “critical thinking” – refusing to fall into the trap that questions have answers. Most of the students’ answers to ethical dilemmas were of the type “how I feel about this issue”. Sure, that’s fine, you’re allowed to have opinions (in fact, it’s encouraged!), but you get a lot more points if you have solid reasoning behind your opinions rather than just going with your gut instinct.

I feel like this is a way of thinking that studying the STEM subjects teaches relatively well. There are underlying rules to the universe, and they accurately describe very different things really really well. The equations for rotational force are of the same form as the area of a circle for deep reasons (and the answer is not because they both talk about circles). The equations for (classical) gravity and electromagnetism look the same too. These seemingly very different systems are governed by the same rules, which implies there should be a reason why this is so.

A model of a social ethical framework was presented during the lecture. A student pointed out that it was a pretty lame model, since it didn’t really describe anything that we might want a social ethical framework to describe. The professor acknowledged this fact (what’s the point of showing it, then?), and asked the group for further comments. One of the comments was that this model was bad because no model could possibly describe the depth and complexity of the issues that we describe when we talk about “ethics”. The rest of the class nodded along, as if this were a Very Wise Thing To Say.

Let’s take a second to think about what that would mean. To say nothing of the enormous and terrifying claims it makes about epistemology, it would imply that each and every person in the room was wasting their time by attempting to study the subject. If there are no possible patterns of ethics to be recognized, then you might as well take your $1400 and set it on fire as soon as possible.

What’s the point of anything if you’re just going to give up that easily?

Painting with Salt

In a rare case of charity to those who study classic philosophy, let’s say that they are not so fundamentally misguided as to be wasting their lives trying to solve unsolvable mysteries. Have a little heart, please. They’re simply failing to solve their mysteries. Totally different!

This mindset of hitting a hard problem and instantly assuming it is Fundamentally Unsolvable strikes me as being unnervingly common. It’s absolutely terrifying because there are all of these hard problems that need to be solved and that’s only going to happen if we actually try to solve them. The correct follow-up thought to “this model doesn’t do what we want it to do” is not “all models are broken”, but rather “we have the wrong model”.

Reality isn’t, and has never been, wrong. Whenever it doesn’t conform to our expectations, it simply means that we’re not yet smart enough to figure out why. The problem is us, not the universe. Like the laws of gravity and circles, there are laws to be found here, if we just look hard enough. It’s going to be pretty hard to find said laws if you throw up your hands every time saying “nope, can’t be done”!

Anyway, I digress. If you can’t tell, I’m a little salty about this whole thing. The reason it’s been so long since my last blog post is that I’ve been trying to figure out a good way of teaching how to Paint with Thoughts, but it’s really hard to find both a compelling AND useful starting point. I’ve been banging my head against it for the better part of a month.

So I had an idea. How about we just do a case study? Let’s take some time and analyze ethics from a Painting with Thoughts perspective, and see if it takes us anywhere useful. I’m not claiming to have “solved” ethics (what would that even mean? Side note: isn’t it interesting that the mysteriousness around philosophical concepts is so deeply rooted that we can’t even talk about actually coming up with an answer?), but I’m very confident that if you gave me a week I could come up with a better answer than any classical philosopher has in a lifetime.

That’s a strong claim, so without further ado, I’m going to back it up. Onwards, with gusto.

A Reliable Framework for Solving Things

Why am I confident that I can approach a solution here? Because we’ve got a system that reliably comes up with approximate solutions. The general form is “try to turn it into a problem you’ve already solved”, because if you can do that, you already know the answer. Easy!

Step 1: What question are we trying to answer?

The discussion in class was on whether or not it was ethical for a man who was committing fraud to turn himself in and bring down his conspirators with him.

Before we can answer that, let’s try asking another question. Is it art-deco for this man to turn himself in and bring down his conspirators? Is Wulky Wilkinsen a post-utopian?

The point, here, is that the question really relies on what we mean by “ethical”. We can slap any label we want onto anything else, but it’s not very useful unless we have a clear concept of what it would mean if this label were attached, rather than if it weren’t. If a label doesn’t actually help you resolve questions, it’s a waste of your time to read it in the first place.

Think about it like this: what do you learn from seeing an unhappy human skull pictogram in a triangle on a mysterious container full of liquid? Now, what do you learn when somebody tells you that Wulky Wilkinsen is a post-utopian? In the first case you learn that the mysterious contents will burn the skin off of your face if you are foolish enough to put the two in contact with one another. In the latter, you learn that Wulky Wilkinsen is a post-utopian. Great.

Any label is useful insomuch as it allows us to answer questions. What questions can we answer by describing this fraudster’s actions as ethical or not? Off the top of my head, I’m going to break this down into the following questions:

  • Was it in this man’s best interests to turn himself in?
  • Was it in his company’s best interests for him to do so?
  • Was it in society’s best interest?

Look at that! We’re already making progress. Those are very different questions, but we were trying to answer all of them simultaneously with our original question. It’s no wonder that we didn’t get anywhere – they don’t have the same answer!

Lesson: Try to tease out why you’re asking the question you’re asking, and then ask that question instead. Get as specific as possible, because it’s really easy to answer specific questions.

Step 2: What axioms can we find?

We’ve broken our original question down in terms of “best interest”. But how can we ever answer what someone’s “best interest” is? Answer: use our intuition to come up with something that sounds plausible, and then see how strongly that intuition will hold together.

My intuition in this case is that acting in someone’s best interest is the same as “making the same decision that they themselves would have made, given all of the same information”. Here we’re defining our answer to be the same as their answer. The only way we can be wrong about this claim is if people themselves don’t know what’s in their best interest, in which case, who are you to believe that you have the answer?

So far, so good. We’re making claims that must necessarily be true for us to even have a chance of answering the original question – denying them opens much larger cans of worms. We’ll see later if we come up with any preposterous results (and if we do, that means we probably picked bad assumptions).

To capture this notion of acting in the same way that someone would act themselves, we take it as axiomatic (self-evidently true) that everyone acts in their best interest at all times. This holds by definition, since we’ve defined “best interest” as the way that people choose to act by themselves, given all of the information that they have.

What this means is that one’s “best interests” are relative to the state of knowledge they have when evaluating their best interests. This implies that their decision might change if they are aware of other factors which might weigh in on the decision. Again, this still holds with our common-sense intuition – nobody ever thinks that they are making a bad decision.

Lesson: Find assumptions that are as self-evident as possible which might pertain to the problem at hand. Base your reasoning on these, and these alone. This means your reasoning can only fail if your initial assumptions are wrong, and that’s why we try to make them as simple as possible.

Step 3: Come Up With New Words; Abstract Away the Nitty Gritty

“But wait!” you might cry, “there is already a confounder in your reasoning! People don’t always act in their best interest! Consider global warming, or unhealthy diets!”

Aha, you’ve caught me! I was trying to pull a fast one, but apparently I was not fast enough. Indeed, this is a confounder! Our words are misleading here – we have an intuitive connotation of what a “best interest” is, and it doesn’t always correspond with the way we defined our terms.

So, the solution? Just make up some new words that mean exactly what you want them to. This way we can’t equivocate and mean two things simultaneously by our words. This might sound silly, but you would literally not believe how many arguments, philosophical and otherwise, are about two people using the same word in slightly different ways. So just make up a new one, free of all that emotional burden.

This is why science and math words come to sound so foreign – we just keep making things up so that we don’t get confused about what we’re talking about. But we don’t call it “making things up”; we hide it behind the euphemism “being rigorous with technical language”.

Instead of all of this mumbo-jumbo about best interests, let’s make up some words. We will define a “utility function” which is a computer that, given all of the information, will churn out a number that describes how much its owner will “like” the circumstances. The device is clearly magical and no living person could ever possibly understand how it works (he says, dryly), but let’s just assume it exists. You feed a set of circumstances into the box, and it gives you a number about how much you’ll like those circumstances. The bigger the number, the more you’ll like it.

Now, instead of saying that people act in their best interests, we can say that they always attempt to maximize their utility, which is to say, they try to find circumstances which make this magical black box computer output the largest number possible. When making a decision, they will run every affordance they have through the machine, and pick the one with the biggest number. Deciding hard problems has now been simplified to “pick the biggest number”. Easy!
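This black-box picture can be sketched in a few lines of Python. To be clear, the weights and options below are invented purely for illustration – the real utility function is exactly the “magic” part we’ve agreed to hide away:

```python
# A "utility function" as a magic black box: circumstances in, number out.
# The weights are made up for illustration -- everything we don't
# understand about human preferences is hidden inside them.
def utility(circumstances):
    weights = {"convenience": 3.0, "health": 1.0}
    return sum(weights.get(k, 0.0) * v for k, v in circumstances.items())

# Deciding is now just "pick the biggest number".
def decide(options):
    return max(options, key=lambda name: utility(options[name]))

options = {
    "junk_food":   {"convenience": 5, "health": -4},  # utility: 11.0
    "cook_dinner": {"convenience": 1, "health": 3},   # utility: 6.0
}
print(decide(options))  # -> junk_food
```

Note that nothing here explains why the weights are what they are – that’s the part of the box labeled “figure this out later”.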

This device we’ve come up with neatly hides some of the complexity of the human condition. Since we have a device that works by magic, we can hide all of the parts we don’t understand inside of it, and reason our way along about the parts we DO understand. Why do people eat shitty food? Well, obviously their utility function values short-term convenience over long-term survival. Sounds like a personal problem, to me. It also hides away the problem that different people care about different things. We’re chunking all human desires into one concept, and not worrying about the nitty gritty details.

And, for good form, we’ll write ourselves a note on the side of the box saying “figure this part out later”.

The bonus of this strategy is twofold: we don’t get stuck on hard problems in the pursuit of easy questions, and we put all of the things we don’t understand into one place, with a big label reminding ourselves that we don’t understand it. That’s why I used the word “magic”. It’s a big reminder that, just because we’re reasoning about this, it doesn’t mean we know what’s happening.

Lesson: If a word has more than one definition in the dictionary, make up a new word that has only one meaning. Take all of the parts you don’t understand, put them in a box, and write yourself a note to figure out that box later.

Step 4: Look For Existing Solutions

We’ve now broken down the question of “is this ethical?” to “how can a person get the biggest number?”. Do we know of any existing fields of study that revolve around helping people get the biggest numbers?

I can think of two: economics and game theory. Interestingly, both of these fields use the word “utility” to describe something humans want to maximize. How about that? What a weird coincidence! Both of these fields are huge, and contain innumerable very useful concepts. So why reinvent the wheel when we don’t have to? Let’s just use any results of theirs that we can!

For example, game theory talks about how you can’t just optimize to get the most utility out of the game you’re currently playing – if you want to really win, you need to consider how your actions will affect all of the games you play in the future. Here, a game is defined as “any event that requires input from more than one person”. If you need to consider the entirety of the games you might play in the future, you are dissuaded from cheating in the current game. If you always cheat, people are going to stop playing games with you.
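A toy simulation makes the point concrete (all payoff numbers invented for illustration, not a real game-theoretic model): suppose each partner plays repeated rounds with you, but refuses to ever play again after being cheated once. A single cheat pays more than a fair round, yet the always-cheat strategy starves itself of future games:

```python
# Invented payoffs: a fair round pays 3; cheating a trusting partner pays 5.
COOPERATE_PAYOFF = 3
CHEAT_PAYOFF = 5

def lifetime_utility(strategy, partners=10, rounds_per_partner=5):
    total = 0
    for _ in range(partners):
        for _ in range(rounds_per_partner):
            if strategy == "always_cheat":
                total += CHEAT_PAYOFF
                break  # word gets around: this partner never plays with you again
            total += COOPERATE_PAYOFF
    return total

print(lifetime_utility("honest"))        # -> 150
print(lifetime_utility("always_cheat"))  # -> 50
```

Cheating wins each individual game, but once you sum over the entirety of future games, the biggest-number answer flips.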

This strikes me as being very similar to the social meme of “a code of honor”. It’s dishonorable to cheat and steal and get ahead by unscrupulous means, and if you disgrace your honor, you will be only in the company of those who have also fallen from grace. If nobody can trust your word, you’re only going to be dealing with those whose word you can’t trust yourself. They are going to be trying to cheat you just as hard as you are going to try to cheat them.

Lesson: People have been working on solving problems for a really long time. Dollars to donuts, someone’s going to have solved your problem already, and it’s just a matter of finding it. When in doubt, assume the solution will be technical.

At this point in my original notes, I got drawn into a bunch of underlying thoughts around ethics – one of which is why the concept exists at all. From there I found a pattern that keeps appearing whenever I talk about brain structures, and found that pretty exciting so I had to coin a term for it, and this relatively meandering post took a sharp 90-degree turn off into outer space. Personally, I think these ideas are pretty fascinating, but they’ll have to be fleshed out another time.

In Closing

Okay, cool, so now we have a model for “ethics”. Our original question of “is it ethical?” cashes out as “does this maximize society’s utility function?”. For obvious questions of ethics, both questions are answered the same (try it!), which is a promising sign. If that weren’t the case, we’d have made some serious errors. We want our models to tell us the things we already know for sure (or else it’s wrong), but we also want it to tell us about the things we don’t know for sure (or else it’s useless).

But all of a sudden, our reframing grants us some new insight-fuel: if we’re just trying to make our utility function spit out bigger numbers, then being ethical is no longer a yes-or-no kind of answer, but rather a continuum. Rather than “is this ethical?” we should be asking “how ethical is this?”.

This result generalizes entirely, all the way up to “truth” being a continuum. Instead of asking whether or not our model is true, we can ask how true our model is. I’m not saying it’s great, but it’s definitely better than the complete lack of model we started with.

When viewed through these lenses, a lot of philosophical debates around ethics are instantly resolved. There is no longer only one ethical solution to the trolley problem, just better and worse solutions.

This is the power of Painting with Thoughts. It’s the profound clarity of mind to notice when you’re asking the wrong questions, and the strength to steer your way ever in the direction of better answers.