Saturday, January 28, 2012

What is "fair"?

[Guest post by Don Geddis.  Fourth in a series (0, 1, 2, 3).]


In my last post on exploitation, Coby asked an excellent question:
What is fair?
I'll address that in this post, but first let me respond to more of his questions:
My impression so far is that you might say any transaction is, as a matter of definition, fair if it is entered into willingly.
There might be some difficult corner cases, but as a first cut, yes I think I would agree.
I submit we are only kicking the can down the road because now we have to think carefully about what it means to be "willing."
Another great observation.  Just as I accused Ron of a non-answer because he merely replaced "exploit" with "unfair", so Coby complains that I haven't answered "fair", but have only replaced it with the equally vague "willing".
We can save time and advance the discussion faster if you let us know if you think life in a Chinese factory may be hard and sound miserable to us, but it is not a symptom of something wrong with an economic system.
I absolutely agree with that.  It may become clear why by the end of this post.
Does the existence of hungry, sick and desperately poor people mean something needs fixing in a society? 
Sort of ... but I hope to convince you, in this post, that we disagree on some of the assumptions underlying this final question, so agreeing on the question itself doesn't mean as much as it first appears.

So let's go back to the root question:
What is fair?
It's hard to address this without considering a general framework for morals and ethics.  To be up front about it: my general meta-ethical perspective is that there is no absolute moral or ethical framework.

If you back up (philosophically) far enough, the universe is a four-dimensional frozen cube of space-time, and everything that has happened or will happen is already determined.  Most moral reasoning takes the form of counterfactuals ("if this fact about the universe had been different, then this other outcome would have been different").  But in point of fact, there are no counterfactuals.  There is only the actual universe, with its actual events.  "What might otherwise have happened" is not necessarily a meaningful question.  Everything simply is.  (This has shades of Hume's famous is-ought problem.)

OK, but all is not lost.  Determinism is not necessarily in conflict with free will.  We are entities with decision procedures, and it makes perfect sense for a decision procedure to model itself as having choice.  Even if the decision procedure is deterministic, the procedure being followed is still one of imagining possible future worlds (and perhaps counterfactuals), evaluating the benefit of each, and picking the action that the procedure predicts will most likely maximize some kind of value.  That's basically "free will".
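(If it helps to make this concrete, here is a toy sketch in Python of such a deterministic decision procedure.  Everything in it -- the world states, the actions, the utility numbers -- is invented purely for illustration; it's not a claim about how any real mind works.)

    # A toy, fully deterministic "decision procedure".  It imagines the
    # future world each action would lead to, evaluates how much it
    # values each imagined world, and picks the action whose predicted
    # world scores highest.  All states, actions, and values are made up.

    def predict_world(current_world, action):
        """Deterministically project the world that would result from an action."""
        # A real agent would run a rich simulation here; this is a lookup table.
        transitions = {
            ("hungry", "bake cookies"): "fed, but kitchen is messy",
            ("hungry", "do nothing"):   "still hungry",
            ("hungry", "go for a run"): "hungrier, but a bit fitter",
        }
        return transitions[(current_world, action)]

    def value(world):
        """How much the agent would prefer to live in a given predicted world."""
        preferences = {
            "fed, but kitchen is messy":  8,
            "still hungry":               2,
            "hungrier, but a bit fitter": 5,
        }
        return preferences[world]

    def choose(current_world, possible_actions):
        """Pick the action whose predicted resulting world is valued most."""
        return max(possible_actions,
                   key=lambda a: value(predict_world(current_world, a)))

    print(choose("hungry", ["bake cookies", "do nothing", "go for a run"]))
    # -> "bake cookies": a fixed, deterministic computation that nonetheless
    #    "considered its options" -- which is all "free will" needs to mean here.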

So we finally get back to something concrete.  We're a group of three people, with free will, and we come across a cookie.  What is the "fair" way to divide it?

Now your mind will generate all sorts of moral intuitions.  Probably the first one is, "we should each take an even third".  But I can alter your moral conclusions with some additional information.  For example, perhaps one person is a child, and the other two are adults.  Or perhaps one is starving, and the others just ate a big meal.  Or maybe one works at a cookie factory, and the others don't.  Or one has recently been rude and mean, and the other two are banding together to exclude the first.

As you explore this further, you find that people's moral intuitions have a predictable nature.  We care more about those near us -- in both time and space -- than those farther away.  We care more about those who look like us (e.g. same race) than those who look different.  These intuitions have been broadening over recent centuries, as the concept of "us" grows larger.  Still, we care more about humans than about non-human entities, and so on.

If I could give you a single lesson to take away from this post, it would be:
Intuitions are rough guides to behavior that is good on average.  They do not provide insight into God's will or objective truth.
This is as true for religion as it is for morality.  Religious believers often talk about a transcendent feeling that they get in certain situations -- a revelation that they feel they have experienced, but that I don't seem to understand.  But the truth is the reverse: I understand the feeling; I just think it's an error to believe that your introspective feelings are a strong guide to the truth.

White people instinctively hate black people (and vice versa), simply because they look different, which (in ancestral times) meant that they were part of a different tribe, and thus dangerous enemies.  It was important to defend "us" against "them", or else our village would be slaughtered and theirs would take over our resources.

Does this mean that a black CEO should refuse to hire a white entry-level applicant (or vice versa)?  Of course not -- even though the CEO can feel that he's vaguely uncomfortable with this particular candidate, and would much rather have a beer with a guy who plays basketball, cheers for March Madness, listens to rap ... and looks dark-skinned.  A rational CEO can realize that this intuition is not a helpful guide to finding the most skilled waiter for his restaurant.

So if we can't rely on quick-and-dirty moral intuition, then how can we ever make moral decisions?  My suggestion, if the topic is important enough, is to do it in much the same way you make every other decision.  Consider the possible actions you could take, project forward to the (complete!) details of what world would result from such actions, and then simply evaluate those final predicted worlds: which one would you prefer to live in?

This is basically a form of consequentialism, although I won't bother going into the details of that philosophical rat hole.  But the basic point is: it is useless to try to evaluate the current state of the world as "bad" or "good".  It's even useless to try to evaluate some proposed action as "right" or "wrong".  I find all those heavy morality-laden words to be ... unhelpful ... in any serious discussion.  At best, they're a form of rhetorical propaganda: one side in a political debate tries to attract supporters by claiming that anyone who disagrees must be "evil" in some way.  I certainly don't want to be evil!  Do you?  So surely we must agree with whatever the position is.  It's a way to stop thinking, and to let your emotions guide your behavior.

But I want you to think, instead of letting your emotions do the work.  Of course, emotions are a part of life, and in fact I use them myself at the final step: I'm recommending that you look at possible future worlds and evaluate them directly -- see how you feel about living in each of them.  In that sense, personal preference is perfectly fine.

But if you think that your moral intuitions give you access to some objective truth ... you're going to be sorely disappointed when you eventually realize how inconsistent and Neanderthal they really are.
