Monday, December 10, 2012

Sub-Rationality: Winning When the Other Game Players Are Irrational

(If you're here from Reflections from the Other Side, welcome.) This expands on what I wrote in my guest post there.

For most atheists, our atheism is a result of trying to make our beliefs (and the decisions that flow from them) more rational. But for almost every human, the most important elements in our world are other humans; our intelligence and culture mean that we spend almost all of our time cooperating and competing with others of our own species. This is nearly unique among living things. If you're a squirrel, sure, other squirrels are important, but in many ways watching out for the numerous supergenius giants around you (us) is more important. Humans are now in a position where, at least in a day-to-day sense, only other humans really matter to our survival and prospering.

But this also presents problems, because though the paragon of animals we may be, we're still not 100% rational. We're loaded with species-wide bad habits* and psychological shortcuts which are difficult or impossible to overcome, especially when we function in groups. And there's the rub: if you want to be effective as a rationalist, achieving whatever goal you deem important (including making humans more rational), then you have to play the same games as other humans. I've come to call rational obedience to irrational norms, in order to ultimately further rational goals, sub-rationality, echoing the economic/game-theoretic idea of sub-optimization. You can't really call these behaviors proper irrationality - as you'll see below, they're not - but they're still a problem for those of us who would like the world to become more rational and don't want to reinforce irrationality.

Simple example: let's say you're an investor in the stock market, and tech company A announces bad news. Now, you know damn well that when company A's bad fortune is due to something unique to itself - say, its own management making a bad decision - this is good news for its competitors B and C in the same space. However, you also know that, in keeping with past behavior, the market will irrationally get nervous about the whole space, so not only will company A's stock drop, so will B's and C's, even though the bad news from A means B and C will do better. So what do you do? Sell your stock in B and C! Of course, now you are also one of the irrational investors who gets nervous when A has bad news, and you've joined the big irrationally-driven coordination-game stampede that will suck in the next otherwise rational investor who comes along. Yet it seems very difficult to argue that the more rational move would be to say "I refuse to sell my stock in B and C, because then I'm reinforcing the irrational behavior of the other investors." Guess what? If you want to make money, you have to care about what the other humans do, irrationality and all.
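To make the logic concrete, here's a minimal expected-value sketch of that decision in Python. Every number is made up purely for illustration, and it ignores transaction costs and the risk of mistiming the re-entry - the only point is that once you expect the panic, selling and re-buying beats holding on fundamentals alone.

# Hypothetical numbers only; ignores trading costs and the risk of
# mistiming the rebound.
p_panic = 0.8            # chance the market irrationally dumps the whole sector
panic_drop = -0.10       # short-term hit to B and C if the panic happens
fundamental_gain = 0.05  # longer-term gain to B and C because A stumbled

# Strategy 1: hold B and C on fundamentals alone, riding out any panic.
hold = fundamental_gain + p_panic * panic_drop

# Strategy 2: sell now and buy back after the (possible) panic.
sell_and_rebuy = fundamental_gain

print(f"hold: {hold:+.3f}   sell and re-buy: {sell_and_rebuy:+.3f}")
# With these made-up numbers, joining the stampede wins (+0.050 vs -0.030),
# even though B and C's fundamentals have improved.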

On Less Wrong, they make frequent reference to Aumann's Agreement Theorem, which boils down to this: if two people have access to the same information and the same reasoning processes (and the latter is always true, unless you think reason differs person to person), then on any proposition they must agree.** Of course this doesn't happen much in the real world, even among intelligent people, despite the obvious benefits such open discourse would bring. But it's not just that everyone is being a jerk; there may be rational reasons to be irrational. Many of the problems of sub-rationality I enumerate below can be thought of as rational people attempting some kind of optimization while playing a game with irrational players (who may themselves otherwise be rational too!). I make these observations in the spirit of noting how rationality runs into problems during implementation; I don't have suggestions or solutions for any of these, so I offer them to the wider community in the hope that we can start fixing them.
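As a toy illustration of the "same information, same reasoning, same conclusion" intuition (the actual theorem is stronger - it rests on common priors and common knowledge of each other's estimates), here is a minimal Bayesian sketch with arbitrary numbers:

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a single binary hypothesis."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

shared_prior = 0.5
evidence = (0.9, 0.2)   # P(evidence | H), P(evidence | not H) - arbitrary values

alice = posterior(shared_prior, *evidence)
bob = posterior(shared_prior, *evidence)
assert alice == bob     # same inputs, same rule -> forced agreement
print(round(alice, 3))  # 0.818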


1) People don't often or easily change their minds publicly. When person A changes to person B's position, a third-party observer can legitimately decrease their estimate of person A's intelligence slightly and increase person B's. The more this happens, the more the observer is justified in considering person A less reliable than person B; they might start ignoring person A entirely. If person A wants to be taken seriously, then, they can't be public about their mind-changing, and the rational strategy for preserving their influence on others is either to refuse to change their mind, or to do it slowly and non-obviously ("That's what I believed all along") and hope that no one notices or calls them on it. This is in fact how humans behave, although it's not usually a conscious consideration; and in the aggregate, it considerably slows the process of discarding bad beliefs and adopting better ones. (A toy sketch of the observer's bookkeeping appears after this list.)

2) A person who frequently changes their opinion in response to new information seems unreliable and is unpredictable. Yes, this is a person who does not fall prey to the sunk cost fallacy and is not swayed by emotional pre-commitment - but they also don't have predictable behavior. They are definitely more rational than most people in the strict sense, but maybe not someone you want to spend much time with. (In Fooled by Randomness, Nassim Nicholas Taleb relates a story about George Soros's immunity to the sunk cost fallacy. Reading the account, you recognize that Soros is being eminently rational, but you can't help feeling that he's a capricious asshole who would often be dangerous to negotiate with.)

3) We each have limited time and attention. We therefore have to have some way of deciding which claims are worth thinking about at all, without (obviously) fully evaluating each claim. To get people to listen to our claims in the first place, the claims have to be about something the person already cares about, phrased in a way that is familiar and inoffensive to them. (Hence every political debate in the Middle East, however "liberal", seems stuck justifying itself with some statement that Mohammed made.)

4) Changing beliefs is difficult and requires lots of attention and energy. Unless immediate damage will result, it may pay to wait for repeated exposure, or full understanding of the implications, which may be slow in coming.

5) Many (most?) attempts to alter people's behavior are bad. They are a) made in the claimant's interest rather than that of the person evaluating the claim, b) made for stupid reasons, and c) stupid and selfish in ways that take time and effort to detect. People are actually right to suspect most claims made by those outside their circle of emotional pre-commitment.

6) Some of the scams and behavior-manipulating conspiracies skeptics decry are actually very efficient ways for rational (lying) people to take advantage of irrational people, economically or otherwise. Prediction markets are one example (the now-under-Federal-investigation Intrade redistributed lots of money from people who thought Romney would win to people who thought Obama would win). Or what about selling crystals/homeopathic organic iodine/magic sticks to people? If you can make money by punishing irrational people, and then donate it to the Secular Coalition for America, isn't that doubly good?

7) To get into positions of authority, we usually have no choice but to appear to respect meritless status hierarchies. Status games (who has the nicest clothes or car, who has the best academic pedigree or title, etc.) are difficult problems because either you play, in which case you've entered a positional (hence zero-sum) rat-race; or you don't play, in which case you come across either as stupid and unaware of the status game, or as arrogant and alienating. Ironically, status rat-races may happen even when each individual player realizes the ridiculousness of fighting over some crumb that's purely symbolic and signifies only buying into the institutional culture. (A sketch of why playing is nonetheless the individually dominant move appears after this list.)

8) We use body language and "the human touch" to spread rationality. Case in point: eye contact, framing, social contact, benevolent social outreach, etc. None of this has anything to do with the truth of most of the propositions we're debating. Granted, this is pretty hard-wired, and as such doesn't benefit one faction over another, so it seems a lot more benign; as long as we're biological entities that bear the stamp of our lowly origins, this is how it will work. But still: doesn't it seem strange that rationalist communities are using this sort of thing, which, by the way, has been known by every salesman since the Bronze Age?
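To make point 1 above concrete, here's a hedged sketch of the third-party observer's bookkeeping. It assumes, purely for illustration, that the observer just keeps a running score of who comes out right in disputes; nothing this crude is claimed above, but the incentive against conceding in public falls straight out of the arithmetic.

from collections import defaultdict

# [times judged right, disputes seen]; start everyone at a weak 50% prior.
record = defaultdict(lambda: [1, 2])

def observe_concession(conceder, winner):
    """When one person publicly adopts the other's position, score the dispute."""
    record[conceder][1] += 1
    record[winner][0] += 1
    record[winner][1] += 1

def reliability(person):
    right, total = record[person]
    return right / total

observe_concession("A", "B")
observe_concession("A", "B")
print(reliability("A"), reliability("B"))  # 0.25 0.75
# Every public concession drags A's estimated reliability down and B's up,
# which is exactly the pressure to change one's mind quietly, or not at all.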
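And for point 7, a minimal sketch of why the status game is a rat-race: with made-up payoffs in which rank is zero-sum and signaling has a cost, playing is the dominant strategy for each individual, even though when everyone plays the cost is simply burned.

SIGNAL_COST = 1   # what it costs to play the status game (clothes, car, title)
RANK_SWING = 2    # rank you gain by signaling when the other player doesn't

def payoff(you_play, they_play):
    """Zero-sum rank shift minus your own signaling cost."""
    rank = 0
    if you_play and not they_play:
        rank = +RANK_SWING
    elif they_play and not you_play:
        rank = -RANK_SWING
    return rank - (SIGNAL_COST if you_play else 0)

for you in (True, False):
    for they in (True, False):
        print(f"you play={you}, they play={they}: {payoff(you, they):+d}")
# Whatever the other player does, playing beats not playing; but when both
# play, both just pay the cost for the same relative rank they started with.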


No doubt some of these will strike rationalists and skeptics as particularly odious. I welcome your thoughts on them, and on the extent to which they make sense as sub-optimization compromises that can ultimately lead to greater rationality.



*One of those bad habits is that our entire reasoning apparatus evolved to win arguments; that is, we start with the conclusion in mind (a very bad habit indeed) and build whatever rickety rhetorical bridge we can from the premises to that conclusion, in an attempt to convert others to our own beliefs. Using reason this way is a good tool for dominating others but not so great for getting at the truth. In fact it's worse than not-so-great, since you're more likely than not converting others to bad beliefs. Even scientific reasoning involves a hack of this system: you still start with the end in mind, but you try to remember that this end is provisional, and you try to pick an end that can easily be tested and shown to be false if it is indeed false. Even though it concedes ground to the underlying start-with-your-mind-made-up habit, it's still difficult to train ourselves to think this way, and even our top scientists fall short from time to time.


**Another issue with Aumann's is that people's preferences often differ on a very fundamental biological basis (I like cilantro, you don't; there is no fixing this in the near term, and that one is trivial compared to limited resources like sex and capital). Aumann is good for static declarations about what is true in the universe, but decisions about what to do next are profoundly affected by these preference differences. That said, the Less Wrong community is developing a decision theory as well.

2 comments:

david bloch said...

RE: your point #1 - an instance where the opposite of what you predict happened: an eminent scientist A, at a lecture where he was either a panelist or the host, gave his well-known point of view - his long-established theory and the empirical support for it. The visiting expert, scientist B, in his response to his colleague, criticised and refuted scientist A's long-held theory.
Scientist A then spoke, and said that what scientist B had pointed out had caused him to do a total, and quick, reconsideration of his theory and of his interpretation of the supporting data, and that he now agreed with scientist B.
This really happened. In fact, it happens all the time.
In fact, that it happens is part - an integral part - of the way science - empiricism - advances.
As I expect you well know, one of the requirements for any worthwhile hypothesis, theory, assertion, etc., is that it be refutable (and therein lies one of the great shortcomings of religious thinking and belief), and the purpose of such dialogue is to collaborate in arriving at the best possible understanding of the meaning of something. The purpose is not to defeat or humiliate anyone, and it's not a game of one-upmanship.
In the example I gave at the beginning of this note, there was a win-win-win outcome. Quite the opposite of the zero-sum game format you describe.
Here's what happened:
Win - the field, or specific issue, was advanced and better understood.
Win - scientist A was perceived as able to listen to scientist B intelligently and with an open, flexible mind, and as an independent, fair person with humility, respect for others, and self-confidence.
Win - scientist B was perceived as perceptive, intelligent and someone with worthwhile ideas and knowledge.
The esteem of scientist A was not diminished. In fact, if he had stuck to his theory in the face of compelling criticism and data that refuted it, his reputation and the high regard of his colleagues would have been considerably diminished.
Obviously, the above applies to mature, intelligent, usually educated, and considerate people, especially when discussing meaningful issues.

Michael Caton said...

What you're describing is the ideal that I think we in the rational community want to move toward. Many of the sub-optimizations I describe disappear if you're dealing with other rationalists. The question is how to balance succeeding in the world as it is with these ideals.