there is no obvious reciprocal logic. If you lie to me, this does not mean my best strategy is to lie back to you—it usually means that my best strategy is to distance myself from you or punish you.
The most creative suggestion I have heard to mathematically model deception is to adapt the ultimatum game (UG) to this problem. In the UG, a person proposes a split of, say, $100 (provided by the experimenter)—$80 to self, $20 to the responder. The responder, in turn, can accept the split, in which case the money is split accordingly, or the responder can reject the offer, in which case neither party gets any money. Often the game is played as a one-shot anonymous encounter. That is, individuals play only once with people they do not know and with whom they will not interact in the future. In this situation, the game measures an individual’s sense of injustice—at what level of offer are you sufficiently offended to turn it down even though you thereby lose money? In many cultures, the 80/20 split is the break-even point at which one-half of the population turns down the offer as too unfair.
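The payoff logic of the basic game can be stated in a few lines. Here is a minimal sketch in Python, assuming a responder characterized only by a rejection threshold; the function name and parameters are my own illustration, not a standard formulation:

```python
def ultimatum_round(offer, min_acceptable, pot=100):
    """One round of the ultimatum game.

    The proposer keeps pot - offer and gives `offer` to the responder.
    The responder accepts only if the offer meets her threshold;
    a rejection leaves both players with nothing.
    Returns (proposer payoff, responder payoff).
    """
    if offer >= min_acceptable:
        return pot - offer, offer
    return 0, 0

# An 80/20 split against a responder who rejects anything under $20:
print(ultimatum_round(offer=20, min_acceptable=20))  # → (80, 20)
# A stingier $10 offer is rejected, and both parties get nothing:
print(ultimatum_round(offer=10, min_acceptable=20))  # → (0, 0)
```

The threshold `min_acceptable` is a stand-in for the responder's sense of injustice; the text's observation is that, across cultures, roughly half the population sets it near $20 on a $100 pot.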
Now imagine a modified UG in which there are two possible pots (say, $100 and $400) and both players know this. One pot is then randomly assigned to the proposer. Imagine the proposer offers you $40, which could represent 40 percent of a $100 pot (in which case you should accept) or 10 percent of a $400 pot (most people would reject). The proposer is permitted to lie and tell you that the pot is the smaller of the two when in fact it is the larger. You can trust the proposer or not, but the key is that you are permitted to pay to find out the truth from a (disinterested) third party. This measures the value you place on reducing your uncertainty regarding the proposer’s honesty.
If you then discover that the proposer lied, you should have a moral (or, at least, moralistic) motive to reject the offer, and conversely a motive to accept if the proposer told the truth—all compared with remaining uncertain, that is, not paying to find out. Note that from a purely economic point of view, there is no benefit in finding out the truth, since it costs money and may lead to an (otherwise) unnecessary loss of whatever is offered. The question can then be posed: How much would a responder be prepared to pay to reduce the uncertainty and go for a possibly inconvenient truth? Note that the game can be played in real life with varying degrees of anonymity and also multiple times, as in the iterated prisoner’s dilemma. As the ability to discriminate develops, the other person will benefit more from your honesty (quickly seen as such) and suffer less from deception (spotted and discarded).
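The modified game can be sketched the same way. All numbers below (the $40 offer, a $5 verification fee) are illustrative choices of mine, not figures from the text, and the responder is assumed to reject only when she catches a lie:

```python
def modified_ug(pot, proposer_lies, responder_verifies,
                offer=40, verify_cost=5, small_pot=100):
    """One round of the modified UG (illustrative numbers).

    The proposer offers `offer` and, if lying, claims the pot is
    `small_pot` regardless of its true size. The responder may pay
    `verify_cost` to learn the true pot from a disinterested third
    party, and rejects only if she catches a lie.
    Returns (proposer payoff, responder payoff).
    """
    claim = small_pot if proposer_lies else pot
    cost = verify_cost if responder_verifies else 0
    if responder_verifies and claim != pot:  # lie detected: reject
        return 0, -cost
    return pot - offer, offer - cost

# A lying proposer with the $400 pot, caught by a responder who pays $5:
print(modified_ug(pot=400, proposer_lies=True, responder_verifies=True))
# → (0, -5): the lie costs the proposer everything.
# The same lie against a trusting responder succeeds handsomely:
print(modified_ug(pot=400, proposer_lies=True, responder_verifies=False))
# → (360, 40)
```

The second case shows the purely economic point made above: verification never increases the responder's monetary payoff in a single round; its value lies in the moralistic option it buys and in what it does to liars over repeated play.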
When we add self-deception, the game quickly becomes very complicated. One can imagine actors who are:
• Stone-cold honest (cost: information given away, naive regarding deception by others).
• Consciously dishonest to a high degree but with low self-deception (cost: higher cognitive cost and higher cost when detected).
• Dishonest with high self-deception (more superficially convincing at lower immediate cognitive cost but suffering later defects and acting more often in the service of others).
And so on.
A DEEPER THEORY OF DECEPTION
Those talented at the mathematics of simple games or studying them via computer simulation might find it rewarding to define a set of people along the lines just mentioned, and then assign variable quantitative effects to explore their combined evolutionary trajectory. Perhaps results will be trivial and trajectories will depend completely on the relative quantitative effects assigned to each strategy, but it is much more likely that deeper connections will emerge, seen only when the coevolutionary struggle is formulated explicitly. The general point is, of course, that there are multiple actors in this game, kept in some kind of frequency-dependent equilibrium that itself may change over time. We choose to play different roles in different situations, presumably according to the expected payoffs. Of course it is better to begin with very simple games and only add complexity as we learn more about the underlying dynamics.
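As a toy version of such a simulation, discrete replicator dynamics over the three actor types listed above can be written in a few lines. The payoff matrix here is entirely made up—chosen only so that honesty is strictly dominated by conscious dishonesty—and is not a serious parameterization of the costs listed earlier:

```python
def replicator_step(freqs, payoff, dt=0.1):
    """One discrete replicator-dynamics step.

    freqs: current frequency of each strategy (sums to 1).
    payoff[i][j]: payoff to strategy i when meeting strategy j.
    Strategies earning above the population mean grow in frequency.
    """
    fitness = [sum(p * f for p, f in zip(row, freqs)) for row in payoff]
    mean = sum(f * w for f, w in zip(freqs, fitness))
    new = [f * (1 + dt * (w - mean)) for f, w in zip(freqs, fitness)]
    total = sum(new)
    return [f / total for f in new]

# Rows/columns: honest, consciously dishonest, dishonest with
# high self-deception.  Values are invented for illustration.
payoff = [[3, 1, 1],
          [4, 2, 2],
          [4, 3, 1]]
freqs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    freqs = replicator_step(freqs, payoff)
```

With these invented numbers the honest type declines relative to the consciously dishonest one; the interesting question, as noted above, is whether such outcomes are mere artifacts of the numbers assigned or reflect deeper structure in the coevolutionary struggle—which is exactly what varying the matrix would probe.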
It stands to reason that if our theory of self-deception rests on a theory of deception, advances in the latter will be especially valuable. I have known this for thirty years but have not been able to think of anything myself that is original regarding the deeper logic of deception, nor have I seen much progress elsewhere. Yes, signals