Aumann's agreement theorem
Aumann's agreement theorem[1] is the result of Robert Aumann's (winner of the 2005 Swedish National Bank's Prize in Economic Sciences in Memory of Alfred Nobel) groundbreaking 1976 discovery that a sufficiently respected game theorist can get anything into a peer-reviewed journal.
The one-sentence summary is "you can't actually agree to disagree": two perfectly rational agents with the same prior estimate of an event's probability and common knowledge of one another's posterior estimates cannot come to different posteriors. Under such careful definitions of "perfectly rational" and "common knowledge", this is equivalent to saying that two functioning calculators will not give different answers to the same input. Per the author: "We publish this paper with some diffidence, since once one has the appropriate framework, it is mathematically trivial."
The theorem
Consider two agents tasked with performing Bayesian analysis (this is "perfectly rational"). Both are given the same prior probability of the world being in a certain state, and separate sets of further information. Both sets of information include the posterior probability arrived at by the other, as well as the fact that their prior probabilities are the same, the fact that the other knows its posterior probability, the set of events that might affect probability, the fact that the other knows these things, the fact that the other knows it knows these things, the fact that the other knows it knows the other knows it knows, ad infinitum (this is "common knowledge"). Their posterior probabilities must then be the same.
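In symbols (a condensed sketch of the paper's formal setup): there is a common prior P over a state space, each agent's private information takes the form of a partition of that space, and her posterior for an event A is the prior conditioned on whichever cell of her partition contains the true state ω:

```latex
q_i = P\bigl(A \mid \mathcal{P}_i(\omega)\bigr), \qquad i = 1, 2.
```

The theorem then says: if it is common knowledge at ω that q_1 = a and q_2 = b, then a = b.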
Essentially, the proof goes like this: if the posteriors differed, it would mean that the agents did not trust the accuracy of one another's information, or did not trust one another's computation. But a different probability reached by a rational agent is itself evidence of further evidence, and a rational agent should recognize this, recognize that the other will recognize it in turn, and so on; updating on one another's posteriors therefore forces the two estimates together.
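Slightly less essentially (a sketch of the half-page published proof): the event E that is common knowledge at ω is the cell containing ω of the "meet" of the two partitions, the finest partition that both agents' information refines. E is a union of cells of agent 1's partition, on each of which q_1 = a, so averaging over those cells gives

```latex
P(A \mid E)
  = \sum_{\substack{\pi \in \mathcal{P}_1 \\ \pi \subseteq E}}
      \frac{P(\pi)}{P(E)}\, P(A \mid \pi)
  = a,
```

and the symmetric computation over agent 2's cells gives P(A | E) = b; hence a = b.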
Geanakoplos & Polemarchakis[2] prove that if the information sets are finite, this extends to a communication setting in which common knowledge is not assumed up front (though common priors still are): agents who take turns announcing their posteriors to each other cannot disagree forever. Scott Aaronson[3] sharpens this by bounding the length of the conversation: regardless of how much information is involved, the parties can agree to within ε with probability at least 1 − δ by exchanging at most 1/(δε²) messages (read: like, within the duration of the entire existence of the universe).
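To see the Geanakoplos–Polemarchakis dialogue in action, here is a toy simulation (a sketch only: the nine-state space, the two partitions, and the event A are invented for illustration, and the uniform prior stands in for an arbitrary common prior). Each agent announces her posterior in turn; both then discard every state in which that announcement would have come out differently:

```python
from fractions import Fraction

# Toy Geanakoplos-Polemarchakis dialogue. The state space, partitions,
# and event A below are invented for illustration.
states = set(range(9))
A = {0, 2, 4, 6, 8}  # the event whose probability is disputed

# Uniform common prior; each agent learns only which cell of her
# partition contains the true state.
partitions = [
    [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}],  # agent 1 sees "rows"
    [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}],  # agent 2 sees "columns"
]

def cell(partition, w):
    """The cell of the partition containing state w."""
    return next(c for c in partition if w in c)

def posterior(event, info):
    """P(event | info) under the uniform common prior."""
    return Fraction(len(event & info), len(info))

true_state = 4
public = set(states)  # states not yet ruled out by the announcements

for t in range(6):
    i = t % 2  # agents alternate announcements
    q = posterior(A, cell(partitions[i], true_state) & public)
    print(f"agent {i + 1} announces P(A) = {q}")
    # The announcement itself is evidence: rule out every state in which
    # agent i would have announced a different number.
    public = {w for w in public
              if posterior(A, cell(partitions[i], w) & public) == q}
```

Run as written, the announcements go 1/3, 1, 1, …: agent 1's first report collapses agent 2's uncertainty to a single state, after which both agree that P(A) = 1. (Here convergence happens to reveal the true state entirely; real disagreements are rarely so obliging, which is rather the point of the next section.)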
The problem
This theorem is almost as much a favorite of LessWrong as the "Sword of Bayes"[4] itself, because of its popular phrasing along the lines of "two agents acting rationally... cannot agree to disagree."[5] It is, therefore, used as a defense against those who accuse them of arrogance for claiming to have come to inarguable truths about human affairs through half-cocked repeated application of Bayes' theorem with priors that often have an odd smell to them.[6]
Unlike many questionable applications of theorems, this one appears to have been the intention of the paper itself, which cites a 1968 paper defending the application of such techniques to the real world.[7] But that does not change the fact that buried among the conditions is the following: "Included in the full description of a state ω of the world is the manner in which the information is imparted to the two persons." In other words, there must already exist an agreement on the state of the world far greater than is typically seen in reality, outside of the least controversial scientific analysis. For an illustration: how often do two mathematicians, working within an agreed-upon framework, continue to disagree on the validity of a proof once each knows the other's objections? Or take the paper's own example, the fairness of a coin: that so simple an example was chosen (presumably for accessibility) itself demonstrates the problem with applying such an oversimplified concept of information to real-world situations.
Scott Aaronson believes that Aumann's theorem can act as a corrective to overconfidence, and as a guide to what disagreements should look like.[8]
Yudkowsky's mentor Robin Hanson tries to handwave this away with something about genetics and environment,[9] but having sufficient common knowledge of genetics and environment for this to work in practice would require a few calls to Laplace's demon.
It may be worth noting that Yudkowsky has said he wouldn't agree to try to reach an Aumann agreement with Hanson.[10]
References
- ↑ Robert J. Aumann (1976). "Agreeing to Disagree". The Annals of Statistics 4 (6): 1236–1239.
- ↑ John D. Geanakoplos and Heraklis M. Polemarchakis (1982). "We Can't Disagree Forever". Journal of Economic Theory 28 (1): 192–200.
- ↑ Scott Aaronson (2005). "The Complexity of Agreement". Proceedings of ACM STOC 2005: 634–643.
- ↑ "Sword of Bayes"? Really? Really?!
- ↑ http://wiki.lesswrong.com/wiki/Aumann%27s_agreement_theorem
- ↑ http://lesswrong.com/lw/gr/the_modesty_argument/
- ↑ Namely: John C. Harsanyi (1967–68). "Games with Incomplete Information Played by 'Bayesian' Players, I–III". Management Science 14: 159–182, 320–344, 486–502.
- ↑ Scott Aaronson, "Common Knowledge and Aumann's Agreement Theorem", Shtetl-Optimized.
- ↑ Robin Hanson (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4): 319–328.
- ↑ http://lesswrong.com/lw/qw/principles_of_disagreement/