Aumann's agreement theorem

Aumann's agreement theorem[1] is the result of Nobel laureate Robert Aumann's groundbreaking 1976 discovery that a sufficiently respected game theorist can get anything into a peer-reviewed journal.

The one-sentence summary is "you can't actually agree to disagree": two perfectly rational agents with the same prior estimate of an event's probability and common knowledge of one another's posterior estimates cannot come to different posteriors. Under such careful definitions of "perfectly rational" and "common knowledge", this is equivalent to saying that two functioning calculators will not give different answers on the same input. Per the author: "We publish this paper with some diffidence, since once one has the appropriate framework, it is mathematically trivial."

The theorem

Consider two agents tasked with performing Bayesian analysis (this is "perfectly rational"). Both start from the same prior probability of the world being in a certain state, and then receive separate sets of further information. Each agent's information includes the posterior probability arrived at by the other, as well as the facts that their priors are identical, that each knows the other's posterior, what the set of probability-affecting events is, that the other knows all of these things, that the other knows it knows these things, that the other knows it knows the other knows it knows, ad infinitum (this is "common knowledge"). Their posterior probabilities must then be the same.
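
Stated a bit more compactly (this is a restatement of the paper's own setup, not a new result): take a state space Ω with a common prior P, information partitions 𝒫₁ and 𝒫₂ for the two agents, and an event E. If, at the true state ω, it is common knowledge that agent 1's posterior P(E | 𝒫₁)(ω) is q₁ and agent 2's posterior P(E | 𝒫₂)(ω) is q₂, then q₁ = q₂.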

Essentially, the proof runs as follows: if the posteriors differed, each agent would have to distrust either the accuracy of the other's information or the other's computation. But a rational agent arriving at a different posterior is itself evidence of further information, and a rational agent should recognize this, recognize that the other will recognize it too, recognize that this recognition will itself be recognized, and so on; the only stable outcome is agreement.
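
To see the mechanism in action, here is a small Python sketch. It follows the step-by-step dialogue process of Geanakoplos and Polemarchakis ("We Can't Disagree Forever", 1982) rather than Aumann's original argument, and the state space, partitions, and event below are invented for illustration: each announcement reveals which states are consistent with it, the listener refines its information accordingly, and the posteriors get squeezed together.

    from fractions import Fraction

    # Toy numbers, invented for illustration: nine equally likely states,
    # an event E = {3, 4}, and one information partition per agent.
    event = {3, 4}
    true_state = 3
    part1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
    part2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

    def posterior(cell):
        # P(event | cell) under the uniform common prior.
        return Fraction(len(cell & event), len(cell))

    def cell_of(partition, s):
        # The partition cell containing state s (what the agent knows).
        return next(c for c in partition if s in c)

    def refine(partition, announced, announcer_partition):
        # An announcement reveals the set of states in which the announcer
        # would have said that number; the listener intersects accordingly.
        consistent = set().union(*(c for c in announcer_partition
                                   if posterior(c) == announced))
        return [c & consistent for c in partition if c & consistent]

    for step in range(10):
        p1 = posterior(cell_of(part1, true_state))
        p2 = posterior(cell_of(part2, true_state))
        print(f"step {step}: agent 1 says {p1}, agent 2 says {p2}")
        if p1 == p2:
            break                      # agreement reached
        if step % 2 == 0:              # the agents alternate announcements
            part2 = refine(part2, p1, part1)
        else:
            part1 = refine(part1, p2, part2)

In this run the agents start at 1/3 versus 1/2 and agree on 1/3 after three rounds of announcements; persistent disagreement would require one side to keep ignoring the information content of the other's announcements.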

That two such agents can actually reach agreement within a reasonable time-frame (reasonable, like, within the duration of the entire existence of the universe) was later proved by Scott Aaronson.[2]

The problem

This theorem is almost as much a favorite of LessWrong as the "Sword of Bayes"[3] itself, because of its popular phrasing along the lines of "two agents acting rationally... cannot agree to disagree."[4] It is, therefore, used as a defense against those who accuse them of arrogance for claiming to have come to inarguable truths about human affairs through half-cocked repeated application of Bayes' theorem with priors that often have an odd smell to them.[5]

Unlike many questionable applications of theorems, this one appears to have been the intention of the paper itself, which cites a 1968 paper defending the application of such techniques to the real world.[6] However, this does not change the fact that buried among the conditions is the following: "Included in the full description of a state ω of the world is the manner in which the information is imparted to the two persons." In other words, there must already exist an agreement on the description of the world far greater than is typically seen in reality, outside of the least controversial scientific analyses. For an illustration: how often do two mathematicians disagree on the validity of a proof within an agreed-upon framework, once each one's objections are known to the other? Or consider the paper's own example, the fairness of a coin: a deliberately simple example chosen for accessibility, and one that demonstrates the problem of applying such an oversimplified concept of information to real-world situations.
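
To gesture at how much structure even the coin case requires (the specific numbers below are invented for illustration, not Aumann's): a "state of the world" has to spell out the coin's bias together with every private observation, and this whole table, plus both agents' conditioning rules, must be common knowledge before the theorem says anything.

    from fractions import Fraction
    from itertools import product

    # Invented setup: the coin's heads-probability is 1/3 or 2/3 with
    # equal prior probability; each agent privately observes one toss.
    biases = [Fraction(1, 3), Fraction(2, 3)]
    # A state specifies the bias AND both private observations.
    states = list(product(biases, "HT", "HT"))

    def prior(state):
        bias, t1, t2 = state
        p = lambda t: bias if t == "H" else 1 - bias
        return Fraction(1, 2) * p(t1) * p(t2)

    def posterior_biased(observed, coord):
        # P(bias = 2/3 | this agent's own toss), conditioning on the
        # one coordinate of the state this agent actually gets to see.
        seen = [s for s in states if s[coord] == observed]
        favourable = sum(prior(s) for s in seen if s[0] == Fraction(2, 3))
        return favourable / sum(prior(s) for s in seen)

    print(posterior_biased("H", 1))   # agent 1 saw heads -> 2/3
    print(posterior_biased("T", 2))   # agent 2 saw tails -> 1/3

The two posteriors disagree, and the theorem only forces them together once all of this, down to who saw which toss and what each would infer from the other's announcement, is common knowledge. Scaling that requirement from four coin states to, say, a political question is exactly where the trouble starts.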

Yudkowsky's mentor Robin Hanson tries to handwave this away with an argument about genetics and environment,[7] but having sufficient common knowledge of genetics and environment for this to work in practice would require a few calls to Laplace's demon.

It may be worth noting that Yudkowsky has said he wouldn't agree to try to reach an Aumann agreement with Hanson.[8]

Footnotes

  1. Aumann, Robert J. (1976). "Agreeing to Disagree". The Annals of Statistics 4 (6): 1236–1239.
  2. Aaronson, Scott (2005). "The complexity of agreement". Proceedings of ACM STOC: 634–643.
  3. "Sword of Bayes"? Really? Really?!
  4. http://wiki.lesswrong.com/wiki/Aumann%27s_agreement_theorem
  5. http://lesswrong.com/lw/gr/the_modesty_argument/
  6. Harsanyi, John C. (1967–68). "Games of Incomplete Information Played by Bayesian Players, I–III". Management Science 14: 159–182, 320–344, 486–502.
  7. Hanson, Robin (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4): 319–328.
  8. http://lesswrong.com/lw/qw/principles_of_disagreement/