Essay:LessWrong Bayes and Rationality

This essay is an original work by Dmytry.
It does not necessarily reflect the views expressed in RationalWiki's Mission Statement, but we welcome discussion of a broad range of ideas.
Unless otherwise stated, this is original content, released under CC-BY-SA 3.0 or any later version. See RationalWiki:Copyrights.
Feel free to make comments on the talk page, which will probably be far more interesting, and might reflect a broader range of RationalWiki editors' thoughts.

So, as a finitely intelligent entity, you have an incomplete set of ideas and beliefs, which is itself generated in an unavoidably cherry-picked way: you only have the ideas you managed to reach and the ones you were told of; you have finite capacity, and you try to generate more of the useful ideas at the expense of the useless ones (if you don't do that, you are functionally dumber).

What will happen if you try to follow the tenets of "rationality" as described on LessWrong: assign probabilities to everything, do 'Bayesian updates' on your beliefs, 'propagate' the beliefs, and the like?

You will encounter the following fundamental issues, among many others:

  1. If probability propagates from proposition A to B, and from A to C, and then from B to D and from C to D, you'll simply fuck up the math unless you keep track of A's influence on the probabilities of B and C. I.e. you can't represent the probabilities of B and C with mere scalar numbers for the purpose of further updates to D; you need to store where the probability came from (see the first sketch after this list)! If you don't, it doesn't matter how serious a face you put on, you are doing the wrong thing, wrong in principle and wrong in practice. Often the existence of A is altogether unknown, i.e. A is often not part of the incomplete set of ideas. If you don't see why you need to keep track of where probabilities are coming from as you propagate, just go and study damn statistics properly from a textbook, or better yet a course.
  2. When there are multiple people talking with each other, it is virtually impossible to pass on the information about where the probabilities came from, and the effect of cycles in the resulting inference graph is known as a "circlejerk".
  3. The probability of a proposition affects the construction of the graph itself.
  4. The fact that the set of hypotheses is incomplete really screws everything up. For instance: you can argue that smaller Kolmogorov complexity (a fancy way of saying Occam's razor) favours the many-worlds interpretation over the Copenhagen interpretation, because both produce something that contains you, the observer, but the former is shorter. That argument as such would actually favour an iterator over all possible physical theories over the many-worlds interpretation, but you haven't got this hypothesis in your graph, and you aren't creative enough to add it (or don't have sufficient discipline to follow where the K-complexity argument really goes). You end up updating the probability of entirely the wrong node; you end up believing there are literally parallel worlds where you live, which may affect how you think about personal risks.
  5. The propagations are also incomplete: for example, you can have two beliefs A and B. Some facts and logic may require that you update your confidence in both A and B, but any such facts or logic can be presented in such a way as to produce updates only to A or only to B, skewing the ratio between A and B and impairing your performance on any decision that depends on that ratio (see the second sketch after this list).
  6. The fact that you pick up hypotheses and propagations from the writings of other people screws everything up even more. E.g. in the above, rest assured, you'll be updating along whichever path is best for someone else's bank account. Some people will only ever help you make the propagations that help them; after doing such propagations, the relative probabilities between your hypotheses will be entirely screwed up.
  7. An even worse thing is that the very concepts you are thinking in are picked up off the floor and are usually crap, e.g. the concept of intelligence as something monolithic.

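To make issue 1 concrete, here is a minimal Python sketch (my own construction, with invented numbers) of a graph where A feeds both B and C, and B and C both feed D. Compressing B and C down to bare scalar marginals throws away the correlation their common source A induces, and the update to D comes out wrong; summing over the full joint, i.e. remembering where the probabilities came from, gets it right:

```python
from itertools import product

# All numbers here are invented for illustration.
p_a = 0.5                        # P(A=1)
p_b_given_a = {1: 0.9, 0: 0.1}   # P(B=1 | A=a)
p_c_given_a = {1: 0.9, 0: 0.1}   # P(C=1 | A=a)
# D is likely exactly when B and C agree:
p_d_given_bc = {(1, 1): 0.9, (0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.1}

# Right way: remember that B and C share the source A, and sum over
# the full joint distribution of (A, B, C).
p_d_correct = sum(
    (p_a if a else 1 - p_a)
    * (p_b_given_a[a] if b else 1 - p_b_given_a[a])
    * (p_c_given_a[a] if c else 1 - p_c_given_a[a])
    * p_d_given_bc[(b, c)]
    for a, b, c in product((0, 1), repeat=3)
)

# Wrong way: compress B and C down to scalar marginals and treat them
# as independent when updating D.
p_b = p_a * p_b_given_a[1] + (1 - p_a) * p_b_given_a[0]   # = 0.5
p_c = p_a * p_c_given_a[1] + (1 - p_a) * p_c_given_a[0]   # = 0.5
p_d_scalars = sum(
    (p_b if b else 1 - p_b)
    * (p_c if c else 1 - p_c)
    * p_d_given_bc[(b, c)]
    for b, c in product((0, 1), repeat=2)
)

print(f"P(D), source of B and C tracked: {p_d_correct:.3f}")   # 0.756
print(f"P(D), scalars only:             {p_d_scalars:.3f}")    # 0.500
```

Exact enumeration over three binary variables is trivial here; the point is that the correct answer is not computable from P(B) and P(C) alone, no matter how carefully you multiply them.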
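And a second, even smaller sketch for issue 5 (again, my own invented numbers): a consideration that should discount two competing beliefs equally, but is framed so that you only ever apply it to one of them, silently skews the odds between them:

```python
# Two competing beliefs A and B, starting at even odds.
odds_a_to_b = 1.0

# Some consideration E halves the plausibility of A *and* of B, so the
# ratio should stay exactly 1.0. But E is framed so that you only ever
# apply it to A:
likelihood_ratio_applied_to_a = 0.5
odds_a_to_b *= likelihood_ratio_applied_to_a

print(odds_a_to_b)   # 0.5 -- A now looks half as plausible as B,
                     # though E said nothing to tell them apart.
```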
I develop software for a living, at the design level, at the inventing-algorithms level, and at the coding level. I guarantee that if you try to implement this 'rationality', and do not have a zillion very complex, nasty, ad-hoc compensations for the issues resulting from incomplete processing, you'll end up with an algorithm that produces utter and complete garbage, typically arriving at very high or very low probabilities for nonsensical propositions, due to all sorts of unwanted feedback loops and the absence of any damping in the system. That is just what you get when you have feedback and no damping. Assigning a numerical probability to anything someone told you is not rational, only innumerate. Aspiring to do "Bayesian updates" on beliefs you've got in your head, unless you actually have numerical data, is likewise not rational, only innumerate and inalgebrate.
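Here is a toy simulation of that feedback-without-damping failure (my own construction; the echo strength and step count are arbitrary). A belief keeps getting 'confirmed' by restatements that are just echoes of its own earlier updates, each echo treated as fresh independent evidence, and the probability saturates near 1 even though nothing was observed after the first rumor:

```python
import math

def probability(log_odds: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

log_odds = 0.1        # one weak initial rumor, barely above P = 0.5
echo_strength = 0.4   # how much "evidence" each echo is mistaken for

for step in range(1, 21):
    # What you "hear" this step leans whichever way your belief already
    # leans -- a cycle in the inference graph, not a new observation.
    heard = echo_strength if log_odds >= 0 else -echo_strength
    # Treating each echo as independent evidence just adds it up:
    log_odds += heard
    if step % 5 == 0:
        print(f"step {step:2d}: P = {probability(log_odds):.4f}")

# The printed probability climbs monotonically toward 1.0 even though
# nothing was observed after the initial rumor.
```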

More interestingly, the resulting faulty process of inference will be very open to malicious manipulation. For example, you can be manipulated into giving money to the people behind this whole nonsense: you will be presented with a highly speculative notion of how AI works (as a maximizer of some real-world quantity, such as the number of real-world paperclips). Then all the zillion possible general arguments for why an AI cannot work like this, or why it is difficult to construct an AI that works like this, will be presented to you in a specialized form which makes you lower only the probability of "AI that does not kill mankind", as per issue 5.

A few more fallacies down the road and you will be donating your money to a bunch of irrelevant, sloppy philosophers of mind at SI, who misrepresent their homebrew philosophy-of-mind work as work on the theory of artificial intelligence. Then you end up making all your charitable donations to them, because that, they explained, is the optimum: http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/ . That is the point where the net effect on the world becomes sufficiently negative for others to get pissed off at the whole enterprise.

So, what do I do?

Critical thinking. A lot has been written on this topic by very smart people. Examine your own and other people's arguments and find the flaws in them. Find out what the arguments that are said to favour something actually favour. Choose whom you listen to (the world is huge and you are going to ignore almost the entirety of mankind in any case). Know what you want; if you are seeking smug self-satisfaction, that is what you'll get instead of truth, and all the rules will fail you. Beware of people who only ever find their own flaws in other people. Beware of people who utterly lack any interest in finding the truth; they will always narrow down the arguments as in issue 5 above; they will only ever help you make the belief propagations that further their goals.

If you find that you can very easily demolish any argument, seek greater rigour and greater understanding. Don't fall into the same self-delusional trap where, upon discovering that you can argue for anything and 'prove' anything, you refuse to give up the clearly faulty methods of thought and instead go by feeling and by how easy something is to imagine. For example, Lorentz transformations are very hard to imagine as a solution to 'everything being relative'; that does not make special relativity wrong.