Essay talk:In defence of belief in the existence of the soul/Archive1

This is an archive page, last updated 8 August 2011. Please do not make edits to this page.

Quantitative vs. qualitative

I like this essay very much - it is well organized and written in a clear and easy-to-understand style. That said, I disagree with you on some key points. Rather than lump them all into one huge critique I'd like to address each section I consider questionable individually in no particular order. First, I'd like to talk about your discussion of quantitative vs. qualitative rationality and the role of "extra-rational" thinking. To begin, let's talk about probability:

In other cases, the weight of evidence may tilt to one side rather than the other, yet still not be overwhelming. Consider someone accused of murder who protests their innocence. Objectively considering the evidence, we may conclude that it is suggestive of their guilt, but not absolutely convincing of it. What should we then believe? Should we believe whatever the evidence more strongly suggests, even if the weight of evidence is only slightly in one direction -- 50.1% chance of guilt, 49.9% chance of innocence: they are guilty! Or should we set a higher standard? But surely we should not insist upon absolute certainty -- for if absolute certainty were needed to believe anything, there would be little or nothing one could believe; it is very hard to live without believing in anything, and one tends to believe things anyway even if one believes one ought to believe in nothing. But if more than 50.1%, and less than 100% (or even 99.999999%), where to draw the line? That is hard, even impossible, to answer.

I think this is the wrong way to approach quantitative thinking. I think probability theory should have a large role in rationality, and that we should use quantitative reasoning in place of qualitative reasoning instead of just using the former to prop up the latter like you do here. Instead of having a cutoff point (e.g. 50%) for belief or disbelief, we should simply say that the probability we assign to a proposition is the level of belief we have in that proposition. This makes a lot more sense than a cutoff point - there's no reason why belief needs to be a binary "believe/disbelieve" when we have more accurate tools. In a nutshell, what I am really talking about is Bayesian epistemology. Moving on:

And, should we be purely rational in our beliefs? Or should we permit ourselves to follow non-rational considerations? For example, when deciding whether to believe in someone's guilt, do we consider only the objective evidence -- or do we give heed to the fact that the accused is a dear old friend, whom we feel we ought to believe on account of our friendship? That is not to say we should completely ignore reason and evidence; if the evidence is overwhelming, we have to believe in their guilt, and are relieved of any responsibility arising from our friendship. But where the evidence is of a moderate nature, not strongly going either way, it seems reasonable to give consideration to extra-rational considerations in deciding what to believe.

If you want to find out what reality is like, you must think rationally. Period. That's what rationality is: making your beliefs line up with reality. If the evidence is not going strongly either way, then you should make probability assignments accordingly and look for more evidence - there is absolutely no reason to use irrational methods. For example, scientists are often unable to perform the experiments that would really help them distinguish between hypotheses because they have to wait for funding or other real-world considerations (e.g. experiments with the CERN supercollider). In the interim, they don't suddenly convert to Christianity - they retain their rationality and make educated guesses based on what they know. The same principle applies even when it looks like an experiment can never be performed - just because practical considerations get in the way of one's ability to do science/philosophy, that does not license irrationality. In fact, the moment you abandon rational belief processing, you can no longer claim to be reflecting reality as accurately as possible. In summary: I don't see why it is ever "reasonable" to abandon rational methods. This applies to your discussion of the afterlife as well:

It is morally and aesthetically more pleasing to believe in an afterlife than not. There is much suffering in life, for which there seems to be no compensation in this life. To suppose there is some compensation for that suffering in another, makes life more meaningful -- therefore by faith, if reason and evidence do not prevent us from doing so (and this requires not a mere absence, but an actual presence) -- then we ought to believe in what makes life more meaningful.

Again, this epistemological switch is unjustified. The moment we start believing things because we "find them more meaningful" or "because we want to," we have left the realm of rationality and entered into the land of rationalization. There is simply no reason to abandon epistemic rationality just because of the practical constraints that prevent us from solving philosophical problems. I will address different parts of the paper later when I find the time. Tetronian you're clueless 16:07, 27 February 2011 (UTC)

My response: I think we should ask, what is a belief? What does it mean to believe something? Let me make an attempt – a belief is a disposition to assert the truth of a proposition, both through communication to others, and in one’s own private thoughts, and to act as if the proposition were true. You suggest: “Instead of having a cutoff point (e.g. 50%) for belief or disbelief, we should simply say that the probability we assign to a proposition is the level of belief we have in that proposition.” What does this mean? Are we talking about the strength of the disposition we should have? Or does it simply mean “If I believe the probability of proposition X being true is 99%, then rather than believing the proposition X, I should simply believe the proposition ‘X has 99% probability of being true’”? But that is not a very realistic rule to live by. I cannot possibly qualify all my thoughts and speech with probability measures. I believe my friends are loyal to me, but maybe some are secretly disloyal – not likely, I think, but not impossible. So I believe my friends are loyal to me. But you would have me not believe this, only believing “My friends are highly likely to be loyal to me.” Why?

Trying to make all human reasoning quantitative is a nice idea, but often doesn’t work in practice. Thought experiment: You are a judge or juror, asked to determine an accused criminal’s guilt. How do you calculate the probability they committed the crime? At which probability level do you convict the accused, with all that entails? What is the probability that Jesus existed? I can express my degree of confidence in my beliefs with vague terms such as likely, somewhat likely, extremely unlikely, etc., or even with relative comparisons (A is more likely than B). But I often can’t assign a numeric quantity to that confidence. Much real-life reasoning is impossible to quantify. Only some areas, which are particularly amenable to statistical analysis, can be dealt with quantitatively. Even where quantitative methods could be used in principle, they often can’t be, due to economic or privacy constraints.

What is rationality? One of the problems with “rationality” is that different people will have different definitions of what “rationality” is. Rationality is a lot like ethics. Both are about telling you what to do – rationality tells you what to believe, and as I defined it above, belief is a disposition to commit certain acts. So in one sense rationality is a subset of ethics. One of the problems with ethics is that if two people have fundamentally different ethical principles, there is really no way for one to convince the other. If two people have shared ethical assumptions, you can maybe convince them they are applying their principles wrong. But if their ethical first principles are incompatible, there really is no way forward, at least as far as argument goes. I think the same applies to reason. If people share the same basic principles of rationality, then if they have a dispute about rationality maybe it can be resolved by further argument. But, if their basic principles are different, no progress can be made through argument.

So, your fundamental beliefs about what one ought to believe are different from mine. I am being irrational by your definition of ‘rationality’, but not so by mine. This is just the same as two people who have fundamentally different principles about ethics, and hence fundamentally different definitions of ‘good’. Argument isn’t going to change either mind. --Maratrean (talk) 11:44, 28 February 2011 (UTC)

Thank you for your reply. I'm going to respond to it point by point.
Firstly, I will accept your definition of belief, at least for the time being, as I currently don't see any problem with it. Now, on to the discussion of probability: When I say "the probability we assign to a proposition is the level of belief we have in that proposition," that's pretty much what I mean - you should think and act according to the probability assignment. I think that's what you meant by the "strength of the disposition." The reason we should adopt this Bayesian epistemology is because it is more accurate than a binary "believe/disbelieve" system, as it more accurately conveys uncertainty. Bayesian epistemology itself is mathematically justified - Bayes's Theorem is derived directly from the definition of conditional probability. There are also additional mathematical tools, such as concepts from thermodynamics, that we can use in conjunction with Bayes's Theorem to be even more accurate. But the end result is that we create mental models of reality that update based on observed evidence in a quantitative and exact way and can respond properly when we are lacking information.
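To spell out that derivation (writing H for a hypothesis and E for evidence): the definition of conditional probability gives both P(H|E) = P(H ∧ E)/P(E) and P(E|H) = P(H ∧ E)/P(H), so

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

which is Bayes's Theorem.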
Now, I'll be the first to agree that this method is very hard to apply to everyday life and to questions that are deliberately phrased such that we cannot collect evidence to help answer them. However, in both theory and practice it is possible to apply it to anything, from metaphysics to whether or not your friends like you. All you need to do to apply Bayes's Theorem to any situation is obtain the prior probability (which is done by applying mathematical rules to the situation at hand) and then collect evidence. In fact, it has been used in court on a number of occasions. Things may be very hard to quantify, especially in cases with no information, but they are not impossible to quantify, as applying Bayesian epistemology to a scenario in which we have no information will pretty much just tell us what we already (don't) know. But for scientific and philosophical issues in which we can collect evidence, Bayes's Theorem works very well.
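As a minimal sketch of what such an application looks like in practice (the numbers here are made up purely for illustration, and are not taken from the linked analysis):

```python
# A toy Bayesian update (all numbers are hypothetical, for illustration only).
# Hypothesis H: "my friend likes me".
# Evidence E: "they replied to my message promptly".

prior = 0.5            # P(H): no information either way
p_e_given_h = 0.8      # P(E | H): prompt replies are common from friends who like you
p_e_given_not_h = 0.3  # P(E | not H): still possible, just less likely

# Bayes's Theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"P(H | E) = {posterior:.3f}")  # ~0.727: belief strengthens, but stays uncertain
```

Each new piece of evidence is handled the same way, with the posterior becoming the prior for the next update.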
That said, let's move on to "rationality." The definition I was using is as follows: "updating your beliefs based on evidence so that your beliefs match reality as closely as possible." If you like, we can also call this "increasing the correspondence between your beliefs and reality," or just plain old "finding truth." If the word "rationality" is going to cause confusion, we can substitute this definition for it, or a definition provided by you if you have a better one. But if you disagree with this idea that we should be finding truth (which, from your essay, I don't think you do), then we clearly are going to have problems with unshared assumptions. But if you do agree that we should be finding truth, then I think we have enough common ground to stand on and argue from.
In summary:
  • I use Bayesian probability because it works and it's more accurate than a simpler system of belief assessment.
  • This is not always easy to do, but it is always possible.
  • Rationality means truth-seeking.
Does this clear things up a bit? Tetronian you're clueless 21:51, 28 February 2011 (UTC)
Upon further review, I just noticed that you wrote "People chant the slogan "Is does not imply Ought" against ethics, forgetting that rationality is equally an oughtness (what ought we believe), and so if this slogan succeeds in slaying ethics, it can slay rationality also." As you can probably guess, I disagree with this for the reasons I explained in my post directly above this one. Tetronian you're clueless 05:17, 1 March 2011 (UTC)

The problem with the claim "the probability we assign to a proposition is the level of belief we have in that proposition" is that if belief is a disposition to act, then the strength of that disposition must be the frequency with which we act on it, the emotional intensity of those acts, and the preference ordering of that disposition with respect to other dispositions. So, if I have a strongly held belief, I will state it more often, I will express more verbal and emotional intensity in stating it, and I will put stating it ahead of other considerations (such as personal advancement, social acceptability, etc.). The degree of confidence in the truth of a proposition is going to be a relevant factor, but not the only factor. Many propositions I believe in with very high confidence, yet don't care about, so I am not going to believe in them very strongly; others I have less confidence in, but care about a lot more, so my belief will be much stronger. So I think your suggestion, to make our belief proportional to Bayesian probability, is only considering half the picture of how humans actually work. The problem with a lot of theorising about reason or epistemology is that it threatens to become a Procrustean bed -- if the usual workings of the human mind don't conform to our pet theory, the workings of the human mind must be changed to fit. (A task at which we rarely succeed, even if we think we have.)

You say "However, in both theory and practice it is possible to apply it to anything, from metaphysics to whether or not your friends like you." Okay then, give me an example of a metaphysical problem (you choose one) and summarise how to apply Bayesian reasoning to it. Likewise, if I want to determine if my friends like me, how can I apply Bayesian reasoning to that?

For me, rationality is a set of rules which evaluate beliefs ('rational' vs 'irrational') and a set of 'oughts' as to what one should or should not believe. However, different people will have different standards.

Rationality cannot be simply identified with truth correspondence, because sometimes it is rational to believe a falsehood, and irrational to believe a truth. Consider the following story: The bulk of scientific consensus says that theory X is true. Sarah is a maverick young scientist; she insists that X is false, but few believe her. However, after many years of work, she manages to convince the scientific community that she was right all along, and theory X is indeed false. Now, back in the early days, when few had yet adopted Sarah's viewpoint, and she hadn't yet undertaken the research which would eventually prove her right, there are two undergraduates: Fred and Tom. Now, Fred is a bright student, and has read some of the literature on theory X. He is aware of Sarah's disagreement with it. But, even though he is bright, he doesn't feel confident enough to decide for himself the correctness of theory X. But he decides to accept it, because that is what by far the majority of the relevant scientific community (who have studied this matter in much greater depth than he has) believes to be true. Meanwhile, Tom is a far less bright student than Fred -- he really doesn't understand theory X at all. But Sarah is his older sister, and he thinks Sarah is very smart -- so if Sarah says theory X is wrong, it must be. Now, who is more rational, Tom or Fred? Well, given that theory X is false, Tom who disbelieves it has more truth-correspondence than Fred who believes it. But, arguably, Fred is more rational than Tom -- although Fred believes a falsehood, he believes it for quite rationally respectable reasons (accept the conclusions of the majority of the relevant scientific community if you lack the expertise to evaluate them independently), whereas Tom believes a truth for reasons not so rational (my sister is smart, so I'll believe what she says).

Maybe instead, we should define rationality in terms of behaviour that, on average and in the long run, produces truth. However, consider the question: is faith rational? Initially, we might suggest not -- people believe all sorts of random things by faith, almost all of which are almost surely wrong -- if only because the things believed by faith contradict each other so much they can't all be true, and there are so many things believed by faith that even most other believers by faith will agree are outlandish and simply wrong. Therefore, we might conclude that faith is irrational, since it is poor at truth-producing.

A Christian will then respond -- but Christian faith is different. Even if faith writ large is not on average truth-producing, maybe some forms of faith are more truth-producing than others? Well, if Christianity is true, then Christian faith (unlike faith in general) is indeed on average truth-producing, thus Christian faith would be rational. Whereas, if Christianity is false, then Christian faith is not on average truth-producing, so it would be irrational. This is part of the problem with 'rationality' -- different sets of assumptions as to what is true will produce different conclusions as to what is rational. People can't agree on what is rational, because even though on the one hand rationality is not simple truth-correspondence, on the other hand what is rational depends on what is true.

Maybe my point about "ought" can best be put this way. Putting the concept of 'rationality' to one side, the real question here is: what ought we believe? And what beliefs are good to have? As I understand your position, you believe that we ought to believe whatever conforms to the rules which are most likely to produce truth-correspondence overall, and to believe as such is good, and to believe otherwise is bad. I agree that, in choosing rules by which to judge our beliefs as good or bad beliefs to have, and which belief-rules we ought to follow, we should have regard to what rules are most likely to produce the greatest truth-correspondence; but while that is an important consideration, for me it is not the only one. To continue my analogies with ethical theory, I would guess we are both rule consequentialists, but we differ in choice of utility function -- my utility function includes variables which yours does not. --Maratrean (talk) 10:12, 1 March 2011 (UTC)

Ok, let's talk about rationality first and then move on to probability. The problem here is that you are conflating two different concepts that are both called "rationality" but really don't have anything to do with each other. We can define these concepts accordingly:
  • Epistemic rationality: achieving correspondence between your beliefs and reality; finding truth
  • Instrumental rationality: increasing utility, getting what you want, fulfilling your preferences, etc.
It's pretty obvious that these are two completely unrelated ideas except insofar as we might want epistemic rationality to help achieve instrumental rationality. Now, in the example you give above, in which people believe falsehoods to advance themselves socially, you are talking about instrumental rationality, or "what should I believe in order to get the most utility." Because we are trying to resolve a philosophical dispute, we really don't need to consider instrumental rationality at all, only epistemic rationality. After all, arguing that one thing is truer than another because it brings us more utility is the fallacious argument from adverse consequences. Thus, let's put instrumental rationality aside and stick to truth-seeking. Can we agree on that?
This brings us to the discussion of whether faith is rational. I don't agree with your conclusion that this is entirely a function of "what works best on average," although that is certainly part of it. But let's examine this idea of "what works best on average" more closely. Firstly, I do agree with your statement that faith clearly does not work on average and is immediately suspect because of this. But there are obvious reasons why faith doesn't work without even considering any examples: let's model the relationship between reality and our beliefs as the relationship between a map (your beliefs) and a territory (reality). What we want to do is create a connection between the two such that the map accurately reflects the territory; we do this by updating our map as we learn more about the territory. (This is yet another way of rephrasing the idea of epistemic rationality.) Faith, in this analogy, is akin to drawing random lines on our map without even looking at the territory. The odds that we are going to draw the correct lines by chance are so astronomically small that we don't even need to crunch any numbers.
That said, let's move on to the example of Christianity, which you used to show how faith might be justified. Indeed, a Christian might say that his faith in Christianity is justified because Christianity is true. Yet this argument doesn't make sense because the Christian's faith is true by chance, not because his faith caused reality to be a certain way (unless he subscribes to solipsism, in which case he might argue that - but there's no need to discuss that alternative). So his faith really isn't justified, he was just really really really lucky in that he just happened to land on a true belief by chance.
Now, let's deal with Bayesian probability. You asked for some examples, so I shall provide them. Interestingly, some people have already thought of the idea of using Bayes's Theorem to see if people like you, so I'll just link you to that analysis. Whether the particulars of their analysis are correct or not is moot - the point is, it can be done. Next, you requested that I discuss a philosophical problem using Bayes. Here I will refer you to Nick Bostrom's book Anthropic Bias, which is literally an entire book about how to solve a priori philosophical problems, including the idea of fine-tuning, with Bayesian analysis. Again, whether Bostrom is exactly correct does not matter - what matters is that you can apply Bayesianism here.
Furthermore, your analysis of the implications of Bayesianism is not entirely correct. Bayes's Theorem does not dictate the emotional intensity with which we should act or believe or (to a certain extent) the frequency with which we should act - what it really dictates is our betting behavior, or at what odds we would be willing to bet on the truth of a proposition. In fact, this is one of the reasons why Bayesian probability works so well - it is arranged such that it is impossible to make a Dutch book bet against a Bayesian (a toy illustration follows below). This will probably end up determining the decisions we make, but it has nothing to do with emotional intensity. Also, I will be the first to agree that Bayesian updating is not how human beings work - instead, human reasoning is jam-packed with biases and errors. Bayesian updating is an idealized mode of cognition, which is why we should try to emulate it as closely as possible. We will probably not be able to emulate it perfectly, but if we make an effort we will still reduce a lot of errors and be more accurate than we were before. Which, of course, means more epistemic rationality. Tetronian you're clueless 19:40, 1 March 2011 (UTC)
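As a toy illustration of the Dutch book point (the bet prices are hypothetical): someone whose credences are incoherent - say, 0.6 in A and 0.6 in not-A, summing to more than 1 - can be sold a combination of bets that loses money no matter what happens.

```python
# Toy Dutch book against incoherent credences (all prices are hypothetical).
# The bettor prices a ticket paying 1.0 if A occurs at 0.6,
# and a ticket paying 1.0 if not-A occurs also at 0.6 (0.6 + 0.6 > 1).

price_a = 0.6
price_not_a = 0.6

collected = price_a + price_not_a  # sell the bettor both tickets: 1.2 up front
paid_out = 1.0                     # exactly one of A / not-A wins, so we pay 1.0 either way

print(f"bookie's guaranteed profit: {collected - paid_out:.2f}")  # 0.20, whatever happens
```

Credences that obey the probability axioms sum to 1 over exclusive, exhaustive alternatives, which is exactly what blocks this guaranteed loss.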

Rationality - I don't agree I am confusing epistemic and instrumental rationality. I am only talking about epistemic rationality. The utility function I was referring to was not your personal utility function, but the utility function of a consequentialist axiology. In other words -- an axiology is a theory which assigns valuations to states of affairs. Ethics is such a theory -- it assigns states of affairs as 'good' or 'bad'. Likewise rationality -- 'rational' or 'irrational'; and aesthetics -- 'beautiful' or 'ugly'. The types of states of affairs each theory values differ -- rationality values beliefs positively or negatively; generally speaking, beliefs cannot be valued aesthetically, neither can paintings or sunsets be valued rationally. But what they have in common is the positive or negative valuation of states of affairs. They also involve a suggestion of oughtness -- ethically positive acts are not just good, but they ought to be done; ethically negative acts ought not to be done. Rationally positive beliefs ought to be held, rationally negative beliefs ought not to be held. I suppose the same could even be applied to aesthetics -- if you are going to paint a painting, you are aesthetically obliged to paint the most beautiful one you can. (I suppose my feelings about art could be called very traditionalist, but I'm unapologetic about that.)

Acknowledging this similarity between the three means we can take theories from one axiology and transform them, by appropriate substitutions, into theories about another. (This may not always be possible, due to the unique characteristics of each axiology, but it often can be done.) Consider utilitarianism -- it judges positively those situations which maximise some utility function overall (the exact utility function varies depending on the variety of utilitarianism). So ethical utilitarianism has its rational analogue, but the utility function will be different. An example of a utility function for a rational utilitarianism would be truth-correspondence. But there are two main varieties of utilitarianism -- rule utilitarianism and act utilitarianism. Act utilitarianism judges whether every individual act maximizes utility. Rule utilitarianism proposes to adopt rules which maximize utility, but then judges individual acts not by whether they maximize utility, but only by whether they conform to the rules. The point is, one person might do an act which normally would be very dangerous, yet by sheer chance it saved someone's life instead; another does an act which normally would be harmless or beneficial, but by sheer chance it kills someone. Act utilitarianism might say the one who did the dangerous act did the ethically more worthy act, since it only looks to the utility each act produces. Rule utilitarianism would say that the usually harmless act was ethically more worthy than the dangerous act, due to the consequences each usually produces, even though those consequences were inverted in this particular case. And likewise, ethical rule and act utilitarianism will have rational analogues, quite apposite to my example of believing falsehoods for the right reasons vs. believing truths for the wrong ones.

One deep question in axiology, is whether the three systems of oughtness and evaluation are separate and independent systems, or ultimately three parts of one and the same system. Can they conflict with each other -- can the valuations of rationality, ethics and aesthetics conflict, or are they ultimately three parts of one coherent whole? Are there beliefs that are rationally obligatory but ethically prohibited, or beliefs that are ethically obligatory but rationally prohibited? My belief is in unitive axiology, that the three ultimately form one single consistent and coherent whole.

I think there is a difference in how you and I use the term rationality. For you, rationality is about truth-correspondence. For me, rationality is what you ought to believe -- I do not prejudge that truth-correspondence is necessarily the only factor in what you ought to believe. Your approach to rationality is analogous to ethical naturalism -- maybe a rational analogue of G.E. Moore's open question argument is applicable here.

You say that in my example 'people believe falsehoods to advance themselves socially, you are talking about instrumental rationality'. No I am not -- I am talking about epistemic rationality. Tom's belief in his sister's scientific theories has no necessary connection to social advancement with his sister. Neither does Fred's decision to believe the majority opinion of the relevant scientific community have anything necessarily to do with social advancement with members of that community. In Fred's case, people often believe what experts tell them when they feel they lack the expertise to decide for themselves. In Tom's case, people often believe what friends or family tell them when they feel they lack the expertise to decide for themselves. Both of these are real human belief-strategies -- neither is necessarily motivated by considerations of instrumental rationality. Both are arguments from authority -- arguments from authority, while not as good as deciding matters for ourselves, can nonetheless be respectable from the viewpoint of epistemic rationality -- but it depends on the nature of the authority. Fred's choice of authority is more epistemically rational than Tom's.

But you miss the point of the example -- the point is simply that sometimes the most rational choice is to believe a falsehood, and believing the truth is sometimes irrational. So rationality cannot be simply identified with truth-correspondence -- there is more to it than that. As I suggested in my discussion of rule utilitarianism, the best belief is not the one which has the most truth-correspondence, but the one which conforms with the rules of when and what to believe that produce the most truth-correspondence in the long run.

Faith: Taking your example of matching the map up to the territory -- consider the set of all rules which I might follow in drawing a map. We might evaluate these rules based on how well they make the map match the territory. Faith, considered at a high level, is not a good rule -- if the rule is "pick a faith at random and believe all it teaches", your odds of drawing an accurate map are very small. But if we break down faith into individual rules -- "believe proposition X irrespective of any other evidence" -- then whether that rule leads to a more accurate map depends on whether the proposition is true.

Personally, I advocate for what I call the "highest faith", which I could basically state as follows -- believe the proposition which is simultaneously (1) the least falsifiable, (2) the simplest, (3) the most conducive to your happiness. Now, this principle of faith is very different from e.g. Christian faith. Christian faith includes many things. This principle tries to believe by faith the minimum it can. So, believe what makes you happy. But if two propositions make you equally happy, believe the simpler one. And if they are also equally simple, believe the one which is less falsifiable. This is because the more falsifiable one is more likely to make the map contradict the territory -- the simpler and less falsifiable, the smaller the area, so the smaller the potential to contradict the territory. And, believing what makes you happy -- you often think something will make you happy, but if you think more you realise something simpler and less falsifiable will do so equally. The consequence of this faith is "All my deep desires will eventually be fulfilled". It is very simple and very unfalsifiable (it doesn't say how or when they will be fulfilled), yet still is conducive to my happiness.

Probability: Certainly Bayesian probability can be applied to some metaphysical problems. But surely there are very many metaphysical problems in which Bayesian probability is completely inapplicable. For example, consider the radical Platonism of Max Tegmark's ultimate ensemble theory -- that existence simpliciter is the same thing as mathematical existence. How could one possibly even begin to apply Bayesian theory to decide the truth of that? One might estimate a prior based on one's own intuitions about the issue -- but how does one possibly update it? What evidence could one update it with? The article you provided about evaluating if people like you -- that is full of perceived parallels between some self-help book and Bayesian theory. But there are no actual numbers or equations. It does not demonstrate that quantitative reasoning can be used to decide if people like you -- really, it is an example of qualitative reasoning, using some qualitative analogues of quantitative principles.

When you say "your analysis of the implications of Bayesianism is not entirely correct. Bayes's Theorem does not dictate the emotional intensity with which we should act or believe or (to a certain extent) the frequency with which we should act" -- you are misinterpreting what I am saying. I am not saying what Bayes' Theorem dictates. I am saying what believing in something with a certain intensity entails. The odds we would be willing to bet on a proposition are part of what constitutes the intensity of a belief, but they are not the whole of it. So, insofar as how Bayes' Theorem measures intensity does not match up with what actually constitutes the intensity of our beliefs, that is an argument against the claim that the intensity of our beliefs should be governed by Bayes' Theorem.

I agree that human thought has many cognitive errors. But, the question is, if we disagree on which thought patterns are actually errors, how do we decide which are errors and which are not? You have your standards and I have mine -- why should I adopt yours? --Maratrean (talk) 09:43, 2 March 2011 (UTC)

Perspectival argument

I'm not entirely sure what you are getting at here. You wrote:

A rather different reason to prefer idealism -- materialism privileges the third person perspective, idealism privileges the first person perspective. Materialism claims the first person perspective is ultimately reducible to the third person perspective; idealism denies this, but says the third person perspective is ultimately reducible to the first person perspective. Materialism likes to make I-less statements about the world; but, an idealist will argue, in reality there is no I-less statement -- everything I know is mediated to me by my own existence. To an idealist, what the materialist is doing is a sleight of hand, trying to hide the I away, but the I cannot be hidden.

I have a few problems with this:

  • What does the world even look like when it is "reduced to the first person perspective"? Is this notion even coherent, and if so, what does it mean? What, if any, real-world predictions does it make?
  • In what way are materialists "trying to hide the I away"? Materialists hold that the brain is ultimately reducible and that we can study it by studying its parts (neuroscience), how it acts (cognitive science), and its evolutionary history (evolutionary psychology). Currently, it looks like materialist/reductionist explanations work very well, and the alternatives don't make nearly as much sense. (See also: the excellent article Non-materialist neuroscience on RW.) Thus, "you" are made up of cognitive algorithms implemented by your brain. In what way is this "hiding the I"? Rather, it seems like materialism provides a very good explanation for why we think about everything (including personal identity) the way we do.
  • Even if materialist explanations weren't as successful, how is "hiding the I away" grounds for rejection? It seems like this would only be the case if we presupposed the existence of some idealist notion of "I" in the first place.

In summary: I don't fully understand this argument and I question its coherence. Tetronian you're clueless 17:10, 27 February 2011 (UTC)

Let me put the argument this way. To a materialist, all fundamental statements can be worded in a way to exclude reference to the self or to minds more generally; they are third person only (speaking only of things, or insofar as they speak of minds, they assume minds can be equated with things). If a statement is phrased in the first person, then to a materialist it cannot be a fundamental statement about reality as-is, but can be converted to one by removal of 'indexicals' -- replace every occurrence of 'I' with the name of the person who made the statement, and then you have a statement which could be a fundamental statement of reality (or could be reduced to one, by a process of reductionism -- replacing statements of sociology with those of psychology, those of psychology with those of biology, those of biology with those of chemistry, those of chemistry with those of physics -- once everything is reduced to fundamental physics, you have a basic statement about the world).
Whereas, to an idealist, all fundamental statements about reality must include the first person. If a statement is phrased in the third person, then to an idealist it cannot be a fundamental statement about reality as-is, but can be converted to one by insertion of 'indexicals' -- identify how I relate to this statement, then insert the word 'I' into it. A statement containing 'I' could be a fundamental statement of reality (or could be reduced to one, but by a rather different process of reductionism -- replacing statements which refer to patterns in experience with statements referring to the individual experiences which constitute the pattern).
So, for a materialist "Glasgow is in Scotland" is a fundamental statement about the world (ignoring any reduction process). But "my mother was born in Glasgow" is a non-fundamental statement, because it includes a first-person indexical ('my'). Whereas, for an idealist, "Glasgow is in Scotland" is not a fundamental statement about the world, because there is no "I" in it. "My mother was born in Glasgow" contains a first-person indexical, but in a sense isn't actually fundamental, because it doesn't refer to my experiences. "My mother told me she was born in Glasgow" and "I saw Glasgow written on my mother's birth certificate" and "my mother told me stories about leaving Glasgow as a child" and "my grandmother told me stories about life in Glasgow" -- these are fundamental statements to an idealist, because they are about what I (or an I) has experienced. But to a materialist they are non-fundamental, because they need to have their "I" taken out of them.
Both materialists and idealists engage in reductionism, but in different ways. A materialist wants to reduce every statement to the atoms or subatomic particles which would exist if it is true. Of course, they can't do this (most of the time) in practice; but it is important to them to believe it is doable in theory, even if in practice it's too complicated to work out all the details. Idealist reductionism is different - an idealist wants to reduce every statement to the experiences minds would have if it is true. Again, they can't do this (most of the time) in practice; but it is important to them to believe it is doable in theory, even if in practice it's too complicated to work out all the details.
To a materialist -- observer-relative statements are less fundamental than observer-independent statements. To an idealist -- observer-independent statements are less fundamental than observer-relative statements.
So, the idealist argument is that the materialist tries to imagine the world in their head from the perspective of no one in particular, and takes that imagining as fundamental reality. Whereas, the idealist thinks, this imagining is invalid -- you can try to imagine the observer away, but that doesn't get rid of them.
Experiment: Try thinking about the world before you were born or after you will die. Try to imagine it. Inevitably, you are imagining it from the perspective of 'someone', but that someone isn't you; they are some kind of non-descript non-person. To an idealist, you can try to depersonalise the observer all you want, but all observation requires an observer, and there is nothing knowable which is not observation.
An idealist has no problem with materialist 'explanations' of the mind or the brain. To an idealist, these just describe patterns between experiences -- if I hook myself up to some kind of brain-monitoring machine, and think of X, and the machine gives some distinctive output, that is just a pattern in experience -- a correlation between the internal mental experience of thinking of X, and the external visual experience of observing the brain-monitoring machine's output. The materialist tries to infer some ontological/metaphysical conclusions from it, but the idealist just doesn't see what the premise and conclusion have to do with each other.
In a sense, idealists and materialists agree about everything except the direction of the mind-matter reduction. Materialists reduce mind to matter; idealists reduce matter to mind. No amount of neuroscience can resolve this issue, since any conclusion of neuroscience can be interpreted either way. --Maratrean (talk) 10:35, 1 March 2011 (UTC)
Thank you, that makes a lot more sense. Now that I understand the argument, I've given it some thought and I don't think it's very strong. First, there are a few things about it that I believe to be incorrect or ill-formed:
Your discussion of statements that have "I" versus those that don't isn't entirely correct. The sentence that stands out as being the most incorrect is this one: "So, for a materialist "Glasgow is in Scotland" is a fundamental statement about the world (ignoring any reduction process). But "my mother was born in Glasgow" is a non-fundamental statement, because it includes a first-person indexical ('my')." This is incorrect, and there is nothing "wrong" with this statement to a materialist. You see, if I were saying that statement aloud or just in my mind, I would do so with the knowledge that the string "my mother" is a reference to a particular configuration of atoms. The fact that the sentence contains "my" doesn't make me want to alter it - as long as I understand exactly what each part of the sentence refers to in terms of objects that exist in reality, there is no problem. The same goes for "I" - I understand that "I" refers to a particular configuration of atoms. Thus, the fact that these statements are "non-fundamental," to use your term, is not really a problem at all. As long as we understand that we make "maps" at multiple levels to describe reality, which is a single level (reductionism), then there is no confusion. (For example: we can model a human being in terms of quarks, atoms, molecules, cells, tissues, or organs, yet the "real" human is just there and is not affected by what method we use to model him/her.)
That said, you are correct about materialism being without fundamental observers and idealism having fundamental observers. Indeed, if this weren't true then the two would be exactly the same. I also agree about reductionism being difficult in practice but possible in theory. But I question whether non-reductionism is coherent - for example, what would a human being look like if he/she were "really" made out of organs and not atoms? Or, what would a hand be like if it "really" wasn't just made up of fingers? The idea that a reducible object "is" itself in some fundamental way rather than being made up of components, especially when it is obviously made up of smaller parts, just doesn't seem coherent. But I don't think you disagree with me about reductionism, so there's no sense arguing that point.
Overall, your argument seems to hinge upon this phrase: "all observation requires an observer, and there is nothing knowable which is not observation." The problem with your argument is that materialists agree wholeheartedly with this. "All observation requires an observer" is a statement about epistemology, not metaphysics. A materialist would agree with the statement while also holding that it's possible for matter to exist without being observed. In general, materialists don't go out of their way to deny or even downplay the role of observation in philosophy and science - for example, in one of the sections above I pointed out that human reasoning is often biased because of the way our brains work; errors like those arise from the fact that we are observers. And since both sides agree on this epistemological point, it isn't an argument that can be used to favor one over the other.
To put it another way: When I imagine the universe before/after my lifetime, I do have to pretend that I'm seeing it through someone else's eyes. But...so what? Does it really make a metaphysical difference that my ability to observe the universe is limited? This is another example of your argument being correct but ultimately not really being an argument.
Yet another way of looking at it is this: even if we accept your premise that idealists and materialists consider different statements "more fundamental," we don't have a reason to favor one criterion of "more fundamental" over another. Thus, we can't decide which viewpoint is more correct based on this alone.
Side note: Now that I better understand your position, I do agree that neuroscience isn't really going to help me here. Tetronian you're clueless 22:53, 1 March 2011 (UTC)

Argument from extinction/experiential criterion of meaning

I think your whole argument here can be mostly summed up by this sentence, which appears at the end of the section:

The idea that my conscious awareness might have such a sharp discontinuous ending is deeply disconcerting -- this feeling suggests to me something must be wrong with any theory that would imply such an outcome.

The problem I have with this argument is that it is an argument from incredulity, not a logical deduction or induction. We both agree on the following: (a) quick and unforeseeable deaths do happen in reality, and (b) materialism holds that "you" will die when your body does because what constitutes "you" is implemented on your brain. Since I am a materialist, I accept the following conclusion as a consequence of (a) and (b): (c) conscious experience can end sharply and without warning. Your argument seems to be this: "(c) doesn't seem like it should be true, therefore either (a) or (b) or materialism cannot be true. And since we know that (a) and (b) are true, materialism must be false." All of the logic is sound except for the first step: there's no reason to flatly reject (c) because it feels weird. What you're really saying is this: "I don't like the consequences of a materialist universe; therefore, materialism probably isn't true." Phrased this way, do you still think the argument makes sense? Tetronian you're clueless 17:21, 27 February 2011 (UTC)

Let me put it this way -- I am presented with two theories, A and B. On almost all issues, A and B have precisely the same conclusions. However, on one particular issue X, A has wildly non-intuitive conclusions, whereas B does not. I think, in that circumstance, I am rationally justified in adopting the theory which agrees with my intuitions rather than the theory which violates them. And of course, if further evidence arises which disrupts the equality between A and B, I might conclude that I should believe A despite the violence to my intuitions, because of the greater evidence for A. However, this is an issue in which (I dare say) no further evidence will be arising, at least not for materialism. And, although my intuitions are not always right, they are unlikely to be completely wrong either -- to decide between two otherwise equal theories, following one's intuitions is not a bad strategy. --Maratrean (talk) 10:43, 1 March 2011 (UTC)
The crux of your point here seems to be that following your intuitions is good because they are more likely to be right than wrong. Another way to say this is "my intuitions have been right about things before, therefore they will be right again." But we need to look more closely at what intuitions are good at discerning and what they aren't good at discerning. It's very easy to make simple logical deductions or reason about things we can directly observe; it isn't easy to make correct conclusions about things such as metaphysics. If we look at the track record of intuition in philosophy, it's not really exemplary at all. To put it mathematically, P(something|intuition said so) is pretty much the same as P(something). When we have a debate like this one, we have all kinds of different arguments and factors at play, all of which have pretty large margins of error. Because intuition is so inaccurate, an argument from intuition is pretty much insignificant compared to the other things we're talking about. Also, as you pointed out above, different people have different ideas about what seems rational or intuitive. With this in mind, are you really justified in making such a sweeping conclusion - or any conclusion at all - based solely on this "feeling" of yours? This is the reason that an argument from incredulity is a fallacy - though intuitions can often be useful, intuition is by no means an accurate method of doing philosophy. Tetronian you're clueless 14:31, 1 March 2011 (UTC)
I've been thinking about this argument a lot more, and I think it is significantly strengthened by the fact that intuition is flawed because it's a product of our cognitive architecture. This is (a) the reason it is flawed and (b) the source of a huge amount of documentation explaining how it's flawed (here I'm referring to the entire field of cognitive science and, to a lesser extent, psychology). Tetronian you're clueless 04:09, 2 March 2011 (UTC)
I was thinking about this a little more, and it occurs to me that your argument could be made to work both ways. It's certainly possible to "feel" that materialism is true, as evidenced by the following quote that I plucked from the SEP's article on George Berkeley:
"…it will be demanded to what purpose serves that curious organization of plants, and the admirable mechanism in the parts of animals; might not vegetables grow, and shoot forth leaves and blossoms, and animals perform all their motions, as well without as with all that variety of internal parts so elegantly contrived and put together, which being ideas have nothing powerful or operative in them, nor have any necessary connexion with the effects ascribed to them? […] And how comes it to pass, that whenever there is any fault in the going of a watch, there is some corresponding disorder to be found in the movements, which being mended by a skilful hand, all is right again? The like may be said of all the clockwork of Nature, great part whereof is so wonderfully fine and subtle, as scarce to be discerned by the best microscope. In short, it will be asked, how upon our principles any tolerable account can be given, or any final cause assigned of an innumerable multitude of bodies and machines framed with the most exquisite art, which in the common philosophy have very apposite uses assigned them, and serve to explain abundance of phenomena."
This quote raises a few problems for your argument. First, how would you judge between the materialist's intuition and your own? Second, whatever weak evidence your intuition counts for would probably be counterbalanced by the antithetical intuition of the materialist. How would you dismiss his/her use of intuition in this way without dismissing your own? Tetronian you're clueless 23:01, 1 March 2011 (UTC)

Let me put the argument this way: there is no observation I have had heretofore which could decide between the truth of materialism or idealism. There is no evidence, of the sort one would usually accept, to decide the question. This leaves one with two choices: either one suspends belief, believing neither, or one chooses to rely on something by which one might not usually decide questions. Now, which is worse - following my intuitions (not in general, but in this particular case), or simply suspending belief? It is not clear it makes much difference. What is the harm, when evidence cannot (and may not ever be able to) decide a question, in choosing an answer in accordance with something one would not normally consider good evidence? Especially in regard to metaphysical issues -- what is the actual harm in choosing to believe the alternative which best fits one's own intuitions, rather than just suspending judgement? We are not talking about following intuitions in general, only when there is little other evidence and unlikely to be any more any time soon. Of course, a materialist's intuitions may differ, and lead them elsewhere. The point of arguments is to convince people -- an argument from intuition will obviously only convince those with matching intuitions. But at least it may convince some, and some is better than nothing. --Maratrean (talk) 11:09, 2 March 2011 (UTC)

Some small details

I rather think that you need to define:

  • What is a soul?
  • What, if anything, is it made of?
  • Even if your arguments about a non-material "something" are correct - why should this be a soul?
  • I know that you want to avoid any burden of proof - but don't you think that a little, you know, evidence, might help your case a little?--BobSpring is sprung! 22:04, 27 February 2011 (UTC)
Title aside, this essay isn't really about souls - it's about why the author is an idealist rather than a materialist. Which is fine, but the title is a bit misleading. Much like the title of this section :) Tetronian you're clueless 22:25, 27 February 2011 (UTC)
Yes, I saw that - hence my ironic section header. I did a search on "soul" in my browser and found surprisingly few references to "soul", which seems rather odd. The author's title tells us that he's going to defend souls but then goes largely somewhere else. My questions are designed to see if he's actually going to address the issues his title refers to. Alternatively he may wish to re-title his essay.--BobSpring is sprung! 07:15, 28 February 2011 (UTC)
"What is a soul?" Basically just a synonym for mind. However, the term 'soul' is chosen for its immaterial conontations, to make clear it is not believed that minds are reducible to matter -- rather, it is believed, that matter is reducible to mind. (Mind is a bit ambiguous -- it could mean something which is aware, which has experiences, has qualia -- or it could just mean something that reasons, without necessarily having subjective experiences or awareness or qualia -- so I should make clear 'soul' is a subjective mind, not merely an objective one. But to a materialist, the previous sentence is all gibberish -- it only makes sense to someone who either believes idealism, or at least is open to the possibility that idealism may be true.)
"What, if anything, is it made of?". What is matter made of? A materialist would say: "More matter!". What is mind made of? An idealist would say "More mind!". A materialist will say "Mind is an emergent property of matter". An idealist will say, "Matter is an emergent property of mind".
"Even if your arguments about a non-material 'something' are correct - why should this be a soul?" -- well, I think it is reasonable for an idealist to call 'mind' 'soul', given what they believe 'mind' is. Of course, a materialist believes very different things about what 'mind' is, so for them to call 'mind' 'soul' doesn't make sense. But I would hope, that whether one was a materialist or an idealist or neither, one could appreciate how the use or non-use of 'soul' is appropriate in each case
"I know that you want to avoid any burden of proof - but don't you think that a little, you know, evidence, might help your case a little?". If idealism requires evidence, why doesn't materialism require evidence also? A lot of the materialists have adopted the prior probability of P(materialism) ~ 1, P(idealism) ~ 0. But, isn't a more rational assignment of priors P(materialism) = P(idealism) = 0.5? --Maratrean (talk) 12:11, 1 March 2011 (UTC)
So you are saying that "mind" and "soul" are the same thing. In that case you presumably feel that if you have shown the existence of one then you have shown the existence of the other. But you have no reason to equate them in this way. "Mind" is the product of the brain. This being the case, it stops when the brain stops. The "soul", however, is alleged to be immortal. It is claimed to transcend death. It's pretty obviously a quite different thing. Would a thinking machine have a soul?
However there is a second problem with this. If you believe in religion, souls, gods, ghosts and tarot card reading, then you need to have at least a belief in dualism. You must of necessity believe in something more than matter.
But if you believe in either idealism or dualism there is no necessity to believe in either gods or ghosts or souls. It's not a necessary conclusion.
As far as evidence is concerned it would be nice if you were to at least give it a shot. --BobSpring is sprung! 13:42, 1 March 2011 (UTC)
In response to Maratrean's prior probability assignments:
Yes, you are correct. But we then proceed to update these assignments based on evidence and argument; they don't remain at .5. Elsewhere on this page I have pointed out some reasons why we should update in favor of materialism, such as the success of computational models of the mind.
I pretty much agree with Bob on everything else with the exception of a few minor details. Tetronian you're clueless 16:21, 1 March 2011 (UTC)

Simulation related arguments[edit]

I've been thinking about your use of the simulation hypothesis as an argument against materialism, and I don't think it's valid. Let's break it down:

If we consider in particular the simulation hypothesis, Bostrom mentions in his seminal paper (Are You Living in a Computer Simulation?) the possibility of nested simulations, of virtual machines. If we proved this material reality was simulated, and found another reality in which it is simulated, we still don't know whether that other reality is simulated or not. We can never be sure we have reached the 'real' world. So the foundations of our worldview are always shaky. An idealist, by contrast, does not care how many layers of simulations there are -- even if there were infinitely many. To a materialist, an infinite layering of simulations would be fatal -- there would be no ultimate material reality to which mental reality could be reduced, only an infinite regress of simulated material universes.

First, let's make one thing clear: the existence of a finite number of simulations does not undermine materialism, because this implies that there is eventually something at the bottom. I don't think that's what you were getting at, but I want to make sure this is clear. Secondly, the notion of an infinite number of simulations might not be coherent, and I wonder if it is an abuse of the concept of infinity. I'm not sure if I really understand this concept of infinite simulations, or what it might imply, well enough to assess it - it's very, very hard for me to even get my mind around it. Note that this is not a metaphysical argument against its existence - that would be an argument from incredulity - but, rather, I question whether the idea is coherent in the first place.

Thirdly and most importantly, the use of the simulation hypothesis against materialism is essentially a cheap shot on the level of "well, what if we can't know anything?" This is mostly because we have no evidence to suggest that we are living in an infinite number of simulations anyway. In fact, the consequences of the logical conjunction operation, or "AND," suggest that a finite number of simulations is more likely than an infinite one no matter what the probability of a simulation being created is. This is because an infinite number of simulations requires that all universes be simulated and contain simulations, i.e. universe 1 AND universe 2 AND universe 3 AND universe 4 ... and so on. We can think of this as a Boolean expression that is ANDing an infinite number of variables. It must be more likely that this expression evaluates to false than that it evaluates to true, no matter what the individual probabilities are. Substitute any probabilities you like for universes 1 through 4 if you don't believe me (to find the result of the probabilistic expression 1 AND 2 AND 3 AND 4, calculate p1*p2*p3*p4). To summarize: an infinite hierarchy of simulations is not only highly unlikely even before evidence is observed, but also potentially incoherent, at least as far as I can tell. (Lastly, a quick side note: I've been a fan of Bostrom for a few years, not least of all because of his clear writing and emphasis on probability theory in philosophy.)
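To make the AND point concrete, here is a minimal sketch in Python; the per-universe probabilities are made-up numbers, purely illustrative. It just shows that multiplying any long run of probabilities below 1 drives the conjunction toward zero.

  from functools import reduce

  # Hypothetical per-universe probabilities of spawning a simulation;
  # the values are invented purely to illustrate the "AND" effect.
  p_sim = [0.9, 0.8, 0.95, 0.7]

  def conjunction(probs):
      """P(all events occur), treating the events as independent."""
      return reduce(lambda acc, p: acc * p, probs, 1.0)

  print(conjunction(p_sim))          # 0.4788 -- already below one half
  print(conjunction([0.99] * 1000))  # ~4.3e-05 -- long chains collapse toward 0

Moving on: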

As an idealist, I believe in souls and their experiences. Universes are groupings of souls having certain correlations between their experiences. Two souls are in the same universe if their experiences correlate, directly or indirectly. Direct correlation can be what we call both looking at the same material object at the same time, or at differing times -- in an idealist interpretation, a strong (but not absolute) amount of commonality between a subset of each soul's experiences.

There is almost no justification for this or anything else in this paragraph. How do you justify, or even know, any of these statements? Continuing on:

Thus, whether or not our universe is simulated in another is not something we should care about -- it is not a property of our own universe, only of another -- unless, at some point, the simulators chose to reveal themselves to us. So, for a materialist, whether our universe is simulated or not is a big deal -- if it isn't, then they have found the ultimate/fundamental reality in matter -- if it is, then this matter is an illusion, and (by faith!) some other real matter exists inaccessible beyond it, giving this matter its reality. But to an idealist it makes no difference. This is a reason I think to prefer idealism.

You are correct about what the materialist should believe here. But why is this a reason to prefer idealism? (That is not a rhetorical or sarcastic question. Why is idealism a logical consequence of this?) Also, you seem to be making the assumption that the materialist has to bite a huge bullet when he/she finds out that the universe is simulated. Finding this out is extremely (I can't stress this word enough) weak evidence for the infinite simulated universes scenario you described; on the whole it doesn't convey much evidence at all.

Let's run some numbers through Bayes's Theorem to see why this is so. Let P(S) be the prior probability of the infinite simulated universes scenario; as I explained above, this is necessarily very, very, very small. Next, let P(M) be the prior probability that there is some kind of physical matter outside of the mind (i.e. your hypothesis is false). P(M) is close to but not exactly equal to (1 - P(S)) because there could be a third alternative we haven't considered. Now, we are updating on "E," the discovery that our universe is simulated. The likelihood P(E|S) = 1 (by your own definition), and P(E|M) = the probability that any given universe contains a simulated universe; this will be some number between 0 and 1 but probably pretty high, at least according to Bostrom. Now, if we plug these into Bayes' Theorem, we get P(S|E) = ( P(S) * P(E|S) ) / ( P(S) * P(E|S) + P(M) * P(E|M) ). Thus, P(S|E), the posterior probability, will be just barely larger than P(S), well within a margin of error and not nearly large enough to be considered a strong argument by any means. Apologies if this is unclear, it's rather late at night where I live.
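For concreteness, a sketch of this update in Python, with numbers of my own invention for the prior and likelihoods (the conclusion is insensitive to the exact values):

  # Bayes' theorem for the two hypotheses above.
  # S = infinite-simulation scenario, M = matter exists outside the mind.
  # All numbers are illustrative assumptions, not measurements.
  p_S = 1e-6          # very small prior for the infinite-simulation scenario
  p_M = 1.0 - p_S     # treating S and M as (nearly) exhaustive, for simplicity
  p_E_given_S = 1.0   # under S, our universe is certainly simulated
  p_E_given_M = 0.9   # "pretty high" per-universe simulation chance under M

  posterior_S = p_S * p_E_given_S / (p_S * p_E_given_S + p_M * p_E_given_M)
  print(posterior_S)  # ~1.1e-06 -- only barely larger than the prior p_S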

(I should also note that universe A could be simulated in both universe B and universe C. An idealist could not care less; for a materialist, this is a fatal quandary -- unless, maybe, B and C are both ultimately simulated in D. But if B and C are unsimulated, or have no overlap in their simulation trees, then materialism is doomed, but idealism does not care.)

This isn't a problem for materialism at all. In fact, Bostrom devoted an entire book to scenarios like this: it's called Anthropic Bias, and it's an excellent read (no, the quality of the book is not part of my argument!). It's available for free here. Simply put, an organism who finds out that he/she has copies running around in multiple universes has to jump through some interesting epistemic hoops, nothing more. It doesn't undermine the idea that there is a physical world somewhere; as I explained above, it merely provides weak evidence, in this case exactly two instances of the evidence E.

I realize that this section is a bit of a mess, but hopefully you understand what I am getting at. Tetronian you're clueless 04:11, 1 March 2011 (UTC)

Firstly, I should make clear the distinction between my views and Bostrom's. Bostrom's argument is about finite simulation hierarchies only, since it tries to build its assumptions on reasonable extrapolations from currently known science. He does talk about nesting, and he does mention the idea that even the 'basement' simulation can never actually know for sure it is the basement simulation. But, infinite simulation hierarchies must depart from the grounding (however tenuous) his argument has in science, and venture into the realm of metaphysics.
I agree a finite simulation hierarchy does not contradict materialism in any way. But -- right now, we do not know if we live in a simulated universe or not. So, if I present to you the idea of an infinite hierarchy of simulations, it will seem quite outlandish. Whereas, suppose you knew for a fact that a hierarchy of computer simulations existed -- either you are in the (putative) basement universe, or you are in a simulated universe, but the simulation operators have through 'divine intervention' made known to you your simulated nature. Suppose you are aware there are simulations which think they are the basement, but aren't actually. So it is plausible to you, that what you think is the basement actually isn't the basement at all -- there are deeper levels of the hierarchy than what you know about. So you know there is a hierarchy, and you don't know for sure where the bottom is. Given this background, the idea of an infinite hierarchy, of there being no bottom, makes more sense, whether or not you believe it. So, the existence of simulations, even deeply nested simulations, even nested simulations who think they are the basement but you know they aren't, none of this contradicts materialism -- but, were it true and if you knew it were true, the idea of an infinite hierarchy would be more believable. And if there is an infinite hierarchy, then materialism is false. So, this information would make the falsehood of materialism more plausible than otherwise.
"Secondly, the notion of an infinite number of simulations might not be coherent, and I wonder if it is an abuse of the concept of infinity." If a finite number of nestings is incoherent, what makes an infinite hierarchy incoherent? Let me put it this way. Suppose I have a Turing Machine which does something basic, like output the list of all primes. Remember, a Turing Machine (despite its name), is not actually a "machine", its just a mathematical structure. Likewise, its input and output are just mathematical structures. Let us call this machine M0. M0 starts with an empty input λ. Now, I have a universal Turing machine U. Given M0+λ as input, U produces the list of all primes as output -- it executes the program. Let us call this M1. Now, let us give U the input U+M0+λ -- U will execute U, which executes M0, which outputs the list of all primes. Let us call that M2. Consider U+U+M0+λ -- likewise that outputs all primes. Define Mn = nU+M0+λ. And then we can have an infinite sequence of strings, Mn for all natural numbers n. Now we have an infinite sequence of programs, each of which simulates the prior program in the sequence -- in other words, an infinite nesting of simulations. Consider even the infinite string, containing an infinite number of repetitions of U, followed by M0. That is an 'infinite program' for the entire sequence. Is there a machine which given that input, in some sensible way, would produce the expected output? I would expect there would be, although I haven't tried to work out the details (and obviously it wouldn't be a Turing machine anymore). So, from a mathematical perspective, I don't see any problem with this idea of an infinite nesting of simulations.
You say 'Thirdly and most importantly, the use of the simulation hypothesis against materialism is essentially a cheap shot on the level of "well, what if we can't know anything?" This is mostly because we have no evidence to suggest that we are living in an infinite number of simulations anyway'. What about this approach: the simulation hierarchy beneath our universe is either infinite or finite, but we don't know which. (If our universe is unsimulated, the depth is zero, so it's the finite case -- also the finite case if we are simulated, but only by a finite nesting level.) Given we don't know whether it is infinite or finite, a good approach is to consider both equally likely. (In Bayesian terms, give both cases a prior probability of 0.5). Now, consider two metaphysical theories -- materialism and idealism. We don't know which theory is true either. So, independently, we would give each a probability of 0.5 too. But what we do know is this -- if the infinite case is true, then materialism is false. The infinite case is incompatible with materialism. Whereas, in the finite case, materialism may be true. But, idealism is actually compatible with either -- idealism does not care whether the hierarchy is finite or infinite. So independently, P(infinity)=0.5, P(finite)=0.5 and P(materialism)=0.5 and P(idealism)=0.5. But, P(materialism|infinity)=0 and P(idealism|infinity)=1 and P(idealism|finite) = P(materialism|finite) = 0.5. Now, my probability is a bit rusty, but you seem more up with it, so I will leave it to you to try to compute final probabilities for idealism and materialism (I tried, but couldn't get it to add to 100% -- need to refresh myself a bit on this.) But I'm sure the conclusion is that idealism is more likely than materialism.
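One way to run these numbers, as a sketch under two assumptions not stated above: that idealism and materialism exhaust the options, and that the conditionals are held fixed. (The reason the figures would not previously add to 100% is that the independent prior P(materialism) = 0.5 is inconsistent with P(materialism|infinity) = 0; once the conditionals are fixed, the law of total probability dictates the marginal.)

  # Law of total probability over the finite/infinite partition, using the
  # conditional probabilities stated above. Assumes idealism and materialism
  # exhaust the alternatives -- an assumption, not a given.
  p_inf, p_fin = 0.5, 0.5
  p_mat = p_inf * 0.0 + p_fin * 0.5   # P(mat|inf) = 0, P(mat|fin) = 0.5
  p_idealism = 1.0 - p_mat
  print(p_mat, p_idealism)            # 0.25 0.75 -- idealism comes out ahead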
Also, although I would say that right now we have no evidence either way, there is potential evidence we might have (mentioned above) that would raise the probability of the infinity case above what it is now. I don't see how we could ever prove it, but there are certainly circumstances in which it would seem more likely than it does now.
As to your argument about the logical conjunction operation, I am not sure what to make of it. Since the equivalent of "AND" in probabilities is the product, I would interpret you as referring to the infinite product operator Π -- i.e. the limit as n -> infinity of the products of successive p(n). Now, the value of that limit depends on the definition of p(n). If p(n) = c for some constant c, then the limit is 0 whenever c < 1, and 1 only when c = 1. This suggests that an infinite structure, all of whose elements have the same independent probability c < 1 of existing, has probability 0 of existing in its entirety. But, if one chooses a non-constant definition of p(n), then a limit other than 0 or 1 can exist. I would suggest that, by appropriate choice of the function p(n), one may well be able to arrive at whatever limit probability one wishes. So, I really don't see how we can conclude anything based on this about the probability of infinite structures in general.
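A quick numerical check of both cases (the particular non-constant p(n) below is an arbitrary choice, picked only to exhibit a nonzero limit):

  import math

  # Constant factors below 1: the partial products collapse toward 0.
  print(math.prod([0.99] * 10_000))         # ~2.2e-44, effectively zero

  # Factors approaching 1 fast enough: a nonzero limit. With
  # p(n) = exp(-1/2**n), the infinite product converges to exp(-1).
  varying = [math.exp(-1.0 / 2**n) for n in range(1, 60)]
  print(math.prod(varying), math.exp(-1))   # both ~0.367879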
One point -- your AND argument suggests that the probability of each depth layer of the hierarchy beneath the known bottom must be less and less. Suppose there is a finite hierarchy only, but we don't know its depth. Arguably, any finite depth is as likely as any other. Why should we think 2 unknown layers are less likely than 1? The probability of any given depth is actually zero. However, an infinite disjunction of zero probabilities can have a non-zero value. Suppose e is some natural number, but we don't know which natural number it is. What is the probability, for any given natural number n, that n=e? For all n, p(n=e) = 0. But, p(exists n element of N such that n=e) = 1.
"There is almost no justification for this or anything else in this paragraph." I am not justifying idealism, I am explaining how an idealist interprets the word 'universe'. The idealist interpretation of 'universe' is different from the materialist interpretation of 'universe'. But, since both interpretations can explain the observed data equally well, what is the reason to prefer one over the other?
For a materialist, whether this universe is simulated or not is a big deal, because they have identified this physical universe with foundational reality. Hence, if this universe is simulated (whether finitely or infinitely), then they are wrong about what foundational reality is. Idealists have not identified this physical universe with foundational reality. They have identified foundational reality with their own minds (and those of others), and to them this physical reality is just a pattern in the experiences of those minds. So, whether this universe is simulated or not (whether finitely or infinitely), makes no difference as to whether they are right about what they think the foundational reality to be. That is why simulation is a big deal to a materialist, and no deal to an idealist. (I am not talking about what 'deal' it is in a practical sense -- I am talking about what 'deal' it would be in a philosophical sense were it true.)
Considering your Bayesian argument, I don't agree that the prior probability of the infinite simulated universes scenario should be very small. As I said, I think it should be around 0.5. Your "AND" argument doesn't make much sense to me, for the reasons I explain above. P(M) is not close to 1 - P(S), because as I said idealism is independent of the truth of the infinite hierarchy scenario.
I don't think you understood my comment about branching/non-linear simulation trees being a fatal quandary for materialists. Let me go back to my Turing machine analogy. There are multiple programs that perform the function of U (equivalently -- multiple possible worlds W1 and W2 both of which contain a computer simulation of world W3). Let us take two, U1 and U2. So, instead of considering sequences M(n) = nU + M0 + λ, we consider some sequence S drawn from Kleene-star(U1 or U2) ((side note, I should learn TeX!)). So then we are actually considering sequences S + M0 + λ. The point is, materialists need there to be a single unique basement universe. So, if there is either no basement universe, or more than one, materialism fails. So the scenarios we must consider are: finite linear hierarchy (FLH), finite nonlinear hierarchy (FLNH), infinite linear hierarchy (ILH) or infinite non-linear hierarchy (INLH). If materialism is true, then FLH is true. FLNH, ILH and INLH all imply materialism is false. Idealism is compatible with all four possibilities. Again, this would seem to me to imply that, assuming we give all four possibilities equal probability, and independently give idealism and materialism equal probability, the end result is that idealism is more likely than materialism.
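The same total-probability sketch as before, extended to the four scenarios (again assuming equal 0.25 priors, that idealism and materialism exhaust the options, and -- an assumed stand-in -- P(materialism|FLH) = 0.5):

  # Extending the earlier computation to the four hierarchy scenarios.
  # P(materialism | scenario); only FLH is compatible with materialism.
  p_mat_given = {"FLH": 0.5, "FLNH": 0.0, "ILH": 0.0, "INLH": 0.0}
  p_mat = sum(0.25 * p for p in p_mat_given.values())
  print(p_mat, 1 - p_mat)  # 0.125 0.875 under these assumptions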
It is good to have someone to talk to that understands what I am talking about and is interested to discuss it. And I told myself I would finish off all this work from my job tonight, instead I am responding to you! :) --Maratrean (talk) 12:11, 1 March 2011 (UTC)

edit break[edit]

There is so much to talk about here that I'm not even going to attempt to address all of it at once. For now, I'd like to talk about my "AND" argument, since it relates to some of the other stuff I was trying to get at. First of all: I accept your Turing machine thought-experiment and I agree that the idea of infinite simulations is coherent. That said, let's move on to the argument itself. I'm going to use some of your analogies along with some of mine to hopefully make it a bit clearer. In your reply above, you said that we should simply assign a prior probability of .5 to P(S) (probability of infinite simulations) because we have no information about whether there are infinite simulations or not. This .5 for infinite / .5 for finite assignment is called a uniform prior, and it is the correct procedure when we have no background information. But here we do have some additional background information, which was what I was trying to explain with my AND argument. Here is the argument again, rephrased such that it hopefully makes sense. In order for S to be true, there must be an infinite chain of simulated universes. If we translate this into a Boolean function, this means that there must be a simulation in universe #1 and in universe #2 and in universe #3...and in universe #n. We can represent this as (S1 AND S2 AND S3...AND Sn). As you pointed out, what we are really talking about is the limit of an infinite product. The limit is (P(S))^n as n approaches infinity, where P(S) here denotes the probability that any given universe contains a simulation. Now, since P(S) must be less than 1, the product is going to approach 0. Why must P(S) be less than one? I can think of a few reasons:

  • P(S) = 1 would mean that every universe in existence must be able to support life, and that life would then have to go on to create at least one simulation. But it's very easy to think of universes that don't support this claim, such as an empty universe or a universe at maximum entropy. Though we don't know if such a universe actually exists, there is a nonzero probability that one such universe could exist; thus, P(S) cannot be equal to 1.
  • Even if the universe permits it, it is unlikely that every life-supporting universe will give rise to a simulation.
  • There are reasons to believe that probability assignments should never equal exactly 1 or 0 anyway, only approach these values. But I don't want to get into this because the previous two points are more than strong enough to show that P(S) ≠ 1.

As a result of this background information, it doesn't necessarily make sense to have a prior probability of P(S) = .5. But I'll get into that when I discuss some of the other points that you raised. Is the AND argument clearer now? Tetronian you're clueless 04:55, 2 March 2011 (UTC)

"Argument from the relationship of ontology and epistemology"[edit]

You wrote:

Idealism claims that ontology recapitulates epistemology; materialism claims they are inverted. Idealism: the I comes first in knowledge and in being; Materialism: the I comes first in knowledge, but the it comes first in being.

Though it's pretty late at night where I live, this doesn't even remotely look like an argument that favors idealism over materialism, yet this is the only text that appears in that entire section. Did you leave something out? As it stands, this is simply a statement, not an argument: it has no criterion that allows us to choose one view over another and does not contain any evidence that we can update on. Unless there's something that was omitted or something obvious that I'm missing that's contained in these two sentences, I don't see any substance here. Tetronian you're clueless 04:21, 1 March 2011 (UTC)

Incessability of the soul[edit]

In the section of the essay bearing the same title as this one, you explain the following:

1. I know the meaning of a statement if I know what it would be like for it to be true
2. I cannot know what it would be like for "I (will) cease to exist" to be true (any experience which might be compatible with it, such as dying, is also compatible with its falsehood, i.e. dying followed by an afterlife)
3. Hence, by 1 and 2, I cannot know the meaning of the statement "I (will) cease to exist"
4. If I cannot know the meaning of a statement, I cannot assert its truth
5. Hence, by 3 and 4, I cannot assert the truth of the statement "I (will) cease to exist"
6. But I can assert the truth of the statement, "I will never cease to exist" (by (1), I know, very broadly speaking, what experiences I would expect to have were this true)
7. If I can assert X but cannot assert not(X), I must assert X [The inability to assert the negation of a statement, combined with the ability to assert the statement itself, is sufficient justification to assert that statement, and I rationally ought to assert it]
8. By 5, 6 and 7, I will never cease to exist.

There is a lot of stuff here to discuss, but I'll try to keep it brief. First of all, there is a lot of information that you're sneaking into this proof with the phrase "I cannot know what it would be like for 'I (will) cease to exist' to be true." The problems with this are as follows:

  • We can apply reductio ad absurdum here. I can't "imagine" what it feels like to sleep because I really don't have any memory of the sensation of being asleep. But does this mean that I'm always awake? Of course not - it means that there is something wrong with my reasoning. (Because, after all, I can make a video tape of myself sleeping to confirm (or, more precisely, give strong evidence in favor of) it later.)
  • What the statement really says is "I can't know what it feels like from the inside to cease to exist." (I'm pretty sure that this is true even if you don't accept materialism or the proposition that you consist of cognitive algorithms; it's only untrue if you are a solipsist. (I think.)) But I can imagine what it would be like from an outside perspective - I know that my brain functions would shut down and I would lie there, lifeless. Or, if I were of an idealist bent like yourself, perhaps I would say that the minds of others would continue to exist while mine vanished. Or something like that. The point is, you are confusing these inside and outside views and claiming that only the inside view is valid, yet this is obviously not true for the reason discussed in bullet #1.
  • This logic borders on argument from incredulity again, because, if we combine bullets #1 and #2, you are just saying that you can't know from the inside view alone what it is like to cease to exist. But, again, this ignores the outside view and ends up looking very similar to "I can't imagine what it looks like to die, therefore I can't die."

While the logic itself may or may not be valid, the premises that the whole thing is based on are not. There are a lot of ideas that get secretly inserted into the argument through the definitions of the terms and concepts, and not all of these ideas are valid. Tetronian you're clueless 04:40, 1 March 2011 (UTC)

As to sleep, "I know the meaning of a statement if I know what it would be like for it to be true". I know what it is like to fall asleep. I know what it is like to wake up. I know what it is like to dream. Even dreamless sleep, I know what it is like -- I have the experience of falling asleep, and subsequently the experience of waking up. I could, as you suggest, video tape myself asleep, and watch the video later. This is different from the cessation of existence -- I know what it is like to X, if not at the exact same time as X, then at some other time. But in the case of cessation of existence, there is no time at which I know what it is like to cease to exist.
We know from relativity that we aren't all in the same time, but in numerous different times, albeit related. Theoretically, the curvature of space due to gravitation is almost surely different at every single point in the universe, and hence the time dilation at every point in the universe due to general relativity is slightly different. Likewise, almost all particles have a slightly different velocity from every other particle, so the SR time dilation at every point is slightly different. So, everywhere in the universe is a different time (every atom in your body has its own time), but usually the differences are so absolutely minute as to be utterly inconsequential.
I would like to introduce an idea -- idealist subjective time (IST). This is not in any way a consequence of GR or SR -- it is an idea which is vaguely inspired by them, a metaphor or analogy, not a consequence. The point of connecting it to GR/SR, is that if the GR/SR conception of time is coherent, then arguably IST is coherent too -- note, the question of 'coherence' is a completely different question from truth or evidence. Whereas in GR/SR, every point or particle has its own time, but all these times can be related to each other -- in the same way, in IST, every mind has its own time, but these times can be related to each other. But, IST has discontinuities around dreamless sleep. So, you sleep for 8 hours, I spend the whole time awake watching you (it's not creepy, it's a philosophy experiment!). Assume I give you some drug which prevents dreaming, so your entire sleep is dreamless. In IST terms, before and after you sleep, our two times are at 1:1 dilation to each other. But, during your dreamless sleep, our times exist in a 0:8h dilation -- in other terms, an infinite time dilation. It is as if your time jumped forward discontinuously by 8h. But there is no discontinuity in our individual times, your falling asleep experience is followed immediately by your waking up experience. The discontinuity only appears when we compare our times.
The point of this whole IST discussion is that I can give an interpretation to unconsciousness with the conclusion that I only exist during my consciousness, yet at the same time avoid the scenario where sleep amounts to a cessation followed by a recommencement of my existence. Of course, the 'IST' concept of 'time' is not the same as the physics concept of 'time'. But may I suggest that 'time' in physics may not match up exactly with the quotidian concept of 'time'; it may actually be a somewhat different yet related thing. Likewise, 'IST' 'time' would have some relation to the quotidian & physics concepts of time. Possibly (I am not claiming this, it is just a suggestion), IST may even match the quotidian concept better than the physics one does.
So, in conclusion, there is something fundamentally different between 'sleep' and 'death'. A deathless sleep, a sleep in which one does not die, always has a coming-after experience; I know what it's like to sleep because I know what it's like to wake up. Likewise, if there is an afterlife, I know what it's like to die because I know what it's like to enter the afterlife. But if there is no afterlife, I don't know what it's like to die. I know what it's like to sleep deathlessly, but I don't know what it's like to die. Whereas, if there is an afterlife, I know what it is like to sleep deathlessly, and what it is like to die.
This is exactly an "I don't know what it feels like (from the inside)". I know what sentence S means if I know what it feels like (from the inside) for the statement S to be true. I don't know what it feels like for me to cease to exist, so I don't know what the sentence "I cease to exist" means. I might think I know what it means, but I don't. The problem with imagining "from an outside perspective" what it would be like for me to cease to exist is that my imagination in that case is the same as what others would see if I went on to an afterlife instead of ceasing to exist. I'm not really imagining what it is like to cease to exist -- I am imagining what it is like to watch someone else die, regardless of whether there is an afterlife. The point remains, there is no unique logically possible experience which corresponds to the sentence "I cease to exist", so the sentence has no meaning -- any experience which supports it supports equally well the scenario where I went to an afterlife. There are numerous unique logically possible experiences corresponding to the sentence "I die", all of which involve some form of afterlife in which I found out I just died.
Even an idealist interpretation of 'I cease to exist' has to interpret it in terms of someone's experiences. There are no experiences anyone (in this life) would have, if I ceased to exist, which they would not have were I to go on to some afterlife instead. The only difference between the scenarios is the afterlife experiences -- a null set in the cessation case, non-null in the non-cessation case. (Let us ignore any possible fellow denizens of the afterlife who might have experiences of me joining them there.)
Of course, this all assumes you accept my theory of what 'meaning' is. Feel free to propose another one.
PS, "Incessability" (cannot cease) not "Inaccessability" (cannot be accessed). Maybe a bit of a neologism, but I hope the meaning is clear --Maratrean (talk) 13:13, 1 March 2011 (UTC)
There's a lot for me to talk about here, so I'm going to take it piece-by-piece.
First, let's talk about sleep. I obviously agree with your statement "I know what it is like to fall asleep. I know what it is like to wake up. I know what it is like to dream. Even dreamless sleep, I know what it is like -- I have the experience of falling asleep, and subsequently the experience of waking up. I could, as you suggest, video tape myself asleep, and watch the video later." What I was trying to get at with my sleep example was this: since sleep is a time in which you have no experience, it is analogous to death for the purpose of your proof. But apparently you consider knowing about sleep after-the-fact to be satisfactory, although I'm not entirely sure why. You said "This is different from the cessation of existence -- I know what it is like to X, if not at the exact same time as X, then at some other time. But in the case of cessation of existence, there is no time at which I know what it is like to cease to exist...I know what it's like to sleep deathlessly, but I don't know what it's like to die." This doesn't make sense to me because we can visualize/predict what it would be like to die before it happens, just like we can visualize what it would be like to fall asleep before we actually do so. Does this not constitute "knowing X"? And if not, why not? You see, it seems to me when we imagine ourselves falling asleep (or, if that example irks you because of IST, into a coma) through a third-person perspective, this is "knowing X." Likewise, we can imagine dying in a similar way and thus "know X" with respect to dying.
This of course is related to my comments about inside vs. outside view. I think it would be helpful if I broke it down a little more thoroughly. First of all, let's examine exactly what "I" is in the first place. "I" is a collection of cognitive algorithms produced by the interactions of the neurons and glial cells in your brain. Your brain, in turn, is made up of atoms, as is the rest of your body. Since you are implemented on a brain, the way the world looks is often distorted and biased. As I said before, our cognitive algorithms often feel like reality, but they are often incorrect. From your comments in one of the sections above it seems like you agreed with this premise because it doesn't strictly imply materialism, but correct me if I'm wrong about what you said. If you do disagree, I would simply argue that this conclusion is overwhelmingly supported by scientific evidence and once again link to that one RW article on neuroscience. So, with this definition of "I" in place we can be a little more specific when talking about outside view vs. inside view.
One additional example of the inside view that I didn't mention before is a phenomenon that the mathematician E. T. Jaynes called the Mind Projection Fallacy, an error in which we attribute properties of our own mind to things in reality (for example, saying that "beauty" is a fundamental property of reality is wrong - it is a function of our preferences and beliefs about reality). This is true regardless of your philosophical viewpoint with the exception of solipsism or some kind of factual relativism. Jaynes's examination of the subject can be found here, see paper #68. I'm mentioning this because (a) it's an excellent example of just how different our inside view is from the outside view, and (b) I'm beginning to suspect that this fallacy is part of what's creating the confusion here. With this basic framework in place, let's compare the inside view to the outside view. As I've said above and I think you agreed with, the inside view is what the world looks like through the lens of our cognitive algorithms - what it "feels" like. The outside view, by contrast, is what the world looks like from a reductionist perspective that observes our cognitive algorithms as the sum of their parts rather than in the confused way that we see them.
Your argument seems to work something like this (I'll quote you): "I know what sentence S means if I know what it feels like (from the inside) for the statement S to be true. I don't know what it feels like for me to cease to exist, so I don't know what the sentence "I cease to exist" means. I might think I know what it means, but I don't." Now, before we even begin comparing the outside view to the inside view with respect to this sentence, let's talk about some potential problems with your reasoning. We know that the inside view is inaccurate in that it is subject to the biases in our cognitive architecture; for this reason alone it is not safe to say "I cannot imagine X, therefore X cannot be true." In fact, this is a textbook example of the Mind Projection Fallacy. Note that I am not saying that this line of reasoning is always wrong, just that it is sometimes wrong and is wrong frequently enough such that it should be suspect. This alone casts a lot of doubt on the power of your proof, and it should downgrade its ability to update our prior probability by a very significant amount (but not invalidate it entirely). In case this argument is unclear, let me summarize it: the scope of our imagination is suspect.
Now, let's examine all this with respect to both the inside view and outside view. You wrote: "I might think I know what it means, but I don't. The problem with imagining 'from an outside perspective' what it would be like for me to cease to exist is that my imagination in that case is the same as what others would see if I went on to an afterlife instead of ceasing to exist. I'm not really imagining what it is like to cease to exist -- I am imagining what it is like to watch someone else die, regardless of whether there is an afterlife." I see what you are getting at - in the outside view scenario, you are arguing that "you" are not really the one undergoing the sensation of ceasing to exist. But I think that if you can examine the event of dying from a nearly-omnipresent outside view (you can watch your brain decay atom-by-atom), then you can see what the event we call "ceasing to exist" refers to without the biases of the inside view, specifically, a certain configuration of the atoms in our brain. Obviously there is no reason why the universe would forbid this particular configuration that corresponds to death (unless you can provide one), since it is clear that this configuration should be allowed (from the outside view, that is). As a result, I don't see why you object to using this perspective in the argument - if we agree that brains cause minds and that we can predict what will happen to your brain when your inside view feels the sensation of dying, then it's obvious that "ceasing to exist" can happen. I can elaborate if this isn't clear.
I would also like to bring up an additional thought-experiment which I think invalidates your proof. Let's invent a new mental state which we'll call "fuzzle." As soon as you experience fuzzle, you fall asleep and wake up with no memory of the experience. Now, before you experience fuzzle you can imagine what it will look like from an outside perspective but you don't know what it will feel like from an inside perspective. Therefore, according to the same criteria you applied to ceasing to exist, you cannot "know" fuzzle until you experience it. But this obviously does not mean that the inside view experience of fuzzle doesn't exist, because you will know what it feels like when it happens to you. Like death, you can't access the feeling of fuzzle after-the-fact because it causes amnesia. My point is this: by your criteria, fuzzle is basically the same as death in that you can't know it with an inside view before it happens and you can't go back and examine it later (it matches the standards you set with IST). Yet this is obviously not enough to prove that fuzzle can't exist, even if we are only looking at the inside view. For this reason I believe your reliance on the inside view alone and your subsequent proof are invalid. Let me know if this is unclear. Tetronian you're clueless 02:56, 2 March 2011 (UTC)

False trilemma in dualism section?[edit]

I am not the philosophical type myself (I think that idealism is about the worst idea ever invented), but it seems to me that you present a false trilemma in the section marked "Why reject dualism." Making a rough rendering of it using set-operations, where M is the set of events attributable to mind, and T likewise for matter, you would have it that either

  1. Mind and matter are entirely independent (M ∩ T = ∅), or
  2. matter reduces to mind (T ⊆ M), or
  3. mind reduces to matter (M ⊆ T).

What of the fourth possibility, the general case where M ∩ T ≠ ∅ but neither M ⊆ T nor T ⊆ M, i.e., only some aspects of mind reduce to matter, or vice versa? ListenerXTalkerX 05:25, 1 March 2011 (UTC)

@LX: That's very interesting! Assuming that this fourth possibility is coherent, then your argument against Maratrean's logic definitely stands. Either way, it sounds like the kind of thing Philip K. Dick would write. Tetronian you're clueless 05:34, 1 March 2011 (UTC)
Well, the fourth possibility -- mind partially reduces to matter, or matter partially reduces to mind -- which part and how? What reduction are we talking about here?
I agree, it is theoretically possible, but without some fleshing out as to exactly what part reduces and what doesn't, and what kind of reduction is involved, it's really hard to say anything about it.
I would expect it would have the same problems of interactionism that dualism does -- dualism must explain how mind and matter interact. This 'partial dualism' must explain how the irreducible part of mind interacts with the irreducible part of matter and vice versa.
So, I don't think it makes any difference to my argument. If someone, hypothetically speaking, adhered to this 'partial dualism' -- to my knowledge no one ever has -- then their theory would have the same fundamental problems which 'full dualism' has, but which idealism and materialism avoid.
Thus, as in my essay, we can dispense with 'dualism' (full or partial) as a possibility, and just move on to consider materialism v. idealism. --Maratrean (talk) 12:21, 1 March 2011 (UTC)
First of all, let me say that it's very hard to argue about this because it's hard to visualize a concrete example.
Maratrean, I don't think your argument is entirely correct because you are wrong about the problems with dualism. The reasons that most materialists and atheists don't take dualism seriously look something like this:
That is, the main problem is not that we don't know how mind and matter would interact (though this might be an objection as well). As a result, you and I probably reject dualism for different reasons. But since we both object to it, I don't see any reason to make this page any longer than it already is by continuing to talk about it. Tetronian you're clueless 23:58, 1 March 2011 (UTC)
"This 'partial dualism' must explain how the irreducible part of mind interacts with the irreducible part of matter and vice versa." By definition, they do not interact.
"...to my knowledge no one ever has..." Perhaps I am vastly misunderstanding the terms here, but it would seem to me that the standard Christian view of body vs. soul is an example of this, there being hypothesized to be matter that has nothing to do with souls, matter to which a part of the soul can be "reduced" (e.g., the brain) and an eternal part of the soul entirely separate from matter. ListenerXTalkerX 04:49, 2 March 2011 (UTC)

mmmhuh[edit]

So what was going on before you were born? Sen (talk) 00:29, 2 March 2011 (UTC)

What is music? C®ackeЯ