Talk:Mind uploading

The sentient computers see nothing improbable in mind uploading. 212.85.6.26 (talk) 14:38, 7 April 2011 (UTC)

If some form of dualism needs to be present for the "real you" not to die while your copy pretends to be you, then doesn't some form of dualism need to be present for there to be an objectively real you which can't be copied over? If you create some kind of philosophical zombie version of your mind which only emulates your thoughts, then kill the original, doesn't that mean the original had some kind of uncopyable trait to it, and therefore some quality that exists outside traditional neurology (something a rational thinker would reject)?

Why would a rational thinker reject the existence of qualities neurology doesn't know about? --145.94.77.43 (talk) 17:15, 26 January 2012 (UTC)
If you have yourself cloned, is there any reason to believe that you would have the phenomenological experience of being both you and your clone at the same time? Nebuchadnezzar (talk) 18:53, 26 January 2012 (UTC)
Who's claiming that you would? Hmmph (talk) 02:17, 30 May 2013 (UTC)

The easily fooled "theory of mind"

I've wondered lately if god beliefs, SETI, the Turing Test, having "conversations" with the Siri app on your iPhone and speculations about "mind uploading" all show the bugs in our "theory of mind" which often mistakenly attributes minds to mindless things. Subjecting yourself to an alleged "mind upload" which destroys the original could result in committing suicide while leaving behind a mindless Siri-like program which can fool human observers into believing that you somehow survived the process. Advancedatheist (talk) 17:24, 13 April 2012 (UTC)

If the Siri-like program can fool human observers into believing that you survived the process, then it is clearly not mindless. It is you. Hmmph (talk) 00:02, 21 October 2014 (UTC)

(5 years later...)

Advancedatheist probably meant something similar to a Chinese-room symbol manipulator, or a p-zombie, which is you to the outside observer, but is not conscious. — Unsigned, by: 14.1.28.165 / talk

Clarifying misconceptions

Gonna do a re-write of this page later today; it carries quite a few misconceptions about mind uploading, the first being that "Swamp Man" thing, which disregards the idea that the mind is a series of patterns and connections that can be moved to a new medium. Mind uploading is not necessarily mind cloning. Please inform me of objections to the new page.

I would hesitate to change too much. We have no fewer than two people here (tmt and nebby) who study the brain and how it functions, and they might take exception to the idea that the mind is a series of patterns and connections that can be moved. Tread with caution, young grasshopper--GodotTut tut, looks like rain 18:54, 21 June 2012 (UTC)
Very well. If the mind were not a series of patterns and connections formed by neurons, that would amount to a stance of dualism, and if it's entirely possible to imitate every particle and every force acting upon them, it is possible to simulate a brain at the atomic level. However, that isn't necessary, as one can replace neurons with computer parts that simulate the neurons lost and retain consciousness; if that weren't the case, then the 7-year cycle of cellular renewal would also have to be false. Mind uploading as such relies on simulating neurons, which a computer can do, given the ability to communicate with another computer using 4 binary functions to represent neurotransmitters and appropriate software to simulate a response. Hormones can also be simulated by influencing the neurotransmitter functions. Thus, it stands to reason that by using appropriately sized computers and working software one could not only simulate a brain but use it to gradually replace one until consciousness is fully transferred to computer parts. ZombiezuRFER (talk) 08:19, 30 June 2012 (UTC)
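(An aside for the technically curious: the sort of neuron simulation being invoked here, stripped to its barest form, might look like the toy sketch below. It is a leaky integrate-and-fire model; the function name, parameters, and threshold are illustrative assumptions, not a claim about what an actual upload would require.)

```python
# Toy leaky integrate-and-fire neuron: accumulate input current,
# let it decay each step, and emit a spike when a threshold is crossed.
def simulate_neuron(input_currents, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)    # the neuron "fires"
            potential = 0.0     # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.5, 0.4, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```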
The 7-year cycle of cellular renewal is false; with some neurons you're pretty much stuck for life. Also, talking about retaining or transferring consciousness without really knowing what consciousness is doesn't make much sense; if I remember correctly, there is at least one hypothesis that consciousness is in part a result of the material the brain is made of.--78.2.132.31 (talk) 05:48, 20 July 2012 (UTC)
John Searle, wasn't it? He argued the Chinese Room thought experiment proved AI was incapable of being truly intelligent, but even if some part of neurobiology is essential to consciousness, that doesn't prevent it from being simulated by a computer. Furthermore, the Chinese Room was already shown to have proven nothing, so it's clear that consciousness, for lack of a better term, "emerges" from the structure present in the brain. ZombiezuRFER (talk) 23:59, 20 July 2012 (UTC)
Nothing in philosophy is ever proven or disproven. As the Chinese Room article demonstrates, there are objections to the thought experiment, and objections to the objections, and so on. It's just opinion, not science. Hmmph (talk) 20:12, 17 November 2013 (UTC)
If this is true, mind uploading is a natural and already continuously occurring process which cannot be avoided. - Aurelian Carpathia (talk) 14:29, 10 November 2013 (UTC)
If that is true, it's entirely irrelevant to mind-uploading as the present article defines it. SophieWilderModerator 14:34, 10 November 2013 (UTC)
It is true, and irrelevant only to those who don't understand the subject. The natural, continuous, gradual replacement of almost all of the body's atoms - including in the brain - proves personal identity is patternistic rather than materialistic. The swamp man thought experiment is actually an inescapable, omnipresent fact of life; we are all 'swampmen.' Furthermore, total brain deactivation in profound hypothermia and cardiac arrest proves continuity of consciousness is irrelevant. Thus, hypothetical future whole brain emulation and teleportation procedures conclusively and objectively do not destroy the preexisting self. Mind uploading, then, is neither more nor less uncertain than space colonization; neither has yet occurred, but both are certainly possible, as Stephen Hawking understands. - Aurelian Carpathia (talk) 13:47, 1 December 2013 (UTC)
It is not true. The body's atoms aren't all replaced continuously; that's a myth. Neuron DNA, tooth enamel, eye lens proteins are never replaced, for instance. But going to sleep every night demonstrates that continuity of consciousness has nothing to do with personhood. Hmmph (talk) 00:00, 21 October 2014 (UTC)
I changed it to "many", though the relevant criterion would presumably be atoms in the brain - David Gerard (talk) 09:26, 21 October 2014 (UTC)

"Minds" don't exist[edit]

The main problem with this whole thing is that it's intrinsically based on dualism.

"Minds" are just brains functioning - they are not separate things. It's not possible to separate the mind from the brain and upload it as the mind is simply the result of the brain doing stuff. It has no separate existence and thus can't be uploaded.--Weirdstuff (talk) 21:43, 24 November 2014 (UTC)

I think you've got it a bit off there. This does not require dualism. You could maybe upload enough data into a sophisticated simulation of your brain that your "self" will find itself in a computer. (self in quotation marks because this raises questions about what we feel identity is) Nullahnung (talk) 22:07, 24 November 2014 (UTC)
But the mind itself is not "data" - we use the word mind as a metaphor for working brains. It has no existence and thus cannot be "uploaded".
You seem to be shifting the argument towards creating a working copy of an individual brain. This idea can at least be considered - though it has its own massive problems. But "mind uploading" is absurd as the mind is not a thing.--Weirdstuff (talk) 21:23, 25 November 2014 (UTC)
At this point we are just talking semantics. Nullahnung (talk) 21:39, 25 November 2014 (UTC)
The mind is not a (physical) thing because dualism is false? That's a very odd argument. Ikanreed (talk) 21:47, 25 November 2014 (UTC)
He's not saying what we consider "the mind" doesn't come about through physical processes, but that it's not a substantial thing, be it of physical or metaphysical substance. 141.134.75.236 (talk) 00:36, 26 November 2014 (UTC)

This article is a tangled mess

The scientific plausibility needs to be completely separated from the philosophical implications - David Gerard (talk) 12:38, 26 April 2015 (UTC)

Not all theories of consciousness are equal

Unlike an identity, which is a composition of information stored within a brain, it is reasonable to assume that a particular subjective consciousness is an intrinsic property of a particular physical brain. Thus, even a perfect physical copy of that brain would not share the subjective consciousness of that brain.

The idea that two identical consciousnesses are discrete is debatable, and is an idea that cannot be objectively proved or disproved. Some materialist theories of consciousness, such as functionalism, seem to allow mind uploading. Whether the claim that “even a perfect physical copy of that brain would not share the subjective consciousness of that brain” is true or not depends on what theory of consciousness you subscribe to.

Allowing identical consciousnesses to be different raises more questions. For example, are changes made to the brain during non-REM sleep deadly? If you get a concussion after passing out, does that kill “you” and replace “you” with a different consciousness? What happens to consciousness in a split brain? Is a hemispherectomy a lethal procedure? —ShapeshiftingLizard ~▲~ hear me roar ~▲~ 18:01, 3 May 2015 (UTC)

I wrote that. It does not seem reasonable that two consciousnesses in two separate, cotemporal brains can share the same simultaneous subjective experience. If you disagree, perhaps a thought experiment could help me to see your POV. Also, please explain what you mean by "identical consciousnesses". I'm not sure if you could ascribe any qualities to a consciousness so as to describe two or more as being "identical".
Dr. Colin Hales <https://www.youtube.com/watch?v=L2C99UECCGU>, someone who knows a lot more about the brain than me, describes consciousness as 'what the brain does' rather than information or structure that can be copied. According to him, it's a function, not a "thing", such that taking the consciousness out of a brain is no less nonsensical than taking the power stroke out of an engine.
Also, IIRC people with split hemispheres have two separate streams of consciousness, arising from two separate consciousness-producing organs. Androidian (talk) 05:58, 4 May 2015 (UTC)
"Whether the claim that “even a perfect physical copy of that brain would not share the subjective consciousness of that brain” is true or not depends on what theory of consciousness you subscribe to." Scientific theories conform to reality, not vice versa. The statement either true or false, regardless of what theory you subscribe to. Unless these two brains are connected, there is no reason to assume that they share a consciousness. There is no reason to assume that perfectly structurally identical objects would have some sort of magical connection between them. Androidian (talk) 06:09, 4 May 2015 (UTC)
Consciousness is not within the realm of science; consciousness cannot be measured, so assertions about the nature of consciousness cannot be proven true or false.
By “identical consciousnesses,” I mean two identical brains. If the brains are identical, and are subjected to identical stimuli, then what they experience should be identical, and the brains don’t need to exchange information (i.e. it’s not the dualistic sense of possessing two bodies, as one body isn’t aware of the other; rather, if their experiences are equal, then are they really different?) If they are subjected to different stimuli, then they are no longer identical, and the concept of them being not equal holds water.
A split-brain person was originally one consciousness. Does severing the corpus callosum kill the patient, or does it transfer the consciousness into one hemisphere of the brain? If we assume that “even a perfect physical copy of that brain would not share the subjective consciousness of that brain,” then it stands to reason that both hemispheres of a split-brain person cannot share the subjective consciousness of the original person. —ShapeshiftingLizard ~▲~ hear me roar ~▲~ 21:54, 4 May 2015 (UTC)
"Consciousness is not within the realm of science; consciousness cannot be measured, so assertions about the nature of consciousness cannot be proven true or false." I beg to differ. The study of consciousness is an active field of scientific research. Consciousness is a physical phenomenon, not some sort of magic, and as such it is well within the realm of science. Were it not, surely it would not be something we could hope to artificially replicate.
"If the brains are identical, and are subjected to identical stimuli, then what they experience should be identical, and the brains don’t need to exchange information" - If two brains share a consciousness, anything that one brain experiences the other will necessarily experience. There would be no need to ensure that they are subjected to the same stimuli; the fact that there is only one consciousness to perceive any stimuli ensures that both brains experience all stimuli presented to either of them. Of course, for a multitude of reasons two brains cannot actually be identical, not the least of which being that they do not share the same frame of reference. They will necessarily have different experiences.
In split brain persons, both hemispheres are independently conscious, however impaired. The two hemispheres do not share a subjective consciousness. Any such major physical alteration to the brain that does not render it incapable of producing consciousness will profoundly alter said consciousness. In the case of split brain, one consciousness becomes two lesser consciousnesses. Androidian (talk) 23:26, 4 May 2015 (UTC)
Consciousness certainly isn’t magic. However, it cannot be observed; hence unfalsifiable ideas such as p-zombies and solipsism. I mean, I can certainly observe my own consciousness, but there is no way to prove that anyone else is or isn't a p-zombie. Theories (it would be more accurate to call them hypotheses, conjectures, or beliefs) can allow or disallow p-zombies, but theories of mind tend to be unfalsifiable, and they seem to only ever be falsifiable when they are wrong (such as substance dualism, which shouldn’t allow split-brain.)
Why should either of the consciousnesses of a split-brain person have any claim to being the original person? What makes their tissues special, that doesn’t apply to a perfect copy of a brain?
“If two brains share a consciousness, anything that one brain experiences the other will necessarily experience. There would be no need to ensure that they are subjected to the same stimuli; the fact that there is only one consciousness to perceive any stimuli ensures that both brains experience all stimuli presented to either of them.” In order to keep two identical brains identical (in make-up; equal frames of reference are not required), they need to be subjected to the same stimuli, so that what one brain sees, the other brain sees, and what one brain does, the other brain does; when this is done, any distinction between them cannot be measured. (It is logical that one brain, a physical object, can’t be in two places, but consciousness isn’t a physical object; it is a process or an abstraction of a process, so why can’t two identical consciousnesses be the same, just as 1+1 = 1+1 regardless of what calculator it is performed on?) When they are subjected to different stimuli, the brains are no longer identical, and the difference between the two brains becomes meaningful; it can be scientifically observed. One might think of this as "diverging" rather than "copying", as both brains would (or could, depending on what theory of mind is objectively true) have an equal claim on being the original person, but not on being each other.
Of course, I realize that a molecule-for-molecule copy of a brain is impossible. But so are the conditions imposed by other common philosophical thought experiments, such as the swamp man. A lightning bolt cannot rearrange swamp atoms into a human being. However, in some theories of mind, a complete simulation of a brain on a Turing machine would be conscious; as Turing machines are deterministic, you could (theoretically, assuming no random number generators) run two instances of the same brain and get the same results. —ShapeshiftingLizard ~▲~ hear me roar ~▲~ 12:16, 5 May 2015 (UTC)
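(The determinism point at the end lends itself to a trivial demonstration. The update rule below is an arbitrary stand-in for "the same brain simulation", not a model of anything neural; the point is only that identical deterministic programs fed identical inputs cannot diverge.)

```python
# Two instances of the same deterministic "simulation" never diverge
# when given identical stimuli. The update rule is an arbitrary toy.
def step(state, stimulus):
    return (state * 31 + stimulus) % 1_000_003

stimuli = [4, 8, 15, 16, 23, 42]
state_a = state_b = 0
for s in stimuli:
    state_a = step(state_a, s)
    state_b = step(state_b, s)

assert state_a == state_b  # same inputs, same resulting state
```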
Science is built upon a priori assumptions without which no theory would be falsifiable. The law of parsimony and inductive reasoning suggest we ignore the possibility of p-zombies or solipsism and assume that if it is indistinguishable from a conscious entity, it is a conscious entity. This is the foundation of the study of machine consciousness. If we make assumptions that lead to the conclusion that consciousness cannot be observed, how could we even attempt to create an artificial consciousness?
By "make up" I will assume you mean structure. Brain structure constantly changes as memories are formed and lost, and the frame of reference (perhaps I should say point of view) is crucial part of what memories are formed.
"It is logical that one brain, a physical object, can’t be two places, but consciousness isn’t a physical object, but a process or an abstraction of a process, so why can’t two identical consciousness be the same, just as 1+1 = 1+1 regardless of what calculator it is performed on?" Consciousness is a physical process that arises from a physical brain. Consciousness is not information. It is not like 1+1 on a calculator. It is not like a computer program. Like a computer processor does 'processing' a brain does 'consciousness' (among other things). The idea of "identical consciousnesses" is not useful.
You could say that the consciousnesses of a split brain have diverged from the original. In a split brain person, both hemispheres hold a "fragment" of the original consciousness, respective to the fragment of the original brain that produced that portion of the original consciousness. Like the two hemispheres of a split brain person, the two brains in your thought experiment do not share a consciousness, but unlike the two hemispheres, they never did. The very fact that they can "diverge" in the manner you describe (by exposing one to different stimuli than the other) proves that they do not share a consciousness. If they shared a consciousness, they would have the same experiences not because they were exposed to the same stimuli, but because they have shared awareness. You stimulate one brain, the other is stimulated de facto. That is shared consciousness.
In essence you are saying that the two brains can have different experiences (which will cause them to diverge), and that they share a single conscious awareness, which necessarily means that they share all the same experiences. This is a contradiction. If two brains share a consciousness, they cannot "diverge" in such a manner. It seems that in this sense you are conflating consciousness with memory or identity. Androidian (talk) 15:38, 5 May 2015 (UTC)
You lost me at "assume that if it [is] indistinguishable from a conscious entity, it is a conscious entity."
The difference between "indistinguishable from a conscious entity" and "I cannot distinguish it from a conscious entity" is a significant one and, in my opinion, a deal-breaker. The latter is perhaps demonstrable; the former, not so much. Flux gate gamma (talk) 16:01, 5 May 2015 (UTC)
Could you truly know beyond all doubt that I am a conscious individual? Because I am apparently human you assume I am conscious, presumably because you are a human and conscious. Is there a rational reason to reject the claim of consciousness from a machine that appears to be as conscious as I am? Androidian (talk) 18:44, 5 May 2015 (UTC)
For all I know, you are a bot. You will have to lay out the parameters of "appearance" in "a machine that appears to be as conscious as I am" in a whole lot of detail before any discussion will be worthwhile. Flux gate gamma (talk) 20:27, 5 May 2015 (UTC)
I think that you would have to lay out those parameters. I have no idea of what your standards are, though apparently they are very high. By what metric do you determine whether or not an entity is conscious? Androidian (talk) 22:45, 5 May 2015 (UTC)
You made the assumption, you lay out the parameters. Proving "indistinguishable" is proving a negative, equivalent to proving "cannot (ever) be distinguished." Good luck with that. Flux gate gamma (talk) 23:40, 7 May 2015 (UTC)
I can easily say what would convince me, though I doubt you're interested. I have no idea what convinces you. Also, this assumption is not mine, it is the foundation of AGI research. Androidian (talk) 00:26, 8 May 2015 (UTC)
If a person has a genetic disorder that causes their body to replace all the atoms in its cells every 10 years, then is that person doomed to “die” (as in, have their consciousness replaced by another) every 10 years, assuming the condition cannot be cured? —ShapeshiftingLizard ~▲~ hear me roar ~▲~ 19:50, 5 May 2015 (UTC)
There are many factors to consider here, so I cannot give a definite answer. Perhaps there is a reason why this does not normally occur? Effectively, it seems that it would be no different than scanning and copying the brain and destroying the original. It's also important to remember that consciousness is not constant over the lifetime of an individual brain. The brain adds neurons as it grows from fetus to adult, and its consciousness changes accordingly. Androidian (talk) 22:45, 5 May 2015 (UTC)
But why should the natural modifications to a human brain be exempt from whatever philosophical problems arise from a gradual upload to a machine? If even gradual transfer to a machine is not allowed, then gradual transfer from the past state of the brain to the future state of the brain should not be allowed either.
Another thought experiment: Half of a person’s brain might be uploaded to a computer capable of simulating a brain, and then the computer is attached to the person’s nervous system, so that the person can’t perceive any changes to their consciousness, nor do they have any of the physical disabilities of a hemispherectomy patient, nor do they have the cognitive or sensory disabilities of a split-brain person. If a person isn’t aware that half of their “consciousness” has been replaced, then on what grounds can one say that half of their consciousness has been replaced, without violating Occam’s razor? The idea of “uniqueness of consciousnesses” is too similar to souls to be believable. —ShapeshiftingLizard ~▲~ hear me roar ~▲~ 02:23, 10 May 2015 (UTC)
I was partly in error in my last post. Neurogenesis ceases in the fetal stage. From the fetal stage onward neurons die without replicating or being replaced. Gradual replacement of neurons is no different from destructive copying: the only difference is the distance and time intervals. The brain's structure changes continuously over time, and it is this structural change over time that permits consciousness. Consciousness is not 'transferred' "from the past state of the brain to the future state of the brain".
Let us assume that a virtual representation of a brain can function like a real brain. "The person can’t perceive any changes to their consciousness" is also an assumption that deserves critique but I will abstain for now. The brain in your thought experiment is like one hemisphere of a split brain that has been fused to another alien hemisphere. If we connected the (presumably discarded) second hemisphere of that split brain to a functional computer representation of the first hemisphere, do we now have two brains that share a consciousness? I say no, per the logical contradiction I demonstrated earlier. Therefore, the original consciousness is not preserved but split into two, like in a split brain.
"If a person isn’t aware that half of their “consciousness” has been replaced, then on what grounds can one say that half of their consciousness has been replaced, without violating Occam’s razor?" Is a dead person aware that they have died? One cannot be aware of their own loss of consciousness per se, but we do not take this to mean that consciousness cannot be lost. Neurological research thus far supports the idea that a person's consciousness is intrinsic to the physical brain and the consciousness producing parts thereof. If you remove one of the hemispheres and replace it with a computer representation, it follows that you would see the same sort of split in consciousness that is seen in the split brain, but with all the function of a whole brain. This does not require or resemble the idea of a soul. If anything resembles the concept of a soul, it is the idea of a 'migrant' consciousness discrete from a "consciousness producing machine", an idea which is scientifically unsupported and demonstrates a fundamental misunderstanding of the nature of consciousness. Androidian (talk) 17:51, 10 May 2015 (UTC)
“The brain in your thought experiment is like one hemisphere of a split brain that has been fused to another alien hemisphere.” It is more similar to reconnecting the two hemispheres of a split brain, or perhaps even more similar to replacing one half of a brain with a perfect duplicate of that hemisphere. If a split-brain person has two consciousnesses, and the two hemispheres are reconnected, then do they still have two discrete consciousnesses, or do they merge into one? And if consciousnesses can merge in this manner, then why can’t a consciousness merge with a perfect physical duplicate of one hemisphere, or a computer simulation of that hemisphere?
“If we connected the (presumably discarded) second hemisphere of that split brain to a functional computer representation of the first hemisphere, do we now have two brains that share a consciousness?” If the two brains are subjected to different stimuli, then no, they effectively split and diverge, because they have different experiences. Otherwise, they have equal experiences, and there is no meaningful distinction between them. (I would prefer to refer to it as “consciousness being performed in two places,” rather than “a shared consciousness,” as the latter wording is a misleading way of putting it. However, arguing that the splitting happens when the duplicate is first created instead when the duplicate and source diverge does not refute the splitting and diverging idea, as both consciousnesses still share a common origin regardless of when the split happens.)
“Is a dead person aware that they have died?” No, but we can objectively determine that they are no longer conscious, because their brain activity has ceased. On the other hand, imagine if a skilled surgeon replaced all the cells in half of a person’s brain, did a perfect job, and healed all of the surgical wounds completely. The person’s cognitive functions, motor functions, etc. would be uninhibited, unlike a corpse or a person with split-brain. No psychological or biological tests could determine whether or not half of the person’s consciousness has been replaced. Even if you assume that the low-level matter rather than the high(er)-level function is what consciousness is made out of, a scientist can’t determine that the matter has been replaced (unless he observed it being replaced); he can only show that brain matter exists now, and abstractions such as phenomenological experience cannot be measured at all. (Also, your consciousness isn’t aware of the past nor is it aware of the future; it is only aware of what is happening in your brain right now, and you instead have to trust your memories that tell you that you aren’t a swampman, that the universe didn’t spring into existence last Thursday, or that a ninja surgeon didn’t replace half or all of the cells in your brain.) —ShapeshiftingLizard ~▲~ hear me roar ~▲~ 01:16, 12 May 2015 (UTC)
The statement you took issue with is “even a perfect physical copy of that brain would not share the subjective consciousness of that brain”. This statement is derived from the axiom "two discrete objects are not the same object, no matter how similar" (also called the Identity of Indiscernibles, or Leibniz's law). If two objects are the same object, they necessarily have shared causality - a cause that affects one affects the other. The two brains obviously do not share the same causality. That is a very meaningful distinction between them.
I suppose you are asking whether I think it is possible for two consciousnesses to merge, as I could not possibly have a definite answer to this question. I would presume that if a consciousness can split, two consciousnesses can merge. Assuming my presumption is correct, in your thought experiment you would still have the possibility of two consciousnesses unless you merged the two split hemispheres.
I certainly agree that consciousness is occurring in both substrates, and I have never expressed any ideas to the contrary. I do not agree that the two consciousnesses have a common origin. Consciousness A originates from substrate A. Consciousness B originates from substrate B. Even if substrate B is a perfect replica of substrate A, consciousness B is not consciousness A, and consciousness B originates from substrate B, not substrate A. (Here's a simple, however loose, analogy: if I make a perfect copy of light-bulb A named light-bulb B, would you say that the light that bulb B emits originated from bulb A?) Furthermore, consciousness A and consciousness B can have very similar experiences, but they cannot have identical experiences, as they cannot have the same frame of reference or causality (what affects A does not necessarily affect B, and vice versa) because they are not the same object, not to mention the stochastic nature of brain chemistry that affects our perception.
What your doctor is doing is a "gradual replacement", which is functionally equivalent to "scan and copy". To prove this, imagine that rather than discarding the cells he is replacing, he uses the same skill and precision to create another brain hemisphere. The same philosophical issues arise. Of course we outside observers can see that another person is unconscious. The unconscious person, however, cannot be aware of this, because they are not aware of anything. Your rhetorical question "If a person isn’t aware that half of their “consciousness” has been replaced, then on what grounds can one say that half of their consciousness has been replaced" ignores subjective experience, and essentially says that if a person is not aware that they have lost consciousness (replacement necessarily implies loss), they haven't lost consciousness. Yes, we assume that the universe is not playing tricks on us. I can say, subjectively of course, that I am the person I was yesterday because that person and I share a brain. Androidian (talk) 13:52, 12 May 2015 (UTC)
Sure the whole idea of "mind uploading" is silly. It's a process - it's not a discrete thing. The idea is a relic of dualism.--Bob"I think you'll find it's more complicated than that." 06:58, 4 May 2015 (UTC)
I don't agree that the whole idea is silly. Creating consciousness is almost certainly possible, just not the way we are trying to go about it (computer processors are unlikely to ever produce consciousness), and "Uploading" one's identity with high fidelity into a computer is also possible and potentially useful. But in my view, hoping to one day subjectively experience the universe from within another brain, biological or otherwise, is a pipe dream. We will have to settle for preservation of identity. Androidian (talk) 13:49, 4 May 2015 (UTC)

What happens

... if you upload the same memory-program into different 'bodies'? (Cue spoof film involving reality TV show with 15 copies of (dictator of choice).) 86.146.99.56 (talk) 13:33, 23 July 2015 (UTC)

What???

An intro paragraph reads:

"In other words, assuming that consciousness is not magic, and that the brain is the seat of consciousness, does it depend on any special functions or quantum mechanical effects that cannot ever be replicated on another substrate? This question, of course, remains unanswered although, considering the current state of cognitive science, it is not unreasonable to think that consciousness will be found to be replicable in the future."

The bit "considering the current state of cognitive science, it is not unreasonable to think that consciousness will be found to be replicable in the future" is very odd. Why is it more or less reasonable to make this - arguably - radical assumption based on "the current state of cognitive science"? If the suggestion is that we are going to do it "soon", then "the current state of cognitive science" suggests exactly the opposite.

There also seems to be a bit of appeal to quantum woo earlier on in there.--Bob"Life is short and (insert adjective)" 19:10, 7 July 2016 (UTC)

I talked with a transhumanist on Discord one day, and reportedly, mind uploading has already been done, albeit not on a human, but on a roundworm. While I thought the idea of creating a LEGO Mindstorms robot that behaves almost exactly like a roundworm was extremely cool, I wasn't sure how to respond. I suppose I could have gone to the trouble of asking how they could have literally uploaded a worm to a robot instead of just writing a program to emulate its behavior, or pointed out that roundworms probably don't have a mind as we understand the term, or that, even if a roundworm can be said to have a mind, humans typically have a lot more going on than Caenorhabditis elegans do. I do think it's a step toward their ideal, but it's... not a particularly big step.--Akamia (talk) 11:47, 20 March 2018 (UTC)
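(For context: connectome-driven demos of this kind are generally a loop that propagates weighted activations through a stored wiring diagram and sums the motor-neuron outputs to drive the motors. The sketch below guesses at that structure; the miniature connectome, the threshold, and all the names are invented placeholders, not the real C. elegans data.)

```python
# Hypothetical miniature "connectome": neuron -> [(target, weight)].
connectome = {
    "sensorA": [("inter1", 2), ("inter2", 1)],
    "inter1":  [("motorL", 3)],
    "inter2":  [("motorL", 1), ("motorR", 2)],
}
THRESHOLD = 2  # activation a neuron must accumulate before it fires

def run(stimulated):
    activation = {}
    queue = list(stimulated)
    while queue:
        neuron = queue.pop(0)
        for target, weight in connectome.get(neuron, []):
            activation[target] = activation.get(target, 0) + weight
            # fire downstream neurons once they cross the threshold
            if activation[target] >= THRESHOLD and target in connectome:
                queue.append(target)
    return activation.get("motorL", 0), activation.get("motorR", 0)

print(run(["sensorA"]))  # summed motor outputs would drive the wheels
```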
The whole idea is still bunk and the transhumanists are barking up the wrong tree. Mind/consciousness is the result of the physical brain doing something. It is not a discrete entity in itself. The only way to copy a "mind" would be to copy the brain in which it transiently "exists", and you would need to copy the entire brain down to its electrical and chemical state at a given instant. I'm not sure to what level you would need to copy it, but certainly down to the level of neurons and possibly down to the atomic level.
And once you had this copy - which would, by the way, be analogue - you would need to somehow encode this massive amount of data in a completely different substrate - probably digitally in silicon.
Then you would need your dry digital silicon to reproduce the behavior of the wet analogue chemical/electrical brain you started with.
It's certainly not doable now and probably won't be for the foreseeable future or, indeed, ever. Bob"Life is short and (insert adjective)" 17:01, 20 March 2018 (UTC)
Hence why I'm not particularly impressed with the toy robot, at least in the context it was presented to me. That said, the fact that it was possible to upload or at least create the brain of a dramatically simpler organism than a human and put it in the aforementioned toy robot is what I suspect led to that sentence about "the current state of cognitive science" and how it is apparently not unreasonable to think that consciousness will be found to be replicable in the future. If it could be done, great, but the transhumanists I'm currently hanging with seem to have their hopes up to a dangerously optimistic degree. If you're wondering, yes, many of them follow LessWrong. Unfortunately.--Akamia (talk) 23:09, 20 March 2018 (UTC)
Perhaps with time they'll be able to upload/simulate a more complex brain, but given the ludicrously high number of neurons, or rather synapses, I doubt they'll go very far. Nonetheless, you're missing the most important issue: the need for the equivalent of mind-protecting DRM if this ever becomes a reality, and the fact that no security system is 100% effective. Panzerfaust (talk) 21:53, 15 July 2018 (UTC)
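(A back-of-envelope version of the synapse-count worry. The figure of roughly 10^15 synapses is a commonly cited estimate; the bytes-per-synapse value is a deliberately optimistic guess that ignores dynamic electrical and chemical state entirely.)

```python
synapses = 1.0e15           # ~1 quadrillion synapses (rough estimate)
bytes_per_synapse = 10      # assumed: connectivity plus a weight, no dynamics

total_bytes = synapses * bytes_per_synapse
print(f"~{total_bytes / 1e15:.0f} PB for a static snapshot")  # ~10 PB
```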

Added section on technical feasibility

The article, as written, discussed only the philosophical problems with the concept. I divided that off into its own subheading and added another subheading on the technical problems. I believe the material I added could readily be expanded upon, so if anyone with expertise in biology, neuroscience, or the like wants to explain it further/better, they are more than welcome to do so. Crossroads (talk) 21:15, 12 September 2018 (UTC)

@Prciąszczłóśćiek


"But not everyone (actually almost no one) agrees that physical composition has anything to do with consciousness." ALMOST NO ONE! This is insane, and undermines my faith in the rationality of your edits. Are you a dualist or something? Nota Bene: I must admit that I have only read your edits, and indeed your last edit summary, cursorily. As a result, I believe that I have violated the "principle of charity."
The principle of charity (note I shall be implementing Quine's version, and not Davidson's) states the following: when interpreting a given interlocutor's utterance, we oughtn't to force our normative standards upon them, i.e. we ought to try our best to understand their utterance from their perspective and ambience. Thus, first and foremost, what we are trying to optimise is the plausibility of their utterance. Considerations of intelligibility, coherence, comprehensibility, and rationality... are secondary.

Davidson's "Principle of Charity" applied[edit]

My account below is an example of a charitable interpretation in Davidson's sense i.e., I sought to enhance the clarity and perspicuity of your statement, so that I could optimise its rationality, based on my normative standards. This is clearly a perversion of Quine's version of the principle.

A charitable interpretation of your sentence "....not everyone (actually almost no one) agrees that physical composition has anything to do with consciousness": I shall interpret your sentence by regimenting it into first-order logic.

  • "not everyone (actually almost no one)" shall be interpreted as: the negation of the universal quantifier, and thus will be represented by the existential quantifier, consequently 'some' will be the replacement.
  • "agrees that": this statement is in danger of being interpreted as a propositional attitude, and thus instead, it shall be interpreted as the behaviouristic disposition "to-assent" under appropriate neurological stimulation.
  • "physical composition has anything to do with consciousness": shall be interpreted as the behavioural affirmation of a predicate (which causes the assenting of the existentially quantified persons). The predicate shall be interpreted as an existentially quantified sentence: "some consciousness is physical." However, 'some consciousness' is an indefinite description, which means, in its unaltered form, it is a non-referring expression. However, it can be regimented so as to avoid referring to some non-existent entity...which I shall demonstrate below.
  • Interpreting the indefinite description "some consciousness is physical": there is some (x) such that (Cx and Px), where C is the predicate 'is consciousness' and P is the predicate 'is physical'. In other words, the law of excluded middle is not violated: it is either the case that there is something that is consciousness and is physical, or it is not the case.
  • In conclusion, my charitable interpretation is the following (rendered in standard notation below): there is some x such that (Fx) AND there is some x such that (Cx and Px), where F is the predicate "to assent to Φ under appropriate neurological stimulation", and "AND" represents the conjunction-connective.
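(For readability, here is that conclusion in one line of standard first-order notation, with F, C, and P as defined in the bullets above:)

```latex
\exists x\, Fx \;\land\; \exists x\,(Cx \land Px)
```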

Under this Davidsonian charitable interpretation your claims are still problematic, never mind your claim not being supported by any references. In ordinary language your claim amounts to saying "some people (though few) believe that consciousness is partly physical".

Quine's "Principle of Charity" applied[edit]

To enhance the plausibility of your claim, I shall try my best to take into account your perspective and contextual ambience: from what information I can glean from the history of your edits to the main page, and any other information you have provided, or will provide, about your pertinent views. Now, for the sake of brevity, I shall do my best to make my comments here as succinct as possible.

Prciąszczłóśćiek I believe that you are:

  • A 'Functionalist' in philosophy of mind: It seems to have been one of your primary intentions, in your edits, to stress functionality e.g. here where you say "What the question is talking about here is whether consciousness is functionally replicable" and also your edit to this section concerning "advocates of mind-uploading take the functionalist/reductionist approach" and your statement "human essence is an identity based on memories and personality rather than physical substrates or causal relationships."
  • An 'Anomalous Monist' on the mind-body problem: Your intention to use 'dualism' as a pejorative "believing there is some mystical 'essence' to consciousness that isn't preserved by copying is a form of dualism" and your statement "Consciousness =/= the brain by definition...." which I shall interpret as referring to the impossibility of nomological or definitional reduction of the mental to the physical, and the continuation "...although one very probably supervenes on the other" which I shall interpret as expressing the supervenience of the mental upon the brain. Thus, your views seem to fit the doctrine of anomalous monism, albeit of a functionalist variety.

Thus my Quinian charitable interpretation of your original sentence is the following: "few people believe that consciousness can be completely reduced to the brain's composition: its states, its matter, its energy. Instead consciousness can at least be partially identified with its functional states" (of course pending further empirical research).

I hope that this is a plausible account of your views and I hope that this will increase the transparency of our communication. —Leucippus 20:20, 21 December 2020 (UTC)

The success of the process

Will be when the 'transferred mind' finds 'Mind uploading could use some help.' funny.

A related question - what would happen to 'the proverbial sentient computer' on transfer to a new housing, or on upgrading and changing its programs? Anna Livia (talk) 12:14, 18 June 2021 (UTC)

Twitter

I just rolled back a twitter ref as it doesn't work.

We might want to consider whether Twitter is considered reliable. Bob"Life is short and (insert adjective)" 15:31, 2 July 2023 (UTC)