Essay: What does it mean to say that the mind is a computer? And is it?

From RationalWiki


This essay is an original work by Femilisk.
It does not necessarily reflect the views expressed in RationalWiki's Mission Statement, but we welcome discussion of a broad range of ideas.
Unless otherwise stated, this is original content, released under CC-BY-SA 3.0 or any later version. See RationalWiki:Copyrights.
Feel free to make comments on the talk page, which will probably be far more interesting, and might reflect a broader range of RationalWiki editors' thoughts.

Introduction

The concept of mind is so difficult to define that over the centuries scholars have sought its representation using an array of objects, usually the latest technological tools.[1][2][3] This mode of scientific discovery is known as the tools-to-theories heuristic: the current tools used by science are incorporated into a theory, which is then accepted owing to the widespread use of the tool.[2]

Currently, the most universally applicable tool in all the sciences is undoubtedly the computer, arguably the single most complex device ever created by humanity. The computer is the ultimate tool, replacing inanimate objects like typewriters, notepads, calculators, photo albums, televisions and books; running complex models of attractors, road networks, the weather, the stock market and the universe; and replicating human bank tellers, porters, telephone operators, pilots, teachers,[4] even doctors[5] and artists.[6] Analogies have been drawn between mind and computer, but the computer is so much more than a metaphor.

My contention is that the usage, by philosophers, psychologists and cognitive scientists, of the computer as a metaphor for the mind is a misuse. The mind, nature's most complex survival mechanism, is nothing more or less than a highly sophisticated, finely tuned computer.

Emergence, layers of abstraction, and Turing machines

One of the first computers, Babbage's, was born out of the need to perform large arithmetic calculations, and was based on the fact that "an assemblage of unskilled workers, each knowing very little about the large computation"[2]:133 could be replaced by machines. Babbage thus noticed that simple rules produce emergent phenomena.

Emergence is produced by complex biological systems,[7] in which the whole is more than the sum of its parts.[8] Each worker can be thought of as a neuron or an ant: practically useless on its own, but, placed in a network or a colony, producing highly complex behaviour by following simple rules. The simple rules percolate upwards, creating a global effect: answers to complicated arithmetic operations in the case of Babbage, gliders in the case of Conway's Game of Life,[9] eusociality in the case of ants, DNA in the case of organisms and minds in the case of neurons. Babbage's realisation was the first step in the creation of the computer.
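
To make the percolation concrete, here is a minimal sketch of Conway's Game of Life in Python; the grid-free representation and the glider coordinates are my own illustrative choices, not taken from the sources above.

    # Minimal sketch of Conway's Game of Life: every cell follows the same
    # simple local rule, yet patterns such as gliders emerge globally.
    from collections import Counter

    def step(live):
        """Apply one generation of Life to a set of live (x, y) cells."""
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell is born with exactly 3 neighbours, survives with 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: after four generations the same shape reappears, shifted.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):
        state = step(state)
    assert state == {(x + 1, y + 1) for (x, y) in glider}

No cell-level rule knows anything about gliders; the glider exists only at the layer above.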

Building on this, von Neumann used computers as a representation of the human brain, in true tools-to-theories fashion.[10] He promoted the "binary digit" nature of the neuron[11] and "consider[ed] living organisms as if they were purely digital automata",[10]:297 although he did question "how legitimate it is to transfer our experience with computing machines to natural systems".[12]:68

In line with von Neumann's caution, the components of computers need not correspond directly to those of neurological and cognitive theories in order to claim that the mind is a computer. The stored-instruction computer, the architecture bearing von Neumann's name, clearly demonstrates that hardware, the physical realisation, can be abstracted away from software.

Abstraction layers[notes 1] are levels of analysis of any complex system. Starting from a sufficiently simple base layer, each new layer adds complexity, organisation and an emergent phenomenon.[13] Abstraction layers are deployed by Marr, Chomsky, Pylyshyn, Rumelhart and McClelland, Newell and Anderson in their cognitive theories,[14] in computer science and in the natural sciences.[8]

For example, a desktop computer can be subdivided into the following simplified layers of abstraction: the electronics, logical gates, adders and (de)multiplexers; the hardware, HDDs, the CPU and RAM; and the software, word-processors, web-browsers and music-players. When discussing web-browsers there is no need to care about logical gates or which part of RAM is in use, because higher layers cannot be reduced to lower ones, even though anything that occurs at one layer reverberates through all the others.

The ability to abstract away from, that is, to ignore, the implementation of the lower layers, provided the exchange of information where layers meet conforms to specified standards, is known as the principle of multiple realisability (MR). Computers can be made of vacuum tubes, neurons, DNA or quantum computing chips, and all are provably computationally equivalent.[15]
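
A minimal sketch of MR, with all class names my own invention: two utterly different realisations satisfy the same interface, so the layer above cannot tell them apart.

    # Sketch of multiple realisability: the higher layer depends only on
    # the interface where the layers meet, never on the realisation.
    from abc import ABC, abstractmethod

    class Adder(ABC):
        @abstractmethod
        def add(self, a: int, b: int) -> int: ...

    class ArithmeticAdder(Adder):
        def add(self, a, b):
            return a + b                         # "electronic" realisation

    class TallyAdder(Adder):
        def add(self, a, b):
            return len([None] * a + [None] * b)  # counting-pebbles realisation

    def double(adder: Adder, x: int) -> int:
        # This higher layer works identically with either realisation.
        return adder.add(x, x)

    assert double(ArithmeticAdder(), 21) == double(TallyAdder(), 21) == 42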

Emergence and the principle of MR appear to refute the reductionist philosophical theory, which claims that wholes are always exactly the sum of their parts.[16] As shown above, wholes are independent of, and more than, the sum of their parts.

Turing worked on the high-level problem of whether machines can think, leading to the formulation of the Turing test.[17] Unlike the previously mentioned examples, Turing's work was inspired by human intelligence; he created the mathematical concept of the Turing machine (TM), on which today's computers are based.

A TM is composed of a set of internal states and a head that can read and write symbols, 0 or 1, on an infinitely long tape, moving left or right as it goes. Turing, using logic and mathematics, proved that TMs are extremely computationally powerful. The Church-Turing (C-T) thesis, which states that for any given set of effective instructions a TM can be built to follow them, has met with no counter-examples to date.
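
As a concrete illustration, here is a minimal TM simulator in Python; the bit-flipping machine fed to it is a toy of my own, not one of Turing's.

    # Minimal Turing machine: internal states, a read/write head, a tape.
    # The transition table maps (state, symbol) -> (new state, write, move).
    def run_tm(transitions, tape, state, blank="_"):
        tape = dict(enumerate(tape))   # sparse tape, indexed by position
        head = 0
        while state != "halt":
            symbol = tape.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            tape[head] = write
            head += {"L": -1, "R": 1}[move]
        return "".join(tape[i] for i in sorted(tape))

    # Toy machine: invert every bit, halting at the first blank cell.
    flip = {
        ("scan", "0"): ("scan", "1", "R"),
        ("scan", "1"): ("scan", "0", "R"),
        ("scan", "_"): ("halt", "_", "R"),
    }
    print(run_tm(flip, "10110", "scan"))   # -> 01001_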

Many variants of TMs exist, amongst them analogue, continuous, non-deterministic, probabilistic, parallel, decimal and quantum TMs; all are computationally equivalent to the classical TM, a property known as Turing equivalence.[18] Therefore non-determinism, seemingly a very natural, random and human property, is in fact expressible in a deterministic way, precisely because the two kinds of machine are Turing equivalent. The same can be said of continuity, as discussed by Spivey,[19] since it can be "simulated" by a non-continuous TM.
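
The determinisation trick can itself be sketched: a deterministic program explores every branch of a non-deterministic computation breadth-first and accepts if any branch does. The configuration encoding below is my own illustrative choice.

    # Deterministic simulation of non-determinism: breadth-first search
    # over all computation branches. "choices" maps a configuration to
    # its possible successors; "accepting" tests for success.
    from collections import deque

    def nondet_accepts(start, choices, accepting, limit=10_000):
        frontier, seen = deque([start]), {start}
        while frontier and limit > 0:
            limit -= 1
            config = frontier.popleft()
            if accepting(config):
                return True                  # some branch accepts
            for nxt in choices(config):      # fan out every "guess"
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    # Toy run: non-deterministically pick a subset of (2, 3, 7) summing
    # to 9; a configuration is (items left to consider, running total).
    choices = lambda c: (
        [(c[0][1:], c[1]), (c[0][1:], c[1] + c[0][0])] if c[0] else [])
    print(nondet_accepts(((2, 3, 7), 0), choices,
                         lambda c: not c[0] and c[1] == 9))   # -> True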

A single TM, say a calculator whose input is numbers and operations, is a very rigid framework, since it can only execute its one specific algorithm. The universal TM (UTM) is a TM that accepts as input both a description of a TM to be emulated and the data to be manipulated; adapting the previous example, its input would be a calculator algorithm together with numbers and operations on them. Any TM can be implemented by a UTM, and any algorithm by a TM.[20]
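
Note that the run_tm sketch above is already a universal machine in miniature: the machine being emulated arrives as ordinary input data, the transition table, alongside the tape. Handing it a different table, here a hypothetical unary incrementer of my own, reprograms it without touching the simulator itself.

    # Program-as-data: the same universal simulator runs a different
    # machine simply by being given a different transition table.
    increment = {
        ("go", "1"): ("go", "1", "R"),     # skip over the unary digits
        ("go", "_"): ("halt", "1", "R"),   # append one more at the end
    }
    print(run_tm(increment, "111", "go"))  # -> 1111 (three becomes four)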

What does it mean to say that the mind is a computer?

The C-T thesis means that computationally, though not necessarily structurally or biologically, minds and computers are equivalent in power, and so it is physically possible to design a computer that emulates the mind on at least the highest abstraction layer. Such a computer would pass the Turing test with flying colours and could be considered conscious, possessing intelligence, creativity and emotions, since their behavioural expressions would emerge. All of this depends on finding a way to give a UTM as input a mind together with all relevant environmental stimuli. That is, a physical computer can be built from which a mind emerges, given a hardware realisation fast enough; it does not follow that an inspection of the human brain's internal parts would reveal anything about the computer's anatomy, or vice versa.

Saying the mind is a computer means we can analyse it in an analogous way, drawing parallels, though not equivalences, between abstraction layers: natural languages with programming languages, mental language with machine language, brain regions with hardware components, neural networks[notes 2] with electronic circuitry, neurons with logical gates and axons with wires.

Mind-computer equivalence allows mental computations, functions, to be analysed in terms of space and time complexity, though not necessarily in a Fodorian way. Space complexity is the amount of mental resources devoted to the computation, and time complexity is the amount of time taken to produce output, each given as a function of the input. This functionalist view of mental processing is further supported by the principle of MR: material realisation is unimportant, since functions sit a layer of abstraction above it.
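
As a worked example of my own, not drawn from the essay's sources: mentally adding two n-digit numbers digit by digit costs some constant number of steps c per digit and needs working memory for the n-digit result plus a single carry, so

    T_{\mathrm{add}}(n) = c \cdot n = O(n), \qquad S_{\mathrm{add}}(n) = n + 1 = O(n),

both expressed, as above, as functions of the input size n.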

The MR principle removes any dualist notions: minds emerge from brains just as they could from computers, and it is only due to its environment that the brain is physically realised the way it is. On the other hand, if two brains could be created with identical physical structure, and they were in exactly the same state at the same time, they would be indistinguishable: the two minds supervening on those brains would be identical, just as two TMs with the same structure are identical only if they are also in the same state at the same time.[21]

The equivalence also explains why access to mental states is not always possible. A high-level computer program does not have access to what specific logical gates are doing: the activation of a logical gate is part of its "subconscious".

Abstraction layers of mind show that problems of mental causality can be solved by explicitly defining what lies on each layer. It cannot be claimed that the activation of a logical gate caused a web-browser to open a new window, any more than that the firing of a neuron caused a human to eat. Causality must be bounded within layers of abstraction in order for it to make sense:[13] a logical gate causes the next gate's activation and, on a higher level, clicking the mouse causes a web-browser to open a new window.

The following thought experiment[22] is proposed. A computer is designed with the following parts: a core of instructions, a library of optional instructions, perception/input mechanisms and reaction/output mechanisms; and the following abilities: interpretation and adaptation of its own instructions, and comparison of optional instructions. The central core of instructions enables it to interpret its own programming code, and thus to evaluate that code using feedback from its environment and decide which program within itself to run. This computer can program itself: by "reading" itself it decides on modifications to its own high-level algorithms. It thus forms opinions on which rules to follow, in which situations, and for what sort of result. A similar experiment has been proposed by Sloman.[23] Does this machine not have a theory of mind?
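
A minimal sketch of the thought experiment, with every rule and name my own toy invention: the machine holds its behaviour as inspectable data, scores each optional instruction against environmental feedback, and rewrites which program it runs accordingly.

    # Sketch of a self-programming machine: its "program" is data that it
    # can read, evaluate against feedback, and replace. A toy, not a mind.
    import random

    library = {                      # the library of optional instructions
        "timid":  lambda stimulus: stimulus * 0.1,
        "bold":   lambda stimulus: stimulus * 1.0,
        "middle": lambda stimulus: stimulus * 0.5,
    }
    totals = {name: 0.0 for name in library}
    counts = {name: 0 for name in library}

    def environment(response):
        """Feedback: this toy world rewards responses close to 2.5."""
        return -abs(response - 2.5)

    for trial in range(300):
        stimulus = random.uniform(0, 10)          # perception/input
        candidate = random.choice(list(library))  # inspect own library
        totals[candidate] += environment(library[candidate](stimulus))
        counts[candidate] += 1

    # Reprogramming step: adopt whichever rule has served best on average.
    active = max(library, key=lambda n: totals[n] / max(counts[n], 1))
    print("machine reprogrammed itself to:", active)  # usually "middle"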

Do minds not perform high-level functions by programming themselves in much the same way? Part of the reason there are so few high-level rules of human behaviour, laws of psychology, is that on being explicitly informed of a behavioural output, say touching one's nose whilst lying, one can easily stop oneself, consciously "reprogramming" in order to change the output, the behaviour.

And is it?

Dreyfus claimed that "commonsense capacity" and "background knowledge"[24]:275 are needed for a computer to function as a mind, that rules and facts are not enough to represent these concepts, and that therefore a machine cannot contain them.[25] This reasoning is flawed: just because something is implicitly stored does not mean it is not expressible in rules and facts. Comparing implicit with explicit knowledge is not comparing like with like: they are completely different abstraction layers.[13]

Searle[26] believes that a computer's possession of a mind is equivalent to its running a simulation of a stomach digesting food: because the simulated stomach is not real, the simulated mind must not be real either. But brains can run "simulations" too, the imagination or hallucinations, and in much the same way the brain "simulates" the mind; any other view would be dualist. Consciousness is nothing more than a high-level mechanism evolved to give humans an identity.

The notion that "the mind is to the brain as the program is to the hardware"[notes 3] is currently true only in theory. The mind and programs both reside on the highest abstraction layer, but minds emerge, whereas most programs are designed and written. This creates a distinction in practice, though evolutionary algorithmic techniques, which are becoming increasingly popular, produce strong emergence.[7]
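
A minimal sketch of the evolutionary route, with the target and parameters my own toy choices: the winning bit-string "program" is never written by hand; it emerges from variation and selection.

    # Toy evolutionary algorithm: candidate "programs" are bit-strings,
    # fitness counts matching bits, and the solution is discovered
    # rather than designed.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    fitness = lambda g: sum(a == b for a, b in zip(g, TARGET))

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if fitness(best) == len(TARGET):
            break                                     # target evolved
        survivors = population[:10]                   # selection
        population = [[bit ^ (random.random() < 0.1)  # mutation
                       for bit in random.choice(survivors)]
                      for _ in range(30)]
    print(generation, best)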

Searle's Chinese room (CR) thought experiment is supposed to show that computers will never have intentional states[27] or semantic meaning, and that functionalism is wrong.[28] But symbols gain their meaning by percolating upwards, through the use of syntax, from basic semantic links to the real world; thus there is no need for homunculi.[29] Neurons in the brain know nothing of language in the same way that the person in the Chinese room does not know Chinese. Symbolic manipulation is but a high-level addition that humans perceive: TMs use symbols, but a connectionist approach, using for example a hardware implementation of neural networks, does not.

Searle[26] accuses strong AI of dualism, ignoring that TMs show hardware and software to be equivalent,[notes 4] which is much like saying the physical and the mental are the same. He claims that hardware is inflexible and that, when it underpins a system, it affects it to a great extent; this is only true of time complexity. Ants form a collective brain, albeit without the human type of consciousness, and in theory there is nothing to stop ants evolving a collective consciousness within a eusocial colony, given evolutionary pressure to do so.

Searle[26] also attributes the label behaviourism to strong AI on account of the Turing test. Currently, there is no way to examine intelligence other than behaviourally: providing input and evaluating output. Nothing would differentiate intelligence from a "mock-up";[24] they are one and the same. Any other interpretation is dualist.[notes 5]

The Churchlands accept the mind-computer hypothesis but claim that because brains are parallel computers, neurons are analogue, axons are bidirectional and the brain is a dynamical system, brains should not be thought of in terms of classical serial computers.[24] But the C-T thesis and the principle of MR mean that, so long as at each abstraction layer we have an equivalent exchange of information, these details do not matter.

Lucas[30] believes that computers will never have a mind, invoking Gödel's first incompleteness theorem, which states that all consistent axiomatic formulations of number theory include undecidable propositions.[31]:17 Formal systems encompassing number theory[notes 6] cannot contain a set of axioms that is both complete and consistent, nor can an infinite list of complete and consistent axioms be produced. So there exist formulae that a computer cannot prove to be true but that the mind can see, and show, to be true; therefore, minds and computers are not equivalent.[30]

Gödel's theorem, however, applies only to formal, consistent, deductive systems.[30] The mind uses abduction and induction along with deduction, and cognitive dissonance and cognitive biases show a deeply ingrained lack of consistency. There is no reason for the mind to be consistent if survival can be attained without it. The mind is a quasi-consistent system: not merely inconsistent but inconsistent in its inconsistency, sometimes being consistent and at other times fundamentally not. Computers are also able to apply non-deductive reasoning, change their goals, learn, and behave non-deterministically and probabilistically, and are therefore by definition unaffected by Gödel's theorem.[32]

Another point Lucas[30] makes is that computers cannot gauge their own performance without "becoming a different machine",[32]:125 adding to their complexity by augmenting their programming code. But computers perform recursion seemingly in the same way minds do: "The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions."[33] When one knows fact X, one can recursively know that one knows fact X, and know that one knows that one knows it, ad infinitum. The difficulty of actually contemplating such notions grows with each step.
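
The "knowing that one knows" regress is itself a finite recursive definition describing an unbounded family of statements; a sketch, assuming nothing beyond the paragraph above:

    # A finite recursive program describing infinitely many statements:
    # "I know that I know ... fact X", nested to any depth.
    def knows(fact: str, depth: int) -> str:
        if depth == 0:
            return fact                   # base case: the fact itself
        return "I know that " + knows(fact, depth - 1)

    print(knows("X is true", 1))  # I know that X is true
    print(knows("X is true", 3))  # I know that I know that I know that X is true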

Conclusion

On the one hand, the brain was built in incremental evolutionary steps from the bottom up: from a single neuron-like structure, to a collection of neurons, to a proto-brain that slowly grew in size, computational ability and speed until the mind emerged. On the other hand, computers are engineered from the top down and are thus, from conception, easier to analyse. The function performed by the human brain to produce a mind is unknown and has evolved over time; so a computer endowed with a method of adaptation and an evolutionary pressure to produce a mind will do just that. In both cases a mind emerges from, and supervenes on, the physical realisation beneath.

Disagreements about mind-computer equivalence normally boil down to comparisons between dissimilar abstraction layers, or to misunderstanding the role and importance of TMs, the C-T thesis and MR. Expecting humans, within about a century, not only to understand but to duplicate what evolution has been perfecting for millions of years is unrealistic. If something like a mind cannot yet be run on a TM, it is because humans have yet to discover its algorithm, not because that algorithm does not exist.

Critics of the mind-computer equivalence thesis need to face up to the reality of computers creating art,[34] composing music,[35] (almost) passing the Turing test,[36] making scientific discoveries,[37] learning, perceiving their environment and processing natural language, all to very impressive degrees. They also need to accept that brain attributes such as identity, emotion and pain are highly evolved mechanisms finely tuned by evolution for survival and, since computers can evolve, will be acquired in some form or another given enough time.

Maybe some find it hard to accept the computer-mind equivalence because they think it insulting: it is degrading to them even to postulate that computers can, even in theory, do all that their own minds can. Perhaps they are also intimidated by the notion of functional completeness: that logic, computers and therefore minds can be built using just NAND gates,[notes 7] for example.
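
Functional completeness is easy to exhibit; a sketch building the familiar gates from NAND alone, with truth-table checks:

    # NAND is functionally complete: NOT, AND, OR and XOR (and hence any
    # Boolean function, and so any logic circuit) can be built from it.
    def nand(a, b):
        return 1 - (a & b)

    def not_(a):     return nand(a, a)
    def and_(a, b):  return nand(nand(a, b), nand(a, b))
    def or_(a, b):   return nand(nand(a, a), nand(b, b))
    def xor_(a, b):                      # the textbook four-NAND XOR
        m = nand(a, b)
        return nand(nand(a, m), nand(b, m))

    for a in (0, 1):
        assert not_(a) == 1 - a
        for b in (0, 1):
            assert and_(a, b) == (a & b)
            assert or_(a, b) == (a | b)
            assert xor_(a, b) == (a ^ b)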

But that the mind is capable of understanding itself, the most complex machinery in nature, to the point of being able to create a potentially better copy, in terms of memory capacity, arithmetic manipulation, processing speed and lifespan, makes this one of the greatest endeavours humanity has undertaken.



Notes

  1. Also known as integrative levels.
  2. Not to be confused with connectionist neural networks, which are directly inspired by their biological homonyms and store information in the connections between "neurons". This approach might well lead to emulating minds in an effective way, as such networks can easily be trained, but just because they are a plausible realisation of one layer of the mind does not mean, by the principle of MR, that they will lead to the implementation of the other layers. They are, after all, Turing equivalent.
  3. This is a strong AI claim, mentioned but not subscribed to by Searle.
  4. Saying "programs are not machines", as Searle says, is incorrect.
  5. Unless intelligence can be defined in terms that can encompass non-behaviouristic examination by a 2nd-person observer.
  6. TMs and computers in general.
  7. This is a type of logical gate, physically realisable, that performs the operation NOT AND.

Footnotes

  1. Draaisma, D. (2000). Metaphors Of Memory: A History Of Ideas About The Mind. Cambridge University Press.
  2. 2.0 2.1 2.2 Gigerenzer, G. & Goldstein, D. G. (1996). Mind as computer: Birth of a metaphor. Creativity Research Journal, 9(2 & 3), 131-144.
  3. Crane, T. (2003). The mechanical mind: a philosophical introduction to minds, machines and mental representation. Psychology Press, 2nd edition.
  4. Britain, S. & Liber, O. (2004). A framework for the pedagogical evaluation of elearning environments. Bangor: University of Wales, 41, 1-45.
  5. Miller, R. A. (1994). Medical diagnostic decision support systems - past, present, and future: A threaded bibliography and brief commentary. Journal Of The American Medical Informatics Association, 1(1), 8-27.
  6. Saunders, R. & Gero, J. (2001). Artificial creativity: A synthetic approach to the study of creative behaviour. Computational and Cognitive Models of Creative Design, 5, 113-139.
  7. 7.0 7.1 Bersini, H. & Philemotte, C. (2007). Emergent phenomena only belong to biology. In Advances in Artificial Life, 9th European Conference, ECAL 2007, Lisbon, Portugal, September 10-14, 2007, Proceedings. DOI: 10.1007/978-3-540-74913-4_6
  8. 8.0 8.1 Corning, P. A. (2002). The re-emergence of "emergence": A venerable concept in search of a theory. Complexity, 7, 18-30.
  9. Gardner, M. (1970). Mathematical games: The fantastic combinations of John Conway's new solitaire game "Life". Scientific American, 223, 120-123.
  10. 10.0 10.1 Von Neumann, J. (1951). Volume v: Design of computers, theory of automata and numerical analysis: The general and logical theory of automata. Cerebral Mechanisms In Behavior, 5, 287-326.
  11. Von Neumann, J. (2000). The Computer And The Brain. Yale University Press.
  12. Von Neumann, J. (1966). Theory of self-reproducing automata. University Of Illinois Press, 1, 64-87.
  13. 13.0 13.1 13.2 Feibleman, J. K. (1954). Theory of integrative levels. The British Journal For The Philosophy Of Science, 5(17), 59-66.
  14. Anderson, J. R. (1990). The Adaptive Character Of Thought. Lawrence Erlbaum Associate, Inc.
  15. Turing, A. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings Of The London Mathematical Society, 42, 230-265.
  16. Sloman, A. (1978). The computer revolution in philosophy: Philosophy, science, and models of mind, volume 3. Harvester Press.
  17. Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
  18. Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings Of The Royal Society Of London, A(400), 97-117.
  19. Spivey, J. (2007). The continuity of mind, volume 40. Oxford University Press, USA.
  20. Turing, A. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings Of The London Mathematical Society, 42, 230-265.
  21. Sloman, A. (2009). What cognitive scientists need to know about virtual machines. In Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1210-1215).
  22. Due to lack of time for implementation but not lack of perceived feasibility.
  23. Sloman, A. (1978). The computer revolution in philosophy: Philosophy, science, and models of mind, volume 3. Harvester Press.
  24. 24.0 24.1 24.2 Churchland, P. S. & Churchland, P. M. (1996). Could A Machine Think? Wiley-Blackwell.
  25. Crane, T. (2003). The mechanical mind: a philosophical introduction to minds, machines and mental representation. Psychology Press, 2nd edition.
  26. 26.0 26.1 26.2 Searle, J. (1990). Is the brain's mind a computer program? Scientific American, 262(1), 26-31.
  27. Searle, J. (1980). Minds, brains, and programs. Behavioral and brain sciences, 3(03), 417-424.
  28. Block, N. (1995). The Mind As The Software Of The Brain. Wiley-Blackwell.
  29. Harnad, S. (1990). The symbol grounding problem. Physica, D(42), 335-346.
  30. 30.0 30.1 30.2 30.3 Lucas, J. R. (1961). Minds, machines and Gödel. Philosophy, 36, 112-127.
  31. Hofstadter, D. R. (1989). Gödel, Escher, Bach: An Eternal Golden Braid. Vintage Books.
  32. 32.0 32.1 George, F. H. (1962). Minds, machines and Gödel: Another reply to Mr. Lucas. Philosophy, 37(139), 62-63.
  33. Wirth, N. (1976). Algorithms + Data Structures = Programs. Prentice-Hall.
  34. Verostko, R. (1988). Epigenetic painting. software as genotype, a new dimension of art. In 1st Symposium on Electronic Art (FISEA) Utrecht.
  35. Cope, D. (2002). Computer analysis and composition using atonal voice-leading techniques. Perspectives of New Music, 40, 121-146.
  36. Mauldin, M. (1994). Chatterbots, TinyMUDs, and the Turing test: Entering the Loebner Prize competition. In Proceedings of AAAI-94.
  37. Lipson, H. & Schmidt, M. (2009). Distilling free-form natural laws from experimental data. Science, 324(5923), 81-85.