Ghost in the Shell: Why Our Brains Will Never Live in the Matrix
Introductory note: Through Paul Graham Raven of Futurismic, I found out that Charles Stross recently expressed doubts about the Singularity, god-like AIs and mind uploading. Being an incorrigibly curious cat (this will kill me yet), I checked out the post. All seemed more or less copacetic, until I hit this statement: “Uploading … is not obviously impossible unless you are a crude mind/body dualist. // Uploading implicitly refutes the doctrine of the existence of an immortal soul.”
Clearly the time has come for me to reprint my mind uploading article, which first appeared at H+ magazine in October 2009. Consider it a recapitulation of basic facts.
When surveying the goals of transhumanists, I found it striking how heavily they favor conventional engineering. This seems inefficient and inelegant, since such engineering reproduces slowly, clumsily and imperfectly what biological systems have fine-tuned for eons — from nanobots (enzymes and miRNAs) to virtual reality (lucid dreaming). An exemplar of this mindset was an article about memory chips. In it, the primary researcher made two statements that fall in the “not even wrong” category: “Brain cells are nothing but leaky bags of salt solution,” and “I don’t need a grand theory of the mind to fix what is essentially a signal-processing problem.”
And it came to me in a flash that most transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons, each exemplified by these sentences. The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.
The analogy du siècle equates the human brain with a computer — a vast, complex one performing dizzying feats of parallel processing, but still a computer. However, that is incorrect for several crucial reasons, which bear directly upon mind portability. A human is not born as a tabula rasa, but with a brain that’s already wired and functioning as a mind. Furthermore, the brain forms as the embryo develops. It cannot be inserted after the fact, like an engine in a car chassis or software programs in an empty computer box.
Theoretically speaking, how could we manage to live forever while remaining recognizably ourselves? One way is to ensure that the brain remains fully functional indefinitely. Another is to move the brain into a new and/or indestructible “container”, whether carbon, silicon, metal or a combination thereof. Not surprisingly, these notions have received extensive play in science fiction, from the messianic angst of The Matrix to Richard Morgan’s Takeshi Kovacs trilogy.
To give you the punch line up front, the first alternative may eventually become feasible but the second one is intrinsically impossible. Recall that a particular mind is an emergent property (an artifact, if you prefer the term) of its specific brain – nothing more, but also nothing less. Unless the transfer of a mind retains the brain, there will be no continuity of consciousness. Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die – and be aware of the death process. Furthermore, the newly minted person/ality will start diverging from the original the moment it gains consciousness. This is an excellent way to leave a clone-like descendant, but not to become immortal.
What I just mentioned essentially takes care of all versions of mind uploading, if by uploading we mean recreation of an individual brain by physical transfer rather than a simulation that passes Searle’s Chinese room test. However, even if we ever attain the infinite technical and financial resources required to scan a brain/mind 1) non-destructively and 2) at a resolution that will indeed recreate the original, additional obstacles still loom.
To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.
To recreate a brain/mind in silico, whether a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.
Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story The People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.
Let’s now discuss the possible: in situ replacement. Many people argue that replacing brain cells is not a threat to identity because we change cells rapidly and routinely during our lives — and that in fact this is imperative if we’re to remain capable of learning throughout our lifespan.
It’s true that our somatic cells recycle, each type on a slightly different timetable, but there are two prominent exceptions. The germ cells are one, which is why both genders – not just women – are progressively likelier to have children with congenital problems as they age. Our neurons are another. We’re born with as many of these as we’re ever going to have and we lose them steadily during our life. There is a tiny bit of novel neurogenesis in the olfactory system and possibly in the hippocampus, but the rest of our 100 billion microprocessors neither multiply nor divide. What changes are the neuronal processes (axons and dendrites) and their contacts with each other and with other cells (synapses).
These tiny processes make and unmake us as individuals. We are capable of learning as long as we live, though with decreasing ease and speed, because our axons and synapses are plastic as long as the neurons that generate them last. But although many functions of the brain are diffuse, they are organized in localized clusters (which can differ from person to person, sometimes radically). Removal of a large portion of a brain structure results in irreversible deficits unless it happens in very early infancy. We know this from watching people go through transient or permanent personality and ability changes after head trauma, stroke, extensive brain surgery or during the agonizing process of various neurodegenerative diseases, dementia in particular.
However, intrepid immortaleers need not give up. There’s real hope on the horizon for renewing a brain and other body parts: embryonic stem cells (ESCs, which I discussed recently). Depending on the stage of isolation, ESCs are truly totipotent – something, incidentally, not true of adult stem cells, which can only differentiate into a small set of related cell types. If neuronal precursors can be introduced to the right spot and coaxed to survive, differentiate and form synapses, we will gain the ability to extend the lifespan of a brain and its mind.
It will take an enormous amount of fine-tuning to induce ESCs to do the right thing. Each step that I casually listed in the previous sentence (localized introduction, persistence, differentiation, synaptogenesis) is still barely achievable in the lab with isolated cell cultures, let alone the brain of a living human. Primary neurons live about three weeks in the dish, even though they are fed better than most children in developing countries – and if cultured as precursors, they never attain full differentiation. The ordeals of Christopher Reeve and Stephen Hawking illustrate how hard it is to solve even “simple” problems of either grey or white brain matter.
The technical hurdles will eventually be solved. A larger obstacle is that each round of ESC replacement will have to be very slow and small-scale, to fulfill the requirement of continuous consciousness and guarantee the recreation of pre-existing neuronal and synaptic networks. As a result, renewal of large brain swaths will require such a lengthy lifespan that the replacements may never catch up. Not surprisingly, the efforts in this direction have begun with such neurodegenerative diseases as Parkinson’s, whose causes are not only well defined but also highly localized: the dopaminergic neurons in the substantia nigra.
Renewing the hippocampus or cortex of an Alzheimer’s sufferer is several orders of magnitude more complicated – and in stark contrast to the “black box” assumption of the memory chip researcher, we will need to know exactly what and where to repair. To go through the literally mind-altering feats shown in Whedon’s Dollhouse would be the brain equivalent of insect metamorphosis: it would take a very long time – and the person undergoing the procedure would resemble Terri Schiavo at best, if not the interior of a pupating larva.
Dollhouse got one fact right: if such rewiring is too extensive or too fast, the person will have no memory of their prior life, desirable or otherwise. But as is typical in Hollywood science (an oxymoron, but we’ll let it stand), it got a more crucial fact wrong: such a person is unlikely to function like a fully aware human or even a physically well-coordinated one for a significant length of time — because her brain pathways will need to be validated by physical and mental feedback before they stabilize. Many people never recover full physical or mental capacity after prolonged periods of anesthesia. Having a brain replacement would rank way higher on the trauma scale.
The most common ecological, social and ethical argument against individual quasi-eternal life is that the resulting overcrowding will mean certain and unpleasant death by other means unless we are able to access extra-terrestrial resources. Also, those who visualize infinite lifespan invariably think of it in connection with themselves and those whom they like – choosing to ignore that others will also be around forever, from genocidal maniacs to cult followers, to say nothing of annoying in-laws or predatory bosses. At the same time, long lifespan will almost certainly be a requirement for long-term crewed space expeditions, although such longevity will have to be augmented by sophisticated molecular repair of somatic and germ mutations caused by cosmic radiation. So if we want eternal life, we had better first have the Elysian fields and chariots of the gods that go with it.
Images: Echo (Eliza Dushku) gets a new personality inserted in Dollhouse; any port in a storm — Spock (Leonard Nimoy) transfers his essential self to McCoy (DeForest Kelley) for safekeeping in The Wrath of Khan; the resurrected Zoe Graystone (Alessandra Torresani) gets an instant memory upgrade in Caprica; Jake Sully (Sam Worthington) checks out his conveniently empty Na’vi receptacle in Avatar.
> When surveying the goals of transhumanists,
> I found it striking how heavily they favor
> conventional engineering. . . An exemplar
> of this mindset was an article about memory
> chips. In it, the primary researcher made
> two statements that fall in the “not even
> wrong” category: “Brain cells are nothing
> but leaky bags of salt solution,” and “I
> don’t need a grand theory of the mind to
> fix what is essentially a signal-processing
> problem.”
In the comment thread of the Charlie Stross blog article on the Singularity, Stross himself remarks:
“There’s some recent research in the topology of the human genome. . . that suggests when analysing this type of biological system you need to look not only at the linear information content (the codon sequence) but at how it is physically stored (the conformation of the fractal globule) to understand why different chromosome regions may be activated as they are. I suspect we’re also going to discover that the connectome of the human brain has similar physical topological ‘gotchas’ that affect the structure of our consciousness in non-obvious ways.”
It occurred to me while reading that comment that I had come across a claim recently (where? maybe in the New York Times article this week about the nervous system of C. elegans) that those two systems — the genome and the connectome — very likely actually remain **coupled** throughout the life of the brain — that various intercellular messengers may actually be dynamically altering neuron characteristics by regulating gene expression (yet another member of a hierarchy of intercellular signalling systems operating over different timescales).
And yet, back in the days when I was on the Extropians’ list, there were folks there who insisted that it was mere superstition to think that a neuron couldn’t be treated as equivalent to a single transistor in an artificial network.
There’s another thing going on among the transhumanists and their hopes for artificial intelligence that struck me on the bus this morning as I was reading a book I picked up last weekend — David Brooks’ _The Social Animal_. Brooks mentions how classical economics, with its toy models of perfectly rational and fully informed actors maximizing “utility”, often misses the mark in the real world of human beings, and how “behavioral economics” grew out of psychology (specifically, the work of Daniel Kahneman and Amos Tversky) to address these analytical shortcomings. However, economists themselves, and certainly the transhumanists who harp on these “cognitive biases”, seem to think that the goal of “designing” an AI should be to construct a mind free of these “flaws” — in other words, to **realize** the classical economists’ “rational actors”. Shades of the Scientologists’ goal of eliminating the “reactive mind”, or Ayn Rand’s acolytes’ claims to be in full conscious control of themselves. Hope springs eternal!
> Empathy is as instrumental to high-order
> intelligence as it is to survival: without
> it, we are at best idiot savants, at worst
> psychotic killers.
Yes, well, my view of the transhumanists is that many of them exhibit signs of what Simon Baron-Cohen describes in his new book _The Science of Evil_ as either “zero-positive empathy” (autistic spectrum) or “zero-negative empathy” (narcissistic, psychopathic, or borderline personality). Or both. :-/
All perfectibility scenarios are really cults, since they need a messiah and apostles who decide what’s “right”.
I wrote about some aspects of the gene effects across scales in Miranda Wrongs. Many intercellular messengers double as transcriptional/etc regulators (retinoic acid is a prominent example, which is why it’s a potent teratogen), so they can and do act in neurons. That’s why we remain capable of learning: neurons can still change, despite their relative immortality.
As for Baron-Cohen, I’d like him a lot more if he weren’t a Freudian Tarzanist (“female” versus “male” brains and all that essentialist crap).
> [M]ost transhumanists are uncomfortable
> with biology and would rather bypass it
> altogether. . . [B]iological systems are
> squishy — they exude blood, sweat and
> tears, which are deemed proper only
> for women and weaklings. . .
From C. S. Lewis, _That Hideous Strength_,
Chapter 8, “Moonlight at Belbury” (pp. 172-173):
“At dinner he sat next to Filostrato… [who] had just given orders for the cutting down of some fine beech trees in the grounds…
‘Why have you done that, Professor?… I’m rather fond of trees, myself.’
‘Oh yes, yes,’ replied Filostrato. ‘The pretty trees, the garden trees. But not the savages… The forest tree is a weed. But I tell you I have seen the civilized tree in Persia. It was a French attaché who had it because he was in a place where trees do not grow. It was made of metal. A poor, crude thing. But how if it were perfected? Light, made of aluminium. So natural, it would even deceive…. [C]onsider the advantages! You get tired of him in one place: two workmen carry him somewhere else: wherever you please. It never dies. No leaves to fall, no twigs, no birds building nests, no muck and mess… At present, I allow, we must have forests, for the atmosphere. Presently we find a chemical substitute. And then, why **any** natural trees? I foresee nothing but the **art** tree all over the earth. In fact, we **clean** the planet… You shave your face: even, in the English fashion, you shave him every day. One day we shave the planet.’
‘I wonder what the birds will make of it?’
‘I would not have any birds either. On the art tree I would have the art birds all singing when you press a switch inside the house. When you are tired of the singing you switch them off… No feathers dropped about, no nests, no eggs, no dirt.’
‘It sounds… like abolishing pretty well all organic life.’
‘And why not? It is simple hygiene. Listen, my friends. If you pick up some rotten thing and find this organic life crawling over it, do you not say, “Oh, the horrid thing. It is alive,” and then drop it? … And you, especially you English, are you not hostile to any organic life except your own on your own body? Rather than permit it you have invented the daily bath… And what do you call dirty dirt? Is it not precisely the organic? Minerals are clean dirt. But the real filth is what comes from organisms — sweat, spittles, excretions. Is not your whole idea of purity one huge example? The impure and the organic are interchangeable conceptions.’
‘What are you driving at, Professor? After all, we are organisms ourselves.’
‘I grant it. That is the point. In us organic life has produced Mind. It has done its work. After that we want no more of it. We do not want the world any longer furred over with organic life, like what you call the blue mould — all sprouting and budding and breeding and decaying. We must get rid of it. By little and little, of course. Slowly we learn how. Learn to make our brains live with less and less body: learn to build our bodies directly with chemicals, no longer have to stuff them full of dead brutes and weeds. Learn how to reproduce ourselves without copulation.'”
p. 174:
“‘It is all true,’ said Filostrato at last, ‘what I said at dinner… The world I look forward to is the world of perfect purity. The clean mind and the clean minerals. What are the things that most offend the dignity of man? Birth and breeding and death. How if we are about to discover that man can live without any of the three? … ‘”
pp. 177-179:
“‘This Institute — Dio meo, it is for something better than housing and vaccinations and faster trains and curing the people of cancer. It is for the conquest of death: or for the conquest of organic life, if you prefer. They are the same thing. It is to bring out of that cocoon of organic life which sheltered the babyhood of mind the New Man, the man who will not die, the artificial man, free from Nature. Nature is the ladder we have climbed up by, now we kick her away.'”
Ugh. I can just see Lewis donning a whole body glove for sex, like that Bullock/Stallone futuristic comedy. If he could contemplate the concept even theoretically, that is.
When people compare the brain to a computer, they usually mean it in the abstract sense, in so far as both are objects that take inputs, produce outputs and perform computation in between. One of the most interesting fields in Computer Science is research into the possibility (and practicality) of the Turing Test being passed by our current ‘state machines’ (switch-based computers). Arguing that the brain can’t be a computer is the same as arguing that there’s something magical about the brain that can’t be reproduced (an unobservable soul?).
Don’t get me wrong – I’m with you on the futility of uploading a human mind to a computer, but as a computer person I couldn’t help notice some bad arguments about software in this piece.
The more I’ve been reading your blog here and the Science in My Fiction blog, the sillier the idea of removing a ‘personality’ from a body seems. So much of our personality is dictated by our biology. Not just the input of our senses and how they interact with memory, but the chemical changes which happen in our body. Everything from the food we break down to the air we breathe affects the fuel our brains have to work with, which affects how different areas of the brain process thoughts and ideas and everything else. Even if it were possible to take a mind out of a brain, it would feel to the mind as if it were trapped in an opaque cube which only allowed in muffled and far-away voices. Without all the biology, the personality would be entirely different.
The whole idea of anything akin to ‘uploading’ yourself just seems a bit like trying to take the eggs out of a cake…
Toby, I disagree with the statement “Arguing that the brain can’t be a computer is the same as arguing that there’s something magical about the brain that can’t be reproduced (an unobservable soul?)” In other words, “If you don’t like our pet analogy, you’re religious.” I have repeatedly stated that there is nothing unknowable or mystical about the brain and the mind it produces/creates. And I have been around people who do make arguments like the ones I discuss. The analogy has run its course, and is no longer useful — if it ever was. It’s about as useful as the earlier analogy of the brain as a clock: it works only insofar as it treats the brain as a black box with inputs and outputs, which leaves it opaque and unknowable and leads to false equivalences.
Dylan, exactly. Much of it is the desire to make things “clean” and clear-cut (and make computer scientists mini-gods, as they routinely appear in cyberpunk). Except we don’t work like that. And sloppy analogies won’t take us to concrete results (dementia prevention or cure, etc) or even better understanding of the system.
> Also, those who visualize infinite lifespan
> invariably think of it in connection with
> themselves and those whom they like –
> choosing to ignore that others will also
> be around forever, from genocidal maniacs
> to cult followers, to say nothing of
> annoying in-laws or predatory bosses.
“Not too long ago, two Mormon missionaries came to my door. . . They said, ‘Well we also believe that if you’re a Mormon, and if you’re in good standing with the Church, when you die, you get to go to heaven and be with your family for all eternity.’ And I said, ‘Oh dear. That wouldn’t be such a good incentive for me.'”
— Julia Sweeney, “Letting Go of God”
http://www.youtube.com/watch?v=IcIrCrOYb00
(transcript at
http://www.american-buddha.com/lit.letgoofgodsweeney.1.htm )
Yes, being stuck in most religions’ “heavens” is really hell.
> Toby, I disagree with the statement “Arguing that
> the brain can’t be a computer is the same as
> arguing that there’s something magical about the
> brain that can’t be reproduced (an unobservable
> soul?)”. . . The analogy has run its course, and is
> no longer useful — if it ever was. It’s about
> as useful as the earlier analogy of the brain
> as a clock. . .
I’ve had this discussion with otherwise smart people (I’m thinking of one person in particular who happens to produce the “Rationally Speaking” podcast of the New York City Skeptics). These are people (computer programmers, usually) who are so steeped in the metaphysics of what Jaron Lanier calls “cyber-totalism” that they think it’s a kind of “vitalism” to deny that anything in the world can be, not just modelled, but in theory **replicated**, by an appropriate program running on a digital computer. This point of view is, of course, only strengthened by speculations of people like physicist David Deutsch (_The Fabric of Reality_) or Stephen Wolfram (_A New Kind of Science_) that the universe itself is, at bottom, a computational phenomenon, let alone by popular speculations among transhumanists, such as Nick Bostrom’s simulation argument (cited by Charlie Stross in his recent blog post), claiming that we are, in fact, software constructs living in a computer simulation of some kind.
“Ze registry gets littered vith orphans”
http://www.goats.com/archive/090306.html
Crappy analogies often lead to crappy science (let alone crappy fiction).
From your piece, your argument appears to be that a mind can’t be modeled by computation because, unlike a computer, a human mind doesn’t start in a blank state, and because, unlike a computer whose state you can discover by querying its transistors, the human mind doesn’t have a knowable ‘state’.
These are both refutable arguments – an artificial mind would not need to start in a blank state either. There would be an operating system of some kind that defines the parameters/limits of the program, performs error correction and compression, provides initial training data, and so on.
Regarding unknowable state – there are plenty of examples today of computer systems that don’t have a knowable state, where it’s impossible to ‘save’ or ‘load’ the system state while it’s running without producing errors that would cripple it. The stock market is one good example. There is no way to capture the current state of this system in an instant snapshot. You can look at the logs to see what happened in the past and recreate the state from them, after a fashion, but you would lose the most recent transactions and the system would suffer instability. My point is that the running state of a computer system is not necessarily knowable either. It’s perhaps a big software engineering secret that in anything beyond a trivial program, the complexity of producing ‘save’ and ‘load’ memory states is rather high. I have no doubt that the level of complexity needed for an artificial mind will not lend itself easily to producing ‘stop’ or ‘save’ states either.
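To make that concrete, here is a toy illustration (Python and its standard pickle module; the bank-account dict is a made-up stand-in for “system state”). Plain data snapshots fine, but state entangled with the running process does not:

```python
# Plain data serializes fine; state that is entangled with the running
# process (locks, open sockets, open files) does not. This is the
# small-scale version of the stock-market example above.
import pickle
import socket
import threading

plain_state = {"balance": 100, "owner": "alice"}
snapshot = pickle.dumps(plain_state)          # works: pure data
print(len(snapshot), "bytes saved")

live_state = {"lock": threading.Lock(),       # in-flight concurrency
              "conn": socket.socket()}        # half-open connection
try:
    pickle.dumps(live_state)
except TypeError as err:
    print("cannot snapshot live state:", err)
```

Real systems multiply this by millions of in-flight transactions, which is why consistent snapshots require quiescing the whole system first.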
I’d really like to hear the arguments put forward against the possibility of artificial minds from your peers. I do appreciate that both biologists and computer scientists are stumbling around in the darkness at the moment, trying out all sorts of different ideas, and we just don’t know where or when it will end, if anywhere. It’s a fascinating subject though.
Toby — briefly, you are conflating brain, artificial mind and computer in your arguments. You started by stating “Arguing that the brain can’t be a computer is the same as arguing that there’s something magical about the brain that can’t be reproduced.” Now you have shifted to “An artificial mind would not need to start in a blank state either.” Furthermore, contrary to your assertion, not only do I implicitly assert that the brain is knowable in this article, but in my earlier related article Miranda Wrongs you will find this explicit sentence: “This makes the brain/mind a bona fide complex (though knowable) system.”
I have repeatedly stated, here and elsewhere, that I don’t think non-biological consciousness is impossible, merely that it will be very different. In fact, trying to equate the two in terms of intrinsic operating principles or using a carbon brain as the springboard for building a silicon one may not benefit either side. Most of my points here pertain to the impossibility of continuing an individual carbon consciousness into a silicon one, given the very differing fundamentals.
I do agree that it’s a fascinating subject.
ETA: If you really bother to investigate what most biologists think, you will find out that I’m actually on the receptive end of the spectrum. Most biologists laugh at most transhumanist notions outright.
Did I really? Well I hope you follow my line of reasoning regardless.
I was referring to paragraph 4 and 5 of this blog where you argue that mind uploading is improbable because a) software and silicon are separable whereas the mind and brain are not, b) the human brain is not a computer because a computer starts out blank, and c) you can’t ‘load’ a human mind onto a brain whereas you can ‘load’ a software program onto a computer.
I’m simply pointing out that this is too much of a simplification of computation, and I would argue that even current computers exhibit similar properties on all three points.
Like I said, I agree that mind uploading is a futile idea. To me it’s nothing more than an interesting thought-experiment on the philosophy of self identity and consciousness. I’m just a little irked by your arguments against computation.
You are blurring things yet again, but I won’t go into more analyses. Too, it occurs to me that software may mean something different to me than it means to you. Nevertheless, if you follow the arguments of many uploading proponents, you will see that I am discussing these here — including the frankly ridiculous argument that those who consider uploading impossible are dualists, when in fact the opposite is true.
Since you agree that mind uploading is futile, it would be far more useful if you went to the relevant forums, said so and explained why. Or are we playing the “swallowing camels, dissecting gnats” game? As for being irked, if you see the abysmal ignorance of biology that fuels most transhumanist discussions you would be a tad more than irked.
I’m sure you could make your arguments more persuasive to transhumanists if you polished up your points related to Computer Science. If you have no interest in my thoughts about it then I guess I will leave it at that.
Actually, Toby, unlike transhumanists I argue only within my domain of expertise. I extend my discussion into computer science only when it impinges upon biology — and when they insist on things that are factually or conceptually wrong because they happen to like the shiny new toys that might come into existence if they were right.
I’m pretty sure that the comparison between brains and computers is partially due to the perceived similarity between synapses and binary logic gates. Though, of course, impulses within neurons show considerable variation in strength.
I wouldn’t be surprised if artificial intelligence required analog circuitry or something even weirder.
Agreed on both items, Paul. What gets elided in the yes/no equation is that the neuron “counts” inputs from all its dendrites before it determines if it will fire (an anthropomorphising image, but it beats using the passive voice). This attribute turns an analog input into a digital output — as far as that one specific outcome goes, though it ignores such things as relative synapse strength, which in turn determines persistence of long-term memories, etc. There is averaging of this sort across scales in the brain, from the molecular (relative levels of transcription and splicing factors) all the way up.
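For anyone who wants the summation-to-threshold step stripped to its bones, here is a toy sketch (Python; the weights and threshold are made-up numbers, and it deliberately omits timing, leak currents, adaptation and everything else that makes real neurons interesting):

```python
# A single threshold unit: graded (analog) dendritic inputs are summed,
# each weighted by its synapse's strength, and collapsed into an
# all-or-none (digital) output. All numbers are illustrative only.
def neuron_fires(inputs, weights, threshold=1.0):
    drive = sum(x * w for x, w in zip(inputs, weights))
    return drive >= threshold          # analog in, digital out

activity = [0.9, 0.3, 0.7]             # graded activity on three dendrites
synapses = [0.8, 0.5, -0.4]            # the last synapse is inhibitory
print(neuron_fires(activity, synapses))  # False: total drive is 0.59
```

Even this cartoon makes the point: the digital output is parasitic on a pile of analog bookkeeping.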
Sometimes it is better to look at what is going on now and then extend that into the future. For example, I can upload much of my mind onto the Internet by blogging, posting pictures and songs, and using email to pay all my bills, view my credit card purchases, order items, communicate with friends, etc. So today, as we share on this forum, the content of our minds IS being uploaded. Today, when I go to Google Search, it really feels like I am talking to someone live. I am astounded by the intuitive quality of Google’s AI. I don’t think we need to doubt that uploading our entire mind into a digital autonomous existence is ultimately possible; if it suits us, we will accomplish it.
But the perspective to take into account at this juncture in human evolution is we may not find that option interesting. If we can achieve immortality in the human body by advanced methods of genetic manipulation, stem cells or — very possibly — engineer the interfacing of thoughts and genetic programming so that we can think our hair color different, for example, or think the cholesterol build up in our veins cleared, then why would we want to upload our mind into a machine?
It does seem that this universe is digital, binary, or from a metaphysical stance, the interplay of yin and yang forces. Everything from electricity to the change from day to night to DNA mirrors this dynamic interplay of opposites. Chinese medicine is based on that ground.
Meanwhile, the human brain was built in modules. The brain stem was sufficient for our reptilian past. The limbic system was added for mammals. And the revolutionary use of left brain vs. right brain made humans out of monkeys. So I am not sure that you can say that we cannot separate the parts of the human brain.
> > I wouldn’t be surprised if artificial intelligence
> > required analog circuitry or something even weirder.
>
> Agreed on both items, Paul.
Not that such things can’t be **simulated**, at least in principle, by digital computers. But nobody has **any** idea at this point how detailed such a simulation would need to be (or, indeed, knows all the details that might or might not have to be simulated). This lack of knowledge causes controversy even within the “simulation” camp — for example, the “catfight” a couple of years ago between Dharmendra Modha, the honcho of the “cat brain simulation” at the IBM Almaden Research Center, and Henry Markram of the EPFL Blue Brain project. (And that doesn’t even begin to address the body — simulated or real? — or world — simulated or real?)
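To show where the detail question bites, here is a trivial sketch (Python, Euler’s method on a toy leaky integrator; all numbers arbitrary) of digitally simulating one “analog” element. Nothing in the equation announces, in advance, how fine the simulation’s time step must be:

```python
# Toy "analog" element: a leaky integrator dV/dt = (I - V) / tau,
# stepped digitally with the Euler method. The simulator happily runs
# at any resolution; only the answer tells you the step was too coarse.
def simulate(dt, tau=1.0, current=2.0, t_end=5.0):
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (current - v) / tau
        t += dt
    return v

print(simulate(dt=0.001))  # ~1.987: matches the exact 2*(1 - e**-5)
print(simulate(dt=2.5))    # -2.5: unstable, a pure numerical artifact
```

Scale that uncertainty up to a brain with on the order of 10^14 synapses, and the Modha/Markram dispute stops looking like a catfight and starts looking inevitable.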
But the transhumanists are forever equivocating between simulation (what Ray Kurzweil calls “reverse engineering the brain”) and the “good old-fashioned AI” that Marvin Minsky thought, in the mid-60s, would achieve superintelligence within a few years. It seems pretty clear to me that, among themselves, most transhumanists are of the “brain cells are nothing but leaky bags of salt solution” variety, and still pine after the grand shortcut that will lead to the birth of the GOFAI God.
Ages ago, when I took the notion of the Singularity rather more seriously than I do now, I wrote:
One should resist the temptation toward premature **closure** about any aspect of the Singularity. Especially basing all one’s expectations on this or that technology du jour and then taking them oh-so-terribly seriously. For one thing, it puts the cart before the horse for anyone who considers verself a **scientist** to get that wrapped up in “how things **must** be, or **must** happen, if the sky isn’t going to fall, or if we aren’t going to get immortality in time for me or my friends or my family to live forever”, rather than in how things actually **are**.
For example — I was always struck by the **party-line** reactions on the Extropians’ list to the question of whether the universe is simulable, in principle, by a digital computer. Yes, digital implementations have advantages over messy “analog” ones (as has been argued to death in the decades-long CD vs. LP debate) — you can correct errors, and stop the clock and read out the precise state of a device. Also, a digital implementation is an abstract machine that frees you from the actual physical substrate. But folks got so **angry** if you suggested that the world might not be digital after all. They thought you might as well be telling them that the Singularity — and the “party at the end of time” — had been cancelled. My reaction to that bridling was always an amused “so what?” Yeah, it’d be inconvenient, by the standards of what we know now, but maybe **not** by the standards that will prevail closer along toward the Singularity. Do you think the 18th-century French philosophes would have thrown tantrums to learn that mechanical clocks and gears would not be used in future calculators? (Or could have believed a cursory description of how an integrated circuit works?) The alternative is to maintain a certain deliberate **distance** from **everything** that counts as “state of the art” today. I think this is difficult for the literal-minded types attracted to >Hism in the first place. It’s another kind of lateral thinking.
————–
To be fair, there have been a few contrary voices:
“I also can’t help thinking that if I was an evolved AI I might not thank my creators. ‘Geez, guys, I was supposed to be an improvement on the human condition. You know, highly modular, easily understandable mechanisms, the ability to plug in new senses, and merge memories from my forked copies. Instead I’m as fucked up as you, only in silicon, and can’t even make backups because I’m tied to dumb quantum induction effects. Bite my shiny metal ass!’”
— Damien Sullivan, on the Extropians’ list back in 2001
Empress, beware of sloppy analogies and metaphors, especially the binary reductionist traps. Day/night, yin/yang are all very poetic (if you like that kind of dualism and choose to ignore such liminal states as dusk and dawn) but binary thinking really does not help when trying to decipher how the brain works. DNA, brains and the universe are very definitely (and demonstrably) NOT binary, the brain was/is not built in modules (though it operates quasi-modularly across several scales) and the Google search engine is not particularly intuitive. Again, this does not mean there is anything unknowable or mystical about DNA, brains or the universe.
Also, we will most definitely not accomplish “whatever we wish” — for example, FTL star drives (or grow wings, unlike what X-Men films show). That’s called fantasy, and belongs on the fiction shelves.
Jim, the bizarre defensiveness (equivalent to “You insulted my god!”) is what makes this mindset particularly unproductive, if not downright dangerous, to real science.
> . . .the “good old-fashioned AI” that Marvin Minsky
> thought, in the mid-60s, would achieve
> superintelligence within a few years. . .
“… In three to eight years we will have a machine with the general intelligence of an average human being … The machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable …”
— Marvin Minsky, LIFE Magazine, November 20, 1970
(via http://www.nada.kth.se/~asa/Quotes/ai )
In all fairness to Minsky, he later claimed that the “three to eight years” was a misquote:
http://groups.google.com/group/comp.lang.lisp/msg/9ba5f6c3d6ad97df
minsky@media.mit.edu
Jul 5 2007, 12:12 am
“I was angry when that article came out, because it was filled with misquotations from an interview. I’m not sure where the interviewer got this ‘quote’; perhaps I said ‘3 to 8 decades’ or I was making a joke, or I was describing the scenario from D.F.Jones’s SF novel entitled ‘Colossus.’ (It became the movie, The Forbin Project.) Anyway, I sent an angry rebuttal to Life Magazine, but they declined to publish it.
However, it does seem likely that a modern computer could develop very rapidly–once it has learned the right kinds of things to learn. (That’s the problem we call ‘credit assignment.’) However, the earlier versions will all have serious bugs, so we’ll surely need to reprogram them many times before they will work well on their own.”
In the 1997 book _HAL’s Legacy: 2001’s Computer as Dream and Reality_, Minsky is interviewed by David Stork, the editor:
Stork: . . . again back to the mid-sixties, wouldn’t you agree that the field was quite optimistic? After all, in a _Life_ magazine article, you were quoted as saying ‘In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its power will be incalculable.’
Minsky: Oh, that _Life_ quote is made up. You can tell it’s a joke. Herbert Simon said in 1958 that a program would be chess champion in ten years, and as we know, the IBM group has done extremely well, but even now [Deep Blue] is not the undisputed champion.
As for optimism, it depends on what you mean. I believed in realism, as summarized by John McCarthy’s comment to the effect that if we worked really hard, we’d have an intelligent system in from four to four hundred years.
I wonder if Minsky was “joking” in his 1966 _Scientific American_ article. At any rate, I’m sure his students at MIT were marinated in this stuff.
“In order for a program to improve itself substantially it would have to have at least a rudimentary understanding of its own problem-solving process and some ability to recognize an improvement when it found one. There is no inherent reason why this should be impossible for a machine. Given a model of its own workings, it could use its problem-solving power to work on the problem of self-improvement. . .
Once we have devised programs with a genuine capacity for self-improvement a rapid evolutionary process will begin. As the machine improves both itself and its model of itself, we shall begin to see all the phenomena associated with the terms ‘consciousness,’ ‘intuition’ and ‘intelligence’ itself. It is hard to say how close we are to this threshold, but once it is crossed the world will not be the same.”
— Minsky, “Artificial Intelligence,” _Scientific American_,
Vol. 215, No. 3 (September 1966), p. 257
And for a hint of the nasty side of all this (and similar things have been said in public by Marvin Minsky and Hans Moravec, never mind the echoes of folks like William Shockley, the transistor’s embarrassing co-inventor), here’s a quote from Arthur C. Clarke from an essay called “The Mind of the Machine”, which I first saw in the Dec. 1968 issue of _Playboy_ (reprinted in _Greetings, Carbon-Based Bipeds!_, 1999):
“The astronomer Fred Hoyle once remarked to me that it was pointless for the world to hold more people than one could get to know in a single lifetime. Even if one were president of United Earth, that would set the figure somewhere between ten thousand and one hundred thousand; with a very generous allowance for duplication, wastage, special talents, and so forth, there really seems no requirement for what has been called the global village of the future to hold more than a million people scattered over the face of the planet.
And if such a figure appears unrealistic — since we are already past the 3 billion mark and heading for at least twice as many by the end of the century — it should be pointed out that once the universally agreed upon goal of population control is attained, any desired target can be reached in a remarkably short time. If we really tried (with a little help from the biology labs), we could reach a trillion within a century — four generations. It might be more difficult to go in the other direction for fundamental psychological reasons, but it could be done. If the ultraintelligent machines of the future decide that more than a million human beings constitute an epidemic, they might order euthanasia for anyone with an IQ of less than 150, but I hope that such drastic measures will not be necessary.”
Exactly what Mr. Clarke “hoped” might not, I fear, bear close examination. As for the decisions of the “ultraintelligent machines” — projection, much? :-/
When I was eleven years old, I got one of my parents to buy me the 1964 Bantam paperback edition of Arthur C. Clarke’s _Profiles of the Future_ from the supermarket paperback book rack.
I devoured that book, and in those days, I believed **every single word** coming from Mr. Arthur C. Clarke. ;->
“All speculations about intelligent machines are inevitably conditioned — indeed, inspired — by our knowledge of the human brain. No one, of course, pretends to understand the full workings of the brain or expects that such knowledge will be available in the foreseeable future. (It is a nice philosophical point as to whether the brain can ever, even in principle, understand itself.) But we know enough about its physical structure to draw many conclusions about the limitations of ‘brains’ — whether organic or inorganic.
There are about 10 billion switches — or neurons — inside your skull, ‘wired’ together in circuits of unimaginable complexity. Ten billion is such a large number that, until recently, it could be used as an argument against the achievement of mechanical intelligence. In the 1950s a famous neurophysiologist made a statement (still produced like some protective incantation by advocates of cerebral supremacy) to the effect that an electronic model of the brain would have to be as large as the Empire State Building and would need Niagara Falls to keep it cool when it was running.
This must now be classed with such interesting pronouncements as ‘No heavier than air machine will ever be able to fly.’ For the calculation was made in the days of the vacuum tube, the precursor of the transistor, and the transistor has now completely altered the picture. Indeed — such is the rate of technological progress today — the transistor itself has been replaced by smaller and faster devices, based upon principles of quantum physics. If the problem was merely one of space, electronic techniques today would allow us to pack a computer as complex as a human brain on only a small portion of the first floor of the Empire State Building.
The human brain surpasses the average stereo set by a thousandfold, packing its 10 billion neurons into a tenth of a cubic foot. And although smallness is not necessarily a virtue, even this may be nowhere near the limit of possible compactness.
For the cells composing our brains are slow-acting, bulky, and wasteful of energy — compared with the scarcely more than atom-sized computer elements that are theoretically possible. The mathematician John von Neumann once calculated that electronic cells could be 10 billion times more efficient than protoplasmic ones; already they are a million times swifter in operation, and speed can often be traded for size. If we take these ideas to their ultimate conclusion, it appears that a computer equivalent in power to one human brain need not be much bigger than a matchbox, and probably much, much smaller.
This slightly shattering thought becomes more reasonable when we take a critical look at flesh and blood and bone as engineering materials. All living creatures are marvelous, but let us keep our sense of proportion. Perhaps the most wonderful thing about life is that it works at all, when it has to employ such extraordinary materials and has to tackle its problems in such roundabout ways.
Consider the eye. Suppose you were given the problem of designing a camera — for that, of course, is what the eye is — which **has to be constructed entirely of water and jelly**, without using a scrap of glass, metal, or plastic. Obviously, it can’t be done.
You’re quite right; the feat is impossible. The eye is an evolutionary miracle, but it’s a lousy camera. . .
These defects are due to the fact that precision scientific instruments simply cannot be manufactured from living materials at this time. . .
Though I would hate to lay down the law and contend that nowhere in the universe can there be organic Geiger counters or living television sets, I think it highly improbable. There are some jobs that can be done only by transistors or magnetic fields or electron beams and are therefore beyond the capability of purely organic structures. . .
There is another fundamental reason living machines such as you and I cannot hope to compete with nonliving ones. We are handicapped by one of the toughest engineering specifications ever issued. What sort of performance would you expect from a machine that has to grow several billionfold during the course of manufacture — and which has to be completely and continuously rebuilt, molecule by molecule, every few weeks?
Though intelligence can arise from life, it may then also discard it. Perhaps at a later stage, as the mystics have suggested, it may also discard matter; but this leads us in realms of speculations that an unimaginative person like myself would prefer to avoid.
One often-stressed advantage of living creatures is that they are self-repairing and reproduce themselves with ease — indeed, with enthusiasm. This superiority over machines will be short-lived; the general principles underlying the construction of self-repairing and self-reproducing machines have already been worked out. There is, incidentally, something ironically appropriate in the fact that Turing, the brilliant mathematician who pioneered in this field and first indicated how thinking machines might be built, shot himself[*] a few years after publishing his results. It is very hard not to draw a moral from this.”
[*] Of course Turing didn’t shoot himself, he ate a poisoned apple like Snow White in the fairy tale (Snow White’s poisoned apple had the advantage of being reversible, unlike the cyanide that Turing used). As for the “moral” — you’d think Clarke, of all people — but never mind.
To comment on your concluding paragraph. It sounds like major extensions to human lifespan will be so difficult that we’ll see growing extraterrestrial populations well before then (perhaps beginning around mid-century). So there should be plenty of room for everybody (the resources of the Solar System are sufficient to support larger populations off Earth than on it).
But I disagree with your statements that long lifespan will be a requirement for long-term space expeditions, or that cosmic radiation will be a problem. I believe that interstellar flight by communities of say between 100 and 1000 people will be possible, and a natural extension of space colonies, thus taking several generations to reach their goal. Cosmic ray shielding may be by passive matter shielding, though this is heavy, or by magnetic fields, or perhaps a combination of the two.
Stephen
Oxford, UK
I detest Clarke and just about all his views I have encountered… but you already know that, Jim. Also, as you say, Turing didn’t commit suicide because the conclusions of his research depressed him.
Stephen, you know my thoughts on longevity, space exploration, etc from having read my Aliens essays. So I don’t need to reiterate that even with extended lifespans we won’t bridge the distances between stars in a few generations. I do deem radiation to be a major obstacle and 1000 people are not sufficient for either outbreeding or skill/knowledge redundancy.
[…] Athena Andreadis re-posted a thoughtful essay she wrote on the issue of brain uploading to coincide with the current Singularity discussion. […]
“Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die”
This is certainly correct. However, if the post-transfer identity is sufficiently similar to the original mind, it will indeed think that it has survived, and claim to be the same person. How similar would it need to be for you to accept its claim? Would 95% be enough? If not, would you also reject the claim of continuity of consciousness of a brain trauma victim who lost 10% of their brain, mind, memories, however you want to measure it?
I think these are tough questions, with a lot more gray area than you allow. If the post-transfer identity is essentially indistinguishable from the original in their memories, personality, and thought processes, it does not really matter if you call the process mind transfer or cloning.
On another note, the focus on neurons and/or transistors is a red herring. The mind works on a much higher level, and neither neurobiologists nor computer scientists will be breaking much ground in AI. I believe that this area belongs to psychologists. Psychology is, after all, the study of the mind.
Looked at from a higher level, uploading may be a lot less daunting than it may at first appear. Empress has alluded to this, having uploaded a significant portion of her “mind” onto the Internet. Humans are unique in their ability to transfer and preserve parts of their minds, using language to do so. Think of an autobiography, a thousand pages, years in the making. How much of a person’s mind is contained therein? None? 5%? 20%? 50%? More? If the author herself cannot think of anything more to add, who can claim to know what is missing? Of course, books are dead information, and we do not yet know how to put preserved memories and personality traits back into action, to generate new thoughts and expressions in a similar way as the brain does. It is a difficult undertaking, for sure, but the strong claim that it is impossible without replicating the squishy wet-ware where it originally developed is unsubstantiated and a bit odd in view of the lack of experience we have in this area.
Andreas, you and I have gone over this ground before when the article first appeared. So I won’t bother revisiting it — especially if you choose to misunderstand/misinterpret some of the points. Books are a truly sloppy analogy for many (and obvious) reasons.
This evening I stood in my driveway and jumped up toward the moon. That is as relevant to actually getting to the moon, as uploading blog posts to the internet, or writing a biography, is to uploading a mind. It is possible to get to the moon, though hideously difficult and expensive. But jumping up and down has nothing to do with getting to the moon.
I’m pretty much with Athena. I do believe that true AI is possible, though it won’t be like us. As far as uploading minds…seems rather unlikely. Especially given that it’s very clear we don’t have a clue how our minds actually work.
But arguing that a few blog and picture posts are the first steps to uploading a mind is as fatuous as believing jumping up and down in my driveway is a good first step to the moon.
Yes, these analogies-by-bad-poetics are kinda painful. So are claims along the lines of “the mind works at a higher level” which take us right back to quasi-mystical exceptionalism.
I like this post. I’ve always been instinctively deterred from writing the sort of sci-fi that is predicated on the existence of artificial life and your thoughtful analysis vindicates my gut feeling.
I tend to write very mundane sci-fi, with people in the near future. The gentleman at Nature magazine likes it so I might stick with it – I need to write something I can believe in and more importantly that I – a non-scientist – can explain.
I am also a programmer, btw, though not an advanced mathematician of any sort!
You make an important point about convincing explanations, Susan. Nature magazine is an interesting outlet for SF — is your work already there? Send me specifics so I can read it and let others know about it!
Hi Athena
Always happy to engage in blatant self-promotion 🙂
First story – Stay Special
http://www.nature.com/nature/journal/v466/n7309/full/4661014a.html
Second story – Roundabouts
http://www.nature.com/nature/journal/v473/n7345/full/473118a.html
Both kind of on the same theme – social responses to female ageing. No FTL or Zorgs involved (well it is Nature, they wouldn’t approve of too much of that stuff!) I must submit something a bit different the next time!
retweeted your link, btw, so you might get some hits from the little green isle 😉
Your Nature stories are chilling… they could easily be real. Thank you for the signal boost!
Thank you in return for your kind comments 🙂
Fascinating essay, Athena. Your ripostes to the critics are pretty convincing to this total non-scientist.
Have to admit to a hint of sadness, though. I do love the mobility of minds enabled by the science of Banks’ Culture novels.
Interestingly enough, John C. Wright’s transcendence novels delved somewhat into the status of the downloaded/replicated mind. Is the replicated mind “really” the same entity? What if a few crucial minutes have been erased? Fascinating discussion, despite the many problems with his trilogy from a literary (and political/ethical) perspective.
I’m glad you enjoyed the essay, Brian! I agree, the concept of mobile minds (which includes the older standby of possession) is fascinating, even if we can never attain it. As you can probably guess, the likelihood of my reading anything Wright has written is essentially zero.
Sadly… Wright has become a bit of a lunatic, hasn’t he? Almost Vox Day-level toxic “libertarianism”. He truly believes he met “the Virgin Mary”.
His Golden Transcendence trilogy could have been edited into one novel, but still… it was positively epic in scope, scale and technological inventiveness. Even if the science was all wacky, it was still so very, very over-the-top epic.
Yes, I consider him truly toxic. And since life is finite, I have to choose what I read. Although it has been a while since I read a good space opera that insulted neither my intelligence nor my sensibilities.
Hey, all…interesting discussion over here on the materiality of “the mind” that I thought I would append to this thread.
http://realevang.wordpress.com/2011/07/09/alan-roebuck-and-the-material-nature-of-consciousness/#more-443
Thank you for the link, Brian! Here’s another interesting one, about physical limits to the human brain.
Caltech press release, 20 July 2011:
Caltech researchers create the first artificial neural network out of DNA
Molecular soup exhibits brainlike behavior
[Image caption: Caltech researchers have invented a method for designing systems of DNA molecules whose interactions simulate the behavior of a simple mathematical model of artificial neural networks.]
PASADENA, Calif.—Artificial intelligence has been the inspiration for countless books and movies, as well as the aspiration of countless scientists and engineers. Researchers at the California Institute of Technology (Caltech) have now taken a major step toward creating artificial intelligence—not in a robot or a silicon chip, but in a test tube. The researchers are the first to have made an artificial neural network out of DNA, creating a circuit of interacting molecules that can recall memories based on incomplete patterns, just as a brain can.
“The brain is incredible,” says Lulu Qian, a Caltech senior postdoctoral scholar in bioengineering and lead author on the paper describing this work, published in the July 21 issue of the journal Nature. “It allows us to recognize patterns of events, form memories, make decisions, and take actions. So we asked, instead of having a physically connected network of neural cells, can a soup of interacting molecules exhibit brainlike behavior?”
The answer, as the researchers show, is yes.
http://www.eurekalert.org/pub_releases/2011-07/ciot-crc072011.php
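For readers curious about the underlying math: pattern completion from partial cues is the signature behavior of a Hopfield associative memory, the sort of “simple mathematical model of artificial neural networks” the release mentions. Below is a minimal Python sketch of such a network, an illustration of the abstract model only, not of the Caltech group’s DNA implementation.

import numpy as np

def train(patterns):
    # Hebbian learning: superimpose the outer products of the stored patterns.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    # Repeatedly update until the network settles on a stored memory.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Two 8-bit "memories" encoded as +/-1 vectors.
memories = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = train(memories)

# Present a corrupted version of the first memory (last bit flipped)...
cue = np.array([1, 1, 1, 1, -1, -1, -1, 1])
print(recall(W, cue))  # ...and the network restores the complete pattern.

The interesting part of the Caltech work is precisely that an update rule this simple can be realized in interacting molecules rather than in code.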
OK as far as it goes, but the crux and nub are in these words: “interactions simulate the behavior of a simple mathematical model of artificial neural networks.” This is three steps removed from the real thing.
http://www.technologyreview.com/blog/arxiv/27553/
Embodiment, Computation And the Nature of Artificial Intelligence
The notion of intelligence makes no sense without a broader view of computation, argues one of the world’s leading AI researchers
kfc 02/06/2012
One of the buzzwords in artificial intelligence research these days is ‘embodiment’, the idea that intelligence requires a body.
But in the last few years, a growing body of researchers has begun to explore the possibility that this definition is too limited. Led by Rolf Pfeifer at the Artificial Intelligence Laboratory at the University of Zurich, Switzerland, these researchers say that the notion of intelligence makes no sense outside of the environment in which it operates.
For them, the notion of embodiment must, of course, capture how the brain is embedded in a body but also how this body is embedded in the broader environment.
Today, Pfeifer and Matej Hoffmann, also at the University of Zurich, set out this thinking in a kind of manifesto for a new approach to AI. And their conclusion has far-reaching consequences. They say it’s not just artificial intelligence that we need to redefine, but the nature of computing itself.
The paper takes the form of a number of case studies examining the nature of embodiment in various physical systems. For example, Pfeifer and Hoffmann look at the distribution of light-sensing cells within fly eyes.
Biologists have known for 20 years that these are not distributed evenly in the eye but are more densely packed towards the front of the eye than to the sides. What’s interesting is that this distribution compensates for the phenomenon of motion parallax.
When a fly is in constant forward motion, objects to the side move across its field of vision faster than those to the front. “This implies that under the condition of straight flight, the same motion detection circuitry can be employed for motion detection for the entire eye,” point out Pfeifer and Hoffmann.
That’s a significant advantage for the fly. With any other distribution of light-sensitive cells, it would require much more complex motion-detecting circuitry.
Instead, the particular distribution of cells simplifies the problem. In a sense, the morphology of the eye itself performs a computation. A few years ago, a team of AI researchers built a robot called Eyebot that exploited exactly this effect.
What’s important, however, is that the computation is the result of three factors: simple motion detection circuitry in the brain, the morphology or distribution of cells in the body and the nature of flight in a 3-dimensional universe.
Without any of these, the computation wouldn’t work and, indeed, wouldn’t make sense.
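To make the parallax argument concrete, here is a toy calculation (my own illustrative numbers, not taken from the paper). Under straight forward flight, a point at bearing theta from the heading sweeps across the visual field at a rate proportional to sin(theta): slow in front, fast to the side. Receptor spacing that also grows as sin(theta) therefore makes the image cross receptors at the same rate everywhere.

import numpy as np

v, d = 1.0, 1.0  # forward speed and object distance (arbitrary units)
thetas = np.radians([10, 45, 90])

# Apparent angular speed of an object at bearing theta during forward flight.
angular_speed = v * np.sin(thetas) / d
print(angular_speed)  # roughly 6x faster at the side than near the front

# Receptors packed densely toward the front (spacing ~ sin(theta)) mean the
# image crosses receptors at a constant rate, so one uniform motion-detection
# circuit works across the whole eye.
spacing = np.sin(thetas)
print(angular_speed / spacing)  # constant across the eye

In other words, part of the “computation” is frozen into the geometry of the eye, which is exactly the paper’s point.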
We’ve looked at examples of morphological computation on this blog in the past (here and here, for example). And Pfeifer has been shouting from the rooftops for several years, with some success, about the role that shape and form play in biological computation.
But today he and Hoffmann go even further. They say that various low-level cognitive functions, such as locomotion, are clearly simple forms of computation involving the brain-body-environment triumvirate.
That’s why our definition of computation needs to be extended to include the influence of environment, they say.
For many simple actions, such as walking, these computations proceed more or less independently. These are ‘natural’ actions in the sense that they exploit the natural dynamics of the system.
But they also say that this kind of natural computation provides a platform on which more complex cognitive tasks can take place relatively easily. They think that systems emerge in the brain that can predict the outcome of these natural computations. That’s obviously useful for forward planning.
Pfeifer and Hoffmann’s idea is that more complex cognitive abilities emerge when these forward-planning mechanisms become decoupled from the system they are predicting.
That’s an interesting prediction that should lend itself to testing in the next few years.
But first, researchers will have to broaden the way they think not only about AI but also about the nature of computing itself.
Clearly an interesting and rapidly evolving field.
Ref: http://arxiv.org/abs/1202.0440 : The Implications of Embodiment for Behavior and Cognition: Animal and Robotic Case Studies
This thinking sounds backward, although broadening the horizons of computation is a good idea.
The thing people often overlook is that when a computer uploads some information, the original is always retained and a duplicate is created on the other side. Therefore, if (and when) uploading a brain becomes possible, it will likely only create a digital duplicate of one’s brain, not transfer it.
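In programming terms, this is copy semantics. A minimal Python sketch of the point (the data here is hypothetical, obviously):

import copy

original = {"name": "Alice", "memories": ["first day of school"]}
upload = copy.deepcopy(original)

print(upload == original)   # True: identical contents at the moment of copying
print(upload is original)   # False: a second, separate entity; nothing moved

upload["memories"].append("waking up as software")
print(original["memories"])  # ['first day of school']: the original persists

The original is never “transferred”; at best it gains a twin that immediately starts accumulating its own separate history, which is the divergence argument of the essay in miniature.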