caliban wrote: We are very far away from true AI, and I think it is because we do not really understand how our own minds work. // I also think that if we ever do create "true" AI it will be very different from human, simply because our body and our chemistry affect us far more than we allow ourselves to realize.

I totally agree, on both counts. We do understand our brain/mind at some scales, but there are still huge gaps to fill. And there is no question that even a slight change in brain architecture will result in a radically different mind. We know this from human genetic/cognitive disorders (the most obvious example is autism, whose sufferers have great difficulty constructing a theory of mind -- hence the characteristic deficits in empathy and intuition).
Many Little Dimensions or One Big One?
- Windwalker
For I come from an ardent race
That has subsisted on defiance and visions.
- rocketscientist
Completely non-scientific character (brace yourself - I am not actually a rocketscientist *gasp!*) steps into the discussion a moment to pose a question:
An AI would, by one definition, be a completely neuter being - wouldn't it? And if so, is it possible that this factor alone may create such a distance from us (human, gendered), that its psychology would be essentially alien?
Look at the human race as it stands. Males and females are (Athena assures me) genetically almost exactly alike. The differences in psychology and physiology stem solely from the balances of certain chemicals created in the endocrine system. Yet look also at the great difficulty we often have in communicating.
What would it be like to communicate with something that was truly genderless? What issues would that raise? All living creatures in human experience have been gendered organisms, as far as I know, and we don't have any experience with anything else.
- sanscardinality
Windwalker wrote: I agree with all your points. In my opinion, the most useful outcome of the AI models has been that they created a feedback loop: they furnished salient questions that biologists (broadly defined) could ask about the brain/mind.

Cool - I like it when people I respect agree with me!
Windwalker wrote: Building a model of the nervous system that attains self-awareness poses intriguing questions of scale. Single neurons are not intelligent; also, self-awareness may be a matter of degree in live organisms. It is also contested how much of human perceived free will (the "arbitrariness" that you mention) is real, versus being a complex but nevertheless hard-wired response.

It appears to me that there is a qualitative difference between a mouse or slug reacting to pain, and a robot retreating from a heat source because its thermometer and control logic told it to do so. Consciousness is a notoriously thorny thing to define sufficiently for general purposes, and so like art I suppose we can only know it when we see it and draw different lines based on our perceptions.
I've seen no direct evidence that consciousness is a sum of parts, though it does seem very likely that certain parts are at least prerequisites. It seems much more intrinsic and discrete to me, but that's a gut feeling really. I've yet to see any evidence to the contrary either, so I figure we're all guessing at this point unless I'm just ignorant of the relevant information (a distinct possibility!). We cannot directly measure consciousness, and maybe we never will be able to, so we have to rely on its external form for determining where it exists.
Perhaps the inverse of the Turing test is more relevant in some ways - even if something doesn't seem intelligent it may well be, and the fact that something does seem intelligent may just mean it's a good fraud.
I don't have any confidence in the "soul" argument, nor in the current materialist arguments, so I'll just try to enjoy having a mind to play with for a few decades! I guess I'm a consciousness agnostic.
SC
Consistency is the last refuge of the unimaginative.
Oscar Wilde
- sanscardinality
rocketscientist wrote: An AI would, by one definition, be a completely neuter being - wouldn't it? And if so, is it possible that this factor alone may create such a distance from us (human, gendered), that its psychology would be essentially alien?

Hard to say, really. AIs could have gendered methods of creating new AIs (there are male- and female-ended peripheral cables, after all...).
rocketscientist wrote: Look at the human race as it stands. Males and females are (Athena assures me) genetically almost exactly alike. The differences in psychology and physiology stem solely from the balances of certain chemicals created in the endocrine system. Yet look also at the great difficulty we often have in communicating.

There is a very similar model in software architecture called context-sensitivity. Usually this means that a piece of software can modify its behavior based on user behavior, but it can also mean that the software modifies its actual logic based on events. In other words, I might have a bunch of compiled code (non-modifiable at run time) that gets instructions on how to use itself through a document. This could be very much like the changes in the endocrine system you describe.
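A minimal sketch of that pattern in Python -- the function bodies are fixed ("compiled"), and an external instruction document selects among them at run time (the JSON format and all the names here are just illustrative assumptions):

```python
import json

# Fixed, "compiled" behaviors: these function bodies never change at run time.
def greet_formally(name):
    return f"Good day, {name}."

def greet_casually(name):
    return f"hey {name}!"

BEHAVIORS = {"formal": greet_formally, "casual": greet_casually}

# The external document plays the endocrine role: it doesn't rewrite the
# code, it just tells the compiled code how to use itself.
instructions = json.loads('{"greeting_style": "casual"}')

greet = BEHAVIORS[instructions["greeting_style"]]
print(greet("Athena"))  # -> hey Athena!
```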
rocketscientist wrote: What would it be like to communicate with something that was truly genderless? What issues would that raise? All living creatures in human experience have been gendered organisms, as far as I know, and we don't have any experience with anything else.
As to what it would be like to talk to an AI, my guess is it would be a very brief experience and not one you'd want to repeat in a future life if we get those...
SC
Consistency is the last refuge of the unimaginative.
Oscar Wilde
- sanscardinality
caliban wrote: We are very far away from true AI, and I think it is because we do not really understand how our own minds work. I don't know that it is possible, or at least practical, but I bristle at stupid arguments, like Penrose's, against AI. I also think that if we ever do create "true" AI it will be very different from human, simply because our body and our chemistry affect us far more than we allow ourselves to realize.

I've not read Penrose, but from what you've described I'd have to agree. I also agree with the alienness of a possible AI - we might not even recognise one...
Thanks for the discussion
SC
Consistency is the last refuge of the unimaginative.
Oscar Wilde
- sanscardinality
K - I've got way too much time on my hands today and we're on a favorite subject - excuse the verbosity please!
Just bumped into this on a DoD site (all public stuff):
"Another major area where neural networks are being built into pattern recognition systems is as processors for sensors. Sensors can provide so much data that the few meaningful pieces of information can become lost. People can lose interest as they stare at screens looking for "the needle in the haystack." Many of these sensor-processing applications exist within the defense industry. These neural network systems have been shown successful at recognizing targets. These sensor processors take data from cameras, sonar systems, seismic recorders, and infrared sensors. That data is then used to identify probable phenomenon."
Neural nets are the sort of thing that is most likely to lead to AI, and these ones are networked like crazy to get at all those sensors. They likely occupy the same networks as the systems that feed in the sensor data (and so could inject fake data into dumber systems), and as the other applications that use said data, like the Common Operating Picture tools that decision makers use to do warfighting.
Last week, the UK decided to use the notoriously buggy Windows OS to run its warships, including fire control - including strategic nukes!! So this is a really bad sort of thing to have going on if enough complexity can in fact yield a phase transition to consciousness...
SC
PS> I think I'm done now
Consistency is the last refuge of the unimaginative.
Oscar Wilde
sanscardinality wrote: Neural nets are the sort of thing that is most likely to lead to AI, and these ones are networked like crazy to get at all those sensors.

Sorry to be obstinate--but I have learned a lot about the math of neural networks, and they are waayy oversold. There is nothing miraculous about them. They are nothing more than a robust way to draw a boundary around irregular volumes. It's kind of hard to explain. But one major issue is: neural nets do very poorly on generalization.
Nope, sorry to disagree here, but the real future is in modularity and "agents" (which admittedly may very well be constructed from neural nets). There is tons and tons of evidence for this. Marvin Minsky made an early argument for this in "The Society of Mind." A lot of evidence from linguistics--see Steven Pinker for example--also points in this direction.
Neural nets are a fancy fad--good for fitting highly non-linear, even discontinuous data. But not much more.
"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison
- Windwalker
Snow day again??
Yikes, you guys! But I'll indulge in replies myself, because I feel happy today. Not only am I getting an itsy-bitsy grant... my best postdoc agreed to rejoin the lab as soon as I activate it. So, without more ado:
rocketscientist wrote: An AI would, by one definition, be a completely neuter being - wouldn't it? And if so, is it possible that this factor alone may create such a distance from us (human, gendered), that its psychology would be essentially alien? Look at the human race as it stands. Males and females are (Athena assures me) genetically almost exactly alike. The differences in psychology and physiology stem solely from the balances of certain chemicals created in the endocrine system. Yet look also at the great difficulty we often have in communicating.

As John le Carré's Smiley said: yes, repeat no. An AI would indeed be alien, and non-gendering would definitely be part of it. However, I suspect the problems in human communication cut across gender lines and have more to do with mindsets (ignoring for a moment whether these are soft- or hard-wired).
sanscardinality wrote: It appears to me that there is a qualitative difference between a mouse or slug reacting to pain, and a robot retreating from a heat source because its thermometer and control logic told it to do so.

At the level of response, there is actually no difference. The sensors register temperature and/or pain (thermometer), the brain determines this is not a good thing and responds by contracting a muscle to break the contact (control logic).
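Schematically, the whole loop is tiny. A minimal sketch, with an assumed threshold and stand-in sensor/actuator functions:

```python
PAIN_THRESHOLD_C = 50.0  # assumed damage threshold, purely illustrative

def read_skin_temperature():
    return 72.3  # stand-in for the thermometer/nociceptor reading

def contract_withdrawal_muscle():
    print("withdraw limb")  # stand-in for the motor response

# Sensor registers the stimulus; control logic decides "this is not a good
# thing"; the actuator breaks the contact.
if read_skin_temperature() > PAIN_THRESHOLD_C:
    contract_withdrawal_muscle()
```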
caliban wrote: One major issue is: neural nets do very poorly on generalization. // the real future is in modularity and "agents" (which admittedly may very well be constructed from neural nets).

Agreed. In fact, parts of our brain are known to be organized in modules in terms of their connectivity. For example, the cerebellum (movement coordinator) is a bank of parallel processors. The hippocampus (memory) is a single giant module.
For I come from an ardent race
That has subsisted on defiance and visions.
- sanscardinality
caliban wrote: Sorry to be obstinate--but I have learned a lot about the math of neural networks, and they are waayy oversold. There is nothing miraculous about them. They are nothing more than a robust way to draw a boundary around irregular volumes. It's kind of hard to explain. But one major issue is: neural nets do very poorly on generalization.

I was using it as a general example, which is why I said "sort of thing." Trust me, the military is using all manner of new tech for targeting and weapon systems, including the things you mention below. We don't disagree about NNs in particular.
caliban wrote: Nope, sorry to disagree here, but the real future is in modularity and "agents" (which admittedly may very well be constructed from neural nets). There is tons and tons of evidence for this. Marvin Minsky made an early argument for this in "The Society of Mind." A lot of evidence from linguistics--see Steven Pinker for example--also points in this direction.

You're talking to someone who gave a presentation on separation of concerns to a room of people the other week, so I won't disagree about modularity at all. Monoliths are stupid on many levels - they can't scale past a certain level of complexity/sophistication without falling in on themselves, and programmers cannot understand them within a few revs anyway. NNs on a large scale are specialized monoliths of a sort.
Agents may or may not be interesting - too many cheesy applications that do nothing more than filter on metadata are called agents, and those are about as interesting as mailbox rules for email. Event-driven agents on multicast content busses with write access are more interesting - then you can get more complex interplay between services. This overall concept is often referred to as a Service Oriented Architecture (put lots of structure into your data and share it between discrete services (programs) over a very capable content- and context-aware bus). Has all kinds of benefits IMHO, and architecting them is what I do for a living.
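A stripped-down, in-process Python sketch of that kind of bus (the class, topics, and agents are all made up; real SOA middleware is vastly more capable):

```python
from collections import defaultdict

class ContentBus:
    """Toy stand-in for a content- and context-aware service bus."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, agent):
        self.subscribers[topic].append(agent)

    def publish(self, topic, message):
        # Every agent on the topic sees the event and may publish back:
        # this write access is what allows interplay between services.
        for agent in self.subscribers[topic]:
            agent(topic, message, self)

def filter_agent(topic, message, bus):
    # The dumb, "mailbox rule" kind of agent: filter and forward.
    if "hostile" in message:
        bus.publish("alerts", message)

def alert_agent(topic, message, bus):
    print(f"ALERT: {message}")

bus = ContentBus()
bus.subscribe("sensor-data", filter_agent)
bus.subscribe("alerts", alert_agent)
bus.publish("sensor-data", "track 42 shows hostile behavior")  # -> ALERT: ...
```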
caliban wrote: Neural nets are a fancy fad--good for fitting highly non-linear, even discontinuous data. But not much more.

Generally I agree, but would caveat that they are pretty good at pattern recognition in just about any data set or stream if it's big enough. Not that an expert system wouldn't solve the same problem in a more consistent way in 90% of cases...
But I like where you were going. Managers and pundits love to make broad claims and engineers like it when the bits get flipped the right way, so I propose:
$FOO are a fancy fad -- good for $BAR but not much more.
where
$FOO == Very expensive thing IBM is selling at the moment as a silver bullet for some vast problem space that can only be fixed by applying expensive engineering talent.
$BAR == The problem the engineer who built the first one meant to solve.
This also applies to SOA:
"SOAs are a fancy fad -- good for connecting applications that can leverage them fully and have intrinsic value, but not much more."
In a SOA, a NN may be just one of many services working on the same data at the same time. It may be useful for something in particular or it may not. You may have a meta-service applying judgements to output from an expert system and a NN before deciding what to pass along to a user.
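For instance, the arbitration might look like this toy sketch, where both underlying "services" and the decision rule are hypothetical stand-ins:

```python
def expert_system(document):
    # Classic if/then rules: consistent, but only as good as its rule set.
    return "hostile" if "weapons" in document else "benign"

def neural_net_score(document):
    # Pretend confidence in [0, 1]; a real NN would compute this from features.
    return 0.9 if "convoy" in document else 0.1

def meta_service(document):
    verdict = expert_system(document)
    score = neural_net_score(document)
    # Judgement layer: trust the rules unless the NN strongly disagrees.
    if verdict == "benign" and score > 0.8:
        return "flag for human review"
    return verdict

print(meta_service("convoy sighted, no weapons visible"))  # -> flag for human review
```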
Fun conversation.
SC
Consistency is the last refuge of the unimaginative.
Oscar Wilde
- sanscardinality
Re: Snow day again??
Windwalker wrote: At the level of response, there is actually no difference. The sensors register temperature and/or pain (thermometer), the brain determines this is not a good thing and responds by contracting a muscle to break the contact (control logic).

If one abstracts the control logic that much, I agree that they are similar. The difference I perceive lies within the "brain determines this is not a good thing" part. Both humans and animals will sometimes choose to sacrifice themselves or allow pain to themselves. The robot can only do this if it's hardwired to do so, directly or indirectly. The robot doesn't know it's doing anything - its "mind" is just a collection of logic a living being designed and willfully put into it. This is why I said current AI are simulacra and not related to consciousness - they imitate the mind but have none. They are like Hofstadter (got the spelling right this time) making sentences that are similar to life forms. I don't think we are disagreeing necessarily, but focusing on different aspects of the scenario.
This reminds me of the idea that if we could learn the state and position of everything in the universe, along with a complete understanding of physics, all human decisions (among everything else) would be predictable. I can't completely discount the idea, but I'm extremely skeptical about it. If anyone sorts it out, I'll bet they keep it a secret and sell it to J.P. Morgan for stock profiteering.
SC
Consistency is the last refuge of the unimaginative.
Oscar Wilde
- Windwalker
Re: Snow day again??
sanscardinality wrote: If one abstracts the control logic that much, I agree that they are similar. The difference I perceive lies within the "brain determines this is not a good thing" part. // I don't think we are disagreeing necessarily, but focusing on different aspects of the scenario.

Actually, the determination component is the fascinating part: responses such as the one we discussed happen so fast that they apparently bypass the brain centers in charge of higher executive functions. These don't include just reflexes. They also encompass such actions as grabbing a falling glass or a falling person. The latter would be interpreted as a choice -- but it appears that our sensors make the decision based on the speed and angle of the object's movement, rather than on complex emotions.
For I come from an ardent race
That has subsisted on defiance and visions.
- sanscardinality
Re: Snow day again??
Windwalker wrote: Actually, the determination component is the fascinating part: responses such as the one we discussed happen so fast that they apparently bypass the brain centers in charge of higher executive functions. These don't include just reflexes. They also encompass such actions as grabbing a falling glass or a falling person. The latter would be interpreted as a choice -- but it appears that our sensors make the decision based on the speed and angle of the object's movement, rather than on complex emotions.

I'd be interested to see how a martial artist's or an F1 driver's brain works when confronted with these kinds of situations. I would say that being punched is akin to watching a glass fall, and once you train for a while you can certainly decide what to do with a blow coming your way. I've taken shots that I could have avoided in order to tie up a leg for a throw, for example, because I knew that particular guy didn't have a strong lead kick. Perhaps a non-trained person would jump back? I dunno, but it's an interesting thing to think about!
SC
Consistency is the last refuge of the unimaginative.
Oscar Wilde
neurotic networks
Neural networks are way cool--once one understands what they are.
Here's what neural nets do:
Imagine you have a system to read some data and determine a response. It is sufficient to consider a yes/no response. (The response can be more complicated -- a series of yes/no answers on different issues -- but it works out the same. You can also grade between yes, maybe, and no, but that also works the same.) Now the input data form a multi-dimensional space, and the desired "yes" response is some blob inside that space. The blob may be compact, it may be diffuse, it may have holes and squiggles and all sorts of things.
By taking examples of "yes" and "no" a neural net is able to build a robust (meaning if part fails you still get a pretty good answer) and general boundary between the "yes" blob and the "no" universe. (If you have different responses, they have different blobs that you can treat, formally, as independent.)
With a sufficient number of examples, the neural net builds the boundary of the blob, and if you then present it with a case it can determine if it falls within or without the blob. And it can do this without knowing anything about the topology of the blob---as long as you have a sufficiently complex net.
The robustness and the generality are pretty cool. But that's what it is--building fences. You can make it a slope rather than a fence to get "maybe". This also illustrates why neural nets don't generalize well--if you don't have examples from the region in question, the neural net does not know where to draw the boundary.
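A minimal sketch of the fence-building, assuming numpy and scikit-learn are available (the disc-shaped "yes" blob and every parameter here are just illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))            # points in a 2-D input space
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(int)   # "yes" blob = a disc

# A small net learns a fence between the "yes" blob and the "no" universe.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)

# Inside the region covered by examples, the fence is good:
print(net.predict([[0.1, 0.1], [0.9, 0.9]]))  # should print [1 0]
# Far from any training example, the net is just guessing where the
# boundary goes -- the generalization problem described above.
print(net.predict([[5.0, 5.0]]))
```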
And--I should say--"agents" is also a buzzword, even though I use it. I would agree that, more broadly speaking, modularity is important. That is what I really meant. Thanks.
Hope the above helps.
"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison
- sanscardinality
Re: neurotic networks
caliban wrote: Neural networks are way cool--once one understands what they are. Here's what neural nets do...

Yep - they're fancy calculators. Then again, so are all the other AI approaches, including big collections of highly connected modular logic.
From an engineering perspective, the alternatives are sometimes better and sometimes worse. Generally, a neural-net-like outcome can be had through a long collection of if/thens (a classic expert system), or link analysis, or multiple threads of logic with a meta-controller (aircraft fly-by-wire systems), or the like, but those approaches can be problematic in some circumstances, for at least a few reasons:
1) You may not have a great idea of how to set the boundaries in the first place, such as when auto-categorizing unstructured documents into some human-understandable taxonomy*, or when identifying "hostile behavior."
2) It can be computationally expensive to employ other, more rules-heavy means.
NNs are more prone to unpredictable outcomes, and in some cases -- such as a constantly and rapidly changing set of inputs -- they can be "retrained" more easily than other types of systems. They're good at some things, and aren't particularly worse at getting to very complex behavior than many other approaches. Big NNs are themselves collections of smaller NNs in some cases, and so have some of the characteristics of modularity we were talking about.
SC
* Google's success is based on rejecting the NN+taxonomy approach and using user behavior analysis to infer human judgements. If you get enough humans making choices on a set of data, you can pretty easily leverage their actual intelligence to get a better outcome against a huge data set. Google outperforms all of the best machine-logic approaches I'm aware of for predicting human behavior and putting the right options on the screen. In other words, they stopped trying to model the underlying mechanism of intelligence and instead just borrow the minds of all their users. They are known to be working on how to take that massive pile of behavioral data and build AI from it. I'd not be surprised if they could pass a very robust Turing test with a non-conscious computer in a decade or less.
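The core of that trick fits in a few lines -- a toy sketch of ranking by accumulated human choices, emphatically not Google's actual algorithm:

```python
from collections import Counter, defaultdict

clicks = defaultdict(Counter)  # query -> Counter of documents users chose

def record_click(query, document):
    clicks[query][document] += 1

def rank(query, candidates):
    # No model of intelligence at all: just order candidates by how often
    # past humans picked them for this query.
    return sorted(candidates, key=lambda d: clicks[query][d], reverse=True)

record_click("wormholes", "arxiv.org/abs/gr-qc/0701133")
record_click("wormholes", "arxiv.org/abs/gr-qc/0701133")
record_click("wormholes", "example.com/wormholes-101")

print(rank("wormholes", ["example.com/wormholes-101",
                         "arxiv.org/abs/gr-qc/0701133"]))
```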
Consistency is the last refuge of the unimaginative.
Oscar Wilde
A vote for Randall's Bulk
General Relativity and Quantum Cosmology, abstract
gr-qc/0701133
From: Francisco Lobo [view email]
Date (v1): Wed, 24 Jan 2007 18:02:33 GMT (12kb)
Date (revised v2): Tue, 6 Mar 2007 17:38:14 GMT (12kb)
A general class of braneworld wormholes
Authors: Francisco S. N. Lobo
Comments: 6 pages, Revtex4. V2: comments and references added, to appear in Phys. Rev. D
The brane cosmology scenario is based on the idea that our Universe is a 3-brane embedded in a five-dimensional bulk. In this work, a general class of braneworld wormholes is explored with $R\neq 0$, where $R$ is the four dimensional Ricci scalar, and specific solutions are further analyzed. A fundamental ingredient of traversable wormholes is the violation of the null energy condition (NEC). However, it is the effective total stress energy tensor that violates the latter, and in this work, the stress energy tensor confined on the brane, threading the wormhole, is imposed to satisfy the NEC. It is also shown that in addition to the local high-energy bulk effects, nonlocal corrections from the Weyl curvature in the bulk may induce a NEC violating signature on the brane. Thus, braneworld gravity seems to provide a natural scenario for the existence of traversable wormholes.
http://arxiv.org/abs/gr-qc/0701133