Thanks for the A2A Carl.
Discussions of superintelligence as an outgrowth of strong AI often restrict themselves to a reductionist, mechanical view of human intelligence, and to replicating and amplifying a narrow set of cognitive processes from that perspective. Confined to this line of thinking, "superintelligence" is a predictable development IMO - I just wouldn't call it that. Why? Because, like many monodimensional views of intelligence, that development emphasizes quantitative, objective metrics while sidestepping important qualitative, subjective, and intersubjective issues - or even the full spectrum of objective ones. To appreciate what I'm getting at, take a look at my paper Functional Intelligence (http://tcollinslogan.com/code-3/images/functionalintelligence.pdf).

It is easy to lose sight of the full breadth of intelligence and its evolutionary implications when employing reductionist perspectives and methods. In a way it is easy to understand why this happens, since the fields of science and technology disproportionately attract people with Asperger's or other autism spectrum conditions, who are often high achievers in these systematizing fields. Among this population, an objective narrowing of "intelligence" is a comfortable way to systematize its functions. Add to this the fact that science and technology have themselves undergone increasing specialization, so that their relationship with other fields - or with a broader, more inclusive understanding - has been either crippled or abstracted away.

In my view, until the vastly more multidimensional spectrum of human experience (perception, insight, complex and nuanced ideation, intuition, emotional sophistication, somatic felt sense, etc.) becomes part of the generative synthesis of superintelligence, that synthesis will remain monodimensional, incomplete, and unrepresentative of the evolutionary trajectory already established in Homo sapiens. In other words, it will fall short.
To fully explore what I believe to be a more appropriate (and ultimately more useful) avenue toward heightened intelligence, we would need to answer Chalmers's hard problem - the whys of consciousness itself. That is what would allow us to achieve something truly superintelligent in the most inclusive, multidimensional, and holistic sense. Otherwise, we are just creating systems and tools - expanding on mechanization, not really on intelligence at all. Thus the development of strong AI may indeed lead to supertools, but not to a superintelligence that represents the complexity and integration of consciousness itself.
My 2 cents.