In order for Artificial Intelligence (AI) to be realized, a comprehensive understanding of general (human) intelligence must first be achieved, though that intelligence need not be replicated outright. What is intelligence, anyway? Eliezer S. Yudkowsky states: "In humans, intelligence is a brain with a hundred billion neurons and a hundred trillion synapses; a brain in which the cerebral cortex alone is organized into 52 cytoarchitecturally distinct areas per hemisphere. Intelligence is not the complex expression of a simple principle; intelligence is the complex expression of a complex set of principles. Intelligence is a supersystem composed of many mutually interdependent subsystems - subsystems specialized not only for particular environmental skills but for particular internal functions."
He postulates that there is no simple set of rules with which we can define the complex process of abstract thought, and further argues that, unlike physics, the field of AI cannot succeed by condensing complexity into relatively simple expressions.
Read Yudkowsky's Levels of Organization in General Intelligence to learn more about the application of general intelligence in AI. Overall, it's an interesting read.
Thoughts?
I recently attended a game developers meeting and the topic of AI came up. The general consensus was that for an AI to be fun for a player, it has to be somewhat predictable. You don't really need genetic algorithms or neural nets; state machines are the popular mechanism. A minimal sketch of the idea is below.
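Roughly, a game's state machine AI is just a handful of named states with explicit transitions between them, which is exactly why players can learn and anticipate it. Here's a minimal Python sketch; the enemy class, states, and distance thresholds are all invented for illustration, not taken from any particular game:

```python
# A minimal state-machine sketch for a game agent: a hypothetical
# patrol/chase/attack enemy. All names and thresholds are made up.

class EnemyAI:
    def __init__(self):
        self.state = "patrol"  # current state; transitions are explicit

    def update(self, distance_to_player):
        # Each state has a small, fixed set of transitions, so the
        # behavior is easy for a player to read and predict.
        if self.state == "patrol":
            if distance_to_player < 10:
                self.state = "chase"
        elif self.state == "chase":
            if distance_to_player < 2:
                self.state = "attack"
            elif distance_to_player > 15:
                self.state = "patrol"
        elif self.state == "attack":
            if distance_to_player >= 2:
                self.state = "chase"
        return self.state

# The agent reacts the same way every time the player approaches,
# which is the predictability game designers are after.
enemy = EnemyAI()
for d in (20, 8, 1, 5, 30):
    print(d, "->", enemy.update(d))
```

The whole behavior fits in one readable table of states and transitions, which is also why designers can tune it directly instead of retraining anything.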
Which got me thinking: what would we have these iRobots do for us? Armed with a complex set of principles, how predictable will they be? They probably can't run a nuclear reactor or babysit a 5-year-old. I don't want my iRobot so complex that one of its neuroses would be unleashed one day.
Posted by: Anonymous Coward | February 21, 2005 at 12:35 PM
AC, yeah, it's hard to know at this point what we'd use AI "beings" for. I do not think we are close to creating thinking machines capable of developing neuroses. We have a great deal of work to do in developing a solid understanding of intelligence and how to "create" or enable it from non-thinking contexts. Machines cannot evolve an intelligence on their own; we give it to them, and we don't fully understand what it is yet, as the paper you read makes clear.
Posted by: Carmine | February 21, 2005 at 04:01 PM