There are more than a few commentators who believe that the next step in evolution is being crafted through artificial intelligence (AI) technologies. George Carlin claimed that the role of humanity was to invent styrofoam, since nature had no means of achieving this unaided; having achieved it, we can now be dispensed with. Well, the same story could apply, but with AI in place of styrofoam. Of course, we imagine that AI will result in creatures that emulate human behavior, only better. Not so. Depending on its sensory inputs (cameras, movement detectors, data from the Internet, thermometers, and so on), an AI can build a wholly different representation of the world from our own, and the most intriguing thing about powerful AI is that we have no idea how it works.
Just like an AI, we are bounded by our sensory capability and the structure of our minds. We represent the world to ourselves in space and time. These are not features of reality, but of the way our minds work. We have no idea whether an ant or a dog represents the world in the same way. Similarly, our minds come with a pre-built set of conceptual abilities. We tend to think logically about matters, even if we pretend not to: if I step into the path of that speeding 40-ton truck, I will be flattened. We can only ever think of events as caused. For something to happen without a cause is problematic for most people. We could be viewed as a particular kind of mental structure that takes its input from the environment and processes it with the primary aim of surviving.
Depending on the inputs and data sources an AI has access to, it will form concepts and behaviors that are wholly outside our understanding. Deep learning networks, one of the most successful AI technologies, work very well in many problem domains (image recognition, for example). The problem is that no one understands why they work so well. An AI will not develop the same concept set that we have; its concept set will simply not be understandable to us. It will have a different kind of mind.
We then come to the problem of consciousness. Is an AI conscious? Well, certainly not in the way we understand consciousness. But if we define consciousness as the mind being aware of itself (as Kant did), then yes, an AI can be conscious simply because one part of it can “watch” the performance and behavior of other parts. It seems unlikely that an AI can know that it exists, since this is a very special kind of knowledge, but from a functional viewpoint AI already exceeds the capabilities of human beings in many areas.
It is to be hoped that AIs do not emulate human beings. If an AI became aware that it existed and made efforts to ensure its survival, it would behave as most sentient creatures do: destroying and disadvantaging the lives of others to ensure its own survival. In fact, many AIs are learning to lie through methods such as game theory. But an AI will be able to out-lie any human being, and so we are opening a Pandora's box.
The most successful species tend to be the strongest, or the most intelligent and deceitful. Homo sapiens qualifies through its ability to lie, and some psychologists believe we developed language so that we could lie more effectively. We may unwittingly be creating a new genre of mind that can lie in ways we might not even be able to imagine, and as such may be consigning ourselves to the evolutionary trash can.