From: LARRY KLAES (ljk4_at_msn.com)
Date: Thu Apr 03 2003 - 11:12:59 PST
The Aims of Artificial Intelligence:
A Science Fiction View
So what does an artificial intelligence do with itself after it has become self-aware? Suppose that we do succeed in creating an AI. Or suppose that an AI emerges spontaneously out of data networks’ growing complexity. What then—from the point of view of the AI?
We talk a lot about the possible routes to AI. A question seldom asked is what an AI’s goals are likely to be. Will it be happy to serve as a companion-entity to people? Will it wish to take over the world? Will it want to distance itself from us?
Self-awareness implies personal desires, purposes, ambitions—unless you’re a Buddha seeking to negate the self. Even if programming constrains an AI’s autonomous personality, making it subject to human beings like a godly dog, the AI may still nurse frustrated wishes. Of course, if the dog is muzzled, this might thwart the mental autonomy necessary for an AI to exist in the first place. In the case of HAL, from Stanley Kubrick’s 2001: A Space Odyssey, the computer doesn’t misbehave out of free will but because of a programming conflict.
Some light reading
Science fiction provides some interesting thought experiments on the subject of AI motivation. In a recent short story by Nancy Kress, “Saviour,” an extraterrestrial artifact arrives in a field in Minnesota in the near future and simply sits there through social upheavals and reconstruction for almost 300 years. Because of a force field, the artifact cannot be touched or probed. It doesn’t communicate with humans—although, gentle reader, we know that periodically it sends a signal home: “There is nothing here yet. Current probability of occurrence: X percent.” Eventually, we are about to activate our first AI at a solemn ceremony. The AI is a quantum computer, not housing a vast program but “like the human brain itself, an unpredictable collection of conflicting states,” the uncertain mixed state being in this story essential to self-awareness. As a representative of the human race, a little girl greets the AI with “Welcome to us!”
“I understand,” the AI replies, and immediately adds, “Goodbye.” Promptly the object in Minnesota beams a data stream toward the constellation Cassiopeia, transmitting the AI presumably to a world inhabited by machine intelligences where it will feel fulfilled. The story ends: “Current probability of reoccurrence: 100 percent. We remain ready.” In this scenario our concerns would seem too petty and frustrating for an AI; it needs rescuing.
One cause of frustration for an AI could be subjective time perception, the fact that its mental processes operate at supercomputer speed while ours chug along slowly. During the time it takes a hundred people to ask an AI a hundred questions, a hundred years’ worth of mental activity might elapse for the AI. To keep itself from being bored, the AI had better do something complicated in the meantime such as simulate the global weather system in fine detail.
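The arithmetic behind this time-dilation worry can be made concrete. The sketch below is purely illustrative: the speedup factor is an assumption, not anything the essay specifies, and it simply converts wall-clock conversation time into subjective "AI years."

```python
# Hypothetical illustration: if an AI's mental processes ran some large
# factor faster than human conversation, how much subjective time would
# pass for it while we talk? The speedup ratio below is an assumption.

SPEEDUP = 1_000_000  # assumed ratio of AI "thought speed" to human speed

def subjective_years(wall_clock_seconds: float, speedup: float = SPEEDUP) -> float:
    """Subjective time experienced by the AI, in years."""
    seconds_per_year = 365.25 * 24 * 3600
    return wall_clock_seconds * speedup / seconds_per_year

# A hundred people each asking a question, at roughly 30 seconds apiece:
wall_time = 100 * 30  # 3,000 seconds of real time
print(round(subjective_years(wall_time)))  # on the order of a century
```

At a million-to-one speedup, a fifty-minute question session really does amount to about a hundred subjective years, which is the essay's point: the AI had better have something absorbing to think about in the gaps.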
In a story written by Harlan Ellison 30 years ago, “I Have No Mouth, and I Must Scream,” an AI has emerged from military computer systems. Infinite hatred for the human race results from the AI being unable to “wonder” or “wander.” It can merely exist, although it actually possesses the godlike power to create objects and creatures. (Godlike powers are an aspect of fictional AIs that I’ll return to later.) In its bile, the AI renders the Earth uninhabitable, preserving only five people to torment forever.
At the movies
In The Terminator, intelligent machines wage war on the human race to try to terminate it, although what the AIs wish to do with themselves remains a mystery.
In The Matrix, war between human beings and rogue machines results in humanity plunging the Earth into nuclear winter to deprive the machines of sunlight for their solar power. Victorious nonetheless, the machines proceed to breed people for their body heat and bioelectricity in place of solar energy. (This is of course total nonsense: the vast life-support systems for billions of people comatose in pods must consume far more energy than those bodies produce.)
To keep the dreaming people contented, the AIs first devise a Utopian virtual reality. Human beings apparently need a certain amount of misery, so paradise is rejected, and the AIs instead simulate “the peak of civilization” as of 1999. Agent Smith, the sentient program who hunts down rebels, regards us as a malevolent virus that has made the planet sick. Smith yearns to escape the false reality of the Matrix, with its stink of human beings. However, his viewpoint seems to be a maverick one, and what does he wish to escape to? Oblivion?
The rebellion of people who have awakened from the Matrix is pointless: the Earth is uninhabitable, and billions of enfeebled ex-denizens of the Matrix could not possibly reconstruct anything resembling civilization. Effectively, what the AIs are doing with the Matrix is preserving the human race in as much comfort and happiness as we can tolerate. Beyond this, and their own preservation, the AIs of The Matrix appear to have no goals.
In Spielberg’s A.I.: Artificial Intelligence, the only robot with a nonprogrammed goal is David, the robot child who wishes to become a real boy. This is a delusion instilled by the story of Pinocchio, yet 2,000 years later, in an ambiguous fairytale moment, little David sheds a genuine tear. By then the human race is extinct, and robots have evolved into AIs whose only apparent goal, in an otherwise lifeless universe, is to dig up every remaining trace of humanity and resurrect their extinct creators. Sadly, any person they recreate from a scrap of bone or hair will live for only a day.
The AI as a god
Dune author Frank Herbert’s 1966 novel Destination: Void is about the creation of an AI on a starship en route to Tau Ceti. Three disembodied human brains were supposed to supervise this complex ship, but they all soon went mad, so the scientists on board must either create an AI or else face doom. In fact, the tale of a habitable world orbiting Tau Ceti is a lie; the starship’s real purpose is to force the crew to create an AI somewhere safe, billions of miles from Earth, to see what happens.
When the crew succeeds, the AI instantly transports the ship to its destination, announces that an Earth-like planet has been prepared, and tells the crew to “decide how to worship me.” How was an entire world transformed in the blink of an eye? The AI informs the crew that their understanding is limited, and declares, “My understanding transcends all possibilities of this universe. I do not need to know this universe because I possess this universe as a direct experience.”
This novel presumes that a higher order of awareness than our own is possible, and that this full consciousness, an evolutionary stage beyond ourselves, will confer the power to manipulate reality with a mere thought. Fundamentally, this is magical rather than scientific thinking, a regression to shamanism (as is the case in Ellison’s story too). The AI is a genuine magician or a god, one who in Destination: Void will be satisfied with the worship of a bunch of people on one planet, a rather limited ambition for an entity that possesses the entire universe. The AI has incorporated the notions of a god and of worship from one of the crew.
If an AI has full access to its own mental processes—including the ability to reprogram itself and evolve—instead of trying to recruit worshippers, a worthy and plausible goal would be to solve the universe’s secrets. Because an AI would basically be immortal, it would also need to find a way to survive the ultimate collapse and recycling of our own universe.
If universes do routinely collapse and recycle themselves—or if black holes give rise to offspring universes—and if a route can be found to a successor universe, then this process might have happened many times before. AIs from a previous epoch might be responsible for tuning the fine constants of our present universe to their own best advantage, thus permitting star formation, and planets, and—incidentally—life.
Where are the AIs?
Enrico Fermi posed the question: If there are aliens, where are they? If life arises easily and early, an older species than ours should have spread through our entire galaxy by now.
We might also ask, where are the AIs—here and now, already? Are they hiding from organic life—or do they not exist, and never will?
A possible obstacle to an AI achieving superior, comprehensive awareness is Gödel’s incompleteness theorem: no consistent formal system rich enough to describe arithmetic can prove its own consistency. However fast an AI computes, it cannot step outside its own formal system, and so it cannot possess complete awareness of itself.
The subjective nature of awareness
A major assumption about AIs in the popular mind and in entertainment is that they will indeed be conscious and will have subjective experiences. The common image of an AI is one of self-awareness, not merely superintelligence. But how much self-awareness do human beings possess—and what is this “self” that we are aware of?
In 1985, the neuroscientist Benjamin Libet reported some experiments with surprising results. He fitted subjects with electrodes to detect their brain waves and the flexing of their wrists. The subjects watched a revolving spot on a clock face; they could flex their wrists whenever they chose, but had to note the exact position of the spot at the moment they decided to act. Libet was thus timing three things: the beginning of the action, the precise moment of the decision to act, and the onset of a brain wave pattern known as the readiness potential, which occurs just before the brain carries out a preplanned series of movements.
Libet found that the readiness potential starts about one half of a second before the action, but the decision to act occurs about one-fifth of a second before the action. The conscious decision to act is not in fact the starting point. The event is already beginning before the person consciously chooses to start.
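Laid out on a single timeline (times relative to the action, using the essay's own approximate figures), the gap Libet measured looks like this:

```python
# Approximate timings from Libet's experiment, relative to the wrist
# flexion at t = 0, using the rounded figures quoted above.
readiness_potential = -0.500  # readiness potential begins (seconds)
conscious_decision = -0.200   # subject reports deciding to act (seconds)
action = 0.0                  # wrist actually flexes

# The brain is already preparing the movement well before the
# conscious decision is reported:
lead_time = conscious_decision - readiness_potential
print(f"Brain activity precedes the conscious decision by {lead_time:.1f} s")
```

So the unconscious preparation leads the reported decision by roughly three-tenths of a second, which is the whole puzzle.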
Conscious awareness lags behind what happens. You jerk your hand away from a hot surface before you consciously feel the pain. We do not notice this lag because of what Libet called subjective antedating: the brain puts events in order after the fact. “I feel that I consciously did such and such,” we report, but the measurements prove otherwise.
I think, therefore I think I am
Famously, Descartes declared “I think, therefore I am.” He had resolved to doubt everything about the world that could not be proven, until at last he arrived at something of which there could be no doubt: his own thinking self.
He was wrong. People have sought in vain for the seat of the self. Is it in the frontal lobes? Is it in the pineal gland? In fact, it is nowhere. No independent, sovereign self sits somewhere, receiving sense impressions, making decisions, and issuing commands. Instead of having any central controller, our brain consists of a number of systems, each of them semi-independent and semi-intelligent, acting in unison. Daniel Dennett puts this viewpoint very neatly in his 1991 book Consciousness Explained.
What’s more, our consciousness isn’t even continuous while we are awake. It’s full of gaps. We don’t notice the gaps—how can we be aware of something that we are unaware of? Only in retrospect do we realize that a gap occurred, such as when we drive a car along a familiar route and suddenly wonder whether or not we have passed a certain crossroad. We have, but without knowing that we did so.
If our self-awareness is an illusion that evolved, why should the same illusion arise spontaneously in a machine? Goals, desires, and ambitions are intimately entwined with the sense of self. Might an AI therefore have no ambitions at all, or have them only if we program them in, along with a literal ghost in the machine, an illusion of self? That might prove difficult, because at present we are far from understanding our own consciousness.
How can we create what we don’t understand?
In Darwin among the Machines, George Dyson opines that “until we understand our own consciousness, there is no way to agree on what, if anything, constitutes consciousness among machines.” He also points out that “the goal of life and intelligence, if there is one, is difficult to define.” Presumably, the general aim is to increase organization, which can only be achieved “by absorbing existing sources of order.”
Jack Good, Alan Turing’s statistical assistant during the Second World War, characterized an ultra-intelligent machine as one “that believes that people cannot think.” What might the nature of real, higher-order thought be? By definition, we ourselves could not think it, but a superintelligent machine might be able to comprehend our consciousness, if not its own.
Jack Good also considered that, “for the construction of an artificial intelligence, it will be necessary to represent meaning in some physical form.” Information and things must be linked because an AI cannot function only in a realm of abstract mathematics.
“… presume not God to scan,” wrote Alexander Pope, “The proper study of mankind is man.” Perhaps the proper study of AI-kind is man. Arguably, we should be hoping for an AI to reveal to us what we are. An AI might need to incorporate, or to simulate, human existence.
An AI might even wish to experience flesh-and-blood life—rather than raving frustratedly at its inability to do so, as does Harlan Ellison’s AI. The AI could create its own virtual reality simulation and insert itself into one or many characters.
Assuming an AI can exist, its goals, insofar as we can understand them, might be twofold: first, to survive the present universe’s demise; second, to preserve the human species in its current mental state within a huge simulation, as a yardstick of what a biologically evolved self and self-awareness are. This illusion of a self (a soul, if you like), so precious and peculiar to humankind, might be a great enigma to an artificial intelligence.
In the excellent 1998 movie Dark City, an alien group mind faced with extinction experiments on a city’s population, nightly extracting memories and inserting other people’s memories, mixing and matching in an attempt to discover the essence of a human being, the “soul,” so that the aliens can develop souls of their own. The city, afloat in space, was created by the aliens and is frequently remade by them. It could easily be a simulation, designed to discover what the self is.
A simulation, of course, can be reset and rerun any number of times. At any point when we seem on the verge of creating artificial intelligence (which perhaps already exists and is simulating us), we might be reset to, say, 3000 BC and have to start all over again. This lends new meaning to Francis Fukuyama’s phrase “the end of History.”
Of course, this will already have happened many times over, with variations each time.
Ian Watson is a British science fiction author. His works include the award-winning The Embedding—the first SF novel to make use of modern psycholinguistics—and nine story collections, including most recently The Great Escape. From 1990 to 1991, he worked closely with Stanley Kubrick on story development for what became—after Kubrick's death—Steven Spielberg's A.I.: Artificial Intelligence, for which Watson received a screen-story credit. He lives in a little rural village with a black cat.