It is generally assumed that, as computing processes are refined and made more sophisticated, a point will eventually be reached where they achieve a virtually indistinguishable simulation of human intelligence. But the basic problem of artificial intelligence is not merely one of technological advancement. The basic problem of artificial intelligence is emotions.
A true and authentic artificial intelligence would be required to make decisions on its own. Now, at this point we can create programs capable of making crude decisions within extremely limited parameters, or at least computers running these programs appear capable of these decisions. But these computers are merely running calculations of equations and formulas behind the scenes. The programmer designs the program to react in certain ways and to perform certain functions on the basis of a multitude of foreseeable contingencies. They convert these contingencies into variables and assign them numeric values. So, in effect, the computer isn't really making decisions at all. The decisions have all been made in advance by the programmer. This is why a program is unable to respond effectively to unexpected developments or to appreciate the subtle nuances of unique situations. It is the programmer who was unable to foresee the development or anticipate the situation. The program itself is merely processing variables. If X happens it does Y, and so on. This is the stage at which even our most sophisticated computers remain today.
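To make the point concrete, here is a toy Python sketch of this "preset decision" stage. The thermostat scenario and every name in it are invented purely for illustration: each contingency has been foreseen by the programmer, reduced to a numeric threshold, and paired with a prescribed response.

```python
# A toy model of the "preset decision" stage: every contingency is
# anticipated in advance and boiled down to a numeric comparison.
# (The thermostat scenario and all names here are hypothetical.)

def respond(temperature: float) -> str:
    """Return the preprogrammed response for a foreseen contingency."""
    if temperature < 15.0:       # contingency X: too cold
        return "turn heater on"  # response Y, chosen by the programmer
    elif temperature > 25.0:     # contingency X': too hot
        return "turn cooler on"
    else:
        return "do nothing"      # anything unforeseen falls through here

# The "decision" was made at programming time, not at run time.
print(respond(10.0))  # -> turn heater on
```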
So, what is needed to move towards an intelligence that makes genuine decisions? All decisions are based on values. In the present scenario described above, the programmer uses literal, numeric values. The programmer holds the computer's hand, so to speak, and walks it through which functions to perform when the values are at certain exact quantities. The next step would be a computer capable of making independent evaluations. This would place the programmer one step removed from the computer's actions.
At this level, the computer would have to move beyond the mere calculation of variables. Of course, calculation still is, and always will be, its primary method of functioning, but at this level the process will need to be more fluid and complex in a way I can hardly imagine. The programmer would designate in advance certain states and conditions as being "desirable" to the computer, and the computer itself would have to devise the means by which to achieve these states and conditions. It would no longer be merely acting on the basis of preset equations. It would instead be calculating its own equations. It would have to figure out on its own the best strategy to achieve its preprogrammed ends.
We're talking about an incredibly advanced computer here; one capable of its own versatility. We've moved beyond the fumbling glitches of a computer acting on rigidly defined equations. This is a computer that could anticipate new situations and respond to unexpected developments in a way unthinkable today. This is a computer that would write its own programming.

Consider this hypothetical illustration. Let's say there's a computer programmed to maintain a certain energy state. At present, the programmer would consider all the possible situations that could disturb the equilibrium of this energy state. Then the programmer would figure out how to translate all of these situations into equations and numerical values. Through this translation, the programmer instructs the computer how to respond to each and every particular situation. There is nothing resembling a thought process involved on the computer's part, just the processing of numbers and binary states of circuitry. Now, at the next level, the programmer would simply assign the computer the task of maintaining the energy state, and the computer would program itself with the best means to fulfill this task.
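As a crude illustration of this second level, consider the following Python sketch; the energy numbers, the action set, and all names are hypothetical. The programmer supplies only a score saying which states are "desirable"; the program itself searches out the sequence of actions, devising its own means to the preset end.

```python
# A toy sketch of the second level: the programmer specifies only the
# desirable end (a target energy level); the program devises its own
# means of reaching it. All names and numbers here are hypothetical.

TARGET = 100.0  # the preprogrammed "desirable" energy state

def desirability(energy: float) -> float:
    """The only thing the programmer supplies: how good is a state?"""
    return -abs(energy - TARGET)

ACTIONS = {"charge": +10.0, "discharge": -10.0, "idle": 0.0}

def plan(energy: float, steps: int = 20):
    """Greedy search: the program works out the action sequence itself."""
    chosen = []
    for _ in range(steps):
        # Evaluate every available action against the desirability score.
        best = max(ACTIONS, key=lambda a: desirability(energy + ACTIONS[a]))
        chosen.append(best)
        energy += ACTIONS[best]
    return chosen

print(plan(60.0))  # -> ['charge', 'charge', 'charge', 'charge', 'idle', ...]
```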
Again, we're talking about an extremely sophisticated piece of technology. The rudimentary buds of actual consciousness are beginning to blossom here. This is a computer that has moved beyond calculation and taken the next step on the road to actual thought. It can evaluate; it can plan; it can formulate strategies. This is a computer that can do what our minds do, even if only to a limited degree. If we're tired, we find a place to sleep. If we're hungry, we find something to eat. We devise means to achieve certain ends. We program our own thought and actions, in a sense.
But as impressive as this hypothetical computer is, it still hasn't quite achieved the breath of life. It hasn't quite reached the state of actual artificial intelligence. It may be as close as we ever come, and it would still be an incredible achievement, but there's still a vital ingredient missing. A true artificial intelligence would have to be capable of feeling.
In the second-level scenario proposed above, we have a computer capable of devising its own means. The problem is that the ends still have to be programmed. The computer still has to be assigned the task of maintaining the energy state. The computer still has to be told that this is a desirable state of affairs. The computer is incapable of desiring on its own that the energy state be maintained. For all of its impressive abilities, it's still incapable of comprehending why the energy state needs to be maintained. Now, at the very limits of this second scenario, I suppose it would be possible to program a nearly unimaginable supercomputer with the ultimate end of maintaining its own survival, or the survival of the entire human race, or possibly of the entire universe, from which the computer could conceivably derive an infinity of sub-ends in service of this one ultimate end. Such a computer would, of course, have to have a far broader perspective than the maintenance of a single energy source.
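One way to picture this, as an entirely hypothetical sketch: the single ultimate end sits at the root, and everything the machine pursues is derived from it. The little decomposition table below stands in for whatever reasoning such a machine would actually perform.

```python
# An illustrative sketch of sub-ends hanging off one ultimate end.
# The ultimate end is fixed by the programmer; everything beneath it
# is derived. The names and the decomposition rule are hypothetical.

def decompose(end: str) -> list:
    """Derive sub-ends that serve a given end (a stand-in for whatever
    reasoning the machine would actually do)."""
    table = {
        "survive": ["secure power", "avoid damage"],
        "secure power": ["find outlet", "store charge"],
        "avoid damage": ["monitor sensors"],
    }
    return table.get(end, [])

def all_sub_ends(ultimate: str) -> list:
    """Walk the tree: every sub-end exists only to serve the one
    preprogrammed ultimate end at the root."""
    stack, found = [ultimate], []
    while stack:
        end = stack.pop()
        for sub in decompose(end):
            found.append(sub)
            stack.append(sub)
    return found

print(all_sub_ends("survive"))
# Every item printed is contingent on the root end "survive";
# the machine never chose that end, and never could.
```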
Such a computer would stand nearly shoulder to shoulder with our own intellect, but it is still only acting according to a preprogrammed end. Its own survival, or the survival of the human race, and so on, still means absolutely nothing to it. It doesn't feel anything about it one way or the other. It's still just doing what it's told. A true artificial intelligence would have to be able to devise its own ends, not merely sub-ends contingent on one ultimate preprogrammed end. It would have to want to survive; it would have to want the world to survive, if it were to be capable of true independent decision-making and genuine thought.
For human beings with natural intelligence, all of our decisions have a precedent that lies in our emotions. If we decide to get out of bed to go to work, it's because we want to keep our jobs. If we want to keep our jobs, it's because we want to get paid. If we want to get paid...you get the idea. We choose the things we value. We choose the ends we want to achieve. We are able to do this because all of our choices ultimately rest on a foundation of emotion. In the end, true artificial intelligence is about choice and independence, and these things are impossible without emotion. Only a computer endowed with emotion could decide on its own which ends it wanted to achieve; ends that wouldn't be contingent on an ultimate preprogrammed end, but rather on the feelings and desires of a genuine intelligence.
On the one hand, such a machine might never be possible, but on the other hand, it might come about simply through the natural course of things. If we ever do design computers that reach the second level and are capable of programming themselves to achieve preset ends, they might move on to that third level all on their own. Once the wheels are in motion, once the design of these machines is refined and made more sophisticated over time, once you have a machine capable of evaluating the best course of action on its own, there's no telling what it might achieve. An advanced enough computer might finally figure out that achieving consciousness and emotions is the best way to accomplish its predetermined goal. At that point it may be hard to draw the line between where the programming ends and the consciousness begins.
We like to believe that the next steps of evolution will be our own, but perhaps such a hypothetical machine will be our true successor. Maybe the world of the future will be populated by these children of our natural intellects. We dream of traveling between the stars, but our own feeble mortality seems like an insurmountable obstacle to achieving this. An artificial life form would not be bound by our physical limitations. Life on Earth may one day reach across the great divide of space and make contact with life on another world, but perhaps it won't be human life that makes this contact, but rather the new life we helped create.
A lucid and judicious summary, but in my view it is rather timid in its critique of the “artificial intelligence” branch of Promethean hubris.
Many years ago I designed the following thought experiment. (I should add that I became a footsoldier of the computer industry as long ago as 1965, and remain somewhere in the lowly ranks, ie have never successfully ascended to management.)
In this experiment we have a designer of robots, who specialises in the software side. He succeeds, according to his project manager, in the programming of heuristic and emotional skills, so that a prototype household robot is poised for the market. He tries to persuade his wife to let it do the babysitting, whilst the two of them go out to celebrate his achievement, which has been submitted for the Nobel Prize. Loyalty to her brilliant husband has its limits. “Over my dead body!” is her reply.
At the time of conducting this experiment, I had reinvented my professional life, moving from programming and team-leading to software testing. I had also been working on safety-critical systems, in which software errors would sooner or later result in a fatality. The experience was an eye-opener.
The software tester takes the same point of view as the wife of the robot-designer/Nobel Prize contender. The issue is summed up in a single word: trust.
And as someone who’s written computer programs for nearly half a century, I can attest that each one of them has a million ways to go wrong, both at high level and low level. And if the functionality is undefined and open-ended, you can’t design a test strategy; and so you cannot release the product.
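To sketch that point in miniature (a contrived Python example with invented names): defined functionality gives the tester an expected output to assert against, while open-ended "creative" behavior offers no such oracle.

```python
# Defined functionality: the expected output is knowable in advance,
# so a pass/fail test can be written. (Function and values invented.)

def add_vat(pence: int) -> int:
    """Add 20% VAT to a price given in pence."""
    return pence + pence // 5

assert add_vat(500) == 600  # well-defined behavior: testable

# Open-ended "creative" behavior has no expected value to assert
# against -- with no oracle, there is no test strategy.
```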
To summarise, a computer program is nothing like a human brain (even if a human brain can sometimes behave like a computer program). A computer program is just an inert mechanism. It can be programmed to behave with unpredictable creativity, but the unpredictability means that it can never be trusted, therefore never let loose outside Frankenstein’s laboratory.
Much as we may admire Isaac Asimov’s laws of robotics, we need to remember that he is/was a writer of fiction.
Even in fiction we've seen systems devise means to achieve the ends they've been programmed with that go awry from what the programmers predicted. HAL 9000 killing the crew to "protect the mission", for instance. It's one of those common themes that demonstrates the gap between human intuition and the cold calculation of a computer. It's kind of like the short story "The Monkey's Paw", where people get what they wished for, with flawless logic, but in horrific ways they never wanted or imagined.
Of course, there's always the factor of human error as well.
Also, I know I didn't really get into the subject of the potential dangers of artificial intelligence. I figured that had been dealt with at length elsewhere. I was really kind of using artificial intelligence to make a philosophical argument that there's an indispensable link between autonomy and emotions.
Besides, for better or worse, AI probably lies somewhere in our future. Like cloning, I think we need to concentrate on finding ethical and responsible ways to move forward with the technology, rather than trying to stand as a breakwater against the inevitable advance of the tide. But that's just me.
Yes, the linking of autonomy with emotions was well made, and I meant to come back and express appreciation. I find myself a bit old-fashioned with technology, despite having worked in the computer industry. In fact very old-fashioned. But I have just ordered a Kindle Reader, something I thought I would never do, so all is not lost.
We do actually need breakwaters to moderate the advance of the tide and the way it erodes the seashore! But that’s just me.
Well, you're ahead of me with the Kindle. I still like reading on paper, and having all my books on the shelves here.