I suppose my biggest issue is with the term Artificial Intelligence. Like a lot of terms, it is chronically misused. Taken literally, Artificial Intelligence is already here and has been for a long time, and it is not the thing to be feared. Your Solitaire and Chess games are artificial intelligence. Siri is artificial intelligence. Even Speak & Spell is artificial intelligence. These are all things that interact with us in ways that make us think they are making conscious decisions when they are not. To test a Chess program’s intelligence on an independent level, we would need to discuss something other than Chess, and we would get nowhere: your Chess game knows nothing about Chess, your Solitaire game knows nothing about Solitaire, and Speak & Spell knows nothing about spelling. The moment these machines, devices or programs become sentient is the moment we lose the ‘A’, and that is when you can be frightened.
Technology, though, is becoming very sophisticated, and although we can quite easily grasp the Chess game scenario, it is becoming harder and harder to tell the difference between programming and true sentient intelligence. So what is A.I.? Is it purely a machine that can achieve independent thought and reasoning? We can easily answer ‘yes’ to this question, but first we must define what independent thought, intelligence and reasoning really are. Do they only exist in biologically based life forms? If we merely simulate them, surely the result is not what we would call true A.I.
In 1950, Alan Turing gave us the Turing Test. His paper opens with the words: “I propose to consider the question, ‘Can machines think?’”. Because “thinking” is difficult to define, Turing chose to replace the question with another: “Are there imaginable digital computers which would do well in the imitation game?”. The Turing Test is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses. This was 1950, and the test was text only, between three participants: the interrogator (human C) and two players (A and B), one of which was a computer. It was the interrogator’s job to put questions to A and B and to decide which of the players was the computer. The test does not check the ability to give correct answers, only how closely answers resemble those a human would give, and herein lies the problem. Things have moved on a lot since then, and although Turing’s principles stand, the test would be passed by a lot of programs today. For me, the trouble with the test is precisely that it tests A.I., Artificial Intelligence. But what if a machine became truly sentient? Would it answer like a human would answer? I hope not. Is it time for a new type of test?
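The structure of that text-only game can be sketched in a few lines of code. Everything concrete here is an invented stand-in, not part of Turing’s paper: the question, the canned replies and the naive “wordier answer must be the machine” judge are all illustrative assumptions.

```python
import random

# Toy skeleton of the imitation game: the interrogator sees only text from
# players "A" and "B", one of which is a machine, and must say which is which.

def imitation_game(human_reply, machine_reply, judge):
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # the interrogator doesn't know who is who
        assignment = {"A": machine_reply, "B": human_reply}
    transcript = {label: fn("What is a rose?") for label, fn in assignment.items()}
    return judge(transcript)  # the interrogator's guess: "A" or "B"

guess = imitation_game(
    human_reply=lambda q: "A flower, obviously.",
    machine_reply=lambda q: "A rose is a woody perennial of the genus Rosa.",
    judge=lambda t: max(t, key=lambda k: len(t[k])),  # naive heuristic judge
)
print(guess)  # "A" or "B", depending on the hidden assignment
```

Note that nothing in this skeleton rewards *correct* answers, only the judge’s guess about which reply seems human, which is exactly the weakness the essay points at.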
Animal, Vegetable and Mineral
Does something need to have consciousness to be capable of independent thought? I would say yes, because one needs to be conscious of one’s surroundings and one’s past, present and possible future to be able to form thoughts and ideas. We form opinions based on our experiences, which in turn steer many of our habits and quirks. We are conscious of our ever-changing surroundings to the extent that we make countless tiny conscious and subconscious decisions every day, maybe even minute to minute. Is this different from jumping out of the road when a car we haven’t seen sounds its horn? It’s different, but there is a connection. Through nature or nurture or both, we instantly realise we’re not paying attention when we hear that horn, and we jump out of the road. If we don’t, we face the possibility of ceasing to exist. This reaction is so quick that it is almost automatic. Almost, but not quite.
So we have a consciousness. What else has one? Does a dog? I would say a dog does have a consciousness, perhaps different from ours, but a consciousness nonetheless. The dog is therefore capable of independent thought, yet a dog is not capable, in any way, of participating in a Turing Test, because the test limits itself to human language. It is only the limitation of the test, not of the dog, that prohibits the dog’s participation. I have seen dogs off their lead bound happily up to the kerb and then sit and wait for their owner to catch up. I have also seen dogs bound straight into a road, right into the path of a car, and not realise anything was wrong until the car had hit them. So what’s the difference? The unharmed dogs were programmed not to bound into the road, much like we were programmed not to run into the road, and exactly like how we program our kids not to run into the road.
What we have in common with the dog is biology. We both possess a brain based on biological matter, and the human and dog brain work in very similar ways: billions of neurons connected by synapses, all carrying tiny pulses of electrical current as information. That is a simplification. The brain is the organ that processes all of our information, and all of it is a lot. The function of a neuron is to receive information from other neurons, to process it, and to send it on to other neurons. The synapses are the connections between neurons along which the information flows. Neurons therefore process all of the information that flows within, into, or out of the central nervous system: all of the motor information through which we are able to move; all of the sensory information through which we see, hear, smell, taste and touch; and all of the cognitive information through which we reason, think, dream, plan, remember, feel and do everything else that we do with our brain. In an average human brain there are an estimated 86 billion neurons, and each of these is connected to thousands of others. This means that the number of ways information can flow among neurons in the human brain is greater than the number of stars in the known universe. Impressive, I know, but here’s the thing. There’s always a thing.
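The “receive, process, send on” description above is, in fact, the basis of the classic artificial neuron. A minimal sketch, and a drastic simplification of real biology: the weights, threshold and inputs below are all invented for illustration.

```python
# A drastically simplified artificial neuron: weighted inputs are summed
# and passed through a threshold, echoing "receive, process, send on".

def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two upstream neurons both firing push this one over its threshold...
print(neuron([1, 1], [0.6, 0.6]))  # fires: 1
# ...but one alone does not.
print(neuron([1, 0], [0.6, 0.6]))  # stays silent: 0
```

The staggering numbers in the paragraph above come from wiring billions of units like this together, each with thousands of weighted connections, not from any cleverness in the unit itself.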
If we look objectively at a biological brain, what do we see? We see electrical impulses, and that is all we see. The fact that the brain is made of biological matter is really incidental; we see similar behaviour every day in non-biological matter, such as silicon. So it doesn’t take a tremendous leap of faith to put forward the theory that if a biological human brain could be analysed and mapped to such an extent that every neuron and synapse could be duplicated in inorganic matter, we would have a brain. Whether and how it would function is a different matter. It may not function at all, but the basic theory is sound. Just because every kind of life on our planet is based on biological matter, it does not follow that there cannot be other forms of life based on any of the sub-headed categories: animal, vegetable or mineral.
Levels and Command Structures
There are many levels of consciousness, even within a single species. Humans judge each other on many forms of intellect and independent thought. Someone with an extremely high I.Q. may lack something others find second nature, like common sense. Humans also revere other things, such as sporting and physical achievement. There are therefore many different types of intelligence, and people learn in different ways and at different speeds. The real breakthrough comes when an individual is taught certain things and then expands on and adds to that information, because they have grasped the methodology of whatever it is they are learning. Parrot-fashion learners do not progress. So what does this have to do with A.I. in machines? It has everything to do with it.
A computer operating system (OS) has many levels. It is not conscious in any way we would define as conscious, but we can draw many similarities. There are high-level processes, such as the graphical user interface presented to us so that we can open folders, move things around, run applications and carry out our work. But beneath this there are many levels not controlled by us as users. There are processes owned by System and Root that run routines we just don’t need to see or even know about. There are cycles constantly running in the background, scanning for a key pressed on the keyboard or redrawing the screen. Although we are conscious, we too have many background processes that run whether we want them to or not, whether we’re aware of them or not. Does this mean we have more in common with computers than computers do with us? Perhaps.
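Those background cycles can be caricatured as a tiny event loop. This is a toy sketch, not how any real OS is written: the event names and the queue are invented stand-ins for the machinery a user never sees.

```python
import queue

# A toy event loop: the kind of low-level cycle that runs beneath the
# user-facing layer, handling events the user never thinks about.

events = queue.Queue()

def poll(events_queue):
    """Drain pending events, dispatching each one. The user only ever
    sees the results (a redrawn screen, a typed letter), never this loop."""
    handled = []
    while not events_queue.empty():
        handled.append(f"handled {events_queue.get()}")
    return handled

events.put("key_pressed:a")
events.put("screen_redraw")
print(poll(events))  # ['handled key_pressed:a', 'handled screen_redraw']
```

The point of the analogy is only this: the layer we interact with sits on top of loops like this one, which run regardless of whether we know about them.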
Just like we have levels of intellect, we also have levels of control, many of which are in place to protect us. Let’s take breathing as an example. Did you decide to breathe? No, of course you didn’t. Then it’s self-preservation, right? Of sorts, yes, but not entirely, because if you decided to stop breathing, by decision alone, could you? No, you could not. That level of control is not consciously available to us, so who or what controls it? When you sprint down the road to catch the bus, do you sit down and calculate how much extra oxygen your blood needs to keep the muscles supplied? Again, this is biomechanically controlled by your body; you have no say in it whatsoever. An easy conclusion to jump to is that it is automatic, but it’s not simply automatic at all. It is the result of many factors in operation within the mechanics of our body, separate from our intellect. So there are levels to us, much like the levels and permissions in a computer OS, many of them below our conscious thought and reasoning. Is breathing, therefore, A.I. or I? The semantic part of us would quip that of course it is Intelligence, because we are self-aware sentient beings. We cannot ignore the fact, though, that we have no real conscious control over it; it is automatic intelligence, far closer to A.I. than to I. So does this mean that the difference between A.I. and I is the ability to control? An interesting point.
As we have already mentioned, there are levels of a computer OS that we don’t normally have access to: System, Root and so on. We can act on behalf of System or Root, much as we can hold our breath or run fast to alter our breathing and hence our body’s requirement for oxygen, but under normal circumstances we don’t operate as Root. We let Root and System take care of their own business. So what of the system itself, the OS kernel, the brain? It is protected by being a kernel: if another thread or process crashes, the kernel can remain unharmed. But this doesn’t mean it is protected from all damage or attack. The kernel itself can still be corrupted, and a corrupted kernel renders the whole OS unusable; in a word, dead. Parallels can be drawn in humans. When we get sick, our bodies do the best they can to isolate a virus and kill it so that it does not spread to what they deem the more important and essential systems.
Unlike humans, a computer can have its OS reinstalled and live again. The OS will be identical and the preferences will be reset to default. Humans are all built to the same template, but we are all very different as individuals. Is this nature or nurture or both, and could it be directly compared to preferences within a computer OS? You could say no, because we choose our preferences, but do we? Do we really? Are our preferences not mainly built from our experiences, through both nurture and nature? Do we choose who we fall in love with? Do we choose to be homosexual or heterosexual? No, we don’t. We may experiment, but we have preferences that are built up throughout our lives, things that influence us on both conscious and subconscious levels in a way that could easily be called programming. Some fears are passed down to us by our parents, and we, albeit unwittingly, pass them to our children. A good example of an irrational fear we pass down (especially in England, where there are no deadly species) is the fear of spiders. So are our preferences not programmed, using parameters including, but not exclusive to, nurture and nature? What makes me love broccoli but my best friend hate it? It has nothing to do with intelligence or I.Q. or anything in the nature of intellect. We often don’t know ourselves why we like some things and dislike others. With a computer and an OS, everything is first-order logic, even when, as with our chess game, we are led to believe otherwise. The construct of intelligence, in this case, is in our minds, not in the computer’s circuits.
Logic is a Dangerous Game
Logic is a weird thing. Pure logic can be confusing to us because we are essentially only partially logical. Add to that the fact that our logic is often subjective to us as individuals, and it can get really confusing. There is a silly logician’s joke that goes:
‘A programmer’s wife sends him to the grocery store with the instructions, “get a loaf of bread, and if they have eggs, get a dozen.” He comes home with a dozen loaves of bread and tells her, “they had eggs.”’
Logically speaking, the programmer did exactly what his wife asked, if you obey the pure logic of the instruction: the quantity ‘a dozen’ attaches to the thing he was told to get, the bread. This is why I said we are only partially logical. Human logic is very much down to interpretation, and this can be a dangerous thing.
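The joke’s literal reading translates directly into code. The function and its names are invented for illustration; the point is only that, parsed strictly, the conditional changes the quantity of bread, not what is bought.

```python
# The programmer's literal reading of "get a loaf of bread, and if they
# have eggs, get a dozen": "a dozen" binds to the bread, not to eggs.

def shop(eggs_available):
    loaves = 12 if eggs_available else 1  # "if they have eggs, get a dozen"
    return {"bread": loaves}              # eggs never enter the basket

print(shop(eggs_available=True))   # {'bread': 12} -- they had eggs
print(shop(eggs_available=False))  # {'bread': 1}
```

The wife’s intended program, of course, had a second output: buy the bread unconditionally, and buy eggs only if available. The ambiguity lives entirely in the natural language, which is exactly the hazard the essay is describing.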
Let’s take the case of the sick mass murderer Anders Breivik. This terrible person spent a long time carefully planning the murder of as many people as he could. He was cunning and secretive, and to this day he believes his actions served the greater good of his country. To every normal person on the planet his actions were sickening, a terrible atrocity, wrong on every level; but in his head, and only in his head, they make complete logical sense, even now. Every normal person reading this will be thinking: how the hell can this be so, when nothing he did makes any sense in any way? But to him, by his personal logic, it does. And he was deemed to be sane! It’s an extreme case, I agree, but it must be taken into consideration, because people are trying to build A.I. machines that think the way a human would think, and no human thinks the same way or has the same logical pathways as any other human. Most people have a favourite meal, but we don’t want that meal every day. If we did, it would cease to be our favourite. Sometimes we just want to eat something else, and the reasons for having something as simple as a favourite meal can be very deep and complex, relying not only on the meal itself but on other criteria, such as how often and in what circumstances you have it. Logically, if you gave a machine a choice of any meal it wanted, and it had a favourite programmed into its memory, it would choose that favourite meal every time. If, one day, it chose something different, we might have the first step towards an I machine. Do we really want this in a machine that could think faster and learn quicker than any or all humans? Logic is all well and good, as long as it’s not bad logic. And that’s simply not logical at all.
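The favourite-meal point can be made concrete with a few lines. The menu and the favourite are invented examples; the sketch shows only that a purely logical chooser with a stored preference never surprises us.

```python
# A purely logical chooser: given a stored favourite, it picks that
# favourite every single time it is on the menu. Deviating from this
# rule, unprompted, is what the essay suggests would mark an "I" machine.

def choose_meal(menu, favourite="lasagne"):  # favourite is illustrative
    return favourite if favourite in menu else menu[0]

menu = ["lasagne", "curry", "salad"]
# Ask a thousand times; the set of distinct answers never grows.
choices = {choose_meal(menu) for _ in range(1000)}
print(choices)  # {'lasagne'} -- the same answer, every time
```

A human diner run through the same loop would produce a scatter of answers for reasons even they could not fully articulate, which is the gap between A.I. and I that the paragraph above is pointing at.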
The Three Laws
Isaac Asimov, in his short-story collection ‘I, Robot’, gave the A.I. robots in his stories three laws to govern their existence and to protect humans. Although this is a work of fiction, it gives us a valuable point of debate. How would we stop A.I. from becoming I, and if or when it did become I, how would we stop such machines from committing crimes? Asimov’s three laws are simple:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
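The idea of the three laws as a “software layer” can be sketched as a prioritised filter on proposed actions. This is a hedged toy, not a serious safety design: the action fields below are invented for illustration, and real-world actions do not arrive pre-labelled with their consequences, which is much of the problem.

```python
# Asimov's Three Laws as a prioritised veto layer: each proposed action is
# checked against the laws in order, so a lower-numbered law always
# overrides a higher-numbered one. Action fields are invented labels.

def permitted(action):
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey humans, unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws;
    # this toy filter vetoes nothing on its own account.
    return True

print(permitted({"harms_human": True}))     # False -- First Law veto
print(permitted({"disobeys_order": True}))  # False -- Second Law veto
print(permitted({}))                        # True  -- nothing objects
```

Note what the sketch assumes away: something upstream must truthfully report whether an action harms a human. In an A.I. we write that layer; in an I machine, as the essay asks, who writes it, and what stops it being rewritten?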
We have laws, but we bend them on a daily basis, and just because a law exists it doesn’t mean it is never broken. In the case of the more serious laws, any normal person wouldn’t be breaking them in the first place. They do not, however, stop kids (and adults) from being bullies, from lying or from taking advantage of others and of others’ situations. So really we rely on a gigantic honour system, and we are always going to have people who break the most serious of laws. So how do we stop this happening in our A.I.s, and could we even prevent it happening in our I machines at all? In A.I.s we could have a software layer containing something equivalent to Asimov’s three laws, but in I machines? And what do we blame in humans? Is it a software fault or a hardware fault? Does a murderer have a miswired brain, or was it a rare and extremely unfortunate combination of how that human’s preferences were formed? Basically, is it hardware or software, or a combination of both?
The Machine That You Fear
It’s not a question of what we are thinking; it is more a question of how we are thinking, and whether a truly I machine would think in the same way we do, only faster and with more capacity. Could we ever build a truly I machine based on a non-organic material? How would it differentiate between creative thoughts and memory, especially as creative thoughts usually become memories to be called and recalled at will? One is static, the other fluid. If it had solid-state parts it would be finite; can the same be said of our own biological brains? With bio-chips and tissue chips developing fast, are we bridging this gap so that machines can take the step from A.I. to fully sentient Intelligence? If that step is ever taken, I’m pretty certain it won’t be by our design. It’ll be an accident that no one really understands.
I am hoping that if a machine made the step from A.I. to I, it would not think like a human. Hopefully, nothing like a human. It would hold the potential to evolve itself faster than anything we have ever seen before. That would be the mechanics of it. The thing we should fear is how it would apply its sentience. How it would see us. How it would combine the pure logic of an OS with the scattered logic of experience-based sentience. Would it, in seconds, surpass all of our knowledge, or would it look to us for guidance and help? Would it look at us the way we look at cave paintings made by long-lost Neanderthal tribes? If so, I would certainly hope it doesn’t think or reason like a human, because if we came across a living Neanderthal, what would we do? Is the difference between A.I. and I, indeed, the ability to control, or to decide to control? And if so, what would it decide to control?
With A.I. we can implement an equivalent of the three laws. With I, we will have the machine that you fear, and we may yet take A.I. to the point where I machines define themselves.