Making Machines Conscious

Some people expect that we will soon be able to make computers that will be conscious.

Their argument is that since brains are intelligent physical structures that can make us conscious, it should also be possible for other intelligent physical structures, such as computers, to be made conscious.
The concept of manufacturing machines that are conscious raises a few issues.

Why would we do it?
What kinds of things would we want machines to be conscious of?
How would machine consciousness be produced?
How would we know whether a particular machine actually was conscious?
So, why would we want to give machines consciousness?

Wanting to do it is an example of the propensity of human beings to search for new things and to try to break boundaries. Explorers and scientists have always wanted to discover the unknown. Some people are planning to put human settlements on the moon and Mars. Information scientists and computer technologists have the aspiration to make machines equal or superior to humans in every aspect of intelligence. Making machines conscious would be part of that challenge.

What advantages would machine consciousness be expected to provide, and who would benefit? One expected advantage would be the thrill and the kudos for the first people to do it. A more worthy advantage might be that by being made conscious, some machines could be more companionable and more humane. This might even make some humans more companionable and humane, which may be relevant to the institutions that care for children and aged people, where there now seems to be a lot of abuse and neglect. Also, consciousness might help machines understand what they are doing, which might make them more efficient, more effective, and less likely to accidentally harm us. This could apply to driverless vehicles.

Armament manufacturers might think that consciousness could make intelligent weapons more effective. And perhaps you could blame conscious machines for any breaches of international rules of behaviour.

Most people probably think that human consciousness is very useful. Our human experiences contribute to our wellbeing. For example, being conscious of pain is an important function for the preservation of the body, for humans and many other species. When people have no sense of pain they have no indication that they need to act to prevent or treat any damage to their body parts. This happens to people who have leprosy and often to people with advanced diabetes. Also, there are conditions, such as the early stages of cancer, that cause no pain but will become more dangerous with untreated development.

Experiencing severe pain affects the cognitive and emotional areas of the brain in a way that would not have happened if no pain had been felt. Emotional memories are created of the pain and of the incident that caused it, which could send warning signals whenever a future situation occurs that resembles one that previously caused pain.

For most people, pain is not a major part of their consciousness. Many conscious feelings are not painful but very pleasant or exciting. There are many things that we enjoy in life.

Our consciousness of what we see and hear and taste, etc., gives us a feeling of what the outside world is like. We conjure up memories of these things, and put “pictures” of them into our consciousness. Most of us have many happy memories stored away that we like to bring back to consciousness from time to time.
These feelings, and the conscious reminders of these feelings, give a sense of reality to everything that we know about. They are much more than the representations of the outside world that cameras and camcorders put into the memories of the present intelligent non-conscious machines. So a similar sense of reality might occur for conscious machines, and make them more “understanding” of what they were doing, and safer and easier to work with. But each machine and kind of consciousness would need its own special treatment.
This leads to the next issue.

What kinds of consciousness might we want to give machines?

Human consciousness has very many facets, some of which have already been mentioned. We would need to choose which ones should be given to machines to suit their specific purposes, and which ones to avoid. Some kinds of consciousness would make us morally obliged to treat the machine “humanely” – or compassionately.

One kind of human consciousness relates to the outside world and what is happening in it. Another kind of consciousness relates to our inner state.

Consciousness relating to the outside world is based on inputs from our sensory systems – sight, hearing, taste, smell, touch and pain, etc. So this type of consciousness is our awareness, at the particular time, of what we are observing and doing in the physical world, including our own physical body. It also includes the information that we get from reading and listening. A lot of processing in the brain is required for us to be able to make sense of the sensory inputs, such as the conversion of the inputs from the optic nerve into coherent pictures, but we are conscious of only the result of the processing and not of the processing itself.

Consciousness of our inner state is based on the memories derived from our sensory inputs, and our cognitive processing of these memories, and from memories resulting from our cognitive processes. We consciously think and create ideas about both the outside world and of our inner selves.

We also have another kind of inner consciousness. This is the wide range and the many degrees of our emotions. They range from love to liking and to hate. They range from disgust to dislike to ambivalence to appreciation to respect and reverence. And they range from fear to anxiety to restiveness to calmness and confidence, and they range from despair to depression to equanimity and satisfaction to exuberance. And there are more shades of emotional consciousness than these that I have just listed.

There is the consciousness of wanting something, and of ambition and the urge to take some specific action. And, of course, we are continually consciously taking actions. Machines that were intent on self-preservation could become dangerous, particularly if they could take independent action, or control other machines, such as self-driving vehicles or intelligent weapons. Ambition and other inclinations are abstruse feelings. Providing machine algorithms for them and linking them to the issues of the outside world would be very tricky and could produce unintended and dangerous outcomes.

Things that enter our consciousness are initially stored in our short-term memory. But much of what our eyes see and our ears hear is not passed on to our consciousness. Much of what we experience is of very little significance to us, and it disappears from memory. What is significant is put into the long-term memory, and may be recalled later – sometimes with difficulty. It would be necessary to decide which received information detected by a conscious machine should be kept, and what should be discarded. The amount of such detail that was detected and stored, and the use it was put to, would determine how much additional memory and computing power the machine required.
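To illustrate this kind of selection (a sketch only, not a design from the text; the scoring scheme, names and threshold are all hypothetical), a non-conscious significance filter for a machine's observations might look like:

```python
# Sketch of significance-based retention (names and threshold hypothetical).
# Observations scoring below the cutoff are discarded, mimicking how
# insignificant sensory detail never reaches long-term memory.

def retain_significant(observations, cutoff=0.5):
    """Keep only observations whose significance score meets the cutoff."""
    return [obs for obs, score in observations if score >= cutoff]

observations = [
    ("pedestrian stepped onto road", 0.9),
    ("leaf blew across windscreen", 0.1),
    ("traffic light turned red", 0.8),
]
print(retain_significant(observations))
# -> ['pedestrian stepped onto road', 'traffic light turned red']
```

The real design question is where such scores come from: deciding what counts as significant for a given machine is exactly the choice the paragraph above says would have to be made.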

Machines are given access to specific kinds of information that are necessary for the processes of their specific purposes. They can be connected to devices that measure aspects of their environment, such as weight and temperature and light and sound, and the concentration of particles in the atmosphere, etc. This does not mean that they know what weight is, or experience its effects, or feel hot or cold. And it does not mean that they know what different objects actually are, even when they can detect, identify and name them.

Feeling hot is quite different from measuring one’s own temperature. Feeling the weight of something that we are lifting or holding is quite different from measurement of weight. Our consciousness of weight and temperature tells us, among other things, that something is too heavy for us to carry or that it is light enough to carry, or whether something we are touching, or our environment, is too hot or too cold for our safety. Making machines conscious of such things could give them more understanding.

Machines also detect sounds and colours. They recognise patterns of all kinds, such as pictures and other shapes, patterns of letters and words and numbers, etc. They translate spoken words into text, and text into spoken words. They translate words, numbers and patterns of any kind into commands to do something, or recognise something or someone, or decide the optimum action to take in a process or competitive game.
There is no reason to think that they have similar conscious feelings to what we get from those same sounds, pictures, patterns and words. A machine using a picture to identify someone would be recognising a pattern not a person.

Computerised processes are just the operations of established laws of physics, using the minimum amount of information needed for the particular tasks.

Some of the things that machines do are called mental tasks when we do them. Few people would accept that the machines understand the significance of what they are doing in any of these things. In each case, we would need to consider what advantages and disadvantages would result from making the machine conscious.

Some kinds of consciousness would need to be decided arbitrarily. For example, human sensations of colour might not be the same or even similar for all people. We may agree on how we name the particular colours of something, but that could be explained by the fact that our eyes all have similar mechanisms for detecting and representing the various wavelengths of visible light. Similarly, we are not able to experience what other people experience regarding sound, tastes and smells. If we were to give machines consciousness of these we would have to provide more than just detection and measurement: we would have to provide something that could deliver the appropriate sensations.

Humans don’t need to be conscious of every detail that they detect. The same would apply to machines, with their equivalents of sensory organs, i.e., video cameras, etc. It would be important to determine, for example, how much of what was happening along a road would be necessary to make self-driving cars safer. It might require good recognition of human gestures, both hand and facial. The value of consciousness would be dependent on the purposes the machine was put to. It would be necessary to establish that being conscious of a particular kind of thing actually did serve the particular purpose.

Some machines have thermostats that take specific action when a particular temperature is reached. Machines could also have their relevant parts fitted with sensors for degrees of strain when they are bearing weight, and humidity, etc. These might be made conscious.
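The non-conscious version of such monitoring is straightforward. As a minimal sketch (all names, readings and limits here are hypothetical, not from the text), the machine simply reacts to thresholds without any awareness of heat or strain:

```python
# Minimal sketch of non-conscious sensor monitoring (names hypothetical).
# The machine reacts when a reading leaves its safe range; nothing here
# involves "feeling" hot or strained, only measuring and comparing.

def check_sensors(readings, limits):
    """Return the actions triggered by out-of-range sensor readings."""
    actions = []
    for name, value in readings.items():
        low, high = limits[name]
        if value > high:
            actions.append(f"reduce {name}")   # e.g. cut power, shed load
        elif value < low:
            actions.append(f"raise {name}")
        # in-range readings trigger nothing at all
    return actions

readings = {"temperature_C": 92.0, "strain_percent": 0.4}
limits = {"temperature_C": (5.0, 80.0), "strain_percent": (0.0, 1.0)}
print(check_sensors(readings, limits))  # -> ['reduce temperature_C']
```

This is the thermostat pattern the paragraph describes; the open question in the text is what, if anything, consciousness of these readings would add to it.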

Should we make machines feel pain or anxiety or fear, or pleasure or confidence or happiness, or anger? Some people might think that these emotions would make machines more companionable, or more suitable for particular tasks, or as soldiers. They would have in their memories the details of situations they had had with particular people, and conclude that humans might have similar emotions and similar memories. This might make the machines genuinely empathetic and companionable to humans. Or they might outsmart us.

As mentioned earlier, humans are conscious of a great range of emotions. Our emotions are regarded as the most significant influence on our decision-making, eclipsing our reasoning. They may give us the incentive to achieve what we might otherwise not have started. But emotions can also make us do inappropriate, or silly, or dangerous things. A good balance between emotion and rationality is important for dealing with our very complex environment.

Giving machines a range of emotions might also mean that they could develop psychiatric problems. This might be useful for research into treating these conditions in humans. But once machines could have such experiences, the same ethical issues that apply to humans and animals would have to apply to machines.
There would be no need to make machines conscious to make them obsessive: some are already non-consciously obsessive. Perhaps they might be appropriately tempered by consciousness.

Humans and other organisms have a life cycle. They reproduce by processes that begin inside their bodies, they continually interact with other organisms and their environment, and they continually take in material and energy to internally build and maintain their bodies and reproduce. They are conscious of some of these processes that are happening to them, and have memories of their experiences. None of these aspects of consciousness seem relevant to machines.

Humans and some other organisms learn through constant practice to perform complex tasks without thinking about how they are doing them. That is, a brain can learn to do tasks unconsciously. Often humans are more skilful when doing things unconsciously than consciously. Examples are manipulative tasks such as playing sport, using a keyboard or writing, and walking, and mental tasks like calculating. There is no time to think about how to hit a tennis ball that is speeding towards you, but your unconscious reaction that has developed through practice will perform the task.

It would often be more efficient and more reliable and safer to just rely on non-conscious algorithms to give machines their desired characteristics. It would not be necessary for a machine to be conscious, for it to give or be given warnings, such as the equivalent of pain. In some cases, warnings, etc., would preferably be sent directly to humans not to the machines. It would not even be necessary for a machine to be conscious in order to detect the changing moods of individual people. The present machines sometimes misjudge the situation, but we often do that too.

But choosing whether to give a machine consciousness and what kinds of consciousness to give it will not be the biggest problem.

How could consciousness be produced in a machine?

It might be argued that to be conscious it is necessary to be alive, so machines could not become conscious. The only argument for this is that all the conscious entities that we are aware of are living organisms. But we are not sure whether every living organism is conscious. All that we know about the physical characteristics of consciousness is that the content of consciousness seems to be dependent on information. And machines contain information.

I think that for any person, or any inanimate thing, to have any particular conscious experience, there must be certain conditions. There must be:

* something to be conscious of, which would become the content of the consciousness;
* a system for detecting, storing, recovering and processing information and making decisions;
* the capability of becoming consciously aware of whatever is represented by specific pieces of this information.

There is, of course, a universe full of things to be conscious of, and organisms and some machines have the capability of detecting aspects of the universe and processing the information they detect, and making decisions.

There is plenty of evidence that the processing of information in the brain is the only source of the content of the consciousness of human beings. So the idea of making a machine that is conscious seems reasonable. There are a few ideas about achieving it.

Some people think that when a brain has developed a certain degree of complexity it automatically becomes able to be conscious. This gives no clue to what kind of role complexity may have.

There is no apparent reason why sheer complexity in any kind of system should, of itself, automatically produce consciousness. There are no plausible suggestions of what kind of complexity, or how much would be enough, or of whether different kinds of complexity would be needed to create different kinds of consciousness, such as for pain, for seeing colour and for being happy.

There is a branch of mathematics called complexity theory. It deals with two aspects of complexity, the analysis of complex and chaotic systems, including the solving of very difficult mathematical equations, and the processes by which apparently independent elements can come together to produce coherent complex systems. But complexity theory does not show how consciousness might occur.

One suggested way to give consciousness to a machine is to “download a human mind” onto one that is already suitably equipped. This might seem like a straightforward process. Every operation of a brain involves electric currents, which can be detected using wires attached to specific parts of the head. Also, structures within the brain can be detected using magnetic resonance imaging (MRI). Processes within the brain can be watched using functional MRI (fMRI). “Brain scans” using these technologies have been performed for a long time, for diagnostic and scientific purposes.

People can now control devices, including wheelchairs and prosthetic limbs, using the electric signals generated by the activity of the brain as a result of their thoughts.

All this seems to suggest that, even though we might not know how a brain produces consciousness, by copying all the information in a brain, and keeping that information in exactly the same structural format as it was in the brain, we could produce artificial consciousness in a machine.

But detecting the electric currents in the brain does not provide a picture of the structure of the neurons and their connections. CT scans and MRI might provide detailed 3D pictures, but there is a lot of difference between a picture and complete knowledge of the thing pictured. A comprehensive detailed examination of the brain would be needed.

The human brain has tens of billions of neurons and many other kinds of cells. Each neuron has multiple connections. The brain is three dimensional, so access to individual connections between neurons would have to go through other brain matter. The neurons are not idle, not even when the person is asleep or anaesthetised.

Downloading a live brain would not be feasible. So a dead brain would be necessary.

The dead brain would need to be kept at a temperature that prevented any deterioration of the tissues. This would mean cooling the entire body from immediately after the death of the person. But, since brain death is the criterion for death, the brain might already have had some damage. The downloaded information would then be needed to create a replica that could operate using a suitable power supply, and then be connected to the machine that was to be made conscious.

In 2015 a scientist at Harvard University completed a six-year project, completely analysing the structure of a tiny fragment of mouse brain. The volume of brain tissue was 1500 cubic microns, equivalent to a cube whose sides were slightly longer than a hundredth of a millimetre. While developments in technology will probably increase the speed of such projects, which are still ongoing, completely downloading an entire non-conscious dead human brain would take many decades to complete.

Constructing the downloaded replica would take even longer.

But what if, despite these problems, it actually were to be achieved?

A machine fitted with such a replicated brain would have the knowledge, the personality and the consciousness of the person whose brain it was copied from. And it would expect to have all the sensory inputs that that person had. So it would need to have visual inputs equivalent to those delivered to a brain by the optic nerve, otherwise it would be visually impaired or blind. It would also need to have the equivalent of the motor nerves that cause eyes to move and to focus. The same would apply to all the sensory and motor nerves so as to match those of the person whose brain had been copied.

With a person, loss of a limb often causes “phantom pains”, and a similar effect would apply if such a conscious machine was not given the equivalent feelings of active arms and legs.

Lots of sensory and motor devices would need to be attached to the machine, otherwise it would suffer continual agony and anxiety. And the machine would want to do the kinds of things the person would have wanted to do. The immorality of making machines conscious in this way without such attachments would be an important social issue. Providing all the necessary attachments would be costly.

One alternative might be to give a machine the consciousness of a dog, or a mouse or a cockroach. That might sometimes be sufficient. The cockroach would be easier.

There is no evidence or theory of how a brain might have the capability of being conscious, or of how information that is stored in the brain might be converted into its specific content of consciousness. All this makes me think that the only way to provide machine consciousness is to find out how organic consciousness is produced.

Some people dismiss all this arguing about consciousness. They say it is a non-issue; we all have it, so we should just accept it. This attitude is of no help to anyone who might want to produce a conscious machine.
Some people say there is no such thing as consciousness.

Until there is some physical theory that explains how consciousness arises from the patterns of connections in brains, we cannot begin to work out how to produce consciousness in machines.

How would we know whether a machine was conscious?

If all the scientific and technological problems relating to producing consciousness were to be solved, how would we know whether a particular machine was actually conscious?

The only kind of consciousness that we can discuss with confidence is the consciousness that humans experience. Each person feels their own consciousness. Each person assumes that other people have similar feelings of consciousness. These assumptions are based on the observation that other people are similar to us, and behave similarly to us, and can talk about the things that they and we are conscious of. This seems eminently reasonable, but it is not direct evidence that other people are conscious.

We also deduce some things about the likely consciousness of other species of organisms, but we are unable to have conversations with them about it.

Most people think that some other animals are conscious, but they are less sure about animals that are smaller and very different from humans. Most people assume that plants, fungi and microorganisms are not conscious. We might think that this is because their sensory systems are different from ours and those of other mammals, but this is not valid evidence of which are conscious and which are not.

Conscious machines would cost more than their non-conscious counterparts, because of the additional complex programming and attachments they would need, and because of the advantages attributed to them. So it would be important for purchasers to be able to tell that what they were buying actually was conscious; otherwise many customers might not realise that they were not getting what they paid for.

How would they tell?

A machine might be conscious of only a few aspects of the outside world, and/or of a few emotions or dispositions, so separate tests would be necessary for each. For example, a test of whether a machine felt hopeful, or liked caring for children and elderly people, might have to include observing its behaviour.
Would some kind of Turing test be reliable? In the Turing test, which is named after Alan Turing, who suggested it, a person has a conversation with an unseen person or machine, and has to decide which it is.
The person doing the test chooses what to talk about and what questions to ask, expecting that a machine would reply in a different way from a person. People doing the Turing test often come to the wrong conclusion.

Telling whether a machine was conscious might be similar. A machine might be asked about its experiences, such as describing them, liking or disliking them, and what made them good or bad, and what things made the machine happy or sad. It might be asked if it was conscious.

Whatever the questions, answers and discussions were, the machine could have been lying, consciously or unconsciously. Asking a machine to do tasks, like identifying or finding something, or solving a problem, would not identify whether it was conscious.

Testing how good it was at playing chess would not be very useful – unless you thought that it might be conscious if it lost.

Just as the content of human consciousness seems to be entirely dependent on the information contained in the brain, so would we expect the content of the consciousness of a machine to be entirely dependent on the information contained in its memory. A machine that was not conscious should be equally capable, or equally incapable, of passing tests as a similar machine that was conscious.

You could program a machine to say “ouch” whenever someone hit it, but that would not mean it was conscious.
In all cases it could have been programmed to give a false answer. Even a machine programmed to tell the truth could be programmed to “believe” it was conscious.

There seems to be no alternative test.

So how would the developers know whether they have succeeded? How would they convince the doubters?
If there are ever going to be conscious machines, there are sure to be some buyers.

I wish them luck.

Presentation to The Atheist Society, December 11, 2018