Can Computers Have Emotions?

Key Questions

What is an emotion?

Feelings about a situation, person, or object that involve changes in both physical and mental states.

Can we design computers to have emotions?

Not yet. People interacting with the Sony Aibo robotic dog are quick to think that these computers have emotions. But going from that to robots which really have emotions requires more work, though perhaps not as much as you might think. As Andy Clark argues, a relatively simple machine can be wired up so that it monitors and controls both things in the outside world and things inside itself. If the wiring is done right, then even a simple robot can be said to have states that are analogues of some human emotions. But it won't necessarily have a complex emotional life; to have that requires a lot more work on making machines interact with people in more natural ways.
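
To make Clark's point a little more concrete, here is a toy sketch in Python. The sensor readings, thresholds, and emotion labels are all invented for illustration; they are not taken from any real robot.

    # A machine that monitors both the outside world and its own internal
    # state, and maps what it finds onto simple analogues of emotion.
    # All names and thresholds here are hypothetical.

    def emotional_state(battery_level, obstacle_distance, task_progress):
        """Return a crude emotion-like label from internal and external readings."""
        if battery_level < 0.2:           # internal monitoring: energy running low
            return "anxiety-analogue"     # urgent drive to recharge
        if obstacle_distance < 0.5:       # external monitoring: imminent collision
            return "fear-analogue"        # avoidance takes priority over the task
        if task_progress >= 1.0:          # goal achieved
            return "satisfaction-analogue"
        return "neutral"

    print(emotional_state(battery_level=0.15, obstacle_distance=2.0, task_progress=0.4))
    # -> anxiety-analogue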

Should we design computers to have emotions?

Yes. Some people argue that if an animal or machine is to survive, it needs something like emotion to help it respond rapidly yet appropriately to serious changes in its environment. Even if that's not so, we will definitely need ways of making complex, flexible robots easy to interact with: they must be understandable to us, and vice versa. So it would be useful if they could show and understand emotions. And one view on this is that, since humans will sniff out insincere emotion immediately, the best way to fake emotions is to actually have them. If we want people to be able to trust complex robots, then the robots will need to have the analogues of emotions built into their control systems. (By the way, there will still be plenty of room in the world for simpler machines without emotions.)

Questions from Schools

1. How would the computer react to, or measure, our own emotions if it were to have emotions itself? (Submitted by Brian, Gracemount High School)

There’s quite a lot of work going on which looks at ways of getting computers to recognise human emotions from information picked up from skin galvanometry (measuring levels of sweating), tone of voice, facial expression, and so on. Rosalind Picard, at MIT, is one of the leaders in this line of work. My feeling is that this research is interesting, but doesn’t get at the objects of our emotions. That is, a computer might be able to tell that someone is happy, but it couldn’t (easily) tell what they are happy about. To get this kind of information, computers need to be able to tell what people are looking at when they feel an emotion, and what they mean when they talk about the world. Much of our conversation reveals how we feel about things, directly or indirectly. To tap into this, computers will need to be able to understand human communication better.
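
To give a flavour of what such a recogniser might look like, here is a deliberately crude sketch in Python. The features, thresholds, and labels are invented for illustration; real systems of the kind Picard's group builds use trained statistical models rather than hand-written rules.

    # Combine crude physiological and expressive cues into a guess about emotion.
    # Everything here is hypothetical and far simpler than real affective computing.

    def guess_emotion(skin_conductance, voice_pitch_hz, smile_score):
        aroused = skin_conductance > 5.0        # heavy sweating suggests arousal
        smiling = smile_score > 0.5             # from a face-analysis component
        excited_voice = voice_pitch_hz > 220.0  # raised pitch

        if aroused and smiling:
            return "happy/excited"
        if aroused and not smiling:
            return "stressed/anxious"
        if excited_voice and smiling:
            return "happy"
        return "calm/neutral"

    # Note: this tells us *that* someone seems happy, not *what* they are happy about.
    print(guess_emotion(skin_conductance=6.2, voice_pitch_hz=250.0, smile_score=0.8))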

2. Could the computer develop a personality if it had emotions? (Submitted by Sarah, Gracemount High School)

Yes, this seems very likely, for two reasons. First, most people experience emotions as they interact with the world and each other. But how strong, or long-lasting, those emotions are varies from person to person. And that’s a feature of personality. Saying how anxious someone is is a way of summing up how they tend to respond to events over time. So in one sense, if a computer has emotions at all, it will automatically project a personality. Secondly, one way we tell people apart is by talking about their personality differences (one person is more outgoing than another, and so on). If we are going to build emotional robots, it probably makes sense for them to be different from one another, rather than all having identical personalities. Making them distinctive will make it easier for people to use their knowledge of ordinary interaction (with people) when they interact with robots.
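
One way to picture this is with a toy sketch: two agents share the same emotion machinery but have different parameter settings (the numbers are made up), and they respond to the same events quite differently, which is exactly the sort of stable difference we describe as personality.

    # Personality as the long-run pattern of emotional response: same machinery,
    # different parameters. All values are hypothetical.

    class EmotionalAgent:
        def __init__(self, name, reactivity, recovery_rate):
            self.name = name
            self.reactivity = reactivity        # how strongly events move the agent
            self.recovery_rate = recovery_rate  # how quickly it returns to baseline
            self.anxiety = 0.0

        def experience(self, threat_level):
            self.anxiety += self.reactivity * threat_level
            self.anxiety *= (1.0 - self.recovery_rate)  # partial recovery each step
            return round(self.anxiety, 2)

    calm_bot = EmotionalAgent("calm", reactivity=0.3, recovery_rate=0.5)
    jumpy_bot = EmotionalAgent("jumpy", reactivity=0.9, recovery_rate=0.1)

    for threat in [1.0, 1.0, 0.0, 0.0]:
        print(calm_bot.experience(threat), jumpy_bot.experience(threat))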

The class believed that it would be good for computers to have emotions, but thought that it might prove too complicated (and unfeasible) for computers to display human-like emotions. They also believed that, as a machine cannot physically feel, it would be contradictory to suggest that machines could be capable of having emotions.

These are good points. It’s easy for us to read emotions off faces, and I’m working on a project where a talking head can show when the underlying system is confused, or thinking, or agreeing or disagreeing. In general, it’s probably only necessary to display emotions that will help people understand the computer/robot better (so they can interact with it more effectively). And many (probably most) robots will be significantly simpler than people in terms of their skills and possible behaviours. So showing a whole range of human-like emotion might lead people to think the machines are more emotionally sophisticated than they are—and that’s probably a mistake from an engineering point of view.

Now, can you have an emotion without having a feeling (or gut reaction)? Aaron Sloman thinks you can (as when you appreciate the beauty of a mathematical proof). I’m not so sure this doesn’t involve a feeling. But as the panel agreed, fixing a machine so that it can monitor its internal states gives it a kind of feeling. We don’t know what it’s like to feel the way the machine does. But in this respect, we are no worse off than we are with bats or cats. The key challenge is to get the robot’s emotional functionality right.

3. Do you think that new developments in AI will have a significant impact on the study of psychology? (Submitted by David Clark, Stewart Melville College)

They certainly have had, and will continue to do so. In the 1980s and 1990s, the biggest influence was probably the re-emergence of connectionist systems: lots of small, parallel processors communicating with each other, in the style of a neural network. These systems offer excellent models of various human cognitive systems (like vision, hearing, reading, face recognition), both when they work and when they break down. A key feature is that these systems can learn from experience. Currently, the idea of evolutionary, adaptive systems is having a similar impact within both AI and psychology. And the traffic is two-way: psychology influences AI every bit as much as AI influences psychology.
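
To give a feel for how a connectionist unit learns from experience, here is a minimal single-neuron example in Python (real cognitive models use many such units running in parallel). It learns the logical AND function simply by repeatedly adjusting its weights after each mistake; the numbers are illustrative.

    # A single artificial neuron learning the AND function from repeated experience.

    import random

    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = 0.0
    learning_rate = 0.1
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

    for _ in range(100):                 # repeated exposure to the same examples
        for (x1, x2), target in data:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
           for (x1, x2), _ in data])     # -> [0, 0, 0, 1] once learning has converged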

4. In Blade Runner, the central thrust of the film is that it is the possession of memories which makes humans human. Emotions emanate from these memories. If computers could be taught to learn from their memories, do you think they could begin to experience emotions? (Submitted by Louis Morgan, Stewart Melville College)

Most people would agree that it’s our collection of memories that makes us into distinct individuals. And learning from past experience is also essential to intelligence (both animal and artificial). It’s also true that many of our memories have emotional colouring. (For instance, there is wonderful research on the phenomenon of mood-dependent recall: it’s easier to recall some fact if you’re in the same mood as when you first learnt it.) In building a robot, we might want to use emotional tags to help guide memory and inference. But the link is not automatic: we already build connectionist systems that do learn from their memorised past experience—but we wouldn’t want to say that these systems have or feel emotions, in anything like the human sense. The main reason is that they just aren’t complex enough, in terms of structure or behaviour.
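
Here is a toy sketch of the mood-dependent recall idea mentioned above; the memories and their emotional tags are invented for illustration.

    # Memories carry an emotional tag; recall is biased towards memories whose
    # tag matches the current mood. All of the entries below are made up.

    memories = [
        {"fact": "won the school prize", "mood": "happy"},
        {"fact": "missed the bus in the rain", "mood": "sad"},
        {"fact": "learned to ride a bike", "mood": "happy"},
    ]

    def recall(current_mood):
        """Return memories, with mood-congruent ones first."""
        return sorted(memories, key=lambda m: m["mood"] != current_mood)

    for memory in recall("happy"):
        print(memory["fact"])
    # happy memories come back first when the agent is in a happy mood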

5. Will new programming techniques need to be introduced to build such machines? (Dougie Thoms, S5 student, George Heriot's School)

Suppose we choose to build emotional robots by building connectionist systems that learn from experience, and alter their processing modes on the basis of their emotional state (for instance, by narrowing their focus of attention when they are under stress). Then we’re already quite far away from ordinary programming, because we have to design a system that autonomously (re-)programs itself. When we program it, we probably don’t give it lists of facts about the world; instead we have to design into the machine drives to seek out new information, to protect itself from danger, and so on. And before we release these kinds of machines into the world, we want to convince ourselves (to a reasonable level of confidence) that they will work as intended. Techniques for this kind of programming emerge both from engineering experience and from the most theoretical end of computer science. For instance, theoreticians studying global computing want to find out how billions upon billions of processors will interact with each other, what kinds of states will emerge over time, and what kinds of states can't emerge.
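
As a concrete (and very simplified) illustration of what "altering processing modes on the basis of emotional state" might mean, here is a sketch in which stress narrows the agent's attention to its most urgent inputs. The stimuli and thresholds are hypothetical.

    # Under stress the agent narrows its attention to the most urgent inputs.

    def attend(stimuli, stress_level):
        """Return the stimuli the agent will process; fewer when stressed."""
        ranked = sorted(stimuli, key=lambda s: s["urgency"], reverse=True)
        if stress_level > 0.7:
            return ranked[:1]            # tunnel vision: only the most urgent thing
        if stress_level > 0.3:
            return ranked[:2]
        return ranked                    # relaxed: attend to everything

    stimuli = [
        {"name": "fire alarm", "urgency": 0.9},
        {"name": "incoming email", "urgency": 0.4},
        {"name": "background music", "urgency": 0.1},
    ]
    print([s["name"] for s in attend(stimuli, stress_level=0.8)])  # ['fire alarm']
    print([s["name"] for s in attend(stimuli, stress_level=0.1)])  # all three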

6. Does the irrational nature of emotion not conflict with the fundamental way in which programs and computers are designed? (Dougie Thoms, S5 student, George Heriot's School)

Western thought has often distinguished emotion from reason. Antonio Damasio (in his book Descartes’ Error) thinks the separation went too far, and that lots of reasoning involves emotion. In fact, philosophers like David Hume always looked carefully at the relation between reason and emotion. In terms of programming, what matters is whether you can work out the functional relationships between cognitive states (like beliefs about the world) and affective states (like strong, positive preferences for particular experiences). So long as there are regular functional relationships, everything is fine. Irrational doesn’t mean random!
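
To show what a regular functional relationship between cognitive and affective states might look like, here is a toy appraisal function in Python. The states and numbers are invented, but the point is that the mapping is lawful rather than random.

    # A lawful mapping from a belief (what the agent thinks happened) plus a
    # preference (how much it wanted that) onto a simple affective state.

    def appraise(believed_outcome, desired_outcome, strength_of_preference):
        if believed_outcome == desired_outcome:
            return ("pleased", strength_of_preference)
        return ("disappointed", strength_of_preference)

    print(appraise(believed_outcome="exam passed",
                   desired_outcome="exam passed",
                   strength_of_preference=0.9))   # ('pleased', 0.9)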

7. What is it like to be a conscious computer (after Thomas Nagel)? This topic seems to be a variation of the (intractable?) mind/body problem. Surely the subjective nature of consciousness means that we can never know if a computer is experiencing emotions, and can never have a concept of what it is like to be a computer. So are such discussions doomed to be fruitless exercises, or does the panel simply accept the Turing-esque definition "if it looks like a duck and quacks like a duck..."? After all, the only way I know the panel aren't all automata is by inference...! (Jeremy Scott, Head of Computing, George Heriot's School)

Most of the panel takes the quacks like a duck line. Given the great growth in brain imaging research (using techniques such as functional magnetic resonance imaging), and in techniques like transcranial magnetic stimulation, some people are moving towards the view that we don’t have to take people’s word for what is (or is not) conscious to them. But such techniques still don’t tell us what it’s like to be other people (or bats). So long as robots can be made to behave in complex ways similar to people, we have no reason to deny them emotional feelings, or conscious states, on the basis of their different construction materials.

8. Prior to Einstein, we did not understand the relationship between matter and energy. We still do not understand the relationship between mind and body. Does the panel believe that a similar breakthrough of understanding is close, distant or even possible? (Jeremy Scott, Head of Computing, George Heriot's School)

One view is that we’re already nearly there. The development of computers offers us a way of seeing correctly the relationship between mind and body. A body is a physical machine. Some physical machines can support a variety of information processing machines. (Aaron Sloman and others call these latter machines virtual machines.) Some of these information processing machines are what we call minds. Another way to see this is in the terms introduced by the late David Marr: we can look at a single system, and consider it at the computational (behavioural) level, the algorithmic (programming) level, and the implementational (hardware) level. Which levels we focus on depends on our current purposes. But to truly understand anything that has anything like a mind, we must consider each level of description. We can’t (and wouldn’t want to) do away with any of the levels.

AI and Cognitive Science give us the chance to investigate the general conditions required for something to be said to have emotions or a mind. Without AI, we have only one good example: people. And that’s not a lot of data, really. So, short of meeting extraterrestrials (or discovering that animals like dolphins or octopuses are even smarter than we currently think), the computational view of the mind is our best bet for a breakthrough.

Do you have any questions you would like to ask?

You may have very strong views on this subject already, and some questions of your own, but feel you could do with some more background knowledge before you take on an academic debate. Here are some useful resources:

  • Brief descriptions of sciences informing the debate can be found on this web site
  • Aaron Sloman has put together a web site entitled Artificial Intelligence for Schools
  • Dr Dylan Evans's article, Robots of the Future, raises many of the questions that are likely to be debated

With many thanks to contributing Universities
University of Edinburgh
University of Birmingham
University of the West of England

The debate took place
at 6.45 pm, 17 November 2004
Lecture Theatre F21
7 George Square

