Can Computers Have Emotions?
The Debate Panel - Position Statements
Andy Clark's Position Statement - a view from Philosophy
I think it is useful to keep a concrete case in mind, so let's start with a (hopefully uncontroversial) illustration.
One day, you wake up and find a letter on your doormat. It
contains the results of your final exams. As soon as you see the letter
you feel a rush that is a mixture of excitement and anxiety. Your heart
races, you sweat, your skin grows sticky. You carefully pick up the
letter, rush into your room, close the door behind you and open the envelope. You got top
grades. Your facial muscles relax into a broad grin, you feel an urge
to run out and tell the world. You find it hard, all morning, to
concentrate on other things as the thought of your success keeps coming
back to you.
That’s a whole bunch of stuff going on, and a very mixed bag
of stuff at that. In particular, there is a bunch of stuff that seems
very BODILY: the rush of blood to your face, the racing heart, the
sticky skin/ galvanic skin response. And there is a bunch of stuff that
seems very COGNITIVE: the emotion seems to be about your success, it
seems to involve (does it?) thoughts of that success. Certainly, it
affects the way you process information, making it harder to
concentrate on other stuff. And it affects the way you lay down memories
(you may in future vividly recall the moment of opening the envelope).
Many people, when first asked, suggest that a computer or a
robot could only have the cognitive aspects of emotions. Perhaps a
robot can in some sense have the thought of success. It may be able to
read the symbol grades in the letter, and it may know that these
symbols denote an excellent pass. Perhaps it can even be built (though
it's not clear why we should build it this way!) so that thoughts of
excellent passes then override other things and make it seem
‘distracted’ for a while, until the memory automatically degrades.
But to really feel the emotion, a critic might say, you need
to have the bodily feelings too. Yet a robot can have a body, and can
be built to sense states of that body. So why think we have to leave
the bodily aspects out?
Before thinking about that, let’s bring one more idea into
play. Some accounts of emotion give the bodily stuff pride of place.
An emotion, according to these accounts, just IS the
perception of changes in our own bodily states. Thus William James,
back in 1884, suggested that commonsense gets things backwards. You
might think that it is your feeling anxious that causes your heart to
race as you open the envelope. But maybe, the anxiety just IS your
perception of your heart racing, your body sweating, etc. If you
subtract all that away, James argues, you would be left with nothing
that is worth counting as the emotion.
If James is right then, the merely cognitive version is actually not even a piece of the emotion: it is no emotion at all.
Nowadays, the main exponent of something like James' theory is
the cognitive neuroscientist Antonio Damasio. His theory is more
sophisticated, and identifies the emotion with a kind of neural image
of the bodily state. That allows him to deal with cases where the
neural image of the bodily state is activated even though the heart is
NOT racing, etc. But emotions, for Damasio, are still all about those
bodily changes. When we feel anxious and our heart does not race (maybe
due to some drug), that's because the neural body-image is still the one
typically tied to the racing heart.
Damasio also believes we can have unconscious emotions. That
sounds like an odd idea, but it's one I am willing to defend. Here's a
quick example, that also shows what role Damasio thinks the emotional
bodily monitoring plays in human problem-solving.
People were given four decks of cards and a gambling scenario.
They were told to make as much money as they could. The game allowed them
to turn over a card from any deck. Each time, they got a reward or a
penalty, and no other feedback. The four decks each had different payoff
matrices.
Decks A and B pay out big but contain high-penalty cards, and lots of them.
Decks C and D pay out less but carry low penalties, and fewer of them. Statistically, you will
usually lose playing decks A and B and win playing decks C and D. But only
slowly, by trial and error, do the payoff matrices become visible (it's
rather like having four one-armed bandits, with very different payout
patterns, that you need to explore to decide where best to put your
coins).
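The deck structure can be sketched as a tiny simulation. The payoff numbers and penalty probabilities below are purely illustrative (the original experiment's exact values are not given here); the point is that a single draw is a poor guide even though the long-run averages differ sharply.

```python
import random

# Illustrative payoff structure: decks A and B pay big but lose on
# average; decks C and D pay less but win on average.
DECKS = {
    "A": {"reward": 100, "penalty": -1250, "penalty_prob": 0.1},
    "B": {"reward": 100, "penalty": -1250, "penalty_prob": 0.1},
    "C": {"reward": 50, "penalty": -250, "penalty_prob": 0.1},
    "D": {"reward": 50, "penalty": -250, "penalty_prob": 0.1},
}

def draw(deck, rng):
    """Turn over one card: always a reward, sometimes a penalty too."""
    d = DECKS[deck]
    payoff = d["reward"]
    if rng.random() < d["penalty_prob"]:
        payoff += d["penalty"]
    return payoff

def expected_value(deck):
    """Long-run average payoff per card, invisible to a casual player."""
    d = DECKS[deck]
    return d["reward"] + d["penalty_prob"] * d["penalty"]

rng = random.Random(0)
for name in DECKS:
    print(name, expected_value(name))
# With these numbers, A and B average -25 per card and C and D +25,
# yet most single draws from A and B look attractive (+100).
```

With this structure, only repeated sampling reveals which decks are bad, which is exactly the slow trial-and-error learning described above.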
Players were fitted with galvanic skin response (GSR) sensors. At
first (trials 1-10) there was no elevation of GSR (no sweat, basically). But soon
(trials 10-20) there were peaks of GSR as players reached for the bad decks. At this
stage subjects still said they had no idea which decks were good and
which bad. By 20 trials, many said they got a funny feeling when
reaching for the bad decks, but didn't know why. By 50 trials, they could
say which decks were good and which bad, and even explain why. But their
performance had improved long before this, courtesy of their
unconscious knowledge of which decks were bad. This was reflected in
their GSR before they could even report the 'odd feeling' at trial 20.
One way to describe all this is that early on their behavior
is aided by an unfelt body-based emotion of unease. Later, the
body-based emotion becomes consciously visible and hence even more
widely dominant. On this story, ancient neural circuitry able to learn
about the statistics of the decks is creating or informing aversive
bodily responses long before the more evolutionarily recent centers of
conscious awareness come into play.
This provides a nice demo of the idea that felt emotions are
perceptions of bodily changes that themselves are a kind of quick and
dirty indicator of important information about the world.
This story is told in satisfying detail by Jesse Prinz in his book Gut Reactions. Here is how he puts it:
“Just as concepts of dogs track dogs by [an inessential
but handy indicator like] furriness, fears track dangers via heart
palpitations. Emotions are embodied.” Prinz (2004) p.68
Prinz thus reconciles the bodily story with a cognitive one.
He depicts emotions as perceptions of the body, but ones whose function
is to use the bodily reactions as a quick and dirty means of tracking
states of the world (e.g. dangers and opportunities). The bodily
responses are a kind of fast and frugal rule of thumb: if body feels
like this, danger present. Much simpler beings probably only needed the
bodily responses themselves, preparing them immediately for fight or
flight. Our relations to the world are more complex, so we have learnt
to perceive the bodily responses and to exploit the hard-won knowledge
they contain, as when the gamblers begin to avoid the bad decks due to
a ‘funny feeling’ before knowing why.
Though I’ve told it in a great hurry here, I do think this is
a plausible story. But where does it leave the computer and emotions in
machines?
Arguably, it leaves us much better off. We can now say this.
To really give a machine emotions we need to build it so that it uses
perceptions of its own bodily reactions as a clue to properties of the
world. That’s what emotions are, so if we want a machine to have them,
that’s what we need to do. If nothing else, this is pretty concrete.
Of course, there is one remaining problem: the spectre of the philosophical zombie.
Suppose, someone will say, you build a robot device that
replicates, at least in miniature, all the layers of evolutionary
complexity we just discussed. How do you know it will really feel any
emotions? Sure, if you fix it up right it will exhibit the right
responses. In the gambling scenario, it will learn slowly and will not at
first be able to say why it is staying away from the bad decks. Later on,
it will be able to say why. And it may even say, half way through, that
the bad decks just make it feel funny. But why should we believe any
funny feeling is actually felt? Maybe the machine is still just a
zombie, built to mimic the ways we talk and respond but having no real
feelings at all.
I think we have to simply reject this possibility. Getting the
right behavioral profile for the right kinds of design-based reasons
just is giving a machine emotions.
I’d like to end, though, by noticing one apparent
implication of our story. The implication is that we feel our emotions
due to a kind of evolutionary accident. We are a tinkered product, in
which new functions are built cheaply on top of the old. That seems to
be why we are getting information about danger etc into the conscious
workspace by the indirect route of monitoring our own bodily responses,
responses which are themselves mediated by older neural circuitry.
Perhaps, then, a better designed system might skip this step, feeding
that information directly to the conscious workspace. But that would
subtract away the bodiliness that seems essential to the feeling of
emotion. A nice question to pursue might be whether there are
additional positive reasons to include a loop via bodily reaction
monitoring in any intelligent device, not just one in which old
circuitry must co-exist with new.
Aaron Sloman's Position Statement - a view from Computer Science
Until the middle of the 20th Century the major advances in
science were concerned with matter and energy.
Since then the science of information has been growing in importance
as we learn more and more about different kinds of natural and man-made
information-processing systems.
In particular, we have discovered that in addition to physical
machines like windmills, steam engines, electric motors, generators, and
pottery kilns which (mostly) manipulate matter and energy there are also
abstract machines that process information. These are called
virtual machines, like the operating system that controls your
computer or the spelling checker that finds and fixes mistakes in your
spelling.
More complex virtual machines, instead of doing just one thing, can
simultaneously do many things, including interpreting sensory input,
generating new goals, deciding between different options, formulating
new plans, carrying out previously-made decisions, evaluating what is
happening and reflecting on their own mental processes.
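As a rough illustration of this idea (the names and details below are hypothetical, not Sloman's actual architecture), a virtual machine that interleaves several sub-processes over shared state might look like:

```python
# A toy virtual machine: on each tick, every sub-process gets a turn
# at a shared state, so sensing, goal generation, deciding, acting and
# self-reflection are all interleaved.

def sense(state):
    state["percepts"].append("obstacle ahead")

def generate_goals(state):
    if "obstacle ahead" in state["percepts"] and "avoid obstacle" not in state["goals"]:
        state["goals"].append("avoid obstacle")

def decide(state):
    # formulate a plan once there is a goal and no plan yet
    if state["goals"] and state["plan"] is None:
        state["plan"] = ["slow down", "turn left"]

def act(state):
    # carry out previously-made decisions, one step per tick
    if state["plan"]:
        state["actions"].append(state["plan"].pop(0))

def reflect(state):
    # self-monitoring: the machine records facts about its own processing
    state["log"].append((len(state["percepts"]), len(state["actions"])))

state = {"percepts": [], "goals": [], "plan": None, "actions": [], "log": []}
for _ in range(3):
    for process in (sense, generate_goals, decide, act, reflect):
        process(state)

print(state["actions"])  # the plan is executed step by step over ticks
```

The point of the sketch is only that one physical machine can host many concurrent abstract processes whose interactions, not any single one of them, produce the overall behaviour.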
Some interactions between sub-processes in human minds produce emotions
like being startled, being amused, fear, excited anticipation, regret,
embarrassment, gratitude, jealousy, sympathy and grief. Each of these
exists as a result of complex processes and events in virtual machines
in humans. When we understand enough about the abstract machines that we
call 'human minds' we shall be in a better position to see whether
similar processes and interactions can occur in computer-based
information-processing machines, in robots, for example.
At this stage in our knowledge there is no reason, apart from wishful
thinking, to believe it either possible or impossible. If it is impossible a good
way to find out why is to try as hard as we can to make it happen, and
understand why it turns out to be impossible. That is also a
good way to find out how it is possible, if it is.
Perhaps we shall learn from such efforts that some things are relatively
difficult to replicate because they are closely related to physiological
mechanisms, e.g. feeling thirsty or itchy, and others relatively easy,
e.g. feeling sympathy, embarrassment, grief or pride, because they
depend only on non-physical goals, desires, preferences, attachments,
beliefs, and interactions between them, requiring little or no
replication of animal body states.
Whether we should make such machines is another question. Some
such machines may be very useful, for instance intelligent machines
helping us in disaster situations where difficult but urgent decisions
may have to be made rapidly in novel situations, using ethical
judgements and concern for others to drive creative decision making,
since previously stored rules and plans do not deal with such
situations. Emotions are not necessary in such cases, though motivation
and evaluation are.
There are people who fear that machines will turn nasty and try to take
over the world. However, I don't think machines can do anything more
nasty to us than humans already do to one another all round the world.
For more on this see this talk on wishful thinking in the study
of emotions and intelligence:
http://www.cs.bham.ac.uk/research/cogaff/talks/#cafe04
Do machines, natural or artificial, really need emotions?
Adam Zeman's Position Statement - a view from Neuroscience
Most people, I suspect, will doubt that
computers can have emotions, for a variety of reasons ranging from computers'
strikingly unemotional appearance to a hunch that only living creatures can
experience feelings like sadness and joy. But some further thought on the
matter inclines me to be more sympathetic to the idea that computers might be
moved by the emotions which excite you and me.
We need to decide what emotions are, what computers are,
and how similar to us a computer could be (given that we, I suppose, can have
emotions for sure).
Emotions are easily listed, less easily defined. They
include sadness, happiness, anger, fear, desire and revulsion. They are complex
states, normally involving our physiology, especially our nervous and hormonal
systems, our behavior, and our minds, in at least two ways: emotions typically
imply an evaluation or appraisal of a situation, and emotions have a ‘feel’
(perhaps we might be able to explain the ‘feel’ of anger or fear in terms of
the other aspects of emotion like their effects on our bodies, but I won’t
pursue this idea here). What do these states of mind and body have in common?
They all require that things can matter
to us – that is to say that there are some states of affairs which we prefer to
others, some which we value and others which we deplore; and they generally
presuppose that, given our preferences, there is something we can do to improve
our chances of getting what we want and avoiding what we don’t – running away
(in the case of fear), pushing away (in the case of disgust), overcoming (in
the case of anger) and so on. So, on
the face of it, we would not expect computers to have emotions unless there are
things that matter to them, and things they can do about what matters to them.
Might computers find themselves ‘minding’ and ‘doing’ in these senses? These
questions lead us naturally to ask:
What is a computer? In broad terms a computer is a device
which processes a set of inputs to
generate a set of outputs. These inputs and outputs can vary hugely, from
complex sums to the kinds of data required, say, to steer an aeroplane or a space
ship. We entrust many aspects of our lives to supervision by computers these
days, and they usually serve us well. But, you may want to point out, 'serve' is the operative word. We decide what matters and what we want
to do, and then program our computer slaves to do our bidding: we supply all the values and emotions in
the projects computers help us with. It is absurd to suppose that computers
themselves might have emotions. But before we get to feel completely
comfortable with that conclusion, let’s turn to the third question I posed at
the start: how similar to us might a computer become?
Our brains, interestingly enough, are often described as
systems which process input into outputs: think for example of the inputs which
informed you about the occurrence of a meeting on whether computers can have
emotions, which your brain has now somehow transformed into the output which
consists in your reading these words. This gives us a hint that maybe what goes
on in our brains could be modelled by a computer: perhaps our brain just is a
giant computer. This is controversial, but the idea is far from crazy. The neurons
which make up our brains gather in signals from the neurons which make contact
with them, ‘integrate’ this information and transmit signals, in turn, to the
cells which they contact. It is possible to build ‘silicon neurons’ which
reproduce this process of integration. In principle one could replace every
neuron in the brain, one at a time, with a silicon substitute. The resulting
silicon brain would transform input into output in just the same way as the
biological original. It is tempting to infer that this amazing device could
have emotions, if, as the thought experiment implies, it allows just the same
kinds of evaluation and behaviour as its predecessor. Even if you doubt this,
your brain must have some physical characteristics which enable you to experience
emotion: once again, in principle, these could be built into an artificial
replica which would then, also, have emotions.
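The 'integrate' step that a silicon neuron would have to reproduce can be sketched as a toy threshold unit (parameters are purely illustrative; real neurons, and serious silicon analogues of them, are far richer):

```python
# A toy model of neural integration: sum weighted inputs from upstream
# cells and fire (spike) if the total crosses a threshold.

def integrate_and_fire(inputs, weights, threshold=1.0):
    """Return True (a spike) if the weighted input sum reaches threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

# Three upstream neurons, two of them active: 0.6 + 0.5 = 1.1 >= 1.0
print(integrate_and_fire([1, 0, 1], [0.6, 0.9, 0.5]))  # True
```

Nothing in this input-summing step obviously depends on being made of carbon rather than silicon, which is what gives the replacement thought experiment its force.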
Let’s attack the problem from a different direction. We
came close to a conclusion that computers could not have emotions because we supply the ‘values’ which computers
help us to pursue. But couldn’t we build computers with ‘values’, computers
with aims and ambitions, with likes to pursue and dislikes to eschew? It seems
likely that we could: after all, we do
not invent ourselves from scratch and our own basic values have been built into
us by evolution. If we could also
provide our computer with the means to act on its values, we would have begun
to create what we have identified as the conditions for emotion.
You may want to respond that these arguments are all
very well, but you can see no reason to imagine that computers should experience feelings like you and me,
even if it is possible to build preferences into their systems and provide them
with the means to pursue them. But, then, why not? Perhaps because computers
lack a soul? When we deny the possibility that computers could have emotions we
may be exposing assumptions which we have inherited from our culture but do not
really, on reflection, wish to defend.
Jon Oberlander's Position Statement - a view from Cognitive Science
I approach this question from the point of view of human-computer
interaction (HCI) design: how do we make machines that are effective
and enjoyable to work with? Emotions of interest include pride,
anxiety or frustration. There are, of course, at least two related
questions: *can* we design computers to have emotions like these, and
if so, *should* we design them to have emotions? Like Aaron Sloman, I
don't think that basic emotions (gut reactions) are actually essential
to intelligent action. But I do believe that intelligent machines
should have emotions designed into them, for two main reasons. First,
they must be understandable to users. Displaying appropriate emotional
behaviours helps users understand and predict the system. But display
is not enough: if there is any mismatch between emotion display and
other behaviour, users will not trust the system. The best - but
perhaps not only - way to ensure there is no mismatch is to drive the
emotion display from the internal states of the computer
system. Secondly, intelligent machines must understand their
users. Although there are some cues to users' emotional and cognitive
states in their tone of voice or facial expression, it is not enough
for machines to reliably understand what is going on, and how to
react. So they must understand words, and the words must link into
internal emotional states within the machine. So, intelligent machines
need emotions if they are to be effective and enjoyable to work
with. Then the question is, can we build such machines? On the one
hand, getting this right will involve designing in appropriate
self-preservation and safety systems, and these can be seen in terms
of simple feelings. But on the other hand, to build a system that
responds appropriately to users' anxieties or frustrations (in Aaron's
terms, their secondary emotions), it will need to understand natural
language. And, though progress is being made on this, it's still an
unsolved problem. So the answer to the original question is - in an
interesting way, for once - "it depends what you mean".
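Oberlander's suggestion that emotion display should be driven by internal state, rather than bolted on, can be sketched as follows (the class, variables and thresholds are all hypothetical):

```python
# Sketch of an assistant whose displayed emotion is read off the same
# internal variables that drive its behaviour, so display and behaviour
# cannot fall out of step.

class Assistant:
    def __init__(self):
        self.task_progress = 0.0   # internal state, updated as it works
        self.error_count = 0

    def internal_emotion(self):
        # a crude appraisal over internal state (thresholds illustrative)
        if self.error_count >= 3:
            return "frustrated"
        if self.task_progress >= 1.0:
            return "satisfied"
        return "neutral"

    def display(self):
        # the display is derived, never set independently of the state
        faces = {"frustrated": ":-(", "satisfied": ":-)", "neutral": ":-|"}
        return faces[self.internal_emotion()]

a = Assistant()
a.error_count = 3
print(a.display())  # the face the user sees reflects the actual state
```

Because there is no separate "set the face" channel, the mismatch problem described above cannot arise by construction; whether such derived states deserve the name 'emotions' is, of course, the question the whole debate is about.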
With many thanks to contributing Universities
University of Edinburgh
University of Birmingham
University of the West of England
The debate took place
at 6.45 pm, 17 November 2004
Lecture Theatre F21
7 George Square