Inf1-CG Memory 3: Recall, semantic memory, and forgetting
Alyssa Alcorn
Henry S. Thompson
15 October 2010
1. Retrieving information
Retrieval is the process of using cues (any modality) to
recover a target memory
- bringing the target memory or information
to conscious awareness
It may be necessary to follow an associative chain of
cues to the target memory
- more on association later
Be aware that "remembering" a memory is not like opening a file
stored on your hard drive
Three large and very important differences are:
- Memories, especially episodic (event) memories, are changed
by the process of retrieval
- Human memory is content-addressable, rather than storing items at a
  fixed address as a computer does (see the sketch after this list)
- Items stored in long-term memory tend to decay over time
(one form of forgetting)
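As a loose analogy only (not a model of human memory), the difference between address-based and content-addressable retrieval can be sketched in a few lines of Python. The stored "memories" below are invented examples:

```python
# Loose analogy only, not a model of human memory: address-based lookup
# retrieves whatever sits at a fixed position, while content-addressable
# lookup retrieves items whose content matches a cue.
memories = [
    "picnic by the river last summer",
    "losing my keys at the bus stop",
    "my friend's flatmate is called Sam",   # invented example detail
]

# Address-based (computer-like): you must know *where* the item is stored.
print(memories[2])

# Content-addressable (memory-like): any fragment of the content can
# serve as a cue for retrieval.
def recall(cue):
    return [m for m in memories if cue in m]

print(recall("flatmate"))
print(recall("keys"))
```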
2. Recall
Recall is when a person must generate items from
memory
This is effectively a search process. Examples include:
- Free recall (no cues)
- An experiment that asks participants to "write down all the words you
  remember from the list you just studied"
- Cued recall in which a cue prompts recovery of the
target information
- Running into your friend’s flatmate (cue) and trying to
remember their name (target)
Free recall tasks (in which
participants can recall items in any order) show several sequencing
effects:
- The primacy effect is that the first items
presented in a list or sequence tend to be better recalled than
subsequent items
- The recency effect is that the last items in a
list or sequence tend to be recalled well
- Recency effects may be eliminated if a "filler task", such as counting,
  is performed between hearing the items and recalling them
3. Concepts, semantic information, and the
internet
Google and the Mind (Griffiths, Steyvers, & Firl, 2007) is an example of using tools from
Informatics to study behaviour
This paper is strongly influenced by Anderson’s
rational analysis
- Assumes that human memory retrieval and internet search engines
share the same computational goal
- finding only the
relevant information out of a very large information
"database"
- Assumes human memory is a near-optimal solution to this
task
- Analyses the task to try to learn more about human behaviour
and memory
Research question: do human memory and internet search engines
use similar solutions to retrieve information in response to
some query?
- The authors test Google’s PageRank algorithm as a possible
optimisation function for a certain type of memory retrieval
(word fluency)
This lecture introduces some of the main concepts from this paper; it
will be discussed in more depth in the tutorial
4. The internet, semantic memory, and semantic
networks
The authors describe several assumed parallels between connected
web pages on the internet and information stored in human
memory.
- On the web, items for retrieval are pages connected by
hyperlinks
- In memory, words or concepts are connected by associative
links into a semantic network
- Semantic memory is a form of explicit (declarative)
long-term memory that includes facts, concepts, and
vocabulary
- Note: There are other views of how semantic memory is stored,
other than in networks
![Figure: network diagrams from the Google and the Mind paper](networks_google2007paper.jpg)
- Pretty much as in the spreading activation model of
semantic memory (Collins & Loftus 1975)
5. Semantic memory, cont'd
Possible to link concept-nodes in a network based on their
semantic relatedness (similarity to one another)
Relatedness measures "How much does concept 1 have to do with
concept 2?"
![Figure: a spreading-activation semantic network (course textbook)](spreading_activation_textbook120.jpg)
In this diagram, the length of the link between two nodes
measures relatedness
- as an alternative to giving a numerical value
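A minimal sketch of these ideas, loosely based on the spreading-activation picture above: concepts are nodes, relatedness scores are weighted links, and activation flows outwards from a cued concept. The concepts, weights, step count, and decay parameter are all invented for illustration:

```python
# Toy semantic network: concepts as nodes, semantic relatedness as
# weighted links (weights are invented for illustration).
network = {
    "red":    {"fire": 0.8, "apple": 0.6, "rose": 0.7},
    "fire":   {"red": 0.8, "smoke": 0.9},
    "apple":  {"red": 0.6, "pear": 0.8},
    "rose":   {"red": 0.7, "flower": 0.9},
    "smoke":  {"fire": 0.9},
    "pear":   {"apple": 0.8},
    "flower": {"rose": 0.9},
}

def spread_activation(cue, steps=2, decay=0.5):
    """One simple spreading-activation pass: activation starts at the cue
    and flows along links in proportion to relatedness, weakening with
    each step (decay)."""
    activation = {cue: 1.0}
    frontier = {cue: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, relatedness in network.get(node, {}).items():
                passed = act * relatedness * decay
                if passed > activation.get(neighbour, 0.0):
                    activation[neighbour] = passed
                    next_frontier[neighbour] = passed
        frontier = next_frontier
    return activation

print(spread_activation("red"))
# Closely related concepts ("fire", "rose") end up more active than
# concepts reached only indirectly ("smoke", "flower").
```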
6. Associative frequency and the PageRank algorithm
PageRank is a more advanced
version of the concept of associative frequency
- Assume words in a corpus (or in memory) were connected into a
network, with links between associated words
- If word 1 is given as a cue, at least one person would need to
produce word 2 as a response for those words to be connected
- A word’s associative frequency measures the number of other words
  in the network that are linked to that particular word
- All links are equally important—it is only
the number of links that counts.
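A toy illustration of associative frequency as simple link counting; the cue-response pairs below are invented:

```python
from collections import defaultdict

# Toy word-association data: each (cue, response) pair means at least one
# person produced `response` when given `cue`. The pairs are invented.
associations = [
    ("dog", "cat"), ("cat", "dog"), ("dog", "bone"),
    ("mouse", "cat"), ("cheese", "mouse"), ("milk", "cat"),
]

# Associative frequency: how many other words in the network link to a
# word. Every link counts the same; only the number of links matters.
linked_from = defaultdict(set)
for cue, response in associations:
    linked_from[response].add(cue)

for word, sources in sorted(linked_from.items(), key=lambda kv: -len(kv[1])):
    print(word, len(sources))
# "cat" comes out on top: three other words (dog, mouse, milk) link to it
```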
Two key ideas about the PageRank algorithm:
- Links between web pages (items) are information about the
importance of any particular item
- Important web pages (items) will receive links from other
important items.
- Not all links are equal.
- Receiving a link from an important web page (item) is more
  informative for calculating importance than a link from an
  unimportant page
- One important link can be more informative than many links from
  unimportant pages.
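A minimal power-iteration sketch of the PageRank idea on an invented four-item graph (the textbook form of the algorithm, not Google's production system):

```python
# Minimal PageRank sketch (power iteration) on a tiny invented graph of
# items linking to each other: an item's importance depends on the
# importance of the items that link to it, not merely on how many links
# it receives.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, outgoing in links.items():
            share = rank[node] / len(outgoing)   # importance is divided
            for target in outgoing:              # among outgoing links
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for node, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
# "C" ranks highest: it is linked to by several nodes, including the
# relatively important "A".
```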
7. What kind of memory is being studied in this
paper?
This paper is interested in human word fluency, a type of
cued recall
- Fluency is a general term for how easily people retrieve
various facts from memory
- In a word fluency task, participants name the first word that
comes to mind when given a certain letter as a cue
- "C" might get the response "cat"
The paper reports an experiment to find out...
- Can PageRank predict human performance in a word fluency task
  (i.e., which words are produced most often)?
- How do the PageRank
predictions compare to other, established methods of
prediction?
In rational analysis terms, this is comparing an optimisation
function (the algorithm) to behavioural data
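Purely to illustrate what "comparing an optimisation function to behavioural data" can look like, the hypothetical sketch below rank-correlates invented model scores (standing in for something like PageRank values) with invented production counts. It is not the paper's actual analysis or data:

```python
# Hypothetical sketch: do the model's scores rank words the same way as
# behavioural data does? All numbers below are invented.
model_score = {"cat": 0.9, "car": 0.7, "cup": 0.4, "cliff": 0.1}   # model
produced    = {"cat": 31,  "car": 22,  "cup": 25,  "cliff": 2}     # people

def ranks(scores):
    """Rank items from highest score (rank 1) downwards."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {word: i + 1 for i, word in enumerate(ordered)}

def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied ranks)."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((rx[w] - ry[w]) ** 2 for w in xs)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman(model_score, produced))   # 1.0 would mean identical rankings
```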
8. Introduction to probability, for understanding
“Google and the Mind”
This lecture only briefly discusses probability theory
- You should know some general reasons why probability is
used to describe behaviour or events, but you do not need to know
the maths.
A traditional frequentist interpretation of probability
is interested in counting repeated outcomes from identical
"experiments"
- Think frequentist == interested in event frequency
- Common experiments include rolling dice, choosing cards from a
deck, flipping coins (outcomes can be calculated precisely)
- "What is the probability of drawing a king? Of rolling 10 sixes
in a row?"
But how can one calculate more complex probabilities, such as in
a model of an environment?
Probability theory is a major tool used in the Google memory model to make
predictions about events
- In this case, the events are the results of the word fluency
task
This paper, like many others, is interested in
non-frequentist probabilities
9. Probability, uncertainty, and information
processing
One of the major functions of the brain as an
information-processing system is to infer new information
from existing information
- This can also be understood as making predictions based on
data, and is inherently uncertain
A Bayesian interpretation of probability such as the one
used in the Google paper is interested in describing degrees of
certainty (belief) about various outcomes
Warning! Extremely simplified explanation follows
- Start with a hypothesis about the probability of an event, estimated
  in the absence of any data about how often that event has occurred in
  the past (the prior probability)
- This hypothesis is changed (updated) in light of a given dataset
  about the event
- Bayes’ theorem and related equations are used to compute how that
  hypothesis should be changed, yielding the posterior probability
- This is then the new estimate of belief about the event occurring
  (a prediction based on data)
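The update itself is computed with Bayes' theorem; writing h for the hypothesis and d for the observed data:

```latex
P(h \mid d) = \frac{P(d \mid h)\, P(h)}{P(d)}
```

Here P(h) is the prior, P(d | h) is how likely the observed data are if the hypothesis is true, and P(h | d) is the posterior.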
In the Google paper, one form of prediction would be the
probability of producing a particular word as a response in a
fluency task
- given that word’s association with other
words (data)
10. Conclusions from Google and the Mind
This has been a very brief introduction to some concepts in this
paper, in order to illustrate an application of rational analysis
and a novel way of studying human memory
The tutorial will look at this paper in more depth:
- Word fluency tasks
- Which comparisons were being made, and where all the words and
numbers came from
- Experimental results
You will need to prepare for this tutorial in advance
Again, do not worry about the maths-heavy parts of this paper.
Try reading them, but focus on the rest of the paper. You are not
expected to learn probability maths in this class
11. Why do we
sometimes fail to retrieve information?
Forgetting can be conceptualised as a failure of
memory retrieval
- Although it is difficult to determine whether information is ever really lost completely
Forgetting or retrieval failure can be divided into two
categories:
- Incidental forgetting: unintentional forgetting (the everyday
  sense of the word)
- Motivated forgetting, which includes both conscious,
intentional forgetting and unintentional forgetting
triggered by some motivation
- Consciously deciding (or being told) to forget something
- Limiting retrieval of some experiences (usually negative) as a
form of emotional regulation
In all of these forms, losing information or failing to retrieve
it is a form of optimisation for the memory system
- We are most likely to be able to retrieve items and events that
we need to recall
- We are not overburdened by recalling endless stressful,
traumatic, and negative events
12. Patterns of forgetting: The Ebbinghaus curve
A fundamental rule for most organisms: forgetting increases
as time progresses
In one of the earliest psychological studies (still cited to
this day) Hermann Ebbinghaus (1885/ trans. 1913) used himself as a
participant to study memory for nonsense syllables over time
![Figure: the Ebbinghaus forgetting curve](http://en.wikipedia.org/wiki/File:Ebbinghaus2.jpg)
- Learned 169 lists of nonsense syllables (like "LEV" and
"BUP")
- Chose nonsense syllables in an attempt to make sure that
content did not affect his memory (would not form associations
between learned content and prior knowledge or images)
- Re-learned each list after varying time intervals, from minutes
up to one month later.
- He used the amount of time required to re-learn each list as a
measure of how much was forgotten.
His results show that forgetting is not linear, but close
to a logarithmic curve
![Figure: the forgetting curve (course textbook)](forgetting_curve_textbook194.jpg)
- This means that a lot is forgotten very soon after
learning
- Then the rate of forgetting slows down and almost
stabilises
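As a rough illustration of that shape only (this is not Ebbinghaus's own formula, and the floor and time constant below are invented), retention can be sketched as an exponential decay that levels off:

```python
# Toy illustration of the shape of the forgetting curve (constants are
# invented; not Ebbinghaus's own formula): steep loss shortly after
# learning, then near-stabilisation above some floor.
import math

def retention(hours, floor=0.2, s=10.0):
    """Fraction retained after `hours`, levelling off at `floor`."""
    return floor + (1 - floor) * math.exp(-hours / s)

for hours in [0, 1, 9, 24, 48, 31 * 24]:   # from learning up to ~1 month
    print(f"{hours:4d} h   retained {retention(hours):.2f}")
```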
13. Applying Ebbinghaus’s research
In a long-term example with more naturalistic stimuli, Bahrick
(1984) tested retention of foreign-language grammar, reading
comprehension, and vocabulary (both recall and recognition)
- Tested 587 participants who studied Spanish in school 1 to 50
years previously
- Also gathered additional information about their language
learning
- Level of original training, grades
- Use of spoken, written Spanish language (rehearsal) since
training
![Figure: Bahrick's (1984) retention data (course textbook)](bahrick1984_textbook195.jpg)
Retention showed an exponential decline over the first 3-6
years, then stabilised for up to 30 years
Apparently there is some level of memory
“permastore” affected by original training, but not by
subsequent rehearsal
14. References
Course texts
- Memory (Baddeley, Eysenck, & Anderson, 2009)
Optional reading
- Probabilistic models of cognition (Chater, Tenenbaum, &
Yuille, 2006)
Other resources, available through the library or Google
Scholar
- Technical Introduction: A primer on probabilistic inference
(Griffiths & Yuille, 2006)
- Semantic memory content in permastore: Fifty years of memory
for Spanish learned in school (Bahrick, 1984)
- Memory: A contribution to experimental psychology
(Ebbinghaus 1885,
translated 1913). An online version is available at http://psychclassics.yorku.ca/Ebbinghaus/index.htm