This page gives an overview of my research interests, with links to publications for download.
- Individual differences in working memory capacity
- Why is working memory capacity limited?
- The architecture of working memory
- Age differences in working memory
- Can we think two things at once?
Individual Differences in Working Memory Capacity
When talking about working memory capacity, we assume that there is one capacity limit underlying many different observed limitations on cognitive performance. Evidence for this contention comes from factor-analytic studies showing that many different ways of measuring working memory capacity load on the same or closely related factors.
Moreover, the factor or factors reflecting the common variance of several working memory tasks are excellent predictors of reasoning performance:
I propose that the capacity limit of working memory arises from interference between bindings, and therefore, tasks requiring bindings to create novel structural representations should be highly correlated with working memory capacity. Preliminary evidence for the idea that individual (and age) differences in working memory capacity are related to the ability to maintain bindings can be found in the following papers:
A further study suggests that working memory capacity is closely related to the speed of information processing, as reflected by the drift-rate parameter of the diffusion model applied to two-choice reaction-time tasks.
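The role of the drift rate can be illustrated with a minimal random-walk simulation of a two-choice decision (a toy sketch for intuition only, not the fitting procedure used in the study; all names and parameter values here are my own):

```python
import random

def simulate_diffusion_trial(drift, boundary=1.0, noise=1.0, dt=0.001):
    """Simulate one two-choice trial as noisy evidence accumulation.

    Evidence starts midway between two response boundaries and drifts
    toward the correct one; the trial ends when a boundary is reached.
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary / 2:
        evidence += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        t += dt
    return t, evidence > 0  # (decision time, responded correctly?)

def mean_rt_and_accuracy(drift, n=2000):
    random.seed(1)  # fixed seed so the comparison is reproducible
    trials = [simulate_diffusion_trial(drift) for _ in range(n)]
    mean_rt = sum(t for t, _ in trials) / n
    accuracy = sum(correct for _, correct in trials) / n
    return mean_rt, accuracy
```

Running this with a low versus a high drift rate shows the key property linking drift rate to processing speed: a higher drift rate yields both faster and more accurate responses.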
Here is a methodological investigation into how best to measure individual differences in response times, using the diffusion model:
If you are interested in studying individual differences in working memory, we recommend our Matlab-based test battery, described here:
An updated version that enables users to present stimuli and instructions in any language of their choice can be downloaded in the software section.
Why is working memory capacity limited?
Several mechanisms have been proposed for why we forget information in working memory, and why tasks become harder the more separate elements we need to hold in mind simultaneously. Reinhold Kliegl and I investigated some of the most commonly suggested mechanisms in a common formal modelling framework: limited activation resources, time-based decay, interference due to confusion between items (“crosstalk”), and interference due to overwriting of representations. A model based on interference through overwriting provided the best fit to the data.
In a follow-up paper, we refined the interference model and embedded it in a non-linear mixed-effects (nlme) modeling framework, which allowed us to estimate parameters on the group level and the level of individuals simultaneously.
Elke Lange and I provided some direct evidence for feature overwriting as a cause of interference in working memory: words and nonwords are recalled worse when a distractor task involves other words or nonwords that share many phonemes with the memory items.
Two follow-up studies, one using serial recall of words and one using the complex-span paradigm, showed that this effect can be distinguished from similarity-based confusion.
Together with Steve Lewandowsky I tested three computational models of serial recall that represent three assumptions about why information in working memory is forgotten: temporal distinctiveness, time-based decay, and interference. Again, the interference model did best.
A review of the literature on forgetting in verbal short-term memory shows that there is no convincing evidence for time-based decay. Some previous studies that have often been cited as evidence for decay are methodologically flawed or are open to alternative interpretations.
Not surprisingly, not everybody agreed with our conclusion. Here are two commentaries and our responses to them:
Assumptions about the mechanisms of remembering and the causes of forgetting in working memory can best be evaluated in the context of computational models that implement them. My colleagues and I therefore implemented a prominent decay-based theory, the time-based resource-sharing (TBRS) theory, as a computational model:
We also developed a computational model of an interference-based theory, SOB-CS:
The Matlab code for both models can be downloaded in the software section.
Evidence supporting the predictions of the interference model and speaking against time-based decay can be found in the following articles:
The Architecture of Working Memory
The main function of working memory seems to be to provide the representations needed in complex cognitive tasks. This involves selection of relevant representations in long-term memory, constructing new combinations and structures, and selectively accessing individual representations for manipulation. Building on previous work by Nelson Cowan, I proposed a framework for the architecture of working memory that consists of three embedded components: (1) the activated part of long-term memory, responsible for making potentially relevant information easy to retrieve, (2) the region of direct access, responsible for establishing new bindings to build new structural representations, and (3) the focus of attention, responsible for selecting one representation at a time for processing.
Recently I extended this framework to include procedural working memory alongside the traditional declarative working memory. Whereas declarative working memory holds available representations of entities in the environment (e.g., objects, events, words, digits), procedural working memory holds available representations of cognitive operations and actions (i.e., task sets). The framework is described in the following chapter:
Evidence supporting the assumptions about the structure of declarative working memory can be found here:
More recent work provides initial evidence for the assumed analogy between declarative and procedural working memory:
The distinction between activated long-term memory and the direct-access region can be mapped onto dual-process models of recognition: I assume that activation of a representation in long-term memory forms the basis for a quick and automatic familiarity assessment, whereas comparison of a stimulus with the contents of the direct-access region provides the basis for a slower, more effortful process of recollection. The following studies offer some evidence for this proposal, and investigate the relationship between familiarity and recollection in working-memory recognition tasks.
Age Differences in Working Memory
A large part of the decline of cognitive abilities in older age can be attributed to reduced working memory capacity. My colleagues and I have made a few attempts to pin down which function of working memory is impaired in old age, using the three-component framework of working memory outlined above. It seems that old adults have specific problems with resisting intrusions from activated but irrelevant representations in long-term memory, and with maintaining information in the region of direct access, but no difficulties with switching the focus of attention from one object in working memory to another.
Can we think two things at once?
Usually not. Cognitive operations must be done one at a time; this limitation is often described as a bottleneck. With considerable practice on combining a numerical and a spatial working memory updating task, however, young adults can learn to do these two operations simultaneously without mutual interference. A further study has shown that old adults cannot acquire this skill.
The ability to carry out two operations simultaneously also breaks down when people need to switch between to-be-processed items in working memory:
Mental Models in Deductive Reasoning
The theory of mental models developed by Phil Johnson-Laird and Ruth Byrne describes deductive reasoning as based on semantic representations, that is, representations of the situation that is described by the premises (i.e., mental models). One of my interests is in how people integrate the information in separate premises into a single mental model. Premises in deductive reasoning tasks often describe a relation between two objects or events. We found that relational premises have an inherent directionality, that is, they instruct listeners to construct a model of the relation by placing the two elements in working memory in a particular order, starting with the reference object (or relatum) and adding the target object. As a consequence, integrating two premises is easier if the first premise already contains the relatum of the second premise, so that the target object of the second premise can simply be added in the prescribed relation to the model of the first premise.
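The integration principle described above can be sketched in a few lines (an illustrative toy model, not the authors' implementation; the function name and the ordered-list representation are my own assumptions):

```python
def integrate(model, relatum, target, after=True):
    """Add `target` to an ordered spatial model relative to `relatum`.

    Returns None when the relatum is not yet in the model: the premise
    cannot be integrated directly and would have to be held back.
    """
    if relatum not in model:
        return None
    i = model.index(relatum)
    new = list(model)
    new.insert(i + 1 if after else i, target)
    return new
```

Given "A is above B" (model `["A", "B"]`), the premise "B is above C" names B as its relatum, so C can simply be added to the existing model; a premise whose relatum is not yet in the model cannot be integrated at that point, which mirrors why relatum-first premise orders are easier.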
Other research investigated the role of working memory capacity in the ability to construct complex mental models of spatial relations. I assume that working memory capacity reflects the ability to build new structural representations. It follows that people with low working memory capacity should have difficulties constructing complex mental models. This is what we found.
The Meaning of Conditionals
What do we mean by saying, for instance, "If I do one more experiment, I will understand how people reason"? One view, prominent in theories of human reasoning that are inspired by formal logic and semantics, is that conditionals express the material conditional. The material conditional is defined by a truth table: "If p then q" is true in three possible cases: (1) p and q are both true, (2) p is false and q is true, and (3) p is false and q is false. The mental models approach to reasoning with conditionals is built on this idea. The material conditional view has been criticized by many philosophers and psychologists, and the alternative many of them propose is that the conditional "if p then q" expresses a high conditional probability of q, given p. The above sentence would then mean that my probability of understanding how people reason is high, given that I do one more experiment. Some of my research together with Oliver Wilhelm focused on distinguishing between these views. We found that the majority of people understands the conditional in terms of the conditional probability, but a minority understands it differently, in a way that could be explained by the mental models approach.
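The contrast between the two readings can be made concrete in a few lines of code (an illustrative sketch of my own, not material from the papers): the material conditional is true in every case where p is false, whereas on the conditional-probability reading, false-antecedent cases are simply irrelevant.

```python
from itertools import product

def material_conditional(p, q):
    # "If p then q" as the material conditional:
    # false only in the single case where p is true and q is false.
    return (not p) or q

# Truth table over the four cases; three come out true.
table = {(p, q): material_conditional(p, q)
         for p, q in product([True, False], repeat=2)}

def conditional_probability(cases):
    # P(q | p): only cases where p holds count toward the evaluation;
    # cases with a false antecedent neither confirm nor disconfirm.
    relevant = [q for p, q in cases if p]
    return sum(relevant) / len(relevant) if relevant else None
```

For a set of observed cases, `conditional_probability` ignores the false-p cases entirely, while the material-conditional reading counts every one of them as making the conditional true.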
Research with realistic conditionals referring to people’s world knowledge has repeatedly shown that logically valid inferences from a conditional are blocked when people can think of counterexamples to the conditional premise. The idea that reasoners search for counterexamples and accept a conclusion only if they don’t find any originates in the mental-model theory, but the probabilistic view of conditionals would predict the same effect, because counterexamples diminish the conditional probability of the consequent, given the antecedent. Sonja Geiger and I found a way to tease apart these explanations, and found support for the probabilistic explanation.
The probabilistic approach does not fare that well when it comes to explaining reasoning, however. It seems that whether people accept or reject inferences from conditional premises depends more on whether they can think of counterexamples than on their degree of belief in the conditional, as determined by the conditional probability. The theory of mental models alone also cannot explain the reasoning data. The best account so far is provided by a dual-process model, with one process using probabilistic information and the other using analytical, model-based thinking.
Another line of research has tested a specific version of the probabilistic approach to conditionals, the theory of Oaksford and Chater (1998, 2001) on the Wason four-card selection task. This theory did not fare very well...