Behavioral Moderators

We have been studying how to implement theories of behavioral moderators in cognitive architectures. As a step towards this, we present here the task set and, where available, example models that perform these tasks. This set of tasks and models is designed for reuse, although it remains a work in progress. Please contact us if you would like to use all or part of this set of tasks and models. As a group, we call them Cafe Nav.

We would like to start by briefly defining Behavioral Moderators and offering some background on why we are interested in creating Behavioral Moderator Overlays for cognitive architectures.


A Behavioral Moderator, broadly defined, is anything that changes an individual's physiological state or environment in a way that alters that individual's behavior.


Current and ongoing research indicates that behavioral moderators play a significant role in almost every human behavior. Because we, the Applied Cognitive Science Lab, are very interested in both learning from and replicating human behavior, it is important for us to study and hypothesize about the effects of behavioral moderators.

Our Hypothesis and Goals

Our current hypothesis is that behavioral moderators are quantifiable and that specific moderators affect cognition and behavior in consistent, lawful relationships. Furthermore, we believe that we can derive a representational formula for each such relationship and implement it as an overlay on a cognitive architecture. The specific behavioral moderators that we have chosen to examine initially are pre-task appraisal and caffeine.
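To make the overlay idea concrete, here is a minimal sketch (in Python, purely illustrative) of how a quantified moderator level might scale architecture parameters. The parameter names and the linear form of the relationship are assumptions for illustration, not the derived formula.

```python
def apply_overlay(params, moderator_level, gains):
    # Scale each architecture parameter by a linear function of the
    # moderator level; parameters without a gain entry pass through.
    return {name: value * (1.0 + gains.get(name, 0.0) * moderator_level)
            for name, value in params.items()}

# Hypothetical example: a caffeine-like moderator speeding responses by
# lowering a latency-factor-style parameter.
base = {"latency_factor": 1.0, "activation_noise": 0.25}
caffeinated = apply_overlay(base, moderator_level=0.8,
                            gains={"latency_factor": -0.3})
```

A real overlay would set these values inside the architecture itself; the dictionary here just stands in for a parameter set.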

Caffeine is the most popular psychoactive substance in the world, yet we do not have a complete model of its effects on cognition. For such a model to be possible, cognitive architectures must advance by adding overlays. Caffeine's cognitive effects depend on how much caffeine is currently in the body, so the pharmacokinetics of caffeine must be examined, particularly uptake and decay rates. Cognitive effects of caffeine include faster reaction times, faster semantic processing, faster logical processing, and increased subjective alertness. The task will be to make these relationships explicit and quantified.
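As a sketch of the pharmacokinetic side, the standard one-compartment model with first-order absorption (the Bateman function) captures uptake and decay with two rate constants. The absorption rate and the roughly five-hour elimination half-life used below are illustrative textbook values, not fitted parameters.

```python
import math

def caffeine_in_body(dose_mg, t_hours, ka=3.0, half_life_hours=5.0):
    """Amount of caffeine in the body (mg) at time t after an oral dose,
    per the one-compartment Bateman function. ka is the first-order
    absorption rate (1/h); elimination follows the given half-life."""
    ke = math.log(2) / half_life_hours  # elimination rate constant (1/h)
    return dose_mg * (ka / (ka - ke)) * (math.exp(-ke * t_hours)
                                         - math.exp(-ka * t_hours))
```

Under these rates the level rises quickly after the dose, peaks within about an hour, and then decays exponentially; an overlay would map this curve onto effects such as reaction-time speedups.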

A review of caffeine and of pre-task appraisal for this use is in progress.

Tasks and models for studying behavioral moderators

We are using five tasks with corresponding models to gather and analyze participants' data.

1. Vigilance and simple reaction time.

In this task, participants are asked to respond by pressing a key when they see a circle, such as the one shown above. Alternatively, squares can appear; participants are asked to ignore these distractor stimuli. We collect data throughout this process, recording accuracy and response time. In this task we can use Signal Detection Theory to analyze the participant's ability to detect and respond to ambiguous stimuli.

From this task, we get measures of simple reaction time, visual reaction time, accuracy (d′), and decision threshold.
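For readers unfamiliar with the Signal Detection Theory measures, here is a minimal sketch of how d′ and the decision criterion are computed from hit and false-alarm counts, using a standard log-linear correction so extreme rates do not produce infinite z-scores. The function name and the particular correction are our choices for illustration.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d-prime and criterion c from response counts, with a log-linear
    correction (add 0.5 to each count) to avoid rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

For example, 45 hits, 5 misses, 5 false alarms, and 45 correct rejections yield a d′ near 2.5 with an essentially neutral criterion.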

In the snapshot below, the circles appear for only 300 ms and at random locations.

This task is implemented in Allegro Common Lisp, and runs most easily on the PC. The Lisp code for this experimental task can be found here. A draft model in ACT-R 5 can be found here.

2. MODS Task: In this task, working memory is exhaustively tested by asking the participant to repeat aloud a list of letters that are displayed one at a time and then remember the digit presented at the end of the list. Between three and six such lists are presented on each trial, and thus the participant is asked to recall a three- to six-digit number per trial. For more information on this task and working memory, please see:

Lovett, M. C., Reder, L. M., & Lebiere, C. (1999). Modeling working memory in a unified architecture: An ACT-R perspective. In A. Miyake & P. Shah (Eds.), Models of working memory (pp. 135-182). New York: Cambridge University Press. (Abstract available here)

The task is currently available from Marsha Lovett. It exists in ePrime.

A model in ACT-R 4 has been created and is available from Lovett. This model includes a spreadsheet that implements a simple regression to compute the best-fitting setting of working memory capacity (W) given performance on the task. The model includes its own implementation of the task in Lisp.
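In the same spirit as that spreadsheet, fitting W amounts to minimizing squared error between observed and predicted accuracy. The sketch below uses a hypothetical predict function as a stand-in for the ACT-R model's predictions, since the real fit runs the model itself; the numbers are illustrative, not data.

```python
def fit_W(observed, predict, candidates):
    """Return the candidate working-memory capacity W whose predicted
    accuracies best fit the observed ones (least squares)."""
    def sse(W):
        return sum((o - p) ** 2 for o, p in zip(observed, predict(W)))
    return min(candidates, key=sse)

# Hypothetical stand-in for the model: accuracy falls off with list length.
predict = lambda W: [min(1.0, W / n) for n in (3, 4, 5, 6)]
observed = [1.0, 0.9, 0.75, 0.6]  # illustrative accuracies, not data
best_W = fit_W(observed, predict, [w / 10 for w in range(10, 41)])
```

A grid search over candidate W values is enough here because the fit is one-dimensional; the spreadsheet's regression plays the same role.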

We are working on revising the model into ACT-R 5.

3. Driving Task: In this task, we ask participants to play a video-game driving task. The basic system is available online. We have modified the basic code to collect subjects' data by recording the key presses that control the driving process and the random seed.
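The design rationale for recording the random seed is deterministic replay: the seed regenerates the same course, so the seed plus the timestamped key presses fully reconstruct a session. A minimal sketch of that logging idea (all names are hypothetical; the actual task is a Java program):

```python
import random

def start_session(seed):
    """Seed the game's RNG and open a log; the seed plus the key presses
    are enough to replay the session deterministically."""
    rng = random.Random(seed)
    return rng, {"seed": seed, "keys": []}

def record_key(log, t_ms, key):
    # Timestamped input events; replay feeds these back at the same times.
    log["keys"].append({"t": t_ms, "key": key})

rng, log = start_session(42)
record_key(log, 120, "left")
record_key(log, 480, "right")
```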

This screen shot (above) shows what a subject sees during the driving task.

The model that drives this task is here, along with a model that also includes worry (ACT-R 4). These models use SegMan to interact with the task (though we may move to a more advanced version). Currently, we are not developing an ACT-R 5 model of this task, but we would welcome a collaborator who wishes to do so.

The modified task (a Java program) is available here.

4. Serial Subtraction: In this task we ask participants to repeatedly subtract seven from a given starting value. We assess their mood and emotional status to model the effects of emotion and worry. Currently, we are upgrading our cognitive model from the ACT-R 4.0 architecture to the ACT-R 5.0 architecture. The link to our previous work can be found here.

A more current version of our work can be viewed in two short video clips. Clicking on the challenged link plays a video of the model running under the challenged appraisal setting; clicking on the threatened link plays the model running in the threatened appraisal setting. Both of these models use the ACT-R 5.0 architecture with a Tcl/Tk interface.

Challenged Appraisal Sample Movie

Threatened Appraisal Sample Movie

The default task does not require any input; the subject is simply told the number to start from.

We have created a version that runs in Lisp with keyboard input, for use where subjects cannot speak or where it is desirable to have the data transcribed automatically.
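Automatic transcription also makes automatic scoring straightforward. Here is a minimal sketch of one common scoring convention (after an error, correctness is judged relative to the subject's own previous answer; the lab's actual scoring rule may differ):

```python
def score_serial_subtraction(start, responses, step=7):
    """Count correct responses in a serial-subtraction transcript,
    re-anchoring on the subject's last answer after each error."""
    correct = 0
    expected = start - step
    for r in responses:
        if r == expected:
            correct += 1
        expected = r - step  # next answer judged against what was said
    return correct

# 980 is an error (979 expected), but 973 correctly continues from 980.
n_correct = score_serial_subtraction(1000, [993, 986, 980, 973])
```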

5. Argus: As a more complex task, we have been using Argus, created by Mike Schoelles and Wayne Gray. It is an air-traffic-control-like task.

Schoelles, M. J., & Gray, W. D. (2001). Argus: A suite of tools for research in complex cognition. Behavior Research Methods, Instruments, & Computers, 33(2), 130-140.

Mike provides the task in Macintosh Common Lisp, contact him for it.

Mike also provides the model, currently available in ACT-R 5.



This work conducted by the ACS lab would not be possible without collaboration from Dr. Laura Klein and her lab at Penn State. Similarly, this research would not be possible without funding from the Office of Naval Research. Further work is also being done in collaboration with Agent Oriented Software.