PST Analogy Tutorial

Table of Contents

  What it does
  Models that learn
  Analogy Method
  Problem Space Unpacking
  Initial ideas
  The Perform space
  The Action-Proposal space
  Use Analogy (operator)
  The Use-Analogy space
  The Imagine-Task operator
  Summary of behaviour
  Two interesting points about the model
  References

What it does


Analogy is a "real" Soar program for doing deliberate analogy. It is not large, consisting of about 30 rules, but it is part of a serious piece of research:

Rieman, J., Lewis, C., Young, R. M. & Polson, P. G. (1994) "Why is a raven like a writing desk?": Lessons in interface consistency and analogical reasoning from two cognitive architectures. In Proceedings of CHI'94: Human Factors in Computing Systems. 438-444. ACM Press.


Consider a computer user who has basic Macintosh skills and knows how to launch the Word program on a Mac, but does not know how to launch programs in general. Their task is to launch Cricket Graph.

Note that the Analogy tutorial incorporates various exercises. Answers for these are also in the tutorial, and should only be viewed once you have completed the relevant exercise.

Back to Table of Contents

Models that learn

Most "user models" just perform; they don't learn. There are a few exceptions, such as TAL (Howes & Young, 1991), EXPL (Lewis, 1988) and EXPLor (Howes & Payne, 1990). The last two use a form of analogy to interpret a sequence of actions.

The Analogy model captures the analogy route for the task of launching Cricket Graph, and has been implemented in both Soar and ACT-R.

Collaborators: John Rieman, Clayton Lewis, Peter Polson, with help from Marita Franzke (all U Colorado at Boulder).


Analogy Method

There is a fairly general method for analogy (based on the work of Holyoak and others):

For example,
(launch Word) :: (launch CG) =
(double-click Word icon) :: (double-click CG icon).

A specific, simple version of this is used in our model:

  1. If the task involves some Effect on an Object Y,
  2. where the Object is of a known Class,
  3. and we can recall another member X of that Class,
  4. then we try to imagine an action for achieving the Effect on Object X,
  5. and if we can do it, we substitute Y for X in the action,
  6. and return it as the recommended action.
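The heart of the method is the substitution in steps 4-6: recall an action that works for X, then swap Y in for X. A minimal sketch in Python, where representing actions as strings is an illustrative assumption, not part of the model:

```python
# Steps 4-6 in miniature: take the imagined action for the known object X
# and substitute the new object Y for every occurrence of X in it.
# (Toy string representation of actions, for illustration only.)
def substitute(imagined_action: str, x: str, y: str) -> str:
    return imagined_action.replace(x, y)

print(substitute("double-click Word icon", "Word", "CG"))
# -> double-click CG icon
```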


Problem Space Unpacking

Soar lends itself to a distinctive style of model development, which we will call problem space unpacking. We will use this technique to explain the analogy model. Like many software methods, it's an iterative technique:

  1. Starting with one problem space, we ask what knowledge an agent needs in order to be able to perform some task. We add that knowledge by hand, and try to get a satisfactory running model.

  2. We then argue that the agent wouldn't have arrived in the situation with that knowledge already in place. So we ask: What problem could the agent have set itself, solved in a lower space, that would have given rise to the knowledge by chunking?

So on each iteration we effectively push the knowledge down by one level.

As with any software development method, this approach works best if used just as guidance, and applied flexibly.


Initial ideas

Unpacking the top two Problem Spaces

What ideas do we have about the spaces we might need?

  1. The top space is probably a Perform space, that is, a space in which we have only fully automatised behaviour -- behaviour that occurs without any of the uncertainty about what to do that characterises an impasse. Given a task, there will be rules that specify what action to take.

    Those task-action mapping rules (TAM rules) will, of course, be learned by chunking, so the automatised behaviour will not be seen until these chunks are in place.

  2. We have a definite idea of the particular method of simple analogy that we are looking for. So somewhere there will be a space in which our six-step method is executed -- a Use-Analogy space.

    Such a multi-step sequence is most naturally thought of as the implementation space for some operator. So we'll expect to see a Use-Analogy operator somewhere, with the implementation space below it.


The Perform space

So we have the notion of the top space as a Perform space, with automatised task-action mapping rules.

To be definite, suppose that the state has on it a ^task to specify the task, and wants an ^action to specify the corresponding action. Once the action is known, the motor system will carry it out -- that part does not have to be learned.

The mapping from ^task to ^action could be done by an operator, but initially we'll imagine it's just a state-elaboration chunk.
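As a toy sketch of this arrangement, assuming a dict-like state and treating the TAM chunk as a simple elaboration function (the representations are illustrative, not Soar code):

```python
# A state-elaboration 'chunk' mapping ^task to ^action in the Perform space.
def tam_rule(state):
    # Fires only for the one task this chunk encodes.
    if state.get("task") == ("launch", "Word"):
        state["action"] = ("double-click", "Word icon")

def perform(state):
    tam_rule(state)              # state elaboration adds ^action, if known
    return state.get("action")   # the motor system would then execute this

print(perform({"task": ("launch", "Word")}))  # -> ('double-click', 'Word icon')
print(perform({"task": ("launch", "CG")}))    # -> None: no chunk, so an impasse
```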

Let's start building a diagram of the different spaces and their relationships:

Perform space diagram not available

Now do Exercise 1.


The Action-Proposal space

When the model starts, the TAM chunks in the Perform space have not yet been learned. So the appropriate ^action for the given ^task will not be known. Hence there will be an impasse -- a State No Change, given our commitments so far -- into a subgoal whose job is to propose an ^action. So we think of it as having an Action-Proposal space.

In this space we'll put knowledge (by hand) about how to launch Word and Draw, assumed known to the agent. But we don't have knowledge about CG (Cricket Graph) or XL (Excel).

Perform space/action-proposal space diagram not available

Now do Exercise 2.

Answer to Exercise 1.


Use Analogy (operator)

In the Action-Proposal space, if the task is to launch a program other than the ones we know about (Word, Draw), what do we do? Well, there might be several possible tactics for finding out (e.g. asking someone, ...), but the one we'll focus on is to try using analogy. So we'll propose an operator called Use-Analogy.

That operator is of course the one we had anticipated. So we know that it will impasse into an operator implementation space, where our six-step simple analogy method actually gets applied:

Perform/action-proposal/use-analogy spaces diagram not available

Answer to Exercise 2.


The Use-Analogy space

In the Use-Analogy space, we implement the six-step analogy method:

  1. Recognise that the task involves some effect on some object Y, so that the method is appropriate. Record this by putting (^analogy-method analogy-1) on the state.

  2. Get the class of the object. We record it by marking the state e.g. (^object-class program).

  3. Recall another member, X, of the same class. We mark the state with (^class-member X).

  4. We ask if we know how to achieve the effect with X (more on this later). If we do, we mark the state with an ^imagined-action.

  5. In that action, we replace all occurrences of X by Y. That gives us an ^analogised-action.

  6. Then we return the ^analogised-action as the recommended ^action on the top state.
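The six steps above can be sketched as successive markings on a dict-like state. The attribute names follow the ^attributes in the text; the lookup tables and string representations are illustrative assumptions:

```python
# Class knowledge and known task-action mappings (toy assumptions).
CLASSES = {"Word": "program", "CG": "program"}
KNOWN_ACTIONS = {("launch", "Word"): "double-click Word icon"}

def use_analogy(state, effect, obj_y):
    state["analogy-method"] = "analogy-1"                        # step 1
    state["object-class"] = CLASSES[obj_y]                       # step 2
    obj_x = next(o for o, c in CLASSES.items()                   # step 3
                 if c == state["object-class"] and o != obj_y)
    state["class-member"] = obj_x
    imagined = KNOWN_ACTIONS.get((effect, obj_x))                # step 4
    if imagined is None:
        return None                                              # can't do it
    state["imagined-action"] = imagined
    state["analogised-action"] = imagined.replace(obj_x, obj_y)  # step 5
    return state["analogised-action"]                            # step 6

state = {}
print(use_analogy(state, "launch", "CG"))  # -> double-click CG icon
```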



The Imagine-Task operator

Continuing with the problem-space unpacking, there is the crucial issue of how we know "whether we can achieve the effect with X". We would not expect the model to start with a chunk that encodes this knowledge, so where does it come from?

Soar gives us a natural way to discover whether we can do something without having the meta-knowledge beforehand. We simply set up the task in a subgoal, and see what happens. We do this with an Imagine-Task operator, which impasses into an operator implementation space that is very much like the top Perform space, so we can see what action, if any, gets taken:
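In the same toy dict-based representation as before (an illustrative assumption, not Soar code), the Imagine-Task operator amounts to running the task in a fresh copy of the Perform space and observing whether any action emerges:

```python
# TAM chunks assumed already learned for Word (a toy assumption).
TAM_CHUNKS = {("launch", "Word"): ("double-click", "Word icon")}

def imagine_task(task):
    """Set up `task` in an imaginary Perform space and see what happens."""
    imaginary_state = {"task": task}        # fresh subgoal state
    action = TAM_CHUNKS.get(task)           # do any TAM chunks fire?
    if action is not None:
        imaginary_state["action"] = action
    # No meta-knowledge needed: the presence or absence of ^action is the answer.
    return imaginary_state.get("action")

print(imagine_task(("launch", "Word")))  # -> ('double-click', 'Word icon')
print(imagine_task(("launch", "XL")))    # -> None
```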

Use-analogy/imagine-task spaces diagram not available

When we put it all together, and take care of what happens if we don't know how to handle X, we get:

Full problem spaces diagram not available

Now do Exercise 3.


Summary of behaviour

Behaviour 1: Simple Chunk for Word

We assume a user who knows how to launch Word, but not, for example, Cricket Graph.

Presumably in the past the user learned to launch Word by instruction or demonstration, and this knowledge resides in the Action-Proposal space.

So, again at some point in the past, the task of launching Word was run, and yielded a chunk that encodes a task-action mapping rule in Perform space:

Perform/action-proposal/chunk-1 diagram not available

Now, when that same task is done, the TAM chunk fires:

Perform/chunk-1 diagram not available

Behaviour 2: Learning to launch Cricket Graph by analogy

The diagram below simply summarises the way the analogy method works, as we have developed it through the problem-space unpacking.

From the top Perform space, because no action is proposed, the model impasses into the Action-Proposal space. There the Use-Analogy operator is selected, but to implement it the model impasses into an implementation space, the Use-Analogy space. Most of the work of the analogy method is carried out there, but at the point where the model has to ask itself whether it knows how to launch some other program, perhaps Word, it impasses into an Imaginary version of the Perform space.

In the Imaginary Perform space, chunk-1 fires, recalling the relevant action (double-click on Word). This result is passed up as the result of the imagine-task operator, creating chunk-2. The action is analogised to be appropriate to the given task (say, double-click on CG), and is passed up as the result in the Action-Proposal space, forming chunk-3. From there, it is recommended into the top Perform space as the action to be done, and forms chunk-4, a new specialised TAM rule for launching CG.

Task: launch CG diagram not available

Behaviour 3: Positive Transfer

The diagram below shows what happens if the model is then given the task of launching yet another program, in this case Excel.

The processing follows the same general course as in the previous case, for CG. But at the point in the Use-Analogy space where the model has to ask itself whether it knows how to launch some other program, Word, this time chunk-2 fires to give the answer directly: "Yes, you double-click on Word". This time there is no need actually to imagine doing the task in an Imaginary Perform space.
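The transfer effect can be sketched by modelling the chunk store as a cache over the "imagine the task" step. This is a loose analogy (Soar's chunking is more general than memoisation), and all names here are illustrative:

```python
chunks = {}          # results cached by chunking (chunk-2 and friends)
subgoal_runs = []    # record of episodes in the imaginary Perform space

def can_launch(program):
    """'Do I know how to launch `program`?' Chunks first, subgoal if needed."""
    if program in chunks:                 # chunk-2 fires: direct answer
        return chunks[program]
    subgoal_runs.append(program)          # impasse into the imaginary space
    action = ("double-click", program) if program == "Word" else None
    chunks[program] = action              # chunking caches the result
    return action

can_launch("Word")   # during the CG task: needs the imaginary Perform space
can_launch("Word")   # during the XL task: answered directly by chunk-2
print(subgoal_runs)  # -> ['Word']  (only the first query needed a subgoal)
```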

Task: launch Excel diagram not available

Answer to Exercise 3.


Two interesting points about the model

Note that these points are dictated by the architecture. They follow from asking: How does Soar lend itself to doing the analogy task?

Further exercises: Exercise 4.

Answer to Exercise 4 (first part).


References
Howes, A. & Payne, S. J. (1990) Semantic analysis during exploratory learning. In J. C. Chew & J. Whiteside (Eds.) CHI'90 Conference Proceedings: Human Factors in Computing Systems, 399-405. ACM Press.

Howes, A. & Young, R. M. (1991) Predicting the learnability of task-action mappings. In S. P. Robertson, G. M. Olson & J. S. Olson (Eds.) Proceedings of CHI'91: Human Factors in Computing Systems, 113-118. ACM Press.

Howes, A. & Young, R. M. (1996) Learning consistent, interactive, and meaningful task-action mappings: A computational model. Cognitive Science, in press.

Lewis, C. H. (1988) Why and how to learn why: Analysis-based generalization of procedures. Cognitive Science, 12, 211-256.

Rieman, J., Lewis, C., Young, R. M. & Polson, P. G. (1994) "Why is a raven like a writing desk?": Lessons in interface consistency and analogical reasoning from two cognitive architectures. In Proceedings of CHI'94: Human Factors in Computing Systems. 438-444. ACM Press.


Return to main page of: Introduction to Psychological Soar Tutorial
(or use the Back button to return to where you just were).