Analogy is a "real" Soar program, for doing deliberate analogy. It is not large, consisting of about 30 rules, but it's part of a serious piece of research:
Rieman, J., Lewis, C., Young, R. M. & Polson, P. G. (1994) "Why is a raven like a writing desk?": Lessons in interface consistency and analogical reasoning from two cognitive architectures. In Proceedings of CHI'94: Human Factors in Computing Systems, 438-444. ACM Press.
Consider a computer user who has basic Macintosh skills and knows how to launch the Word program on a Mac, but does not know how to launch programs in general. Their task is to launch Cricket Graph.
(This is what we'll explore with the model here.)
Note that the Analogy tutorial incorporates various exercises. Answers for these are also in the tutorial, and should only be viewed once you have completed the relevant exercise.
Most "user models" just perform, they don't learn. There are a few exceptions, such as TAL (Howes & Young, 1991), EXPL (Lewis, 1988) and EXPLor (Howes & Payne, 1990). The last two use a form of analogy to interpret a sequence of actions.
The Analogy model follows the analogy route for the task of launching Cricket Graph, and has been implemented in both Soar and ACT-R.
Collaborators: John Rieman, Clayton Lewis, Peter Polson, with help from Marita Franzke (all U Colorado at Boulder).
There is a fairly general method for analogy (based on Holyoak and others):
A specific, simple version of this is used in our model:
Soar lends itself to a distinctive style of model development, which we will call problem space unpacking. We will use this technique to explain the analogy model. Like many software methods, it's an iterative technique:
As with any software development method, this approach works best if used just as guidance, and applied flexibly.
What ideas do we have about the spaces we might need?
Those task-action mapping rules (TAM rules) will, of course, be learned by chunking, so the automatised behaviour will not be seen until these chunks are in place.
Such a multi-step sequence is most naturally thought of as the implementation space for some operator. So we'll expect to see a Use-Analogy operator somewhere, with the implementation space below it.
So we have the notion of the top space as a Perform space, with automatised task-action mapping rules.
To be definite, suppose that the state has on it a ^task to specify the task, and wants an ^action to specify the corresponding action. Once the action is known, the motor system will carry it out -- that part does not have to be learned.
The mapping from ^task to ^action could be done by an operator, but initially we'll imagine it's just a state-elaboration chunk.
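For concreteness, here is a rough sketch, in Soar production syntax, of what such a learned state-elaboration chunk might amount to for launching Word. This is not code from the model itself: the rule name and the attribute substructure (^verb, ^object and so on) are assumptions made purely for illustration.

# Hedged sketch of a TAM rule: a state-elaboration chunk that maps the
# task "launch Word" onto the action "double-click the Word icon".
# The real chunk is built by the architecture during chunking; the
# attribute names below are illustrative assumptions.
sp {sketch*perform*tam*launch-word
   (state <s> ^name perform
              ^task <t>)
   (<t> ^verb launch ^object word)
-->
   (<s> ^action <a>)
   (<a> ^verb double-click ^object word-icon)}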
Let's start building a diagram of the different spaces and their relationships:
Now do Exercise 1.
When the model starts, the TAM chunks in the Perform space have not yet been learned. So the appropriate ^action for the given ^task will not be known. Hence there will be an impasse -- a State No Change, given our commitments so far -- into a subgoal whose job is to propose an ^action. So we think of it as having an Action-Proposal space.
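One way to make that concrete, sketched here purely as an illustration (the space names are assumptions about the model's representation), is a default elaboration that recognises the state no-change impasse below the Perform space and names the new substate accordingly, so that the action-proposal knowledge can test for it:

# Hedged sketch: name the substate created by a state no-change impasse
# out of the Perform space, so later rules can test ^name action-proposal.
sp {sketch*default*name*action-proposal
   (state <s> ^impasse no-change
              ^attribute state
              ^superstate <ss>)
   (<ss> ^name perform)
-->
   (<s> ^name action-proposal)}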
In this space we'll put knowledge (by hand) about how to launch Word and Draw, assumed known to the agent. But we don't have knowledge about CG (Cricket Graph) or XL (Excel).
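As an illustration of roughly what that hand-coded knowledge could look like (again, the attribute names and values are assumptions, not the model's actual rules), a rule for Word might return the known action directly as a result in the Perform space above:

# Hedged sketch of hand-coded knowledge in the Action-Proposal space:
# if the task is to launch Word, return the action "double-click the
# Word icon" to the Perform space above. Attribute names are assumptions.
sp {sketch*action-proposal*launch-word
   (state <s> ^name action-proposal
              ^superstate <ss>)
   (<ss> ^task <t>)
   (<t> ^verb launch ^object word)
-->
   (<ss> ^action <a>)
   (<a> ^verb double-click ^object word-icon)}

Because the ^action is created on the superstate, it is a result of the subgoal, and chunking over it is just what would yield a TAM chunk of the kind sketched earlier.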
Now do Exercise 2.
In the Action-Proposal space, if the task is to launch a program other than the ones we know about (Word, Draw), what do we do? Well, there might be several possible tactics for finding out (e.g. asking someone, ...), but the one we'll focus on is to try using analogy. So we'll propose an operator called Use-Analogy.
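A hedged sketch of such a proposal is given below; the ^knows-program test is an assumed piece of bookkeeping about which programs the agent has launch knowledge for, not necessarily how the real model represents it:

# Hedged sketch: in the Action-Proposal space, if the task is to launch a
# program we have no stored knowledge about, propose Use-Analogy.
sp {sketch*action-proposal*propose*use-analogy
   (state <s> ^name action-proposal
              ^superstate.task <t>)
   (<t> ^verb launch ^object <prog>)
  -(<s> ^knows-program <prog>)
-->
   (<s> ^operator <o> +)
   (<o> ^name use-analogy)}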
That operator is of course the one we had anticipated. So we know that it will impasse into an operator implementation space, where our six-step simple analogy method actually gets applied:
In the Use-Analogy space, we implement the six-step analogy method:
Continuing with the problem-space unpacking, there is the crucial issue of how we know "whether we can achieve the effect with X". We would not expect the model to start with a chunk that encodes this knowledge, so where does it come from?
Soar gives us a natural way to discover whether we can do something without having the meta-knowledge beforehand. We simply set up the task in a subgoal, and see what happens. We do this with an Imagine-Task operator, which impasses into an operator implementation space that is very much like the top Perform space, so we can see what action, if any, gets taken:
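As a rough sketch of how this might be set up (all names here, including ^candidate and the tactic of giving the imaginary substate the same ^name as the Perform space, are assumptions made for illustration, not the model's actual rules):

# Hedged sketch 1: propose imagining the task of launching a candidate
# known program X (held on an assumed ^candidate attribute).
sp {sketch*use-analogy*propose*imagine-task
   (state <s> ^name use-analogy
              ^candidate <prog>)
-->
   (<s> ^operator <o> +)
   (<o> ^name imagine-task ^program <prog>)}

# Hedged sketch 2: when Imagine-Task impasses, make its implementation
# substate look like the Perform space, with the imagined task on it,
# so that any existing TAM chunks can fire there.
sp {sketch*imagine-task*set-up-substate
   (state <s> ^impasse no-change
              ^attribute operator
              ^superstate.operator <o>)
   (<o> ^name imagine-task ^program <prog>)
-->
   (<s> ^name perform ^imaginary true ^task <t>)
   (<t> ^verb launch ^object <prog>)}

Whatever ^action then shows up in this imaginary space can be passed back as the result of the Imagine-Task operator; if nothing shows up, the model has discovered that it does not know how.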
When we put it all together, and take care of what happens if we don't know how to handle X, we get:
Now do Exercise 3.
We assume a user who knows how to launch Word, but not, for example, Cricket Graph.
Presumably in the past the user learned to launch Word by instruction or demonstration, and this knowledge resides in the Action-Proposal space.
So, again at some point in the past, the task of launching Word was run, and yielded a chunk that encodes a task-action mapping rule in Perform space:
Now, when that same task is done, the TAM chunk fires:
The diagram below simply summarises the way the analogy method works, as we have developed it through the problem-space unpacking.
From the top Perform space, because no action is proposed, the model impasses into the Action-Proposal space. There the Use-Analogy operator is selected (it goes into the operator slot), but to implement it the model impasses into an implementation space, the Use-Analogy space. Most of the work of the analogy method is carried out there, but at the point where the model has to ask itself whether it knows how to launch some other program, perhaps Word, it impasses into an Imaginary version of the Perform space.
In the Imaginary Perform space, chunk-1 fires, recalling the relevant action (double-click on Word). This result is passed up as the result of the imagine-task operator, creating chunk-2. The action is analogised to make it appropriate to the given task (say, double-click on CG), and is passed up as the result in the Action-Proposal space, forming chunk-3. From there, it is passed up into the top Perform space as the recommended action, and forms chunk-4, a new specialised TAM rule for launching CG.
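To make the end point concrete, chunk-4 is a TAM rule of the same shape as the earlier sketch, but specialised to Cricket Graph. Roughly, and again with the attribute names being illustrative assumptions rather than the model's actual representation:

# Hedged sketch of the kind of rule chunk-4 amounts to: a specialised
# task-action mapping for launching Cricket Graph in the Perform space.
# The real chunk is built by the architecture.
sp {sketch*chunk-4*perform*tam*launch-cg
   (state <s> ^name perform
              ^task <t>)
   (<t> ^verb launch ^object cricket-graph)
-->
   (<s> ^action <a>)
   (<a> ^verb double-click ^object cricket-graph-icon)}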
The diagram below shows what happens if the model is then given the task of launching yet another program, in this case Excel.
The processing follows the same general course as in the previous case, for CG. But at the point in the Use-Analogy space where the model has to ask itself whether it knows how to launch some other program, Word, this time chunk-2 fires to give the answer directly: "Yes, you double-click on Word". This time there is no need actually to imagine doing the task in an Imaginary Perform space.
In particular, a chunk is learned for the Imagine-Task operator for launching Word, so the model no longer has to set itself the task and do it in imagination.
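A hedged sketch of the kind of rule chunk-2 amounts to is shown below; the exact conditions of the real chunk depend on what was tested in the subgoal, and the ^imagined-action attribute is an assumption used only for illustration:

# Hedged sketch: chunk-2 applies the Imagine-Task operator for Word
# directly, returning the known action without any imaginary subgoal.
sp {sketch*chunk-2*use-analogy*imagine-launch-word
   (state <s> ^name use-analogy
              ^operator <o>)
   (<o> ^name imagine-task ^program word)
-->
   (<s> ^imagined-action <a>)
   (<a> ^verb double-click ^object word-icon)}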
Note that these points are dictated by the architecture. They follow from asking: How does Soar lend itself to doing the analogy task?
Further exercises: Exercise 4.
Answer to Exercise 4 (first part).
Howes, A. & Payne, S. J. (1990) Semantic analysis during exploratory learning. In J. C. Chew & J. Whiteside (Eds.) CHI'90 Conference Proceedings: Human Factors in Computing Systems, 399-405. ACM Press.
Howes, A. & Young, R. M. (1991) Predicting the learnability of task-action mappings. In S. P. Robertson, G. M. Olson & J. S. Olson (Eds.) Proceedings of CHI'91: Human Factors in Computing Systems, 113-118. ACM Press.
Howes, A. & Young, R. M. (1996) Learning consistent, interactive, and meaningful task-action mappings: A computational model. Cognitive Science, in press.
Lewis, C. H. (1988) Why and how to learn why: Analysis-based generalization of procedures. Cognitive Science, 12, 211-256.
Rieman, J., Lewis, C., Young, R. M. & Polson, P. G. (1994) "Why is a raven like a writing desk?": Lessons in interface consistency and analogical reasoning from two cognitive architectures. In Proceedings of CHI'94: Human Factors in Computing Systems, 438-444. ACM Press.