Soar: Frequently Asked Questions List


Frank E. Ritter: ritter@ist.psu.edu
Marios Avaramides: marios@ist.psu.edu
Alexander B. Wood: awood@ist.psu.edu
Last updated June 2001

Table of Contents

Section 0: Introduction

Section 1: General Questions

(G0) Where can I get hold of the Soar FAQ?
(G1) What is Soar?
(G2) Where can I get more information about Soar?
(G3) What does Soar stand for?
(G4) What do I need to be able to run Soar?
(G5) Where can I get hold of Soar?
(G6) Who uses Soar for what?
(G7) How can I learn Soar?
(G8) Is Soar the right tool for me?
(G9) How can I make my life easier when programming in Soar?
(G10) Is there any support documentation available for Soar?
(G11) How can I find out what bugs are outstanding in Soar?
(G12) Other links and Links2Go Key Resource award in Soar
(G13) How do I write fast code?
(G14) How does Soar currently stand as a Psychology theory?

Section 2: Technological Issues

(T1) What is search control?

(T2) What is data chunking?

(T3) What is the generation problem?

(T4) What do all these abbreviations and acronyms stand for?

(T5) What is this NNPSCM thing anyway?

Section 3: Programming Questions

(P1) Are there any guidelines on how to name productions?

(P2) Why did I learn a chunk here?

(P3) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

(P4) What is a justification?

(P5) How does Soar decide which conditions appear in a chunk?

(P6) Why does my chunk appear to have the wrong conditions?

(P7) What is all this support stuff about? (Or why do values keep vanishing?)

(P8) When should I use o-support, and when i-support?

(P9) Why does the value go away when the subgoal terminates?

(P10) What's the general method for debugging a Soar program?

(P11) How can I find out which productions are responsible for a value?

(P12) What's an attribute impasse, and how can I detect them?

(P13) Are there any templates available for building Soar code?

(P14) How can I find all the productions which test X?

(P15) Why doesn't my binary parallel preference work?

(P16) How can I do multi-agent communication in Soar 7?

(P17) How can I find out about a programming problem not addressed here?


Section 0: Introduction


This is the introduction to a list of frequently asked questions (FAQ) about Soar with answers.

The FAQ is posted as a guide for finding out more about Soar. It is intended for use by all levels of people interested in Soar, from novices through to experts. With this in mind, the questions are essentially divided into three parts: the first part deals with general details about Soar; the second part examines technological issues in Soar; the third part looks at some issues related to programming using Soar. Questions in the first section have their numbers prefixed by the letter G (for General); those in the second section are prefixed by the letter T (for Technological); and those in the third section are prefixed by the letter P (for Programming).

It also serves as a repository of the canonical "best" answers to these questions. If, however, you know of a better answer or can suggest improvements, please feel free to make suggestions.

This FAQ is updated and posted on a variable schedule. Full instructions for getting the current version of the FAQ are given in question G0.

In order to make it easier to spot what has changed since last time around, new and significantly changed items have been tagged with the "new" icon.

Suggestions for new questions, answers, re-phrasing, deletions etc., are all welcomed. Please include the word "FAQ" in the subject of your e-mail correspondence. Please use the mailing lists noted below for general questions, but if they fail or you do not know which one to use, contact one of us.

This FAQ is not just our work, but includes numerous answers from members of the Soar community, past and present. The initial versions were supported by the DERA and the ESRC Centre for Research in Development, Instruction and Training. Gordon Baxter put the first version together. Special thanks are due to John Laird and the Soar Group at the University of Michigan for helping to generate the list of questions, and particularly to Clare Bates Congdon, Peter Hastings, Randy Jones, Doug Pearson (who also provided a number of answers), and Kurt Steinkraus. The views expressed here are those of the authors and should not necessarily be attributed to the Ministry of Defence or the Pennsylvania State University.

Frank E. Ritter (ritter@ist.psu.edu)

Marios Avaramides (marios@ist.psu.edu)

Alexander B. Wood (awood@ist.psu.edu)

Gordon D. Baxter (gbaxter@psych.york.ac.uk)

 

Back to Table of Contents


Section 1: General Questions


(G0) Where can I get hold of the Soar FAQ?

The latest version of the list of Frequently Asked Questions (FAQ) for the Soar cognitive architecture is posted approximately every three to six months to the soar-group mailing list, and to the following newsgroups:
comp.ai

sci.cognitive

sci.psychology.theory

If you are reading a plain text version of this FAQ, there is also an html version available at the following URL:
    http://ritter.ist.psu.edu/soar-faq/
which you can access using any Web browser.

There are ongoing plans for mirroring this FAQ on the Soar home pages at ISI.

(If you find that material here is out of date or does not include your favorite paper or author, please let us know. The work and range of material generated by the Soar group is quite broad and has been going on for over a decade now.)

Back to Table of Contents


(G1) What is Soar?

Soar means different things to different people. Soar is used by AI researchers to construct integrated intelligent agents and by cognitive scientists for cognitive modeling. It can basically be considered in three different ways:
  1. A theory of cognition. As such it provides the principles behind the implemented Soar system.
  2. A set of principles and constraints on (cognitive) processing. Thus, it provides a (cognitive) architectural framework, within which you can construct cognitive models. In this view it can be considered as an integrated architecture for knowledge-based problem solving, learning and interacting with external environments.
  3. An AI programming language.
Soar incorporates
problem spaces as a single framework for all tasks and subtasks to be solved

production rules as the single representation of permanent knowledge

objects with attributes and values as the single representation of temporary knowledge

automatic subgoaling as the single mechanism for generating goals

and chunking as the single learning mechanism.
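
As a minimal illustration of these representations, a "hello world" production in Soar 8 syntax looks like the following; it matches any state (temporary knowledge) and uses right-hand-side functions to print a message and halt:

        # Conditions appear before the arrow, actions after it.
        sp {hello-world
           (state <s> ^type state)
           -->
           (write |Hello World|)
           (halt)}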

Back to Table of Contents

(G2) Where can I get more information about Soar?

Books

For an introduction to the idea of Soar as a Unified Theory of Cognition read:
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
To find out some of the sorts of things that people have modelled using Soar look at:
Rosenbloom, P.S., Laird, J.E. & Newell, A. (1993) The Soar Papers: Readings on Integrated Intelligence. Cambridge, MA: MIT Press.

Journal Articles and Book Chapters

Recent and forthcoming publications related to Soar include:
Huffman, S., & Laird, J.E. (1995) Flexibly instructable agents. Journal of Artificial Intelligence Research

Laird, J.E., & Rosenbloom, P.S. (1996) The evolution of the Soar cognitive architecture. In T. Mitchell (ed.) Mind Matters.

Lehman, J.F., Laird, J.E., & Rosenbloom, P.S. (1996) A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg & D. Scarborough (eds.) Invitation to Cognitive Science, Volume 4.

Tambe, M., Johnson, W.L., Jones, R.M., Koss, F., Laird, J.E., Rosenbloom, P.S., & Schwamb, K. (1995) Intelligent agents for interactive simulation environments. AI Magazine 16(1).

Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.

Lewis, R.L. (2001, in preparation) Cognitive theory, Soar. In International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Pergamon (Elsevier Science).

Web Sites

There are a number of Web sites available that provide information about Soar at varying levels:

The Information Sciences Institute at the University of Southern California maintains a collection of Soar-related web pages including the Soar home page, the Soar group at ISI, and the Soar archive, which contains a publication bibliography, document abstracts, official software releases, software manuals, support tools, and information about members of the Soar community.

The Artificial Intelligence lab at the University of Michigan has a collection of Web pages about Cognitive Architectures per se. This includes a section on Soar; there is also a Web page available for the Soar group at UMich.

Carnegie Mellon University - where Soar was originally developed - has its own Soar projects page.

The U. of Hertfordshire website includes Soar resources on the Web and elsewhere, a few of Richard Young's papers, and an annotated bibliography of Soar journal articles and book chapters (but not conference papers) related to psychology that is intended to be complete.

ExpLore Reasoning Systems has a summary with some new links at http://www.ers.com/Products/Soar/soar.html.

There is also a site at the University of Nottingham that includes mirrors of several of the US sites as well as some things developed at Nottingham, including the Psychological Soar Tutorial. There is a nascent site at The Pennsylvania State University, which will appear at Frank Ritter's homepage, http://acs.ist.psu.edu/acs-lab

Mailing Lists

There are a number of mailing lists that exist within the Soar community as forums for discussion, and places to raise queries. The main ones are:
soar-requests@umich.edu - If you do not know where to ask, use this one

soar-help@umich.edu - Where to get help with Soar problems

soar-bugs@umich.edu - Where to send your bug reports

soar-group@umich.edu - General Soar discussions take place here

soar-nl@umich.edu - Natural language discussion

soar-doc@umich.edu - Send requests for documentation here

soar-tsi@umich.edu - Discussions about the Tcl/Tk Soar Interface

To subscribe to or unsubscribe from the soar-group mailing list, send an e-mail to soar-requests@umich.edu asking for your name to be added to or removed from the list.

There used to be (1988 to 2000) a European mailing list. Due to the low volume of traffic sent only to the eu-soar mailing list, and speedier transatlantic connections now supporting email etc., the eu-soar list merged with the Soar-group list in June 2000.

Newsgroups

At present there is no Soar newsgroup. There has occasionally been talk about starting one, but the mailing lists tend to serve for most purposes. Matters relating to Soar occasionally appear on the comp.ai newsgroup.

Soar Workshops

There have been two workshop series, one based in the USA and one based in Europe (which led to a series of international workshops and conferences on cognitive modeling, starting with the first in Berlin, and most recently at George Mason). Listed below are a few of the previous North American Workshops:

-Soar Workshop 21 (2001): http://ai.eecs.umich.edu/soar/workshop21/talks/
-Soar Workshop 20 (2000): http://www.isi.edu/soar/soar-workshop/proceedings.html
-Soar Workshop 19 (1999): http://ai.eecs.umich.edu/soar/workshop19/talks/
-Soar Workshop 17 (1997): http://www.cis.ohio-state.edu/%7Erick/Soar17/schedule.html

Soar Training

There have been Soar tutorials at several conferences, and tutorials have also been held as additional training for academia, industry, and government. The U. of Michigan group has probably offered them the most.

A one-day psychology-oriented Soar tutorial has often been offered before EuroSoar workshops, and often at AISB conferences. It was also offered at the Cognitive Science Conference in 1999. Email Frank Ritter or Richard Young for details.

Back to Table of Contents


(G3) What does Soar stand for?

Historically, Soar stood for State, Operator And Result, because all problem solving in Soar is regarded as a search through a problem space in which you apply an operator to a state to get a result. Over time, the community stopped regarding Soar as an acronym, which is why it is no longer written in upper case.

Back to Table of Contents


(G4) What do I need to be able to run Soar?

There are a number of versions of Soar available for different combinations of machine and operating system.
 

Soar Version 8.3 Release - Soar 8.3 adds several new features to the architecture and resolves a number of bug reports. A change to the default calculation of O-Support may require changes to existing Soar programs. These are described in the Soar 8.3 Release Notes. Soar-8.3 still includes a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar7-compatibility mode. Available for Unix, Mac, and Windows.

Soar Version 8.2 - Soar 8 introduces architectural changes which require changes to existing Soar programs. These are described in the Soar 8.2 Release Notes. Soar-8.2 does include a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar7-compatibility mode.
Previous Versions:
Unix - Soar 7.0.4. This requires on the order of 10 MB of disk space (including source code) for most of what you will need, although the file that you retrieve via ftp is much smaller, since it is archived and compressed. The Unix version of Soar 7 is compiled with Tcl/Tk, which allows users to easily add commands and displays to their environment. The Soar Development Environment (SDE), which is a set of extensions to GNU Emacs, offers a programming support environment for earlier versions of Soar, and can still be used, albeit in a more limited way, for more recent versions.

Mac - MacSoartk 7.0.4 + CUSP - this version of MacSoar comes with Tk included, along with a number of extensions added by Ken Hughes to improve the usability of Soar on the Mac. You will require around 10 MB of disk space to run this version, and will need version 7 of Mac OS (version 7.5 is recommended). Some versions of Soar can also be run under MacUnix.

PC - There is a version of Soar that runs under Windows 95 and Windows NT. It is a port of Soar to WIN32, and includes Tcl 7.6/Tk 4.2. It is available from the University of Michigan, as a zipped binary file. You should read the accompanying notes (available at the same URL) carefully, since there may be a problem with the Win32 binary release of Tcl 7.6/Tk 4.2.

In addition there is an older, unsupported PC version called WinSoar, based on Soar version 6.1.0, which includes a simple editing and debugging environment and runs under Microsoft Windows 3.x. It is also known as IntelSoar.

Several people have also successfully managed to install the Unix version of Soar on a PC running under the Linux operating system, although some problems have been reported under versions of Linux that have appeared since December 1996.

Version 7.1 of Soar is currently being revised to utilise the latest release of Tcl/Tk (version 8.0) prior to its official release. The new release of Soar will include the Tcl/Tk Soar Interface (TSI). Currently, Soar 7.1 uses Tcl 7.6 and Tk 4.2, and not Tcl 8.0.

If you decide to get hold of one of these versions of Soar, please send an e-mail to soar-requests@umich.edu informing them which version you have retrieved. This will allow your name to be added to the appropriate mailing list so that you can be kept informed of future developments. Soar 7 and 8 have been ported to Macs and PCs.

Back to Table of Contents


(G5) Where can I get hold of Soar?

The simplest way is to click on the version you want on the UMICH Soar archive software Web page. This will initiate the transfer of the selected file to your machine.

The preliminary Soar 7.2 releases for Mac and Windows can be found at http://ai.eecs.umich.edu/soar2/software.html. These releases include binaries so you don't have to rebuild anything, and are accompanied by the basic README files for installing and running Soar.

There is now also a European mirror site at the University of Nottingham of the CMU Soar software archive and the ISI Soar papers archive.

NEW: KB Agent is a commercially available version of Soar for developing intelligent agents in enterprise environments. It is based on the public version, but the code has been optimized, updated, and reorganized for linking to other programs in Windows 95/NT. ERS has a 30-day Trial Edition of KB Agent available for download over the Web at http://www.ers.com/Products/KB_Agent/kb_agent.html

Ralph Morelli, in 1996, created a prototype of a WWW client/server model where Soar (6.2.4) is the server and a Java applet is the client. This allows one to "talk to Soar" via Netscape or some other Web browser. The fledgling demo for a Java-aware browser is at http://troy.trincoll.edu/~soar/soarclient/, and shows that Soar can be delivered via the web.

A second Soar server at http://troy.trincoll.edu/~soar/proofer/ does logic proofs using a natural deduction technique. It's not a complete proof system (there are some proofs that it can't find), but PROOFER shows that something more useful than echo can be done with this model!

There is a lisp based version of Soar 6 that Prof. Jans Aasman [J.Aasman@research.kpn.com] built a while back. You should contact him for details.

There is a partially completed (as of 13 Apr 00) Java based version by Sherief (shario@usa.net). Its source code and more details are available at http://www.geocities.com/sharios/soar.htm .

If you need to compile Soar with gcc, you can get gcc from http://gcc.gnu.org

Back to Table of Contents


(G6) Who uses Soar for what?

Soar is used by AI researchers to construct integrated intelligent agents, and by cognitive scientists for cognitive modelling.

The Soar research community is distributed across a number of sites throughout the world. A brief overview of each of these is given below, in no particular order.

Carnegie Mellon University

Development of models for quantitatively predicting human performance, including GOMS. More complex, forward-looking and sophisticated models are built using Soar. For more information contact Bonnie John (Bonnie.John@cs.cmu.edu).

Information Sciences Institute, University of Southern California

Soar projects cover five main areas of research: development of automated agents for simulated environments (in collaboration with UMich); learning (including explanation-based learning); planning; implementation technology (e.g., production system match algorithms); and virtual environments for training. For more information contact Randall Hill (hill@isi.edu).

University of Michigan

The Soar work at UMich has a number of basic research thrusts:

Learning from external environments including learning to recover from incorrect domain knowledge, learning from experience in continuous domains, compilation of hierarchical execution knowledge, and constructive induction to improve problem solving and planning;

Cognitive modelling of psychological data involving learning to perform multiple tasks and learning to recall declarative knowledge;

Complex, knowledge-rich, real time control of autonomous agents within the context of tactical air engagements (the TacAir-Soar project);

Basic architectural research in support of the above research topics.

The application of Soar to computer games.

Perhaps the largest success for Soar has been flying simulated aircraft in a hostile environment. Jones et al. (Jones, Laird, Nielsen, Coulter, Kenny, & Koss, 1999) report how Soar flew all of the US aircraft in the 48 hour STOW'97 exercise. The simulated pilots talked with each other and ground control, and carried out 95% of the missions successfully.

For more information contact John Laird (laird@eecs.umich.edu) or check out their web site noted above.

The U. of Michigan/Psychology

NL-Soar work at UofM (formerly Ohio State) focuses on modeling real-time human sentence processing, research on word sense disambiguation, and experimental tests of NL-Soar as a psycholinguistic theory. Other cognitive work in Soar includes modelling learning and performance in human-device interaction.

For more information contact Rick Lewis.

The Pennsylvania State University

The Soar work at The Pennsylvania State University involves using Soar models as a way to test theories of learning, and improving human-computer interaction. Other projects include the development of the Psychological Soar Tutorial and the Soar FAQ! For more information contact Frank Ritter (ritter@ist.psu.edu).

University of Hertfordshire

The Soar work at U. of Hertfordshire includes modelling aspects of human-computer interaction, particularly the use of eye movements during the exploratory search of menus.

For more information contact Richard Young (R.M.Young@herts.ac.uk).

ExpLore Reasoning Systems, Inc.

As well as its academic usage, Soar is also being used by ExpLore Reasoning Systems, Inc. in Virginia in the USA. A commercial version of Soar, called KB Agent, has been developed as a tool for modelling and implementing business expertise.

University of Portsmouth

The Intelligent Agent Group at the University of Portsmouth is currently involved in a range of Soar-related activities, particularly:

Soar agents for intelligent control in synthetic environments

Teamwork/C2 structures within groups of Soar agents

Off-line knowledge extraction from legacy production sets

Soar development tools

Back to Table of Contents


(G7) How can I learn Soar?

Probably the best way to learn Soar is to actually visit a site where people are actively using Soar, and stay for as long as you can manage (months rather than days). In order to help people, however, there are two tutorials available for Soar.

The more general of these tutorials, known as the Soar 8 tutorial, was developed for anyone interested in learning Soar, and is based on Soar 8.

The other tutorial was developed mainly with psychologists in mind. The latest version is based on Soar 7. The Web version of this tutorial was developed by Frank Ritter, Richard Young, and Gary Jones.

There is no textbook, as such, on how to program using Soar, although John Rieman has written a set of notes entitled An Introduction to Soar Programming (gzipped PostScript format). Even though the notes are based on version 6 of Soar (NNPSCM), they provide a useful bare-bones introduction to Soar as a programming language.

From version 7 onwards, Soar is closely linked to Tcl/Tk. If you wish to get hold of a Tcl tutorial computer-aided instruction package, you could start by looking at Clif Flynt's home page or a page of links to resources on the web. There is a set of notes on experiences with using Tcl/Tk to construct external environments, written by Richard Young. These may be useful to anyone who is heading down this line, since they highlight some of the good and bad points about Tcl/Tk.

Back to Table of Contents


(G8) Is Soar the right tool for me?

For cognitive modelling: Soar's strengths are in modelling deliberate cognitive human behavior, at time scales greater than 50 ms. Example tasks that have been explored include human computer interaction tasks, typing, arithmetic, video game playing, natural language understanding, concept acquisition, learning by instruction, and verbal reasoning. Soar has also been used for modelling learning in many of these tasks; however, learning adds significant complexity to the structuring of the task and is not for the casual user. Although many of these tasks involve interaction with external environments and the Soar community is experimenting with models of interaction, Soar does not yet have a standard model for low-level perception or motor control.

For building AI systems: Soar's strengths are in integrating knowledge, planning, reaction, search and learning within a very efficient architecture. Example tasks include production scheduling, diagnosis, natural language understanding, learning by instruction, robotic control, and flying simulated aircraft.

If all you want to do is create a small production rule system, then Soar may not be right for you. Also, if you only have limited time that you can spend developing the system, then Soar is probably not the best choice. It currently appears to require a lot of effort to learn Soar, and more practice before you become proficient than is needed if you use other, simpler, systems.

There are, however, a number of basic capabilities that Soar provides as standard. If you need to use these, then Soar may not only be just what you want, but may also be the only system available:

learning and its integration with problem solving

interruptibility as a core aspect of behaviour

large production rule systems

parallel reasoning

a knowledge description and design approach based on problem spaces

Back to Table of Contents

(G9) How can I make my life easier when programming in Soar?

There are a number of ways to make your life easier when programming in Soar. Some simple high level considerations are:
Re-use existing code

Cut and paste productions and code

Work mainly on the top level problem space, using incremental problem space expansion

Use the integrated Emacs environment, SDE, or one of the visual editors

Turn chunking (the learning mechanism) off

Use Tcl/Tk to write simulations for the model to talk to

Use a programming tool, such as ViSoar or the Tcl/Tk Soar Interface (TSI), which is part of Soar 7 and 8
 
 

There is now an extension to the Soar FAQ, which is currently called The Soar less than Frequently Asked Questions List (look for latest version). The intention is that this will eventually give rise to a casebook of examples of useful utilities programmed in Soar, as well as providing hints, tips and tricks to get around some of the less common problems of using Soar.

Back to Table of Contents


(G10) Is there any support documentation available for Soar?

There are now two reference manuals available for Soar 7 in printed form:
  1. The Soar 7 User Manual.
  2. The Soar Advanced Applications Manual.
Although these manuals are not quite finished, they still provide lots of useful information about programming in Soar, and, in particular, version 7. There are a number of caveats which should be borne in mind when using these manuals.

The Soar 6 User Manual is still available for browsing on the Web.

Back to Table of Contents


(G11) How can I find out what bugs are outstanding in Soar?

You can find out the information about the current status of bugs by sending an e-mail to soar-bugs@umich.edu. The subject line should be set to one of the following, depending on which information you require:
Subject Line          Action
------------          ------
Subject: help         Returns this message
Subject: bug list     Returns the current bug list
Subject: bug ref n    Returns explicit information about
                      Soar bug number n, where n is an
                      integer
Back to Table of Contents

(G12) Other links and Links2Go Key Resource award in Soar

NEW: This page has won an award, but, more importantly, there is another list of Soar resources assembled by Links2Go.
 
  Congratulations! Your page:
  http://www.ccc.nottingham.ac.uk/pub/soar/nottingham/soar-faq.html
  has been selected to receive a Links2Go Key Resource award in the
  Soar topic!

  The Links2Go Key Resource award is both exclusive and objective. Fewer
  than one page in one thousand will ever be selected for
  inclusion. Further, unlike most awards that rely on the subjective
  opinion of "experts," many of whom have only looked at tens or
  hundreds of thousands of pages in bestowing their awards, the Links2Go
  Key Resource award is completely objective and is based on an analysis
  of millions of web pages. During the course of our analysis, we
  identify which links are most representative of each of the thousands
  of topics in Links2Go, based on how actual page authors, like
  yourself, index and organize links on their pages. In fact, the Key
  Resource award is so exclusive, even we don't qualify for it (yet ;)!

  Please visit: http://www.links2go.com/award/Soar.
Back to Table of Contents

(G13) How do I write fast code?

The interface may be causing a slowdown. A semi-working beta version of socketIO can be found at http://ai.eecs.umich.edu/soar/socketio/

Back to Table of Contents


(G14) How does Soar currently stand as a psychology theory?

Sadly, there is no cut-and-dried answer to this. Answering it fully will require you to figure out what you expect from a psychology theory and then evaluate Soar on those criteria. If you expect a theory to predict that humans are intelligent, and that they have been and can be shown to learn in several domains, it is nearly the only game in town. If you require limited short-term memory directly in the architecture, that's not in Soar yet (try ACT-R).

That said, there are numerous resources for finding out more. The first port of call should be Newell's 1990 book, Unified Theories of Cognition. This makes the most coherent case for Soar, although it is slowly becoming out of date with respect to the implementation. This may satisfy you. There are also two big books, The Soar Papers, that provide numerous examples of Soar's use. The examples tend to be biased towards AI, but there are numerous psychology applications in them.

If you go to the ISI paper archive (or the Nottingham mirror), or to the CHI and Cognitive Science conference proceedings, you will find more up-to-date papers showing what the community is currently working on. You may also find the pointers in the FAQ and lower down on individual web sites to be quite useful in seeing the current state of play in the area you are interested in.

Richard Young has prepared an annotated bibliography of Soar journal articles and book chapters (but not
conference papers) related to psychology that is intended to be complete.

Which is the best cognitive model written in Soar is less clear. Candidates include Soar models of teamwork (Tambe, 1997), procedural learning (Nerb, Ritter, & Krems, 1999), natural language understanding (Lewis, 1996), categorization (Miller & Laird, 1996), and using computer interfaces (Howes & Young, 1997).

There is a book from the National Research Council called "Modeling Human and Organizational Behavior:
Applications to Military Simulations" that provides a summary of Soar.

Todd Johnson proposed a list of fundamental cognitive capacities in 1995 that we have started to organize papers around. Each paper (or system) has only been cited once, and the list is far from complete, but the framework is now in place for expanding it. If you have suggestions, please do forward them for inclusion.
 

Declarative memory
     Pelton, G. A., and Lehman, J. F., "Everyday Believability," Technical Report CMU-CS-95-133, School of
     Computer Science, Carnegie Mellon University, 1995.
Episodic memory
Learning to recall
Learning to recognize
Learning by analogy
Instrumental Conditioning
Classical Conditioning
Causal Reasoning
Causal induction
Abduction
     Nerb, J., Krems, J., & Ritter, F. E. (1993). Rule learning and the power law: A computational model and
     empirical results. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder,
     Colorado. pp. 765-770. Hillsdale, New Jersey: LEA.

External Interaction
     Pelton, G. A. and Lehman, J. F., The Breakdown of Operators when Interacting with the External World,
     Technical Report CMU-CS-94-121, School of Computer Science, Carnegie Mellon University, 1994.

     Nelson, G., Lehman, J. F., John, B., Integrating Cognitive Capabilities in a Real-Time Task, in Proceedings
     of the Sixteenth Annual Conference of the Cognitive Science Society, 1994.

     Bass, E. J., Baxter, G. D., & Ritter, F. E. (1995). Using cognitive models to control simulations of complex
     systems: A generic approach. AISB Quarterly, 93, 18-25.

     Baxter, G. D., & Ritter, F. E. (1997). Model-computer interaction: Implementing the action-perception loop
     for cognitive models. In D. Harris (Ed.), The 1st International Conference on Engineering Psychology and
     Cognitive Ergonomics, 2 (pp. 215-223). October 1996, Stratford-upon-Avon: Ashgate.

Ritter, F. E., Baxter, G. D., Jones, G., & Young, R. M. (in press). Supporting cognitive models as users. ACM Transactions on Computer-Human Interaction.

Natural language
     Lehman, J. F., Van Dyke, J., and Rubinoff, R., Natural Language Processing for IFORS: Comprehension
     and Generation in the Air Combat Domain, in Proceedings of the Fifth Conference on Computer Generated
     Forces and Behavioral Representation, 1995.


STM limitations
Classification
Categorization
Problem solving
     Lehman, J. Fain, Toward the Essential Nature of Statistical Knowledge in Sense Resolution, Proceedings of
     the Twelfth National Conference on Artificial Intelligence, 1994.
     Ritter, F. E., & Baxter, G. D. (1996). Able, III: Learning in a more visibly principled way. In U. Schmid, J.
     Krems, & F. Wysotzki (Eds.), Proceedings of the First European Workshop on Cognitive Modeling, (pp.
     22-30). Berlin: Forschungsberichte des Fachbereichs Informatik, Technische Universitaet Berlin.
Recovery from incorrect knowledge
Situated-action
     Ritter, F. E., & Bibby, P. A. (1997). Modelling learning as it happens in a diagrammatic reasoning task (Tech.
     Report No. 45). ESRC CREDIT, Dept. of Psychology, U. of Nottingham.
Reactive behavior

Nielsen, T. E., & Kirsner, K. (1994). A challenge for Soar: Modeling proactive expertise in a complex dynamic environment. In Singapore International Conference on Intelligent Systems (SPICIS-94). B79-B84.


Interruptibility
     Nelson, G., Lehman, J. F., John, B. E., Experiences in Interruptible Language Processing, Proceedings of the
     1994 AAAI Spring Symposium on Active NLP, 1994.
Interleaved actions
Parallel reasoning
Managing WM (it keeps growing and growing and growing...)
Imagining
Self explanation
Limited lookahead learning
Reinforcement learning
Delayed feedback learning

Nerb, Krems, and Ritter (1993; 1999) was later revised, and showed some good matches to the shape of variance in the power law taken from 14 subjects and to transfer between abduction problems. The first paper was in the Cognitive Science Society proceedings, the second in the Kognitionswissenschaft [German Cognitive Science] journal. Krems and Nerb (1992) is a monograph of Nerb's thesis, on which the model is based.

Peck and John (1992), later reported in Ritter and Larkin (1994), is Browser-Soar, a model of browsing. It is fit to 10 episodes of verbal protocol taken from 1 subject. The fit is sometimes quite good and allowed a measure of Soar's cycle time to be computed against subjects. It also suggested fairly strongly (because the model was matched to verbal and non-verbal actions) that verbal protocols appear about 1 second after their corresponding working memory elements appear.

Nelson, G., Lehman, J. F., & John, B. (1994) proposed a model that integrated multiple forms of knowledge to start to match some protocols taken from the NASA space shuttle test director. There is no detailed match to data.

Aasman, J., & Michon, J. A. (1992) present a model of driving.  While the book chapter does not, I believe, match data tightly, the later Aasman book (1995) does so very well.  The book is not widely read however.

John, B. E., Vera, A. H., & Newell, A. (1992; 1994) present a model matched to 10 seconds of a subject learning how to play Mario Brothers. This was initially available as a CHI conference paper.

Chong, R. S., & Laird, J. E. (1997) present a model that learns how to perform a dual task fairly well.  I don't think it's matched to data very tightly, but it shows a very plausible mechanism.  This was a preliminary version of Chong's thesis.

Johnson et al. (1991) present a model of blood typing. The comparison is, I believe, done loosely to verbal protocols. This was a very hard task for subjects to do, and the point was to show that a model could do it, and that it was not just intuition that allowed users to perform this task.

There have been a couple of papers on integrating knowledge (i.e., models) in Soar. Lehman, J. F., Lewis, R. L., & Newell, A. (1991) and Lewis, R. L., Newell, A., & Polk, T. A. (1989) both present models that integrate submodels. I don't believe that either has been compared with data, but they show how different behaviours can be integrated and note some of the issues that will arise.

Lewis et al. (1990) address some of the questions discussed here about the state of Soar, but from a 1990 perspective.

Several models in Soar have been created that model the power law. These include Sched-Soar (Nerb et al., 1999), physics principle application (Ritter, Jones, & Baxter, 1998), Seibel-Soar and R1-Soar (Newell, 1990). These models, although they use different mechanisms, explain the power law as arising out of hierarchical learning (i.e., learning parts of the environment or internal goal structure): low-level actions, which are very common and thus useful, are learned first, and with further practice larger patterns are learned, but these occur less often. The Soar models also predict that some of the noise in behaviour on individual trials reflects different, measurable, and predicted amounts of transfer between problems.

References

Nerb, J., Krems, J., & Ritter, F. E. (1993). Rule learning and the power law: A computational model and empirical results. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder, Colorado. pp. 765-770. Hillsdale, New Jersey: LEA.
This was revised and extended and published as:


Nerb, J., Ritter, F. E., & Krems, J. (1999). Knowledge level learning and the power law: A Soar model of skill acquisition in scheduling. Kognitionswissenschaft [Journal of the German Cognitive Science Society], special issue on cognitive modelling and cognitive architectures, D. Wallach & H. A. Simon (Eds.), 20-29.

Using a process model of skill acquisition allowed us to examine the microstructure of subjects' performance of a scheduling task. The model, implemented in the Soar architecture, fits many qualitative (e.g., learning rate) and quantitative (e.g., solution time) effects found in previously collected data. The model's predictions were tested with data from a new study where the identical task was given to the model and to 14 subjects. Again a general fit of the model was found, with the restrictions that the task is easier for the model than for subjects and its performance improves more quickly. The episodic memory chunks it learns while scheduling tasks show how acquisition of general rules can be performed without resort to explicit declarative rule generation. The model also provides an explanation of the noise typically found when fitting a set of data to a power law -- it is the result of chunking over actual knowledge rather than "average" knowledge. Only when the data are averaged (over subjects here) does the smooth power law appear.

Ritter, F. E., & Larkin, J. H. (1994). Using process models to summarize sequences of human actions. Human-Computer Interaction, 9(3), 345-383.

Nelson, G., Lehman, J. F., John, B., Integrating Cognitive Capabilities in a Real-Time Task, in Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 1994.

Aasman, J., & Michon, J. A. (1992). Multitasking in driving. In J. A. Michon & A. Akyürek (Eds.), Soar: A cognitive architecture in perspective. Dordrecht, The Netherlands: Kluwer.

Aasman, J. (1995). Modelling driver behaviour in Soar. Leidschendam, The Netherlands: KPN Research.

John, B. E., Vera, A. H., & Newell, A. (1994). Towards real-time GOMS: A model of expert behavior in a highly interactive task. Behavior and Information Technology, 13, 255-267.

John, B. E., & Vera, A. H. (1992). A GOMS analysis of a graphic, interactive task. In CHI'92 Proceedings of the Conference on Human Factors and Computing Systems (SIGCHI).   251-258. New York, NY: ACM Press.

Chong, R. S., & Laird, J. E. (1997). Identifying dual-task executive process knowledge using EPIC-Soar. In Proceedings of the 19th Annual  Conference of the Cognitive Science Society.  107-112. Mahwah, NJ: Lawrence Erlbaum.

Johnson, K. A., Johnson, T. R., Smith, J. W. J., DeJongh, M., Fischer, O., Amra, N. K., & Bayazitoglu, A. (1991). RedSoar: A system for red blood cell antibody identification. In Fifteenth Annual Symposium on Computer Applications in Medical Care.   664-668. Washington: McGraw Hill.

Krems, J., & Nerb, J. (1992). Kompetenzerwerb beim Lösen von Planungsproblemen: experimentelle Befunde und ein SOAR-Modell (Skill acquisition in solving scheduling problems: Experimental results and a Soar model) No. FORWISS-Report FR-1992-001). FORWISS, Muenchen.

Peck, V. A., & John, B. E. (1992). Browser-Soar: A computational model of a highly interactive task. In Proceedings of the CHI '92 Conference on Human Factors in Computing Systems.  165-172. New York, NY: ACM.

Lehman, J. F., Lewis, R. L., & Newell, A. (1991). Integrating knowledge sources in language comprehension. In Thirteenth Annual Conference of the Cognitive Science Society.   461-466.

Lewis, R. L., Newell, A., & Polk, T. A. (1989). Toward a Soar theory of taking instructions for immediate reasoning tasks. In Annual Conference of the Cognitive Science Society.  514-521. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Lewis, R. L., Huffman, S. B., John, B. E., Laird, J. E., Lehman, J. F., Newell, A., Rosenbloom, P. S., Simon, T., & Tessler, S. G. (1990). Soar as a Unified Theory of Cognition: Spring 1990. In Twelfth Annual Conference of the Cognitive Science Society.   1035-1042. Cambridge, MA:

Tambe, M. (1997). Towards flexible teamwork. Journal of Artificial Intelligence Research, 7, 83-124.

Back to Table of Contents


Section 2: Technological Issues


(T1) What is search control?

Search control is knowledge that guides the search by comparing and selecting among proposed alternatives. In Soar, search control is encoded in production rules that create preferences for operators.
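
For example, a search-control production for a hypothetical water-jug style task (the operator and attribute names here are illustrative, not taken from any particular demo) might create a binary "better" preference between two proposed operators:

        # Illustrative search-control rule: when both a fill and an empty
        # operator have been proposed, prefer the fill operator.
        sp {water-jug*compare*prefer-fill-over-empty
           (state <s> ^name water-jug
                      ^operator <o1> +
                      ^operator <o2> +)
           (<o1> ^name fill)
           (<o2> ^name empty)
           -->
           (<s> ^operator <o1> > <o2>)}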

Back to Table of Contents


(T2) What is data chunking?

Data chunking is the creation of chunks that allow for either the recognition or the retrieval of data that is currently in working memory. Chunking is usually thought of as a method for compiling knowledge or for speed-up learning, not for moving data from working memory into long-term memory. Data chunking is a technique in which chunking does create such recognition or retrieval productions, and thus allows Soar to perform knowledge-level learning.

Simplistically, then, this is the creation of chunks that can be represented in the form a => b, i.e., when a appears on the state, the data for b does too.
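
For instance, a learned retrieval chunk of this form might look something like the following (a made-up example, not taken from any actual model):

        # Hypothetical data chunk: when the cue (a) appears on the state,
        # the associated datum (b) is retrieved into working memory.
        sp {chunk*recall*capital-of-france
           (state <s> ^cue capital-of-france)
           -->
           (<s> ^retrieved paris)}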

Back to Table of Contents


(T3) What is the generation problem?

Whenever you subgoal to create a data chunk, you have to generate everything that seems important, and then use search control to make sure that the chunks you build are correct.

Back to Table of Contents


(T4) What do all these abbreviations and acronyms stand for?

 
CSP - Constraint Satisfaction Problem

EBG - Explanation-Based Generalisation

EBL - Explanation-Based Learning

GOMS - Goals, Operators, Methods, and Selection rules

HISoar - Highly Interactive Soar

ILP - Inductive Logic Programming

NNPSCM - New New Problem Space Computational Model

NTD - NASA Test Director

PEACTIDM - Perceive, Encode, Attend, Comprehend, Task, Intend, Decode, Move

SCA - Symbolic Concept Acquisition

Back to Table of Contents

(T5) What is this NNPSCM thing anyway?

Really, this is a number of questions rolled into one:
  1. What is the PSCM?
  2. What is the NNPSCM?
  3. What are the differences between the two?

What is the PSCM?

The Problem Space Computational Model (PSCM) is an idea that revolves around the commitment in Soar to using problem spaces as the model for all symbolic goal-oriented computation. The PSCM is based on the primitive acts that are performed using problem spaces to achieve a goal. These primitive acts are based on the fundamental object types within Soar i.e., goals, problem spaces, states and operators. The functions that they perform are shown below:
Goals
    1. Propose a goal.
    2. Compare goals.
    3. Select a goal.
    4. Refine the information available about the current goal.
    5. Terminate a goal.
Problem Spaces
    1. Propose a problem space for the goal.
    2. Compare problem spaces for the goal.
    3. Select a problem space for the goal.
    4. Refine the information available about the current problem space.
States
    1. Propose an initial state.
    2. Compare possible initial states.
    3. Select an initial state.
    4. Refine the information available about the current state.
Operators
    1. Propose an operator.
    2. Compare possible operators.
    3. Select an operator.
    4. Refine the information available about the current operator.
    5. Apply the selected operator to the current state.
More details about exactly what these functions do can be found in the current User Manual.

What is the NNPSCM?

The New New Problem Space Computational Model (NNPSCM) addresses some of the issues that made the implementation of the PSCM run relatively slowly. It reformulates some of the issues within the PSCM without actually removing them, and hence changes the way in which models are implemented, but we are not aware of a model that has been fundamentally influenced by this change. Starting with version 7.0.0 of Soar, all implementation is performed using the NNPSCM; in later releases of version 6.2, you can choose which version you require (NNPSCM or non-NNPSCM) when you build the executable image for Soar. The easiest way to illustrate the NNPSCM is to look at the differences between it and the PSCM.

What are the differences between the two?

The NNPSCM and the PSCM can be compared and contrasted in the following ways:
  1. The nature of problem space functions for NNPSCM and PSCM remain essentially the same as those described in Newell, A., Yost, G.R., Laird, J.E., Rosenbloom, P.S., & Altmann, E. (1991). Formulating the problem space computational model. In R.F. Rashid (Ed.), Carnegie-Mellon Computer Science: A 25-Year Commemorative (255-293). Reading, MA: ACM-Press (Addison-Wesley).
  2. The goal state from the PSCM now simply becomes just another state, rather than being treated as a separate, special state.
  3. The need to select between problem spaces in the NNPSCM does not require any decision making process. The problem space is simply formulated as an attribute of the state.
  4. Models implemented using NNPSCM are generally faster than their PSCM equivalents, because fewer decision cycles are required (there is no need to decide between problem spaces).
  5. Using NNPSCM is presumed to allow better inter-domain and inter-problem-space transfer of learning to take place.
  6. The use of the NNPSCM should help in the resolution and understanding of the issues involved in external interaction.
The differences may become more evident if we look at code examples (written using Soar version 6.2.5) for the farmer, wolf, goat and cabbage problem that comes as a demo program in the Soar distribution.

PSCM code

       (sp farmer*propose*operator*move-with
           (goal <g> ^problem-space <p>
                     ^state <s>)
           (<p> ^name farmer)
           (<s> ^holds <h1> <h2>)
           (<h1> ^farmer <f> ^at <i>)
           (<h2> ^<< wolf goat cabbage >> <value>
                 ^at <i>)
           (<i> ^opposite-of <j>)
           -->
           (<g> ^operator <o>)
           (<o> ^name move-with
                ^object <value>
                ^from <i>
                ^to <j>))
NNPSCM code
        (sp farmer*propose*move-with
          (state <s> ^problem-space <p>)        ; goal <g> has disappeared
          (<p> ^name farmer)
          (<s> ^holds <h1> <h2>)
          (<h1> ^farmer <f> ^at <i>)
          (<h2> ^<< wolf goat cabbage >> <value>
                ^at <i>)
          (<i> ^opposite-of <j>)
          -->
          (<s> ^operator <o>)
          (<o> ^name move-with
               ^object <value>
               ^from <i>
               ^to <j>))
On the face of it, there do not appear to be many differences, but when you look at the output trace, the consistency of the operator use and the improvement in speed become more apparent:

PSCM Trace

         0: ==>G: G1
         1:    P: P1 (farmer)
         2:    S: S1
         3:    ==>G: G3 (operator tie)
         4:       P: P2 (selection)
         5:       S: S2
         6:       O: O8 (evaluate-object O1 (move-alone))
         7:       ==>G: G4 (operator no-change)
         8:          P: P1 (farmer)
         9:          S: S3
        10:          O: C2 (move-alone)
NNPSCM Trace
         0:==>S: S1
         1:   ==>S: S2 (operator tie)
         2:      O: O8 (evaluate-object O1 (move-alone))
         3:      ==>S: S3 (operator no-change)
         4:         O: C2 (move-alone)
Back to Table of Contents

Section 3: Programming Questions


(P1) Are there any guidelines on how to name productions?

Productions will load as long as their names are taken from the set of legal characters - essentially alphanumerics and "-" and "*". Names consisting only of numerics are not allowed.

Soar programmers tend to adopt a convention whereby the name of a production describes what the rule does, and where it should apply. Typically, the conventions suggest that names have the following general form:

    problem-space-name*state-name*operator-name*action
How you choose your naming convention is probably less important than the fact that you do use one.
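
For instance, a proposal rule for a hypothetical blocks-world task (the attribute names here are illustrative only) might be named as follows:

        # Hypothetical example of the problem-space*...*action convention
        sp {blocks-world*propose*operator*move-block
           (state <s> ^name blocks-world
                      ^clear <block>)
           -->
           (<s> ^operator <o> +)
           (<o> ^name move-block
                ^moving-block <block>)}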

Note that, to name working memory elements, Soar uses a single alphabetic character followed by a number, such as p3. If you name a production this way it will not be printable. (It is also poor style.)

Back to Table of Contents


(P2) Why did I learn a chunk there?

Soar generally learns a chunk when a result is created in a superstate by a production which tests part of the current subgoal.

E.g. the production:

        sp {create*chunk1
           (state <s> ^superstate <ss>)
           -->
           (<ss> ^score 10)}

creates a preference for the attribute "score" with value 10.

That mechanism seems simple enough. Why then do you sometimes get chunks when you do not expect them? This is usually due to shared structure between a subgoal and a superstate. If there is an object in the subgoal which also appears in a superstate, then creating a preference for an attribute on that object will lead to a chunk. These preferences in a superstate are often called "results", although you are free to generate any preference from any subgoal, so that term can be misleading.

For example, suppose that working memory currently looks like:

     (S1 ^type T1 ^name top ^superstate nil)
     (T1 ^name boolean)
     (S2 ^type T1 ^name subgoal ^superstate S1)

so S2 is the subgoal and S1 is the superstate for S2. T1 is being shared. In this case, the production:
     sp {create*chunk2
        (state <s> ^type <t> ^name subgoal)
        -->
        (<t> ^size 2)}
will create a chunk, even though it does not directly test the superstate. This is because T1 is shared between the subgoal and the superstate, so adding ^size to T1 adds a preference to the superstate.

What to do?

Often the solution is to create a copy of the object in the subgoal. So, instead of (S2 ^type T1) create (S2 ^type T2) and (T2 ^name boolean).

For example,

     sp {copy*superstate*type
        (state <s> ^superstate <ss>)
        (<ss> ^type <type>)
        (<type> ^name <name>)
        -->
        (<s> ^type <new-type>)
        (<new-type> ^name <name>)}
This will copy the ^type attribute to the subgoal and create a new identifier (T2) for it. Now you can freely modify T2 in the subgoal, without affecting the superstate.

Back to Table of Contents


(P3) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

There are a number of situations where you can add something to the superstate and not get a chunk:

(1) Learning off

If you run Soar and type "learn -off", all learning is disabled. No chunks will be created when preferences are created in a superstate. Instead, Soar only creates a "justification". (See below for an explanation of these.)

You can type "learn" to see if learning is on or off, and "learn -on" will make sure it is turned on. Without this, you cannot learn anything.

(2) Chunk-free-problem-spaces

You can declare certain problem spaces as chunk-free and no chunks will be created in those spaces. The way to do this is changing right now (in Soar 7) because we no longer have explicit problem spaces in Soar. If you want to turn off chunking like this, check the manual.

(3) Quiescence t

If your production tests ^quiescence t it will not lead to a chunk. For example,

        sp {create*chunk1
           (state <s> ^name subgoal ^superstate <ss>)
           -->
           (<ss> ^score 10)}
will create a chunk, whilst
        sp {create*no-chunk
           (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
           -->
           (<ss> ^score 10)}
will not create a chunk (you just get a justification for ^score 10). You can read about the reasons for this in the Soar manual. You also do not get a chunk if a production in the backtrace for the chunk tested ^quiescence t:

For example,

             sp {create*score
                (state <s> ^name subgoal ^quiescence t)
                -->
                (<s> ^score 10)}

             sp {now-expect-chunk*but-dont-get-one
                (state <s> ^name subgoal ^score 10 ^superstate <ss>)
                -->
                (<ss> ^score 10)}
The test for ^quiescence t is included in the conditions for why this chunk was created--so you get a justification, not a chunk.

A point to note is that having tested ^quiescence t, the effect of not learning chunks is applied recursively. If you had a production:

        sp {create*second-chunk
           (state <s> ^name medium-goal ^superstate <ss>)
           (<s> ^score 10)
           -->
           (<ss> ^score-recorded yes)}
and you have a goal stack:
                (S1 ^name top ^superstate nil)
                (S2 ^name medium-goal ^superstate S1)
                (S3 ^name subgoal ^superstate S2)
then if create*chunk1 leads to ^score 10 being added to S2 and then create*second-chunk fires and adds ^score-recorded yes to S1, you will get two chunks (one for each result). However, if you use create*no-chunk instead, to add ^score 10 to S2, then create*second-chunk will also not generate a chunk, even though it does not test ^quiescence t itself. That is because ^score 10 is a result created from testing quiescence.

Back to Table of Contents


(P4) What is a justification?

Any time a subgoal creates a preference in a superstate, a justification is always created, and a chunk will also be generated unless you have turned off learning in some manner (see above). If learning has been disabled, then you only get a justification. A justification is effectively an instantiated chunk, but without any chunk being created.

For example, let us say:

        sp {create*chunk1
           (state <s> ^name subgoal ^superstate <ss>)
           (<ss> ^name top)
           -->
           (<ss> ^score 10)}

leads to the chunk:

        sp {chunk-1
           :chunk
           (state <s1> ^name top)
           -->
           (<s1> ^score 10)}
If working memory was:
        (S1 ^name top ^superstate nil ^score 10)
        (S2 ^name subgoal ^superstate S1)
then if you typed "preferences S1 score 1" you would see:
        Preferences for S1 ^score:
 
        acceptables:
          10 +
            From chunk-1
(The value is being supported by chunk-1, an instantiated production just like any other production in the system).

Now, if we changed the production to:

        sp {create*no-chunk
           (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
           (<ss> ^name top)
           -->
           (<ss> ^score 10)}
we do not get a chunk anymore; instead we get justification-1. If you were to print justification-1 you would see:
        sp {justification-1
          :justification ;not reloadable
          (S1 ^name top)
          -->
          (S1 ^score 10)}
This has the same form as chunk-1, except it is just an instantiation. It only exists to support the values in state S1. When S1 goes away (i.e., in this case when you do init-soar), this justification will go away too. It is like a temporary chunk instantiation. Why have justifications? Well, if you now typed "preferences S1 score 1" you would see:
        Preferences for S1 ^score:
 
        acceptables:
          10 +
            From justification-1
Justification-1 is providing the support for the value 10. If the subgoal, S2, goes away, this justification is the only reason Soar retains the value 10 for this slot. If later the ^name attribute of S1 changes to, say, "driving", this justification will no longer match (because it requires ^name top), and the justification and the value will both retract.

Back to Table of Contents


(P5) How does Soar decide which conditions appear in a chunk?

Soar works out which conditions to put in a chunk by finding all the productions which led to the final result being created. It sees which of those productions tested parts of the superstate and collects all those conditions together.

For example:

        (S1 ^name top ^size large
            ^color blue ^superstate nil)        ;# The superstate
 
        --------------------------------------  ;# Conceptual boundary
 
        (S2 ^superstate S1)                     ;# Newly created subgoal.
 
        sp {production0
           (state <s> ^superstate nil)
           -->
           (<s> ^size large ^color blue ^name top)}
If we have:
        sp {production1
           (state <s> ^superstate <ss>)
           (<ss> ^size large)
           -->
           (<s> ^there-is-a-big-thing yes)}
and
        sp {production2
           (state <s> ^superstate <ss>)
           (<ss> ^color blue)
           -->
           (<s> ^there-is-a-blue-thing yes)}
and
        sp {production3
           (state <s> ^superstate <ss>)
           (<ss> ^name top)
           -->
           (<s> ^the-superstate-has-name top)}
and
        sp {production1*chunk
           (state <s> ^there-is-a-big-thing yes
                      ^there-is-a-blue-thing yes
                      ^superstate <ss>)
           -->
           (<ss> ^there-is-a-big-blue-thing yes)}
and working memory contains (S1 ^size large ^color blue), this will lead to the chunk:
        sp {chunk-1
           :chunk
           (state <s1> ^size large ^color blue)
           -->
           (<s1> ^there-is-a-big-blue-thing yes)}
The size condition is included because production1 tested the size in the superstate, created the ^there-is-a-big-thing attribute, and this led to production1*chunk firing. Similarly for the color condition, which was also tested in the superstate (by production2) and led, via ^there-is-a-blue-thing yes, to the result ^there-is-a-big-blue-thing yes. The important point is that ^name is not included: even though it was tested by production3, the attribute production3 created was not tested by production1*chunk, and therefore the result did not depend on the name of the superstate.
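For contrast, consider a hypothetical variant (illustrative only) in which production1*chunk also tests the ^the-superstate-has-name attribute created by production3:

        sp {production1*chunk*variant
           (state <s> ^there-is-a-big-thing yes
                      ^there-is-a-blue-thing yes
                      ^the-superstate-has-name top
                      ^superstate <ss>)
           -->
           (<ss> ^there-is-a-big-blue-thing yes)}
In that case production3's test of the superstate's ^name would be part of the backtrace for the result, so the chunk would also include a ^name top condition, i.e. its left-hand side would be roughly (state <s1> ^size large ^color blue ^name top).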

Back to Table of Contents


(P6) Why does my chunk appear to have the wrong conditions?

See above for a general description of how chunk conditions are computed. If you have just written a program and the chunks are not coming out correctly, then try using the "explain" tool.

So, using the example of how conditions in chunks are created (shown above), if the chunk is:

        sp {chunk-1
           :chunk
           (state <s1> ^size large ^color blue)
           -->
           (<s1> ^there-is-a-big-blue-thing yes)}
and you type "explain chunk-1" you will get something like:
        sp {chunk-1
           :chunk
           (state <s1> ^color blue ^size large)
        -->
           (<s1> ^there-is-a-big-blue-thing yes +)}
 
          1 :  (state <s1> ^color blue)         Ground : (S1 ^color blue)
          2 :  (<s1> ^size large)               Ground : (S1 ^size large)
This shows a list of conditions for the chunk and which "ground" (i.e. superstate working memory element) they tested.

If you want further information about a particular condition you can then type "explain chunk-1 2" (where 2 is the condition number -- in this case (<s1> ^size large)) to get:

Explanation of why condition  (S1 ^size large) was included in
chunk-1
 
Production production1 matched
    (S1 ^size large) which caused
production production1*chunk to match
    (S2 ^there-is-a-big-thing yes) which caused
A result to be generated.
This shows that ^size large was tested in the superstate by production1, which then created (S2 ^there-is-a-big-thing yes). This in turn caused the production production1*chunk to fire and create a result (in this case ^there-is-a-big-blue-thing yes) in the superstate, which leads to the chunk.

This tool should help you spot which production caused the unexpected addition of a condition, or why a particular condition did not show up in the chunk.

Back to Table of Contents


(P7) What is all this support stuff about? (Or why do values keep vanishing?)

There are two forms of "support" for preferences: o-support and i-support. O-support stands for "operator support" and means the preference persists the way an ordinary assignment does in a conventional programming language: if you create an o-supported preference ^color red, the color will stay red until you explicitly change it.

How do you get an o-supported preference? The exact conditions for this may change, but the general rule of thumb is this:

"You get o-support if your production tests the ^operator slot or creates structure on an operator"

(Specifically, under Soar.7.0.0.beta this is o-support-mode 2 -- the mode some people recommend, since they find it much easier to understand than o-support-mode 0, which is the current default.)

E.g.

        sp {o-support
           (state <s> ^operator <o>)
           (<o> ^name set-color)
           -->
           (<s> ^color red)}
the ^color red value gets o-support.

I-support, which stands for "instantiation support", means the preference exists only as long as the production instantiation which created it still matches. You get i-supported preferences when you do not test an operator:

E.g.

        sp {i-support
           (state <s> ^object <obj1> { <> <obj1> <obj2> })
           (<obj1> ^next-to <obj2>)
           -->
           (<s> ^near <obj2>)}
In this case
        (<s> ^near <obj2>)
will get i-support. If obj1 ever ceases to be next to obj2, then this production will retract and the preference for
        (<s> ^near <obj2>)
will also retract. Usually this means the value for ^near disappears from working memory.

To change an o-supported value, you must explicitly remove the old value, e.g.,

        ^color red -   (the minus means reject the value red)
        ^color blue +  (the plus means the value blue is acceptable)
while changing an i-supported value requires only that the production which created it retracts. I-support makes for less work and can be useful in certain cases, but it can also make debugging your program much harder, so it is recommended that you keep its use, as an optimization, to a minimum. By default, state elaboration automatically gets i-support, whereas applying an operator, or creating or modifying state structure through an operator, leads to o-support.
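For example, a hypothetical operator application that switches an o-supported color from red to blue might look like this (the operator name change-color is made up, and it is assumed to be proposed and selected elsewhere):

        # Rejects the old value and asserts the new one; both preferences
        # get o-support because the production tests the ^operator slot.
        sp {apply*change-color
           (state <s> ^operator <o> ^color red)
           (<o> ^name change-color)
           -->
           (<s> ^color red -)
           (<s> ^color blue +)}
Because the preferences are o-supported, ^color blue persists even after the operator and this production's instantiation have gone away.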

(One point worth noting is that operator proposals are always i-supported, but once the operator has been chosen, it does not retract even after the operator proposal goes away, because it is given a special support, c-support for "context object support").

To tell if a preference has o-support or i-support, check the preferences on the attribute.

E.g. "pref s1 color" will give:

        Preferences for S1 ^color:
 
        acceptables:
          red + [O]
while "pref s1 near" gives:
        Preferences for S1 ^near:
 
        acceptables:
          O2 +
The presence or absence of the [O] shows whether the value has o-support or not.

Back to Table of Contents


(P8) When should I use o-support, and when i-support?

Under normal usage, you probably will not have to explicitly choose between the two. By default, you will get o-support if you apply, modify or create an operator as part of some state structure; you will get i-support if all you do is elaborate the state in some way.

It therefore follows that by default, you should generally use operators to create and modify state structures whenever possible. This leads to persistent o-supported structures and makes the behaviour of your system much clearer.

I-support can be convenient occasionally, but should be limited to inferences that are always true. For example, if I know (O1 ^next-to O2), where O1 and O2 are objects then it is reasonable to have an i-supported production which infers that (O1 ^near O2). This is convenient because there might be a number of different cases for when an object is near another object. If you use i-supported productions for this, then whenever the reason (e.g. ^next-to O2) is removed, the inference (^near O2) will automatically retract.
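A minimal sketch of that inference might look like the following (the production name is made up, and the { <> <obj1> <obj2> } test just ensures the two objects are distinct):

        # State elaboration: no operator test, so the result is i-supported.
        sp {elaborate*object*near
           (state <s> ^object <obj1> { <> <obj1> <obj2> })
           (<obj1> ^next-to <obj2>)
           -->
           (<obj1> ^near <obj2>)}
When (<obj1> ^next-to <obj2>) is removed, the instantiation retracts and (<obj1> ^near <obj2>) disappears with it.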

Never mix the two for a single attribute. For example, do not have one production which creates ^size large using i-support and another that tests an operator and creates ^size medium using o-support. That is a recipe for disaster.
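To make the warning concrete, here is the kind of pairing to avoid (all names are illustrative):

        # i-supported: ^size large retracts as soon as ^volume big goes away.
        sp {elaborate*size*large
           (state <s> ^volume big)
           -->
           (<s> ^size large)}
 
        # o-supported: ^size medium persists until explicitly rejected.
        sp {apply*measure
           (state <s> ^operator <o>)
           (<o> ^name measure)
           -->
           (<s> ^size medium)}
The two values compete for the same ^size slot with different persistence rules, which gives exactly the kind of confusing retraction behaviour described above and can also produce an attribute impasse (see P12).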

Back to Table of Contents


(P9) Why does the value go away when the subgoal terminates?

A common problem is creating a result in a superstate (e.g. ^size large) and then when the subgoal terminates, the value retracts. Why does this happen? The reason is that once the subgoal has terminated the preference in the superstate is supported by the chunk or justification that was formed when the preference was first created.

This chunk/justification may have different support than the support the preference had from the subgoal. It is quite common for an operator in a subgoal to create a result in the superstate which only has i-support (even though it is created by an operator). This is because the conditions in the chunk/justification do not include a test for the super-operator. Therefore the chunk has i-support and may retract, taking the result with it.
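One common remedy (a sketch only; the goal, operator and attribute names are invented) is to make the rule that creates the result also test the operator in the superstate, so that the chunk or justification built from it contains an operator test and therefore gets o-support:

        # The justification/chunk will test <ss>'s ^operator slot,
        # so the (<ss> ^size large) result keeps o-support.
        sp {subgoal*apply*record-size
           (state <s> ^name subgoal ^superstate <ss>)
           (<ss> ^operator <super-op>)
           (<super-op> ^name record-size)
           -->
           (<ss> ^size large)}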

NOTE: Even if you are not learning chunks, you are still learning justifications (see above) and the support, for working memory elements created as results from a subgoal, depends on the conditions in those justifications.

Back to Table of Contents


(P10) What's the general method for debugging a Soar program?

The main tools that you need to use are the commands below. You apply them where the behaviour is odd, and use them to understand what is going on. Personally, I (FER) prefer the Tcl/Tk interface for debugging, because many of these commands become mouse clicks on displays.
print
        Prints out a value in working memory.

print -stack    (pgs in earlier versions of Soar)
        Prints out the current goal stack, e.g.:

            : ==>S: S1
            :    ==>S: S2 (state no-change)

matches    (ms in earlier versions of Soar)
        Shows you which productions will fire on the next elaboration
        cycle.

matches <prod_name>
        Shows you which conditions in a given production matched and
        which did not.  This is very important for finding out why a
        production did or did not fire.  E.g.

            soar> matches production1*chunk
            >>>>
            (state <s> ^there-is-a-blue-thing yes)
                 (<s> ^there-is-a-big-thing yes)
                 (<s> ^superstate <ss>)
            0 complete matches.

        This shows that the first condition did not match.

preferences <id> <attribute> 1
        (The 1 is used to request a bit more detail.)  This command
        shows the preferences for a given attribute and which
        productions created the preferences.  A very common use is

            pref <s> operator 1

        which shows the preferences for the operator slot in the
        current state.  (You can always use <s> to refer to the
        current state.)
Back to Table of Contents

(P11) How can I find out which productions are responsible for a value?

Use
        preferences <id> <attribute> 1
Or
        preferences <id> <attribute> -names
(They both mean the same thing.)

This shows the values for the attribute and which production created them. For example, given working memory:
        (S1 ^type T1)
        (T1 ^name boolean)
typing "pref T1 name 1" will show the name of the production which created the ^name attribute within object T1.
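Assuming the ^name attribute had been created by a production called elaborate*type*name (a made-up name), the output would look something like:

        Preferences for T1 ^name:
 
        acceptables:
          boolean +
            From elaborate*type*name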

Back to Table of Contents


(P12) What's an attribute impasse, and how can I detect them?

Attribute impasses occur if you have two values proposed for the same slot.

E.g.

        ^name doug +
        ^name pearson +
leads to an attribute impasse for the ^name attribute.

It is a bit like an operator tie impasse (where two or more operators are proposed for the operator slot). The effect of an attribute impasse is that the attribute and all its values are removed from working memory (which is probably what makes your program stop working) and an "impasse" structure is created.

It is usually a good idea to include the production:

        sp {debug*monitor*attribute-impasses*stop-pause
           (impasse <i> ^object <o> ^attribute <a>
                        ^impasse <type>)
           -->
           (write (crlf) |Break for impasse | <type> (crlf))
           (tcl |preferences | <o> | | <a> | 1 |)
           (interrupt)}
in your code for debugging. If an attribute impasse occurs, this production will detect it, report that an impasse has occurred, run the preferences command on the slot (to show you which values were competing and which productions created preferences for those values), and interrupt Soar's execution so that you can debug the problem.

Very, very occasionally you may want to allow attribute impasses to occur within your system and not consider them an error, but that is not a common choice. Most Soar systems never have attribute impasses (while almost all have impasses for the context slots, like operator ties and state no-changes).

Back to Table of Contents


(P13) Are there any templates available for building Soar code?

The Soar Development Environment (SDE), a set of modules for Emacs, provides some powerful template tools which can save you a lot of typing. You specify the template you want (or use one of the standard ones), and a few keystrokes will then create a lot of code for you.

Back to Table of Contents


(P14) How can I find all the productions which test X?

Use the "pf" command (which stands for production-find). You give "pf" a pattern. Right now, the pattern has to be surrounded by a lot of brackets, but that should be fixed early on in Soar7's life.

Anyway, as of Soar.7.0.0.beta an example is:

        pf {(<s> ^operator *)}
which will list all the productions that test an operator.

Or,

        pf {(<s> ^operator.name set-value)}
which will list all the productions that test the operator named set-value.

You can also search for values on the right hand sides of productions (using the -rhs option) and in various subsets of rules (e.g. chunks or no chunks).
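For example, a right-hand-side search might look something like the following (a sketch only: the ^status complete pattern is made up, and the exact option syntax may vary between releases, so check the pf help if it does not work as shown):

        pf -rhs {(<x> ^status complete)}

This would list the productions that create ^status complete on their right-hand sides.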

Back to Table of Contents


(P15) Why doesn't my binary parallel preference work?

Using parallel preferences can be tricky, because the separating commas are currently crucial for the parser. In the example below, there is a missing comma after the preferences for "road-quality".
  sp {elaborate*operator*make-set*dyn-features
       (state <s> ^operator <o>)
       (<o> ^name make-set)
       -->
       (<o> ^dyn-features distance + &, gear + &, road-quality + &
            sign + &, other-road + &, other-sidewalk + &,
            other-sign + &)}
This production parses "road-quality & sign" as a valid binary preference, although this is not what was intended. Soar will not currently warn about the duplicate + preferences; you just have to be careful.
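For comparison, here is the production with the missing comma restored after road-quality + & (the variable names are illustrative):

  sp {elaborate*operator*make-set*dyn-features
       (state <s> ^operator <o>)
       (<o> ^name make-set)
       -->
       (<o> ^dyn-features distance + &, gear + &, road-quality + &,
            sign + &, other-road + &, other-sidewalk + &,
            other-sign + &)}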

Back to Table of Contents


(P16) How can I do multi-agent communication in Soar 7?

Reading and writing text from a file can be used for communication. However, using this mechanism for inter-agent communication would be pretty slow and you would have to be careful to use semaphores to avoid deadlocks. With Soar 7, there is a relatively natural progression from Tcl to C in the development of inter-agent communication.
  1. Write inter-agent communication in Tcl. This is possible with a new RHS function (called "tcl") that can execute a Tcl script. The script can do something as simple as send a message to a simple simulator (which can also be written in Tcl). The simulator can then send the message to the desired recipient(s). You could also do things such as add-wme on the RHS, but this makes it harder to see what is going on and is more error prone. (A sketch of this step is given after this list.)
  2. Move the simulator into C code. To speed up the simulated world in which the agents interact, recode the simulator in C. Affecting the simulator can be accomplished by adding a few new Tcl commands. The agents would be largely unchanged and the system would simply run faster.
  3. Move communication to C. This is done by writing Soar I/O functions as documented in section 6.2 of the Soar Users Manual. This is the fastest method.
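As a minimal sketch of step 1 (everything here is illustrative: the send-message Tcl procedure, and the send operator with its ^to and ^content attributes, are assumed to be defined elsewhere), an operator application rule might call the tcl RHS function like this:

        # Builds and runs the Tcl command "send-message <agent> <msg>".
        sp {apply*send*call-tcl
           (state <s> ^operator <o>)
           (<o> ^name send ^to <agent> ^content <msg>)
           -->
           (tcl |send-message | <agent> | | <msg>)}

The Tcl procedure can then pass the message to the simulator, which delivers it to the recipient agent.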
Back to Table of Contents

(P17) How can I find out about a programming problem not addressed here?

There are several places to turn to, listed here in the order in which you should consider them.
  1. The Soar less frequently asked questions list (lfaq) includes additional and more transitory bugs.
  2. The manuals may provide general help, and you should consult them if possible before trying the mailing lists.
  3. You can consult the list of outstanding bugs.
  4. You can consult the mailing lists noted above.
Back to Table of Contents

End of Soar FAQ