Soar: Frequently Asked Questions List

Last updated 06-January-97
Gordon Baxter: gdb@psychology.nottingham.ac.uk


Table of Contents


Section 0: Introduction


This is the introduction to a list of frequently asked questions (FAQ) about Soar with answers.

The FAQ is posted as a guide for finding out more about Soar. It is intended for use by all levels of people interested in Soar, from novices through to experts. With this in mind, the questions are essentially divided into three parts: the first part deals with general details about Soar; the second part examines technological issues in Soar; the third part looks at some issues related to programming using Soar. Questions in the first section have their numbers prefixed by the letter G (for General); those in the second section are prefixed by the letter T (for Technological); and those in the third section are prefixed by the letter P (for Programming). (In section 3 the examples are written according to Soar7 syntax.)

It also serves as a repository of the canonical "best" answers to these questions. If, however, you know of a better answer or can suggest improvements, please feel free to make suggestions.

This FAQ is updated and posted roughly every three months. Full instructions for getting the current version of the FAQ are given in question G0.

In order to make it easier to spot what has changed since last time around, new and significantly changed items have been tagged with the "new" icon.

Suggestions for new questions, answers, re-phrasing, deletions etc., are all welcomed. Please include the word "FAQ" in the subject of any e-mail correspondence you may send. Please use the mailing lists noted below for general questions, but if they fail or you do not know which one to use, contact one of us.

This FAQ is not just our work, but includes numerous answers from the Soar community. The initial versions were supported by the DERA and the ESRC Centre for Research in Development, Instruction and Training. The views expressed here are those of the authors and should not necessarily be attributed to the Ministry of Defence.

Gordon D. Baxter (gdb@psychology.nottingham.ac.uk) or (lpzgdb@unix.ccc.nottingham.ac.uk)

Frank E. Ritter (Frank.Ritter@nottingham.ac.uk)

Back to Table of Contents


Section 1: General Questions


(G0) Where can I get hold of the Soar FAQ?

The latest version of the list of Frequently Asked Questions (FAQ) for the Soar cognitive architecture is posted approximately every three months to the soar-group and EU-SOAR mailing lists, and to the following newsgroups:

If you are reading a plain text version of this FAQ, there is also an html version available, via the following URL:


    http://www.psychology.nottingham.ac.uk/users/ritter/soar-faq.html

which you can access using any Web browser.

There are ongoing plans for mirroring this FAQ on the Soar home pages at ISI.

Back to Table of Contents


(G1) What is Soar?

Soar means different things to different people, but it can basically be considered in three different ways:
  1. A theory of cognition. As such it provides the principles behind the implemented Soar system.
  2. A set of principles and constraints on (cognitive) processing. Thus, it provides a (cognitive) architectural framework, within which you can construct cognitive models. In this view it can be considered as an integrated architecture for knowledge-based problem solving, learning and interacting with external environments.
  3. An AI programming language.

Soar incorporates

Back to Table of Contents


(G2) Where can I get more information about Soar?

Books

For an introduction to the idea of Soar as a Unified Theory of Cognition read:

To find out some of the sorts of things that people have modelled using Soar look at:

Journal Articles and Book Chapters

Recent and forthcoming publications related to Soar include:

Web Sites

There are a number of Web sites available that provide information about Soar at varying levels:

The Information Sciences Institute at the University of Southern California maintain a collection of Soar-related web pages including the Soar home page, the Soar group at ISI, and the Soar archive which contains a publication bibliography, document abstracts, official software releases, software manuals, support tools, and information about members of the Soar community.

The Artificial Intelligence lab at the University of Michigan has a collection of Web pages about Cognitive Architectures per se. This includes a section on Soar; there is also a Web page available for the Soar group at UMich.

Carnegie Mellon University - where Soar was originally developed - has its own Soar projects page.

There is also a site at the Ohio State University's School of Medical Informatics.

Mailing Lists

There are a number of mailing lists that exist within the Soar community as forums for discussion, and places to raise queries. The main ones are:

If you send e-mail to the soar-group mailing list it is automatically sent to the EU-SOAR mailing list.

To subscribe to the soar-group mailing list, you should send an e-mail to soar-requests@cs.cmu.edu asking for your name to be added to the list. If you decide that you wish to unsubscribe from soar-group, you should send an e-mail to soar-requests@cs.cmu.edu asking for your name to be removed from the mailing list.

To subscribe to the EU-SOAR mailing list you need to send an e-mail to LISTSERV@HEARN.BITNET (or LISTSERV@NIC.SURFNET.NL) with the text "SUBSCRIBE EU-SOAR" as the body of the message. To unsubscribe from the list, you should send an e-mail to LISTSERV@HEARN.BITNET (or LISTSERV@NIC.SURFNET.NL) with the text "SIGNOFF EU-SOAR" as the body of the message.
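
As an illustration (the message layout here is only a sketch; what matters is the single command in the body), a subscription request to the EU-SOAR list would look something like:


	To: LISTSERV@HEARN.BITNET
	Subject: (can usually be left blank)

	SUBSCRIBE EU-SOAR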

Newsgroups

At present there is no Soar newsgroup. There has occasionally been talk about starting one, but the mailing lists tend to serve for most purposes. Matters relating to Soar occasionally appear on the comp.ai newsgroup.

Soar Workshops

There are two workshops, one based in the USA and one based in Europe.

The dates and venue for the next North American Workshop (Soar Workshop 17) have yet to be finalised.

NEW The details of the next EuroSoar Workshop have also yet to be completely finalised. It is likely to be held in conjunction with the Second European Workshop on Cognitive Modelling, in 1998.

Soar Training

Typically a one-day, psychology-oriented Soar tutorial is offered before EuroSoar workshops, and often at AISB conferences.

There are plans to hold a one week intensive summer school for learning Soar later this year. This course will take place in Ann Arbor at the University of Michigan.

Back to Table of Contents


(G3) What does Soar stand for?

Historically Soar stood for State, Operator And Result because all problem solving in Soar is regarded as a search through a problem space in which you apply an operator to a state to get a result. Over time, the community stopped regarding Soar as an acronym, which is why it is no longer written in upper case.

Back to Table of Contents


(G4) What do I need to be able to run Soar?

There are a number of versions of Soar available for different combinations of machine and operating system. The latest releases only are listed below:

Unix - Soar 7.0.4. This requires of the order of 10Mb of disk space (including source code) for most of what you will need, although the file that you retrieve via ftp is much smaller, since it is archived and compressed. The Unix version of Soar 7 is compiled with Tcl/Tk, which allows users to easily add commands and displays to their environment. The Soar Development Environment (SDE), which is a set of extensions to GNU Emacs, offers a programming support environment for earlier versions of Soar, and can still be used albeit in a more limited way for more recent versions.

Mac - MacSoartk 7.0.4 + CUSP - the latest MacSoar comes with Tk included, along with a number of extensions added by Ken Hughes to improve the usability of Soar on the Mac. You will require around 10 Mb of disk space to run this version, and will need version 7 of Mac OS (version 7.5 is recommended). Some versions of Soar can also be run under MacUnix.

PC - WinSoar - Soar version 6.1.0, which includes a simple editing and debugging environment, runs under Microsoft Windows 3.x. It is also known as IntelSoar. In addition, there are now a number of people who have successfully managed to install the Unix version of Soar under the Linux operating system.

If you decide to get hold of one of these versions of Soar, please send an e-mail to soar-requests@cs.cmu.edu informing them which version you have retrieved. This will allow your name to be added to the appropriate mailing list so that you can be kept informed of future developments.

Back to Table of Contents


(G5) Where can I get hold of Soar?

The simplest way is to click on the version you want on the ISI Soar archive software Web page. This will initiate the transfer of the selected file to your machine.

A slightly more up to date server is kept at CMU at the Soar software archive.

You can also use ftp to retrieve the file. You need to log into cs.cmu.edu using the name "anonymous" and your full e-mail address as your password. Then change directory to /afs/cs/project/soar/public and from there choose the appropriate directory, and then the file that you wish to transfer. Do not forget to make use of the README files that exist in the directories: these will help to explain something about the contents of that particular directory.
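
For those who have not used ftp before, a typical session looks something like the following sketch (the exact prompts vary between ftp clients, and the file name at the end is only a placeholder - use ls and the README files to find the file you actually want):


	% ftp cs.cmu.edu
	Name: anonymous
	Password: your-email@your.site
	ftp> cd /afs/cs/project/soar/public
	ftp> ls
	ftp> get README
	ftp> binary
	ftp> get the-file-you-want.tar.Z
	ftp> quit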

There is now also a European mirror site at the University of Nottingham of the CMU Soar software archive and the ISI Soar papers archive.

Back to Table of Contents


(G6) Who uses Soar for what?

Soar is used by AI researchers to construct integrated intelligent agents, and by cognitive scientists for cognitive modelling.

The Soar research community is distributed across a number of sites throughout the world. A brief overview of each of these is given below, in no particular order.

Carnegie Mellon University

There are two basic strands of research.

The NL-Soar project explores a range of issues in natural language processing within a unified framework. Particular projects in the recent past and present include real-time comprehension and generation, learning discourse operators, and models of second language acquisition and simultaneous translation. For further information contact Jill Fain Lehman (Jef@cs.cmu.edu).

The other strand is the development of models for quantitatively predicting human performance, including GOMS. More complex, forward-looking and sophisticated models are built using Soar. For more information contact Bonnie John (Bonnie.John@cs.cmu.edu).

Information Sciences Institute, University of Southern California

Soar projects cover five main areas of research: development of automated agents for simulated environments (in collaboration with CMU, and UMich); learning (including explanation-based learning); planning; implementation technology (e.g., production system match algorithms); and virtual environments for training. For more information contact Paul Rosenbloom (rosenblo@isi.edu).

University of Michigan

The Soar work at UMich has four basic research thrusts:

Learning from external environments including learning to recover from incorrect domain knowledge, learning from experience in continuous domains, compilation of hierarchical execution knowledge, and constructive induction to improve problem solving and planning;

Cognitive modelling of psychological data involving learning to perform multiple tasks and learning to recall declarative knowledge;

Complex, knowledge-rich, real time control of autonomous agents within the context of tactical air engagements (the TacAir-Soar project);

Basic architectural research in support of the above research topics.

For more information contact John Laird (laird@eecs.umich.edu).

The Ohio State University

There are two basic strands of Soar research.

NL-Soar work at OSU focuses on modeling real-time human sentence processing, research on word sense disambiguation, and experimental tests of NL-Soar as a psycholinguistic theory. Other cognitive work in Soar includes modelling learning and performance in human-device interaction.

The other work involves looking at the use of cognitive models of complex problem solving to guide the development of decision support systems and effective training techniques. Specific projects include developing a hybrid learning model of tactical decision making (using Soar and ECHO).

For more information contact Todd Johnson (TJ@medinfo.ohio-state.edu).

University of Nottingham

The general area of research involves using Soar models as a way to test theories of learning, and improving human-computer interaction. Other projects include the development of the Psychological Soar Tutorial and the Soar FAQ! For more information contact Frank Ritter (frank.ritter@nottingham.ac.uk).

Medical Research Council's Applied Psychology Unit

Soar research includes modelling aspects of human-computer interaction. For more information contact Richard Young (Richard.Young@mrc-apu.cam.ac.uk).

Back to Table of Contents


(G7) How can I learn Soar?

Probably the best way to learn Soar is to actually visit a site where people are actively using Soar, and stay for as long as you can manage (months rather than days). In order to help people, however, there are two tutorials available for Soar.

The more general of these tutorials was developed for anyone interested in learning Soar, and is based on Soar 7. It was developed by Peter Hastings at the University of Michigan.

The other tutorial was developed mainly with psychologists in mind, and is now also based on Soar version 7. The Web version of this tutorial was developed by Frank Ritter, Richard Young and Gary Jones.

Although there is no textbook, as such, on how to program using Soar, John Rieman has written a set of notes entitled An Introduction to Soar Programming. Although based on version 6 of Soar (NNPSCM), the notes provide a useful bare-bones introduction to Soar as a programming language.

From version 7 onwards, Soar is closely linked to Tcl/Tk. If you wish to get hold of a Tcl Tutorial computer-aided instruction package, you should start by looking at Clif Flynt's home page.

There is a set of notes on experiences with using Tcl/Tk to construct external environments, written by Richard Young. These may be useful to anyone who is heading down this line, since they highlight some of the good and bad points about Tcl/Tk.

Back to Table of Contents


(G8) Is Soar the right tool for me?

For cognitive modelling: Soar's strengths are in modelling deliberate cognitive human behavior at time scales greater than 50 msec. Example tasks that have been explored include human computer interaction tasks, typing, arithmetic, video game playing, natural language understanding, concept acquisition, learning by instruction, and verbal reasoning. Soar has also been used for modelling learning in many of these tasks; however, learning adds significant complexity to the structuring of the task and is not for the casual user. Although many of these tasks involve interaction with external environments, Soar does not yet have standard models for low-level perception or motor control.

For building AI systems: Soar's strengths are in integrating knowledge, planning, reaction, search and learning within a very efficient architecture. Example tasks include production scheduling, diagnosis, natural language understanding, learning by instruction, robotic control, and flying simulated aircraft.

If all you want to do is create a small production rule system, then Soar may not be right for you. Also, if you only have limited time that you can spend developing the system, then Soar is probably not the best choice. Learning Soar currently appears to require a lot of effort, and becoming proficient takes more practice than with other, simpler systems.

There are, however, a number of basic capabilities that Soar provides as standard. If you need to use these, then Soar may not only be just what you want, but may also be the only system available:

Back to Table of Contents


(G9) How can I make my life easier when programming in Soar?

There are a number of ways to make your life easier when programming in Soar. Some simple high level considerations are:

There is now an extension to the Soar FAQ, which is currently called The Soar Less Frequently Asked Questions List. The intention is that this will eventually give rise to a casebook of examples of useful utilities programmed in Soar, as well as providing hints, tips and tricks to get around some of the less common problems of using Soar.

Back to Table of Contents


(G10) Is there any support documentation available for Soar?

There are now two reference manuals available for Soar 7 in printed form:

  1. The Soar 7 User Manual.
  2. The Soar Advanced Applications Manual.
Although these manuals are not quite finished, they still provide lots of useful information about programming in Soar, and, in particular, version 7. There are a number of caveats which should be borne in mind when using these manuals.

The Soar 6 User Manual is still available for browsing on the Web. For paper copies of the manual, send e-mail requests to soar-doc@isi.edu

Back to Table of Contents


(G11) How can I find out what bugs are outstanding in Soar?

NEW [gordon to insert this]

Back to Table of Contents


Section 2: Technological Issues


(T1) What is search control?

Search control is knowledge that controls the search in that it guides the selection of problem spaces, states, and operators through comparing proposed alternatives. In Soar, search control is encoded in production rules that create preferences for operators.
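
As a rough sketch (the production name is invented, and the operator names are borrowed from the farmer's dilemma demo used later in this FAQ), a search control production that prefers one proposed operator over another might look like:


	sp {farmer*compare*prefer-move-with
	   (state <s> ^operator <o1> + ^operator <o2> +)
	   (<o1> ^name move-with)
	   (<o2> ^name move-alone)
	   -->
	   (<s> ^operator <o1> > <o2>)}
The better (>) preference is the search control knowledge: it does not apply either operator itself, it only guides the decision procedure's choice between them.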

Back to Table of Contents


(T2) What is data chunking?

Data chunking is the creation of chunks that allow for either the recognition or retrieval of data that is currently in working memory. Chunking is usually thought of as a method for compiling knowledge or speeding up learning, not for moving data from working memory into long-term memory. Data chunking is a technique in which chunking does create such recognition or retrieval productions, and thus allows Soar to perform knowledge-level learning.

Simplistically, then, this is the creation of chunks that can be represented by the form a=>b, i.e., when a appears on the state, the data for b does too.
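
As a purely illustrative sketch (the attribute names are invented, and the mechanics of building such a chunk are considerably more involved than this), a recall-style data chunk of the a=>b form might end up looking like:


	sp {chunk-1
	   :chunk
	   (state <s> ^cue five)
	   -->
	   (<s> ^recalled 5)}
When the cue later appears on the state, the chunk fires and retrieves the associated data - which is the sense in which data chunking moves knowledge into long-term (production) memory.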

Back to Table of Contents


(T3) What is the generation problem?

Whenever you subgoal to create a data chunk, you have to generate all the data that might be relevant, and then use search control to make sure that the chunks you build are correct.

Back to Table of Contents


(T4) What do all these abbreviations and acronyms stand for?

Back to Table of Contents


(T5) What is this NNPSCM thing anyway?

Really, this is a number of questions rolled into one:
  1. What is the PSCM?
  2. What is the NNPSCM?
  3. What are the differences between the two?

What is the PSCM?

The Problem Space Computational Model (PSCM) is an idea that revolves around the commitment in Soar to using problem spaces as the model for all symbolic goal-oriented computation. The PSCM is based on the primitive acts that are performed using problem spaces to achieve a goal. These primitive acts affect the fundamental object types within Soar, i.e., goals, problem spaces, states and operators. The functions that they perform are shown below:

More details about exactly what these functions do can be found in the User Manual.

What is the NNPSCM?

The New New Problem Space Computational Model (NNPSCM) addresses some of the issues that made the implementation of the PSCM run relatively slowly. It reformulates some of the issues within the PSCM without actually removing them, and hence changes the way in which models are implemented. Starting with version 7.0.0 of Soar, all implementation is performed using the NNPSCM; in later releases of version 6.2, you can choose which version you require (NNPSCM or non-NNPSCM) when you build the executable image for Soar. The easiest way to illustrate the NNPSCM is to look at the differences between it and the PSCM.

What are the differences between the two?

The NNPSCM and the PSCM can be compared and contrasted in the following ways:
  1. The nature of problem space functions for NNPSCM and PSCM remains essentially the same as those described in Newell, A., Yost, G. R., Laird, J. E., Rosenbloom, P. S., & Altmann, E. (1991). Formulating the problem space computational model. In R. F. Rashid (Ed.), Carnegie-Mellon Computer Science: A 25-Year Commemorative (pp. 255-293). Reading, MA: ACM Press (Addison-Wesley).
  2. The goal state from the PSCM now simply becomes just another state, rather than being treated as a separate, special state.
  3. The need to select between problem spaces in the NNPSCM does not require any decision making process. The problem space is simply formulated as an attribute of the state.
  4. Models implemented using NNPSCM are generally faster than their PSCM equivalents, because fewer decision cycles are required (there is no need to decide between problem spaces).
  5. Using NNPSCM is presumed to allow better inter-domain, and inter-problem-space transfer of learning to take place.
  6. The use of the NNPSCM should help in the resolution and understanding of the issues involved in external interaction.
The differences may become more evident if we look at code examples (written using Soar version 6.2.5) for the farmer, wolf, goat and cabbage problem that comes as a demo program in the Soar distribution.

PSCM code


	(sp farmer*propose*operator*move-with
	   (goal <g> ^problem-space <p>
	             ^state <s>)
	   (<p> ^name farmer)
	   (<s> ^holds <h1> <h2>)
	   (<h1> ^farmer <f> ^at <i>)
	   (<h2> ^<< wolf goat cabbage >> <value>
	         ^at <i>)
	   (<i> ^opposite-of <j>)
	   -->
	   (<g> ^operator <o>)
	   (<o> ^name move-with
	        ^object <value>
	        ^from <i>
	        ^to <j>))
NNPSCM code


	(sp farmer*propose*move-with
	  (state <s> ^problem-space <p>)	; goal <g> has disappeared
	  (<p> ^name farmer)
	  (<s> ^holds <h1> <h2>)
	  (<h1> ^farmer <f> ^at <i>)
	  (<h2> ^<< wolf goat cabbage >> <value>
	        ^at <i>)
	  (<i> ^opposite-of <j>)
	  -->
	  (<s> ^operator <o>)
	  (<o> ^name move-with
	       ^object <value>
	       ^from <i>
	       ^to <j>))
On the face of it, there do not appear to be many differences, but when you look at the output trace, the improvement in speed becomes more apparent:

PSCM Trace


	 0: ==>G: G1
	 1:    P: P1 (farmer)
	 2:    S: S1
	 3:    ==>G: G3 (operator tie)
	 4:       P: P2 (selection)
	 5:       S: S2
	 6:       O: O8 (evaluate-object O1 (move-alone))
	 7:       ==>G: G4 (operator no-change)
	 8:          P: P1 (farmer)
	 9:          S: S3
	10:          O: C2 (move-alone)

NNPSCM Trace


	 0:==>S: S1
	 1:   ==>S: S2 (operator tie)
	 2:      O: O8 (evaluate-object O1 (move-alone))
	 3:      ==>S: S3 (operator no-change)
	 4:         O: C2 (move-alone)

Back to Table of Contents


Section 3: Programming Questions


(P1) Are there any guidelines on how to name productions?

Productions will load as long as their names are taken from the set of legal characters - essentially alphanumerics plus "-" and "*". Names consisting only of numerics are not allowed.

Soar programmers tend to adopt a convention whereby the name of a production describes what the rule does, and where it should apply. Typically, the conventions suggest that names have the following general form:


    problem-space-name*state-name*operator-name*action

How you choose your naming convention is probably less important than the fact that you do use one.
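
For instance, the demonstration production shown later in this FAQ follows a close variant of this convention:


    farmer*propose*operator*move-with

Here "farmer" names the problem space, and the rest of the name says that the production proposes the move-with operator.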

NEW Note that Soar uses identifiers consisting of a single alphabetic character followed by a number, such as p3. If you name a production this way it will not be printable. (It is also poor style.)

Back to Table of Contents


(P2) Why did I learn a chunk there?

Soar generally learns a chunk when a result is created in a superstate, by a production which tests part of the current subgoal.

E.g. the production:


	sp {create*chunk1
	   (state <s> ^superstate <ss>)
           -->
           (<ss> ^score 10)}

creates a preference for the attribute "score" with value 10. The production tests the current state <s> and creates the preference in the superstate <ss>, and a chunk is therefore created.

That mechanism seems simple enough. Why then do you sometimes get chunks when you do not expect them? This is usually due to shared structure between a subgoal and a superstate...

If there is an object in the subgoal which also appears in a superstate, then creating a preference for an attribute on that object will lead to a chunk. These preferences in a superstate are often called "results", although you are free to generate any preference from any subgoal, so that term can be misleading.

For example, suppose that working memory currently looks like:


     (S1 ^type T1 ^name top ^superstate nil)
     (T1 ^name boolean)
     (S2 ^type T1 ^name subgoal ^superstate S1)

so S2 is the subgoal and S1 is the superstate for S2. T1 is being shared. In this case, the production:


     sp {create*chunk2
        (state <s> ^type <t> ^name subgoal)
        -->
        (<t> ^size 2)}
will create a chunk, even though it does not directly test the superstate. This is because T1 is shared between the subgoal and the superstate, so adding ^size to T1 adds a preference to the superstate.

What to do?

Often the solution is to create a copy of the object in the subgoal. So, instead of (S2 ^type T1) create (S2 ^type T2) and (T2 ^name boolean).

For example,


     sp {copy*superstate*type
        (state <s> ^superstate <ss>)
        (<ss> ^type <t>)
        (<t> ^name <name>)
        -->
        (<s> ^type <new-t>)
        (<new-t> ^name <name>)}
This will copy the ^type attribute to the subgoal and create a new identifier (T2) for it. Now you can freely modify T2 in the subgoal, without affecting the superstate.

Back to Table of Contents


(P3) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

There are a number of situations where you can add something to the superstate and not get a chunk:

(1) Learning off

If you run Soar and type "learn -off", all learning is disabled. No chunks will be created when preferences are created in a superstate. Instead, Soar only creates a "justification". (See below for an explanation of these.)

You can type "learn" to see if learning is on or off and "learn -on" will make sure it is turned on. Without this, you cannot learn anything.

(2) Chunk-free-problem-spaces

You can declare certain problem spaces as chunk-free and no chunks will be created in those spaces. The way to do this is changing right now (in Soar7) because we no longer have explicit problem spaces in Soar. If you want to turn off chunking like this, check the manual.

(3) Quiescence t

If your production tests ^quiescence t it will not lead to a chunk. For example,


	sp {create*chunk1
	   (state <s> ^name subgoal ^superstate <ss>)
           -->
           (<ss> ^score 10)}
will create a chunk, whilst

	sp {create*no-chunk
	   (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
	   -->
	   (<ss> ^score 10)}
will not create a chunk (you just get a justification for ^score 10). You can read about the reasons for this in the Soar manual. You also do not get a chunk if a production in the backtrace for the chunk tested ^quiescence t:

For example,


             sp {create*score
		(state <s> ^name subgoal ^quiescence t)
		-->
		(<s> ^score 10)}

	     sp {now-expect-chunk*but-dont-get-one
		(state <s> ^name subgoal ^score 10 ^superstate <ss>)
      		-->
		(<ss> ^score 10)}
The test for ^quiescence t is included in the conditions for why this chunk was created--so you get a justification, not a chunk.

A point to note is that having tested ^quiescence t, the effect of not learning chunks is applied recursively. If you had a production:


	sp {create*second-chunk
	   (state <s> ^name medium-goal ^superstate <ss>)
	   (<s> ^score 10)
	   -->
	   (<ss> ^score-recorded yes)}
and you have a goal stack:


		(S1 ^name top ^superstate nil)
		(S2 ^name medium-goal ^superstate S1)
		(S3 ^name subgoal ^superstate S2)
then if create*chunk1 leads to ^score 10 being added to S2 and then create*second-chunk fires and adds ^score-recorded yes to S1, you will get two chunks (one for each result). However, if you use create*no-chunk instead, to add ^score 10 to S2, then create*second-chunk will also not generate a chunk, even though it does not test ^quiescence t itself. That is because ^score 10 is a result created from testing quiescence.

Back to Table of Contents


(P4) What is a justification?

Any time a subgoal creates a preference in a superstate, a justification is always created, and a chunk will also be generated unless you have turned off learning in some manner (see above). If learning has been disabled, then you only get a justification. A justification is effectively an instantiated chunk, but without any chunk being created.

For example, let us say:

	sp {create*chunk1
	   (state <s> ^name subgoal ^superstate <ss>)
	   (<ss> ^name top)
	   -->
	   (<ss> ^score 10)}
leads to the chunk:


	sp {chunk-1
	   :chunk
	   (state <s> ^name top)
	   -->
	   (<s> ^score 10)}
If working memory was:


	(S1 ^name top ^superstate nil ^score 10) 
	(S2 ^name subgoal ^superstate S1)
then if you typed "preferences S1 score 1" you would see:


	Preferences for S1 ^score:

	acceptables:
	  10 +
	    From chunk-1
(The value is being supported by chunk-1, an instantiated production just like any other production in the system).

Now, if we changed the production to:


	sp {create*no-chunk
	   (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
	   (<ss> ^name top)
           -->
           (<ss> ^score 10)}
we do not get a chunk anymore, we get justification-1. If you were to print justification-1 you would see:


	sp {justification-1
	  :justification ;not reloadable
	  (S1 ^name top)
	  -->
	  (S1 ^score 10)}
This has the same form as chunk-1, except it is just an instantiation. It only exists for the state S1. When S1 goes away (e.g. when you do init-soar and re-run the system) this justification will go away too. It is like a temporary chunk instantiation. Why have justifications? Well, if you now typed "preferences S1 score 1" you would see:


	Preferences for S1 ^score:

	acceptables:
	  10 +
	    From justification-1
Justification-1 is providing the support for the value 10. If the subgoal, S2, goes away, this justification is the only reason Soar retains the value 10 for this slot. If later, the ^name attribute of S1 changes to "driving" say, this justification will no longer match (since it requires ^name top) and the justification and the value will both retract.

Back to Table of Contents


(P5) How does Soar decide which conditions appear in a chunk?

Soar works out which conditions to put in a chunk by finding all the productions which led to the final result being created. It sees which of those productions tested parts of the superstate and collects all those conditions together.

For example:


	(S1 ^name top ^size large 
	    ^color blue ^superstate nil)	;# The superstate

	--------------------------------------  ;# Conceptual boundary

	(S2 ^superstate S1)			;# Newly created subgoal.

	sp {production0
	   (state <s> ^superstate nil)
	   -->
	   (<s> ^size large ^color blue ^name top)}
If we have:


	sp {production1
	   (state <s> ^superstate <ss>)
           (<ss> ^size large)
	   -->
	   (<s> ^there-is-a-big-thing yes)}
and


	sp {production2
	   (state <s> ^superstate <ss>)
	   (<ss> ^color blue)
	   -->
	   (<s> ^there-is-a-blue-thing yes)}
and


	sp {production3
	  (state <s> ^superstate <ss>)
          (<ss> ^name top)
          -->
          (<s> ^the-superstate-has-name top)}
and


	sp {production1*chunk
	   (state <s> ^there-is-a-big-thing yes
		      ^there-is-a-blue-thing yes
		      ^superstate <ss>)
	   -->
	   (<ss> ^there-is-a-big-blue-thing yes)}
and working memory contains (S1 ^size large ^color blue) this will lead to the chunk:


	sp {chunk-1
	   :chunk
	   (state <s> ^size large ^color blue)
	   -->
	   (<s> ^there-is-a-big-blue-thing yes)}
The size condition is included because production1 tested the size in the superstate, created the ^there-is-a-big-thing attribute, and this led to production1*chunk firing. Similarly for the color condition (which was also tested in the superstate and led to the result ^there-is-a-big-blue-thing yes). The important point is that ^name is not included. This is because even though it was tested by production3, it was not tested in production1*chunk and therefore the result did not depend on the name of the superstate.

Back to Table of Contents


(P6) Why does my chunk appear to have the wrong conditions?

See above for general description of how chunk conditions are computed. If you have just written a program and the chunks are not coming out correctly, then try using the "explain" tool.

So, using the example of how conditions in chunks are created (shown above), if the chunk is:


	sp {chunk-1
           :chunk
	   (state <s> ^size large ^color blue)
	   -->
	   (<s> ^there-is-a-big-blue-thing yes)}
and you type "explain chunk-1" you will get something like:


	sp {chunk-1
	   :chunk
	   (state <s1> ^color blue ^size large)
	-->
	   (<s1> ^there-is-a-big-blue-thing yes +)}

	  1 :  (state <s1> ^color blue)         Ground : (S1 ^color blue)
	  2 :  (<s1> ^size large)               Ground : (S1 ^size large)
This shows a list of conditions for the chunk and which "ground" (i.e. superstate working memory element) they tested.

If you want further information about a particular condition you can then type: "explain chunk-1 2" (where 2 is the condition number -- in this case (<s1> ^size large)) to get:

	Explanation of why condition (S1 ^size large) was included in chunk-1

	Production production1 matched
	    (S1 ^size large) which caused
	production production1*chunk to match
	    (S2 ^there-is-a-big-thing yes) which caused
	A result to be generated.
This shows that ^size large was tested in the superstate by production1, which then created (S2 ^there-is-a-big-thing yes). This in turn caused the production production1*chunk to fire and create a result (in this case ^there-is-a-big-blue-thing yes) in the superstate, which leads to the chunk.

This tool should help you spot which production caused the unexpected addition of a condition, or why a particular condition did not show up in the chunk.

Back to Table of Contents


(P7) What is all this support stuff about? (Or why do values keep vanishing?)

There are two forms of "support" for preferences: o-support and i-support. O-support stands for "operator support" and means the preference behaves in a normal computer science fashion. If you create an o-supported preference ^color red, then the color will stay red until you change it. How do you get an o-supported preference? The exact conditions for this keep changing, but the general rule of thumb is this:

"You get o-support if your production tests the ^operator slot or creates structure on an operator"

(Specifically under Soar.7.0.0.beta this is o-support-mode 2--which is the mode some people recommend since they find it much easier to understand than o-support-mode 0 which is the current default).

E.g.


	sp {o-support
	   (state <s> ^operator <o>)
           (<o> ^name set-color)
           -->
           (<s> ^color red)}
the ^color red value gets o-support.

I-support, which stands for "instantiation support", means the preference exists only as long as the production which created it *still matches*. You get i-supported preferences when you do not test an operator:

E.g.


	sp {i-support
	   (state <s> ^object <obj1>)
	   (<obj1> ^next-to <obj2>)
           -->
           (<obj1> ^near <obj2>)}
In this case


	^near <obj2>
will get i-support. If obj1 ever ceases to be next to obj2, then this production will retract and the preference for


	^near <obj2>
will also retract. Usually this means the value for ^near *disappears* from working memory.

To change an o-supported value, you must *explicitly* remove the old value, e.g.,


	^color red -   (the minus means reject the value red)
	^color blue +  (the plus means the value blue is acceptable)

while changing an i-supported value requires just that the production retracts. This makes for less work and can be useful in certain cases, but it can also make debugging your program much harder and it is recommended that you keep its use to a minimum. By default, when you do state elaboration you automatically get i-support, whereas applying, creating or modifying an operator as part of some state structure will lead to o-support.
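
For example, a minimal sketch of an operator application production that changes an o-supported value (the production and operator names are invented for illustration) is:


	sp {apply*change-color
	   (state <s> ^operator <o> ^color red)
	   (<o> ^name change-color)
	   -->
	   (<s> ^color red - ^color blue)}
Because this production tests the ^operator slot, the new value gets o-support and persists after the production retracts; the explicit reject (-) is what removes the old value.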

(One point worth noting is that operator proposals are always i-supported, but once the operator has been chosen, it does not retract even after the operator proposal goes away, because it is given a special support, c-support for "context object support").

To tell if a preference has o-support or i-support, check the preferences on the attribute.

E.g. "pref s1 color" will give:


	Preferences for S1 ^color:

	acceptables:
	  red + [O]
while "pref s1 near" gives:


	Preferences for S1 ^near:

	acceptables:
	  O2 +
The presence or absence of the [O] shows whether the value has o-support or not.

Back to Table of Contents


(P8) When should I use o-support, and when i-support?

Under normal usage, you probably will not have to explicitly choose between the two. By default, you will get o-support if you apply, modify or create an operator as part of some state structure; you will get i-support if all you do is elaborate the state in some way.

It therefore follows that by default you should generally use operators to create and modify state structures whenever possible. This leads to persistent o-supported structures and makes the behaviour of your system much clearer.

I-support can be convenient occasionally, but should be limited to inferences that are always true. For example, if I know (O1 ^next-to O2), where O1 and O2 are objects then it is reasonable to have an i-supported production which infers that (O1 ^near O2). This is convenient because there might be a number of different cases for when an object is near another object. If you use i-supported productions for this, then whenever the reason (e.g. ^next-to O2) is removed, the inference (^near O2) will automatically retract.

*Never* mix the two for a single attribute. For example, do not have one production which creates ^size large using i-support and another which tests an operator and creates ^size medium using o-support. That is a recipe for disaster.

Back to Table of Contents


(P9) Why does the value go away when the subgoal terminates?

A common problem is creating a result in a superstate (e.g. ^size large) and then when the subgoal terminates, the value retracts. Why does this happen? The reason is that once the subgoal has terminated the preference in the superstate is supported by the chunk or justification that was formed when the preference was first created.

This chunk/justification may have *different* support than the support the preference had from the subgoal. It is quite common for an operator in a subgoal to create a result in the superstate which only has i-support (even though it is created by an operator). This is because the conditions in the chunk/justification do not include a test for the super-operator. Therefore the chunk has i-support and may retract, taking the result with it.
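
A hypothetical sketch of how this happens (all names invented purely for illustration):


	sp {subgoal*apply*record-size
	   (state <s> ^operator <o> ^superstate <ss>)
	   (<o> ^name record-size)
	   (<ss> ^color blue)
	   -->
	   (<ss> ^size large)}
Inside the subgoal this preference has o-support, because the production tests an operator. But that operator belongs to the subgoal, so it cannot appear in the conditions of the chunk or justification built for the superstate; the chunk ends up testing only ^color blue on the superstate, gets i-support, and the ^size large value will retract as soon as ^color blue goes away.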

NOTE: Even if you are not learning chunks, you are still learning justifications (see above) and the support, for working memory elements created as results from a subgoal, depends on the conditions in those justifications.

Back to Table of Contents


(P10) What's the general method for debugging a Soar program?

The main tools that you need to use are:

Back to Table of Contents


(P11) How can I find out which productions are responsible for a value?

Use


	preferences <id> <attribute> 1
Or

	preferences <id> <attribute> -names
(They both mean the same thing.)

This shows the values for the attribute and which production created them. For example, if working memory contains:


	(S1 ^type T1)
	(T1 ^name boolean)

then "pref T1 name 1" will show the name of the production which created the ^name attribute within object T1.

Back to Table of Contents


(P12) What's an attribute impasse, and how can I detect them?

Attribute impasses occur if you have two values proposed for the same slot.

E.g.


	^name doug +
	^name pearson +
leads to an attribute impasse for the ^name attribute.

It is a bit like an operator tie impasse (where two or more operators are proposed for the operator slot). The effect of an attribute impasse is that the attribute and all its values are removed from working memory (which is probably what makes your program stop working) and an "impasse" structure is created.

It is usually a good idea to include the production:


	sp {debug*monitor*attribute-impasses*stop-pause
	    (impasse <im> ^object <obj> ^attribute <att>
             ^impasse <impasse>)
	   -->
	(write (crlf) |Break for impasse | <im> (crlf))
	(tcl |preferences | <obj> | | <att> | 1 |)
	(interrupt)}
in your code, for debugging. If an attribute impasse occurs, this production will detect it, report that an impasse has occurred, run preferences on the slot to show you which values were competing and which productions created preferences for them, and interrupt Soar's execution (so you can debug the problem).

Very, very occasionally you may want to allow attribute impasses to occur within your system and not consider them an error, but that is not a common choice. Most Soar systems never have attribute impasses (while almost all have impasses for the context slots, like operator ties and state no-changes).

Back to Table of Contents


(P13) Are there any templates available for building Soar code?

The Soar Development Environment (SDE), a set of modules for Emacs, provides some powerful template tools which can save you a lot of typing. You specify the template you want (or use one of the standard ones), and then a few keystrokes will create a lot of code for you.

Back to Table of Contents


(P14) How do I find all the productions which test X?

Use the "pf" command (which stands for production-find). You give "pf" a pattern. Right now, the pattern has to be surrounded by a lot of brackets, but that should be fixed early on in Soar7's life.

Anyway, as of Soar.7.0.0.beta an example is:


	pf {(<s> ^operator *)}
which will list all the productions that test an operator.

Or,


	pf {(<s> ^operator.name set-value)}
which will list all the productions that test the operator named set-value.

You can also search for values on the right hand sides of productions (using the -rhs option) and in various subsets of rules (e.g. chunks or nochunks).

Back to Table of Contents


(P15) Why doesn't my binary parallel preference work?

NEW Using parallel preferences can be tricky, for the separating commas are currently crucial for the parser. In the example below, there is a missing comma after the preferences for "road-quality".
	sp {elaborate*operator*make-set*dyn-features
	   (state <s> ^operator <o>)
	   (<o> ^name make-set)
	   -->
	   (<s> ^dyn-features distance + &, gear + &, road-quality + &
	        sign + &, other-road + &, other-sidewalk + &,
	        other-sign + &)}
This production parses "road-quality & sign" as a valid binary preference, although this is not what was intended. Soar will not currently warn about the duplicate + preferences; you just have to be careful.
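
For reference, the intended action, with the comma after road-quality restored, would read:


	(<s> ^dyn-features distance + &, gear + &, road-quality + &,
	     sign + &, other-road + &, other-sidewalk + &,
	     other-sign + &)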

Back to Table of Contents


(P16) How can I find out about a programming problem not addressed here?

NEW There are several places to turn to, listed here in the order in which you should consider them.
  1. The Soar less frequently asked questions list (lfaq) includes additional and more transitory bugs.
  2. The manuals noted above may provide general help, and you should consult them if possible before trying the mailing lists.
  3. You can consult the list of outstanding bugs, noted above in G11.
  4. You can consult the mailing lists noted above.

Back to Table of Contents