Section 0: Introduction
Section 1: General Questions
(G0) Where can I get hold of the Soar FAQ?
(G1) What is Soar?
(G2) Where can I get more information about Soar?
(G3) What does Soar stand for?
(G4) What do I need to be able to run Soar?
(G5) Where can I get hold of Soar?
(G6) Who uses Soar for what?
(G7) How can I learn Soar?
(G8) Is Soar the right tool for me?
(G9) How can I make my life easier when programming in Soar?
(G10) Is there any support documentation available for Soar?
(G11) How can I find out what bugs are outstanding in Soar?
(G12) Other links and Links2Go Key Resource award in Soar
(G13) How do I write fast code?
(G14) How does Soar currently stand as a Psychology theory?
Section 2: Technological Issues
(T1) What is search control?
(T3) What is the generation problem?
(T4) What do all these abbreviations and acronyms stand for?
Section 3: Programming Questions
(P1) Are there any guidelines on how to name productions?
(P2) Why did I learn a chunk here?
(P3) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?
(P5) How does Soar decide which conditions appear in a chunk?
(P6) Why does my chunk appear to have the wrong conditions?
(P7) What is all this support stuff about? (Or why do values keep vanishing?)
(P8) When should I use o-support, and when i-support?
(P9) Why does the value go away when the subgoal terminates?
(P10) What's the general method for debugging a Soar program?
(P11) How can I find out which productions are responsible for a value?
(P12) What's an attribute impasse, and how can I detect them?
(P13) Are there any templates available for building Soar code?
(P14) How can I find all the productions which test X?
(P15) Why doesn't my binary parallel preference work?
(P16) How can I do multi-agent communication in Soar 7?
(P17) In a WME, is the attribute always a symbolic constant?
(P18) How does one write the 'wait' operator in Soar8.3?
(P19) How does one mess with the wme's on the io link?
(P20) How can I find out about a programming problem not addressed here?
This is the introduction to a list of frequently asked questions (FAQ) about Soar with answers.
The FAQ is posted as a guide for finding out more about Soar. It is intended for use by all levels of people interested in Soar, from novices through to experts. With this in mind, the questions are essentially divided into three parts: the first part deals with general details about Soar; the second part examines technological issues in Soar; the third part looks at some issues related to programming using Soar. Questions in the first section have their numbers prefixed by the letter G (for General); those in the second section are prefixed by the letter T (for Technological); and those in the third section are prefixed by the letter P (for Programming).
It also serves as a repository of the canonical "best" answers to these questions. If, however, you know of a better answer or can suggest improvements, please feel free to make suggestions.
This FAQ is updated and posted on a variable schedule. Full instructions for getting the current version of the FAQ are given in question G0.
In order to make it easier to spot what has changed since last time around, new and significantly changed items have been tagged with the "new" icon.
Suggestions for new questions, answers, re-phrasing, deletions etc., are all welcomed. Please include the word "FAQ" in the subject of your e-mail correspondence. Please use the mailing lists noted below for general questions, but if they fail or you do not know which one to use, contact one of us.
This FAQ is not just our work, but includes numerous answers from members of the Soar community, past and present. The initial versions were supported by the DERA and the ESRC Centre for Research in Development, Instruction and Training. The Office of Naval Research has also provided support. Gordon Baxter put the first version together. Special thanks are due to John Laird and the Soar Group at the University of Michigan for helping to generate the list of questions, and particularly to Clare Bates Congdon, Peter Hastings, Randy Jones, Doug Pearson (who also provided a number of answers), and Kurt Steinkraus. The views expressed here are those of the authors and should not necessarily be attributed to the Ministry of Defence or the Pennsylvania State University.
Frank E. Ritter (ritter@ist.psu.edu)
Marios Avraamides (marios@ist.psu.edu)
Alexander B. Wood (awood@ist.psu.edu)
Gordon D. Baxter (gbaxter@psych.york.ac.uk)
If you are reading a plain text version of this FAQ (for example, as posted to comp.ai, sci.cognitive, or sci.psychology.theory), there is also an html version available at the following URL:
http://ritter.ist.psu.edu/soar-faq/
which you can access using any Web browser.
There are ongoing plans for mirroring this FAQ on the Soar home pages at ISI.
(If you find that material here is out of date or does not include your favorite paper or author, please let us know. The work and range of material generated by the Soar group is quite broad and has been going on for over a decade now.)
problem spaces as a single framework for all tasks and subtasks to be solved
production rules as the single representation of permanent knowledge
objects with attributes and values as the single representation of temporary knowledge
automatic subgoaling as the single mechanism for generating goals
and chunking as the single learning mechanism.
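To make these representations concrete, here is a minimal sketch of a single Soar production (the production name and the operator it proposes are invented for illustration; exact syntax varies slightly between Soar versions). The condition side, before the arrow, matches objects with attributes and values in working memory; the action side creates new attribute-value augmentations, here proposing an operator with an acceptable (+) preference:

```
sp {propose*hello-world
   "Propose the hello-world operator whenever a state exists."
   (state <s> ^type state)
-->
   (<s> ^operator <o> +)
   (<o> ^name hello-world)}
```

When no production can make progress, an impasse arises and a subgoal is generated automatically; the results of resolving that subgoal are compiled into new productions by chunking.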
To find out more about Soar, the Soar Movie is now available to download (10.4M).
Back to Table of Contents
To find out some of the sorts of things that people have modelled using Soar, look at:
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard.
Rosenbloom, P.S., Laird, J.E. & Newell, A. (1993) The Soar Papers: Readings on Integrated Intelligence. Cambridge, MA: MIT Press.
Huffman, S., & Laird, J.E. (1995) Flexibly instructable agents. Journal of Artificial Intelligence Research.
Laird, J.E., & Rosenbloom, P.S. (1996) The evolution of the Soar cognitive architecture. In T. Mitchell (ed.) Mind Matters.
Lehman, J.F., Laird, J.E., & Rosenbloom, P.S. (1996) A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg & D. Scarborough (eds.) Invitation to Cognitive Science, Volume 4.
Tambe, M., Johnson, W.L., Jones, R.M., Koss, F., Laird, J.E., Rosenbloom, P.S., & Schwamb, K. (1995) Intelligent agents for interactive simulation environments. AI Magazine 16(1).
Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.
Lewis, R.L. (2001, in preparation) Cognitive theory, Soar. In International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Pergamon (Elsevier Science).
The Information Sciences Institute at the University of Southern California maintains a collection of Soar-related web pages including the Soar home page, the Soar group at ISI, and the Soar archive which contains a publication bibliography, document abstracts, official software releases, software manuals, support tools, and information about members of the Soar community.
The Artificial Intelligence lab at the University of Michigan has a collection of Web pages about Cognitive Architectures per se. This includes a section on Soar; there is also a Web page available for the Soar group at UMich.
Carnegie Mellon University - where Soar was originally developed - has its own Soar projects page.
The U. of Hertfordshire website includes Soar resources on the Web and elsewhere, a few of Richard Young's papers, and an annotated bibliography of Soar journal articles and book chapters (but not conference papers) related to psychology that is intended to be complete.
ExpLore Reasoning Systems has a summary with some new links at http://www.ers.com/Products/Soar/soar.html.
There is also a site at the University of Nottingham that includes mirrors of several of the US sites as well as some things developed at Nottingham, including the Psychological Soar Tutorial. There is a nascent site at The Pennsylvania State University, which will appear at Frank Ritter's homepage, http://acs.ist.psu.edu/acs-lab
soar-requests@umich.edu - If you do not know where to ask, use this one
soar-help@umich.edu - Where to get help with Soar problems
soar-bugs@umich.edu - Where to send your bug reports
soar-group@umich.edu - General Soar discussions take place here
soar-nl@umich.edu - Natural language discussion
soar-doc@umich.edu - Send requests for documentation here
soar-tsi@umich.edu - Discussions about the Tcl/Tk Soar Interface
To subscribe to the soar-group mailing list, you should send an e-mail to soar-requests@umich.edu asking for your name to be added to the list. If you decide that you wish to unsubscribe from soar-group, you should send an e-mail to soar-requests@umich.edu asking for your name to be removed from the mailing list.
There used to be (1988 to 2000) a European mailing list. Due to the low volume of traffic sent only to the eu-soar mailing list, and speedier transatlantic connections now supporting email etc., the eu-soar list merged with the Soar-group list in June 2000.
A one-day psychology-oriented Soar tutorial was often offered before EuroSoar workshops, and often at AISB conferences. It was also offered at the Cognitive Science Conference in 1999. Email Frank Ritter or Richard Young for details.
Version 7.1 of Soar is currently being revised to utilise the latest release of Tcl/Tk (version 8.0) prior to its official release. The new release of Soar will include the Tcl/Tk Soar Interface (TSI). Currently, Soar 7.1 uses Tcl 7.6 and Tk 4.2, not Tcl 8.0.
Soar Version 8.3 Release - Soar 8.3 adds several new features to the architecture and resolves a number of bug reports. A change to the default calculation of O-Support may require changes to existing Soar programs. These are described in the Soar 8.3 Release Notes. Soar-8.3 still includes a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar7-compatibility mode. Available for Unix, Mac, and Windows.
Soar Version 8.2 - Soar 8 introduces architectural changes which require changes to existing Soar programs. These are described in the Soar 8.2 Release Notes. Soar-8.2 does include a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar7-compatibility mode.
Previous Versions:
Unix - Soar 7.0.4. This requires on the order of 10 Mb of disk space (including source code) for most of what you will need, although the file that you retrieve via ftp is much smaller, since it is archived and compressed. The Unix version of Soar 7 is compiled with Tcl/Tk, which allows users to easily add commands and displays to their environment. The Soar Development Environment (SDE), which is a set of extensions to GNU Emacs, offers a programming support environment for earlier versions of Soar, and can still be used, albeit in a more limited way, for more recent versions.
Mac - MacSoartk 7.0.4 + CUSP - this version of MacSoar comes with Tk included, along with a number of extensions added by Ken Hughes to improve the usability of Soar on the Mac. You will require around 10 Mb of disk space to run this version, and will need version 7 of Mac OS (version 7.5 is recommended). Some versions of Soar can also be run under MacUnix.
PC - There is a version of Soar which runs under Windows 95 and Windows NT. It is a port of Soar to WIN32, and includes Tcl 7.6/Tk 4.2. It is available from the University of Michigan, as a zipped binary file. You should read the accompanying notes (available at the same URL) carefully, since there may be a problem with the Win32 binary release of Tcl 7.6/Tk 4.2.
In addition there is an older, unsupported PC version called WinSoar, based on Soar version 6.1.0, which includes a simple editing and debugging environment and runs under Microsoft Windows 3.x. It is also known as IntelSoar.
Several people have also successfully managed to install the Unix version of Soar on a PC running under the Linux operating system, although some problems have been reported under versions of Linux that have appeared since December 1996.
If you decide to get hold of one of these versions of Soar, please send an e-mail to soar-requests@umich.edu informing them which version you have retrieved. This will allow your name to be added to the appropriate mailing list so that you can be kept informed of future developments. Soar 7 and 8 have been ported to Macs and PCs.
The preliminary Soar 7.2 releases for Mac and Windows can be found at http://ai.eecs.umich.edu/soar2/software.html. These releases include binaries so you don't have to rebuild anything, and are accompanied by the basic README files for installing and running Soar.
There is now also a European mirror site at the University of Nottingham of the CMU Soar software archive and the ISI Soar papers archive.
KB Agent is a commercially
available version of Soar for developing intelligent agents in enterprise environments.
It is based on the public version, but the code has been optimized, updated,
and reorganized for linking to other programs in Windows 95/NT. ERS has a 30-day
Trial Edition of KB Agent available for download over the Web at http://www.ers.com/Products/KB_Agent/kb_agent.html. Ralph
Morelli, in 1996, created a prototype of a WWW client/server model where
Soar (6.2.4) is the server and a Java applet is the client. This allows one
to "talk to Soar" via Netscape or some other Web browser. The fledgling demo
for a Java aware browser is at http://troy.trincoll.edu/~soar/soarclient/,
and shows that Soar can be delivered via the web.
A second Soar server at http://troy.trincoll.edu/~soar/proofer/ does logic proofs using a natural deduction technique. It's not a complete proof system (there are some proofs that it can't find), but PROOFER shows that something more useful than echo can be done with this model!
There is a Lisp-based version of Soar 6 that Prof. Jans Aasman [J.Aasman@research.kpn.com] built a while back. You should contact him for details.
There is a partially completed (as of 13 Apr 00) Java-based version by Sherief (shario@usa.net). Its source code and more details are available at http://www.geocities.com/sharios/soar.htm
If you need to compile Soar with gcc, you can get gcc from http://gcc.gnu.org
The Soar research community is distributed across a number of sites throughout the world. A brief overview of each of these is given below, in no particular order.
Development of models for quantitatively predicting human performance, including GOMS. More complex, forward-looking and sophisticated models are built using Soar. For more information contact Bonnie John (Bonnie.John@cs.cmu.edu).
Learning from external environments including learning to recover from incorrect domain knowledge, learning from experience in continuous domains, compilation of hierarchical execution knowledge, and constructive induction to improve problem solving and planning;
Cognitive modelling of psychological data involving learning to perform multiple tasks and learning to recall declarative knowledge;
Complex, knowledge-rich, real time control of autonomous agents within the context of tactical air engagements (the TacAir-Soar project);
Basic architectural research in support of the above research topics.
The application of Soar to computer games.
Perhaps the largest success for Soar has been flying simulated aircraft in a hostile environment. Jones et al. (Jones, Laird, Nielsen, Coulter, Kenny, & Koss, 1999) report how Soar flew all of the US aircraft in the 48 hour STOW'97 exercise. The simulated pilots talked with each other and ground control, and carried out 95% of the missions successfully.
For more information contact John Laird (laird@eecs.umich.edu) or check out their web site noted above.
NL-Soar work at UofM (formerly Ohio State) focuses on modeling real-time human sentence processing, research on word sense disambiguation, and experimental tests of NL-Soar as a psycholinguistic theory. Other cognitive work in Soar includes modelling learning and performance in human-device interaction.
For more information contact Rick Lewis
For more information contact Richard Young (R.M.Young@herts.ac.uk).
Soar Technology, Inc. work has been primarily focused on projects concerning TacAir-Soar. Soar Tech has also developed useful tools for programming in Soar such as SDB, the Soar Debugger.
The more general of these tutorials, known as the Soar 8 tutorial, was developed for anyone interested in learning Soar, and is based on Soar 8.
The other tutorial was developed mainly with psychologists in mind. The latest version is based on Soar 7. The Web version of this tutorial was developed by Frank Ritter, Richard Young, and Gary Jones.
There is no textbook, as such, on how to program using Soar, although John Rieman has written a set of notes entitled An Introduction to Soar Programming (gzipped postscript format). Even though the notes are based on version 6 of Soar (NNPSCM) they provide a useful bare bones introduction to Soar as a programming language.
From version 7 onwards, Soar is closely linked to Tcl/Tk. If you wish to get hold of a Tcl Tutorial computer aided instruction package, you could start by looking at Clif Flynt's home page or a page of links to resources on the web. There is a set of notes on experiences with using Tcl/Tk to construct external environments, written by Richard Young. These may be useful to anyone who is heading down this line, since they highlight some of the good and bad points about Tcl/Tk.
For building AI systems: Soar's strengths are in integrating knowledge, planning, reaction, search and learning within a very efficient architecture. Example tasks include production scheduling, diagnosis, natural language understanding, learning by instruction, robotic control, and flying simulated aircraft.
If all you want to do is create a small production rule system, then Soar may not be right for you. Also, if you only have limited time that you can spend developing the system, then Soar is probably not the best choice. It currently appears to require a lot of effort to learn Soar, and more practice before you become proficient than is needed if you use other, simpler, systems.
There are, however, a number of basic capabilities that Soar provides as standard. If you need to use these, then Soar may not only be just what you want, but may also be the only system available:
Back to Table of Contents
learning and its integration with problem solving
interruptibility as a core aspect of behaviour
large production rule systems
parallel reasoning
a knowledge description and design approach based on problem spaces
There is now an extension to the Soar FAQ, which is currently called The Soar less than Frequently Asked Questions List (look for latest version). The intention is that this will eventually give rise to a casebook of examples of useful utilities programmed in Soar, as well as providing hints, tips and tricks to get around some of the less common problems of using Soar.
Re-use existing code
Cut and paste productions and code
Work mainly on the top level problem space, using incremental problem space expansion
Use the integrated Emacs environment, SDE, or one of the visual editors
Turn chunking (the learning mechanism) off
Use Tcl/Tk to write simulations for the model to talk to.
Use a programming tool, such as ViSoar
The Tcl/Tk Soar Interface (TSI) is part of Soar 7 and 8.
The Soar 6 User Manual is still available for browsing on the Web.
Subject Line        Action
------------        ------
Subject: help       Returns this message
Subject: bug list   Returns the current bug list
Subject: bug ref n  Returns explicit information about Soar bug number n,
                    where n is an integer

Back to Table of Contents
Congratulations! Your page: http://www.ccc.nottingham.ac.uk/pub/soar/nottingham/soar-faq.html has been selected to receive a Links2Go Key Resource award in the Soar topic! The Links2Go Key Resource award is both exclusive and objective. Fewer than one page in one thousand will ever be selected for inclusion. Further, unlike most awards that rely on the subjective opinion of "experts," many of whom have only looked at tens or hundreds of thousands of pages in bestowing their awards, the Links2Go Key Resource award is completely objective and is based on an analysis of millions of web pages. During the course of our analysis, we identify which links are most representative of each of the thousands of topics in Links2Go, based on how actual page authors, like yourself, index and organize links on their pages. In fact, the Key Resource award is so exclusive, even we don't qualify for it (yet ;)! Please visit: http://www.links2go.com/award/Soar.
Game AI programs have become so sophisticated in recent years that a
few university researchers have taken an interest in the field,
including John E. Laird, a professor of electrical engineering and
computer science at the University of Michigan.
The full interview with John Laird discussing AI and the game industry can be found at:
http://www.latimes.com/technology/la-000097016dec06.story?coll=la%2Dheadline
Back to Table of Contents
That said, there are numerous resources for finding out more. The first
port of call should be Newell's 1990 book,
Unified Theories of Cognition. This makes the most coherent case for
Soar, although it is slowly becoming out of
date with respect to the implementation. This may satisfy you. There
are also two big books, The Soar papers, that
provide numerous examples of Soar's use. The examples tend to be more
biased towards AI, but there are
numerous psychology applications in them.
If you go to the ISI paper archive (or the Nottingham mirror), or often
the CHI and Cognitive Science conference
proceedings, you will find some more up-to-date papers showing what
the community is currently working on. You
may also find the pointers in the FAQ and lower down on individual
web sites to be quite useful in seeing the current
state of play in the area you are interested in.
Richard Young has prepared an annotated bibliography of Soar journal articles
and book chapters (but not
conference papers) related to psychology that is intended to be complete.
Which is the best cognitive model written in Soar is less clear. Candidates include Soar models of teamwork (Tambe, 1997), procedural learning (Nerb, Ritter, & Krems, 1999), natural language understanding (Lewis, 1996), categorization (Miller & Laird, 1996), and using computer interfaces (Howes & Young, 1997).
There is a book from the National Research Council called "Modeling
Human and Organizational Behavior:
Applications to Military Simulations" that provides a summary of Soar.
Todd Johnson proposed a list of fundamental cognitive capacities in
1995 that we have started to organize papers
around. Each paper (or system) has only been cited once, and it is
far from complete, but the framework is now
in place for expanding it. If you have suggestions, please do forward
them for inclusion.
Declarative memory
Pelton, G. A., and Lehman, J. F., ``Everyday
Believability,'' Technical Report CMU-CS-95-133, School of
Computer Science, Carnegie Mellon University,
1995.
Episodic
Learning to recall
Learning to recognize
Learning by analogy
Instrumental Conditioning
Classical Conditioning
Causal Reasoning
Causal induction
Abduction
Nerb, J., Krems, J., & Ritter, F. E. (1993).
Rule learning and the power law: A computational model and
empirical results. In Proceedings of the 15th
Annual Conference of the Cognitive Science Society, Boulder,
Colorado. pp. 765-770. Hillsdale, New Jersey:
LEA.
External Interaction
Pelton, G. A. and Lehman, J. F., The Breakdown of Operators
when Interacting with the External World,
Technical Report CMU-CS-94-121, School of
Computer Science, Carnegie Mellon University, 1994.
Nelson, G., Lehman, J. F., John, B., Integrating Cognitive
Capabilities in a Real-Time Task, in Proceedings
of the Sixteenth Annual Conference of the Cognitive
Science Society, 1994.
Bass, E. J., Baxter, G. D., & Ritter, F.
E. (1995). Using cognitive models to control simulations of complex
systems: A generic approach. AISB Quarterly,
93, 18-25.
Baxter, G. D., & Ritter, F. E. (1997). Model-computer
interaction: Implementing the action-perception loop
for cognitive models. In D. Harris (Ed.), The 1st International
Conference on Engineering Psychology and
Cognitive Ergonomics, 2 (pp. 215-223). October 1996,
Stratford-upon-Avon: Ashgate.
Ritter, F. E., Baxter, G. D., Jones, G., & Young, R. M. (in press). Supporting cognitive models as users. ACM Transactions on Computer-Human Interaction.
Natural language
Lehman, J. F., Van Dyke, J., and Rubinoff, R., Natural
Language Processing for IFORS: Comprehension
and Generation in the Air Combat Domain, in Proceedings
of the Fifth Conference on Computer Generated
Forces and Behavioral Representation, 1995.
STM limitations
Classification
Categorization
Problem solving
Lehman, J. Fain, Toward the Essential Nature of Statistical
Knowledge in Sense Resolution, Proceedings of
the Twelfth National Conference on Artificial Intelligence,
1994.
Ritter, F. E., & Baxter, G. D. (1996). Able, III:
Learning in a more visibly principled way. In U. Schmid, J.
Krems, & F. Wysotzki (Eds.), Proceedings of the
First European Workshop on Cognitive Modeling, (pp.
22-30). Berlin: Forschungsberichte des Fachbereichs
Informatik, Technische Universitaet Berlin.
Recovery from incorrect knowledge
Situated-action
Ritter, F. E., & Bibby, P. A. (1997). Modelling
learning as it happens in a diagrammatic reasoning task (Tech.
Report No. 45). ESRC CREDIT, Dept. of Psychology, U.
of Nottingham.
Reactive behavior
Nielsen, T. E., & Kirsner, K. (1994). A challenge for Soar: Modeling proactive expertise in a complex dynamic environment. In Singapore International Conference on Intelligent Systems (SPICIS-94). B79-B84.
Interruptibility
Nelson, G., Lehman, J. F., John, B. E., Experiences
in Interruptible Language Processing, Proceedings of the
1994 AAAI Spring Symposium on Active NLP, 1994.
Interleaved actions
Parallel reasoning
Managing WM (it keeps growing and growing and growing...)
Imagining
Self explanation
Limited lookahead learning
Reinforcement learning
Delayed feedback learning
Nerb, Krems, and Ritter (1993; 1999) was later revised, and showed some good matches to the shape of variance in the power law taken from 14 subjects and to transfer between abduction problems. The first paper was in the Cog Sci proceedings, the second in the Kognitionswissenschaft [German Cognitive Science] journal. Krems and Nerb (1992) is a monograph of Nerb's thesis, on which the model is based.
Peck and John (1992), later also reported in Ritter and Larkin (1994),
describe Browser-Soar, a model of browsing. It was fit to 10 episodes of
verbal protocol taken from 1 subject. The fit is sometimes
quite good and allowed a measure of Soar's cycle time to be computed against
subjects. It also suggested fairly strongly (because the model was matched
to verbal and
non-verbal actions) that verbal protocols are appearing about 1 second
after their corresponding working memory elements appear.
Nelson, Lehman, and John (1994) proposed a model that integrated multiple forms of knowledge to start to match some protocols taken from the NASA space shuttle test director. There was no detailed match.
Aasman, J., & Michon, J. A. (1992) present a model of driving. While the book chapter does not, I believe, match data tightly, the later Aasman book (1995) does so very well. The book is not widely read however.
John, Vera, and Newell (1992; 1994) present a model matched to 10 seconds of a subject learning how to play Mario Brothers. This was available as a CHI conference paper initially.
Chong, R. S., & Laird, J. E. (1997) present a model that learns how to perform a dual task fairly well. I don't think it's matched to data very tightly, but it shows a very plausible mechanism. This was a preliminary version of Chong's thesis.
Johnson et al. (1991) present a model of blood typing. The comparison is, I believe, done loosely to verbal protocols. This was a very hard task for subjects to do, and the point was to show that a model could do it, and that it was not just intuition that allowed users to perform this task.
There have been a couple of papers on integrating knowledge (i.e., models) in Soar. Lehman, Lewis, and Newell (1991) and Lewis, Newell, and Polk (1989) both present models that integrate submodels. I don't believe that either has been compared with data, but they show how different behaviours can be integrated and note some of the issues that will arise.
Lewis et al. (1990) address some of the questions discussed here about the state of Soar, but from a 1990 perspective.
Several models in Soar have been created that model the power law. These include Sched-Soar (Nerb et al., 1999), physics principle application (Ritter, Jones, & Baxter, 1998), Seibel-Soar, and R1-Soar (Newell, 1990). These models, although they use different mechanisms, explain the power law as arising out of hierarchical learning (i.e., learning parts of the environment or internal goal structure) that initially learns low-level actions that are very common and thus useful; with further practice larger patterns are learned, but they occur less often. The Soar models also predict that some of the noise in behaviour on individual trials reflects different, measurable, and predicted amounts of transfer between problems.
References
Nerb, J., Krems, J., & Ritter, F. E. (1993). Rule learning and the power law:
A computational model and empirical results. In Proceedings of the 15th Annual
Conference of the Cognitive Science Society, Boulder, Colorado. pp. 765-770.
Hillsdale, New Jersey: LEA.
This was revised and extended and published as:
Nerb, J., Ritter, F. E., & Krems, J. (1999). Knowledge level learning and
the power law: A Soar model of skill acquisition in scheduling. Kognitionswissenschaft
[Journal of the German Cognitive Science Society] Special issue on cognitive
modelling and cognitive architectures, D. Wallach & H. A. Simon (eds.).
20-29.
Using a process model of skill acquisition allowed us to examine the microstructure of subjects' performance of a scheduling task. The model, implemented in the Soar architecture, fits many qualitative (e.g., learning rate) and quantitative (e.g., solution time) effects found in previously collected data. The model's predictions were tested with data from a new study where the identical task was given to the model and to 14 subjects. Again a general fit of the model was found, with the restrictions that the task is easier for the model than for subjects and that its performance improves more quickly. The episodic memory chunks it learns while scheduling tasks show how acquisition of general rules can be performed without resort to explicit declarative rule generation. The model also provides an explanation of the noise typically found when fitting a set of data to a power law -- it is the result of chunking over actual knowledge rather than "average" knowledge. Only when the data are averaged (over subjects here) does the smooth power law appear.
Ritter, F. E., & Larkin, J. H. (1994). Using process models to summarize sequences of human actions. Human-Computer Interaction, 9(3), 345-383.
Nelson, G., Lehman, J. F., John, B., Integrating Cognitive Capabilities in a Real-Time Task, in Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 1994.
Aasman, J., & Michon, J. A. (1992). Multitasking in driving. In J. A. Michon & A. Akyürek (Eds.), Soar: A cognitive architecture in perspective. Dordrecht, The Netherlands: Kluwer.
Aasman, J. (1995). Modelling driver behaviour in Soar. Leidschendam, The Netherlands: KPN Research.
John, B. E., Vera, A. H., & Newell, A. (1994). Towards real-time GOMS: A model of expert behavior in a highly interactive task. Behavior and Information Technology, 13, 255-267.
John, B. E., & Vera, A. H. (1992). A GOMS analysis of a graphic, interactive task. In CHI'92 Proceedings of the Conference on Human Factors and Computing Systems (SIGCHI). 251-258. New York, NY: ACM Press.
Chong, R. S., & Laird, J. E. (1997). Identifying dual-task executive process knowledge using EPIC-Soar. In Proceedings of the 19th Annual Conference of the Cognitive Science Society. 107-112. Mahwah, NJ: Lawrence Erlbaum.
Johnson, K. A., Johnson, T. R., Smith, J. W. J., DeJongh, M., Fischer, O., Amra, N. K., & Bayazitoglu, A. (1991). RedSoar: A system for red blood cell antibody identification. In Fifteenth Annual Symposium on Computer Applications in Medical Care. 664-668. Washington: McGraw Hill.
Krems, J., & Nerb, J. (1992). Kompetenzerwerb beim Lösen von Planungsproblemen: experimentelle Befunde und ein SOAR-Modell (Skill acquisition in solving scheduling problems: Experimental results and a Soar model) (FORWISS-Report FR-1992-001). Muenchen: FORWISS.
Peck, V. A., & John, B. E. (1992). Browser-Soar: A computational model of a highly interactive task. In Proceedings of the CHI '92 Conference on Human Factors in Computing Systems. 165-172. New York, NY: ACM.
Lehman, J. F., Lewis, R. L., & Newell, A. (1991). Integrating knowledge sources in language comprehension. In Thirteenth Annual Conference of the Cognitive Science Society. 461-466.
Lewis, R. L., Newell, A., & Polk, T. A. (1989). Toward a Soar theory of taking instructions for immediate reasoning tasks. In Annual Conference of the Cognitive Science Society. 514-521. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Lewis, R. L., Huffman, S. B., John, B. E., Laird, J. E., Lehman, J. F., Newell, A., Rosenbloom, P. S., Simon, T., & Tessler, S. G. (1990). Soar as a Unified Theory of Cognition: Spring 1990. In Twelfth Annual Conference of the Cognitive Science Society. 1035-1042. Cambridge, MA:
Tambe, M. (1997). Towards flexible teamwork. Journal of Artificial Intelligence Research, 7, 83-124.
Simplistically, then, this is the creation of chunks that can be represented in the form a => b, i.e., when a appears on the state, the data for b does too.
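To make the a => b form concrete, here is a hypothetical sketch (the production and attribute names are invented for illustration), using the chunking mechanics described elsewhere in this FAQ. A subgoal production derives b from a on the superstate:

sp {subgoal*derive*b-from-a
   (state <s> ^name subgoal ^superstate <ss>)
   (<ss> ^a <value>)
-->
   (<ss> ^b <value>)}

With learning on, this would yield a chunk of roughly the form:

sp {chunk-1
   :chunk
   (state <ss> ^a <value>)
-->
   (<ss> ^b <value>)}

Thereafter, whenever a appears on the state, b is added directly, without entering the subgoal.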
Back to Table of Contents
CSP - Constraint Satisfaction Problem
EBG - Explanation-Based Generalisation
EBL - Explanation-Based Learning
GOMS - Goals, Operators, Methods, and Selection rules
HISoar - Highly Interactive Soar
ILP - Inductive Logic Programming
NNPSCM - New New Problem Space Computational Model
NTD - NASA Test Director
PEACTIDM - Perceive, Encode, Attend, Comprehend, Task, Intend, Decode, Move
SCA - Symbolic Concept Acquisition
Goals
Problem Spaces
States
Operators
PSCM code
sp {farmer*propose*operator*move-with
   (goal <g> ^problem-space <p> ^state <s>)
   (<p> ^name farmer)
   (<s> ^holds <h1> <h2>)
   (<h1> ^farmer <f> ^at <i>)
   (<h2> ^<< wolf goat cabbage >> <value> ^at <i>)
   (<i> ^opposite-of <j>)
-->
   (<g> ^operator <o>)
   (<o> ^name move-with ^object <value> ^from <i> ^to <j>)}
NNPSCM code
sp {farmer*propose*move-with
   (state <s> ^problem-space <p>)   ; goal <g> has disappeared
   (<p> ^name farmer)
   (<s> ^holds <h1> <h2>)
   (<h1> ^farmer <f> ^at <i>)
   (<h2> ^<< wolf goat cabbage >> <value> ^at <i>)
   (<i> ^opposite-of <j>)
-->
   (<s> ^operator <o>)
   (<o> ^name move-with ^object <value> ^from <i> ^to <j>)}

On the face of it, there do not appear to be many differences, but when you look at the output trace, the consistency of the operator use and the improvement in speed become more apparent:
PSCM Trace
 0: ==>G: G1
 1:  P: P1 (farmer)
 2:  S: S1
 3:  ==>G: G3 (operator tie)
 4:   P: P2 (selection)
 5:   S: S2
 6:   O: O8 (evaluate-object O1 (move-alone))
 7:   ==>G: G4 (operator no-change)
 8:    P: P1 (farmer)
 9:    S: S3
10:    O: C2 (move-alone)

NNPSCM Trace

 0: ==>S: S1
 1:  ==>S: S2 (operator tie)
 2:   O: O8 (evaluate-object O1 (move-alone))
 3:   ==>S: S3 (operator no-change)
 4:    O: C2 (move-alone)

Back to Table of Contents
Soar programmers tend to adopt a convention whereby the name of a production describes what the rule does, and where it should apply. Typically, the conventions suggest that names have the following general form:
problem-space-name*state-name*operator-name*action

How you choose your naming convention is probably less important than the fact that you do use one.
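For example, production names used elsewhere in this FAQ follow this general pattern:

farmer*propose*operator*move-with
copy*superstate*type
debug*monitor*attribute-impasses*stop-pause

Each name records the problem space or purpose, the action, and enough detail to locate the rule when reading a trace.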
Note that, to name working memory elements, Soar uses a single alphabetic character followed by a number, such as p3. If you name a production this way, it will not be printable. (It is also poor style.)
E.g. the production:
sp {create*chunk1
   (state <s> ^superstate <ss>)
-->
   (<ss> ^score 10)}

creates a preference for the attribute "score" with value 10.
That mechanism seems simple enough. Why then do you sometimes get chunks when you do not expect them? This is usually due to shared structure between a subgoal and a superstate. If there is an object in the subgoal which also appears in a superstate, then creating a preference for an attribute on that object will lead to a chunk. These preferences in a superstate are often called "results", although you are free to generate any preference from any subgoal, so that term can be misleading.
For example, suppose that working memory currently looks like:
(S1 ^type T1 ^name top ^superstate nil)
(T1 ^name boolean)
(S2 ^type T1 ^name subgoal ^superstate S1)

so S2 is the subgoal and S1 is the superstate for S2. T1 is being shared. In this case, the production:

sp {create*chunk2
   (state <s> ^type <t> ^name subgoal)
-->
   (<t> ^size 2)}

will create a chunk, even though it does not directly test the superstate. This is because T1 is shared between the subgoal and the superstate, so adding ^size to T1 adds a preference to the superstate.
What to do?
Often the solution is to create a copy of the object in the subgoal. So, instead of (S2 ^type T1) create (S2 ^type T2) and (T2 ^name boolean).
For example,
sp {copy*superstate*type
   (state <s> ^superstate <ss>)
   (<ss> ^type <t>)
   (<t> ^name <n>)
-->
   (<s> ^type <nt>)
   (<nt> ^name <n>)}

This will copy the ^type attribute to the subgoal and create a new identifier (T2) for it. Now you can freely modify T2 in the subgoal, without affecting the superstate.
(1) Learning off
If you run Soar and type "learn -off", all learning is disabled. No chunks will be created when preferences are created in a superstate. Instead, Soar only creates a "justification". (See below for an explanation of these.)
You can type "learn" to see if learning is on or off, and "learn -on" will make sure it is turned on. Without this, you cannot learn anything.
(2) Chunk-free-problem-spaces
You can declare certain problem spaces as chunk-free and no chunks will be created in those spaces. The way to do this is changing right now (in Soar7) because we no longer have explicit problem spaces in Soar. If you want to turn off chunking like this, check the manual.
(3) Quiescence t
If your production tests ^quiescence t it will not lead to a chunk. For example,
sp {create*chunk1
   (state <s> ^name subgoal ^superstate <ss>)
-->
   (<ss> ^score 10)}

will create a chunk, whilst
sp {create*no-chunk
   (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
-->
   (<ss> ^score 10)}

will not create a chunk (you just get a justification for ^score 10). You can read about the reasons for this in the Soar manual. You also do not get a chunk if a production in the backtrace for the chunk tested ^quiescence t:
For example,
sp {create*score
   (state <s> ^name subgoal ^quiescence t)
-->
   (<s> ^score 10)}

sp {now-expect-chunk*but-dont-get-one
   (state <s> ^name subgoal ^score 10 ^superstate <ss>)
-->
   (<ss> ^score 10)}

The test for ^quiescence t is included in the conditions for why this chunk was created -- so you get a justification, not a chunk.
A point to note is that having tested ^quiescence t, the effect of not learning chunks is applied recursively. If you had a production:
sp {create*second-chunk
   (state <s> ^name medium-goal ^superstate <ss>)
   (<s> ^score 10)
-->
   (<ss> ^score-recorded yes)}

and you have a goal stack:
(S1 ^name top ^superstate nil)
(S2 ^name medium-goal ^superstate S1)
(S3 ^name subgoal ^superstate S2)

then if create*chunk1 leads to ^score 10 being added to S2, and then create*second-chunk fires and adds ^score-recorded yes to S1, you will get two chunks (one for each result). However, if you use create*no-chunk instead to add ^score 10 to S2, then create*second-chunk will also not generate a chunk, even though it does not test ^quiescence t itself. That is because ^score 10 is a result created from testing quiescence.
For example, let us say:
sp {create*chunk1
   (state <s> ^name subgoal ^superstate <ss>)
   (<ss> ^name top)
-->
   (<ss> ^score 10)}

leads to the chunk:
sp {chunk-1
   :chunk
   (state <s1> ^name top)
-->
   (<s1> ^score 10)}

If working memory was:
(S1 ^name top ^superstate nil ^score 10)
(S2 ^name subgoal ^superstate S1)

then if you typed "preferences S1 score 1" you would see:
Preferences for S1 ^score:
acceptables:
  10 +
    From chunk-1

(The value is being supported by chunk-1, an instantiated production just like any other production in the system.)
Now, if we changed the production to:
sp {create*no-chunk
   (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
   (<ss> ^name top)
-->
   (<ss> ^score 10)}

we no longer get a chunk; instead we get justification-1. If you were to print justification-1 you would see:
sp {justification-1
   :justification ;not reloadable
   (S1 ^name top)
-->
   (S1 ^score 10)}

This has the same form as chunk-1, except it is just an instantiation. It only exists to support the values in state S1. When S1 goes away (i.e., in this case, when you do init-soar) this justification will go away too. It is like a temporary chunk instantiation. Why have justifications? Well, if you now typed "preferences S1 score 1" you would see:
Preferences for S1 ^score:
acceptables:
  10 +
    From justification-1

Justification-1 is providing the support for the value 10. If the subgoal, S2, goes away, this justification is the only reason Soar retains the value 10 for this slot. If later the ^name attribute of S1 changes to, say, "driving", this justification will no longer match (because it requires ^name top) and the justification and the value will both retract.
For example:
(S1 ^name top ^size large ^color blue ^superstate nil)  ;# The superstate
--------------------------------------                  ;# Conceptual boundary
(S2 ^superstate S1)                                     ;# Newly created subgoal

sp {production0
   (state <s> ^superstate nil)
-->
   (<s> ^size large ^color blue ^name top)}

If we have:
sp {production1
   (state <s> ^superstate <ss>)
   (<ss> ^size large)
-->
   (<s> ^there-is-a-big-thing yes)}

and
sp {production2
   (state <s> ^superstate <ss>)
   (<ss> ^color blue)
-->
   (<s> ^there-is-a-blue-thing yes)}

and
sp {production3
   (state <s> ^superstate <ss>)
   (<ss> ^name top)
-->
   (<s> ^the-superstate-has-name top)}

and
sp {production1*chunk
   (state <s> ^there-is-a-big-thing yes
              ^there-is-a-blue-thing yes
              ^superstate <ss>)
-->
   (<ss> ^there-is-a-big-blue-thing yes)}

and working memory contains (S1 ^size large ^color blue), this will lead to the chunk:
sp {chunk-1
   :chunk
   (state <s1> ^size large ^color blue)
-->
   (<s1> ^there-is-a-big-blue-thing yes)}

The size condition is included because production1 tested the size in the superstate, created the ^there-is-a-big-thing attribute, and this led to production1*chunk firing. Similarly for the color condition (which was also tested in the superstate and led to the result ^there-is-a-big-blue-thing yes). The important point is that ^name is not included. This is because even though it was tested by production3, it was not tested in production1*chunk, and therefore the result did not depend on the name of the superstate.
So, using the example of how conditions in chunks are created (shown above), if the chunk is:
sp {chunk-1
   :chunk
   (state <s1> ^size large ^color blue)
-->
   (<s1> ^there-is-a-big-blue-thing yes)}

and you type "explain chunk-1" you will get something like:
sp {chunk-1
   :chunk
   (state <s1> ^color blue ^size large)
-->
   (<s1> ^there-is-a-big-blue-thing yes +)}

 1 : (state <s1> ^color blue)   Ground : (S1 ^color blue)
 2 : (<s1> ^size large)         Ground : (S1 ^size large)

This shows a list of conditions for the chunk and which "ground" (i.e. superstate working memory element) they tested.
If you want further information about a particular condition you can then type "explain chunk-1 2" (where 2 is the condition number -- in this case (<s1> ^size large)) to get:
Explanation of why condition (S1 ^size large) was included in chunk-1:

Production production1 matched
   (S1 ^size large)
which caused production production1*chunk to match
   (S2 ^there-is-a-big-thing yes)
which caused a result to be generated.

This shows that ^size large was tested in the superstate by production1, which then created (S2 ^there-is-a-big-thing yes). This in turn caused the production production1*chunk to fire and create a result (in this case ^there-is-a-big-blue-thing yes) in the superstate, which leads to the chunk.
This tool should help you spot which production caused the unexpected addition of a condition, or why a particular condition did not show up in the chunk.
How do you get an o-supported preference? The exact conditions for this may change, but the general rule of thumb is this:
"You get o-support if your production tests the ^operator slot or creates structure on an operator"
(Specifically under Soar.7.0.0.beta this is o-support-mode 2 -- which is the mode some people recommend since they find it much easier to understand than o-support-mode 0 which is the current default).
E.g.
sp {o-support
   (state <s> ^operator <o>)
   (<o> ^name set-color)
-->
   (<s> ^color red)}

Here, the ^color red value gets o-support.
I-support, which stands for "instantiation support", means the preference exists only as long as the production which created it still matches. You get i-supported preferences when you do not test an operator:
E.g.
sp {i-support
   (state <s> ^object <o1> <o2>)
   (<o1> ^next-to <o2>)
-->
   (<o1> ^near <o2>)}

In this case (<o1> ^near <o2>) will get i-support. If obj1 ever ceases to be next to obj2, then this production will retract, and the preference for (<o1> ^near <o2>) will also retract. Usually this means the value for ^near disappears from working memory.
To change an o-supported value, you must explicitly remove the old value, e.g.,

   ^color red -     (the minus means reject the value red)
   ^color blue +    (the plus means the value blue is acceptable)

while changing an i-supported value requires only that the creating production retract. The use of i-support makes for less work and can be useful in certain cases, but it can also make debugging your program much harder, and it is recommended that you keep its use as an optimization to a minimum. By default, when you do state elaboration you automatically get i-support, whereas applying, creating or modifying an operator as part of some state structure will lead to o-support.
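Putting the two halves together, a complete operator-application production using this pattern might look like the following sketch (the operator and attribute names are invented for illustration):

sp {apply*change-color
   (state <s> ^operator <o> ^color red)
   (<o> ^name change-color)
-->
   (<s> ^color red -
        ^color blue +)}

Because this production tests the ^operator slot, both preferences receive o-support, so ^color blue persists after the operator is done.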
(One point worth noting is that operator proposals are always i-supported, but once the operator has been chosen, it does not retract even after the operator proposal goes away, because it is given a special support, c-support for "context object support").
To tell if a preference has o-support or i-support, check the preferences on the attribute.
E.g. "pref s1 color" will give:
Preferences for S1 ^color: acceptables: red + [O]while "pref s1 near" gives:
Preferences for S1 ^near: acceptables: O2 +The presence or absence of the [O] shows whether the value has o-support or not.
It therefore follows that by default, you should generally use operators to create and modify state structures whenever possible. This leads to persistent o-supported structures and makes the behaviour of your system much clearer.
I-support can be convenient occasionally, but should be limited to inferences that are always true. For example, if I know (O1 ^next-to O2), where O1 and O2 are objects then it is reasonable to have an i-supported production which infers that (O1 ^near O2). This is convenient because there might be a number of different cases for when an object is near another object. If you use i-supported productions for this, then whenever the reason (e.g. ^next-to O2) is removed, the inference (^near O2) will automatically retract.
Never mix the two for a single attribute. For example, do not have one production which creates ^size large using i-support and another that tests an operator and creates ^size medium using o-support. That is a recipe for disaster.
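As a concrete (hypothetical) sketch of the anti-pattern to avoid:

sp {elaborate*size               ; i-supported: no operator test
   (state <s> ^volume large)
-->
   (<s> ^size large)}

sp {apply*resize                 ; o-supported: tests the operator
   (state <s> ^operator <o>)
   (<o> ^name resize)
-->
   (<s> ^size medium)}

The two values of ^size now have different lifetimes: large retracts whenever ^volume changes, while medium persists until explicitly rejected, and the competing values can also collide in an attribute impasse.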
This chunk/justification may have different support than the support the preference had from the subgoal. It is quite common for an operator in a subgoal to create a result in the superstate which only has i-support (even though it is created by an operator). This is because the conditions in the chunk/justification do not include a test for the super-operator. Therefore the chunk has i-support and may retract, taking the result with it.
NOTE: Even if you are not learning chunks, you are still learning justifications (see above) and the support, for working memory elements created as results from a subgoal, depends on the conditions in those justifications.
Back to Table of Contents

print <id>
   Prints out a value in working memory.

print -stack (pgs in earlier versions of Soar)
   Prints out the current goal stack, e.g.:
     : ==>S: S1
     : ==>S: S2 (state no-change)
matches (ms in earlier versions of Soar)
   Shows you which productions will fire on the next elaboration cycle.
matches <prod_name>
   Shows you which conditions in a given production matched and which did not. This is very important for finding out why a production did or did not fire. E.g.:

   soar> matches production1*chunk
    >>>>
     (state <s> ^there-is-a-blue-thing yes)
     (<s> ^there-is-a-big-thing yes)
     (<s> ^superstate <ss>)
   0 complete matches.

   This shows that the first condition did not match.
preferences <id> <attribute> 1
(The 1 is used to request a bit more detail.) This command shows the preferences for a given attribute and which productions created the preferences. A very common use is "pref <s> operator 1", which shows the preferences for the operator slot in the current state. (You can always use <s> to refer to the current state.)
preferences <id> <attribute> 1

or

preferences <id> <attribute> -names

(They both mean the same thing.)
This shows the values for the attribute and which production created them. For example, given:

(S1 ^type T1)
(T1 ^name boolean)

"pref T1 name 1" will show the name of the production which created the ^name attribute within object T1.
E.g.
^name doug +
^name pearson +

leads to an attribute impasse for the ^name attribute.
It is a bit like an operator tie impasse (where two or more operators are proposed for the operator slot). The effect of an attribute impasse is that the attribute and all its values are removed from working memory (which is probably what makes your program stop working) and an "impasse" structure is created.
It is usually a good idea to include the production:
sp {debug*monitor*attribute-impasses*stop-pause
   (impasse <i> ^object <o> ^attribute <a> ^impasse <type>)
-->
   (write (crlf) |Break for impasse | <type> (crlf))
   (tcl |preferences | <o> | | <a> | 1|)
   (interrupt)}

in your code, for debugging. If an attribute impasse occurs, this production will detect it, report that an impasse has occurred, run preferences on the slot to show you which values were competing and which productions created preferences for those values, and interrupt Soar's execution (so you can debug the problem).
Very, very occasionally you may want to allow attribute impasses to occur within your system and not consider them an error, but that is not a common choice. Most Soar systems never have attribute impasses (while almost all have impasses for the context slots, like operator ties and state no-changes).
Anyway, as of Soar.7.0.0.beta an example is:
pf {(<s> ^operator *)}

which will list all the productions that test an operator.
Or,
pf {(<s> ^operator.name set-value)}

which will list all the productions that test the operator named set-value.
You can also search for values on the right hand sides of productions (using the -rhs option) and in various subsets of rules (e.g. chunks or no chunks).
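For example, assuming the -rhs option takes a pattern of the same form as the examples above (check the manual for the exact syntax in your release), something like:

pf -rhs {(<s> ^there-is-a-big-thing yes)}

should list the productions that create ^there-is-a-big-thing on their right-hand sides.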
sp {elaborate*operator*make-set*dyn-features
   (state <s> ^operator <o>)
   (<o> ^name make-set)
-->
   (<o> ^dyn-features distance + &,
                      gear + &,
                      road-quality + &
                      sign + &,
                      other-road + &,
                      other-sidewalk + &,
                      other-sign + &)}

Because the comma after road-quality's & is missing, this production parses "road-quality & sign" as a valid binary preference, although this is not what was intended. Soar will not currently warn about the duplicate + preferences; you just have to be careful.
(P18) How does one write the 'wait' operator in Soar8.3?
If your wait operator really never needs to do anything, this will
work:
sp {wait*propose*wait
   (state <s> ^problem-space.name wait)
  -(<s> ^operator.name wait)
-->
   (<s> ^operator <o> + <, =)
   (<o> ^name wait)}
sp {propose*wait
   (state <s> ^name <x>)
  -{(<s> ^operator <o>)
    (<o> ^name wait)}
-->
   (<s> ^operator <o> +)
   (<o> ^name wait)}
At a deep level (the inability of users to access preference memory for input WMEs), maybe this is a bug. However, I don't think there's any preference memory for the input WMEs, so I'm not surprised that a reject preference doesn't remove them.
End of Soar FAQ