Soar: Frequently Asked Questions List

Frank E. Ritter: frank.ritter@psu.edu
Jong W. Kim: Jong.Kim@ucf.edu
Joseph P. Sanford: jps314@psu.edu

Last updated 29 February 2012


Table of Contents

Section 0: Introduction

Section 1: General Questions

(G0) Where can I get a copy of the Soar FAQ?

(G1) What is Soar?

(G2) Where can I get more information about Soar?

(G3) What does "Soar" stand for?

(G4) What do I need to be able to run Soar?

(G5) Where can I obtain a copy of Soar?

(G6) Who uses Soar for what?

(G7) How can I learn Soar?

(G8) Is Soar the right tool for me?

(G9) Is there any support documentation available for Soar?

(G10) How can I find out what bugs are outstanding in Soar?

(G11) Other links and awards in Soar

(G12) How does Soar currently stand as a Psychology theory?

(G13) What tools are available for Soar?

(G14) Can Soar be embedded?

(G15) Resources for teaching Soar

(G16) Who has worked on the Soar FAQ?

Section 2: Technological/Terminology Questions

(T1) What is search control?

(T2) What is the data-chunking problem?

(T3) What is the data-chunking generation problem?

(T4) What do all these abbreviations and acronyms stand for?

(T5) What is this NNPSCM thing anyway?

(T6) How does Soar 7 differ from Soar 6?

(T7) What does using a ^problem-space.name flag buy you, apart from providing a conventional label within which to enable operator proposition?

Section 3: Programming Questions

(P1) How can I make my life easier when programming in Soar?

(P2) Are there any guidelines on how to name productions?

(P3) Why did I learn a chunk here?

(P4) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

(P5) What is a justification?

(P6) How does Soar decide which conditions appear in a chunk?

(P7) Why does my chunk appear to have the wrong conditions?

(P8) How is it possible for Soar to generate a duplicate chunk?

(P9) What is all this support stuff about? (Or why do values keep vanishing?)

(P10) When should I use o-support, and when i-support?

(P11) Why does the value go away when the subgoal terminates?

(P12) What's the general method for debugging a Soar program?

(P13) How can I find out which productions are responsible for a WME?

(P14) What's an attribute impasse, and how can I detect them?

(P15) How can I easily make multiple runs of Soar?

(P16) Are there any templates available for building Soar code?

(P17) How can I find all the productions that test X?

(P18) Why doesn't my binary parallel preference work?

(P19) How do I use indifferent preferences to generate probabilistic behavior?

(P20) How can I do multi-agent communication in Soar 7?

(P21) How can I use sockets more easily?

(P22) How do I write fast code?

(P23) In a WME, is the attribute always a symbolic constant?

(P24) How does one write the 'wait' operator in Soar8.3?

(P25) How does one mess with the wme's on the IO link?

(P26) How can I get Soar to interpret my symbols as integers?

(P27) Has there been any work done where new operators are learnt that extend problem spaces?

(P28) How can I find out about a programming problem not addressed here?

(P29) Why are identical WMEs illegal in Soar?

(P30) How do I count objects, including WMEs?

(P31) How to find an average value of multiple objects' common attribute?

Section 4: Downloadable Models

(DM0) Burl: A general learning mechanism

(DM1) A general model of teamwork

(DM2) A model that does reflection

(DM3) A model that does concept/category acquisition

(DM4) A model for Java Agent Framework (JAF) component

(DM5) Herbal: A high level behavior representation language

(DM6) dTank: A competitive environment for distributed agents

(DM7) A model that counts attributes

Section 5: Advanced Programming Tips

(APT1) How can I get log to run faster?

(APT2) Are there any reserved words in Soar?

(APT3) How can I create a new ID and use it in the production?

(APT4) How can I access Tcl and Unix variables?

(APT5a) How can I access Soar objects?

(APT5b) How can I find an object that has attributes and values?

(APT6) How can I trace state changes?

(APT7) How can I connect Soar to a large database?

(APT8) How can/do I add new Tcl/Tk libraries?

(APT9) How can and should I use add-WME and remove-WME?

(APT10) Frequently found user model bugs, or How can I stick beans up my nose metaphorically in Soar?

(APT11) Why do there seem to be no preferences in preference memory for (id ^attribute value) triples acquired through C input functions?

(APT12) Why is "send" in Soar different from "send" in Tk?, or What do I do if my Soar process comes up as 'soar2'?

(APT13) How to get Soar to talk to new Tk displays?

(APT14) How to avoid memory leaks?

(APT15) How to represent waypoints and partial results in hierarchical goal stacks?

(APT16) How do I use SML (Soar Markup Language)?

(APT17) How to create JavaToh project using Eclipse on Windows?

Section 6: Miscellaneous Resources

(M0) Comparisons between Soar and ACT-R

(M1) Unofficial mirror of Soar FAQ and LFAQ

(M2) Soar memorabilia

(M3) What was TAQL?

(M4) Soar and Design models

(M5) Past versions of this FAQ


Section 0: Introduction


This is the introduction to a list of frequently asked questions (FAQ) about Soar with answers. The FAQ is provided as a guide for finding out more about Soar. It is intended for use by all levels of people interested in Soar, from novices to experts. With this in mind, the questions are essentially divided into six parts as follows:

  • Section 1 (General Questions): This section deals with general details about Soar.
  • Section 2 (Technological Issues): This section examines technological issues in Soar.
  • Section 3 (Programming Questions): This section looks at some issues related to programming using Soar.
  • Section 4 (Downloadable Models): This section presents information on several major downloadable models.
  • Section 5 (Advanced Programming Tips): This section provides advanced programming tips to help developers.
  • Section 6 (Miscellaneous Resources): Other resources associated with an understanding of Soar are presented in this section.

Questions in the first section have their numbers prefixed by the letter G (for General); those in the second section by the letter T (for Technological); those in the third by the letter P (for Programming); those in the fourth by the letters DM (for Downloadable Models); those in the fifth by the letters APT (for Advanced Programming Tips); and, finally, those in the sixth by the letter M (for Miscellaneous Resources). The FAQ also attempts to serve as a repository of the canonical "best" answers to these questions. So, if you know of a better answer or can suggest improvements, please feel free to make suggestions.

This FAQ is updated and posted on a variable schedule. We scan the Soar Workshop Proceedings yearly, read Soar-group emails with the FAQ in mind, and solicit answers where we see common and important questions. Full instructions for getting the current version of the FAQ are given in question (G0). To make it easier to spot what has changed since the last version, new and significantly changed items have often been tagged with the "new" icon on each major reference.

Suggestions for new questions, answers, re-phrasings, deletions, etc. are all welcome. Please include the word "FAQ" in the subject of your e-mail correspondence. Please use the mailing lists noted below for general questions, but if they fail or you do not know which one to use, contact one of us.

This FAQ is not just our work, but includes numerous answers from members of the Soar community, past and present. The initial versions were supported by the DERA and the ESRC Centre for Research in Development, Instruction and Training. The Office of Naval Research currently provides some support.

Gordon Baxter put the first version together. Special thanks are due to John Laird and the Soar Group at the University of Michigan for helping to generate the list of questions, and particularly to Clare Bates Congdon, Peter Hastings, Randy Jones, Doug Pearson (who also provided a number of answers), and Kurt Steinkraus. The views expressed here are those of the authors and should not necessarily be attributed to the UK Ministry of Defence, the US Office of Naval Research, or the Pennsylvania State University.

Frank E. Ritter: frank.ritter@psu.edu
Jong W. Kim: Jong.Kim@ucf.edu
Joseph P. Sanford: jps314@psu.edu

Back to Table of Contents


Section 1: General Questions


(G0) Where can I get a copy of the Soar FAQ?

The latest version of the list of Frequently Asked Questions (FAQ) for the Soar cognitive architecture is posted periodically to the Soar-group mailing list, and to the following newsgroups:

comp.ai

sci.cognitive

sci.psychology.theory

If you are reading a plain text version of this FAQ, there is also an HTML version, which you can access using any Web browser, at the following URL:

acs.ist.psu.edu/soar-faq/

If you find that material here is out of date or does not include your favorite paper or author, please let us know. The work and range of material generated by the Soar group is quite broad and has been going on for over 20 years now.

Back to Table of Contents


(G1) What is Soar?

Soar means different things to different people. It is used by AI researchers to construct integrated intelligent agents and by cognitive scientists for cognitive modeling. It can be considered in several different ways:

  1. A theory of cognition: It is the principles behind the implemented Soar system.
  2. A set of principles and constraints on (cognitive) processing: Thus, it provides a (cognitive) architectural framework, within which you can construct cognitive models. In this view, it can be considered as an integrated architecture for knowledge-based problem solving, learning, and interaction with external environments.
  3. An implementation of these principles and constraints as a programming language.
  4. An AI programming language.

Soar incorporates:

  • Problem spaces as a single framework for all tasks and subtasks to be solved
  • Production rules as the single representation of permanent knowledge
  • Objects with attributes and values as the single representation of temporary knowledge
  • Automatic subgoaling as the single mechanism for generating goals
  • Chunking as the single learning mechanism
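
To make these commitments concrete, here is a minimal sketch of an agent in Soar production syntax, modeled on the hello-world example from the Soar Tutorial; the rule and operator names are illustrative. The first rule proposes an operator by adding objects with attributes and values to working memory (the + is an acceptable preference); the second rule applies the selected operator.

```soar
# Propose the hello-world operator: match the state, then create
# an operator object <o> with an acceptable preference (+) and a
# ^name attribute. Temporary knowledge is just (id ^attribute value).
sp {propose*hello-world
   (state <s> ^type state)
-->
   (<s> ^operator <o> +)
   (<o> ^name hello-world)}

# Apply the operator once it has been selected: print a message
# and halt the agent.
sp {apply*hello-world
   (state <s> ^operator <o>)
   (<o> ^name hello-world)
-->
   (write (crlf) |Hello, world!|)
   (halt)}
```

This propose/apply split reflects the single-mechanism commitments above: all permanent knowledge is rules like these, and all deliberate activity is the selection and application of operators.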

Soar is released into the public domain, and thus the software releases are free to download and use. For more information regarding the Soar license, please refer to sitemaker.umich.edu/soar/license. In addition, a Soar video is available to download (10.4M) to find out more about Soar.

Back to Table of Contents


(G2) Where can I get more information about Soar?

Books

The following book provides an introduction to Soar as a unified theory of cognition.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

If you want to find out what people have modeled using Soar, take a look at:

Rosenbloom, P. S., Laird, J. E. & Newell, A. (1993). The Soar Papers: Readings on Integrated Intelligence. Cambridge, MA: MIT Press.

This book also helps you understand modeling and simulation issues in military applications:

Pew, R. W., & Mavor, A. S. (1998). Modeling Human and Organizational Behavior: Applications to Military Simulations. Washington, D. C.: National Academy Press.

In addition, a review by Ritter et al. provides an understanding of the integration and usability of the models in synthetic environments as a general update to Pew & Mavor's book. It also includes a summary comparing Soar and ACT-R as an appendix:

Ritter, F. E., Shadbolt, N. R., Elliman, D., Young, R. M., Gobet, F., & Baxter, G. D. (2003). Techniques for modeling human and organizational behavior in synthetic environments: A supplementary review. Wright-Patterson Air Force Base, OH: Human Systems Information Analysis Center (HSIAC).

Journal Articles and Book Chapters

Recent and forthcoming publications related to Soar include:

Huffman, S., & Laird, J.E. (1995). Flexibly instructable agents. Journal of Artificial Intelligence Research, 3, 271-324.

Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.

Laird, J. E., & Rosenbloom, P. S. (1996) The evolution of the Soar cognitive architecture. In D. Steier & T. Mitchell (eds.), Mind Matters: A Tribute to Allen Newell. Mahwah, NJ: Lawrence Erlbaum Associates.

Lehman, J. F., Laird, J. E., & Rosenbloom, P. S. (1996) A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg & D. Scarborough (eds.), Invitation to Cognitive Science, Volume 4.

Lewis, R. L. (2001). Cognitive theory, Soar. In International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Pergamon (Elsevier Science).

Lewis, R. L. (1999). Cognitive modeling, symbolic. In Wilson, R. & Keil, F. (Eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.

Ritter, F. E. (2003). Soar. In Encyclopedia of Cognitive Science: Macmillan.

Tambe, M., Johnson, W. L., Jones, R. M., Koss, F., Laird, J. E., Rosenbloom, P. S., & Schwamb, K. (1995). Intelligent agents for interactive simulation environments. AI Magazine, 16(1), 15-39.

Web Sites

There are a number of Web sites available that provide information about Soar at varying levels:

The first place you should visit is the Soar homepage at the University of Michigan, where Soar is maintained. The Artificial Intelligence (AI) lab at the University of Michigan has a collection of Web pages on cognitive architectures. This includes a section on Soar, and there is also a Web page available for the Soar group at the University of Michigan.

The Information Sciences Institute (ISI) at the University of Southern California provides a collection of Soar-related Web pages including the Soar group at ISI, and the Soar archive which contains a publication bibliography, document abstracts, official software releases, software manuals, support tools, and information about members of the Soar community.

Carnegie Mellon University - where Soar was originally developed - has its own Soar projects page.

The University of Hertfordshire website includes Soar resources on the Web and elsewhere, a few of Richard Young's papers, and an annotated bibliography of Soar journal articles and book chapters (but not conference papers) related to psychology that is intended to be complete.

There is also a site at the University of Nottingham that includes mirrors of several of the US sites as well as some things developed at Nottingham, including the Psychological Soar Tutorial. A nascent site at the Pennsylvania State University will appear at Frank Ritter's homepage.

Frank Ritter also has papers on Soar that are available for download at acs.ist.psu.edu/papers.

Mailing Lists

There are several mailing lists within the Soar community that serve as forums for discussion and places to raise queries. Currently, the mailing lists are provided via SourceForge.net, where you can subscribe and unsubscribe. The main ones are:

  • Soar-group mailing list (soar-group@lists.sourceforge.net) - You can discuss Soar and its components.

  • Soar-games mailing list (soar-games@lists.sourceforge.net) - This is the discussion list for developing games using Soar. To see the collection of prior postings to the list, please visit the Soar-games Archives.

  • Soar SML projects mailing list (soar-sml-list@lists.sourceforge.net) - This is the discussion list for Soar SML projects.

  • Soar consortium mailing list (soar-consortium@lists.sourceforge.net) - This mailing list reaches the current Soar Consortium Board members.

  • Soar-Umich mailing list (soar-umich@lists.sourceforge.net) - This is a mailing list for Soar researchers at the University of Michigan.

There used to be (1988 to 2000) a European mailing list. The eu-soar list merged with the Soar-group list in June 2000.

Newsgroups

At present, there is no Soar newsgroup. There has occasionally been talk of starting one, but the mailing lists tend to serve for most purposes. Matters relating to Soar occasionally appear on the comp.ai newsgroup.

Soar Workshops

There have been two workshop series, one based in the USA and one based in Europe.

Soar Training

Soar tutorials are offered each year at the Soar Workshop. Soar tutorials have also been given at several conferences and as additional training for academia, industry, and government; the University of Michigan group has probably taught the most. Contact John Laird for details.

A one-day psychology-oriented Soar tutorial was often offered before EuroSoar workshops, and often at AISB conferences.

Back to Table of Contents


(G3) What does "Soar" stand for?

Historically, Soar stood for State, Operator And Result, because all problem solving in Soar is regarded as a search through a problem space in which you apply an operator to a state to get a result. Over time, the community stopped regarding Soar as an acronym, which is why it is no longer written in all upper case. You can, in fact, tell who is in the Soar community by how they write the word Soar (or at least, tell who has read the FAQ!).

Back to Table of Contents


(G4) What do I need to be able to run Soar?

The Soar software page is your first port of call. Older versions of Soar are available here on the Soar software page as well. There are a number of versions of Soar available for different combinations of machines and operating systems.

Soar Version 9.3.1 Release: Soar Suite 9.3.1 is now available for download, and is the most recent version of Soar as of the last date this FAQ was updated.

Soar Version 8.6.1 and 8.6.2 Releases: Soar Suite 8.6.1 and 8.6.2 are now available for download. Doug Pearson and other colleagues added a help menu to the debugger in the 8.6.2 version, and it includes a link to this Soar FAQ.

Soar Version 8.6.0 Release: Soar Suite 8.6.0 is now available for download. This version is a Windows-only release; before the Soar Workshop, it is anticipated that version 8.6.1 will include Linux and Mac releases. One of the major changes is that Soar 8.6.0 provides an alternative approach for interfacing with Soar, called SML (Soar Markup Language). This interface provides several strengths, such as multiple language support (Java, C++, and Tcl) and a uniform method for building I/O interfaces. In addition, the Soar Java debugger is newly provided. The debugger interfaces with Soar via SML and provides much higher performance than the TSI for detailed traces.

Soar Version 8.5.2 Release (as of July 2004): Soar Suite 8.5.2 is available for Windows, Linux, and Mac OS X. You can also check the SourceForge site to download the latest version releases.

Soar Version 8.5.0 Release: Soar Suite 8.5.0 is available for all Windows, Linux, and Mac OS X platforms.

Soar Version 8.3 Release: Soar 8.3 adds several new features to the architecture and resolves a number of bug reports. A change to the default calculation of O-Support may require changes to existing Soar programs. These are described in the Soar 8.3 Release Notes. Soar 8.3 still includes a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar 7-compatibility mode. Available for Unix, Mac, and Windows.

Soar Version 8.2: Soar 8 introduces architectural changes which require changes to existing Soar programs. These are described in the Soar 8.2 Release Notes. Soar 8.2 does include a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar 7-compatibility mode.

Tiny Soar: TinySoar is an implementation of Soar that is intended to run on a memory-constrained device, such as a robot. TinySoar consists of two primary components:

  • A portable, light-weight runtime that implements the Soar decision cycle as a host for a Soar agent.
  • A Tcl extension that is used to create and debug Soar agents, and then export them into a format that can be compiled into the runtime component.

Scott Wallace installed TinySoar on a Lego brick when he was at the University of Michigan. He mentioned that installation went relatively painlessly, except that TinySoar ends up overwriting the brick's I/O software. Thus, note that you must take the batteries out to reset the brick before you can reload anything (e.g., bug fixes to your rules).

There are a few differences between TinySoar and Soar 8.3. One such difference is that it uses (@ -- reconsider preference). Impasses are supported, although chunking is not yet implemented. For more information, please visit the TinySoar webpage.

Previous Versions:

Unix - Soar 7.0.4. This requires about 10 MB of disk space (including source code) for most of what you will need, although the file that you retrieve via ftp is much smaller, because it is archived and compressed. The Unix version of Soar 7 is compiled with Tcl/Tk, which allows users to easily add commands and displays to their environment. The Soar Development Environment (SDE), which is a set of extensions to GNU Emacs, offers a programming support environment for earlier versions of Soar, and can still be used albeit in a more limited way for more recent versions.

Mac - MacSoartk 7.0.4 + CUSP - this version of MacSoar comes with Tk included, along with a number of extensions added by Ken Hughes to improve the usability of Soar on the Mac. You will require around 10 MB of disk space to run this version, and will need version 7 of Mac OS (version 7.5 is recommended). Some versions of Soar can also be run under MacUnix.

PC - There was a version of Soar which runs under Windows 95 and Windows NT. It is a port of Soar to WIN32, and includes Tcl 7.6/Tk 4.2. It is available from the University of Michigan, as a zipped binary file. You should read the accompanying notes (available at the same URL) carefully, because there may be a problem with the Win32 binary release of Tcl 7.6/Tk 4.2.

In addition, there is an older, unsupported PC version called WinSoar, based on Soar version 6.1.0, which includes a simple editing and debugging environment and runs under Microsoft Windows 3.x. It is also known as IntelSoar.

Several people have also successfully managed to install the Unix version of Soar on a PC running under the Linux operating system, although some problems have been reported under versions of Linux that have appeared since December 1996.

Version 7.1 of Soar is currently being revised to utilize the latest release of Tcl/Tk (8.0) prior to its official release. The new release of Soar will include the Tcl/Tk Soar Interface (TSI). Currently, Soar 7.1 uses Tcl 7.6 and Tk 4.2, not Tcl 8.0.

Back to Table of Contents


(G5) Where can I obtain a copy of Soar?

You can simply click on the version you want from the Soar Software Page at the University of Michigan. You can also visit the SourceForge site directly.

There is a Lisp based version of Soar 6 that Dr. Jans Aasman built a while back. You should contact him for details.

Back to Table of Contents


(G6) Who uses Soar for what?

Soar is used by AI researchers to construct integrated intelligent agents, and by cognitive scientists for cognitive modeling.

The Soar research community is distributed across a number of sites throughout the world. A brief overview of each of these is given below, in alphabetical order.

Brigham Young University (BYU)

NL-Soar is being actively developed at BYU Soar Research Group. Deryle Lonsdale (lonz@byu.edu) is the point of contact.

Information Sciences Institute (ISI) and Institute for Creative Technologies (ICT), University of Southern California

Soar projects cover five main areas of research: development of automated agents for simulated environments (in collaboration with UMich); learning (including explanation-based learning); planning; implementation technology (e.g., production system match algorithms); and virtual environments for training. For more information contact Jonathan Gratch (gratch@ict.usc.edu), and Randall Hill (exec-dir@ict.usc.edu).

Pace University

The Robotics Lab at Pace University focuses on building and testing a robot control architecture using the Soar cognitive architecture as its basis and the DARPA Image Understanding Environment to process visual data.

Pennsylvania State University

The Soar work at the Pennsylvania State University involves using Soar models as a way to test theories of learning, creating a high level language for modeling, and improving human-computer interaction. Other projects include the development of the Psychological Soar Tutorial, Herbal and the Soar FAQ. For more information, please contact Frank Ritter (frank.ritter@psu.edu).

Soar Technologies

Bob Marinier (rmarinie@eecs.umich.edu) said that Soar Tech utilizes advanced artificial intelligence that is grounded in scientific principles of human-system interaction and implemented through sound software engineering. The work of Soar Technology, Inc. has primarily focused on projects like TacAir-Soar. Soar Tech develops intelligent autonomous agent software for modeling and simulation, command and control, information visualization, robotics, and intelligence analysis, for the U.S. Army, Navy, Air Force, DARPA, JFCOM, DMSO, and the intelligence community.

University of Leiden

Ernest Bovenkamp (E.G.P.Bovenkamp@lumc.nl) has been using Soar as an agent architecture for parsing and understanding medical images.

University of Michigan

The Soar work at the University of Michigan has four basic research thrusts:

  • Learning from external environments including learning to recover from incorrect domain knowledge, learning from experience in continuous domains, compilation of hierarchical execution knowledge, constructive induction to improve problem solving and planning

  • Cognitive modeling of psychological data involving learning to perform multiple tasks and learning to recall declarative knowledge

  • Complex, knowledge-rich, real time control of autonomous agents within the context of tactical air engagements (the TacAir-Soar project)

  • Basic architectural research in support of the above research topics.

In addition, the application of Soar to computer games is being researched.

Perhaps the largest success for Soar has been flying simulated aircraft in a hostile environment. Jones et al. (1999) report how Soar flew all of the US aircraft in the 48-hour STOW '97 exercise. The simulated pilots talked with each other and with ground control, and carried out 95% of their missions successfully.

For more information, please contact John Laird (laird@umich.edu).

University of Michigan/Psychology

NL-Soar work at the University of Michigan was done by Rick Lewis (formerly at Ohio State), who focused on modeling real-time human sentence processing, research on word sense disambiguation, and experimental tests of NL-Soar as a psycholinguistic theory. Rick has since moved to ACT-R for his work.

Deryle Lonsdale at BYU continues to work on NL-Soar.

For more information, please contact Rick Lewis (rickl@umich.edu)

Back to Table of Contents


(G7) How can I learn Soar?

Probably the best way to learn Soar is to visit a site where people are actively using Soar, and stay for as long as you can manage (months rather than days).

In order to help people, however, there are manuals and tutorials available. Before studying them, you should first visit Soar's getting started page.

The manuals, PST, and tutorials were developed for anyone interested in learning Soar, and are based on Soar 8. They are used in classes to teach Soar (e.g., at the University of Michigan, where they were developed, and at other universities).

Another tutorial was developed mainly with psychologists in mind. The latest version is based on Soar 8. The Web version of this tutorial was developed by Frank Ritter, Richard Young, and Gary Jones. A Powerpoint presentation on the Psychological Soar Tutorial can be found at acs.ist.psu.edu/papers/pst14/bamberg-soar.ppt, which worked with Soar 8.

There is no textbook, as such, on how to program using Soar, although John Rieman has written a set of notes entitled An Introduction to Soar Programming. Even though the notes are based on version 6 of Soar (NNPSCM), they provide a useful bare bones introduction to Soar as a programming language.

In addition, the Soar Coloring Book provides user-friendly instructions with hands-on examples and exercises. Please visit the web page of the Soar Coloring Book.

Also, the Soar dogma will help you get a feel for how to program Soar (you can find the link in section G9).

Andrew Nuxoll wrote: The Soar dogma contains a collection of Soar wisdom gathered during a series of conversations between John Laird and myself as I mounted the Soar learning curve over the course of my first year as a graduate student at the University of Michigan. My hope was that by writing these guidelines down I might ease the curve for future Soar users. To get the most value from this document, I recommend you read it once at the beginning of your Soar experience and then read it again once you've started using Soar in earnest.

From version 7 onwards, Soar is closely linked to Tcl/Tk. If you wish to get a copy of a Tcl tutorial or computer-aided instruction package, you could start by looking at Clif Flynt's home page. These resources may be useful to anyone who is heading down this line, since they highlight some of the good and bad points of Tcl/Tk.

Back to Table of Contents


(G8) Is Soar the right tool for me?

For building AI systems: Soar's strengths are in integrating knowledge, planning, reaction, search, and learning within a very efficient architecture. Example tasks include production scheduling, diagnosis, natural language understanding, learning by instruction, robotic control, and flying simulated aircraft.

If all you want to do is to create a small production rule system, then Soar may not be right for you. Also, if you only have limited time that you can spend developing the system, then Soar is probably not the best choice. It currently appears to require a lot of effort to learn Soar, and more practice before you become proficient than is needed if you use other, simpler, systems, such as Jess.

There are, however, a number of basic capabilities that Soar provides as standard. If you need to use these, then Soar may not only be just what you want, but may also be the only system available:

  • Learning and its integration with problem solving
  • Interruptibility as a core aspect of behaviour
  • Large production rule systems
  • Parallel reasoning
  • A knowledge description and design approach based on problem spaces

For cognitive modeling: Soar's strengths are in modeling deliberate cognitive human behavior, at time scales greater than 50ms. Example tasks that have been explored include human computer interaction tasks, typing, arithmetic, video game playing, natural language understanding, concept acquisition, learning by instruction, and verbal reasoning. Soar has also been used for modeling learning in many of these tasks; however, learning adds significant complexity to the structuring of the task and is not for the casual user. Although many of these tasks involve interaction with external environments and the Soar community is experimenting with models of interaction, Soar does not yet have a standard model for low-level perception or motor control.

Back to Table of Contents


(G9) Is there any support documentation available for Soar?

As mentioned in the previous section, there are manuals, tutorials, and other documents available for Soar users:

  1. Soar 9.3.1 Users Manual
  2. The Soar Tutorial
  3. Soar Dogma

These documents provide a lot of useful information about programming in Soar, and, in particular, version 8.

The Soar 6 User Manual is still available for browsing on the Web.

Back to Table of Contents


(G10) How can I find out what bugs are outstanding in Soar?

To find out what bugs are outstanding in Soar, visit the Soar Bugzilla: winter.eecs.umich.edu/soar-bugzilla. Through this system you can also report any bugs that you find while using Soar.

Back to Table of Contents


(G11) Other links and awards in Soar

Links2Go

This page has won an award, but more importantly, there is another list of Soar resources assembled by Links2Go.

Links2Go
Soar

Congratulations! Your page 
(http://www.ccc.nottingham.ac.uk/pub/soar/nottingham/soar-faq.html) has been 
selected to receive a Links2Go Key Resource award in the Soar topic!

The Links2Go Key Resource award is both exclusive and objective. Fewer than 
one page in one thousand will ever be selected for inclusion. Further, unlike 
most awards that rely on the subjective opinion of "experts," many of whom 
have only looked at tens or hundreds of thousands of pages in bestowing their
awards, the Links2Go Key Resource award is completely objective and is based
on an analysis of millions of web pages. During the course of our analysis, we
identify which links are most representative of each of the thousands of topics
in Links2Go, based on how actual page authors, like yourself, index and
organize links on their pages. In fact, the Key Resource award is so
exclusive, even we don't qualify for it (yet)!

Please visit:
www.links2go.com/award/Soar. 

[The site is no longer active now, but we did win it!]

LA Times - Using Interactive Play to Explore How We Think

Game AI programs have become so sophisticated in recent years that a few university researchers have taken an interest in the field, including John E. Laird, a professor of electrical engineering and computer science at the University of Michigan.

Back to Table of Contents


(G12) How does Soar currently stand as a psychology theory?

Sadly, there is not a cut and dried answer to this question. Answering this fully will require you to figure out what you expect from a psychology theory and then evaluate Soar on those criteria. If you expect a theory to predict that humans are intelligent, and that they have been and can be shown to learn in several domains, it is nearly the only game in town. If you require limited short term memory directly in the architecture, that's not in Soar yet (try ACT-R).

There is a list of Soar-related publications here.

With this in mind, there are numerous resources for finding out more. The first port of call should be Newell's 1990 book, Unified Theories of Cognition. This may satisfy you. It makes the most coherent case for Soar, although it is slowly becoming out of date with respect to the implementation. There are also two big books, The Soar Papers (Vol. 1 and 2), that provide numerous examples of Soar's use. The examples tend to be biased more towards AI, but there are numerous psychology applications in them.

If you go to the ISI paper archive, or often the CHI, ICCM, and Cognitive Science conference proceedings, and the Soar Workshop Proceedings, you will find some more up-to-date papers showing what the community is currently working on. You may also find the pointers in this FAQ to individual web sites quite useful for seeing the current state of play in the area you are interested in.

The best cognitive model written in Soar is less clear. There are Soar models of teamwork (Tambe, 1997), procedural learning (Nerb, Ritter, & Krems, 1999), natural language understanding (Lewis, 1996), categorization (Miller & Laird, 1996), syllogistic reasoning (Polk & Newell, 1995), and using computer interfaces (Howes & Young, 1997; Ritter & Bibby, 2001).

There is a book from the National Research Council called "Modeling Human and Organizational Behavior: Applications to Military Simulations" that provides a summary of Soar.

Todd Johnson proposed a list of fundamental cognitive capacities in 1995 that we have started to organize papers around. Each paper (or system) has only been cited once, and it is far from complete, but the framework is now in place for expanding it. If you have suggestions, please do forward them for inclusion.

  • Abduction
    Nerb, J., Krems, J., & Ritter, F. E. (1993). Rule learning and the power law: A computational model and empirical results. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder, Colorado. pp. 765-770. Hillsdale, New Jersey: LEA.
  • Categorization
  • Causal Reasoning
  • Causal induction
  • Classical Conditioning
  • Classification
  • Declarative memory
    Pelton, G. A., & Lehman, J. F., "Everyday Believability," Technical Report CMU-CS-95-133, School of Computer Science, Carnegie Mellon University, 1995.
  • Delayed feedback learning
  • Episodic learning (learning to recall, learning to recognize)
  • External Interaction
    Pelton, G. A. & Lehman, J. F. (1994). The breakdown of operators when interacting with the external world. Technical Report CMU-CS-94-121, School of Computer Science, Carnegie Mellon University.
    Nelson, G., Lehman, J. F., & John, B. (1994). Integrating cognitive capabilities in a real-time task, In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society.
    Bass, E. J., Baxter, G. D., & Ritter, F. E. (1995). Using cognitive models to control simulations of complex systems: A generic approach. AISB Quarterly, 93, 18-25.
    Baxter, G. D., & Ritter, F. E. (1997). Model-computer interaction: Implementing the action-perception loop for cognitive models. In D. Harris (Ed.), The 1st International Conference on Engineering Psychology and Cognitive Ergonomics, Vol. 2, 215-223, Aldershot, UK:Ashgate.
    Ritter, F. E., Baxter, G. D., Jones, G., & Young, R. M. (2000). Supporting cognitive models as users. ACM Transactions on Computer-Human Interaction, 7(2), 141-173.
  • Imagining
  • Instrumental Conditioning
  • Interleaved actions
  • Interruptibility
    Nelson, G., Lehman, J. F., & John, B. E. (1994). Experiences in Interruptible Language Processing, In Proceedings of the 1994 AAAI Spring Symposium on Active Natural Language Processing.
  • Learning by analogy
  • Limited lookahead learning
  • Managing Working Memory (it keeps growing and growing and growing...)
  • Natural language
    Lehman, J. F., Van Dyke, J., & Rubinoff, R. (1995). Natural language processing for IFORS: Comprehension and generation in the air combat domain, In Proceedings of the Fifth Conference on Computer Generated Forces and Behavioral Representation.
    Lewis, R. L. (1999). Specifying architectures for language processing: Process, control, and memory in parsing and interpretation. In M. Crocker, M. Pickering, & C. Clifton Jr. (Eds.), Architectures and Mechanisms for Language Processing. Cambridge University Press.
    Lewis, R. L. (1998). Leaping off the garden path: Reanalysis and limited repair parsing. In J. D. Fodor & F. Ferreira (Eds.), Reanalysis in Sentence Processing. Boston: Kluwer Academic.
    Lewis, R. L. (1996). Interference in short-term memory: The magical number two (or three) in sentence processing. The Journal of Psycholinguistic Research, 25, 93-115.

    If you are interested in natural language processing in Soar, please check the site below:

    ai.eecs.umich.edu/cogarch0/soar/capa/nlu.html

  • Parallel reasoning
  • Problem solving
    Lehman, J. F. (1994). Toward the Essential Nature of Statistical Knowledge in Sense Resolution. In Proceedings of the Twelfth National Conference on Artificial Intelligence.
    Ritter, F. E., & Baxter, G. D. (1996). Able, III: Learning in a more visibly principled way. In U. Schmid, J., Krems, & F. Wysotzki (Eds.), In Proceedings of the First European Workshop on Cognitive Modeling, (pp.22-30), Berlin: Forschungsberichte des Fachbereichs Informatik, Technische Universitaet Berlin.
    Yost, G. R. (1996). Implementing the Sisyphus-93 Task using Soar/TAQL. International Journal of Human-Computer Studies, 44, 281-301.
  • Reactive behavior
    Nielsen, T. E., & Kirsner, K. (1994). A challenge for Soar: Modeling proactive expertise in a complex dynamic environment. In Singapore International Conference on Intelligent Systems (SPICIS-94). B79-B84.
  • Recovery from incorrect knowledge
  • Reinforcement learning
  • Self explanation
  • Situated-action
    Ritter, F. E., & Bibby, P. A. (2001). Modeling how and when learning happens in a simple fault-finding task. In Proceedings of the Fourth International Conference on Cognitive Modeling.
  • STM limitations
  • Workload
    Gluck, K. A., & Pew, R. W. (Eds.). (2005). Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation. Mahwah, NJ: Lawrence Erlbaum Associates.

Nerb, Krems & Ritter (1993; 1999) was later revised, and showed some good matches to the shape of variance in the Power Law taken from 14 subjects and to transfer between abduction problems. The first paper was in the Cognitive Science proceedings, the second in the journal Kognitionswissenschaft (German Cognitive Science). Krems & Nerb (1992) is a monograph based on Nerb's thesis.

Peck & John (1992), later reanalyzed in Ritter & Larkin (1994), describes Browser-Soar, a model of browsing. It was fit to 10 episodes of verbal protocol taken from 1 subject. The fit is sometimes quite good and allowed a measure of Soar's cycle time to be computed against subjects. It also suggested fairly strongly (because the model was matched to verbal and non-verbal actions) that verbal protocols appear about 1 second after their corresponding working memory elements appear.

Nelson, G., Lehman, J. F., & John, B. E. (1994) proposed a model that integrated multiple forms of knowledge to start to match some protocols taken from the NASA space shuttle test director. There was no detailed match to data.

Aasman, J., & Michon, J. A. (1992) presented a model of driving. While the book chapter does not match data tightly, the later Aasman book (1995) does so very well. The book is not widely available, however.

John, B. E., Vera, A. H., & Newell, A. (1992; 1994) presented a model matched to 10 seconds of a subject learning how to play Mario Brothers. This was available as a CHI conference paper initially.

Chong, R. S., & Laird, J. E. (1997) presented a model that learns how to perform a dual task fairly well. It has not matched to data very tightly, but it shows a very plausible mechanism. This was a preliminary version of Chong's thesis.

Johnson et al. (1991) presented a model of blood typing. The comparison was done loosely to verbal protocols. This was a very hard task for subjects to do, and the point of it was that a model could do the task, and it was not just intuition that allowed users to perform this task.

There have been a couple of papers on integrating knowledge (i.e., models) in Soar. Lehman, J. F., Lewis, R. L., & Newell, A. (1991) and Lewis, R. L., Newell, A., & Polk, T. A. (1989) both presented models that integrate submodels. I don't believe that either has been compared with data, but they show how different behaviours can be integrated and note some of the issues that will arise.

Lewis et al. (1990) addresses some of the questions discussed here about the state of Soar, but from a 1990 perspective.

Several models in Soar have been created that model the Power Law. These include Sched-Soar (Nerb et al., 1999), physics principle application (Ritter, Jones, & Baxter, 1998), Seibel-Soar and R1-Soar (Newell, 1990). These models, although they use different mechanisms, explain the Power Law as arising out of hierarchical learning (i.e., learning parts of the environment or the internal goal structure): low-level actions that are very common, and thus useful, are learned first, and with further practice larger patterns are learned, but these occur less often. The Soar models also predict that some of the noise in behaviour on individual trials is meaningful and measurable, and they predict amounts of transfer between problems.

References

Aasman, J., & Michon, J. A. (1992). Multitasking in driving. In J. A. Michon & A. Akyürek (Eds.), Soar: A cognitive architecture in perspective. Dordrecht, The Netherlands: Kluwer.

Aasman, J. (1995). Modelling driver behaviour in Soar. Leidschendam, The Netherlands: KPN Research.

Chong, R. S., & Laird, J. E. (1997). Identifying dual-task executive process knowledge using EPIC-Soar. In Proceedings of the 19th Annual Conference of the Cognitive Science Society. 107-112. Mahwah, NJ: Lawrence Erlbaum.

John, B. E., Vera, A. H., & Newell, A. (1994). Towards real-time GOMS: A model of expert behavior in a highly interactive task. Behavior and Information Technology, 13, 255-267.

John, B. E., & Vera, A. H. (1992). A GOMS analysis of a graphic, interactive task. In CHI'92 Proceedings of the Conference on Human Factors and Computing Systems (SIGCHI). 251-258. New York, NY: ACM Press.

Johnson, K. A., Johnson, T. R., Smith, J. W. J., De Jong, M., Fischer, O., Amra, N. K., & Bayazitoglu, A. (1991). RedSoar: A system for red blood cell antibody identification. In Fifteenth Annual Symposium on Computer Applications in Medical Care. 664-668. Washington: McGraw Hill.

Krems, J., & Nerb, J. (1992). Kompetenzerwerb beim Lösen von Planungsproblemen: experimentelle Befunde und ein SOAR-Modell (Skill acquisition in solving scheduling problems: Experimental results and a Soar model) No. FORWISS-Report FR-1992-001). FORWISS, Muenchen.

Lehman, J. F., Lewis, R. L., & Newell, A. (1991). Integrating knowledge sources in language comprehension. In Thirteenth Annual Conference of the Cognitive Science Society. 461-466.

Lewis, R. L., Newell, A., & Polk, T. A. (1989). Toward a Soar theory of taking instructions for immediate reasoning tasks. In Annual Conference of the Cognitive Science Society. 514-521. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Lewis, R. L., Huffman, S. B., John, B. E., Laird, J. E., Lehman, J. F., Newell, A., Rosenbloom, P. S., Simon, T., & Tessler, S. G. (1990). Soar as a Unified Theory of Cognition: Spring 1990. In Twelfth Annual Conference of the Cognitive Science Society. 1035-1042. Cambridge:MA.

Nelson, G., Lehman, J. F., & John, B. (1994). Integrating cognitive capabilities in a real-time task. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society.

Nerb, J., Krems, J., & Ritter, F. E. (1993). Rule learning and the power law: A computational model and empirical results. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder, Colorado. 765-770. Hillsdale, New Jersey: LEA.

This was revised and extended and published as:

Nerb, J., Ritter, F. E., & Krems, J. (1999). Knowledge level learning and the power law: A Soar model of skill acquisition in scheduling. Kognitionswissenschaft (Journal of the German Cognitive Science Society), special issue on cognitive modelling and cognitive architectures, D. Wallach & H. A. Simon (Eds.). 20-29.

Using a process model of skill acquisition allowed us to examine the microstructure of subjects' performance of a scheduling task. The model, implemented in the Soar architecture, fits many qualitative (e.g., learning rate) and quantitative (e.g., solution time) effects found in previously collected data. The model's predictions were tested with data from a new study where the identical task was given to the model and to 14 subjects. Again, a general fit of the model was found, with the restrictions that the task is easier for the model than for subjects and that the model's performance improves more quickly. The episodic memory chunks learned while scheduling show how the acquisition of general rules can be performed without resort to explicit declarative rule generation. The model also provides an explanation of the noise typically found when fitting a set of data to the Power Law -- it is the result of chunking over actual knowledge rather than "average" knowledge. Only when the data are averaged (over subjects here) does the smooth Power Law appear.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Peck, V. A., & John, B. E. (1992). Browser-Soar: A computational model of a highly interactive task. In Proceedings of the CHI '92 Conference on Human Factors in Computing Systems. 165-172. New York, NY: ACM.

Ritter, F. E., & Larkin, J. H. (1994). Using process models to summarize sequences of human actions. Human-Computer Interaction, 9(3), 345-383.

Tambe, M. (1997). Towards flexible teamwork. Journal of Artificial Intelligence Research, 7, 83-124.

Back to Table of Contents


(G13) What tools are available for Soar?

There are tools and projects that can help you to develop your own Soar model. Please check this site: sitemaker.umich.edu/soar/soar_tools___projects

SCA and SoarDoc:

Ronald Chong and Robert Wray at Soar Tech have been using SCA to fit some quantitative learning data and make predictions in a real-time performance/learning task (part of the AFRL AMBR program). They presented a paper at the International Conference on Cognitive Modeling on this work:

Wray, R. E. & Chong, R. S. (2003). Quantitative Explorations of Category Learning using Symbolic Concept Acquisition. In Proceedings of the 5th International Conference on Cognitive Modeling. Bamberg, Germany. April.

There are some minor modifications to SCA to update it for Soar 8. They also have (monotonically) extended SCA to map novel feature values to trained values (introducing novel values is often used in psychological "transfer" experiments). The updated source code for SCA, including a description of the transfer task extensions, is available at: www.speakeasy.org/~wrayre/soar/sca/html/index.html

Wray wrote:

Even if you are not currently interested in SCA, I encourage you to visit this URL. We documented SCA using a new tool, SoarDoc, a Doxygen-like tool that automatically generates HTML documentation for Soar systems. It includes a component that creates graphical state descriptions and works with both Soar 7 and Soar 8 source files. SoarDoc was developed by Dave Ray at Soar Technology.

Herbal: Herbal is a high-level behavior representation language. It supports creating cognitive models for the Soar architecture. For more information, please visit acs.ist.psu.edu/projects/Herbal/.

Back to Table of Contents


(G14) Can Soar be embedded?

Soar 8.6.0 addresses this problem most directly now (05/05).

Paul Benjamin wrote:

A student at Pace has embedded Soar within Java. There is a package called feather from NIST that implements a TclInterpreter class in Java, and the student extended that class to a SoarSession class. He used it to write a poker-playing Soar program. I also used it to connect Soar to our robot. With minor changes, it can be used to connect any Java system to Soar.

His source, and all the package info, is at www.codeblitz.com/poker.html.

Another option is to take the C library header files and run SWIG over them (www.swig.org). SWIG is able to generate wrapping code to interface Java, Perl, Tcl, Python, and others to any C/C++ library.

There are several ways that Soar has been tied to other pieces of software. A now out of date overview is available from:

Ritter, F. E. & Major, N. P. (1995). Useful mechanisms for developing simulations for cognitive models. AISB Quarterly, 91(Spring), 7-18.

A list of ways to fit Soar to an application, in order of complexity, is:

  • Don't, just use productions to simulate an external world.
  • Use productions working through IO to simulate the world. This can be accomplished using SML (Soar Markup Language). Further info can be found here.
  • Write small (or large) Tcl/Tk functions that are your simulation, and put them on the IO hooks that get called every elaboration cycle.
  • Use a Tcl/Tk socket utility included in some early Soars (e.g., Soar 7).
  • Use Unix sockets compiled into Soar.
  • Compile your simulation (in C) with Soar. The IFOR people do this. It provides the fastest communication speed, but probably requires the greatest technical expertise.
  • On top of sockets, implement a model of an eye and a hand (this is what EPIC and the Nottingham architecture do).

You can tie Soar to itself through multi-agents, that is, having multiple Soar agents and having them talk with each other.

[I believe this is originally by Tom Head, around Oct '96.]

Reading and writing text from a file can be used for communication. However, using this mechanism for inter-agent communication would be pretty slow and you'd have to be careful to use semaphores to avoid deadlocks. With Soar 7, I see a natural progression from Tcl to C in the development of inter-agent communication.

  1. Write inter-agent communication in Tcl. This is possible with a new RHS function (called "tcl") that can execute a Tcl script. The script can do something as simple as send a message to a simple simulator (which can also be written in Tcl). The simulator can then send the message to the desired recipient(s). You could also do things such as add-wme in a RHS, but I'd advise against it since it's harder to see what's going on and more error prone.
  2. Move the simulator into C code. To speed up the simulated world in which the agents interact, recode the simulator in C. Affecting the simulator can be accomplished by adding a few new Tcl commands. The agents would be largely unchanged and the system would simply run faster.
  3. Move communication to C. This is done by writing Soar I/O functions as documented in section 6.2 of the Soar Users Manual. This is the fastest method.
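A rough sketch of step 1, in Soar 7 syntax. The production name, the ^content attribute, and the Tcl procedure send-to-simulator are all invented for illustration; only the tcl RHS function itself comes from Soar 7:

```
sp {comms*apply*send-message
   (state <s> ^operator <o>)
   (<o> ^name send-message ^content <msg>)
-->
   ; hand the message to a Tcl script, which forwards it to the simulator
   (tcl |send-to-simulator | <msg>)
   ; record that the message went out, so the operator can terminate
   (<s> ^message-sent *yes*)}
```

The Tcl procedure, not shown here, would then deliver the message to the recipient agent or to the simulated world.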

A generic AI engine for video games

Question, Date: April 2003

I would like to know whether Soar is always a stand-alone product or if it can be incorporated in another system. I remember John Laird and Mike van Lent presented a paper at the GDC on why Soar should be used as a generic AI engine for video games. I imagine that game developers would need to somehow embed Soar in their games the way they do third-party graphics engines. So is this possible, or would game and other commercial developers need to use the sockets interface that I believe the Quake bots use?

Answer:

Bob Wray wrote:

Yes, it's completely possible to run Soar within an application. We are using Unreal Tournament for an application and have a version in which Soar is compiled into the game. I can run at least five agents (I haven't tried more than that) within the game process and maintain the game framerate on my 1 GHz P4 laptop.

This is not at all a plug-and-play solution, but there is an API that makes it relatively straightforward to embed Soar, especially if you are already comfortable with interfacing software.

One limitation of embedding Soar agents is that you generally lose the user interface (UI) for Soar (the maintained UI is implemented in Tcl). We handle this in the application by using a socket-based connection to Soar/Tcl for development and the embedded version for performance situations.

Back to Table of Contents


(G15) Resources for teaching Soar

Soar has been taught as a component of non-programming classes on cognitive architectures at several universities, including at least CMU, Michigan, Stirling in Scotland, Pace, Penn State, Portsmouth, and universities in Japan.

If you want students to program in Soar, it is difficult to do to any great depth in two weeks. However, the Michigan group, Ritter, and Young have all offered hands-on one-day tutorials (about 6 hours of instruction), so you can clearly cover something in a day. Covering the same material in a university class takes at least 6 hours of instruction, or two to three weeks, and more if you give homework; in class students get more out of it because the exercises are done in more detail. Herbal has also been used to teach Soar at a conference and in the classroom.

More resources can be found at Ritter's class website. Please contact Frank Ritter.

Back to Table of Contents


(G16) Who has worked on the Soar FAQ?

The current Soar FAQ was initiated and shaped with invaluable help from the following former colleagues:

Kevin Tor: tor@cse.psu.edu
Alexander B. Wood: abwood@unity.ncsu.edu
Gordon D. Baxter: gdb@cs.st-andrews.ac.uk
Marios Avaramides: mariosav@ucy.ac.cy

Bob Marinier (rmarinie@eecs.umich.edu) provided more than 90 comments and suggestions for updating the current version of the Soar FAQ.

Back to Table of Contents


Section 2: Technological/Terminology Questions


(T1) What is search control?

Search control is knowledge that guides the search process by comparing proposed alternatives. In Soar, search control is encoded in production rules that create preferences for operators.

Bob Marinier (rmarinie@eecs.umich.edu) wrote:

Search control rules are rules that prefer the selection of one operator over another. Their purpose is to avoid useless operators and direct the search toward the desired state. Theoretically, you could encode rules that select the correct operator for each state. However, you would have had to already solve the problem yourself to come up with those rules. Our goal is to have the program solve the problem, using only knowledge available from the problem statement and possibly some general knowledge about problem solving. Therefore, search control will be restricted to general problem solving heuristics.
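For example, a hypothetical search-control rule might mark the operator that would repeat the previous move as worst, so it is selected only when nothing else is acceptable. The production name and the ^last-operator-name attribute are invented for illustration, not taken from a released model:

```
sp {farmer*compare*avoid-repeat
   (state <s> ^operator <o> +
              ^last-operator-name <name>)
   (<o> ^name <name>)
-->
   ; create a worst preference for the operator that repeats
   ; the previous action
   (<s> ^operator <o> <)}
```

Rules like this encode general heuristics (here, "don't immediately repeat yourself") rather than a precomputed solution path.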

Back to Table of Contents


(T2) What is the data-chunking problem?

Data chunking is the creation of chunks that allow for either the recognition or retrieval of data that is currently in working memory. Chunking is usually thought of as a method for compiling knowledge or for speed-up learning, not for moving data from working memory into long-term memory. Data chunking is a technique in which chunking does create such recognition or retrieval productions, and thus allows Soar to perform knowledge-level learning.

Simplistically, then, data chunking is the creation of chunks of the form a => b, i.e., when 'a' appears on the state, the data for 'b' does too. Another example is "The capital of France? => Paris".
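To make that a => b shape concrete, a hypothetical retrieval chunk for the capital-of-France example would look something like the sketch below (the ^query, ^object, and ^answer attributes are invented for illustration):

```
sp {chunk*recall*capital-of-france
   (state <s> ^query capital-of
              ^object france)
-->
   ; the datum is retrieved directly, with no further search
   (<s> ^answer paris)}
```

The point of the data-chunking technique is to arrive at a chunk of this shape without 'paris' also appearing in the conditions, which is what would make the chunk useless for retrieval.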

Bob Marinier wrote:

The data-chunking problem is discussed in section 6.4 of Newell's "Unified Theories of Cognition" (1990).

"This is what we call the data-chunking problem - that chunks that arise from deliberate attempts to learn an arbitrarily given data item will have the item itself somewhere in the conditions, making the chunk useless to retrieve the item. Normally, chunking acquires procedures to produce an effect. When what is given is a data item, not a bit of behavior, the attempt to convert it to procedural learning seems stymied". (p 327)

To solve the data-chunking problem, Newell (1990) suggests the following. Note, GID is an example object that Soar is trying to learn to recognize:

"The key idea is to separate generating an object to be recalled from testing it. The desired object here is GID. We want the process that generates GID not to know that it is the response that is to be made - not to be the process that tests for the item. Thus, to achieve the required learning, Soar should create for itself a task to be solved by generate and test. It is alright for the test to contain the result, namely, GID. The test is to find an instance of GID, so the test not only can have GID in some condition, it should have it. As long as the generator doesn't produce GID by consulting the given object (GID), then GID will not occur in the conditions of the chunk that will be built". (p 331-332)

For more information, see question T3 and read section 6.4 of "Unified Theories of Cognition" (Newell, 1990).

Back to Table of Contents


(T3) What is the data-chunking generation problem?

Whenever you subgoal to create a data chunk, you have to generate everything in the subgoal that might be learned, and then use search control to make sure that the chunks you build are correct. Doing this without touching the supergoal means that the chunk that is learned does not depend on the cue. The cue is then used in search control (which is not chunked) to select the object (the response) to return.

Back to Table of Contents


(T4) What do all these abbreviations and acronyms stand for?

  • CSP: Constraint Satisfaction Problem
  • EBG: Explanation-Based Generalisation
  • EBL: Explanation-Based Learning
  • GOMS: Goals, Operators, Methods, and Selection rules
  • HI-Soar: Highly Interactive Soar
  • ILP: Inductive Logic Programming
  • PSCM: Problem Space Computational Model
  • NNPSCM: New New Problem Space Computational Model
  • NTD: NASA Test Director
  • PEACTIDM: Perceive, Encode, Attend, Comprehend, Task, Intend, Decode, Move
  • SCA: Symbolic Concept Acquisition
  • PE: Persistent elaboration
  • IE: I-supported elaboration

Back to Table of Contents


(T5) What is this NNPSCM thing anyway?

Really, this is a number of questions rolled into one:

  1. What is the PSCM?
  2. What is the NNPSCM?
  3. What are the differences between the two?

What is the PSCM?

The Problem Space Computational Model (PSCM) is an idea that revolves around Soar's commitment to using problem spaces as the model for all symbolic goal-oriented computation. The PSCM is based on the primitive acts that are performed using problem spaces to achieve a goal. These primitive acts are based on the fundamental object types within Soar, i.e., goals, problem spaces, states, and operators. The functions that they perform are shown below:

Goals

  1. Propose a goal.
  2. Compare goals.
  3. Select a goal.
  4. Refine the information available about the current goal.
  5. Terminate a goal.

    Problem Spaces

  6. Propose a problem space for the goal.
  7. Compare problem spaces for the goal.
  8. Select a problem space for the goal.
  9. Refine the information available about the current problem space.

    States

  10. Propose an initial state.
  11. Compare possible initial states.
  12. Select an initial state.
  13. Refine the information available about the current state.

    Operators

  14. Propose an operator.
  15. Compare possible operators.
  16. Select an operator.
  17. Refine the information available about the current operator.
  18. Apply the selected operator to the current state.

More details about exactly what these functions do can be found in the current user manual.

What is the NNPSCM?

The New New Problem Space Computational Model (NNPSCM) addresses some of the issues that made the implementation of the PSCM run relatively slowly. It reformulates some of the issues within the PSCM without actually removing them, and hence changes the way in which models are implemented, but we are not aware of a model that has been fundamentally influenced by this change. From version 7.0.0 of Soar onwards, all implementation is performed using the NNPSCM; in later releases of version 6.2, you can choose which version you require (NNPSCM or non-NNPSCM) when you build the executable image for Soar. The easiest way to illustrate the NNPSCM is to look at the differences between it and the PSCM.

What are the differences between the two?

The NNPSCM and the PSCM can be compared and contrasted in the following ways:

  1. The nature of problem space functions for NNPSCM and PSCM remain essentially the same as those described in Newell, A., Yost, G.R., Laird, J.E., Rosenbloom, P.S., & Altmann, E. (1991). Formulating the problem space computational model. In R.F. Rashid (Ed.), Carnegie-Mellon Computer Science: A 25-Year Commemorative (255-293). Reading, MA: ACM-Press (Addison-Wesley).
  2. The goal state from the PSCM now becomes just another state, rather than being treated as a separate, special state.
  3. Selecting between problem spaces in the NNPSCM does not require any decision-making process. The problem space is simply formulated as an attribute of the state. It can be assigned in a lightweight way using an elaboration rule, or it can be deliberately assigned by an operator application. Currently, this choice is left up to the programmer or modeler.
  4. Models implemented using the NNPSCM are generally faster than their PSCM equivalents, because fewer decision cycles (DCs) are required. Fewer DCs are required because there is no need to decide between problem spaces, and, in later versions, states.
  5. Using the NNPSCM is presumed to allow better inter-domain, and inter-problem-space transfer of learning to take place.
  6. The use of the NNPSCM should help in the resolution and understanding of the issues involved in external interaction.

The differences may become more evident if we look at code examples (written using Soar version 6.2.5) for the farmer, wolf, goat, and cabbage problem that comes as a demo program in the Soar distribution.

PSCM Code

(sp farmer*propose*operator*move-with
     (goal <g> ^problem-space <p>
                     ^state <s>)
     (<p> ^name farmer)
     (<s> ^holds <h1> <h2>)
     (<h1> ^farmer <f> ^at <i>)
     (<h2> ^<< wolf goat cabbage >> <value>
                 ^at <i>)
     (<i> ^opposite-of <j>)
     -->
     (<g> ^operator <o>)
     (<o> ^name move-with
                ^object <value>
                ^from <i>
                ^to <j>))
 

NNPSCM Code

(sp farmer*propose*move-with
     (state <s> ^problem-space <p>) ; goal <g> has disappeared
     (<p> ^name farmer)             ; <p> is added without a DC
     (<s> ^holds <h1> <h2>)
     (<h1> ^farmer <f> ^at <i>)
     (<h2> ^<< wolf goat cabbage >> <value>
               ^at <i>)
     (<i> ^opposite-of <j>)
     -->
     (<s> ^operator <o>)
     (<o> ^name move-with
               ^object <value>
               ^from <i>
               ^to <j>))
 

On the face of it, there do not appear to be many differences. When you look at the output traces, however, the advantages become more apparent: operator use is consistent (no state and problem space selections intervene), and the model runs faster:

PSCM Trace

0: ==>G: G1
1:    P: P1 (farmer)
2:    S: S1
3:    ==>G: G3 (operator tie)
4:       P: P2 (selection)
5:       S: S2
6:       O: O8 (evaluate-object O1 (move-alone))
7:       ==>G: G4 (operator no-change)
8:          P: P1 (farmer)
9:          S: S3
10:          O: C2 (move-alone)
 

NNPSCM Trace

0:==>S: S1
1:   ==>S: S2 (operator tie)
2:      O: O8 (evaluate-object O1 (move-alone))
3:      ==>S: S3 (operator no-change)
4:         O: C2 (move-alone)
 

Back to Table of Contents


(T6) How does Soar 7 differ from Soar 6?

(T6-1) Question from Monica Weiland [monica_weiland@chiinc.com]

Basically, the Soar kernel architecture in Soar 7 is the same as that in Soar 6, with additional bug fixes, and changes to the timers to be more accurate and informative when the 'stats' command is issued. There were also changes to using multiple agents, and the NNPSCM is now the only PSCM model supported (in Soar 6, both the NNPSCM (no explicit problem space slot) and the PSCM (explicit problem space slot) were supported). The advantages of Soar 7 include all the user extensions that can be made by using Tcl for the user interface.

In the Soar distribution there is a tool for converting Soar 6 format productions to Soar 7. It was written in C by Doug Pearson and is called convert. I don't know if it is included in the Mac distribution, but I would assume it is. It does a good job of converting productions; most applications run after being processed by this routine. If Monica has the SimTime productions, she should be able to convert them to Soar 7. If she has trouble with the conversions, she can contact me and I'll help her figure them out.

Answer from: "Karen J. Coulter" [kcoulter@eecs.umich.edu]
Date: June 9, 1997.

There are also some notes describing explicit changes in the command set that were presented as part of a talk at the 15th Soar workshop. These notes are out of date, but not so out of date as to be useless. They are in the files commands1.ps.Z and commands2.ps.Z.

(T6-2) Question from Tony Hirst

In general, what does just using a ^problem-space.name flag buy you, apart from providing a conventional label within which to enable operator proposition? Is there a strong/formal way in which the problem-space elaboration was intended to be used? For example, we might encapsulate the state required within the problem space by copying only those state <s> elaborations necessary for use in that problem space onto ^problem-space <p> (though this incurs an overhead that the rather more general:

<s> ^problem-space (<p> ^name whatever)
-->
<p> ^pstate <s>
 

avoids); productions inside the problem space may then make changes to structure dangling off <p> (or ^problem-space.pstate) rather than explicitly to <s> (even though it is the same structure; the point is that changes are forced to be made ostensibly within the problem space). Problem-specific elaborations should also be made to <p> (<ps>) rather than <s>. By explicitly making changes to either ^top-state <ts> or <p> (^problem-space.pstate <ps>), rather than <s>, we are forced to think about and state the context of the state changes we are making.

Finally, is there a convention for naming problem spaces within subgoal spaces (e.g., if operator.name thing reaches an impasse and forces a subgoal, should the subgoal be elaborated with problem-space.name thing, or an arbitrary name)?

Answer from Randy Jones

My opinion is that the ^problem-space flag is an anachronism that should be discarded, especially for non-trivial Soar programs. The flag originally arose from Newell and Simon's problem-space hypothesis, and the notion that people tend to employ specific sets of methods and goals for specific types of problems. What this "flag-based" representation neglects, however, is the potential for sharing methods and goals across types of problems that we might normally view as being in distinct problem spaces. In TacAir-Soar, for example, we have *many* operators that can apply in a variety of different states, independent of the problem-space flag on that state. In general, (and again in my opinion), operators should be sensitive to patterns of data represented on the "current state", rather than being a slave to a single, discrete, problem-space flag. This allows the use of operators to transfer across problem spaces in useful, and sometimes surprising ways. Under this view, problem spaces "emerge" from patterns of data, rather than being defined by a single flag.

Answer from Richard Lewis, Date: February 10, 1999

While I agree with much of what Randy says, I wouldn't be too quick to discard the use of a problem-space flag. The problem-space flag permits the agent to decide (based on some knowledge) to solve this problem in some particular way, then to change its decision later and attempt a different way, etc. It is an additional layer of deliberate control that allows the agent to "hold in place" the outcome of some decision and use that decision to guide problem solving behavior over a period of time that extends beyond a single decision cycle. Thus, the agent is not just a slave to whatever immediate associations come to mind.

What I am really advocating is a view that keeps a mix of the data-driven, opportunistic style that Randy describes, along with the ability to exert more control over some extended periods of time. Such a mix may hinge in part on using the problem space flag in ways that we haven't usually done in the past: as search control rather than generator. There's a sense in which this kind of mix can't be discarded as long as it is architecturally possible, the system is learning, and we can't see any clear reasons why the agent in principle can't arrive, via learning, at a point where it behaves in such a way.

Answer from John Laird, Date: February 10, 1999

On the ^problem-space.name issue:

In earlier versions of Soar, the problem space was selected just like the operator, and thus was open to preferences. However, for the reasons Randy mentioned (problem spaces may be more emergent from many properties of the state than just a specific symbol) we abandoned the selection of the problem space. For many tasks, having a problem space symbol might be a good way to discriminate during operator selection. The convention that I've adopted is to copy the name of a super-operator to be the name of the state created below it. This doesn't cover tie impasses or state no-change, but works very well for operator no-changes.
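The convention John describes can be implemented with a single elaboration rule. The following sketch is hypothetical code, not taken from the Soar distribution (check your version's manual for exact attribute names); it copies the name of the super-operator down to the newly created substate:

sp {elaborate*substate*name
   (state <s> ^superstate <ss>)    ; <s> is the new substate
   (<ss> ^operator <o>)            ; the operator still selected above
   (<o> ^name <name>)
   -->
   (<s> ^name <name>)}             ; substate inherits the operator's name
 

Because this rule tests the superstate's selected operator, it applies to operator no-change impasses, where that operator remains in place; as John notes, tie and state no-change impasses need different treatment.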

Back to Table of Contents


(T7) What does using a ^problem-space.name flag buy you apart from providing a conventional label within which to enable operator proposition?

This question, and the answers from Randy Jones, Richard Lewis, and John Laird, are given under (T6-2) above.

Back to Table of Contents


Section 3: Programming Questions


(P1) How can I make my life easier when programming in Soar?

There are a number of ways to make your life easier when programming in Soar. Some simple high level considerations are:

  • Use a programming tool, such as Visual Soar or Herbal (explained below)
  • Use the Soar debugger or the TSI
  • Re-use existing code
  • Cut and paste productions and code
  • Work mainly on the top level problem space, using incremental problem space expansion
  • Use the integrated Emacs environment, SDE, or one of the visual editors
  • Turn chunking (the learning mechanism) off
  • Use the Tcl/Tk to write simulations for the model to talk to, rather than use external simulations

The Tcl/Tk Soar Interface (TSI) is part of Soar 7 and 8.

Herbal is a high-level behavior representation language.

A Brief History of Soar Interfaces

The first Soar interface was probably the command line from OPS5. One of the first Soar graphical interfaces was written by Amy Unruh and ran on TI Lisp machines. Brian Milnes also wrote a graphical interface that ran in Common Lisp under X windows. Blake Ward probably wrote the first Emacs mode for Soar. Frank Ritter revised this, then Mike Hucka revised it, then Frank revised it, and this went on for a while until it went to Mike and stayed at Michigan. While it was with Frank there was a submode for writing TAQL code (Ritter, 1991b). Various reports were included in the Soar Workshop Proceedings. There was a manual (Ritter, Hucka, & McGinnis, 1992). These were fairly widely used systems; perhaps half of the Soar users used them at the time. This mode is still available.

Frank Ritter wrote a graphic user interface for Soar, the Developmental Soar Interface, or DSI, in Common Lisp using Garnet. This was reported in his thesis (Ritter, 1993; Ritter & Larkin, 1994) and at CHI (Ritter, 1991a). It was used to generate the initial polygons for the Soar video (Lehman, Newell, Newell, Altmann, Ritter, & McGinnis, 1994). This interface probably had about 10 users at the most, and was abandoned when Soar was implemented in C.

The Tcl/Tk Soar Interface (TSI) is a successor semi-graphical interface, started around 1996, that takes advantage of the inclusion of Tcl/Tk with Soar (Ritter, Jones, & Baxter, 1998). Numerous people have now contributed to it. It is currently being developed at Michigan.

In 1995, a rationalised list of command aliases was proposed for the command line (Nichols & Ritter, 1995). These were used in Soar 7 and, I believe, in Soar 8. An unpublished study found that even novices could profit from these aliases.

New interfaces to Soar include a revised version of the TSI (version 3, 6/00), which includes viewers for the working memory tree, production matches, and chunks (contact Karen Coulter (kcoulter@eecs.umich.edu) and/or Mazin Assanie (mazina@eecs.umich.edu)). A Soar debugger to provide greater control over breakpoints, etc., is also in the works (contact Glenn Taylor, glenn@soartech.com). Visual Soar is a Java environment that helps ensure all attribute names are correct when writing Soar programs, and supports cutting, pasting, and reusing attribute names and sets of names. Laird, Jones, and Bauman at Michigan are working on this effort. A related effort by Tony Hirst is ongoing at the Open University (a.j.hirst@open.ac.uk).

References

Lehman, J. F., Newell, A., Newell, P., Altmann, E., Ritter, F., & McGinnis, T. (1994). The Soar Video. 11 min. video, The Soar Group, Carnegie-Mellon University.

Nichols, S., & Ritter, F. E. (1995). Theoretically motivated tool for automatically generating command aliases. In CHI '95, Human Factors in Computer Systems. 393-400. New York, NY: ACM.

Ritter, F. E. (1991a). How the Soar interface uses Garnet. Video (2 min.) shown at the Garnet user interface development environment special interest subgroup meeting at the 1991 Human Factors in Computing Systems Conference (CHI'91).

Ritter, F. E. (1991b). TAQL-mode Manual. The Soar group.

Ritter, F. E. (1993). TBPA: A methodology and software environment for testing process models' sequential predictions with protocols (Technical Report No. CMU-CS-93-101). School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Ritter, F. E., & Larkin, J. H. (1994). Using process models to summarize sequences of human actions. Human-Computer Interaction, 9(3), 345-383.

Ritter, F. E., Hucka, M., & McGinnis, T. F. (1992). Soar-mode Manual (Tech. No. CMU-CS-92-205). School of Computer Science, Carnegie-Mellon University.

Ritter, F. E., Jones, R. M., & Baxter, G. D. (1998). Reusable models and graphical interfaces: Realising the potential of a unified theory of cognition. In U. Schmid, J. Krems, & F. Wysotzki (Eds.), Mind modeling - A cognitive science approach to reasoning, learning and discovery. 83-109. Lengerich, Germany: Pabst Scientific Publishing.

Back to Table of Contents


(P2) Are there any guidelines on how to name productions?

Productions will load as long as their names are taken from a set of legal characters, essentially alphanumerics and "-" and "*". Names consisting only of numerics are not allowed.

Soar programmers tend to adopt a convention whereby the name of a production describes what the rule does, and where it should apply. Typically, the conventions suggest that names have the following general form:

 
problem-space-name*state-name*operator-name*action

How you choose your naming convention is probably less important than the fact that you do use one.
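For example, the farmer demo's proposal rule shown under (T5) follows a short form of this convention:

farmer*propose*move-with
 

A fuller name for a hypothetical rule that applies that operator might be farmer*apply*move-with*update-location (an invented name, used here only to illustrate the pattern).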

Note that, to name working memory identifiers (objects), Soar uses a single alphabetic character followed by a number, such as p3. If you name a production this way it will not be printable. (It is also poor style.)

Bob Marinier offered the following tips on naming productions:

  • The way productions are named affects the way Visual Soar and the TSI work (i.e., the way they will group the rules, etc.).
  • For more information, refer to the manual (section 3.3.1 in the Soar 8.5 manual).

Back to Table of Contents


(P3) Why did I learn a chunk there?

Soar generally learns a chunk when a result is created in a superstate by a production which tests part of the current subgoal.

e.g., the production:

sp {create*chunk1
   (state <s> ^superstate <ss>)
   -->
   (<ss> ^score 10)}
 

creates a preference in the superstate for the attribute "score" with value 10.

That mechanism seems simple enough. Why then do you sometimes get chunks when you do not expect them? This is usually due to shared structure between a subgoal and a superstate. If there is an object in the subgoal which also appears in a superstate, then creating a preference for an attribute on that object will lead to a chunk. These preferences in a superstate are often called "results", although you are free to generate any preference from any subgoal, so that term can be misleading.

For example, suppose that working memory currently looks like:

(S1 ^type T1 ^name top ^superstate nil)
(T1 ^name boolean)
(S2 ^type T1 ^name subgoal ^superstate S1)
 

so S2 is the substate and S1 is its superstate. T1 is shared between them. In this case, the production:

sp {lead-to*chunk2
   (state <s> ^name subgoal ^type <t>)
   -->
   (<t> ^size 2)}
 

will create a chunk, even though it does not directly test the superstate. This is because T1 is shared between the subgoal and the superstate, so adding ^size to T1 adds a preference in the superstate.

What to do?

Often the solution is to create a copy of the object in the subgoal. So, instead of (S2 ^type T1) create (S2 ^type T2) and (T2 ^name boolean).

For example (in pseudocode),

sp {copy*superstate*type
   (state <s> ^superstate <ss>)
   (<ss> ^type <t>)
   (<t> ^name <n>)
   -->
   (<s> ^type <new-type>)
   (<new-type> ^name <n>)}
 

This will copy the ^type attribute to the subgoal and create a new identifier (T2) for it. Now you can freely modify T2 in the subgoal, without affecting the superstate.

Back to Table of Contents


(P4) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

There are a number of situations where you can add something to the superstate and not get a chunk:

(1) Learning off

If you run Soar and type "learn -off" all learning is disabled. No chunks will be created when preferences are created in a superstate. Instead, Soar only creates a "justification". (See below for an explanation of these.)

You can type "learn" to see if learning is on or off, and "learn -on" will make sure it is turned on. Without this, you cannot learn anything.

(2) Chunk-free-problem-spaces

You can declare certain problem spaces as chunk-free, and no chunks will be created in those spaces. The way to do this is changing right now (in Soar 7) because we no longer have explicit problem spaces in Soar. If you want to turn off chunking like this, check the manual.

(3) Quiescence t

If your production tests ^quiescence t, it will not lead to a chunk. For example,

sp {lead-to*chunk1
   (state <s1> ^name subgoal ^superstate <s2>)
   -->
   (<s2> ^score 10)}
 

will create a chunk, whilst

sp {lead-to*no-chunk
   (state <s1> ^name subgoal ^superstate <s2> ^quiescence t)
   -->
   (<s2> ^score 10)}
 

will not create a chunk (you just get a justification for ^score 10). You can read about the reasons for this in the Soar manual. You also do not get a chunk if a production in the backtrace for the chunk tested ^quiescence t:

For example,

sp {create*score
   (state <s> ^name subgoal ^quiescence t)
   -->
   (<s> ^score 10)}
sp {now-expect-chunk*but-dont-get-one
   (state <s> ^name subgoal ^score 10 ^superstate <ss>)
   -->
   (<ss> ^score 10)}
 

The test for ^quiescence t is included in the conditions for why this chunk was created--so you get a justification, not a chunk.

A point to note is that, once ^quiescence t has been tested, the effect of not learning chunks is applied recursively. If you had a production:

sp {lead-to*second-chunk
   (state <s> ^name medium-goal ^superstate <ss>)
   (<s> ^score 10)
   -->
   (<ss> ^score-recorded yes)}
 

and you have a goal stack:

(S1 ^name top ^superstate nil)
(S2 ^name medium-goal ^superstate S1)
(S3 ^name subgoal ^superstate S2)
 

then if lead-to*chunk1 leads to ^score 10 being added to S2 and then lead-to*second-chunk fires and adds ^score-recorded yes to S1, you will get two chunks (one for each result). However, if you use lead-to*no-chunk instead, to add ^score 10 to S2, then lead-to*second-chunk will also not generate a chunk, even though it does not test ^quiescence t itself. That is because ^score 10 is a result created from testing quiescence.

Back to Table of Contents


(P5) What is a justification?

Any time a subgoal creates a preference in a superstate, a justification is always created, and a chunk will also be generated unless you have turned off learning in some manner (see above). If learning has been disabled, then you only get a justification. A justification is effectively an instantiated chunk, but without any chunk being created.

For example, let's say the production:

sp {lead-to*chunk1
   (state <s1> ^name subgoal ^superstate <ss1>)
   (<ss1> ^name top)
   -->
   (<ss1> ^score 10)}

leads to the chunk:

sp {chunk-1
   :chunk
   (state <s1> ^superstate <ss1>)
   (<ss1> ^name top)
   -->
   (<ss1> ^score 10)}
 

If working memory was:

(S1 ^name top ^superstate nil ^score 10)
(S2 ^name subgoal ^superstate S1)
 

then if you typed "preferences S1 score 1" you would see:

Preferences for S1 ^score:
acceptables:
10 +
From chunk-1
 

(The value is being supported by chunk-1, an instantiated production just like any other production in the system).

Now, if we changed the production to:

sp {create*no-chunk
   (state <s1> ^name subgoal ^superstate <ss1> ^quiescence t)
   (<ss1> ^name top)
   -->
   (<ss1> ^score 10)}
 

We do not get a chunk anymore. We get justification-1. If you were to print justification-1 you would see:

sp {justification-1
   :justification ;not reloadable
   (S1 ^name top)
   -->
   (S1 ^score 10)}
 

This has the same form as chunk-1, except it is just an instantiation. It only exists to support the values in state S1. When S1 goes away (i.e., in this case, when you do init-soar) this justification will go away too. It is like a temporary chunk instantiation. Why have justifications? Well, if you now typed "preferences S1 score 1" you would see:

Preferences for S1 ^score:
acceptables:
10 +
From justification-1
 

Justification-1 is providing the support for the value 10. If the subgoal, S2, goes away, this justification is the only reason Soar retains the value 10 for this slot. If later, the ^name attribute of S1 changes to "driving" say, this justification will no longer match (because it requires ^name top) and the justification and the value will both retract.

Back to Table of Contents


(P6) How does Soar decide which conditions appear in a chunk?

Soar works out which conditions to put in a chunk by finding all the productions that led to the final result being created. It sees which of those productions tested parts of the superstate and collects all those conditions together.

For example:

(S1 ^name top ^size large ^color blue ^superstate nil)     ;# The superstate
---------------------------------                          ;# Conceptual boundary

(S2 ^superstate S1)                                        ;# Newly created subgoal.

sp {production0
   (state <s> ^superstate nil)
   -->
   (<s> ^size large ^color blue ^name top)}

If we have:

sp {production1
   (state <s> ^superstate <ss>)
   (<ss> ^size large)
   -->
   (<s> ^there-is-a-big-thing yes)}
 

and

sp {production2
   (state <s> ^superstate <ss>)
   (<ss> ^color blue)
   -->
   (<s> ^there-is-a-blue-thing yes)}
 

and

sp {production3
   (state <s> ^superstate <ss>)
   (<ss> ^name top)
   -->
   (<s> ^the-superstate-has-name top)}
 

and

sp {production1*chunk
   (state <s> ^there-is-a-big-thing yes
              ^there-is-a-blue-thing yes
              ^superstate <ss>)
   -->
   (<ss> ^there-is-a-big-blue-thing yes)}
 
 

and working memory contains (S1 ^size large ^color blue), this will lead to the chunk:

sp {chunk-1
   :chunk
   (state <s1> ^size large ^color blue)
   -->
   (<s1> ^there-is-a-big-blue-thing yes)}
 

The size condition is included because production1 tested the size in the superstate, created the ^there-is-a-big-thing attribute, and this led to production1*chunk firing. Similarly for the color condition (which was also tested in the superstate and led to the result ^there-is-a-big-blue-thing yes). The important point is that ^name is not included. Even though it was tested by production3, it was not tested in production1*chunk, and therefore the result did not depend on the name of the superstate.

Back to Table of Contents


(P7) Why does my chunk appear to have the wrong conditions?

See above for general description of how chunk conditions are computed. If you have just written a program and the chunks are not coming out correctly, then try using the "explain" tool.

So, using the example of how conditions in chunks are created (shown above), if the chunk is:

sp {chunk-1
   :chunk
   (state <s1> ^size large ^color blue)
   -->
   (<s1> ^there-is-a-big-blue-thing yes)}
 

and you type "explain chunk-1" you will get something like:

sp {chunk-1
   :chunk
   (state <s1> ^color blue ^size large)
   -->
   (<s1> ^there-is-a-big-blue-thing yes +)}
   1 : (state <s1> ^color blue)         Ground : (S1 ^color blue)
   2 : (<s1> ^size large)               Ground : (S1 ^size large)
 

This shows a list of conditions for the chunk and which "ground" (i.e., superstate working memory element) they tested.

If you want further information about a particular condition you can then type "explain chunk-1 2" (where 2 is the condition number -- in this case (<s1> ^size large)) to get:

Explanation of why condition (S1 ^size large) was included in chunk-1
Production production1 matched
     (S1 ^size large) which caused
production production1*chunk to match
     (S2 ^there-is-a-big-thing yes) which caused
A result to be generated.
 

This shows that ^size large was tested in the superstate by production1, which then created (S2 ^there-is-a-big-thing yes). This in turn caused the production production1*chunk to fire and create a result (in this case ^there-is-a-big-blue-thing yes) in the superstate, which leads to the chunk.

This tool should help you spot which production caused the unexpected addition of a condition, or why a particular condition did not show up in the chunk.

Back to Table of Contents


(P8) How is it possible for Soar to generate a duplicate chunk?

Question from Bill Kennedy:

If the new chunk is a duplicate, wouldn't the original have already fired? Is it the case that when all the applicable productions fire to resolve an impasse, the chunking and generalizing processes could build a chunk identical to one that already fired?

Answer from John Laird (Nov. 10, 2002):

Two similar results can be created during the same decision cycle, which in turn can lead to two identical chunks.

Also, it is possible to generate a result in a subgoal that has already been created by a chunk, if the result doesn't lead to progress in the problem solving -- that is, the existence of the result from the chunk isn't tested in the subgoal (so the result is created even though it is essentially already there), and it doesn't resolve the impasse (so even after the result is created, the impasse remains).

Back to Table of Contents


(P9) What is all this support stuff about? (Or why do values keep vanishing?)

There are two forms of "support" for changes to working memory: o-support and i-support. O-support stands for "operator support" and means the preference behaves in a normal computer science fashion. If you create an o-supported preference ^color red, then the color will stay red until you change it.

How do you get an o-supported preference? The exact conditions for this may change, but the general rule of thumb is this:

"You get o-support if your production tests the ^operator attribute or creates structure on an operator"

(Specifically, under Soar 7.0.0 beta this is o-support-mode 2 -- which is the mode some people recommend, since they find it much easier to understand than o-support-mode 0, the current default.)

e.g.,

sp {o-support
   (state <s> ^operator <o>)
   (<o> ^name set-color)
   -->
   (<s> ^color red)}
 

the ^color red value gets o-support.

I-support, which stands for "instantiation support", means the preference exists only as long as the production which created it still matches. You get i-supported productions when you do not test an operator:

e.g.,

sp {i-support
   (state <s> ^object <obj1>)
   (<obj1> ^next-to <obj2>)
   -->
   (<obj1> ^near <obj2>)}
 

In this case, the preference for ^near will get i-support. If obj1 ever ceases to be next to obj2, then this production will retract, and the preference for ^near will also retract. Usually this means the value for ^near disappears from working memory.

To change an o-supported value, you must explicitly remove the old value, e.g.,

^color red -  (the minus means reject the value red)
^color blue + (the plus means the value blue is acceptable)
 

while changing an i-supported value requires just that the production retracts. The use of i-support makes for less work and can be useful in certain cases, but it can also make debugging your program much harder, so it is recommended that you keep its use as an optimization to a minimum. By default, state elaborations automatically get i-support, whereas applying, creating, or modifying an operator as part of some state structure leads to o-support.
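As an illustration of replacing an o-supported value, an operator-application rule can reject the old value and assert the new one in a single action. This is a hypothetical sketch; the operator and attribute names are invented for illustration:

sp {apply*set-color
   (state <s> ^operator <o> ^color <old>)
   (<o> ^name set-color ^new-color <new>)
   -->
   (<s> ^color <old> -      ; reject the old value
        ^color <new> +)}    ; make the new value acceptable
 

Because this rule tests the ^operator attribute, its actions receive o-support, so the new value persists after the rule retracts.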

(One point worth noting is that operator proposals are always i-supported, but once the operator has been chosen, it does not retract even after the operator proposal goes away, because it is given a special support, c-support for "context object support". This changes to be more dynamic in Soar 8.)

To tell if a preference has o-support or i-support, check the preferences on the attribute.

e.g., "pref s1 color" will give:

Preferences for S1 ^color:
 
acceptables:
  red + [O]
 

while "pref s1 near" gives:

Preferences for S1 ^near:
 
acceptables:
   O2 +
 

The presence or absence of the [O] shows whether the value has o-support or not.

Back to Table of Contents


(P10) When should I use o-support, and when i-support?

General information

Under normal usage, you probably will not have to explicitly choose between the two. By default, you will get o-support if you apply, modify or create an operator as part of some state structure; you will get i-support if all you do is elaborate the state in some way.

It therefore follows that, by default, you should generally use operators to create and modify state structures whenever possible. This leads to persistent o-supported structures and makes the behaviour of your system much clearer.

I-support can be convenient occasionally, but should be limited to inferences that are always true. For example, if I know (O1 ^next-to O2), where O1 and O2 are objects then it is reasonable to have an i-supported production which infers that (O1 ^near O2). This is convenient because there might be a number of different cases for when an object is near another object. If you use i-supported productions for this, then whenever the reason (e.g. ^next-to O2) is removed, the inference (^near O2) will automatically retract.

Never mix the two for a single attribute. For example, do not have one production which creates ^size large using i-support and another that tests an operator and creates ^size medium using o-support. That is a recipe for disaster.
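A safe pattern is to route all changes to such an attribute through operators, so that every preference for it has the same (o-) support. A minimal sketch (the names set-size and large are hypothetical):

sp {propose*set-size
   (state <s> -^size)
   -->
   (<s> ^operator <o> +)
   (<o> ^name set-size)}

sp {apply*set-size
   (state <s> ^operator <o>)
   (<o> ^name set-size)
   -->
   (<s> ^size large)}

The proposal is i-supported (and retracts once ^size exists), while the application rule tests the operator and so creates ^size large with o-support.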

IE phase vs. PE phase

Robert Wray wrote (July 24, 2002):

IE instantiations indicate i-supported elaborations (IE) and PE instantiations are o-supported elaborations. The "P" is for persistent: "persistent elaborations".

The Soar 8 kernel makes a distinction between IE and PE in order to fire persistent elaborations in separate (super) phases from IEs. The general idea is:

while (!quiescence(all-productions))
   while (!quiescence(IE-productions))   // this is called "mini-quiescence" in the kernel
        Preference Phase (IE only)
        WM Phase (IE only)
   Preference Phase (PE only)
   WM Phase (PE only)
 

Other tips

Note that several o-support modes have been used over the years; see the manual for your version. John Laird (laird@umich.edu) provided some useful tips (Sep. 9, 2003).

  • Changing the o-support type can save you time.
  • Look at firing counts to identify where the model spends time; it is useful to check firing counts (fc 20) to see the top 20 productions that are firing.
  • Consider which o-support mode you are using (type o-support-mode at the prompt).

Back to Table of Contents


(P11) Why does the value go away when the subgoal terminates?

A common problem is creating a result in a superstate (e.g., ^size large) and then when the subgoal terminates, the value retracts. Why does this happen? The reason is that once the subgoal has terminated the preference in the superstate is supported by the chunk or justification that was formed when the preference was first created.

This chunk/justification may have different support than the support the preference had from the subgoal. It is quite common for an operator in a subgoal to create a result in the superstate which only has i-support (even though it is created by an operator). This is because the conditions in the chunk/justification do not include a test for the super-operator. Therefore, the chunk has i-support and may retract, taking the result with it.
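One way to ensure the result keeps o-support is to have the rule that creates it also test the super-operator, so that this test appears among the chunk/justification's conditions. A sketch (the operator name compute-size is hypothetical):

sp {subgoal*apply*create-result
   (state <s> ^superstate <ss>)
   (<ss> ^operator <so>)
   (<so> ^name compute-size)
   -->
   (<ss> ^size large)}

Because the justification now includes a test of the superstate's operator, the result it supports gets o-support and persists after the subgoal terminates.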

NOTE: Even if you are not learning chunks, you are still learning justifications (see above) and the support, for working memory elements created as results from a subgoal, depends on the conditions in those justifications.

Back to Table of Contents


(P12) What's the general method for debugging a Soar program?

The main tools that you need to use are the commands below. You apply them where the behaviour is odd, and use them to understand what is going on. Personally, I (FER) prefer the Tcl/Tk interface for debugging because many of these commands become mouse clicks on displays.

print prints out a value in working memory.

print -stack (pgs in earlier versions of Soar) Prints out the current goal stack, e.g.,

  ==>S: S1
    ==>S: S2 (state no-change)

matches (ms in earlier versions of Soar) Shows you which productions will fire on the next elaboration cycle.

matches <prod_name> Shows you which conditions in a given production matched and which did not. This is very important for finding out why a production did or did not fire.

e.g.,

soar> matches production1*chunk
>>>>
(state <s> ^there-is-a-blue-thing yes)
(<s> ^there-is-a-big-thing yes)
(<s> ^superstate <ss>)
0 complete matches.

This shows that the first condition did not match.

preferences <id> <attribute> 1

(The 1 is used to request a bit more detail.) This command shows the preferences for a given attribute and which productions created the preferences. A very common use is

pref <s> operator 1

which shows the preferences for the operator slot in the current state. (You can always use <s> to refer to the current state.)

Glenn Taylor mentioned that Visual Soar has made some strides in minimizing user errors like typos, providing the closest thing to a type-checker for an untyped language. There is an older tool called ViSoar, which could also be used to analyze productions to find typos; Visual Soar contains much of ViSoar's functionality. There is also a run-time debugger (called SDB) and structure visualizer available through the Soar webpages; it is written in Tcl and so is platform-independent, though it is not part of the Soar 8 distribution. TclPro also comes with a debugger, which may be useful for debugging Soar.

Back to Table of Contents


(P13) How can I find out which productions are responsible for a WME?

Use

preferences <id> <attribute> 1
 

Or

preferences <id> <attribute> -names
 

(They both mean the same thing.)

This shows the values for the attribute and which production created them. For example:

Given the working memory elements (S1 ^type T1) and (T1 ^name boolean), the command

pref T1 name 1

will show the name of the production which created the ^name attribute within object T1.

Back to Table of Contents


(P14) What's an attribute impasse, and how can I detect them?

Attribute impasses occur if you have two values proposed for the same slot.

e.g.,

^name doug +
^name pearson +
 

leads to an attribute impasse for the ^name attribute.

It is a bit like an operator tie impasse (where two or more operators are proposed for the operator slot). The effect of an attribute impasse is that the attribute and all its values are removed from working memory (which is probably what makes your program stop working) and an "impasse" structure is created. (This applies to Soar 7; attribute impasse structures are not provided in Soar 8.)

It is usually a good idea to include the production:

sp {debug*monitor*attribute-impasses*stop-pause
   (impasse <i> ^object <o> ^attribute <a>
                ^impasse <type>)
   -->
   (write (crlf) |Break for impasse | <i> (crlf))
   (tcl |preferences | <o> | | <a> | 1|)
   (interrupt)}
 

in your code, for debugging. If an attribute impasse occurs, this production will detect it, report that an impasse has occurred, run preferences on the slot (showing which values were competing and which productions created preferences for them), and interrupt Soar's execution so you can debug the problem.

Very, very occasionally you may want to allow attribute impasses to occur within your system and not consider them an error, but that is not a common choice. Most Soar systems never have attribute impasses (while almost all have impasses for the context slots, like operator ties and state no-changes), and this is probably the reason they have been removed from Soar 8.

Back to Table of Contents


(P15) How can I easily make multiple runs of Soar?

How can I run Soar many times without user intervention (i.e., in a batch mode)?

Bob Wray has put a page on the WWW describing how he ran Soar in a batch mode to collect data for his thesis. The site includes examples of csh scripts, Tcl scripts, and Soar code he used in the process. For more information, please contact Bob Wray (wray@soartech.com).

Back to Table of Contents


(P16) Are there any templates available for building Soar code?

If you use the Soar Development Environment (or SDE) which is a set of modules for Emacs, they provide some powerful template tools, which can save you a lot of typing. You specify the template you want (or use one of the standard ones) and then a few key-strokes will create a lot of code for you.

Back to Table of Contents


(P17) How do I find all the productions that test X?

Use the "pf" command (which stands for production-find). You give "pf" a pattern. Right now, the pattern has to be surrounded by a lot of brackets, but that should be fixed early on in Soar 7's life.

Anyway, as of Soar.7.0.0.beta an example is:

pf {(^operator *)}
 

which will list all the productions that test an operator.

Or,

pf {(^operator.name set-value)}
 

which will list all the productions that test the operator named set-value.

You can also search for values on the right hand sides of productions (using the -rhs option) and in various subsets of rules (e.g., chunks or no chunks).

Back to Table of Contents


(P18) Why doesn't my binary parallel preference work?

Using parallel preferences can be tricky, because the separating commas are currently crucial for the parser. In the example below, there is a missing comma after the preferences for "road-quality".

sp {elaborate*operator*make-set*dyn-features
   (state <s> ^operator <o>)
   (<o> ^name make-set)
   -->
   (<s> ^dyn-features distance + &, gear + &, road-quality + &
     sign + &, other-road + &, other-sidewalk + &,
     other-sign + &)}

This production parses "road-quality + & sign" as a valid binary preference, although this is not what was intended. Soar will not currently warn about the duplicate + preferences; you just have to be careful.

Note that the binary parallel preferences no longer exist in Soar 8.

Back to Table of Contents


(P19) How do I use indifferent preferences to generate probabilistic behavior?

Question - Michela De Vincentis wrote (Jan. 2, 2006):

I'm using numeric-indifferent-mode and I would like to know if this output about the preferences is correct. In your view, what is the probability that the operator O7 delfino will be chosen? I would like it to be the average of all its values, i.e., 30/5. Is that right?

Preferences for S1 ^operator:

acceptables:

  O7 (delfino) + :O
  O8 (falco) + :O
  O9 (orso) + :O
  O10 (scimmia) + :O
  O11 (tigre) + :O
  O2 (circo) +
  O3 (ha4zampe) +
  O4 (intelligente) +
  O5 (nuota) +
  O6 (pericoloso) +

worsts:

  O11 (tigre) < :O
  O8 (falco) < :O
  O4 (intelligente) < :O
  O6 (pericoloso) < :O
  O5 (nuota) < :O
  O2 (circo) < :O

binary indifferents:

  O7 (delfino) =30 :O
  O8 (falco) =30 :O
  O9 (orso) =0 :O
  O10 (scimmia) =0 :O
  O11 (tigre) =30 :O
  O7 (delfino) =0 :O
  O8 (falco) =30 :O
  O9 (orso) =30 :O
  O10 (scimmia) =0 :O
  O11 (tigre) =30 :O
  O7 (delfino) =0 :O
  O8 (falco) =30 :O
  O9 (orso) =0 :O
  O10 (scimmia) =30 :O
  O11 (tigre) =30 :O
  O5 (nuota) =0
  O7 (delfino) =0 :O
  O8 (falco) =0 :O
  O9 (orso) =30 :O
  O10 (scimmia) =30 :O
  O11 (tigre) =30 :O
  O7 (delfino) =0 :O
  O8 (falco) =0 :O
  O9 (orso) =0 :O
  O10 (scimmia) =0 :O
  O11 (tigre) =0 :O
  O2 (circo) =30
  O3 (ha4zampe) =30
  O4 (intelligente) =30
  O5 (nuota) =30
  O6 (pericoloso) =30

ps. Can you suggest papers about the numeric-indifferent-mode? Thank you! Happy New Year ! -- Michela

Answer - Robert Wray wrote (Jan. 2, 2006):

The indifferent preference selection probability is the sum of the indifferent prefs for the item, divided by the sum of all indifferent prefs (in the initial implementation, indifferent prefs without a value were given a "base" weight of 50, but this may have changed).
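As a hypothetical worked example (these numbers are invented, not taken from the trace above): suppose operators O1, O2, and O3 have numeric indifferent preferences summing to 30, 60, and 10. Then:

P(O1) = 30 / (30 + 60 + 10) = 0.30
P(O2) = 60 / 100            = 0.60
P(O3) = 10 / 100            = 0.10

So the selection probability is a ratio of sums over all competing operators, not the average of a single operator's own values.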

The kernel feature is new enough that it may not yet be doc'd in the manual. A general description of the motivation for the change and a (very high level) outline of the algorithm may be found here:

Wray, R. E., & Laird, J. E. (2003). Variability in human behavior modeling for military simulations. Proceedings of the Conference on Behavior Representation in Modeling and Simulation, Scottsdale, AZ.

Also, you didn't ask about this, but you usually don't want operator prefs (acceptables or indifferents) to be persistent (o-supported), as they are in this example.

Good luck!

--Bob--

Back to Table of Contents


(P20) How can I do multi-agent communication in Soar 7?

Reading and writing text from a file can be used for communication. However, using this mechanism for inter-agent communication is pretty slow and you would have to be careful to use semaphores to avoid deadlocks. With Soar 7, there is a relatively natural progression from Tcl to C in the development of inter-agent communication.

  1. Write inter-agent communication in Tcl. This is possible with a new RHS function (called "tcl") that can execute a Tcl script. The script can do something as simple as send a message to a simple simulator (which can also be written in Tcl). The simulator can then send the message to the desired recipient(s). You could also do things such as add-wme in a RHS but this makes it harder to see what is going on and more error prone.
  2. Move the simulator into C code. To speed up the simulated world in which the agents interact, recode the simulator in C. Affecting the simulator can be accomplished by adding a few new Tcl commands. The agents would be largely unchanged and the system would simply run faster.
  3. Move communication to C. This is done by writing Soar I/O functions as documented in section 6.2 of the Soar Users Manual. This is the fastest method.
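Step 1 can be sketched as a production that calls the tcl RHS function on its right-hand side; send-message here is a hypothetical Tcl procedure that your simulator would provide:

sp {apply*send-message
   (state <s> ^operator <o>)
   (<o> ^name send-message
        ^to <agent>
        ^content <msg>)
   -->
   (tcl |send-message | <agent> | | <msg>)}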

Back to Table of Contents


(P21) How can I use sockets more easily?

Using sockets with Soar is not well-documented, but it has been done numerous times. Socket communication is included in the Eaters and TankSoar applications (which come with a tutorial document) and sockets have also been implemented with C code using a library written at Michigan called SocketIO. Eaters and TankSoar and the Soar 8 Tutorial are available on the Soar "Getting Started" web page, and SocketIO can be found near the bottom of the "Projects/Tools" web page. Soar 8.6 also has improved socket utilities.

Scott Wallace (swallace@vancouver.wsu.edu) added socket communication to Eaters (and TankSoar uses the same code) and Kurt Steinkraus (kurtas@umich.edu) wrote SocketIO. Kurt is no longer at University of Michigan, but his email gets forwarded.

Answer from: "Karen J. Coulter" (kcoulter@eecs.umich.edu), Date: June 4, 1999

Back to Table of Contents


(P22) How do I write fast code?

Here are a few general hints:

  • The interface may be causing a slowdown. A semi-working beta version of SocketIO can be found at www.eecs.umich.edu/~soar/sitemaker/projects/socketio

  • You might have a lot of elaboration rules firing. Trace your rules to see what is firing.

  • In developing Soar programs, you should periodically check for unnecessary rule firings (look at some runs at watch 1). You should also check for large memory use in the Rete during the middle of a run, using the memories command.

  • Avoid conditions that check against a new attribute with the same value

    Question from Phill Smith:

    I can't make out what is wrong with the following rule that I'm working up for an air-launched UAV application in Soar 8.3. Here is the code.

    sp {identify*release-phase
       (state <s> ^name opmode01
                  ^v-memory-buffer <v>
                  ^io.input-link.configuration.flight-mode 0)
       -->
       (<v> ^flight-mode release-phase)}
     

    Answer from John Laird:

    You can change the flight-mode condition to be:

    -^io.input-link.configuration.flight-mode <> 0)
     

    That should stop it from refiring every cycle.

  • Use elaborations to create intermediate values

    Question:

    Arithmetic expressions are not allowed in the LHS. Why? One can test values for equality, greater, and less. What is the work-around for a test like:

    sp { ...
        (<A> ^value <aVal>)
        (<B> ^value <bVal> < (+ 5 <aVal>))
         -->
        (...)
       }
    

    I do not want to 'elaborate' intermediate values like (<A> ^boundary (+ 5 <aVal>)) and then test against them - so what can I do?

    Answer from Randy Jones:

    I have also had times where I would have liked to be able to specify arithmetic on the LHS. The reason you can't is that the pattern matcher needs to match against static patterns in order to remain efficient. Including functions in the left hand side would require that you actually invoke those functions for *every* elaboration phase. This would get expensive fast, for arbitrary functions. I assume that there is a more efficient way to implement things for the comparison operators, so it's not a problem.

    Thus, the way Soar currently works, the only way to accomplish what you want is to do what you say you don't want to do: use elaborations to create intermediate values and then test against them. It is important to note that the fact that you are forced to do things this way is a key part of Soar's model of how the mind works.

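The workaround Randy describes can be sketched as a pair of rules: an elaboration computes the boundary once, and a second rule tests against it (the rule and attribute names here are hypothetical):

sp {elaborate*boundary
   (state <s> ^a <A>)
   (<A> ^value <aVal>)
   -->
   (<A> ^boundary (+ 5 <aVal>))}

sp {compare*against*boundary
   (state <s> ^a <A> ^b <B>)
   (<A> ^boundary <bound>)
   (<B> ^value {<bVal> < <bound>})
   -->
   (<s> ^b-below-boundary *yes*)}

The LHS comparison {<bVal> < <bound>} is now against a bound value, which the matcher handles efficiently.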
Note that SocketIO has been superseded by SGIO. SGIO and its documentation are included in the current Soar 8.6 releases. SGIO is a C++ API for connecting Soar to external environments. The SGIO documentation is located in the sgio-1.1.2/docs directory where Soar is installed. --Bob Marinier (rmarinie@eecs.umich.edu; Nov. 24, 2004)--

Here are some other tips for writing fast code.

  • Note that several o-support modes have been used; see the manual for your version.
  • John Laird (laird@umich.edu; Sep. 9, 2003) wrote: Changing the o-support type can save you time.
  • Look at firing counts to identify where your model spends time. John Laird recommended checking firing counts (fc 20) to see the top 20 productions that are firing.
  • John Laird also recommended checking whether your input routines forward only changed values, or whether they replace values even when they are unchanged.

Back to Table of Contents


(P23) In a WME, is the attribute always a symbolic constant?

No, it can be an identifier. For example,

-->
(<s> ^<attr> yes)
(<attr> ^time earlier)
...
 }
 

is legal, although not recommended.

Back to Table of Contents


(P24) How does one write the 'wait' operator in Soar 9.3?

If your wait operator really never needs to do anything, this will work:

sp {wait*propose*wait
    (state <s> ^problem-space.name wait)
   -(<s> ^operator.name wait)
    -->
    (<s> ^operator <o> + <, =)
    (<o> ^name wait)}
 

Since the proposal tests that there is no operator named wait, the proposal will retract as soon as wait is selected.

Alternatively you can use:

sp {propose*wait
      (state <s> ^name <x>)
    -{(<s> ^operator <o>)
      (<o> ^name wait)}
      -->
      (<s> ^operator <o> +)
      (<o> ^name wait)}
 

Back to Table of Contents


(P25) How does one mess with WMEs on the IO link?

The rule that is commonly used is:

  • If a WME is added with add-wme(), then it must be deleted with remove-wme()
  • If a WME is added via productions/pref memory, then it is always deleted via productions/pref memory.

However, this raises another issue. Currently in Soar, you *can* use remove-wme to remove something that was created by productions through preference memory. The problem is that it removes the WME, but does not remove the preference. This can lead to all sorts of nastiness. For example, say a rule created something you want to get rid of, and maybe you want the production to fire again to change the same attribute to have a different value. You use remove-wme to delete the old value. The production can fire again (maybe it fires again because it tests for the absence of the value it's creating), and it creates a preference for the new value of the attribute.

Bingo, you now have an attribute tie impasse. At least that's what you got in Soar 7. Soar 8 and 9 have no attribute tie impasses, so presumably the new value would show up in working memory.

At a deep level (inability of users to access pref memory for input WMEs) maybe this is a bug. However, I don't think there's any preference memory for the input WMEs and so I'm not surprised that a reject preference doesn't remove them.

There was a question why there seems to be no preferences in preference memory for (id ^attribute value) triples acquired through C input functions. The problem which results is that one can't remove values from working memory that were first acquired through I/O interfacing with some external program (Platform: Soar 6.2.3 with Windows interface and later versions).

Working memory elements from I/O are added directly to working memory without going through Soar's preference mechanism. That's why they don't have any preferences. That's also why production rules can't make them disappear (say, with a reject preference): those WMEs totally bypass the preference process.

This is intentional. The idea is that the input link represents "current perception", and you can't use purely cognitive processes to "shut off" your current perceptions. If you want to remove things from the input link, you have to make the I/O code remove them. If you want the input link to be under the control of cognition, then your rules must direct the I/O system (via the output link) to change the values on the input link. The rules cannot do it directly.

C input functions add WMEs using the kernel routine add_input_wme, which surgically alters working memory directly. The preference mechanism is not part of this process. The only way to remove these input WMEs is by calling the routine remove_input_wme from your input function. add_input_wme returns a pointer to the WME you added, which you must store statically somehow (on a linked list or something) to later remove the WME with remove_input_wme. I should add the caveat that I didn't actually go back and look at Soar 6, and gave you the answer for Soar version 7 and later, but I'm fairly certain that add_input_wme and remove_input_wme were also in Soar 6 (look in io.c). - comments from Randy Jones

Back to Table of Contents


(P26) How can I get Soar to interpret my symbols as integers?

Given, a right-hand-side action that calls Tcl to get a number that is then bound to an attribute's value:

(<o> ^rehearsals (tcl | get-subject-rehearsals classifications|))

the Tcl function, get-subject-rehearsals, returns a value that would ideally be interpreted as an integer. However Soar will interpret this as a symbol, such as:

(O20 ^rehearsals |5|)

instead of the ideal:

(O20 ^rehearsals 5)

To achieve the ideal, try the following:

(<o> ^rehearsals (int (tcl | get-subject-rehearsals classifications|)))

or,

(<o> ^rehearsals (float (tcl | get-subject-rehearsals classifications|)))

Note, this will change in Soar 8.6.

Back to Table of Contents


(P27) Has there been any work done where new operators are learnt that extend problem spaces?

John Laird wrote:

Scott Huffman's thesis contains a lot of the ideas that will be needed in any system that learns new operators.

Also, Doug Pearson's thesis work did not learn new operators completely from scratch, but it did learn to correct both the conditions and the actions for operators - the only thing it couldn't learn was the goal/operator termination conditions.

Mike van Lent's thesis work learns new Soar operators, but not from Soar processing - it is a preprocessor.

Contact John Laird (laird@umich.edu) for further information regarding any of these.

Rail-soar as illustrated in the Soar video did this as well. Erik Altmann (altmann@gmu.edu) wrote this.

Some of the Blood-Soar work at Ohio State may have done this as well. Todd Johnson (todd.r.johnson@uth.tmc.edu) is the person to ask.

Back to Table of Contents


(P28) How can I find out about a programming problem not addressed here?

There are several places to turn to, listed here in order that you should consider them.

  1. The manuals may provide general help, and you should consult them if possible before trying the mailing lists.
  2. You can consult the list of outstanding bugs.
  3. You can consult the mailing lists noted above.

Back to Table of Contents


(P29) Why are identical WMEs illegal in Soar?

Question from Glenn Taylor (Nov. 26, 2001):

When there are multiple rule applications that create the same attribute and value, only one WME appears on the state as a result.

For example, if my state looks like this:

(<s1> ^my-object first ^my-object second)

And I have a production like this:

sp {state*elaboration
    (state <s> ^my-object <mo>)
-->
    (<s> ^elaboration 1)}

The result is that only a single ^elaboration 1 shows up on the state, even though there are two matches on the production. When the result has the same attribute and value, only one appears as a result. If there are multiple values, the unique ones appear on the state.

What's the rationale behind this?

Answer from Seth Rogers (Nov. 26, 2001):

The rule fires twice, leading to two acceptable preferences for S1 ^elaboration 1. The preference mechanism sees no conflicts and puts S1 ^elaboration 1 in working memory. If you want two values for S1 ^elaboration, you have to make them parallel:

(<s> ^elaboration 1 + &)

Although technically that would make 2 identical working memory elements, which might be illegal. Better to use <id> instead of 1. At least, that's how it worked in Soar 10,000BC. --Seth

Answer from John Laird:

The simplest answer is that Soar's working memory is a set - there cannot be duplicates. The rationale is what Bob says below. All versions of Soar have had this property from the beginning. -- John

Answer from Robert Wray (Nov. 26, 2001):

Seth covered most of what I had to say, so here's an addendum:

I'm not sure if it necessarily *has* to be like this, but the representation below is often convenient in Soar. For example, I can have a production with a condition like this:

 (<il> ^<visual-radar-memory>.contact red-plane)

and RHS with:

-->
  (<s> ^red-plane-present *yes*)

Then I can know there's at least one red-plane present, regardless of how many there actually are (or, in this case, regardless of the channel through which I got the information).

In Soar 8, all non-operator attributes have default parallel preferences. But that presents a dilemma for achieving functionality like the above: should two production instantiations that assert preferences for the same ID-attr-value result in one WME or two? I assume that, in order to preserve functionality as above, someone made the decision that you could not create duplicate WMEs (i.e., WMEs with the same id-attr-value but different time tags). This would be the more conservative approach, since I think much more Soar code depends on the many-to-one mapping as above than on parallel prefs for multiple ID-attr-value triples with the same values (actually, before Seth's message, I didn't know this was possible in Soar 7).

IMO, identical id-attr-value triples with different time tags shouldn't have ever been allowed to be asserted simultaneously. I see Glenn's problem not as a Soar problem but a KR problem: what is the semantics of multiple identical assertions? --Bob

Back to Table of Contents


(P30) How to count Objects including WMEs?

This needs to be filled in.

Back to Table of Contents


(P31) How to find an average value of multiple objects' common attribute?

Question from Zhang Qinzie (June 17, 2006):

Hi, I am doing a simulation program with Java and Soar. If there are multiple objects of the same type (^type Object_A), how can I calculate the average value of a common attribute across these objects? And how can I identify the object with the max/min value for this attribute?

Answer from Brian Stensrud (June 17, 2006):

See if this works for you. You can find max and min values using elaborations. Finding the average requires some operators and dirty o-supported tags to ensure a sum is calculated properly.

sp {top-state*elaborate*objects
	(state <s>	^superstate nil)
-->
	(<s>		^objects <objs>)
	(<objs>		^object <obj1>
			^object <obj2>
			^object <obj3>
			^object <obj4>
			^object <obj5>)
	(<obj1>		^value 1)
	(<obj2>		^value 2)
	(<obj3>		^value 3)
	(<obj4>		^value 4)
	(<obj5>		^value 5)}
	
sp {objects*elaborate*max-val
	(state <s>	^objects <objs>)
	(<objs>		^object <max-object>)
	(<max-object>	^value <max-val>)
	-(<objs>	^object.value > <max-val>)
-->
	(<objs>		^max-val <max-val>)}

sp {objects*elaborate*min-val
	(state <s>	^objects <objs>)
	(<objs>		^object <min-object>)
	(<min-object>	^value <min-val>)
	-(<objs>	^object.value < <min-val>)
-->
	(<objs>		^min-val <min-val>)}
	
sp {objects*propose*add-value-to-sum
	(state <s>	^objects <objs>)
	(<objs>		^object <obj>)
	(<obj>		^value <val>
			-^added-to-sum *yes*)
-->
	(<s>		^operator <o> + =)
	(<o>		^name add-value-to-sum
			^object <obj>)}
					
sp {add-value-to-sum*apply*first
	(state <s>	^operator <o>
			^objects <objs>)
	(<objs>		-^sum
			-^num-objects)
	(<o>		^name add-value-to-sum
			^object <obj>)
	(<obj>		^value <val>)
-->
	(<objs>		^sum <val>
			^num-objects 1)}
	
sp {add-value-to-sum*apply
	(state <s>	^operator <o>
			^objects <objs>)
	(<objs>		^sum <old-sum>
			^num-objects <old-num-objects>)
	(<o>		^name add-value-to-sum
			^object <obj>)
	(<obj>		^value <val>)
-->
	(<objs>		^sum <old-sum> - (+ <old-sum> <val>)
			^num-objects <old-num-objects> - (+ <old-num-objects> 1))}		
	
sp {add-value-to-sum*apply*tag-object
	(state <s>	^operator <o>)
	(<o>		^name add-value-to-sum
			^object <obj>)
-->
	(<obj>		^added-to-sum *yes*)}	
	
sp {objects*elaborate*average
	(state <s>	^objects <objs>)
	(<objs>		^sum <sum>
			^num-objects <num>)
-->
	(<objs>		^average (/ <sum> <num>))}		


Answer from Bob Marinier (June 17, 2006):

Another possibility, if you know ahead of time how many objects there are (or at least a small upper bound on the number), is to write a custom RHS function to calculate the average, e.g.,

sp {average
   (state <s> ^object <o1>
              ^object {<o2> <> <o1>})
   (<o1> ^value <v1>)
   (<o2> ^value <v2>)
   -->
   (<s> ^average (exec my-avg-func <v1> | | <v2>))}

where my-avg-func is a custom RHS function that calculates the average of the values passed into it. See the source code for TestJavaSML (located in the Tools directory) for an example of how to create and register your own RHS function in Java (search for "RHS" in Application.java to see the relevant code).

But for an arbitrary number of values, Brian's approach is probably better.

Back to Table of Contents


Section 4: Downloadable Models


(DM0) Burl: A general learning mechanism

It answers the question: How do I do psychologically plausible lookahead search in Soar?

Burl is a general learning mechanism, written by Todd Johnson, that uses bottom-up recognition learning as a way to build expertise.

Code is available from Todd at Todd.R.Johnson@uk.edu.

Back to Table of Contents


(DM1) A general model of teamwork

Milind Tambe (tambe@isi.edu) wrote (Thu, 6 Mar 1997):

As you may well know, the Soar/IFOR-CFOR project has been developing agents for complex multi-agent domains, specifically distributed interactive simulation (DIS) environments. As part of this effort at ISI, we have been developing a general model of teamwork, to facilitate agents' coherent teamwork in complex domains.

This model, called STEAM, at present involves about 250 productions. We have used this model in developing three separate types of agent teams (including one outside of DIS environments). Since this model may be of use to some of you working with multi-agent Soar systems, we are now making it available to the Soar group.

Here is a pointer to the relevant web page describing STEAM:

http://www.isi.edu/soar/tambe/steam/steam.html

Documentation for the model is available on the web page, however, if there are additional questions, I will be happy to answer them. Any feedback on this topic is very welcome.

Back to Table of Contents


(DM2) A model that does reflection

When Ellen Bass was a graduate student at Georgia Tech, she created a reflection model written in Soar 6.2.5 NNPSCM. According to her, the model starts up after the ATC task has been performed and the aircraft has disappeared from the display.

She is currently an assistant professor in the Department of Systems and Information Engineering at the University of Virginia. Please contact Ellen Bass (ellenbass@virginia.edu) if you need more information on the model.

Back to Table of Contents


(DM3) A model that does concept/category acquisition

Symbolic Concept Acquisition (SCA) is a model of concept learning that was first developed by Craig Miller using the Soar architecture. Please refer to this site for the model:

http://www.speakeasy.org/~wrayre/soar/sca/html/index.html

Back to Table of Contents


(DM4) A model for Java Agent Framework (JAF) component

SoarProblemSolver has been released. It is a JAF component wrapping the author's soar2java package. JAF is an agent architecture developed and maintained by the University of Massachusetts. Hopefully, this component will enable Soarers to use other JAF components with minimal knowledge of Java (Java is the current implementation language of JAF).

Back to Table of Contents


(DM5) Herbal: A high level behavior representation language

Herbal is a high-level language for behavior representation and acts as a first step towards creating development tools that support a wide range of users in the cognitive modeling domain. Currently, the Herbal environment supports creating models for the Soar cognitive architecture. With Herbal, users can create cognitive models graphically and have these models compiled into Soar productions.

Herbal may be worth trying out if you are, or plan to become, a Soar user. For further information, please visit acs.ist.psu.edu/projects/Herbal, where you will find the software download and other useful documentation.

Back to Table of Contents


(DM6) dTank: A competitive environment for distributed agents

dTank was originally inspired by Tank-Soar, which was developed by Mazin As-Sanie at the University of Michigan. dTank was developed to take advantage of the flexibility of Java graphics and networking. For example, dTank provides an agent-architecture-neutral interface to the game server, so that humans and agents can interact within the same environment over a network. It includes a basic tank model, and Herbal includes the dTank model.

For more information about dTank, please visit acs.ist.psu.edu/projects/dTank.

Back to Table of Contents


(DM7) A model that counts attributes

Richard Young wrote:

Back in 1993, I wrote some code that counted the number of attribute-value pairs on an object entirely within the scope of a single operator. (The code was part of a program that formed an exact recognition chunk for an object, in a highly efficient manner.) This way of doing it was suggested by Bob Doorenbos as being possible.

There are just four rules, which I am retyping from a hardcopy printout, so I don't guarantee accuracy. I think this was for Soar 5 or 6, so it will need to be adapted. It makes use of attribute preferences, so it may not work in Soar 8.

(sp operator*recognise*count*initial-count
(goal <g> ^state.recognition-node <n> ^operator.name recognise)
(<n> -^count)
-->
(<n> ^count 0))             ;(thank you, Bob D.)
(sp operator*recognise*count*all-avs
(goal <g> ^state.recognition-node <n> ^operator.name recognise)
(<n> ^structure <w>)
(<w> ^<att> <val>)
-->
(<av> ^<att> <val>)
(<n> ^a-v <av> + <av> =))
;; NOTE: the above rule gets *all* the <att> <val>s from <w>;
;; You may want to restrict it to a particular attribute

(sp operator*recognise*count*done
 (goal <g> ^state.recognition-node <n> ^operator.name recognise)
 -->
 (<n> ^a-v done + done <))

 (sp operator*recognise*count*each-av
 (goal <g> ^state.recognition-node <n> ^operator.name recognise)
 (<n> ^count <c> ^a-v <av>)
 (<av> ^<att> <val>)
 -->
 (<n> ^a-v <av> - ^count <c> - (+ 1 <c>) +))

Back to Table of Contents


Section 5: Advanced Programming Tips


(APT1) How can I get log to run faster?

Question:

I'm using the log command to record a trace for a very long simulation (15k+ decisions) I'm running. Is there a "quiet" mode, so that the trace doesn't redundantly print to the screen? The printing really slows things down. Is there a clever/obvious way to do this at present? Maybe it's a feature to consider in the future?

Answer from Karen J. Coulter (kcoulter@eecs.umich.edu):

Date: Mon, 16 Dec 1996

There isn't a quiet mode on the log command, but in general for any xterm, ^O toggles the output to the screen. If you only want the output of the trace, you can redirect it to a file using output-strings-destination. I believe then the text would be sent only to the file and not to the screen.

Back to Table of Contents


(APT2) Are there any reserved words in Soar?

This is a slightly odd question, for Soar is a production-system language, not a procedural language like Pascal. But we know what you mean: which symbols have special meaning, and how must they be used?

The most important symbol is 'state'. Productions must begin with a condition that tests the state, e.g.,

(state ^attribute value)

On the top state, there are a few attributes that the system uses:

  • ^io, which holds I/O information;
  • ^superstate, which holds a pointer to the superstate, or nil if it is the top state;
  • ^type, which indicates if the object is a state, operator, or user defined;
  • ^operator, which holds the current operator.

In lower states caused by impasses, there are additional attributes:

  • ^attribute, the type of object causing the impasse, such as state or operator;
  • ^choices, which holds the tied choices in a tie impasse, or none in a no-change impasse;
  • ^impasse, the type of impasse, such as no-change;
  • ^quiescence, which indicates whether the impasse happened with rules left to fire (nil) or with all the rules fired (t). If quiescence is tested in an impasse, a chunk is not built; this is a way of avoiding learning based on a lack of knowledge.

These attributes can all be matched by user productions, but either cannot or should not be changed by the user.

Many modelers treat ^problem-space as a reserved word. It used to be modifiable only by an explicit decision of the architecture; now models can modify it directly. Most objects have a ^name attribute. It is nearly a necessity to name your operators, and if states are named, their names will appear in the trace.

The default rules (for versions of Soar that have them available) use their own conventions for using attributes. These conventions amount to reserved words that your models can use; sometimes, as with the selection knowledge, they encourage domain knowledge to assist in evaluating objects. If you are not sure what attributes are there, just print out the objects; this information is typically enough to get you started.
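For example, a production that notices an operator no-change impasse in a substate can test several of these attributes at once. This is a sketch; the ^name it creates is an arbitrary, made-up label:

```
sp {example*elaborate*substate-name
   (state <s> ^superstate <ss>       ; a substate created by an impasse
              ^attribute operator    ; the impasse is about the operator slot
              ^impasse no-change     ; specifically an operator no-change
              ^quiescence t)         ; all rules had fired when it arose
-->
   (<s> ^name operator-no-change)}
```

Because the production tests ^quiescence t, no chunk will be built from results that depend on this match.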

Back to Table of Contents


(APT3) How can I create a new ID and use it in the production?

Jonathan Gratch [gratch@isi.edu] wrote:

I've noticed the following:


the rule
sp {....
   (^foo (make-constant-symbol |foo|) + &)

expands to

sp {....
   (^foo (make-constant-symbol |foo|) +
    ^foo (make-constant-symbol |foo| &)}

The workaround solution, if I remember correctly, involved a two-step process of generating your symbol and then separately putting it to whatever use you had in mind. Not being an expert at writing Soar productions, I may be way off base on this one, but I did not want to leave you without any initial response to your bug report. I do not believe any kernel solution to this irritating behavior would be easy to implement, but that is just a guess at this point. If you do not come up with a workaround, let me know and I will look into this further. Heck, let me know either way; the information would make a good addition to the soar-bugs archives or perhaps the FAQ.

From: Aladin Akyurek (akyurek@bart.nl):

Date: March 13, 1997

A workaround is to split the production that uses make-constant-symbol to create a value for an attribute that is intended to be multi-valued:


sp {gratch-1
   (state <s> ^superstate nil)
-->
   (<s> ^foo (make-constant-symbol |foo|))}

sp {gratch-2
   (state <s> ^superstate nil ^foo <v>)
-->
   (<s> ^foo <v> &)}

Back to Table of Contents


(APT4) How can I access Tcl and Unix variables?

soar_library and default_wme_depth, for example, are defined as global variables, and all Soar files can be obtained through relative path names from the file path. On the Mac this is a bit trickier, but doable as well. Check out the TSI source code for examples.

Unix environment variables are also available through the "env" array of variables (e.g., $env(MANPATH)). Advanced users can also set variables in their shells by hand or with init files.

You should keep in mind that Tcl's scoping of variables is non-intuitive (to us, at least, at times), and it is a common source of problems in setting up paths and installation flags.

This sort of problem is becoming a trend, as it is easy to overlook. All Soar global Tcl variables, such as $default, need to be declared with a Tcl "global" command when used in a script. When procedures call procedures that call scripts which call scripts..., your variable scope will change!

If variables are not visible as you expect them to be, often the problem is that they are global but not declared in your procedure or file. Insert a 'global var_name' and often the variable will appear again.
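A minimal Tcl sketch of the point above (soar_library is one of the globals mentioned earlier; the procedure itself is made up for illustration):

```
proc show-soar-library {} {
    # Without this declaration, $soar_library would be an undefined
    # local variable inside the procedure, not the global one.
    global soar_library
    puts "Soar library directory: $soar_library"
}
```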

Expanded from email between Tom Head (Tom_Head@anato.soar.cs.cmu.edu), Karen J. Coulter (kcoulter@eecs.umich.edu), and Aladin Akyurek (akyurek@bart.nl) on soar-bugs, Tue, 25 Mar 1997, and later email as well.

Back to Table of Contents


(APT5a) How can I access Soar objects?

Question from Harko Verhagen (verhagen@dsv.su.se): How do I get access to WMEs?

Answer from Bruce Israel (israel@ers.com), Dec. 20 1996:

If you're working in soartk, in the Tcl shell you can use the function "wmem" to retrieve WMEs. "wmem" prints its output, but you can use "output-strings-destination -push -append-to-result" to get it into a form you can use programmatically.

Here are some Tcl routines I built for accessing WMEs. You can use the routines wm_elts, allobs, and wm_value for different types of retrievals within Tcl code.

Utility routines for WM retrievals

[Written by Bruce Israel (israel@ers.com), Fri Dec 20 1996]

Copyright ExpLore Reasoning Systems, Inc. 1996. All rights reserved.

member - is ITM an element of the list LST?

Usage: member ITM LST

proc member {itm lst} {
   if {-1 == [lsearch -exact $lst $itm]} {return 0} else {return 1}}

addset - add an item to a set 

proc addset {itm lst} {
   if {! [member $itm $lst]} {
      lappend lst $itm
       }
   return $lst
   }


wm_elts - Return triples of all WM elements matching pattern 

proc wm_elts {ob attr val} {
   output-strings-destination -push -append-to-result
   set wmemstr [wmem "($ob ^$attr $val)"]
   output-strings-destination -pop
   set def ""
   while {[scan $wmemstr "%\[^\n\]%\[ \t\n\]" wm_elt ws] > 0} {
      set ct [scan $wm_elt "(%d: %s %s %\[^)\])" time nob nattr nval]
      if {$ct > 0} {
         lappend def "$nob $nattr $nval"
         set len [string length $wm_elt]
         set len [expr $len + [string length $ws]]
         set wmemstr [string range $wmemstr $len end]
      } else {
         set wmemstr ""
      }
   }
   return $def
}


# Return all WM objects matching the specified ATTR / VAL
# e.g.,
# all objects - allobs * *
# all states - allobs superstate *
# top state - allobs superstate nil

proc allobs {attr val} {
   set obs ""
   foreach wm [wm_elts * $attr $val] {
      set obs [addset [lindex $wm 0] $obs]
   }
   return $obs
}
 
# Return the value(s) of an attribute of a particular id.
# Multiple values are separated by newlines.
 
proc wm_value {id attr} {
   set wmitems [wm_elts $id $attr *]
   set res ""
   foreach item $wmitems {
      set val [string trim [lrange $item 2 end] "| \t\n"]
      set res "${res}\n${val}"
   }
   return $res
}

Back to Table of Contents


(APT5b) How can I find an object that has attributes and values?

[Question from Ernst Bovenkamp, July 2000]:

I would like to know how I can find an object <object> whose attributes and values are exactly equal to those of some other object <object1>, and there can be only one <object> with these characteristics.

However, there is more than one <object>. Moreover, every object may have some, but not all, values equal to <object1>'s. Further, I don't know how many attributes an object has, nor what its values should be. On top of that, these objects are created in parallel, which makes it difficult to attach a unique identification number to each, because all objects are given the same number in that case.

[Answer from Randy Jones]:

Since Soar does not have any explicit quantifiers, you usually have to use negation or double-negation "tricks" to get their effects. Note that the double-negation would not work if it were "standing alone". But in the production I gave you, <object> gets bound to one object at a time by the condition (<s> ^object <object>).

I tried testing with the following set of rules. See if they work for you (these are in Soar 8, so you have to add some parallel preferences if you are using Soar 7).

sp {elaborate*state*identical-to-key
   (state <s> ^key-object <object1>
                    ^object <object>)
   ## It is not the case that <object> has an attribute-value pair that
   ## <object1> does not have.
  -{ (<object> ^<att> <val>)
    -(<object1> ^<att> <val>) }
   ## It is not the case that <object1> has an attribute-value pair that
   ## <object> does not have.
  -{ (<object1> ^<att> <val>)
    -(<object> ^<att> <val>) }
-->
    (<s> ^identical-to-key <object>)
 }

sp {elaborate*state*key-object
   (state <s> ^superstate nil)
-->
   (<s> ^key-object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
 }

sp {elaborate*state*object1
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
 }

sp {elaborate*state*object2
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
 }

sp {elaborate*state*object3
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
 }

sp {elaborate*state*object4
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
   (<ob> ^d 4)
 }

sp {elaborate*state*object5
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
 }

Back to Table of Contents


(APT6) How can I trace state changes?

The -trace switch to the print command is intended to be used in conjunction with a Tcl command in an RHS to provide a runtime diagnostic tool.

Andrew Howes's suggested "trace" RHS function was intended to provide a way to print out objects during production firing. I thought it better to add a "-trace" option to the print command and have that command be used via the "tcl" RHS function:

syntax: print -trace [stack-trace-format-string]

Sample RHS usage: (tcl |print -trace | | | <x> ) uses a nice default format string and prints <x> out with indenting appropriate for the (goal) state.

Expanded from email between Tom Head (Tom_Head@anato.soar.cs.cmu.edu), Kenneth J. Hughes (kjh@ald.net), Karen J. Coulter (kcoulter@eecs.umich.edu), and Clare Congdon on Jan 6, 1997.

Back to Table of Contents


(APT7) How can I connect Soar to large databases?

Question - Troy Kelly wrote (Feb. 14, 2006):

Has anyone out there attempted to connect Soar to a large semantic network like CYC or WordNet or ConceptNet? Assuming one could write algorithms to generate productions from the knowledge base, would such a connection create computational bottlenecks? How would Soar handle such a large set of declarative information?

Answer - Deryle Lonsdale wrote (Feb. 14, 2006):

I haven't tried to use CYC but have used and am using WordNet. About five years ago, I created Soar productions that reflected much of the content of the current WordNet. There were several hundred thousand productions. The load time in that version of Soar was prohibitively slow (at least an hour, as I recall) and I usually ran out of memory. Run time was also much too slow for our use. I think that Bob Doorenbos worked at CMU with similarly large numbers of productions, but I didn't use any optimizations he may have come up with then.

Before then and since then I have been loading information from WordNet with callouts (written in some combination of C, Tcl, and Perl) to access word sense, semantic class, morphological, part-of-speech, and frequency information. Since I'm only accessing information about one lexical item at a time and not performing exhaustive search or inference using WordNet, the interface is more than adequate for our current purposes. I should say, though, that I do use a hashed meta-index to point from a word to its respective position in the index files so I don't have to read sequentially through the index files. We have also been looking at FrameNet or PropBank for more information but so far haven't interfaced to them. If so, I'm not too sure we would use productions, but probably another interface instead.

Sean Lisse may have other information to add and other insight into the issues.

Answer - Sean Lisse wrote (Feb. 14, 2006):

This is an ongoing aspect of our research here at Soar Technology. You may find this paper interesting:

Wray, R. E., Lisse, S., & Beard, J. (2004). Investigating Ontology Infrastructures for Execution-oriented Autonomous Agents. Proceedings of the 2004 AAAI Knowledge Representation and Ontology for Autonomous Systems Symposium, Stanford, CA.

While we've not used a semantic network that is as large as CYC, we've been doing research into working with smaller-scale ontologies via our Onto2Soar product. Onto2Soar accepts OWL or DAML+OIL ontologies and translates them into Soar-accessible knowledge.

Onto2Soar currently uses a scheme similar to what you suggest: algorithms to generate productions from the knowledge base, which re-instantiate the knowledge base in Soar's working memory when loaded into a Soar agent. Using it, we've had good success with smaller ontologies.

I believe, however, that this will not be a feasible mechanism for very large ontologies, and so we're looking into alternate representations and connection mechanisms.

Back to Table of Contents


(APT8) How can/do I add new Tcl/Tk libraries?

[note, less relevant in Soar 8.6]

Question from Rich Angros: I want to add some other Tcl widget extensions into tksoar. Does the Tcl/Tk code have any special modifications that are specific to Soar? What restrictions are there on the version of Tcl/Tk used?

The versions of Soar that are currently available on the web pages all require Tcl 7.4 and Tk 4.0. In order to allow for multiple agents in Soar, we had to extend Tcl within Soar, and so we modify some of the Tcl files and keep a private copy. Then when Soar is built, the linker uses the "private" copies of these routines instead of the ones Tcl comes with. So you can pull Tcl 7.4 and Tk 4.0 off the Tcl web sites and use them when building Soar, but Soar will link in a few extended routines of its own. You should be able to add other Tcl extensions in a straightforward manner, without any concern for Soar's modifications of Tcl. In fact, Soar used to be distributed with Blt, but it was cumbersome to support building it on multiple platforms, and we weren't sure how much it was used, so we took it out. You would add Tcl extensions in the soarAppInit.c file, just as you would add them to tkAppInit.c.

Soar 7.1 uses Tcl 7.6 and is completely decoupled from the Tcl routines, since Tcl 7.6 provides support for multiple interpreters. So from Soar 7.1 on, you should be able to upgrade Soar or Tcl packages independent of each other.

From "Karen J. Coulter" (kcoulter@eecs.umich.edu), Date: Mon, 9 Jun 1997

Back to Table of Contents


(APT9) How can and should I use add-wme and remove-wme?

This is a relatively long exchange noting how to use add-wme and remove-wme, while acknowledging that it violates the PSCM (or even the NNPSCM).

Question From: Harko Verhagen (verhagen@dsv.su.se), Date: Aug. 20, 1997

I'm still working on my multi-agent simulation Soar code. A problem I encounter is that it seems hard to remove information that was added with add-wme. Each agent has some Tcl code to take care of communication with the environment (including other agents). Files are used to store information that needs to be transferred. The information gets read in Tcl and is transferred to working memory with add-wme. This information may of course trigger some productions in Soar. However, the information should not persist after some processing. In Soar 5, an elaborate-state production did the trick by removing the message in an RHS action. In Soar 7, the production fires but its effect does not show in the WM phase. Using a Tcl call to remove-wme makes Soar crash. Any way around this?

Excerpt of trace:
message contains y x command A find to

=>WM: (167: I7 ^from-whom x)
=>WM: (168: I7 ^to-whom y)
=>WM: (169: I7 ^mode command)
=>WM: (170: I7 ^item |A|)
=>WM: (171: I7 ^subtask find)
=>WM: (172: I7 ^misc to)

Preference Phase

Firing warehouse*propose*operator*reception-mode

-->
(O17 ^desired D1 + [O] )
(O17 ^name reception-mode + [O] )
(S4 ^operator O17 +)

Firing warehouse*elaborate*operator*wait

-->
-
-
-

Firing reception-mode*elaborate*state*remove-receive-message

-->
(I6 ^message I7 - [O])

Firing reception-mode*terminate*operator*receive-message

(S5 ^operator O19 @)

Firing reception-mode*reject*operator*dont-receive-message-again

-->
(S5 ^operator O19 -)

Firing reception-mode*propose*operator*evaluate-move-find-yes

-->
(O21 ^misc yes + [O])
(O21 ^item |A| + [O])
(O21 ^subtask find + [O])
(O21 ^mode command + [O])
(O21 ^to-whom x + [O])
(O21 ^from-whom y + [O])
(O21 ^name send-message + [O])
(S5 ^operator O21 +)

Working Memory Phase

=>WM: (227: S5 ^operator O21 +)
=>WM: (226: O21 ^misc yes)
=>WM: (225: O21 ^item |A|)
=>WM: (224: O21 ^subtask find)
=>WM: (223: O21 ^mode command)
=>WM: (222: O21 ^to-whom x)
=>WM: (221: O21 ^from-whom y)
=>WM: (220: O21 ^name send-message)

Bob notes that it is indeed possible to shoot yourself in the foot with add-wme (just as modifying the calling stack would be a bad idea in any programming language).

Answer From: Robert Wray (wray@soartech.com), Date: Aug. 21, 1997

Soar IO cleaning

(I'm cc'ing this to soar-bugs for two reasons, even though it might not be a bug per se. First, the error message that Harko reported to me and that I replicated below is somewhat misleading. As far as I can tell, the new instantiation has been added to the instantiation list and so should be available in p_node_left_removal (rete.c) for adding to the retraction list -- which is where the program aborts because it can't find a relevant instantiation. Second, because folks seem to want to add and delete WMEs from the RHS with Tcl, my guess is that this problem may become frequently reported and thus worth documenting now.)

I was able to replicate the problem you are having when my productions attempted to remove a WME that was also tested in the LHS. For instance, consider this simple example:

sp {elaborate*state*problem-space*top
   (state <s> ^superstate nil)
   -->
   (<s> ^problem-space <p>)
   (<p> ^name top)}

sp {elaborate*state*create-some-WMEs
   (state <s> ^problem-space.name top)
   -->
   ;# This WME gets timetag 7
   (<s> ^deep-structure <ds>)
   ;# This WME gets timetag 8
   (<ds> ^simple-structure *yes*)
   ;# This WME gets timetag 6
   (tcl |add-wme | <s> | ^added-via-add-wme *yes*|)}

sp {elaborate*remove-structure
   :o-support
   (state <s> ^deep-structure <ds>
              ^added-via-add-wme *yes*)
   (<ds> ^simple-structure *yes*)
   -->
   (<s> ^new-augmentation *yes*)
   (tcl | remove-wme 6|)}

If I try to remove any of the WMEs I test in the LHS of this production (e.g., timetags 6, 7, or 8), I get the problem you reported in your message, namely:

Internal error: can't find existing instantiation to retract.
Soar cannot recover from this error. Aborting...

Soar (or at least the implementation in C, but I think it's fair to say the PSCM of Soar as well) assumes that WMEs do not disappear during the preference phase. In this case, because you are removing a WME directly from a Tcl call in the RHS, this assumption is violated.

Soar is attempting to retract an instantiation (or, more specifically, add an instantiation to the retraction list) while in the process of firing the instantiation. In a normal situation, this simultaneous fire and retract is impossible because the architecture doesn't allow WME changes during the preference phase, just preference changes. Your production violates that architectural constraint, with the resulting error. Again, there are workarounds (and I'll describe one in a separate message). But, you really shouldn't be doing this. You should only be creating and deleting input WMEs when the input function is called in the INPUT PHASE, as I described yesterday. And you should use the preference mechanism to remove any regular Soar WMEs.

I have a suggestion for a workaround to your problem and what I think might be a better, long-term solution. I'll discuss the long-term solution first, because I think it's the best route to follow.

The reason remove-wme did not work for you is that the command requires the integer timetag value to identify the WME to be removed. It is not easy to access the timetag through productions; it would require a few Tcl calls and some text parsing to accomplish this. (I haven't actually tried to do removal this way -- it may even be impossible, but my guess is that it is just very difficult.) The difficulty is not an oversight in the design of Soar -- it is purposeful. All WMEs except input WMEs should go through the decision process (evaluation of preferences). Input WMEs are added only through the I/O link. Thus, add-wme and remove-wme were not really meant to be used in the RHS. Prior to Soar 7, it was very difficult to use add-wme and remove-wme in productions; Tcl has made it much easier to disregard these assumptions. (Seth Rogers covered many of the ways Tcl can violate Soar assumptions in a talk entitled "Tcl Magic: How to do things you're not supposed to do in Soar" at Soar Workshop 15.)

A long-term solution to your problem would be to re-implement your system using Soar's supported I/O mechanism, the ^io link. In this case, messages would appear on the input-link of an agent. Add-wme would still be called, but it would be called by a Tcl input function rather than an RHS call. To remove the message, an operator could place a command on the output-link to remove the indicated message.

Then, via tcl code in the output function (which has access to the timetag), the message could be removed from future input with remove-wme. There is a simple example in the soar distribution (soar-io-using-tcl.tcl in the demos directory) that illustrates how to set up a tcl I/O system. Chapter 17 of the Soar Coloring Book (http://ai.eecs.umich.edu/soar/tutorial/) also covers I/O (but at a very high level).
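A hypothetical sketch of such an operator application (the ^remove-message command name and the message structure are made up for illustration; the Tcl output function would watch for this command and call remove-wme with the stored timetag):

```
sp {receive-message*apply*remove-message
   (state <s> ^operator.name remove-message
              ^io <io>)
   (<io> ^input-link.message <msg>
         ^output-link <out>)
-->
   ;# The output function sees this command and removes the message's
   ;# WMEs from the input-link with remove-wme.
   (<out> ^remove-message <msg>)}
```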

However, maybe you need a working prototype right away? Although I strongly encourage the above solution, here's a workaround for your current problem. You can't use remove-wme, for the reasons I described above. You can't use production preferences, because a WME created with the add-wme command has no preferences; it is just added to WM without going through the decision process. (I'm not familiar with Soar 5, so I don't know why the production/preference removal worked there; my understanding of Soar 7 is that WMEs created with add-wme should only be removable by remove-wme, to discourage non-PSCM WME additions and removals.) However, like all Soar WMEs, everything must be connected to the top state (directly or indirectly). So the workaround I'm proposing is to remove a WME created with add-wme by removing, via preferences, the WME that the added WME was attached to.

For example, instead of attaching WMEs to the state directly, imagine that there is a place holder for messages called "message-link:"

sp {elaborate*state*message-link*bootstrap
   (state <s> ^problem-space.name top)
   -->
   (<s> ^message-link <ml>)
   (<ml> ^message <nil-for-now>)}

Now, when you want to add messages, you always add messages to the message-link:

sp {add-message-with-tcl
   (state <s> ^problem-space.name top
              ^message-link.message <message>)
   -->
   (tcl |add-wme | <message> | this-is-a-message *yes* |)}

Then, when you want to remove a message, you just remove the message-link (and you'll also want to add a new message-link for a future message):

sp {some-operator*apply*remove-message-link*and*create-new-message-link
   (state <s> ^operator <o> ^message-link <ml>)
   (<ml> ^message <message>)
   ;# (whatever tests you want to make to determine
   ;#  if the message should be removed)
   (<o> ^name remove-message)
   -->
   (<s> ^message-link <ml> - <ml2> +)
   (<ml2> ^message <nil-for-now>)}

When this production fires, it will remove the existing message-link, which includes, as substructure, the WME you created with add-wme. Although this convention will work, it is a hack and does not represent the best way to solve the problem. But it will allow you to get your system running in a few minutes. In the long run, as I suggested before, I'd use Soar's I/O link for input and output. Hope these ideas have been useful. I'm sure others will have other ideas and additional suggestions about the ones I've described here. - <Bob>

Another user comments that this approach can be fragile, but that if done in as theoretically grounded a way as possible it is workable.

Harko Verhagen wrote:

I *DO* use the io-link; add-wme is used in Tcl procedures hooked to the I/O cycles of Soar as described in the soar-io demo. The remove-wme is also in a Tcl procedure which is called from the RHS, and the timetags are global vars in Tcl. I now also have a workaround, but it looks like a hack to me: I just don't test for the presence of the message that has to be removed but for a flag I have on the operator. For some reason removing the message with a structure like ^item1- is not enough; the Tcl removal procedure is also necessary to avoid looping (it makes retracting impossible).

I encountered the same problem some time ago. The way I adopted is: (a) using productions to copy messages from the io-link to some other place (say, an internal situational model) in WM; (b) all decision making is always based on the internal situational model instead of the io-link (since it is volatile); (c) add-wme and remove-wme are safe in I/O procedures; (d) as soon as messages are removed from the io-link, they disappear from the situational model too. I found Bob's answer very informative, although I didn't quite get the reason why remove-wme doesn't work. It seems to me that remove-wme is quite fragile and in some cases it simply makes the system crash, even when it is called in a Tcl procedure and is given the integer timetag. We have actually been warned about this by the on-line help. However, I did find it is safe when I clearly distinguished problem solving from I/O.

Randy Jones Wrote (March 2002):

The rules used are:

  • If a WME is added with add-wme(), then it must be deleted with remove-wme().
  • If a WME is added via productions/pref memory, then it is always deleted via productions/pref memory.

This raises another issue. Bob lists the rules of thumb I use also. However, the fact is that currently in Soar you can use remove-wme to remove something that was created by productions through preference memory. The problem is that it removes the WME, but does not remove the preference.

This can lead to all sorts of nastiness. For example, say a rule created something you want to get rid of, and maybe you want the production to fire again to change the same attribute to have a different value. You use remove-wme to delete the old value. Then the production can fire again (maybe it fires again because it tests for the absence of the value it's creating), and it creates a preference for the new value of the attribute. You now have an attribute tie impasse. At least that's what you got in Soar 7. Soar 8 has no attribute tie impasse, so presumably the new value would show up in working memory, but I don't know what would happen with the preference for the old value.
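A production-memory alternative that avoids remove-wme altogether is to reject the old value and assert the new one in a single operator application. This is a hypothetical sketch; the operator and attribute names are invented for illustration:

# replace ^count through preference memory: the reject removes both
# the WME and its preference, so no stale preference is left behind
sp {apply*update-count
   (state <s> ^operator <o> ^count <old>)
   (<o> ^name update-count ^new-count <new>)
   -->
   (<s> ^count <old> -
        ^count <new>)}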

Back to Table of Contents


(APT10) Frequently found user model bugs, or "How can I stick beans up my nose metaphorically in Soar?"

This is intended to become a rogue's gallery of frequently found bugs that are hard to diagnose on your own, but are not bugs in the architecture per se.

  • Problems with O- and I- support

    If something fishy happens where items drop out of memory and come back in again in a cycle, check I- and O-support. If you have a production cycling (firing repeatedly on the same symbols), the rule does not have o-support as you think it does, and it is removing one of the values that matches its conditions, which forces it to retract and its results to retract. The rule then matches again and the loop continues. You need to rewrite the rule to give it o-support: either fix its clauses so that they get o-support, give it o-support explicitly, or break its results up into o-supported clauses and non-o-supported clauses.

    If something fishy happens where items drop out of memory and don't come back in, check for attribute value ties. If two values are both acceptable, but not indifferent and not parallel, they will clobber each other and neither will appear.

    If an operator is proposed but not selected, and there are preferences for making it best, you may wish to check to see if the operator has an acceptable preference. In order to be selected, operators must be acceptable. Best is not good enough on its own.
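    One common fix for the cycling case described above is to move the persistent change into an operator application rule, which gets o-support because it tests the selected operator and modifies the state. A hypothetical sketch (the operator and attribute names are made up):

    # o-supported: tests the current operator and changes the state
    sp {apply*record-result
       (state <s> ^operator <o>)
       (<o> ^name record-result ^value <v>)
       -->
       (<s> ^result <v>)}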

  • Creating a link in a subgoal to a supergoal

    This is an error because it can make the supergoal a result of the subgoal, and then all hell breaks loose. (I think this essentially amounts to returning a result to a subgoal that's not there any more.)

  • Tcl calls to shell

    Answer from John Laird (May 1996):

    If you change the state of Soar with Tcl calls, you can get back to the shell fairly quickly. For example, a production that deletes itself.

    sp {delete-me
       (state ^superstate nil)
       -->
       (tcl |excise delete-me|)}
    

    Tom Head (16/1/97) noted that he didn't believe that the ramifications of all the possible Tcl RHS actions are guarded against, such as (Tcl |init-soar|), (Tcl |source monkey-time.soar|), or (Tcl |soar|). If you are looking for a way to shoot yourself in the foot, using a Tcl RHS action is easier than most.

  • Missing a close bracket?

    If your code is missing a close bracket, Soar will hang, as this exchange of emails explains.

    From: Tom Head (Tom_Head@anato.soar.cs.cmu.edu), Date: Apr 15, 1997

    To: Aladin Akyurek (akyurek@bart.nl) Subject: Re: [Soar-Bugs #197] missing closing brace makes Soar hang

    In case this is not reported, the production:

    sp {x
       (state ^{<< car bicycle >>)
       -->
       (^private-vehicle t)}
    

    with a closing brace missing makes Soar hang. On the other hand, if an opening brace is missing, we get:

    soar> sp {y
    (state ^<< car bicycle >>})
    extra characters after close-brace
    soar> -->
    (^private-vehicle t)}
    invalid command name "-->"
    invalid command name "("
    soar> 
    

    Conjunctions that are not properly enclosed with both "{" and "}" braces confuse Tcl command completion when your sp command also uses braces to enclose your production. If you replace the outer sp braces from your examples with double quotes instead, you will get a more reasonable response. A proper error can not be generated because Tcl either does not believe that a production has been completed or completes it prematurely, depending on which brace is missing.

  • Clauses in Soar productions match whole WMEs (based on emails with David Carboni and Ken Hughes, May 1998)

    The user who wrote the production below thought that the rule would match once per state. This is not true: each clause is matched against WMEs, and because this rule has no further conditions, it matches every WME that meets the very basic condition of being rooted in a state. It is an interesting exercise to print them out.

    sp {Init*Setup*Proposal
       (state)
       -->
       (^operator <o>)
       (<o> ^name initstate)
       }
    

    Gourab Nath wrote: Soar hung up while parsing (loading) the productions under the following situation: I had a '}' character within a comment.

    i.e., #(<o> ^name op1 -^cell a)}

    I cleared } and it was OK.

  • Rules that fire but don't affect state changes

    Question:

    The code and trace are as follows. As you can see, in the first decision cycle there are actually two decision phases. The first selects an operator, and the second generates an impasse. I vaguely remember that somebody mentioned this as a way to make Soar 8 systems more efficient in the case where Soar can immediately detect an operator no-change. But I can't say that I relish the thought of teaching Soar newbies that there can sometimes be two decision phases in a decision cycle! Aside from that issue, there seems to be another conceptual problem. The second decision phase in the first decision cycle seems to prevent the application phase from getting executed at all. What's the logic behind that?

    I expected to see apply*hello-world fire in the application phase of the first decision cycle. Of course that doesn't happen, because for some reason there is no application phase in the first decision cycle.

    Can someone explain why this is so? It seems undesirable, because there may be particular (non-working-memory) actions you want to take in that decision before an impasse gets generated. It looks like Soar 8.3 is doing this special-second-decision-phase check by looking for proposed changes to working memory, instead of looking for pending rule retractions/firings. Is that what's going on? Is that the best way to do this? -- Randy Jones

    sp {propose*hello-world
       (state <s> ^type state)
       -->
       (<s> ^operator <o> +)
       (<o> ^name hello-world)
       }
    
    sp {apply*hello-world
       (state <s> ^operator <o>)
       (<o> ^name hello-world)
       -->
       (write (crlf) |Hello World AMY|)
       }
    
    --- Proposal Phase ---
    --- Firing Productions (IE) ---
    Firing propose*hello-world
    --- Change Working Memory (IE) ---
    =>WM: (121: S1 ^operator O1 +)
    =>WM: (120: O1 ^name hello-world)
    --- Proposal Phase ---
    --- Decision Phase ---
    =>WM: (122: S1 ^operator O1)
    1: O: O1: (hello-world)
    --- Decision Phase ---
    =>WM: (128: S2 ^quiescence t)
    =>WM: (127: S2 ^choices none)
    =>WM: (126: S2 ^impasse no-change)
    =>WM: (125: S2 ^attribute operator)
    =>WM: (124: S2 ^superstate S1)
    =>WM: (123: S2 ^type state)
    : ==>S: S2 (operator no-change)
    --- Output Phase ---
     
    red> step
     
    --- Input Phase ---
    --- Proposal Phase ---
    --- Firing Productions (IE) ---
    Firing apply*hello-world
    Hello World AMY --- Change Working Memory (IE) ---
    --- Proposal Phase ---
    --- Firing Productions (IE) ---
    =>WM: (130: S2 ^operator O2 +)
    =>WM: (129: O2 ^name hello-world)
    --- Proposal Phase ---
    --- Decision Phase ---
    =>WM: (131: S2 ^operator O2)
    2: O: O2 (hello-world)
    --- Decision Phase ---
    =>WM: (137: S3 ^quiescence t)
    =>WM: (136: S3 ^choices none)
    =>WM: (135: S3 ^impasse no-change)
    =>WM: (134: S3 ^superstate S2)
    =>WM: (133: S3 ^attribute operator)
    =>WM: (132: S3 ^type state)
    : ==>S: S3 (operator no-change)
    --- Output Phase ---
     
    red> step
    

    Answer from John Laird:

    1. Following the decision procedure, Soar 8 checks to see if there are any operator application rules waiting to fire. There should not be any elaboration (i-support) rules waiting to fire, because they all fired before the decision. If there are no operator application rules, it immediately creates a subgoal.

    2. The problems only occur because the operator application rule she wrote does nothing internal to Soar. If it created (or removed) a working memory element, it would fire appropriately (and there would be an operator application phase) and no impasse would result (until later if the proposal was still active). Another way to look at it is that if hello-world did the right thing and did not use a write command, but created an augmentation on the output-link to create output, this would not be a problem.
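    Following John's last point, here is a sketch of an apply*hello-world that makes a change internal to Soar, putting its output on the output link instead of calling write. This is only an illustration; the ^message attribute is hypothetical, and the external I/O code would have to pick it up:

    sp {apply*hello-world
       (state <s> ^operator.name hello-world
                  ^io.output-link <out>)
       -->
       (<out> ^message hello-world)}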

  • Using a # character in your code:

    Question:

    The following production gives errors, apparently incorrectly interpreting the # in the constant as a comment.

    sp {daml2soar*class*Vehicle
       (state <ts> ^superstate nil)
       (state <ts> ^world-knowledge <world-knowledge>)
       (<world-knowledge> ^ontology <ontology>)
       (<ontology> ^name DAML2Soar)
       -->
       (<ontology> ^class <class0>)
       (<class0> ^name Vehicle)
       (<class0> ^type <type1>)
       (<type1> ^name class)
       (<type1> ^namespace
       |file:/home/lisse/downloads/eclipse/workspace/DAML2Soar/src/com/soartech/daml2soar/test/test_classes.
       }
    

    Answer from Randy Jones (July 14, 2003):

    Remember that everything goes through the Tcl parser before it gets to Soar. So, I'm not sure the interpretation of # as a comment character is actually incorrect. You should try putting a backslash in front of it (i.e., \#).

 

Back to Table of Contents


(APT11) Why do there seem to be no preferences in preference memory for (id ^attribute value) triples acquired through C input functions?

From Randy Jones:

Working memory elements from I/O are added directly to working memory without going through Soar's preference mechanism. That's why they don't have any preferences. That's also why production rules can't make them disappear (say, with a reject preference), because those WME's just totally bypass the preference process. This is intentional. The idea is that the input link represents "current perception", and you can't use purely cognitive processes to "shut off" your current perceptions. If you want to remove things from the input link, you have to make the I/O code remove them. If you want the input link to be under the control of cognition, then your rules have Tcl tell the I/O system (via the output link) to change the values on the input link. The rules cannot do it directly.

From Karen Coulter:

C Input functions add wmes using the kernel routine add_input_wme, which surgically alters working memory directly. The preference mechanism is not part of this process. The only way to remove these input wmes is by calling the routine remove_input_wme from your input function. add_input_wme returns the pointer to the wme you added, which you must store statically somehow (on a linked list or something) to later remove the wme with remove_input_wme. I should add the caveat that I didn't actually go back and look at Soar 6, and gave you the answer for Soar version 7 and later, but I'm fairly certain that add_input_wme and remove_input_wme also were in Soar 6.

Back to Table of Contents


(APT12) Why is "send" in Soar different from "send" in Tk, or What do I do if my Soar process comes up as 'soar2'?

[This stuff will be subject to some change in Soar 7.1 and on, I suspect, -FER] Soar's "send" is and is not different from "send" in Tk. By default, Soar's send is different from Tk's. When Soar 7 is compiled with Tk, and -useIPC is specified on the command line at runtime, Soar's send is the same as Tk's.

From: Karen J. Coulter (kcoulter@eecs.umich.edu), Date: Apr 11, 1997
Subject: Re: [Soar-Bugs #85] Why is "send" in Soar different from "send" in Tk?

To support multiple agents, Soar 7 takes care of all the overhead of creating and maintaining multiple interpreters (which wasn't supported by Tcl < 7.6). Soar 7 creates a flat space of interpreters -- no single interpreter is the supervisor or owner of another. To allow for communication among agents, Soar uses the "send" command. Tcl itself doesn't have a "send" command; it comes from Tk. But we didn't want to require Tk (i.e., X Windows) in order to run Soar. So Karl wrote an X-server-less version of "send" that could be used to support multiple agents in Soar when Tk was not compiled in. This version of send works only for agents/interps within the same process.

Tk's send function registers with the X server to use IPCs (interprocess communications). It tries to register using the name of the application. If, when Tk/send registers, the X server already has a process by that name, then the name gets a number appended to it so that the name will be unique, and then the registration can be done. Then Soar/Tk/send also changes the name of the interpreter to match the process name that was registered by the X server. Through the X server, interpreters can communicate with other interpreters in other processes.

What was happening to novice and not-so-novice Soar users who wanted to test applications and run multiple copies on the same machine was this: Start the first copy; soar.soar gets sourced to start the application, and "soar" gets registered with the X server (invisible to the Soar user). Everything (GUI etc.) comes up fine. Start the 2nd copy: the soar interp gets renamed to soar2, no soar2.soar file is found, so no file gets sourced and the application doesn't run as expected. This was happening to nearly all the Soar developers at Michigan. This was also happening for a cognitive science class [at Nottingham] trying to all run the same application (subtraction world) to explore theories of learning. It was VERY confusing. But we had this nice little workaround, since Karl had written the "send" command for Tcl-only Soar that avoided registering with the X server.

Since it was anticipated that most Soar users would not be starting out wanting IPCs ("what's that?"), we made the default method for running Soar use the IPC-less version of send. For those users savvy enough to run multiple processes and want to communicate among those processes, we added the commandline flag -useIPC. The doc could be more explicit about how "send" works in Soar, but the -useIPC flag is in the help pages. And I still believe this was the right way to go. Otherwise we would have had to tell users to have as many soar[n].soar files as they ever thought they would need to support running multiple copies of an application if Tk was compiled in their version of Soar. OR we tell the users who need IPCs (far fewer in number), oh yes, you need to specify -useIPC when you start up soar.

Now that Tcl 7.6 supports multiple interpreters, Soar no longer has to manage the overhead, we no longer have to create our own methods for communicating among agents, and we won't be shadowing Tcl or Tk commands. It also looks like Soar won't be autoloading any 'interpname'.soar files, so this whole problem goes away (only to be replaced by something else, no doubt). And Soar programmers who want to communicate with other agents and processes will have to read the Tcl doc to figure out how to do it =8-O ;)

Back to Table of Contents


(APT13) How to get Soar to talk to new Tk displays?

  • Redirect an agent's output

    Question:

    I am trying to create a child interpreter in Tcl/Tk that displays some graphics. I also have a text widget in which I want to display Soar's output. I then try to alias "puts" to a command that writes to the text widget, interp alias {} puts {} WriteLog. In short, I want something like the TSI. The problem is that Soar continues to direct its output to the shell. Doesn't Soar use puts, or have I missed something?

    Answer (Karen Coulter wrote):

    Please take a look at "help output-strings-destination". This command allows you to redirect an agent's output to another procedure, which is basically what a Tk text widget is.

    "puts" is a Tcl command which will not do what you want, but you could possibly use your method below if you used "echo" instead. Please, see "help echo".

  • Adjust output from an agent

    One question asked whether it is possible to redirect the string destination of an agent interpreter to the "interaction window". The answer: if a developer uses "CreateNewAgent", it should do all the initialization work, which is necessary to make the TSI and the Agent Interaction Window work properly.

    Here is an example code that has the above problem.

    foreach agent $sim_agents {
         interp create $agent
            $agent alias sim sim
            load {} TK $agent
            $agent eval [list set auto_path "$auto_path"]
            $agent eval "set interp_name $agent"
            $agent eval "source $source"
            $agent eval "output-strings-destination -push -text -widget (.tsw) .t"
            $agent eval "source $cohen_maze_dir/soar_agents_lightsgui.tcl"
            $agent eval {d 2}
    }
    

    Try changing the above code to the following:

    foreach agent $sim_agents {
         createNewAgent $agent
         $agent eval "source $source"
         $agent eval "source $cohen_maze_dir/soar_agent_lightsgui.tcl"
         $agent eval {run 2 d}
    ...
    }
    

Back to Table of Contents


(APT14) How to avoid memory leaks?

Memory issue: "fix" in Soar 8.4

Question from Henrik Putzer (Nov. 19, 2001):

I see that Soar is consuming more and more memory. Does it do some internal logging (sort of history functionality) which I can disable...or is there a known but unfixed memory leak?

Answer from Scott Wallace (Nov. 19, 2001):

There are some known memory issues with Soar. The dominating one has a "fix" in Soar 8.4, but the fix is not completely stable. In general, Soar seems to purposely keep around a fair amount of information, just in case it will be needed later. Most of this information gets flushed away when a state is destroyed, but at the top state, information may just accumulate. Soar agents that perform lots of reasoning on the top state are likely to suffer the most from this memory issue. Our fix attempts to get around this problem by significantly reducing the amount of memory overhead for the top state. In Soar 8.4, you can enable it using:

#define NO_TOP_LEVEL_REFS

If you encounter problems with it, let me know as many details as possible. I may be able to help. Otherwise, you might be able to reduce memory overhead by using a more dynamic goal space.

Memory leak in soar_core_utils.c

Question from John A. Shue:

While trying to debug a memory leak in my application, I discovered the following comment in the soar_core_utils.c at line 792:

/* KNOWN MEMORY LEAK! Need to track down and free ALL structures */
/* pointed to be fields in the agent structre.                   */
 

I am using Brad Jones' SGIO distribution and compared the soar_core_utils.c in his kernel code with the latest 8.4 kernel code and the files are exactly the same.

Before I spend any time trying to fix this, I was wondering if anyone out there in the Soar community had a solution or fix for this problem. My application basically iterates over a simulation numerous times. At the start of the application, it creates a single sgio::APISoar object that it uses for the rest of the application. During each iteration, the application creates a number of agents, runs through a simulation, and lastly destroys all the agents. As the application runs, the memory usage constantly increases. I attempted to alleviate this problem by creating a new sgio::APISoar object each time I run through the simulation, but the allocated memory for the agent structure is of course never being released.

Any suggestions will be greatly appreciated.

Answer from Jacob Crossman (Sept. 3, 2002):

We have the same problem in our project. We noticed about a 1 MB leak per agent that was independent of how long the agent ran. Using a memory leak analysis tool, we narrowed the biggest leaks down to the Tcl interpreter, though the kernel does appear to leak some also (I don't have a number for the kernel leak size, but it isn't nearly as large as the Tcl leaks in our application).

Making sure Tcl shuts down correctly and cleans up its memory is not easy. Another person here at the company is working on that. I'll ask him to send a reply if he has anything to add.

Our temporary solution while we are looking for more permanent fixes has been to run each simulation as a separate process and kill/recreate the process for each simulation run.

You are using sgio::APISoar, which, if I recall correctly, doesn't use the Tcl interpreter, so I'm not sure our problems are the same. Another possibility is that some of the main kernel memory pools are not getting cleaned up. We had a similar problem with TacAir-Soar at one time and implemented a fix that may not have made it back into the main branch. The fix was basically to iterate over all the kernel memory pools (linked list headed by agent.memory_pools_in_use) and free them. I've dug up the code we used to do this. You might try it out:

 

In soar_core_util.c, function soar_default_destroy_agent_procedure you can try (near the end of the function):

memory_pool* p;
...
p = delete_agent->memory_pools_in_use;
while (p)
{
     memory_pool *next = p->next;
     free_memory_pool(p);
     p = next;
}
delete_agent->memory_pools_in_use = NIL;
 

The function "free_memory_pool" is not in Soar 8. It is defined as:

void free_memory_pool(memory_pool *p)
{
   char *trav, *next;
 
   trav = p->first_block;
   while (trav)
   {
      next = *(char **)trav;
      free_memory(trav, POOL_MEM_USAGE);
      trav = next;
   }
   p->num_blocks = 0;
   p->first_block = NIL;
   p->free_list = NIL;
}
 

I tried this once in Soar 8, but I didn't see much improvement in memory usage. That may have been because Tcl was causing most of our leaks. I'm not sure if it causes any problems (like crashes).

Back to Table of Contents


(APT15) How to represent waypoints and partial results in hierarchical goal stacks?

Question from Randy Jones:

Suppose that you have a problem-space structure something along the following lines:

top-ps
   execute-mission
      racetrack
 

Inside the racetrack problem space, we want to set an o-supported flag (based on our current position) marking that the racetrack waypoint was achieved. In Soar 8, we can't put that flag on the racetrack state, because as soon as our position changes the whole racetrack state will get retracted (right?). So instead I put the flag on the top-ps state.

Now the problem is that I have to clean up after myself. I need to clean up that flag when the racetrack operator is gone. Is there an "approved" way to do this? Or has someone done this and developed a clean way?

One possibility I can think of is that I propose a clean-up-racetrack operator (in the execute-mission problem-space) that is worse than the racetrack operator (and conditional on the existence of the O-supported stuff I want to delete). This makes the clean-up-racetrack operator wait until racetrack is done to do its cleaning up. Is that the best way?
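That clean-up-racetrack idea could be sketched roughly as below. All names here are hypothetical, and "<" between the two operators is the binary worse preference:

# propose clean-up only while the o-supported flag exists; make it
# worse than racetrack so it waits until racetrack is done
sp {execute-mission*propose*clean-up-racetrack
   (state <s> ^name execute-mission
              ^operator <o> +
              ^top-state.racetrack-achieved <flag>)
   (<o> ^name racetrack)
   -->
   (<s> ^operator <c> +)
   (<c> ^name clean-up-racetrack)
   (<s> ^operator <c> < <o>)}

# applying it deletes the flag from the top state
sp {execute-mission*apply*clean-up-racetrack
   (state <s> ^operator.name clean-up-racetrack
              ^top-state <ts>)
   (<ts> ^racetrack-achieved <flag>)
   -->
   (<ts> ^racetrack-achieved <flag> -)}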

Answer from Scott Wallace (March 19, 2002):

Here is what I do to deal with this situation. I've been using it for a while now, and it seems to work.

First, I use this rule to create an augmentation on the top state that will hold the subgoal data.

sp {all*elaborate*sub-goal*data-head
   (state <s> ^superstate nil)
   -->
   (<s> ^sub-goal-data <gd>)}
 

Then, I use this rule to create a unique augmentation on each subgoal.

sp {all*elaborate*goal*gensym
   (state <s> ^top-state <ts>)
   -->
   (<s> ^unique-id <u>)
   (<u> ^symbol (make-constant-symbol))}
 

This rule copies and creates sub-structures on the top-state's sub-goal-data for each state in the current stack.

sp {all*elaborate*sub-goal*data
   (state <s> ^name <name> ^unique-id.symbol <sym>
                    ^top-state <ts>)
   (<ts> ^sub-goal-data <gd>)
   -->
   (<gd> ^<name> <n>)
   (<n> ^id <sym>)
   }
 

Finally, this rule removes stale sub-goal information:

sp {any*apply*remove*old*sub-goal*data
   (state <s> ^operator.name <name>
                    ^top-state <ts>)
   (<ts> ^sub-goal-data <gd>)
   (<gd> ^<ss> <ssa>)
   (<ssa> ^id <sym>)
  -{(state <somestate> ^unique-id <u>)
    (<u> ^symbol <sym>)}
   -->
   (<gd> ^<ss> <ssa> -)}
 

This seems to work for me. Basically I use it for a similar thing: I have an agent, and one of its low-level operators is "wander". It wanders about trying to achieve its super goal. But for any given super goal, I want to make sure that I don't wander to the same place twice. So I store this information in the sub-goal-data. Then, when I've wandered to the right place, I pop up in the goal stack, this temporary data is blown away, and then I might select a new goal which again requires me to wander about, but this time I will have a clean slate.

Back to Table of Contents


(APT16) How do I use SML (Soar Markup Language)?

Question from Michela De Vincentis (Nov. 21, 2005):

I need more information about the methods used in the sml package (e.g., class Kernel, or class Agent, etc.). I'd like to know if there is a manual that explains how all the objects and the methods inside the classes work (Java).

Bob Marinier wrote (Nov. 21, 2005):

Unfortunately, there is no separate manual at this time. However, all of the available functions are thoroughly documented in the header files in SoarIO/ClientSML/include. While these are C++ headers, the Java methods are virtually identical (with the obvious substitutions, i.e. String instead of char *, etc).

Note: After installing Soar 8.6.1, you can find the "SML Quick Start Guide" in the "Documentation" folder. Also, brief information about the "SML Quick Start Guide" appears in APT17.

Back to Table of Contents


(APT17) How to create JavaToh project using Eclipse on Windows?

Question - Zhang Qinjie wrote (7 February 2006):

Does anyone know how to create the JavaToh project using Eclipse 3.1 on Windows? After I create the project, there are some errors. How can I import Java_sml_ClientInterface.dll as a library in Eclipse?

Answer - Douglas Pearson wrote (8 February 2006):

We use Eclipse for almost all of the Java development and generally the easiest solution is to just make sure that the "soar-library" folder that contains Java_sml_ClientInterface.dll is on your Windows PATH. If you change the path, you may need to restart Eclipse to make sure it sees the change.

For those not using Windows or who would prefer to leave their path alone, you can also set the working directory to be the "soar-library" folder. You do that in "Run | Debug... " to bring up the run configurations and then on the "Arguments" tab you can set the working directory.

-- Doug --

Question - Zhang Qinjie wrote (9 February 2006):

I have installed Soar 8.6.1 and I am able to open those projects using VS.NET. I am wondering whether there are documents available describing the functions of each project under the "SML" and "SML-dll" solutions. I think it would help a lot in understanding the library files (dll, jar) generated by these projects.

Answer - Douglas Pearson wrote (9 February 2006):

The "SML Quick Start Guide" is probably the place to start for understanding the way SML works as a system.

However, it doesn't go through each module in the SML solution in turn explaining them. Here's a quick explanation of those:

  • ClientSML - The interface library a client app (environment/tool) uses to communicate with Soar using SML
  • ConnectionSML - Library of communication code shared by kernel and client side of SML (you don't work with this directly)
  • KernelSML - The kernel side of the SML communication layer. SML requests are processed here.
  • ElementXML - Library used to represent an XML object when passed between client and kernel. Includes an XML parser.
  • CommandLineInterface - Module in charge of processing the command line (e.g. "run 3")
  • gSKI-static - gSKI is a wrapper library around the Soar kernel (you don't work with this directly).
  • SoarKernelSource - The heart of the system where productions fire and working memory is managed.
  • TestClientSML - Test app to stress parts of the ClientSML interface.
  • TestCommandLineInterface - Test app for working with the command line
  • TestConnectionSML - Test app for checking on the ConnectionSML library
  • TestSMLEvents - Test app for the event system in SML
  • TowersSML - Towers of Hanoi implementation using SML to communicate between Soar and an external environment of towers.
  • SML-dll.sln is the same but builds the gSKI layer as a separate DLL rather than as part of SoarKernelSML. That's not something you're likely to want.

I hope that helps,

Doug

Back to Table of Contents


Section 6: Miscellaneous Resources


(M0) Comparisons between Soar and ACT-R

Todd Johnson has written a paper laying out the theoretical distinctions between ACT-R and Soar and then comparing control in ACT-R and Soar from a cognitive perspective. The paper looks at how each architecture fares with respect to relevant behavioral phenomena.

Todd intends to expand this paper into a general comparison of ACT-R and Soar for the cognitive science community. To this end, he would appreciate your comments on the present version.


Currently at the Applied Cognitive Science Lab at Penn State University, there is a paper which provides a good comparison of Soar and ACT-R. It can be found at: acs.ist.psu.edu/papers/ritterW98.pdf

Its full citation is:

Ritter, F. E., & Wallach, D. P. (1998). Models of two-person games in ACT-R and Soar. Proceedings of the Second European Conference on Cognitive Modelling. pp. 202-203. Nottingham: Nottingham University Press.


Gary Jones has written up a comparison between Soar and ACT-R. A more condensed version was published in AISBQ:

Jones G. (1996). The architectures of Soar and ACT-R, and how they model human behaviour. Artificial Intelligence and Simulation of Behaviour Quarterly, Winter 1996, 96, 41-44.

Also, see the Appendix in Ritter <Soar report citation here>, Shadbolt, Young, et al., 2003.

Back to Table of Contents


(M1) Unofficial mirror of Soar FAQ and LFAQ

A Belorussian translation is available.

Back to Table of Contents


(M2) Soar memorabilia

Soar on Headline News

Since the ISI group hasn't mentioned it yet, I'd like to point out that Soar made CNN Headline News on 26 Aug 1997 with reports about Steve and RWA-Soar.

You can view the story on the CNN website at http://www.cnn.com/TECH/9708/25/military.agents/index.html.

From: "Paul E. Nielsen" [nielsen@eecs.umich.edu]


A top 10 list

 

Here's a heretofore unrevealed top ten list I [Bob Doorenbos] wrote back in June '93 when, shortly before a Soar workshop, Rick Lewis asked people to send him their "concerns about Soar" -- issues, problems, worries, etc. that he would try to summarize during the workshop. I sent him the following list. For some strange reason, Rick failed to address any of these.

Top Ten Concerns About Soar

---------------------------

10. Will we have a workshop at Georgia Tech during the '96 summer olympics?

9. 100,000 chunks take up too much memory on my workstation

8. Should get Billy Joel to write an official Soar theme song

7. Lloyd Bridges

6. One of the default productions has a name 111 characters long

5. Those pesky attribute-impasses!

4. Still no Tetris-Soar

3. P.I.'s should change their middle names to "Rodham"

2. Don't have a Soar logo or Soar t-shirts, hats, coffee mugs

1. Not enough celebrity cameos in the Soar video

From Bob Doorenbos [bobd@netbot.com] Fri, 18 Apr 1997 12:56:52 -0700

Back to Table of Contents


(M3) What was TAQL?

TAQL (Task AcQuisition Language) was a higher-level programming language written to work with Soar 5. The best citations for it that I know of are as follows:

Yost, G. R., & Newell, A. (1989). A problem space approach to expert system specification. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 621-627.

Yost, G. R. (1993). Acquiring knowledge in Soar. IEEE Expert, 8(3), 26-34.

Copies of the source code and manuals are probably available at the software archive at CMU http://www.cs.cmu.edu/afs/cs/project/soar/www/soar-archive-software.html

The TAQL compiler was written in Lisp. You would load TAQL into a running Lisp, then put into a file templates designed to perform common problem-space operations, such as creating and implementing an operator or a problem space. When these templates were loaded (or compiled) they produced Soar 5 productions. These template-produced productions could be augmented with additional hand-written productions.
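As a rough illustration of this template-to-production workflow: the TAQL form below is hypothetical (reconstructed from the description above, not actual TAQL syntax), and the expansion is shown in later Soar production syntax rather than Soar 5's, purely to convey the level difference between the two notations.

```
;; Hypothetical TAQL-style template (illustrative only): declare an
;; operator proposal in one compact, problem-space-level form...
(propose-operator move-tile
  :space eight-puzzle
  :when  ((tile <t> ^adjacent-to blank))
  :new   (^name move-tile ^tile <t>))

;; ...which the template compiler would expand into a production,
;; sketched here in later Soar syntax:
sp {eight-puzzle*propose*move-tile
   (state <s> ^problem-space.name eight-puzzle
              ^tile <t>)
   (<t> ^adjacent-to blank)
-->
   (<s> ^operator <o> +)
   (<o> ^name move-tile ^tile <t>)}
```

The point of the template level was that the user wrote the short declarative form and never had to hand-code the matching conditions and preference actions in the generated production.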

There was also a mode in GNU Emacs to help write TAQL code; it would balance braces of various sorts and insert templates for you to fill out. It is written up in Ritter's PhD thesis, which showed that TAQL had a more complex syntax (by its grammar) than the C programming language.

My belief about why TAQL is no longer with us is this: it was an initial pass at a necessary level in Soar. A successor is badly needed because it was a good idea -- a higher-level language that let users generate behaviour at the problem-space level without writing Soar productions directly. It had several flaws, however, although it did come with a manual and a short tutorial. Gregg Yost writing TAQL was probably the fastest that Soar code has ever been written.

The syntax was large. This could be because no simple syntax will do the job, or because it was a first draft. The complex syntax made it somewhat hard to use.

TAQL ran with learning off. Learning could be turned on, but things broke. I don't have access, and I don't think a full analysis of how things broke was ever done.

TAQL was written in Lisp. When Soar moved to C and Tcl/Tk, TAQL would have had to be translated. Gregg Yost graduated around this time, and it was not carried forward. It was a large project and was most naturally written in Lisp; it would have been harder, but not at all impossible, to write it in Tcl.

Back to Table of Contents


(M4) Soar and Design Models

From Gourabmoy Nath:

Implemented Soar-based prototype systems for engineering design exist in the following domains:

(1) Elevator configuration design
(2) Civil Engineering design: floor systems design
(3) Chemical Engineering process design
(4) Computer configuration design: R1-SOAR
(5) Algorithm design

Back to Table of Contents


(M5) Past versions of this FAQ

2 August 2010
21 March 2006
1 July 2005
11 June 2005
May 2004
October 2003
May 2002
April 2002
May 2001
March 2000

Back to Table of Contents


End of Soar FAQ