Soar: Frequently Asked Questions List

Frank E. Ritter: frank.ritter@psu.edu
Jong W. Kim: jongkim@psu.edu

Last updated 1 July 2005


Table of Contents

Section 0: Introduction

Section 1: General Questions

(G0) Where can I get a copy of the Soar FAQ?

(G1) What is Soar?

(G2) Where can I get more information about Soar?

(G3) What does "Soar" stand for?

(G4) What do I need to be able to run Soar?

(G5) Where can I obtain a copy of Soar?

(G6) Who uses Soar for what?

(G7) How can I learn Soar?

(G8) Is Soar the right tool for me?

(G9) Is there any support documentation available for Soar?

(G10) How can I find out what bugs are outstanding in Soar?

(G11) Other links and awards in Soar

(G12) How does Soar currently stand as a Psychology theory?

(G13) What tools are available for Soar?

(G14) Can Soar be embedded?

(G15) Resources for teaching Soar

(G16) Who has worked on the Soar FAQ?

Section 2: Technological/Terminology Questions

(T1) What is search control?

(T2) What is the data-chunking problem?

(T3) What is the data-chunking generation problem?

(T4) What do all these abbreviations and acronyms stand for?

(T5) What is this NNPSCM thing anyway?

(T6) How does Soar 7 differ from Soar 6?

(T7) What does using a ^problem-space.name flag buy you, apart from providing a conventional label within which to enable operator proposal?

Section 3: Programming Questions

(P1) How can I make my life easier when programming in Soar?

(P2) Are there any guidelines on how to name productions?

(P3) Why did I learn a chunk here?

(P4) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

(P5) What is a justification?

(P6) How does Soar decide which conditions appear in a chunk?

(P7) Why does my chunk appear to have the wrong conditions?

(P8) How is it possible for Soar to generate a duplicate chunk?

(P9) What is all this support stuff about? (Or why do values keep vanishing?)

(P10) When should I use o-support, and when i-support?

(P11) Why does the value go away when the subgoal terminates?

(P12) What's the general method for debugging a Soar program?

(P13) How can I find out which productions are responsible for a value?

(P14) What's an attribute impasse, and how can I detect them?

(P15) How can I easily make multiple runs of Soar?

(P16) Are there any templates available for building Soar code?

(P17) How can I find all the productions that test X?

(P18) Why doesn't my binary parallel preference work?

(P19) How can I do multi-agent communication in Soar 7?

(P20) How can I use sockets more easily?

(P21) How do I write fast code?

(P22) In a WME, is the attribute always a symbolic constant?

(P23) How does one write the 'wait' operator in Soar 8.3?

(P24) How does one mess with the WMEs on the I/O link?

(P25) How can I get Soar to interpret my symbols as integers?

(P26) Has there been any work done where new operators are learnt that extend problem spaces?

(P27) How can I find out about a programming problem not addressed here?

Section 4: Downloadable Models

(DM0) Burl: A general learning mechanism

(DM1) A general model of team work

(DM2) A model that does reflection

(DM3) A model that does concept/category acquisition

(DM4) A model for Java Agent Framework (JAF) component

(DM5) Herbal: A high level behavior representation language

(DM6) dTank: A competitive environment for distributed agents

(DM7) A model that counts attributes

(DM8) VISTA: A toolkit for visualizing an agent's behavior

Section 5: Advanced Programming Tips

(APT1) How can I get log to run faster?

(APT2) Are there any reserved words in Soar?

(APT3) How can I create a new ID and use it in the production?

(APT4) How can I access Tcl and Unix variables?

(APT5a) How can I access Soar objects?

(APT5b) How can I find an object that has attributes and values?

(APT6) How can I trace state changes?

(APT7) How can/do I add new Tcl/Tk libraries?

(APT8) How can and should I use add-WME and remove-WME?

(APT9) Frequently found user model bugs, or How can I stick beans up my nose metaphorically in Soar?

(APT10) Why do there seem to be no preferences in preference memory for (id ^attribute value) triples acquired through C input functions?

(APT11) Why is "send" in Soar different from "send" in Tk?, or What do I do if my Soar process comes up as 'soar2'?

(APT12) How to get Soar to talk to new Tk displays?

(APT13) How to avoid memory leaks?

(APT14) How to represent waypoints and partial results in hierarchical goal stacks?

Section 6: Miscellaneous Resources

(M0) Comparisons between Soar and ACT-R

(M1) Unofficial mirror of Soar FAQ and LFAQ

(M2) Soar memorabilia

(M3) Soar in the news

(M4) Other interesting things with Soar in it

(M5) What was TAQL?

(M6) Soar and Design models


Section 0: Introduction


This is the introduction to a list of frequently asked questions (FAQ) about Soar with answers. The FAQ is provided as a guide for finding out more about Soar. It is intended for use by all levels of people interested in Soar, from novices to experts. With this in mind, the questions are essentially divided into six parts as follows:

Questions in the first section have their numbers prefixed by the letter G (for General); those in the second section by T (for Technological); the third by P (for Programming); the fourth by DM (for Downloadable Models); the fifth by APT (for Advanced Programming Tips); and the sixth by M (for Miscellaneous Resources). The FAQ also attempts to serve as a repository of the canonical "best" answers to these questions, so if you know of a better answer or can suggest improvements, please feel free to make suggestions.

This FAQ is updated and posted on a variable schedule. We scan the Soar Workshop proceedings yearly, read Soar-group emails with the FAQ in mind, and solicit answers where we see common and important questions. Full instructions for getting the current version of the FAQ are given in question (G0). To make it easier to spot what has changed since the last release, new and significantly changed items are often tagged with a "new" icon on each major reference.

Suggestions for new questions, answers, re-phrasings, deletions, etc. are all welcome. Please include the word "FAQ" in the subject of your e-mail correspondence. Please use the mailing lists noted below for general questions, but if they fail or you do not know which one to use, contact one of us.

This FAQ is not just our work, but includes numerous answers from members of the Soar community, past and present. The initial versions were supported by the DERA and the ESRC Centre for Research in Development, Instruction and Training. The Office of Naval Research currently provides some support.

Gordon Baxter put the first version together. Special thanks are due to John Laird and the Soar Group at the University of Michigan for helping to generate the list of questions, and particularly to Clare Bates Congdon, Peter Hastings, Randy Jones, Doug Pearson (who also provided a number of answers), and Kurt Steinkraus. The views expressed here are those of the authors and should not necessarily be attributed to the UK Ministry of Defence, the US Office of Naval Research, or the Pennsylvania State University.

Frank E. Ritter (frank.ritter@psu.edu)
Jong W. Kim (jongkim@psu.edu)

Back to Table of Contents


Section 1: General Questions


(G0) Where can I get a copy of the Soar FAQ?

The latest version of the list of Frequently Asked Questions (FAQ) for the Soar cognitive architecture is posted periodically to the Soar-group mailing list, and to the following newsgroups:

comp.ai

sci.cognitive

sci.psychology.theory

If you are reading a plain text version of this FAQ, there is also an HTML version, which you can access using any Web browser, at the following URL:

acs.ist.psu.edu/soar-faq/

If you find that material here is out of date or does not include your favorite paper or author, please let us know. The work and range of material generated by the Soar group is quite broad and has been going on for over 20 years now.

Back to Table of Contents


(G1) What is Soar?

Soar means different things to different people. Soar is used by AI researchers to construct integrated intelligent agents and by cognitive scientists for cognitive modeling. It can basically be considered in several different ways:

  1. A theory of cognition: the principles behind the implemented Soar system.
  2. A set of principles and constraints on (cognitive) processing: it provides a (cognitive) architectural framework within which you can construct cognitive models. In this view, it can be considered an integrated architecture for knowledge-based problem solving, learning, and interaction with external environments.
  3. An implementation of these principles and constraints as a programming language.
  4. An AI programming language.

Soar incorporates:

  • Problem spaces as a single framework for all tasks and subtasks to be solved
  • Production rules as the single representation of permanent knowledge
  • Objects with attributes and values as the single representation of temporary knowledge
  • Automatic subgoaling as the single mechanism for generating goals
  • Chunking as the single learning mechanism
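The components above can be illustrated with a minimal, hypothetical production (the rule and attribute names here are invented for this sketch; temporary knowledge appears as ^attribute value pairs on objects, and the rule proposes an operator on the top state):

    # Propose a 'hello-world' operator whenever the top state exists.
    sp {propose*hello-world
       (state <s> ^superstate nil)
    -->
       (<s> ^operator <o> +)
       (<o> ^name hello-world)}

A second production would then test (<s> ^operator <o>) to apply the operator; if no rule can apply it, Soar subgoals automatically, and chunking can summarize the subgoal's result as a new rule.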

Soar is in the public domain, and the software releases are thus free to download and use. For more information regarding the Soar license, please refer to sitemaker.umich.edu/soar/license. In addition, the Soar video is available to download (10.4M) to find out more about Soar.

Back to Table of Contents


(G2) Where can I get more information about Soar?

Books

The following book will help you to get an introductory idea of Soar as a Unified Theory of Cognition.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard.

If you want to find out what people have modelled using Soar, please take a look at:

Rosenbloom, P. S., Laird, J. E. & Newell, A. (1993). The Soar Papers: Readings on Integrated Intelligence. Cambridge, MA: MIT Press.

Journal Articles and Book Chapters

Recent and forthcoming publications related to Soar include:

Huffman, S., & Laird, J.E. (1995). Flexibly instructable agents. Journal of Artificial Intelligence Research, 3, 271-324.

Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.

Laird, J. E., & Rosenbloom, P. S. (1996) The evolution of the Soar cognitive architecture. In D. Steier & T. Mitchell (eds.), Mind Matters: A Tribute to Allen Newell. Mahwah, NJ: Lawrence Erlbaum Associates.

Lehman, J. F., Laird, J. E., & Rosenbloom, P. S. (1996) A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg & D. Scarborough (eds.), Invitation to Cognitive Science, Volume 4.

Lewis, R. L. (2001). Cognitive theory, Soar. In International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Pergamon (Elsevier Science).

Lewis, R. L. (1999). Cognitive modeling, symbolic. In Wilson, R. & Keil, F. (Eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.

Tambe, M., Johnson, W. L., Jones, R. M., Koss, F., Laird, J. E., Rosenbloom, P. S., & Schwamb, K. (1995). Intelligent agents for interactive simulation environments. AI Magazine, 16(1), 15-39.

Ritter, F. E. (2003). Soar. In Encyclopedia of Cognitive Science: Macmillan.

Web Sites

There are a number of Web sites available that provide information about Soar at varying levels:

The first place you should visit is the Soar homepage at the University of Michigan, where Soar is maintained. The Artificial Intelligence (AI) lab at the University of Michigan has a collection of Web pages on cognitive architectures. This includes a section on Soar, and there is also a Web page for the Soar group at the University of Michigan.

The Information Sciences Institute (ISI) at the University of Southern California provides a collection of Soar-related Web pages including the Soar group at ISI, and the Soar archive which contains a publication bibliography, document abstracts, official software releases, software manuals, support tools, and information about members of the Soar community.

Carnegie Mellon University - where Soar was originally developed - has its own Soar projects page.

The University of Hertfordshire website includes Soar resources on the Web and elsewhere, a few of Richard Young's papers, and an annotated bibliography of Soar journal articles and book chapters (but not conference papers) related to psychology that is intended to be complete.

ExpLore Reasoning Systems has a summary with some new links at www.ers.com/Html/soar.htm.

There is also a site at the University of Nottingham that includes mirrors of several of the US sites as well as some things developed at Nottingham, including the Psychological Soar Tutorial. A nascent site at the Pennsylvania State University will appear at Frank Ritter's homepage.

Frank Ritter also has papers on Soar that are available for download at acs.ist.psu.edu/papers.

Mailing Lists

There are several mailing lists within the Soar community that serve as forums for discussion and as places to raise queries. The mailing lists are currently hosted on SourceForge.net, where you can also subscribe and unsubscribe. The main ones are:

  • Soar-group mailing list (soar-group@lists.sourceforge.net) - You can discuss Soar and its components.


  • Soar-games mailing list (soar-games@lists.sourceforge.net) - For discussion of the development of games using Soar. To see the collection of prior postings to the list, please visit the Soar-games Archives.


  • Soar SML projects mailing list (soar-sml-list@lists.sourceforge.net) - This is the discussion list for Soar SML projects.


  • Soar consortium mailing list (soar-consortium@lists.sourceforge.net) - This mailing list sends mail to the current Soar Consortium Board members.


  • Soar-Umich mailing list (soar-umich@lists.sourceforge.net) - This is a mailing list for Soar researchers at the University of Michigan.


There used to be (1988 to 2000) a European mailing list. The eu-soar list merged with the Soar-group list in June 2000.

Newsgroups

At present, there is no Soar newsgroup. There has occasionally been talk about starting one, but the mailing lists tend to serve most purposes. Matters relating to Soar occasionally appear on the comp.ai newsgroup.

Soar Workshops

There have been two workshop series, one based in the USA and one based in Europe (the latter led to a series of international workshops and conferences on cognitive modeling, starting with the first in Berlin and held most recently at the University of Pittsburgh).

The 25th Soar Workshop will be held Monday, June 13 to Friday, June 17, 2005, in Ann Arbor, MI.

Soar Training

Soar tutorials are offered each year at the Soar Workshop. There have also been Soar tutorials at several conferences, held as additional training for academia, industry, and government. The University of Michigan group has probably done this the most; contact John Laird for details.

A one-day psychology-oriented Soar tutorial was often offered before EuroSoar workshops, and often at AISB conferences.

Back to Table of Contents


(G3) What does "Soar" stand for?

Historically, Soar stood for State, Operator And Result, because all problem solving in Soar is regarded as a search through a problem space in which you apply an operator to a state to get a result. Over time, the community stopped regarding Soar as an acronym, which is why it is no longer written in all upper case. You can, in fact, tell who is in the Soar community by how they write the word Soar (or at least, tell who has read the FAQ!).

Back to Table of Contents


(G4) What do I need to be able to run Soar?

The Soar software page is your first port of call. Older versions of Soar are available here: sitemaker.umich.edu/soar/soar_archive. There are a number of versions of Soar available for different combinations of machines and operating systems.

Soar Version 8.6.0 Release: Soar Suite 8.6.0 is now available for download. This version is a Windows-only release; it is anticipated that version 8.6.1, due before the Soar Workshop, will include Linux and Mac releases. One of the major changes is that Soar 8.6.0 provides an alternative approach for interfacing to Soar called SML (Soar Markup Language). This interface provides several strengths, such as multiple language support (Java, C++, and Tcl) and a uniform method for building I/O interfaces. In addition, the Soar Java debugger is newly provided. The debugger interfaces with Soar via SML and provides much higher performance than the TSI for detailed traces.

Soar Version 8.5.2 Release (as of July 2004): Soar Suite 8.5.2 is available for Windows, Linux, and Mac OS X. You can also check the SourceForge site to download the latest version releases.

Soar Version 8.5.0: Soar Suite 8.5.0 is available for all Windows, Linux, and Mac OS X platforms.

Soar Version 8.3 Release: Soar 8.3 adds several new features to the architecture and resolves a number of bug reports. A change to the default calculation of O-Support may require changes to existing Soar programs. These are described in the Soar 8.3 Release Notes. Soar 8.3 still includes a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar 7-compatibility mode. Available for Unix, Mac, and Windows.

Soar Version 8.2: Soar 8 introduces architectural changes which require changes to existing Soar programs. These are described in the Soar 8.2 Release Notes. Soar 8.2 does include a compatibility mode that allows users to run using the old Soar 7 architecture and methodology. Users must specify "soar8 -off" before loading any productions to run in Soar 7-compatibility mode.
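As a sketch of how the Soar 7 compatibility mode in these releases is used (the file name here is invented for illustration), at the Soar command prompt you would enter:

    soar8 -off             ;# must be issued before any productions are loaded
    source my-agent.soar   ;# these rules now run under the Soar 7 methodology

Issuing soar8 -off after productions have been loaded is an error, so it should be the first thing in your start-up script.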

Tiny Soar: TinySoar is an implementation of Soar that is intended to run on a memory-constrained device, such as a robot. TinySoar consists of two primary components:

  • a portable, light-weight runtime that implements the Soar decision cycle as a host for a Soar agent
  • a Tcl extension that is used to create and debug Soar agents, and then export them into a format that can be compiled into the runtime component.

Scott Wallace installed TinySoar on a Lego brick when he was at the University of Michigan. He mentioned that the installation went relatively painlessly, except that TinySoar ends up overwriting the brick's I/O software. Thus, note that you must take the batteries out to reset the brick before you can reload anything (e.g., bug fixes to your rules).

There are a few differences between TinySoar and Soar 8.3. One such difference is that it uses the @ (reconsider) preference. Impasses are supported, although chunking is not yet implemented. For more information, please visit the TinySoar Web page.

Previous Versions:

Unix - Soar 7.0.4. This requires about 10 Mb of disk space (including source code) for most of what you will need, although the file that you retrieve via ftp is much smaller, since it is archived and compressed. The Unix version of Soar 7 is compiled with Tcl/Tk, which allows users to easily add commands and displays to their environment. The Soar Development Environment (SDE), which is a set of extensions to GNU Emacs, offers a programming support environment for earlier versions of Soar, and can still be used albeit in a more limited way for more recent versions.

Mac - MacSoartk 7.0.4 + CUSP - this version of MacSoar comes with Tk included, along with a number of extensions added by Ken Hughes to improve the usability of Soar on the Mac. You will require around 10 Mb of disk space to run this version, and will need version 7 of Mac OS (version 7.5 is recommended). Some versions of Soar can also be run under MacUnix.

PC - There was a version of Soar which runs under Windows 95 and Windows NT. It is a port of Soar to WIN32, and includes Tcl 7.6/Tk 4.2. It is available from the University of Michigan, as a zipped binary file. You should read the accompanying notes (available at the same URL) carefully, because there may be a problem with the Win32 binary release of Tcl 7.6/Tk 4.2.

In addition, there is an older, unsupported PC version called WinSoar, based on Soar version 6.1.0, which includes a simple editing and debugging environment and runs under Microsoft Windows 3.x. It is also known as IntelSoar.

Several people have also successfully managed to install the Unix version of Soar on a PC running under the Linux operating system, although some problems have been reported under versions of Linux that have appeared since December 1996.

Version 7.1 of Soar is currently being revised to utilize the latest release of Tcl/Tk (8.0) prior to its official release. The new release of Soar will include the Tcl/Tk Soar Interface (TSI). Currently, Soar 7.1 uses Tcl 7.6 and Tk 4.2, not Tcl 8.0.

Back to Table of Contents


(G5) Where can I obtain a copy of Soar?

You can simply click on the version you want from the Soar Software Page at University of Michigan. In addition, you can directly surf the SourceForge site.

A new version of Visual Soar has been posted. Recent improvements include fixing bugs in drag and drop operations and formatting. Also new are commands to automatically format and redraw a document in the Rule Editor. The latest version of Visual Soar can be found at www.eecs.umich.edu/~soar/projects/visualsoar/

KB Agent is a commercially available version of Soar for developing intelligent agents in enterprise environments. It is based on the public version, but the code has been optimized, updated, and reorganized for linking to other programs in Windows 95/NT.

Ralph Morelli, in 1996, created a prototype of a WWW client/server model where Soar (6.2.4) is the server and a Java applet is the client. This allows one to "talk to Soar" via Netscape or some other Web browser.

There is a Lisp based version of Soar 6 that Dr. Jans Aasman (ja@franz.com) built a while back. You should contact him for details.

There is a partially completed (as of Apr.13, 2000) Java based version by Sherief (shario@usa.net). Its source code and more details used to be available at www.geocities.com/sharios/soar.htm.

Back to Table of Contents


(G6) Who uses Soar for what?

Soar is used by AI researchers to construct integrated intelligent agents, and by cognitive scientists for cognitive modeling.

The Soar research community is distributed across a number of sites throughout the world. A brief overview of each of these is given below, in alphabetical order.

Brigham Young University (BYU)

NL-Soar is being actively developed at BYU Soar Research Group. Deryle Lonsdale (lonz@byu.edu) is the point of contact.

Carnegie Mellon University

Development of models for quantitatively predicting human performance, including GOMS, and more complex, forward-looking and sophisticated models have been built using Soar. For more information contact Bonnie John (Bonnie.John@cs.cmu.edu).

ExpLore Reasoning Systems, Inc.

As well as its academic usage, Soar has been used by ExpLore Reasoning Systems, Inc. in Virginia, USA. A commercial version of Soar, called KB Agent, was developed as a tool for modeling and implementing business expertise.

Information Sciences Institute (ISI) and Institute for Creative Technologies (ICT), University of Southern California

Soar projects cover five main areas of research: development of automated agents for simulated environments (in collaboration with UMich); learning (including explanation-based learning); planning; implementation technology (e.g., production system match algorithms); and virtual environments for training. For more information contact Jonathan Gratch (gratch@ict.usc.edu), and Randall Hill (hill@isi.edu).

Pace University

The Robotics Lab at Pace University focuses on building and testing a robot control architecture using the Soar cognitive architecture as its basis and the DARPA Image Understanding Environment to process visual data.

Pennsylvania State University

The Soar work at the Pennsylvania State University involves using Soar models as a way to test theories of learning, creating a high level language for modeling, and improving human-computer interaction. Other projects include the development of the Psychological Soar Tutorial and the Soar FAQ!. For more information, please contact Frank Ritter (frank.ritter@psu.edu).

Soar Technologies

Bob Marinier (rmarinie@eecs.umich.edu) said that Soar Tech utilizes advanced artificial intelligence that is grounded in scientific principles of human-system interaction and implemented through sound software engineering. The work of Soar Technology, Inc. has been primarily focused on projects concerning TacAir-Soar. Soar Tech develops intelligent autonomous agent software for modeling and simulation, command and control, information visualization, robotics, and intelligence analysis, for the U.S. Army, Navy, Air Force, DARPA, JFCOM, DMSO, and the intelligence community. Soar Tech has also developed useful tools for programming in Soar, such as SDB, the Soar Debugger.

University of Hertfordshire and University College London Interaction Center (UCLIC)

The Soar work at the University of Hertfordshire includes modeling aspects of human-computer interaction, particularly the use of eye movements during the exploratory search of menus. Richard Young used to work at the University of Hertfordshire, but has since moved to the UCL Interaction Center (UCLIC). For more information contact Richard Young (r.m.young@acm.org).

University of Leiden

Ernest Bovenkamp (E.G.P.Bovenkamp@lumc.nl) has been using Soar as an agent architecture for parsing and understanding medical images.

University of Michigan

The Soar work at the University of Michigan has four basic research thrusts.

In addition, the application of Soar to computer games is also researched.

Perhaps the largest success for Soar has been flying simulated aircraft in a hostile environment. Jones et al. (1999) report how Soar flew all of the US aircraft in the 48-hour STOW'97 exercise. The simulated pilots talked with each other and with ground control, and carried out 95% of the missions successfully.

For more information, please contact John Laird (laird@umich.edu).

University of Michigan/Psychology

NL-Soar work at the University of Michigan was done by Rick Lewis (formerly at Ohio State), who focused on modeling real-time human sentence processing, research on word sense disambiguation, and experimental tests of NL-Soar as a psycholinguistic theory. Rick has since moved to ACT-R for his work.

Deryle Lonsdale at BYU continues to work on NL-Soar.

For more information, please contact Rick Lewis (rickl@umich.edu)

University of Portsmouth

The Intelligent Agent Group at the University of Portsmouth is currently involved in a range of Soar related activities, particularly: (a) Soar agents for intelligent control in synthetic environments, (b) Teamwork/C2 structures within groups of Soar agents, (c) off-line knowledge extraction from legacy production sets, and (d) Soar development tools. For further information, contact Tony Kalus (tony.kalus@port.ac.uk).

Back to Table of Contents


(G7) How can I learn Soar?

Probably the best way to learn Soar is to actually visit a site where people are actively using Soar, and stay for as long as you can manage (months rather than days).

In order to help people, however, there are manuals and tutorials available. Before studying them, you should first visit Soar's getting started page.

The manuals and tutorials were developed for anyone interested in learning Soar, and are based on Soar 8. They are used in classes to teach Soar (e.g., at the University of Michigan, where they were developed, and at other universities).

Another tutorial was developed mainly with psychologists in mind. The latest version is based on Soar 8. The Web version of this tutorial was developed by Frank Ritter, Richard Young, and Gary Jones. A PowerPoint presentation on the Psychological Soar Tutorial, which worked with Soar 8, can be found at acs.ist.psu.edu/papers/pst14/bamberg-soar.ppt.

There is no textbook, as such, on how to program in Soar, although John Rieman has written a set of notes entitled An Introduction to Soar Programming (gzipped postscript format). Even though the notes are based on version 6 of Soar (NNPSCM), they provide a useful bare-bones introduction to Soar as a programming language.

In addition, the Soar Coloring Book provides user-friendly instructions with hands-on examples and exercises. Please visit the Soar Coloring Book's Web page.

Also, the Soar Dogma will help you get a feel for how to program Soar (you can find the link in section G9).

Andrew Nuxoll wrote: The Soar Dogma contains a collection of Soar wisdom gathered during a series of conversations between John Laird and myself as I mounted the Soar learning curve over the course of my first year as a graduate student at the University of Michigan. My hope was that by writing these guidelines down I might ease the curve for future Soar users. To get the most value from this document, I recommend you read it once at the beginning of your Soar experience and then read it again once you've started using Soar in earnest.

From version 7 onwards, Soar is closely linked to Tcl/Tk. If you wish to get a copy of a Tcl tutorial computer-aided instruction package, you could start by looking at Clif Flynt's home page. There is also a set of notes on experiences with using Tcl/Tk to construct external environments, written by Richard Young. These may be useful to anyone who is heading down this line, since they highlight some of the good and bad points of Tcl/Tk.

Back to Table of Contents


(G8) Is Soar the right tool for me?

For building AI systems: Soar's strengths are in integrating knowledge, planning, reaction, search and learning within a very efficient architecture. Example tasks include production scheduling, diagnosis, natural language understanding, learning by instruction, robotic control, and flying simulated aircraft.

If all you want to do is create a small production rule system, then Soar may not be right for you. Also, if you have only limited time to spend developing the system, then Soar is probably not the best choice. It currently appears to require more effort to learn Soar, and more practice before you become proficient, than is needed with other, simpler systems such as Jess.

There are, however, a number of basic capabilities that Soar provides as standard. If you need to use these, then Soar may not only be just what you want, but may also be the only system available:

  • Learning and its integration with problem solving
  • Interruptibility as a core aspect of behaviour
  • Large production rule systems
  • Parallel reasoning
  • A knowledge description and design approach based on problem spaces

For cognitive modeling: Soar's strengths are in modeling deliberate cognitive human behavior, at time scales greater than 50 ms. Example tasks that have been explored include human computer interaction tasks, typing, arithmetic, video game playing, natural language understanding, concept acquisition, learning by instruction, and verbal reasoning. Soar has also been used for modeling learning in many of these tasks; however, learning adds significant complexity to the structuring of the task and is not for the casual user. Although many of these tasks involve interaction with external environments and the Soar community is experimenting with models of interaction, Soar does not yet have a standard model for low-level perception or motor control.

Back to Table of Contents


(G9) Is there any support documentation available for Soar?

As mentioned in the previous section, there are manuals, tutorials, and other documents available for Soar users:

  1. Soar 8 Users Manual.
  2. Online Soar kernel and Tcl interface Doxygen documentation.
  3. The Soar Tutorial
  4. Soar Dogma

These documents provide a great deal of useful information about programming in Soar, particularly version 8.

The Soar 6 User Manual is still available for browsing on the Web.

Back to Table of Contents


(G10) How can I find out what bugs are outstanding in Soar?

To find out which bugs are outstanding in Soar, visit the Soar Bugzilla: winter.eecs.umich.edu/soar-bugzilla. You can also use this system to report any bugs you find in Soar.

Back to Table of Contents


(G11) Other links and awards in Soar

Links2Go

This page has won an award, but more importantly, there is another list of Soar resources assembled by Links2Go.

Links2Go
Soar

Congratulations! Your page 
(http://www.ccc.nottingham.ac.uk/pub/soar/nottingham/soar-faq.html) has been 
selected to receive a Links2Go Key Resource award in the Soar topic!

The Links2Go Key Resource award is both exclusive and objective. Fewer than 
one page in one thousand will ever be selected for inclusion. Further, unlike 
most awards that rely on the subjective opinion of "experts," many of whom 
have only looked at tens or hundreds of thousands of pages in bestowing their
awards, the Links2Go Key Resource award is completely objective and is based
on an analysis of millions of web pages. During the course of our analysis, we
identify which links are most representative of each of the thousands of topics
in Links2Go, based on how actual page authors, like yourself, index and
organize links on their pages. In fact, the Key Resource award is so
exclusive, even we don't qualify for it (yet)!

Please visit:
www.links2go.com/award/Soar. 

[The site is no longer active, but we did win it!]

LA Times - Using Interactive Play to Explore How We Think

Game AI programs have become so sophisticated in recent years that a few university researchers have taken an interest in the field, including John E. Laird, a professor of electrical engineering and computer science at the University of Michigan.

Back to Table of Contents


(G12) How does Soar currently stand as a psychology theory?

Sadly, there is not a cut and dried answer to this question. Answering this fully will require you to figure out what you expect from a psychology theory and then evaluate Soar on those criteria. If you expect a theory to predict that humans are intelligent, and that they have been and can be shown to learn in several domains, it is nearly the only game in town. If you require limited short term memory directly in the architecture, that's not in Soar yet (try ACT-R).

With this in mind, there are numerous resources for finding out more. The first port of call should be Newell's 1990 book, Unified Theories of Cognition. This may satisfy you; it makes the most coherent case for Soar, although it is slowly becoming out of date with respect to the implementation. There are also two large books, The Soar Papers (Vols. 1 and 2), that provide numerous examples of Soar's use. The examples tend to be biased towards AI, but there are numerous psychology applications among them.

If you go to the ISI paper archive, the CHI, ICCM, and Cognitive Science conference proceedings, or the Soar Workshop proceedings, you will find more up-to-date papers showing what the community is currently working on. You may also find the further pointers in this FAQ and on individuals' web sites quite useful for seeing the current state of play in the area you are interested in.

Richard Young has prepared an annotated bibliography of Soar journal articles and book chapters (but not conference papers) related to psychology that is intended to be complete.

The best cognitive model written in Soar is less clear. There are Soar models of teamwork (Tambe, 1997), procedural learning (Nerb, Ritter, & Krems, 1999), natural language understanding (Lewis, 1996), categorization (Miller & Laird, 1996), syllogistic reasoning (Polk & Newell, 1995), and using computer interfaces (Howes & Young, 1997; Ritter & Bibby, 2001).

There is a book from the National Research Council called "Modeling Human and Organizational Behavior: Applications to Military Simulations" that provides a summary of Soar.

Todd Johnson proposed a list of fundamental cognitive capacities in 1995 that we have started to organize papers around. Each paper (or system) has only been cited once, and the list is far from complete, but the framework is now in place for expanding it. If you have suggestions, please do forward them for inclusion.

Nerb, Krems, & Ritter (1993), later revised as Nerb, Ritter, & Krems (1999), showed some good matches to the shape of the variance in the Power Law taken from 14 subjects and to transfer between abduction problems. The first paper appeared in the Cognitive Science proceedings, the second in the journal Kognitionswissenschaft [German Cognitive Science]. Krems & Nerb (1992) is a monograph of Nerb's thesis, on which this work is based.

Peck & John (1992), later reanalyzed in Ritter & Larkin (1994), describes Browser-Soar, a model of browsing. It was fit to 10 episodes of verbal protocol taken from one subject. The fit is sometimes quite good and allowed a measure of Soar's cycle time to be computed against subjects. It also suggested fairly strongly (because the model was matched to verbal and non-verbal actions) that verbal protocols appear about 1 second after their corresponding working memory elements.

Nelson, G., Lehman, J. F., & John, B. E. (1994) proposed a model that integrated multiple forms of knowledge to begin to match some protocols taken from the NASA space shuttle test director. There was no detailed match.

Aasman, J., & Michon, J. A. (1992) presented a model of driving. While the book chapter does not match data tightly, the later Aasman book (1995) does so very well. The book is not widely available, however.

John, B. E., Vera, A. H., & Newell, A. (1992; 1994) presented a model matched to 10 seconds of a subject learning how to play Mario Brothers. This was available as a CHI conference paper initially.

Chong, R. S., & Laird, J. E. (1997) presented a model that learns how to perform a dual task fairly well. It has not been matched to data very tightly, but it shows a very plausible mechanism. This was a preliminary version of Chong's thesis.

Johnson et al. (1991) presented a model of blood typing. The comparison with verbal protocols was done loosely. This was a very hard task for subjects, and the point was to show that a model could do the task, and that it was not just intuition that allowed users to perform it.

There have been a couple of papers on integrating knowledge (i.e., models) in Soar. Lehman, J. F., Lewis, R. L., & Newell, A. (1991) and Lewis, R. L., Newell, A., & Polk, T. A. (1989) both presented models that integrate submodels. I don't believe that either has been compared with data, but they show how different behaviours can be integrated and note some of the issues that arise.

Lewis et al. (1990) addresses some of the questions discussed here about the state of Soar, but from a 1990s perspective.

Several models in Soar have been created that model the Power Law. These include Sched-Soar (Nerb et al., 1999), physics principle application (Ritter, Jones, & Baxter, 1998), and Seibel-Soar and R1-Soar (Newell, 1990). Although they use different mechanisms, these models explain the Power Law as arising out of hierarchical learning (i.e., learning parts of the environment or internal goal structure): low-level actions that are very common, and thus useful, are learned first, and with further practice larger patterns are learned, but these occur less often. The Soar models also predict that some of the noise in behaviour on individual trials is meaningful and measurable, and they predict specific amounts of transfer between problems.
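A deterministic toy sketch of this hierarchical mechanism (illustrative only, not any of the published models) is given below: a 16-step task is covered greedily by learned chunks, and after each trial adjacent units are combined into larger chunks, so small, common patterns are learned first and larger patterns later:

```python
# Toy sketch of hierarchical chunking (hypothetical code, not a Soar model):
# performing a task costs one "decision cycle" per retrieved unit, and each
# trial chunks adjacent units together, so practice reduces cycle counts.
def run_trials(n_primitives=16, n_trials=5):
    task = list(range(n_primitives))
    chunks = set()                       # learned subsequences, as tuples
    times = []
    for _ in range(n_trials):
        units, i = [], 0
        while i < len(task):             # cover the task greedily with
            best = 1                     # the largest learned chunks
            for size in (16, 8, 4, 2):
                if i + size <= len(task) and tuple(task[i:i + size]) in chunks:
                    best = size
                    break
            units.append(tuple(task[i:i + best]))
            i += best
        times.append(len(units))         # one decision cycle per unit used
        for a, b in zip(units, units[1:]):
            chunks.add(a + b)            # combine adjacent units bottom-up
    return times

print(run_trials())  # decision cycles per trial: [16, 8, 4, 2, 1]
```

This toy halves the cycle count on each trial, which is faster than a power law; in the published models, probabilistic chunk formation and variation across tasks flatten the averaged curve toward the Power Law described above.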

References

Aasman, J., & Michon, J. A. (1992). Multitasking in driving. In J. A. Michon & A. Akyürek (Eds.), Soar: A cognitive architecture in perspective. Dordrecht, The Netherlands: Kluwer.

Aasman, J. (1995). Modelling driver behaviour in Soar. Leidschendam, The Netherlands: KPN Research.

Chong, R. S., & Laird, J. E. (1997). Identifying dual-task executive process knowledge using EPIC-Soar. In Proceedings of the 19th Annual Conference of the Cognitive Science Society. 107-112. Mahwah, NJ: Lawrence Erlbaum.

John, B. E., Vera, A. H., & Newell, A. (1994). Towards real-time GOMS: A model of expert behavior in a highly interactive task. Behavior and Information Technology, 13, 255-267.

John, B. E., & Vera, A. H. (1992). A GOMS analysis of a graphic, interactive task. In CHI'92 Proceedings of the Conference on Human Factors and Computing Systems (SIGCHI). 251-258. New York, NY: ACM Press.

Johnson, K. A., Johnson, T. R., Smith, J. W. J., De Jong, M., Fischer, O., Amra, N. K., & Bayazitoglu, A. (1991). RedSoar: A system for red blood cell antibody identification. In Fifteenth Annual Symposium on Computer Applications in Medical Care. 664-668. Washington: McGraw Hill.

Krems, J., & Nerb, J. (1992). Kompetenzerwerb beim Lösen von Planungsproblemen: Experimentelle Befunde und ein SOAR-Modell [Skill acquisition in solving scheduling problems: Experimental results and a Soar model] (FORWISS-Report FR-1992-001). Munich, Germany: FORWISS.

Lehman, J. F., Lewis, R. L., & Newell, A. (1991). Integrating knowledge sources in language comprehension. In Thirteenth Annual Conference of the Cognitive Science Society. 461-466.

Lewis, R. L., Newell, A., & Polk, T. A. (1989). Toward a Soar theory of taking instructions for immediate reasoning tasks. In Annual Conference of the Cognitive Science Society. 514-521. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Lewis, R. L., Huffman, S. B., John, B. E., Laird, J. E., Lehman, J. F., Newell, A., Rosenbloom, P. S., Simon, T., & Tessler, S. G. (1990). Soar as a Unified Theory of Cognition: Spring 1990. In Twelfth Annual Conference of the Cognitive Science Society. 1035-1042. Cambridge, MA.

Nelson, G., Lehman, J. F., & John, B. E. (1994). Integrating cognitive capabilities in a real-time task. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society.

Nerb, J., Krems, J., & Ritter, F. E. (1993). Rule learning and the power law: A computational model and empirical results. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder, Colorado. 765-770. Hillsdale, NJ: LEA.

This was revised and extended and published as:

Nerb, J., Ritter, F. E., & Krems, J. (1999). Knowledge level learning and the power law: A Soar model of skill acquisition in scheduling. Kognitionswissenschaft [Journal of the German Cognitive Science Society], special issue on cognitive modelling and cognitive architectures, D. Wallach & H. A. Simon (Eds.). 20-29.

Using a process model of skill acquisition allowed us to examine the microstructure of subjects' performance of a scheduling task. The model, implemented in the Soar architecture, fits many qualitative (e.g., learning rate) and quantitative (e.g., solution time) effects found in previously collected data. The model's predictions were tested with data from a new study in which the identical task was given to the model and to 14 subjects. Again a general fit of the model was found, with the restrictions that the task is easier for the model than for subjects and that its performance improves more quickly. The episodic memory chunks learned while scheduling show how the acquisition of general rules can be performed without resort to explicit declarative rule generation. The model also provides an explanation of the noise typically found when fitting a set of data to the Power Law: it is the result of chunking over actual knowledge rather than "average" knowledge. Only when the data are averaged (over subjects here) does the smooth Power Law appear.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Peck, V. A., & John, B. E. (1992). Browser-Soar: A computational model of a highly interactive task. In Proceedings of the CHI '92 Conference on Human Factors in Computing Systems. 165-172. New York, NY: ACM.

Ritter, F. E., & Larkin, J. H. (1994). Using process models to summarize sequences of human actions. Human-Computer Interaction, 9(3), 345-383.

Tambe, M. (1997). Towards flexible teamwork. Journal of Artificial Intelligence Research, 7, 83-124.

Back to Table of Contents


(G13) What tools are available for Soar?

There are a number of tools and projects that can help you develop your own Soar models. See sitemaker.umich.edu/soar/soar_tools___projects.

SCA and SoarDoc:

Ronald Chong and Robert Wray at Soar Tech have been using SCA to fit some quantitative learning data and make predictions in a real-time performance/learning task (part of the AFRL AMBR program). They presented a paper at the International Conference on Cognitive Modeling on this work:

Wray, R. E. & Chong, R. S. (2003). Quantitative Explorations of Category Learning using Symbolic Concept Acquisition. In Proceedings of the 5th International Conference on Cognitive Modeling. Bamberg, Germany. April.

There are some minor modifications to SCA to update it for Soar 8. They have also (monotonically) extended SCA to map novel feature values to trained values (introducing novel values is often done in psychological "transfer" experiments). The updated source code for SCA, including a description of the transfer task extensions, is available at: www.speakeasy.org/~wrayre/soar/sca/html/index.html

Wray wrote:

"Even if you are not currently interested in SCA, I encourage you to visit this URL. We documented SCA using a new tool, SoarDoc, a Doxygen-like tool that automatically generates HTML documentation for Soar systems. It includes a component that creates graphical state descriptions and works with both Soar 7 and Soar 8 source files. SoarDoc was developed by Dave Ray at Soar Technology."

Visual Soar: Visual Soar is a development environment for Soar programming. Documentation and downloads can be found at www.eecs.umich.edu/~soar/projects/visualsoar/.

Herbal: Herbal is a high-level behavior representation language that supports creating cognitive models for the Soar architecture. For more information, please visit acs.ist.psu.edu/projects/Herbal/.

Back to Table of Contents


(G14) Can Soar be embedded?

Soar 8.6.0 now (as of May 2005) addresses this problem most directly.

Paul Benjamin wrote:

"A student at Pace has embedded Soar within Java. There is a package called feather from NIST that implements a TclInterpreter class in Java, and the student extended that class to a SoarSession class. He used it to write a poker-playing Soar program. I also used it to connect Soar to our robot. With minor changes, it can be used to connect any Java system to Soar".

His source, and all the package info, is at www.codeblitz.com/poker.html.

Another option is to take the C library header files and run SWIG over them (www.swig.org). SWIG can generate wrapper code to interface Java, Perl, Tcl, Python, and other languages to any C/C++ library.

There are several ways that Soar has been tied to other pieces of software. A now out of date overview is available from:

Ritter, F. E. & Major, N. P. (1995). Useful mechanisms for developing simulations for cognitive models. AISB Quarterly, 91(Spring), 7-18.

A list of ways to fit Soar to an application, in order of complexity, is:

You can tie Soar to itself through multiple agents, that is, by having multiple Soar agents talk with each other.

[I believe this is originally by Tom Head, around Oct '96.]

Reading and writing text from a file can be used for communication. However, using this mechanism for inter-agent communication would be pretty slow and you'd have to be careful to use semaphores to avoid deadlocks. With Soar 7, I see a natural progression from Tcl to C in the development of inter-agent communication.

  1. Write inter-agent communication in Tcl. This is possible with a new RHS function (called "tcl") that can execute a Tcl script. The script can do something as simple as send a message to a simple simulator (which can also be written in Tcl). The simulator can then send the message to the desired recipient(s). You could also do things such as add-wme on the RHS, but I'd advise against it, since it's harder to see what's going on and more error prone.
  2. Move the simulator into C code. To speed up the simulated world in which the agents interact, recode the simulator in C. Affecting the simulator can be accomplished by adding a few new Tcl commands. The agents would be largely unchanged and the system would simply run faster.
  3. Move communication to C. This is done by writing Soar I/O functions as documented in section 6.2 of the Soar Users Manual. This is the fastest method.
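The file-based option is easy to prototype. Below is a minimal Python sketch (a hypothetical design, not part of any Soar release) of a shared-file mailbox in which every write goes through an atomic rename, one simple way to avoid the locking problems mentioned above:

```python
import json
import os
import tempfile

# Minimal sketch of file-based inter-agent communication (hypothetical
# design): agents append messages to a JSON mailbox file, and each write
# goes through an atomic rename so a reader never sees a half-written file.
def send(mailbox, sender, text):
    messages = read_all(mailbox)
    messages.append({"from": sender, "text": text})
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(mailbox) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(messages, f)
    os.replace(tmp, mailbox)        # atomic on POSIX and Windows

def read_all(mailbox):
    if not os.path.exists(mailbox):
        return []                   # no messages yet
    with open(mailbox) as f:
        return json.load(f)
```

Each agent would call send from an output function and poll read_all on input; the atomic rename plays the role of the semaphores mentioned above.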

A generic AI engine for video games

Question, Date: April 2003

I would like to know whether Soar is always a stand-alone product or whether it can be incorporated into another system. I remember John Laird and Mike van Lent presented a paper at the GDC on why Soar should be used as a generic AI engine for video games. I imagine that game developers would need to somehow embed Soar in their games the way they do third-party graphics engines. So is this possible, or would game and other commercial developers need to use the sockets interface that I believe the Quake bots use?

Answer:

Bob Wray wrote:

Yes, it's completely possible to run Soar within an application. We are using Unreal Tournament for an application and have a version in which Soar is compiled into the game. I can run at least five agents (I haven't tried more than that) within the game process and maintain game framerate on my 1 GHz P4 laptop.

This is not at all a plug-and-play solution, but there is an API that makes it relatively straightforward to embed Soar, especially if you are already comfortable with interfacing software.

One limitation of embedding Soar agents is that you generally lose the user interface (UI) for Soar (the maintained UI is implemented in Tcl). We handle this in our application by using a socket-based connection to Soar/Tcl for development and the embedded version for performance situations.

Back to Table of Contents


(G15) Resources for teaching Soar

Soar has been taught as a component of non-programming classes on cognitive architectures at several universities, including at least CMU, Michigan, Stirling in Scotland, Pace, Penn State, Portsmouth, and universities in Japan.

Teaching students to program in Soar to any great depth is difficult to do in two weeks, but the Michigan group, Ritter, and Young have all offered hands-on one-day tutorials, so something can clearly be covered in a day (about 6 hours of instruction). The same material takes at least 6 hours of instruction in a university class, or two to three weeks of a course, more if you assign homework. In class, students get more out of it because the exercises are done in greater detail. Herbal has also been used to teach Soar at a conference and in the classroom.

More resources can be found on Ritter's class web site; please contact Frank Ritter.

Back to Table of Contents


(G16) Who has worked on the Soar FAQ?

The current Soar FAQ was initiated and shaped with invaluable help from the following former colleagues:

Kevin Tor: tor@cse.psu.edu
Alexander B. Wood: abwood@unity.ncsu.edu
Gordon D. Baxter: gbaxter@psych.york.ac.uk
Marios Avaramides: avaamides@psych.ucsb.edu

For the current version of the Soar FAQ, Bob Marinier (rmarinie@eecs.umich.edu) provided more than 90 comments and suggestions.

Back to Table of Contents


Section 2: Technological/Terminology Questions


(T1) What is search control?

Search control is knowledge that guides the search process by comparing proposed alternatives. In Soar, search control is encoded in production rules that create preferences for operators.

Bob Marinier (rmarinie@eecs.umich.edu) wrote:

"Search control rules are rules that prefer the selection of one operator over another. Their purpose is to avoid useless operators and direct the search toward the desired state. Theoretically, you could encode rules that select the correct operator for each state. However, you would have had to solve the problem yourself already to come up with those rules. Our goal is to have the program solve the problem, using only knowledge available from the problem statement and possibly some general knowledge about problem solving. Therefore, search control will be restricted to general problem-solving heuristics."
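The idea can be sketched in a few lines of Python (hypothetical names; a toy illustration, not Soar's actual preference semantics): proposal rules nominate operators, search-control preferences prune them, and an unresolved tie corresponds to an impasse:

```python
# Toy sketch of search control (hypothetical names, not Soar's decision
# procedure): "better" preferences prune the set of proposed operators.
def decide(proposed, better_than):
    """proposed: set of operator names; better_than: (better, worse) pairs."""
    live = set(proposed)
    for better, worse in better_than:
        if better in live:
            live.discard(worse)      # search control rejects 'worse'
    if len(live) == 1:
        return live.pop()            # a unique choice: select it
    return None                      # a tie: in Soar, an impasse and subgoal

# A heuristic such as "prefer progress toward the goal" is encoded as
# preferences rather than hard-wiring the whole solution path:
print(decide({"move-up", "undo-last"}, [("move-up", "undo-last")]))  # move-up
print(decide({"a", "b", "c"}, [("a", "b")]))  # None: still a tie, a vs c
```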

Back to Table of Contents


(T2) What is the data-chunking problem?

Data chunking is the creation of chunks that allow for either the recognition or retrieval of data that is currently in working memory. Chunking is usually thought of as a method for compiling knowledge or for speed-up learning, not for moving data from working memory into long-term memory. Data chunking is a technique in which chunking does create such recognition or retrieval productions, and thus allows Soar to perform knowledge-level learning.

Simplistically, then, data chunking is the creation of chunks of the form a => b, i.e., when 'a' appears on the state, the data for 'b' does too. An example is "The capital of France? => Paris".

Bob Marinier wrote:

The data-chunking problem is discussed in section 6.4 of Newell's "Unified Theories of Cognition" (1990).

"This is what we call the data-chunking problem - that chunks that arise from deliberate attempts to learn an arbitrarily given data item will have the item itself somewhere in the conditions, making the chunk useless to retrieve the item. Normally, chunking acquires procedures to produce an effect. When what is given is a data item, not a bit of behavior, the attempt to convert it to procedural learning seems stymied". (p 327)

To solve the data-chunking problem, Newell (1990) suggests the following. Note, GID is an example object that Soar is trying to learn to recognize:

"The key idea is to separate generating an object to be recalled from testing it. The desired object here is GID. We want the process that generates GID not to know that it is the response that is to be made - not to be the process that tests for the item. Thus, to achieve the required learning, Soar should create for itself a task to be solved by generate and test. It is alright for the test to contain the result, namely, GID. The test is to find an instance of GID, so the test not only can have GID in some condition, it should have it. As long as the generator doesn't produce GID by consulting the given object (GID), then GID will not occur in the conditions of the chunk that will be built". (p 331-332)

For more information, see question T3 and read section 6.4 of Unified Theories of Cognition (Newell, 1990).
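As a toy illustration of the separation Newell describes (hypothetical code, not a Soar implementation): the generator enumerates candidates without consulting the given item, and only the test compares against it, so a rule learned from the generator's trace has only the cue in its conditions:

```python
# Toy sketch of generate-and-test data chunking (hypothetical names):
# the generator never looks at `given`; only the test does, so the
# learned rule's condition is the cue alone, not the answer itself.
def learn_data_chunk(cue, given, generate):
    for candidate in generate():      # generator: blind enumeration
        if candidate == given:        # test: may (and must) mention the item
            return (cue, candidate)   # learned rule: cue => candidate

rule = learn_data_chunk("capital-of-France", "Paris",
                        lambda: iter(["London", "Paris", "Berlin"]))
print(rule)  # ('capital-of-France', 'Paris')
```

Because "Paris" reached the result via the generator rather than being copied from the given item, a chunk built over this trace would fire from the cue alone, which is exactly the recall behavior that data chunking is after.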

Back to Table of Contents


(T3) What is the data-chunking generation problem?

Whenever you subgoal to create a data chunk, you have to generate everything in the subgoal that might be learned, and then use search control to make sure that the chunks you build are correct. Doing this without touching the supergoal means that the chunk that is learned does not depend on the cue. The cue is then used in search control (which is not chunked) to select the object (the response) to return.

Back to Table of Contents


(T4) What do all these abbreviations and acronyms stand for?

  • CSP: Constraint Satisfaction Problem
  • EBG: Explanation-Based Generalisation
  • EBL: Explanation-Based Learning
  • GOMS: Goals, Operators, Methods, and Selection rules
  • HI-Soar: Highly Interactive Soar
  • ILP: Inductive Logic Programming
  • NNPSCM: New New Problem Space Computational Model
  • NTD: NASA Test Director
  • PEACTIDM: Perceive, Encode, Attend, Comprehend, Task, Intend, Decode, Move
  • SCA: Symbolic Concept Acquisition
  • PE: Persistent elaboration
  • IE: I-supported elaboration

Back to Table of Contents


(T5) What is this NNPSCM thing anyway?

Really, this is a number of questions rolled into one:

  1. What is the PSCM?
  2. What is the NNPSCM?
  3. What are the differences between the two?

What is the PSCM?

The Problem Space Computational Model (PSCM) revolves around Soar's commitment to using problem spaces as the model for all symbolic goal-oriented computation. The PSCM is based on the primitive acts that are performed using problem spaces to achieve a goal. These primitive acts are based on the fundamental object types within Soar, i.e., goals, problem spaces, states, and operators. The functions they perform are shown below:

Goals

  1. Propose a goal.
  2. Compare goals.
  3. Select a goal.
  4. Refine the current information about the current goal.
  5. Terminate a goal.

Problem Spaces

  6. Propose a problem space for the goal.
  7. Compare problem spaces for the goal.
  8. Select a problem space for the goal.
  9. Refine the information available about the current problem space.

States

  10. Propose an initial state.
  11. Compare possible initial states.
  12. Select an initial state.
  13. Refine the information available about the current state.

Operators

  14. Propose an operator.
  15. Compare possible operators.
  16. Select an operator.
  17. Refine the information available about the current operator.
  18. Apply the selected operator to the current state.

More details about exactly what these functions do can be found in the current User Manual.

What is the NNPSCM?

The New New Problem Space Computational Model (NNPSCM) addresses some of the issues that made the implementation of the PSCM run relatively slowly. It reformulates parts of the PSCM without actually removing them, and hence changes the way in which models are implemented, although we are not aware of a model that has been fundamentally influenced by this change. Starting with version 7.0.0 of Soar, all implementation is performed using the NNPSCM; in later releases of version 6.2, you could choose which version you required (NNPSCM or non-NNPSCM) when you built the executable image for Soar. The easiest way to illustrate the NNPSCM is to look at the differences between it and the PSCM.

What are the differences between the two?

The NNPSCM and the PSCM can be compared and contrasted in the following ways:

  1. The nature of problem space functions for NNPSCM and PSCM remain essentially the same as those described in Newell, A., Yost, G.R., Laird, J.E., Rosenbloom, P.S., & Altmann, E. (1991). Formulating the problem space computational model. In R.F. Rashid (Ed.), Carnegie-Mellon Computer Science: A 25-Year Commemorative (255-293). Reading, MA: ACM-Press (Addison-Wesley).
  2. The goal state from the PSCM simply becomes another state, rather than being treated as a separate, special state.
  3. Selecting between problem spaces in the NNPSCM does not require any decision-making process. The problem space is simply formulated as an attribute of the state. It can be assigned in a lightweight way using an elaboration rule, or deliberately by an operator application. Currently, this is left up to the programmer or modeler.
  4. Models implemented using the NNPSCM are generally faster than their PSCM equivalents, because fewer decision cycles (DCs) are required: there is no need to decide between problem spaces and, in later versions, states.
  5. Using the NNPSCM is presumed to allow better inter-domain and inter-problem-space transfer of learning.
  6. The use of the NNPSCM should help in the resolution and understanding of the issues involved in external interaction.

The differences may become more evident if we look at code examples (written using Soar version 6.2.5) for the farmer, wolf, goat, and cabbage problem that comes as a demo program in the Soar distribution.

PSCM Code

(sp farmer*propose*operator*move-with
    (goal <g> ^problem-space <p>
              ^state <s>)
    (<p> ^name farmer)
    (<s> ^holds <h1> <h2>)
    (<h1> ^farmer <f> ^at <i>)
    (<h2> ^<< wolf goat cabbage >> <value>
          ^at <i>)
    (<i> ^opposite-of <j>)
    -->
    (<g> ^operator <o>)
    (<o> ^name move-with
         ^object <value>
         ^from <i>
         ^to <j>))
 

NNPSCM Code

(sp farmer*propose*move-with
    (state <s> ^problem-space <p>) ; goal <g> has disappeared
    (<p> ^name farmer)             ; <p> is now an attribute of the state
    (<s> ^holds <h1> <h2>)
    (<h1> ^farmer <f> ^at <i>)
    (<h2> ^<< wolf goat cabbage >> <value>
          ^at <i>)
    (<i> ^opposite-of <j>)
    -->
    (<s> ^operator <o>)
    (<o> ^name move-with
         ^object <value>
         ^from <i>
         ^to <j>))
 

On the face of it, there do not appear to be many differences, but when you look at the output traces, the contrast becomes more apparent: the NNPSCM trace contains only state and operator selections, without intervening problem space and initial state decisions, and so runs in fewer decision cycles:

PSCM Trace

0: ==>G: G1
1:    P: P1 (farmer)
2:    S: S1
3:    ==>G: G3 (operator tie)
4:       P: P2 (selection)
5:       S: S2
6:       O: O8 (evaluate-object O1 (move-alone))
7:       ==>G: G4 (operator no-change)
8:          P: P1 (farmer)
9:          S: S3
10:          O: C2 (move-alone)
 

NNPSCM Trace

0:==>S: S1
1:   ==>S: S2 (operator tie)
2:      O: O8 (evaluate-object O1 (move-alone))
3:      ==>S: S3 (operator no-change)
4:         O: C2 (move-alone)
 

Back to Table of Contents


(T6) How does Soar 7 differ from Soar 6?

(T6-1) Question from Monica Weiland [monica_weiland@chiinc.com]

Basically, the Soar kernel architecture in Soar 7 is the same as in Soar 6, with additional bug fixes and changes to the timers to make them more accurate and informative when the 'stats' command is issued. There were also changes to using multiple agents, and the NNPSCM is now the only PSCM model supported (in Soar 6, both the NNPSCM (no explicit problem space slot) and the PSCM (explicit problem space slot) were supported). The advantages of Soar 7 include all the user extensions that can be made using Tcl for the user interface.

The Soar distribution includes a tool, called convert, for converting Soar 6 format productions to Soar 7. It was written in C by Doug Pearson. I don't know if it is included in the Mac distribution, but I would assume it is. It does a good job of converting productions; most applications run after being processed by this routine. If Monica has the SimTime productions, she should be able to convert them to Soar 7. If she has trouble with the conversions, she can contact me and I'll help her figure them out.

Answer from: "Karen J. Coulter" [kcoulter@eecs.umich.edu]
Date: June 9, 1997.

There are also some notes available describing explicit changes to the command set, which were presented as part of a talk at the 15th Soar Workshop. These notes are out of date, but not so out of date as to be useless; they were distributed as commands1.ps.Z and commands2.ps.Z.

(T6-2) Question from Tony Hirst

In general, what does just using a ^problem-space.name flag buy you, apart from providing a conventional label within which to enable operator proposal? Is there a strong or formal way in which the problem-space elaboration was intended to be used? For example, we might encapsulate the state required within the problem space by copying onto ^problem-space <p> only those state <s> elaborations necessary for use in that problem space (though this incurs an overhead that the rather more general:

<s> ^problem-space (<p> ^name whatever)
-->
<p> ^pstate <s>
 

avoids); prodns inside the problem space may then make changes to stuff dangling off <p> (or ^problem-space.pstate ) rather than explicitly to <s> (even though it's the same stuff... the point is that changes are forced to be made ostensibly within the problem space)). Problem specific elaborations should also be made to <p> (<ps>) rather than <s> by explicitly making changes to either ^top-state <ts> or <p> (^problem-space.pstate <ps>), rather than <s>, we are forced to think about and state the context of the state changes we're making.

Finally, is there a convention for naming problem spaces within subgoal spaces? (e.g., if operator.name thing reaches an impasse and forces a subgoal, should the subgoal be elaborated with problem-space.name thing, or an arbitrary name?)

Answer from Randy Jones

My opinion is that the ^problem-space flag is an anachronism that should be discarded, especially for non-trivial Soar programs. The flag originally arose from Newell and Simon's problem-space hypothesis, and the notion that people tend to employ specific sets of methods and goals for specific types of problems. What this "flag-based" representation neglects, however, is the potential for sharing methods and goals across types of problems that we might normally view as being in distinct problem spaces. In TacAir-Soar, for example, we have *many* operators that can apply in a variety of different states, independent of the problem-space flag on that state. In general, (and again in my opinion), operators should be sensitive to patterns of data represented on the "current state", rather than being a slave to a single, discrete, problem-space flag. This allows the use of operators to transfer across problem spaces in useful, and sometimes surprising ways. Under this view, problem spaces "emerge" from patterns of data, rather than being defined by a single flag.

Answer from Richard Lewis, Date: February 10, 1999

While I agree with much of what Randy says, I wouldn't be too quick to discard the use of a problem-space flag. The problem-space flag permits the agent to decide (based on some knowledge) to solve a problem in some particular way, then to change its decision later and attempt a different way, and so on. It is an additional layer of deliberate control that allows the agent to "hold in place" the outcome of some decision and use that decision to guide problem solving behavior over a period of time that extends beyond a single decision cycle. Thus, the agent is not just a slave to whatever immediate associations come to mind.

What I am really advocating is a view that keeps a mix of the data-driven, opportunistic style that Randy describes, along with the ability to exert more control over some extended periods of time. Such a mix may hinge in part on using the problem space flag in ways that we haven't usually done in the past: as search control rather than generator. There's a sense in which this kind of mix can't be discarded as long as it is architecturally possible, the system is learning, and we can't see any clear reasons why the agent in principle can't arrive, via learning, at a point where it behaves in such a way.

Answer from John Laird, Date: February 10, 1999

On the ^problem-space.name issue:

In earlier versions of Soar, the problem space was selected just like the operator, and thus was open to preferences. However, for the reasons Randy mentioned (problem spaces may be more emergent from many properties of the state than just a specific symbol) we abandoned the selection of the problem space. For many tasks, having a problem space symbol might be a good way to discriminate during operator selection. The convention that I've adopted is to copy the name of a super-operator to be the name of the state created below it. This doesn't cover tie impasses or state no-change, but works very well for operator no-changes.

Back to Table of Contents


(T7) What does using a ^problem-space.name flag buy you apart from providing a conventional label within which to enable operator proposal?

The answers to this question, from Randy Jones, Richard Lewis, and John Laird, are given under (T6-2) above.

Back to Table of Contents


Section 3: Programming Questions


(P1) How can I make my life easier when programming in Soar?

There are a number of ways to make your life easier when programming in Soar. Some simple high level considerations are:

  • Use a programming tool, such as Visual Soar or Herbal (explained below)
  • Use the Soar debugger or the TSI
  • Re-use existing code
  • Cut and paste productions and code
  • Work mainly on the top level problem space, using incremental problem space expansion
  • Use the integrated Emacs environment, SDE, or one of the visual editors
  • Turn chunking (the learning mechanism) off
  • Use Tcl/Tk to write simulations for the model to talk to, rather than using external simulations

Tools

Visual Soar is a development environment that helps users create agents for the Soar architecture.

The Tcl/Tk Soar Interface (TSI) is part of Soar 7 and 8.

Herbal is a high-level behavior representation language.

A Brief History of Soar Interfaces

The first Soar interface was probably the command line from OPS5. One of the first Soar graphical interfaces was written by Amy Unruh and ran on TI lisp machines. Brian Milnes also wrote a graphical interface that ran in Common lisp under X windows. Blake Ward probably wrote the first Emacs mode for Soar. Frank Ritter revised this, and then Mike Hucka revised it, and then Frank revised it, and this went on for a while until it went to Mike and stayed at Michigan. While it was with Frank there was a submode for writing TAQL code (Ritter, 1991b). Various reports were included in the Soar Workshop Proceedings. There was a manual (Ritter, Hucka, & McGinnis, 1992). These were fairly widely used systems, maybe 1/2 of the Soar users used them at the time. This mode is still available.

Frank Ritter wrote a graphic user interface for Soar, the Developmental Soar Interface, or DSI, in Common Lisp using Garnet. This was reported in his thesis (Ritter, 1993; Ritter & Larkin, 1994) and at CHI (Ritter, 1991a). It was used to generate the initial polygons for the Soar video (Lehman, Newell, Newell, Altmann, Ritter, & McGinnis, 1994). This interface probably had about 10 users at the most, and was abandoned when Soar was implemented in C.

The Tcl/Tk Soar Interface (TSI) is a successor semi-graphical interface started around 1996 taking advantage of including Tcl/Tk with Soar (Ritter, Jones, & Baxter, 1998). Numerous people have now contributed to it. It is currently being developed at Michigan.

In 1995, a rationalised list of command aliases was proposed for the command line (Nichols & Ritter, 1995). These were used in Soar 7 and, I believe, in Soar 8. An unpublished study supported the finding that even novices could profit from aliases.

New interfaces to Soar include a revised version of the TSI (version 3, 6/00), which includes viewers for the working memory tree, production matches, and chunks (contact Karen Coulter (kcoulter@eecs.umich.edu) and/or Mazin Assanie (mazina@eecs.umich.edu)). A Soar debugger to provide greater control over breakpoints, etc., is also in the works (contact Glenn Taylor, glenn@soartech.com). Visual Soar is an environment in Java that helps ensure all attribute names in a Soar program are correct, and supports cutting, pasting, and reusing attribute names and sets of names. Laird, Jones, and Bauman at Michigan are working on this effort. A related effort by Tony Hirst is ongoing at the Open University (a.j.hirst@open.ac.uk).

References

Lehman, J. F., Newell, A., Newell, P., Altmann, E., Ritter, F., & McGinnis, T. (1994). The Soar Video. 11 min. video, The Soar Group, Carnegie-Mellon University.

Nichols, S., & Ritter, F. E. (1995). Theoretically motivated tool for automatically generating command aliases. In CHI '95, Human Factors in Computer Systems. 393-400. New York, NY: ACM.

Ritter, F. E. (1991a). How the Soar interface uses Garnet. Video (2 min.) shown at the Garnet user interface development environment special interest subgroup meeting at the 1991 Human Factors in Computing Systems Conference (CHI'91).

Ritter, F. E. (1991b). TAQL-mode Manual. The Soar group.

Ritter, F. E. (1993). TBPA: A methodology and software environment for testing process models' sequential predictions with protocols (Technical Report No. CMU-CS-93-101). School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Ritter, F. E., & Larkin, J. H. (1994). Using process models to summarize sequences of human actions. Human-Computer Interaction, 9(3), 345-383.

Ritter, F. E., Hucka, M., & McGinnis, T. F. (1992). Soar-mode Manual (Tech. No. CMU-CS-92-205). School of Computer Science, Carnegie-Mellon University.

Ritter, F. E., Jones, R. M., & Baxter, G. D. (1998). Reusable models and graphical interfaces: Realising the potential of a unified theory of cognition. In U. Schmid, J. Krems, & F. Wysotzki (Eds.), Mind modeling - A cognitive science approach to reasoning, learning and discovery. 83-109. Lengerich, Germany: Pabst Scientific Publishing.

Back to Table of Contents


(P2) Are there any guidelines on how to name productions?

Productions will load as long as their names are taken from a set of legal characters, essentially alphanumerics and "-" and "*". Names consisting only of numerics are not allowed.

Soar programmers tend to adopt a convention whereby the name of a production describes what the rule does, and where it should apply. Typically, the conventions suggest that names have the following general form:

 
problem-space-name*state-name*operator-name*action

How you choose your naming convention is probably less important than the fact that you do use one.

Note that Soar names working memory elements with a single alphabetic character followed by a number, such as p3. If you name a production this way, it will not be printable. (It is also poor style.)
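These naming rules can be captured in a short checker. This is an illustrative Python sketch; the function name and the exact legal character set are assumptions, not part of any Soar release.

```python
import re

def plausible_production_name(name):
    """Rough check of the naming rules above (a sketch; the exact legal set
    may differ by Soar version): alphanumerics plus '-' and '*', not purely
    numeric, and not shaped like a WME identifier (one letter then digits)."""
    if not re.fullmatch(r"[A-Za-z0-9*-]+", name):
        return False   # illegal character somewhere
    if name.isdigit():
        return False   # purely numeric names are not allowed
    if re.fullmatch(r"[A-Za-z][0-9]+", name):
        return False   # looks like a WME identifier such as p3: unprintable
    return True
```

For example, a conventional name like blocks-world*propose*move-block passes, while "123" and "p3" do not.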

Bob Marinier has also offered some tips for naming productions.

Back to Table of Contents


(P3) Why did I learn a chunk here?

Soar generally learns a chunk when a result is created in a superstate by a production which tests part of the current subgoal.

e.g., the production:

sp {create*chunk1
   (state <s> ^superstate <ss>)
   -->
   (<ss> ^score 10)}
 

creates a preference for the attribute "score" with value 10.

That mechanism seems simple enough. Why, then, do you sometimes get chunks when you do not expect them? This is usually due to shared structure between a subgoal and a superstate. If there is an object in the subgoal which also appears in a superstate, then creating a preference for an attribute on that object will lead to a chunk. Such preferences in a superstate are often called "results", although you are free to generate any preference from any subgoal, so that term can be misleading.

For example, suppose that working memory currently looks like:

(S1 ^type T1 ^name top ^superstate nil)
(T1 ^name boolean)
(S2 ^type T1 ^name subgoal ^superstate S1)
 

so S2 is the substate and S1 is the superstate for S2. T1 is being shared. In this case, the production:

sp {lead-to*chunk2
   (state <s> ^name subgoal)
   -->
   (<s> ^size 2)}
 

will create a chunk, even though it does not directly test the superstate. This is because T1 is shared between the subgoal and the superstate, so adding ^size to T1, adds a preference to the superstate.

What to do?

Often the solution is to create a copy of the object in the subgoal. So, instead of (S2 ^type T1) create (S2 ^type T2) and (T2 ^name boolean).

For example (in pseudo-code):

sp {copy*superstate*type
   (state <s> ^superstate <ss>)
   (<ss> ^type <t>)
   (<t> ^name <n>)
   -->
   (<s> ^type <t2>)
   (<t2> ^name <n>)}
 

This will copy the ^type attribute to the subgoal and create a new identifier (T2) for it. Now you can freely modify T2 in the subgoal, without affecting the superstate.
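The shared-structure rule above can be illustrated outside of Soar with a small reachability check. This is a Python sketch over hypothetical data structures (the architecture does this internally): a new preference is a result exactly when the identifier it is attached to can be reached from the superstate.

```python
def is_result(wm, superstate, target):
    """Return True if the target identifier is reachable from the superstate
    by following attribute links (a toy model; wm maps an identifier to a
    dict of attribute -> value, where values may themselves be identifiers)."""
    seen, frontier = set(), [superstate]
    while frontier:
        ident = frontier.pop()
        if ident == target:
            return True
        if ident in seen:
            continue
        seen.add(ident)
        # follow every attribute whose value is itself an identifier
        frontier.extend(v for v in wm.get(ident, {}).values() if v in wm)
    return False

# Working memory from the example: T1 is shared between S1 and S2.
wm = {"S1": {"type": "T1", "name": "top"},
      "T1": {"name": "boolean"},
      "S2": {"type": "T1", "name": "subgoal", "superstate": "S1"}}
```

Here is_result(wm, "S1", "T1") is True, so adding ^size to T1 from the subgoal creates a result (and a chunk); after applying the copy fix, so that S2 points at a fresh T2, is_result(wm, "S1", "T2") is False and T2 can be modified freely.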

Back to Table of Contents


(P4) Why didn't I learn a chunk there (or how can I avoid learning a chunk there)?

There are a number of situations where you can add something to the superstate and not get a chunk:

(1) Learning off

If you run Soar and type "learn -off" all learning is disabled. No chunks will be created when preferences are created in a superstate. Instead, Soar only creates a "justification". (See below for an explanation of these.)

You can type "learn" to see if learning is on or off, and "learn -on" will make sure it is turned on. Without this, you cannot learn anything.

(2) Chunk-free-problem-spaces

You can declare certain problem spaces as chunk-free, and no chunks will be created in those spaces. The way to do this is changing right now (in Soar 7) because we no longer have explicit problem spaces. If you want to turn off chunking like this, check the manual.

(3) Quiescence t

If your production tests ^quiescence t, it will not lead to a chunk. For example,

sp {lead-to*chunk1
   (state <s1> ^name subgoal ^superstate <s2>)
   -->
   (<s2> ^score 10)}
 

will create a chunk, whilst

sp {lead-to*no-chunk
   (state <s1> ^name subgoal ^superstate <s2> ^quiescence t)
   -->
   (<s2> ^score 10)}
 

will not create a chunk (you just get a justification for ^score 10). You can read about the reasons for this in the Soar manual. You also do not get a chunk if a production in the backtrace for the chunk tested ^quiescence t:

For example,

sp {create*score
   (state <s> ^name subgoal ^quiescence t)
   -->
   (<s> ^score 10)}
sp {now-expect-chunk*but-dont-get-one
   (state <s> ^name subgoal ^score 10 ^superstate <ss>)
   -->
   (<ss> ^score 10)}
 

The test for ^quiescence t is included in the conditions for why this chunk was created--so you get a justification, not a chunk.

A point to note is that having tested ^quiescence t, the effect of not learning chunks is applied recursively. If you had a production:

sp {lead-to*second-chunk
   (state <s> ^name medium-goal ^superstate <ss>)
   (<s> ^score 10)
   -->
   (<ss> ^score-recorded yes)}
 

and you have a goal stack:

(S1 ^name top ^superstate nil)
(S2 ^name medium-goal ^superstate S1)
(S3 ^name subgoal ^superstate S2)
 

then if lead-to*chunk1 leads to ^score 10 being added to S2 and then lead-to*second-chunk fires and adds ^score-recorded yes to S1, you will get two chunks (one for each result). However, if you use lead-to*no-chunk instead, to add ^score 10 to S2, then lead-to*second-chunk will also not generate a chunk, even though it does not test ^quiescence t itself. That is because ^score 10 is a result created from testing quiescence.
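This recursive suppression can be modeled as taint propagation over the derivation trace. The following is an illustrative Python sketch; the maps are hypothetical stand-ins, not Soar structures.

```python
def chunk_or_justification(result, derived_from, tests_quiescence):
    """Return 'chunk' or 'justification' for a result, propagating the
    ^quiescence t taint back through the derivation graph (a toy model).
    derived_from maps a result to the earlier results it depended on;
    tests_quiescence holds results whose creating rule tested ^quiescence t."""
    def tainted(r, seen=frozenset()):
        if r in tests_quiescence:
            return True
        return any(tainted(p, seen | {r})
                   for p in derived_from.get(r, ()) if p not in seen)
    return "justification" if tainted(result) else "chunk"
```

With derived_from = {"score-recorded": ["score"]} and "score" created by a quiescence-testing rule, both "score" and "score-recorded" yield "justification", matching the lead-to*no-chunk / lead-to*second-chunk example; with no quiescence test anywhere, "score-recorded" yields "chunk".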

Back to Table of Contents


(P5) What is a justification?

Any time a subgoal creates a preference in a superstate, a justification is always created, and a chunk will also be generated unless you have turned off learning in some manner (see above). If learning has been disabled, then you only get a justification. A justification is effectively an instantiated chunk, but without any chunk being created.

For example, let's say:

sp {lead-to*chunk1
   (state <s> ^name subgoal ^superstate <ss>)
   (<ss> ^name top)
   -->
   (<ss> ^score 10)}

leads to the chunk:

sp {chunk-1
   :chunk
   (state <s1> ^name top)
   -->
   (<s1> ^score 10)}
 

If working memory was:

(S1 ^name top ^superstate nil ^score 10)
(S2 ^name subgoal ^superstate S1)
 

then if you typed "preferences S1 score 1" you would see:

Preferences for S1 ^score:
acceptables:
10 +
From chunk-1
 

(The value is being supported by chunk-1, an instantiated production just like any other production in the system).

Now, if we changed the production to:

sp {create*no-chunk
   (state <s> ^name subgoal ^superstate <ss> ^quiescence t)
   (<ss> ^name top)
   -->
   (<ss> ^score 10)}
 

We do not get a chunk anymore. We get justification-1. If you were to print justification-1 you would see:

sp {justification-1
   :justification ;not reloadable
   (S1 ^name top)
   -->
   (S1 ^score 10)}
 

This has the same form as chunk-1, except it is just an instantiation. It only exists to support the values in state S1. When S1 goes away (in this case, when you do init-soar), this justification will go away too. It is like a temporary chunk instantiation. Why have justifications? Well, if you now typed "preferences S1 score 1" you would see:

Preferences for S1 ^score:
acceptables:
10 +
From justification-1
 

Justification-1 is providing the support for the value 10. If the subgoal, S2, goes away, this justification is the only reason Soar retains the value 10 for this slot. If later, the ^name attribute of S1 changes to "driving" say, this justification will no longer match (because it requires ^name top) and the justification and the value will both retract.
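The support-until-conditions-change behavior can be sketched as follows. This is a toy Python model, not Soar's implementation; the class and slot representation are invented for illustration.

```python
class Justification:
    """A toy model of a justification: an instantiated condition set that
    supports a value only while those conditions still hold in working
    memory (slots here are (identifier, attribute) pairs)."""
    def __init__(self, conditions, supports):
        self.conditions = conditions   # e.g. {("S1", "name"): "top"}
        self.supports = supports       # e.g. ("S1", "score", 10)

    def still_supports(self, wm):
        return all(wm.get(slot) == value
                   for slot, value in self.conditions.items())

# justification-1 from the example: supports (S1 ^score 10) while (S1 ^name top)
wm = {("S1", "name"): "top", ("S1", "score"): 10}
j = Justification({("S1", "name"): "top"}, ("S1", "score", 10))
```

While the name is "top", j.still_supports(wm) is True; change the name to "driving" and it becomes False, so the score's only support is gone and the value retracts.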

Back to Table of Contents


(P6) How does Soar decide which conditions appear in a chunk?

Soar works out which conditions to put in a chunk by finding all the productions that led to the final result being created. It sees which of those productions tested parts of the superstate and collects all those conditions together.

For example:

(S1 ^name top ^size large ^color blue ^superstate nil)     ;# The superstate
---------------------------------                          ;# Conceptual boundary

(S2 ^superstate S1)                                        ;# Newly created subgoal.

sp {production0
   (state <s> ^superstate nil)
   -->
   (<s> ^size large ^color blue ^name top)}

If we have:

sp {production1
   (state <s> ^superstate <ss>)
   (<ss> ^size large)
   -->
   (<s> ^there-is-a-big-thing yes)}
 

and

sp {production2
   (state <s> ^superstate <ss>)
   (<ss> ^color blue)
   -->
   (<s> ^there-is-a-blue-thing yes)}
 

and

sp {production3
   (state <s> ^superstate <ss>)
   (<ss> ^name top)
   -->
   (<s> ^the-superstate-has-name top)}
 

and

sp {production1*chunk
   (state <s> ^there-is-a-big-thing yes
              ^there-is-a-blue-thing yes
              ^superstate <ss>)
   -->
   (<ss> ^there-is-a-big-blue-thing yes)}
 
 

and working memory contains (S1 ^size large ^color blue) this will lead to the chunk:

sp {chunk-1
   :chunk
   (state <s1> ^size large ^color blue)
   -->
   (<s1> ^there-is-a-big-blue-thing yes)}
 

The size condition is included because production1 tested the size in the superstate, created the ^there-is-a-big-thing attribute, and this led to production1*chunk firing. Similarly for the color condition, which was also tested in the superstate and led to the result ^there-is-a-big-blue-thing yes. The important point is that ^name is not included. This is because even though it was tested by production3, it was not tested in production1*chunk, and therefore the result did not depend on the name of the superstate.
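The condition-collection process can be sketched as a backtrace from the result. This is a much-simplified Python illustration with hypothetical data structures, not the real algorithm.

```python
def backtrace_chunk_conditions(result_wme, created_by, tested, superstate_wmes):
    """Collect the superstate WMEs that a result transitively depended on.
    created_by maps a WME to the production instantiation that created it in
    the subgoal (absent if it pre-existed); tested maps an instantiation to
    the WMEs it matched."""
    conditions, frontier = set(), [result_wme]
    while frontier:
        wme = frontier.pop()
        inst = created_by.get(wme)
        if inst is None:
            # Not created in the subgoal: if it lives in the superstate,
            # it becomes a condition of the chunk.
            if wme in superstate_wmes:
                conditions.add(wme)
            continue
        # Otherwise keep walking back through what that instantiation tested.
        frontier.extend(tested[inst])
    return conditions

# The example above: production1 tested size, production2 tested color,
# production3 tested name, and only the first two fed the final result.
created_by = {
    "(S2 ^there-is-a-big-thing yes)": "production1",
    "(S2 ^there-is-a-blue-thing yes)": "production2",
    "(S2 ^the-superstate-has-name top)": "production3",
    "(S1 ^there-is-a-big-blue-thing yes)": "production1*chunk",
}
tested = {
    "production1": ["(S1 ^size large)"],
    "production2": ["(S1 ^color blue)"],
    "production3": ["(S1 ^name top)"],
    "production1*chunk": ["(S2 ^there-is-a-big-thing yes)",
                          "(S2 ^there-is-a-blue-thing yes)"],
}
superstate_wmes = {"(S1 ^size large)", "(S1 ^color blue)", "(S1 ^name top)"}
```

Backtracing from the result yields the size and color WMEs but not the name, matching the conditions of chunk-1.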

Back to Table of Contents


(P7) Why does my chunk appear to have the wrong conditions?

See above for a general description of how chunk conditions are computed. If you have just written a program and the chunks are not coming out correctly, try using the "explain" tool.

So, using the example of how conditions in chunks are created (shown above), if the chunk is:

sp {chunk-1
   :chunk
   (state <s1> ^size large ^color blue)
   -->
   (<s1> ^there-is-a-big-blue-thing yes)}
 

and you type "explain chunk-1" you will get something like:

sp {chunk-1
   :chunk
   (state <s1> ^color blue ^size large)
   -->
   (<s1> ^there-is-a-big-blue-thing yes +)}
   1 : (state <s1> ^color blue)         Ground : (S1 ^color blue)
   2 : (<s1> ^size large)               Ground : (S1 ^size large)
 

This shows a list of conditions for the chunk and which "ground" (i.e., superstate working memory element) they tested.

If you want further information about a particular condition you can then type "explain chunk-1 2" (where 2 is the condition number -- in this case (<s1> ^size large)) to get:

Explanation of why condition (S1 ^size large) was included in chunk-1
Production production1 matched
     (S1 ^size large) which caused
production production1*chunk to match
     (S2 ^there-is-a-big-thing yes) which caused
A result to be generated.
 

This shows that ^size large was tested in the superstate by production1, which then created (S2 ^there-is-a-big-thing yes). This in turn caused the production production1*chunk to fire and create a result (in this case ^there-is-a-big-blue-thing yes) in the superstate, which leads to the chunk.

This tool should help you spot which production caused the unexpected addition of a condition, or why a particular condition did not show up in the chunk.

Back to Table of Contents


(P8) How is it possible for Soar to generate a duplicate chunk?

Question from Bill Kennedy:

If the new chunk is a duplicate, wouldn't the original have already fired? Is it the case that when all the applicable productions fire to resolve an impasse, the chunking and generalization processes could build a chunk identical to one that already fired?

Answer from John Laird (Nov. 10, 2002):

Two similar results can be created during the same decision cycle, which in turn can lead to two identical chunks.

Also, it is possible to generate a result in a subgoal that was already created by a chunk, if the result doesn't lead to progress in the problem solving - that is, the existence of the result from the chunk isn't tested in the subgoal (so the result is created even though it is essentially already there) and it doesn't resolve the impasse (so even after the result is created, the impasse remains).

Back to Table of Contents


(P9) What is all this support stuff about? (Or why do values keep vanishing?)

There are two forms of "support" for changes to working memory: o-support and i-support. O-support stands for "operator support" and means the preference behaves in a normal computer science fashion. If you create an o-supported preference ^color red, then the color will stay red until you change it.

How do you get an o-supported preference? The exact conditions for this may change, but the general rule of thumb is this:

"You get o-support if your production tests the ^operator attribute or creates structure on an operator"

(Specifically, under Soar 7.0.0 beta this is o-support-mode 2, which some people recommend since they find it much easier to understand than o-support-mode 0, the current default.)

e.g.,

sp {o-support
   (state <s> ^operator <o>)
   (<o> ^name set-color)
   -->
   (<s> ^color red)}
 

the ^color red value gets o-support.

I-support, which stands for "instantiation support", means the preference exists only as long as the production which created it still matches. You get i-supported preferences when you do not test an operator:

e.g.,

sp {i-support
   (state <s> ^object <o1>)
   (<o1> ^next-to <o2>)
   -->
   (<o1> ^near <o2>)}
 

In this case

(<o1> ^near <o2>)

will get i-support. If obj1 ever ceases to be next to obj2, then this production will retract, and the preference for ^near will also retract. Usually this means the value for ^near disappears from working memory.

To change an o-supported value, you must explicitly remove the old value, e.g.,

^color red -  (the minus means reject the value red)
^color blue + (the plus means the value blue is acceptable)
 

while changing an i-supported value requires only that the production retract. I-support makes for less work and can be useful in certain cases, but it can also make debugging your program much harder, so it is recommended that you keep its use, as an optimization, to a minimum. By default, state elaboration automatically gets i-support, whereas applying, creating, or modifying an operator as part of some state structure will lead to o-support.
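The persistence difference can be modeled with a toy truth-maintenance loop. This is a Python sketch only (real support calculation is more involved): o-supported WMEs persist until explicitly changed, while i-supported WMEs are recomputed from the rules that still match.

```python
def update_wm(o_supported, i_rules, facts):
    """Recompute working memory for one cycle (a toy model of support).
    o_supported entries persist as-is until explicitly changed; each i-rule
    (condition, slot, value) contributes its value only while the condition
    still holds in facts."""
    wm = dict(o_supported)
    for condition, slot, value in i_rules:
        if condition(facts):
            wm[slot] = value            # i-support: lives only with the match
    return wm

# ^color red was created while testing an operator -> o-support, stays put.
o_supported = {("S1", "color"): "red"}
# ^near is inferred from ^next-to -> i-support, retracts with its reason.
i_rules = [(lambda facts: facts.get("next-to", False), ("O1", "near"), "O2")]
```

With {"next-to": True} both the color and the near value are present; with the next-to fact gone, the near value disappears on the next cycle while the color stays red.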

(One point worth noting is that operator proposals are always i-supported, but once the operator has been chosen, it does not retract even after the operator proposal goes away, because it is given a special support, c-support for "context object support". This changes to be more dynamic in Soar 8.)

To tell if a preference has o-support or i-support, check the preferences on the attribute.

e.g., "pref s1 color" will give:

Preferences for S1 ^color:
 
acceptables:
  red + [O]
 

while "pref s1 near" gives:

Preferences for S1 ^near:
 
acceptables:
   O2 +
 

The presence or absence of the [O] shows whether the value has o-support or not.

Back to Table of Contents


(P10) When should I use o-support, and when i-support?

General information

Under normal usage, you probably will not have to explicitly choose between the two. By default, you will get o-support if you apply, modify or create an operator as part of some state structure; you will get i-support if all you do is elaborate the state in some way.

It therefore follows that, by default, you should generally use operators to create and modify state structures whenever possible. This leads to persistent, o-supported structures and makes the behaviour of your system much clearer.

I-support can be convenient occasionally, but should be limited to inferences that are always true. For example, if I know (O1 ^next-to O2), where O1 and O2 are objects then it is reasonable to have an i-supported production which infers that (O1 ^near O2). This is convenient because there might be a number of different cases for when an object is near another object. If you use i-supported productions for this, then whenever the reason (e.g. ^next-to O2) is removed, the inference (^near O2) will automatically retract.

Never mix the two for a single attribute. For example, do not have one production which creates ^size large using i-support and another that tests an operator and creates ^size medium using o-support. That is a recipe for disaster.

IE phase vs. PE phase

Robert Wray wrote (July 24, 2002):

IE instantiations indicate i-supported elaborations (IE) and PE instantiations are o-supported elaborations. The "P" is for persistent: "persistent elaborations".

The Soar 8 kernel makes a distinction between IE and PE in order to fire persistent elaborations in separate (super) phases from IEs. The general idea is:

while (!quiescence(all-productions))
   while (!quiescence(IE-productions))   // this is called "mini-quiescence" in the kernel
      Preference Phase (IE only)
      WM Phase (IE only)
   Preference Phase (PE only)
   WM Phase (PE only)
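The kernel loop above can be rendered as a runnable sketch. This is illustrative Python with stand-in firing functions, not kernel APIs.

```python
def decision_phase(fire_ie, fire_pe):
    """Run i-supported (IE) productions to 'mini-quiescence' before each
    batch of persistent (PE) changes, mirroring the loop above. fire_ie and
    fire_pe are stand-ins that return True while they still fired something."""
    while True:
        while fire_ie():        # inner loop: IE productions to mini-quiescence
            pass
        if not fire_pe():       # one round of persistent changes, or stop
            break

# Demo: three IE elaborations are pending, and each PE firing enables one more.
log = []
ie_left, pe_left = [3], [2]

def fire_ie():
    if ie_left[0]:
        ie_left[0] -= 1
        log.append("IE")
        return True
    return False

def fire_pe():
    if pe_left[0]:
        pe_left[0] -= 1
        ie_left[0] = 1          # a persistent change wakes another elaboration
        log.append("PE")
        return True
    return False

decision_phase(fire_ie, fire_pe)
# log is now ["IE", "IE", "IE", "PE", "IE", "PE", "IE"]
```

Note that every persistent change is preceded by elaborating to quiescence, which is the point of the IE/PE split.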
 

Other tips

Note that several support modes have been used across versions; see the manual for your version. John Laird (laird@umich.edu) provided some useful tips (Sep. 9, 2003).

Back to Table of Contents


(P11) Why does the value go away when the subgoal terminates?

A common problem is creating a result in a superstate (e.g., ^size large) and then when the subgoal terminates, the value retracts. Why does this happen? The reason is that once the subgoal has terminated the preference in the superstate is supported by the chunk or justification that was formed when the preference was first created.

This chunk/justification may have different support than the support the preference had from the subgoal. It is quite common for an operator in a subgoal to create a result in the superstate which only has i-support (even though it is created by an operator). This is because the conditions in the chunk/justification do not include a test for the super-operator. Therefore, the chunk has i-support and may retract, taking the result with it.

NOTE: Even if you are not learning chunks, you are still learning justifications (see above) and the support, for working memory elements created as results from a subgoal, depends on the conditions in those justifications.

Back to Table of Contents


(P12) What's the general method for debugging a Soar program?

The main tools that you need to use are the commands below. You apply them where the behaviour is odd, and use them to understand what is going on. Personally, I (FER) prefer the Tcl/Tk interface for debugging, because many of these commands become mouse clicks on displays.

print
 Prints out a value in working memory.

print -stack (pgs in earlier versions of Soar)
 Prints out the current goal stack, e.g.,

   ==>S: S1
   ==>S: S2 (state no-change)

matches (ms in earlier versions of Soar)
 Shows you which productions will fire on the next elaboration cycle.

matches <prod_name>
 Shows you which conditions in a given production matched and which did not. This is very important for finding out why a production did or did not fire. For example:

   soar> matches production1*chunk
   >>>>
   (state <s> ^there-is-a-blue-thing yes)
   (<s> ^there-is-a-big-thing yes)
   (<s> ^superstate <ss>)
   0 complete matches.

 This shows that the first condition did not match.

preferences <id> <attribute> 1
 (The 1 is used to request a bit more detail.) This command shows the preferences for a given attribute and which productions created the preferences. A very common use is

   pref <s> operator 1

 which shows the preferences for the operator slot in the current state. (You can always use <s> to refer to the current state.)

Glenn Taylor mentioned that Visual Soar has made some strides in minimizing user errors like typos, providing the closest thing to a type-checker for an untyped language. There is something older called ViSoar, which could also be used to analyze productions to find typos; Visual Soar may contain much of its functionality. There is also a run-time debugger (called SDB) and structure visualizer available through the Soar web pages; it is written in Tcl, so it is platform-independent, though it is not part of the Soar 8 distribution. TclPro also comes with a debugger, which may be useful for debugging Soar.

Back to Table of Contents


(P13) How can I find out which productions are responsible for a WME?

Use

preferences <id> <attribute> 1

or

preferences <id> <attribute> -names

(They both mean the same thing.)

This shows the values for the attribute and which production created them. For example, given the working memory elements

   (S1 ^type T1)
   (T1 ^name boolean)

the command

   pref T1 name 1

will show the name of the production which created the ^name attribute within object T1.

Back to Table of Contents


(P14) What's an attribute impasse, and how can I detect them?

Attribute impasses occur if you have two values proposed for the same slot.

e.g.,

^name doug +
^name pearson +
 

leads to an attribute impasse for the ^name attribute.

It is a bit like an operator tie impasse (where two or more operators are proposed for the operator slot). The effect of an attribute impasse is that the attribute and all its values are removed from working memory (which is probably what makes your program stop working) and an "impasse" structure is created. (This describes Soar 7; attribute impasse structures are not provided by default in Soar 8.)

It is usually a good idea to include the production:

sp {debug*monitor*attribute-impasses*stop-pause
   (impasse <i> ^object <obj> ^attribute <attr>
                ^impasse <type>)
   -->
   (write (crlf) |Break for impasse | <type> (crlf))
   (tcl |preferences | <obj> | | <attr> | 1|)
   (interrupt)}
 

in your code, for debugging. If an attribute impasse occurs, this production will detect it, report that an impasse has occurred, run preferences on the slot to show you which values were competing and which productions created preferences for those values, and interrupt Soar's execution (so you can debug the problem).

Very, very occasionally you may want to allow attribute impasses to occur within your system and not consider them an error, but that is not a common choice. Most Soar systems never have attribute impasses (while almost all have impasses for the context slots, like operator ties and state no-changes), and this is probably the reason they have been removed from Soar 8.

Back to Table of Contents


(P15) How can I easily make multiple runs of Soar?

How can I run Soar many times without user intervention (i.e., in a batch mode)?

Bob Wray has put a page on the WWW describing how he ran Soar in a batch mode to collect data for his thesis. The site includes examples of csh scripts, Tcl scripts, and Soar code he used in the process. For more information, please contact Bob Wray (wray@soartech.com).

Back to Table of Contents


(P16) Are there any templates available for building Soar code?

If you use the Soar Development Environment (SDE), a set of modules for Emacs, it provides some powerful template tools, which can save you a lot of typing. You specify the template you want (or use one of the standard ones) and then a few keystrokes will create a lot of code for you.

Back to Table of Contents


(P17) How do I find all the productions that test X?

Use the "pf" command (which stands for production-find). You give "pf" a pattern. Right now the pattern has to be surrounded by a lot of brackets, but that should be fixed early in Soar 7's life.

Anyway, as of Soar 7.0.0 beta an example is:

pf {(^operator *)}
 

which will list all the productions that test an operator.

Or,

pf {(^operator.name set-value)}
 

which will list all the productions that test the operator named set-value.

You can also search for values on the right hand sides of productions (using the -rhs option) and in various subsets of rules (e.g., chunks or no chunks).
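
As a sketch of those options (the ^status attribute here is hypothetical, and the flag names are our recollection of Soar 7's command; check your version's help):

```
# list productions that create a ^status attribute on their right-hand sides
pf -rhs {(^status complete)}

# restrict the search to learned chunks
pf -chunks {(^operator.name set-value)}
```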

Back to Table of Contents


(P18) Why doesn't my binary parallel preference work?

Using parallel preferences can be tricky, because the separating commas are currently crucial for the parser. In the example below, there is a missing comma after the preferences for "road-quality".

sp {elaborate*operator*make-set*dyn-features
   (state <s> ^operator <o>)
   (<o> ^name make-set)
   -->
   (<s> ^dyn-features distance + &, gear + &, road-quality + &
     sign + &, other-road + &, other-sidewalk + &,
     other-sign + &)}

This production parses "road-quality & sign" as a valid binary preference, although this is not what was intended. Soar will not currently warn about the duplicate + preferences; you just have to be careful.
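
For comparison, here is a sketch of the corrected production, with the comma in place after road-quality (and the state and operator variables written as <s> and <o>):

```
sp {elaborate*operator*make-set*dyn-features
   (state <s> ^operator <o>)
   (<o> ^name make-set)
   -->
   (<s> ^dyn-features distance + &, gear + &, road-quality + &,
     sign + &, other-road + &, other-sidewalk + &,
     other-sign + &)}
```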

Note that the binary parallel preferences no longer exist in Soar 8.

Back to Table of Contents


(P19) How can I do multi-agent communication in Soar 7?

Reading and writing text from a file can be used for communication. However, using this mechanism for inter-agent communication is pretty slow and you would have to be careful to use semaphores to avoid deadlocks. With Soar 7, there is a relatively natural progression from Tcl to C in the development of inter-agent communication.

  1. Write inter-agent communication in Tcl. This is possible with a new RHS function (called "tcl") that can execute a Tcl script. The script can do something as simple as send a message to a simple simulator (which can also be written in Tcl). The simulator can then send the message to the desired recipient(s). You could also do things such as calling add-wme in a RHS, but this makes it harder to see what is going on and is more error-prone.
  2. Move the simulator into C code. To speed up the simulated world in which the agents interact, recode the simulator in C. Affecting the simulator can be accomplished by adding a few new Tcl commands. The agents would be largely unchanged and the system would simply run faster.
  3. Move communication to C. This is done by writing Soar I/O functions as documented in section 6.2 of the Soar Users Manual. This is the fastest method.
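
Step 1 might look something like the following sketch; the Tcl procedure send-message and the operator structure are hypothetical, invented for illustration:

```
sp {comm*apply*send-message
   (state <s> ^operator <o>)
   (<o> ^name send-message ^to <agent> ^content <msg>)
   -->
   (tcl |send-message | <agent> | | <msg>)}
```

The tcl RHS function concatenates its arguments into a script and hands it to the Tcl interpreter; send-message would then pass the text on to the simulator for delivery.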

Back to Table of Contents


(P20) How can I use sockets more easily?

Using sockets with Soar is not well-documented, but it has been done numerous times. Socket communication is included in the Eaters and TankSoar applications (which come with a tutorial document) and sockets have also been implemented with C code using a library written at Michigan called SocketIO. Eaters and TankSoar and the Soar 8 Tutorial are available on the Soar "Getting Started" web page, and SocketIO can be found near the bottom of the "Projects/Tools" web page. Soar 8.6 also has improved socket utilities.

http://ai.eecs.umich.edu/soar/getting-started.html

http://ai.eecs.umich.edu/soar/projects.html

Scott Wallace (swallace@vancouver.wsu.edu) added socket communication to Eaters (and TankSoar uses the same code), and Kurt Steinkraus (kurtas@umich.edu) wrote SocketIO. Kurt is no longer at the University of Michigan, but his email gets forwarded.

Answer from: "Karen J. Coulter" (kcoulter@eecs.umich.edu), Date: June 4, 1999

Back to Table of Contents


(P21) How do I write fast code?

[An update to the socket answer above: SocketIO has been superseded by SGIO. SGIO and its documentation are included in the current Soar 8.6 releases. SGIO is a C++ API for connecting Soar to external environments. The SGIO documentation is located in the sgio-1.1.2/docs directory where Soar is installed. --Bob Marinier (rmarinie@eecs.umich.edu; Nov. 24, 2004)--]

Here are a few general hints for writing fast code:

  1. Avoid productions whose conditions match combinatorially across multi-valued attributes; these are the classic source of expensive matches (and of expensive chunks).
  2. Keep working memory small: remove structures you no longer need.
  3. Use the stats and firing-counts commands to find out where the time is going.

Back to Table of Contents


(P22) In a WME, is the attribute always a symbolic constant?

No, it can be an identifier. For example,

sp {example*identifier-attribute
   (state <s> ^superstate nil)
   -->
   (<s> ^<attr> yes)
   (<attr> ^time earlier)}

is legal, although not recommended.

Back to Table of Contents


(P23) How does one write the 'wait' operator in Soar8.3?

If your wait operator really never needs to do anything, this will work:

sp {wait*propose*wait
    (state <s> ^problem-space.name wait)
   -(<s> ^operator.name wait)
    -->
    (<s> ^operator <o> + <, =)
    (<o> ^name wait)}
 

Since the proposal tests that there is no operator named wait selected, the proposal will be retracted as soon as wait is selected.

Alternatively you can use:

sp {propose*wait
      (state <s> ^name <x>)
    -{(<s> ^operator <o>)
      (<o> ^name wait)}
      -->
      (<s> ^operator <o> +)
      (<o> ^name wait)}
 

Back to Table of Contents


(P24) How does one mess with wme's on the IO link?

The original question: there seem to be no preferences in preference memory for (id ^attribute value) triples acquired through C input functions. The resulting problem is that one can't remove values from working memory that were first acquired through I/O interfacing with some external program. (Platform: Soar 6.2.3 with the Windows interface, and later versions.)

Working memory elements from I/O are added directly to working memory without going through Soar's preference mechanism. That's why they don't have any preferences. That's also why production rules can't make them disappear (say, with a reject preference): those WMEs totally bypass the preference process.

This is intentional. The idea is that the input link represents "current perception", and you can't use purely cognitive processes to "shut off" your current perceptions. If you want to remove things from the input link, you have to make the I/O code remove them. If you want the input link to be under the control of cognition, then your rules must ask the I/O system (via the output link) to change the values on the input link. The rules cannot do it directly.

C input functions add WMEs using the kernel routine add_input_wme, which surgically alters working memory directly. The preference mechanism is not part of this process. The only way to remove these input WMEs is by calling the routine remove_input_wme from your input function. add_input_wme returns a pointer to the WME you added, which you must store statically somehow (on a linked list or something) to later remove the WME with remove_input_wme. I should add the caveat that I didn't actually go back and look at Soar 6, and gave you the answer for Soar version 7 and later, but I'm fairly certain that add_input_wme and remove_input_wme were also in Soar 6 (look in io.c). - comments from Randy Jones -

However, this raises another issue. Currently in Soar, you *can* use remove-wme to remove something that was created by productions through preference memory. The problem is that it removes the WME but does not remove the preference. This can lead to all sorts of nastiness. For example, say a rule created something you want to get rid of, and you want the production to fire again to change the same attribute to a different value. You use remove-wme to delete the old value. The production can fire again (maybe because it tests for the absence of the value it creates), and it creates a preference for the new value of the attribute. Bingo: you now have an attribute impasse. At least, that's what you got in Soar 7. Soar 8 has no attribute impasses, so presumably the new value would simply show up in working memory.

At a deep level (the inability of users to access preference memory for input WMEs) maybe this is a bug. However, there is no preference memory for the input WMEs, so it is not surprising that a reject preference does not remove them.

Back to Table of Contents


(P25) How can I get Soar to interpret my symbols as integers?

Given a right-hand-side action that calls Tcl to get a number that is then bound to an attribute's value:

(<o> ^rehearsals (tcl | get-subject-rehearsals classifications|))

the Tcl function get-subject-rehearsals returns a value that would ideally be interpreted as an integer. However, Soar will interpret it as a symbol, such as:

(O20 ^rehearsals |5|)

instead of the ideal:

(O20 ^rehearsals 5)

To achieve the ideal, try the following:

(<o> ^rehearsals (int (tcl | get-subject-rehearsals classifications|)))

or,

(<o> ^rehearsals (float (tcl | get-subject-rehearsals classifications|)))

Note, this will change in Soar 8.6.

Back to Table of Contents


(P26) Has there been any work done where new operators are learnt that extend problem spaces?

John Laird wrote:

Scott Huffman's thesis contains a lot of the ideas that will be needed in any system that learns new operators.

Also, Doug Pearson's thesis work did not learn new operators completely from scratch, but it did learn to correct both the conditions and the actions for operators - the only thing it couldn't learn was the goal/operator termination conditions.

Mike van Lent's thesis work learns new Soar operators, but not from Soar processing - it is a preprocessor.

Contact John Laird (laird@owl.eecs.umich.edu) for further information regarding any of these.

Rail-soar as illustrated in the Soar video did this as well. Erik Altmann (altmann@gmu.edu) wrote this.

Some of the Blood-Soar work at Ohio State may have done this as well. Todd Johnson (todd.r.johnson@uth.tmc.edu) is the person to ask.

Back to Table of Contents


(P27) How can I find out about a programming problem not addressed here?

There are several places to turn to, listed here in order that you should consider them.

  1. The manuals may provide general help, and you should consult them if possible before trying the mailing lists.
  2. You can consult the list of outstanding bugs.
  3. You can consult the mailing lists noted above.

Back to Table of Contents


Section 4: Downloadable Models


(DM0) Burl: A general learning mechanism

Burl is a general learning mechanism, written by Todd Johnson, for using bottom-up recognition learning as a way to build expertise. It answers the question: how do I do psychologically plausible lookahead search in Soar?

Code is available from Todd at Todd.R.Johnson@uth.tmc.edu.

Back to Table of Contents


(DM1) A general model of teamwork

Milind Tambe (tambe@isi.edu) wrote (Thu, 6 Mar 1997):

As you may well know, the Soar/IFOR-CFOR project has been developing agents for complex multi-agent domains, specifically distributed interactive simulation (DIS) environments. As part of this effort at ISI, we have been developing a general model of teamwork, to facilitate agents' coherent teamwork in complex domains.

This model, called STEAM, at present involves about 250 productions. We have used this model in developing three separate types of agent teams (including one outside of DIS environments). Since this model may be of use to some of you working with multi-agent Soar systems, we are now making it available to the Soar group.

Here is a pointer to the relevant web page describing STEAM:

http://www.isi.edu/soar/tambe/steam/steam.html

Documentation for the model is available on the web page, however, if there are additional questions, I will be happy to answer them. Any feedback on this topic is very welcome.

Back to Table of Contents


(DM2) A model that does reflection

When Ellen Bass was a graduate student at Georgia Tech, she created a reflection model written in Soar 6.2.5 NNPSCM. According to her, the model starts up after the ATC task has been performed and the aircraft has disappeared from the display.

She is currently an assistant professor in the Department of Systems and Information Engineering at the University of Virginia. Please contact her (ellenbass@virginia.edu) if you need more information on the model.

Back to Table of Contents


(DM3) A model that does concept/category acquisition

Symbolic Concept Acquisition (SCA) is a model of concept learning that was first developed by Craig Miller using the Soar architecture. Please refer to this site for the model:

http://www.speakeasy.org/~wrayre/soar/sca/html/index.html

Back to Table of Contents


(DM4) A model for Java Agent Framework (JAF) component

SoarProblemSolver has been released. It is a JAF component wrapping the author's soar2java package. JAF is an agent architecture developed and maintained by the University of Massachusetts. Hopefully, this component will enable Soarers to use other JAF components with minimal knowledge of Java (Java is the current implementation language of JAF). For more information, please visit the following site:

www.geocities.com/sharios/soarproblemsolver.htm

Back to Table of Contents


(DM5) Herbal: A high level behavior representation language

Herbal is a high-level language for behavior representation, and acts as a first step towards creating development tools that support a wide range of users in the cognitive modeling domain. Currently, the Herbal environment supports creating a model under the Soar cognitive architecture. With Herbal, users can create cognitive models graphically and have these models compiled into Soar productions.

It may be worthwhile to try out Herbal if you are a user of Soar or will be one. For further information, please visit acs.ist.psu.edu/projects/Herbal, where you will find the software download and other useful documentation.

Back to Table of Contents


(DM6) dTank: A competitive environment for distributed agents

dTank was originally inspired by TankSoar, which was developed by Mazin As-Sanie at the University of Michigan. dTank was developed to efficiently utilize the flexibility of Java graphics and networking. For example, dTank provides an agent architecture-neutral interface to the game server, so that humans and agents can interact within the same environment over the network. It includes a basic tank model, and Herbal includes a dTank model.

For further information regarding dTank, please visit acs.ist.psu.edu/projects/dTank.

Back to Table of Contents


(DM7) A model that counts attributes

Richard Young wrote:

Back in 1993, I wrote some code that counted the number of attribute-value pairs on an object entirely within the scope of a single operator. (The code was part of a program that formed an exact recognition chunk for an object, in a highly efficient manner.) This way of doing it was suggested by Bob Doorenbos as being possible.

There are just four rules, which I am retyping from a hardcopy printout, so I don't guarantee accuracy. I think this was for Soar 5 or 6, so it will need to be adapted. It makes use of attribute preferences, so it may not work in Soar 8.

(sp operator*recognise*count*initial-count
 (goal <g> ^state.recognition-node <n> ^operator.name recognise)
 (<n> -^count)
 -->
 (<n> ^count 0))              ;(thank you, Bob D.)

(sp operator*recognise*count*all-avs
 (goal <g> ^state.recognition-node <n> ^operator.name recognise)
 (<n> ^structure <w>)
 (<w> ^<att> <val>)
 -->
 (<av> ^<att> <val>)
 (<n> ^a-v <av> + <av> =))
;; NOTE: the above rule gets *all* the <att> <val>s from <w>;
;; you may want to restrict it to a particular attribute

(sp operator*recognise*count*done
 (goal <g> ^state.recognition-node <n> ^operator.name recognise)
 -->
 (<n> ^a-v done + done <))

(sp operator*recognise*count*each-av
 (goal <g> ^state.recognition-node <n> ^operator.name recognise)
 (<n> ^count <c> ^a-v <av>)
 (<av> ^<att> <val>)
 -->
 (<n> ^a-v <av> - ^count <c> - (+ 1 <c>) +))

Back to Table of Contents


(DM8) VISTA: A toolkit for visualizing an agent's behavior

VISTA is a generic tool for developing graphical displays of an agent's behavior in real time. You can download the software from the page: www.soartech.com/downloads.vista.php

Back to Table of Contents


Section 5: Advanced Programming Tips


(APT1) How can I get log to run faster?

Question:

I'm using the log command to record a trace for a very long simulation (15k+ decisions) I'm running. Is there a "quiet" mode so that the trace doesn't redundantly print to the screen? The printing really slows things down. Is there a clever/obvious way to do this at present? Maybe it's a feature to consider in the future?

Answer from Karen J. Coulter (kcoulter@eecs.umich.edu):

Date: Mon, 16 Dec 1996

There isn't a quiet mode on the log command, but in general for any xterm, ^O toggles the output to the screen. If you only want the output of the trace, you can redirect it to a file using output-strings-destination. I believe then the text would be sent only to the file and not to the screen.

Back to Table of Contents


(APT2) Are there any reserved words in Soar?

This is a slightly odd question, for Soar is a production-system language, not a procedural language like Pascal. But we know what you mean: what symbols have special meaning, and how do I have to use them?

The most important symbol is 'state'. Productions have to start their match with a clause that tests the state, e.g.,

(state ^attribute value)

On the top state, there are a few attributes that the system uses: ^io, which holds I/O information; ^superstate, which holds a pointer to the superstate, or nil if it is the top state; ^type, which indicates if the object is a state, operator, or user defined; and ^operator, which holds the current operator.

In lower states caused by impasses, there are additional attributes: ^attribute, which is the type of object causing the impasse, such as state or operator; ^choices, which holds the tied choices in a tie impasse, or none in a no-change impasse; ^impasse, which indicates the type of impasse, such as no-change; and ^quiescence, which indicates whether the impasse happened with rules left to fire (nil) or with all the rules fired (t). If ^quiescence is tested in an impasse, a chunk is not built; this is a way of avoiding learning based on a lack of knowledge. These attributes can all be matched by user productions, but either cannot or should not be changed by the user.
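
As a sketch of how these attributes can be matched, a monitoring production that reports operator ties might look like this (using the attribute names listed above):

```
sp {monitor*operator-tie
   (state <s> ^impasse tie ^attribute operator ^superstate <ss>)
   -->
   (write (crlf) |Operator tie below state | <ss> (crlf))}
```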

Many modelers treat ^problem-space like a reserved word. It used to be modifiable only by an explicit decision by the architecture; now models can modify it directly. Most objects have a ^name. It is nearly a necessity to name your operators, and if states are named, their names will appear in the trace.

The default rules (for versions of Soar that include them) use their own conventions for attributes. These conventions amount to reserved words that your models can use; for example, the selection knowledge encourages domain knowledge to assist in evaluating objects. If you are not sure what attributes are there, just print out the objects; this is typically enough to get you started.

Back to Table of Contents


(APT3) How can I create a new ID and use it in the production?

Jonathan Gratch [gratch@isi.edu] wrote:

I've noticed the following:


the rule

sp {....
   (<s> ^foo (make-constant-symbol |foo|) + &)

expands to

sp {....
   (<s> ^foo (make-constant-symbol |foo|) +
    ^foo (make-constant-symbol |foo|) &)}

so each call to make-constant-symbol generates a different symbol, and the parallel preference does not relate the two values as intended.

The workaround solution, if I remember correctly, involved a two-step process of generating your symbol and then separately putting it to whatever use you had in mind. Not being an expert at writing Soar productions, I may be way off base on this one, but I did not want to leave you without any initial response to your bug report. I do not believe any kernel solution to this irritating behavior would be easy to implement, but that is just a guess at this point. If you do not come up with a workaround, let me know and I will look into this further. Heck, let me know either way; the information would make a good addition to the soar-bugs archives or perhaps the FAQ.

From: Aladin Akyurek (akyurek@bart.nl):

Date: March 13, 1997

A workaround is to split the production: create the value with make-constant-symbol in one rule, and mark the attribute as multi-valued in a second:


sp {gratch-1
   (state <s> ^superstate nil)
   -->
   (<s> ^foo (make-constant-symbol |foo|))}

sp {gratch-2
   (state <s> ^superstate nil ^foo <v>)
   -->
   (<s> ^foo <v> &)}

Back to Table of Contents


(APT4) How can I access Tcl and Unix variables?

soar_library and default_wme_depth, for example, are defined as global variables, and all Soar files can be obtained through relative path names from the file path. On the Mac this is a bit trickier, but doable as well. Check out the TSI source code for examples.

Unix environment variables are also available through the "env" array of variables (e.g., $env(MANPATH)). Advanced users can also set variables in their shells by hand or with init files.

You should keep in mind that the Tcl scoping of variables is non-intuitive (to us, at least, at times), and it is a common problem in setting up paths and installation flags.

This sort of problem is becoming a trend, as it's easy to overlook. All Soar global Tcl variables, such as $default, need to be declared with a Tcl "global" command when used in a script. When procedures call procedures that call scripts which call scripts ..., your variable scope will change!

If variables are not visible as you expect them to be, often the problem is that they are global but not declared in your function or file. Insert a 'global var_name' and often the variable will appear again.
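
For example, a sketch of a procedure that reads the soar_library variable; without the global declaration, $soar_library inside the procedure would refer to an undefined local variable:

```
proc show-library {} {
    global soar_library
    puts "soar_library is $soar_library"
}
```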

Expanded from email between Tom Head (Tom_Head@anato.soar.cs.cmu.edu), Karen J. Coulter (kcoulter@eecs.umich.edu), and Aladin Akyurek (akyurek@bart.nl) on soar-bugs, Tue, 25 Mar 1997, and later email as well.

Back to Table of Contents


(APT5a) How can I access Soar objects?

Question from Harko Verhagen (verhagen@dsv.su.se): How do I get access to WMEs?

Answer from Bruce Israel (israel@ers.com), Dec. 20 1996

If you're working in soartk, in the Tcl shell you can use the function "wmem" to retrieve WMEs. "wmem" prints its output, but you can use "output-strings-destination -push -append-to-result" to get it into a form you can use programmatically.

Here are some Tcl routines I built for accessing WMEs. You can use the routines wm_elts, allobs, and wm_value for different types of retrievals within Tcl code.

Utility routines for WM retrievals

[Written by Bruce Israel (israel@ers.com), Fri Dec 20 1996]

Copyright ExpLore Reasoning Systems, Inc. 1996. All rights reserved.

# member - is ITM an element of the list LST?
# Usage: member ITM LST

proc member {itm lst} {
   if {-1 == [lsearch -exact $lst $itm]} {return 0} else {return 1}}

# addset - add an item to a set

proc addset {itm lst} {
   if {! [member $itm $lst]} {
      lappend lst $itm
   }
   return $lst
}


# wm_elts - return triples of all WM elements matching a pattern

proc wm_elts {ob attr val} {
   output-strings-destination -push -append-to-result
   set wmemstr [wmem "($ob ^$attr $val)"]
   output-strings-destination -pop
   set def ""
   while {[scan $wmemstr "%\[^\n\]%\[ \t\n\]" wm_elt ws] > 0} {
      set ct [scan $wm_elt "(%d: %s %s %\[^)\])" time nob nattr nval]
      if {$ct > 0} {
         lappend def "$nob $nattr $nval"
         set len [string length $wm_elt]
         set len [expr $len + [string length $ws]]
         set wmemstr [string range $wmemstr $len end]
      } else {
         set wmemstr ""
      }
   }
   return $def
}


# Return all WM objects matching the specified ATTR / VAL
# e.g.,
# all objects - allobs * *
# all states - allobs superstate *
# top state - allobs superstate nil

proc allobs {attr val} {
   set obs ""
   foreach wm [wm_elts * $attr $val] {
      set obs [addset [lindex $wm 0] $obs]
   }
   return $obs
}
 
# Return the value(s) of an attribute of a particular id.
# Multiple values are separated by newlines.
 
proc wm_value {id attr} {
   set wmitems [wm_elts $id $attr *]
   set res ""
   foreach item $wmitems {
      set val [string trim [lrange $item 2 end] "| \t\n"]
      set res "${res}\n${val}"
   }
   return $res
}

Back to Table of Contents


(APT5b) How can I find an object that has attributes and values?

[Question from Ernst Bovenkamp, July 2000]:

I would like to know how I can find an object <object> whose attributes and values are exactly equal to those of some other object <object1>, and there can be only one <object> with these characteristics.

However, there is more than one <object>, and every object may have some, but not all, values equal to <object1>'s. Further, I don't know how many attributes an object has, nor what its values should be. On top of that, these objects are created in parallel, which makes it difficult to attach a unique identification number to each, because all objects are assigned the same number in that case.

[Answer from Randy Jones]:

Since Soar does not have any explicit quantifiers, you usually have to use negation or double-negation "tricks" to get their effects. Note that the double-negation would not work if it stood alone; in the production I gave you, <object> gets bound to one object at a time by the condition (<s> ^object <object>).

I tried testing with the following set of rules. See if they work for you (these are in Soar 8, so you have to add some parallel preferences if you are using Soar 7).

sp {elaborate*state*identical-to-key
   (state <s> ^key-object <object1>
                    ^object <object>)
   ## It is not the case that <object> has an attribute-value pair that
   ## <object1> does not have.
  -{ (<object> ^<att> <val>)
    -(<object1> ^<att> <val>) }
   ## It is not the case that <object1> has an attribute-value pair that
   ## <object> does not have.
  -{ (<object1> ^<att> <val>)
    -(<object> ^<att> <val>) }
-->
    (<s> ^identical-to-key <object>)
 }

sp {elaborate*state*key-object
   (state <s> ^superstate nil)
-->
   (<s> ^key-object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
 }

sp {elaborate*state*object1
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
 }

sp {elaborate*state*object2
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
 }

sp {elaborate*state*object3
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
 }

sp {elaborate*state*object4
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
   (<ob> ^d 4)
 }

sp {elaborate*state*object5
   (state <s> ^superstate nil)
-->
   (<s> ^object <ob>)
   (<ob> ^a 1)
   (<ob> ^b 2)
   (<ob> ^c 3)
 }

Back to Table of Contents


(APT6) How can I trace state changes?

The -trace switch to the print command is intended to be used in conjunction with a Tcl command in a RHS to provide a runtime diagnostic tool.

Andrew Howes's suggested "trace" RHS function was intended to provide a way to print out objects during production firing. I thought it better to add a "-trace" option to the print command and have that command be used via the "tcl" RHS function:

syntax: print -trace [stack-trace-format-string]

Sample RHS usage: (tcl |print -trace | <x>) uses a nice default format string and prints <x> out with indenting appropriate for its (goal) state.

Expanded from email between Tom Head (Tom_Head@anato.soar.cs.cmu.edu), Kenneth J. Hughes (kjh@ald.net), Karen J. Coulter (kcoulter@eecs.umich.edu), and Clare Congdon on Jan 6, 1997.

Back to Table of Contents


(APT7) How can/do I add new Tcl/Tk libraries?

[note, less relevant in Soar 8.6.]

Question from Rich Angros: I want to add some other Tcl widget extensions into tksoar. Does the Tcl/Tk code have any special modifications that are specific to Soar? What restrictions are there on the version of Tcl/Tk used?

The versions of Soar that are currently available on the web pages all require Tcl 7.4 and Tk 4.0. In order to allow for multiple agents in Soar, we had to extend Tcl within Soar, so we modify some of the Tcl files and keep a private copy. Then when Soar is built, the linker uses the "private" copies of these routines instead of the ones Tcl comes with. So you can pull Tcl 7.4 and Tk 4.0 off the Tcl web sites and use them when building Soar, but Soar will link in a few extended routines of its own. You should be able to add other Tcl extensions in a straightforward manner, without any concern for Soar's modifications of Tcl. In fact, Soar used to be distributed with Blt, but it was cumbersome to support building it on multiple platforms, and we weren't sure how much it was used, so we took it out. You would add Tcl extensions in the soarAppInit.c file, just as you would add them to tkAppInit.c.

Soar 7.1 uses Tcl 7.6 and is completely decoupled from the Tcl routines, since Tcl 7.6 provides support for multiple interpreters. So from Soar 7.1 on, you should be able to upgrade Soar or Tcl packages independent of each other.

From "Karen J. Coulter" (kcoulter@eecs.umich.edu), Date: Mon, 9 Jun 1997

Back to Table of Contents


(APT8) How can and should I use add-wme and remove-wme?

This is a relatively long exchange noting how to use add-wme and remove-wme, while acknowledging that it violates the PSCM (or even the NNPSCM).

Question From: Harko Verhagen (verhagen@dsv.su.se), Date: Aug. 20, 1997

I'm still working on my multiagent simulation Soar code. A problem I encounter is that it seems hard to remove information that was added with add-wme. Each agent has some Tcl code to take care of communication with the environment (including other agents). Files are used to store information that needs to be transferred. The information gets read in Tcl and is transferred to working memory with add-wme. This information may of course trigger some productions in Soar. However, the information should not persist after some processing. In Soar 5, an elaborate-state production did the trick by removing the message in a RHS action. In Soar 7, the production fires but its effect does not show in the WM phase. Using a Tcl call to remove-wme makes Soar crash. Any way around this?

Excerpt of trace:
message contains y x command A find to

=>WM: (167: I7 ^from-whom x)
=>WM: (168: I7 ^to-whom y)
=>WM: (169: I7 ^mode command)
=>WM: (170: I7 ^item |A|)
=>WM: (171: I7 ^subtask find)
=>WM: (172: I7 ^misc to)

Preference Phase

Firing warehouse*propose*operator*reception-mode

-->
(O17 ^desired D1 + [O] )
(O17 ^name reception-mode + [O] )
(S4 ^operator O17 +)

Firing warehouse*elaborate*operator*wait

-->
-
-
-

Firing reception-mode*elaborate*state*remove-receive-message

-->
(I6 ^message I7 - [O])

Firing reception-mode*terminate*operator*receive-message

(S5 ^operator O19 @)

Firing reception-mode*reject*operator*dont-receive-message-again

-->
(S5 ^operator O19 -)

Firing reception-mode*propose*operator*evaluate-move-find-yes

-->
(O21 ^misc yes + [O])
(O21 ^item |A| + [O])
(O21 ^subtask find + [O])
(O21 ^mode command + [O])
(O21 ^to-whom x + [O])
(O21 ^from-whom y + [O])
(O21 ^name send-message + [O])
(S5 ^operator O21 +)

Working Memory Phase

=>WM: (227: S5 ^operator O21 +)
=>WM: (226: O21 ^misc yes)
=>WM: (225: O21 ^item |A|)
=>WM: (224: O21 ^subtask find)
=>WM: (223: O21 ^mode command)
=>WM: (222: O21 ^to-whom x)
=>WM: (221: O21 ^from-whom y)
=>WM: (220: O21 ^name send-message)

Bob notes that it is indeed possible to shoot yourself in the foot with add-wme (just as modifying the calling stack would be a bad idea in any programming language).

Answer From: Robert Wray (wray@soartech.com), Date: Aug. 21, 1997

Soar io cleaning

(I'm cc'ing this to soar-bugs for two reasons, even though it might not be a bug per se. First, the error message that Harko reported to me and that I replicated below is somewhat misleading. As far as I can tell, the new instantiation has been added to the instantiation list and so should be available in p_node_left_removal (rete.c) for adding to the retraction list -- which is where the program aborts because it can't find a relevant instantiation. Second, because folks seem to want to add and delete WMEs from the RHS with tcl, my guess is that this problem may become frequently reported and is thus worth documenting now.)

I was able to replicate the problem you are having when my productions attempted to remove a WME that was also tested in the LHS. For instance, consider this simple example:

sp {elaborate*state*problem-space*top
   (state <s> ^superstate nil)
   -->
   (<s> ^problem-space <p>)
   (<p> ^name top)}

sp {elaborate*state*create-some-WMEs
   (state <s> ^problem-space.name top)
   -->
   ;# This WME gets timetag 7
   (<s> ^deep-structure <ds>)
   ;# This WME gets timetag 8
   (<ds> ^simple-structure *yes*)
   ;# This WME gets timetag 6
   (tcl |add-wme | <s> | ^added-via-add-wme *yes*|)}

sp {elaborate*remove-structure
   :o-support
   (state <s> ^deep-structure <ds>
              ^added-via-add-wme *yes*)
   (<ds> ^simple-structure *yes*)
   -->
   (<s> ^new-augmentation *yes*)
   (tcl | remove-wme 6|)}

If I try to remove any of the WMEs I test in the LHS of this production (e.g., timetags 6, 7, or 8), I get the problem you reported in your message, namely:

Internal error: can't find existing instantiation to retract. Soar cannot recover from this error. Aborting...

Soar (or at least the implementation in C, but I think it is fair to say the PSCM of Soar as well) assumes that WMEs do not disappear during the preference phase. In this case, because you are removing a WME directly from a tcl call in the RHS, this assumption is violated.

Soar is attempting to retract an instantiation (or, more specifically, add an instantiation to the retraction list) while in the process of firing the instantiation. In a normal situation, this simultaneous fire and retract is impossible because the architecture doesn't allow WME changes during the preference phase, just preference changes. Your production violates that architectural constraint, with the resulting error. Again, there are workarounds (and I'll describe one in a separate message). But, you really shouldn't be doing this. You should only be creating and deleting input WMEs when the input function is called in the INPUT PHASE, as I described yesterday. And you should use the preference mechanism to remove any regular Soar WMEs.

I have a suggestion for a workaround to your problem and what I think might be a better, long-term solution. I'll discuss the long-term solution first, because I think it's the best route to follow.

The reason remove-wme did not work for you is that the command requires the integer time tag value to identify the WME to be removed. It is not easy to access the timetag through productions; it would require a few tcl calls and some text parsing to accomplish this. (I haven't actually tried to do removal this way -- it may be impossible, but my guess is that it's not impossible, just very difficult.) The difficulty is not an oversight in the design of Soar -- it is purposeful. All WMEs except input WMEs should go through the decision process (evaluation of preferences). Input WMEs are added only through the I/O link. Thus, add-wme and remove-wme were not really meant to be used in the RHS. Prior to Soar 7, it was very difficult to use add-wme and remove-wme in productions; tcl has made it much easier to disregard these assumptions. (Seth Rogers covered many of the ways Tcl can violate Soar assumptions in a talk entitled "Tcl Magic: How to do things you're not supposed to do in Soar" at Soar Workshop 15.)

A long-term solution to your problem would be to re-implement your system using Soar's supported I/O mechanism, the ^io link. In this case, messages would appear on the input-link of an agent. Add-wme would still be called, but it would be called by a tcl input function rather than a RHS call. To remove the message, an operator could place a command on the output-link to remove the indicated message.

Then, via tcl code in the output function (which has access to the timetag), the message could be removed from future input with remove-wme. There is a simple example in the soar distribution (soar-io-using-tcl.tcl in the demos directory) that illustrates how to set up a tcl I/O system. Chapter 17 of the Soar Coloring Book (http://ai.eecs.umich.edu/soar/tutorial/) also covers I/O (but at a very high level).
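As a sketch of the route Bob describes (the rule and attribute names here are invented for illustration; the actual output-link wiring depends on how your Tcl I/O functions are set up), an operator application rule might place a removal command on the output-link:

sp {receive-message*apply*request-message-removal
   (state <s> ^operator <o>
              ^io.output-link <out>
              ^io.input-link.message <msg>)
   (<o> ^name receive-message)
   -->
   (<out> ^remove-message <msg>)}

A Tcl output function registered for the agent would then notice the ^remove-message command, look up the timetag it stored when it added the message, and call remove-wme during the output phase, keeping all WME surgery out of the preference phase.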

However, maybe you need a working prototype right away? Although I strongly encourage the above solution, here's a workaround for your current problem. You can't use remove-wme, for the reasons I described above. You can't use production preferences, because a WME created with the add-wme command has no preferences; it is just added to WM without going through the decision process. (I'm not familiar with Soar 5, so I don't know why the production/preference removal worked there; my understanding of Soar 7 is that WMEs created with add-wme should only be removable by remove-wme, to discourage non-PSCM WME additions and removals.) However, like all Soar WMEs, the added WME must be connected to the top state (directly or indirectly). So the workaround I'm proposing is to remove WMEs created with add-wme by removing, via preferences, the WME that the added WME was attached to.

For example, instead of attaching WMEs to the state directly, imagine that there is a place holder for messages called "message-link:"

sp {elaborate*state*message-link*bootstrap
   (state <s> ^problem-space.name top)
   -->
   (<s> ^message-link <ml>)
   (<ml> ^message <nil-for-now>)}

Now, when you want to add messages, you always add messages to the message-link:

sp {add-message-with-tcl
   (state ^problem-space.name top
    ^message-link.message <message>)
   -->
   (tcl |add-wme | <message> | this-is-a-message *yes* |)}

Then, when you want to remove a message, you just remove the message-link (and you'll also want to add a new message-link for a future message):

sp {some-operator*apply*remove-message-link*and*create-new-message-link
   (state <s> ^operator <o> ^message-link <ml>)
   (<ml> ^message <message>)
   ;# (whatever tests you want to make to determine
   ;#  if the message should be removed)
   (<o> ^name remove-message)
   -->
   (<s> ^message-link <ml> - <ml2> +)
   (<ml2> ^message <nil-for-now>)}

When this production fires, it will remove the existing message-link, which includes, as substructure, the WME you created with add-wme. Although this convention will work, it is a hack and does not represent the best way to solve the problem. But it will allow you to get your system running in a few minutes. In the long run, as I suggested before, I'd use Soar's I/O link for input and output. Hope these ideas have been useful. I'm sure others will have other ideas and additional suggestions about the ones I've described here.- <Bob>

Another user comments that this approach can be fragile, but that it is workable if done in as theoretically principled a way as possible.

Harko Verhagen wrote:

I *DO* use the io-link: add-wme is used in tcl procedures hooked to the io cycles of Soar, as described in the soar-io demo. The remove-wme is also in a tcl procedure, which is called from the RHS, and the timetags are global vars in tcl. I now also have a workaround, but it looks like a hack to me: I just don't test for the presence of the message that has to be removed, but for a flag I have on the operator. For some reason, removing the message with a structure like ^item1 - is not enough; the tcl removal procedure is also necessary to avoid looping (it makes retracting impossible).

Another user added:

I encountered the same problem some time ago. The approach I adopted is: (a) use productions to copy messages from the io-link to some other place (say, an internal situational model) in WM; (b) base all decision making on the internal situational model instead of the io-link (since it is volatile); (c) add-wme and remove-wme are safe in io procedures; (d) as soon as messages are removed from the io-link, they disappear from the situational model too. I found Bob's answer very informative, although I didn't quite get the reason why remove-wme doesn't work. It seems to me that remove-wme is quite fragile and in some cases it simply makes the system crash, even when it is called in a tcl procedure and is given the integer time tag. We have actually been warned about this in the on-line help. However, I did find it is safe when I clearly distinguished problem solving from io.

Randy Jones wrote (March 2002):

This raises another issue. Bob lists the rules of thumb I use also. However, the fact is that currently in Soar you can use remove-wme to remove something that was created by productions through preference memory. The problem is that it removes the WME, but it does not remove the preference.

This can lead to all sorts of nastiness. For example, say a rule created something you want to get rid of, and maybe you want the production to fire again to change the same attribute to have a different value. You use remove-wme to delete the old value. Then the production can fire again (maybe it fires again because it tests for the absence of the value it's creating), and it creates a preference for the new value of the attribute. You now have an attribute tie impasse. At least that's what you got in Soar 7. Soar 8 has no attribute tie impasse, so presumably the new value would show up in working memory, but I don't know what would happen with the preference for the old value.
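A hypothetical pair of WMEs and a rule sketching Randy's scenario (the names are invented; this is not from the original exchange):

;# o-supported rule that records the latest reading; it tests
;# for the absence of the value it is about to create.
sp {record*latest*reading
   :o-support
   (state <s> ^raw-reading <r>
             -^reading <r>)
   -->
   (<s> ^reading <r>)}

If (S1 ^reading old) is now deleted with remove-wme while a new ^raw-reading is present, the rule fires again and creates an acceptable preference for the new value, but the acceptable preference for the old value is still sitting in preference memory. In Soar 7 the two preferences produce an attribute tie impasse.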

Back to Table of Contents


(APT9) Frequently found user model bugs, or How can I stick beans up my nose metaphorically in Soar?

This is intended to turn into a rogue's gallery of frequently found bugs that are hard to diagnose on your own, but that are not bugs in the architecture per se.

 

Back to Table of Contents


(APT10) Why do there seem to be no preferences in preference memory for (id ^attribute value) triples acquired through C input functions?

From Randy Jones:

Working memory elements from I/O are added directly to working memory without going through Soar's preference mechanism. That's why they don't have any preferences. That's also why production rules can't make them disappear (say, with a reject preference): those WMEs just totally bypass the preference process. This is intentional. The idea is that the input link represents "current perception", and you can't use purely cognitive processes to "shut off" your current perceptions. If you want to remove things from the input link, you have to make the I/O code remove them. If you want the input link to be under the control of cognition, then your rules have Tcl tell the I/O system (via the output link) to change the values on the input link. The rules cannot do it directly.

From Karen Coulter:

C input functions add WMEs using the kernel routine add_input_wme, which surgically alters working memory directly. The preference mechanism is not part of this process. The only way to remove these input WMEs is by calling the routine remove_input_wme from your input function. add_input_wme returns the pointer to the WME you added, which you must store statically somehow (on a linked list or something) to later remove the WME with remove_input_wme. I should add the caveat that I didn't actually go back and look at Soar 6, and gave you the answer for Soar version 7 and later, but I'm fairly certain that add_input_wme and remove_input_wme were also in Soar 6.

Back to Table of Contents


(APT11) Why is "send" in Soar different from "send" in Tk, or What do I do if my Soar process comes up as 'soar2'?

[This stuff will be subject to some change in Soar 7.1 and on, I suspect. -FER] Soar's send both is and is not different from "send" in Tk. By default, Soar's send is different from Tk's. When Soar 7 is compiled with Tk and -useIPC is specified on the command line at runtime, Soar's send is the same as Tk's.

From: Karen J. Coulter (kcoulter@eecs.umich.edu), Date: Apr 11, 1997
Subject: Re: [Soar-Bugs #85] Why is "send" in Soar different from "send" in Tk?

To support multiple agents, Soar 7 takes care of all the overhead of creating and maintaining multiple interpreters (which wasn't supported by Tcl < 7.6). Soar 7 creates a flat space of interpreters -- no single interpreter is the supervisor or owner of another. To allow for communication among agents, Soar uses the "send" command. Tcl itself doesn't have a "send" command; it comes from Tk. But we didn't want to require Tk (i.e., X Windows) in order to run Soar. So Karl wrote an X-server-less version of "send" that could be used to support multiple agents in Soar when Tk was not compiled in. This version of send works only for agents/interps within the same process.

Tk's send function registers with the X server to use IPC (interprocess communication). It tries to register using the name of the application. If, when Tk/send registers, the X server already has a process by that name, then the name gets a number appended to it so that the name will be unique, and then the registration can be done. Then Soar/Tk/send also changes the name of the interpreter to match the process name that was registered by the X server. Through the X server, interpreters can communicate with other interpreters in other processes.

What was happening to novice and not-so-novice Soar users who wanted to test applications and run multiple copies on the same machine was this: start the first copy, soar.soar gets sourced to start the application, and "soar" gets registered with the X server (invisible to the Soar user). Everything (GUI etc.) comes up fine. Start the 2nd copy: the soar interp gets renamed to soar2, no soar2.soar file is found, so no file gets sourced and the application doesn't run as expected. This was happening to nearly all the Soar developers at Michigan. This was also happening for a Cog Sci class [at Nottingham] all trying to run the same application (subtraction world) to explore theories of learning. It was VERY confusing. But we had this nice little workaround, since Karl had written the "send" command for Tcl-only Soar that avoided registering with the X server.

Since it was anticipated that most Soar users would not be starting out wanting IPCs ("what's that?"), we made the default method for running Soar use the IPC-less version of send. For those users savvy enough to run multiple processes and want to communicate among those processes, we added the commandline flag -useIPC. The doc could be more explicit about how "send" works in Soar, but the -useIPC flag is in the help pages. And I still believe this was the right way to go. Otherwise we would have had to tell users to have as many soar[n].soar files as they ever thought they would need to support running multiple copies of an application if Tk was compiled in their version of Soar. OR we tell the users who need IPCs (far fewer in number), oh yes, you need to specify -useIPC when you start up soar.

Now that Tcl 7.6 supports multiple interpreters, Soar no longer has to manage the overhead, we no longer have to create our own methods for communicating among agents, and we won't be shadowing Tcl or Tk commands. It also looks like Soar won't be autoloading any 'interpname'.soar files, so this whole problem goes away (only to be replaced by something else, no doubt). And Soar programmers who want to communicate with other agents and processes will have to read the Tcl doc to figure out how to do it =8-O ;)

Back to Table of Contents


(APT12) How to get Soar to talk to new Tk displays?