HT Tutorial Exercise 8 - Create an Op-Implementation Problem Space


Watch an operator no-change impasse
This exercise involves adding to the rules in the file ht-imps9.soar. Before you start, make a copy of the file -- for example, by clicking on the filename just given and using the Save As... facility of your browser -- and call it something like "myht-imps9.soar". Continue working with a Soar that has the rules from "ht.s8" or ht-imps9.soar loaded (or start a fresh Soar and reload the appropriate file). Now:

excise   ht*apply-op*eat

Now run the model for several decision cycles and observe the new impasse that appears: an operator no-change impasse. This impasse occurs when an operator has been selected but applying it leads to no change in the state. You can compare your trace with Trace 8. The impasse can be seen in much finer detail by setting the watch level to 3 or 4.
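At the Soar prompt, the steps above might look like the following (command syntax varies somewhat across Soar versions, so treat this as a sketch rather than an exact transcript):

```
soar> excise ht*apply-op*eat
soar> watch 3
soar> run 6
```

Raising the watch level makes Soar print production firings and working-memory changes, which lets you see exactly where the operator is selected and then fails to produce any change.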


Creating an operator implementation problem space to solve the impasse
In order to resolve this impasse, you will need to write several productions (all straightforward) that create a problem space to implement the operator. Write them in the file "myht-imps9.soar" (or whatever you called your copy). Productions will be needed to:
  • Create and propose an 'eating' problem space, somewhat like the way ht*propose-space*ht and ht*impasse*resolution propose their spaces.


  • Propose, in this case, an exceedingly simple operator to implement the superstate's operator (eat). If you print a state, you will see how it keeps track of its superstate. The rule will look somewhat like ht*propose-op*eat.


  • Apply the sub-problem-space eating operator to the superstate (like ht*apply*eat, except that it modifies the superstate instead of the state).
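The three productions might be sketched as follows. This is a hypothetical sketch only: the production names follow the tutorial's ht* naming convention, but the attribute names (such as ^fed) and the exact condition structure are assumptions, and the tutorial's own rules (ht*propose-space*ht, ht*propose-op*eat, ht*apply*eat) should be used as the real templates:

```soar
# Sketch 1: propose the 'eating' problem space when the
# operator no-change impasse arises (attribute tests are assumed).
sp {eating*propose-space*eating
   (state <s> ^impasse no-change ^attribute operator ^superstate <ss>)
   -->
   (<s> ^name eating)}

# Sketch 2: propose a simple operator that implements the
# superstate's eat operator (names here are hypothetical).
sp {eating*propose-op*do-eat
   (state <s> ^name eating ^superstate <ss>)
   (<ss> ^operator <so>)
   (<so> ^name eat)
   -->
   (<s> ^operator <o> +)
   (<o> ^name do-eat)}

# Sketch 3: apply the operator to the SUPERstate, not the local
# state; the ^fed attribute is an invented placeholder.
sp {eating*apply-op*do-eat
   (state <s> ^name eating ^operator <o> ^superstate <ss>)
   (<o> ^name do-eat)
   (<ss> ^fed no)
   -->
   (<ss> ^fed no - ^fed yes +)}
```

Because the third production changes the superstate, it resolves the impasse, and chunking can summarize the subgoal processing into a single rule at the top level.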

When you've loaded and run your code, you should get a chunk that looks a lot like ht*apply-op*eat.


Follow up questions
  1.  Does chunking just provide speedup learning? That is, with chunking, does the agent only get faster, not smarter?


  2. Does chunking just provide deductive closure? That is, can chunking learn something new that is not already implied by the existing rules?


  3. Could chunking also provide inductive closure? That is, could chunking learn everything it is possible to learn given a representation?


Return to main page of: Introduction to Psychological Soar Tutorial
Program Listings
PST Soar Help