Penn State's
School of Information Sciences and Technology
Our aim is to understand more clearly how learning occurs. We do this by creating and testing cognitive models that learn and, as a necessary step, by making model creation, explanation, evaluation, and improvement more routine. Better theories of learning will help us design better computer interfaces (a favorite application and a worthwhile goal in its own right) and instructional materials, among a host of other applications. There are also some digressions, which are related in one way or another.
Some current projects in the Applied Cognitive Science Lab
Other projects include the development of a user-model testing environment and the use of model-fit optimization to explore the space of developmental psychology theories.
We have been working for several years on how to tie models, particularly Soar and ACT-R models, to simulations. This is necessary to understand how perception influences behavior and to provide a richer set of stimuli to our models. Currently, we are having the eye and hand implemented in a commercially available graphical rapid-prototyping language (SLGMS). In 1998 we expect to use it to allow a model to see and interact with an interface. We have learned that perception influences cognition not only by slowing it down in a general way but also, for example, by restricting its behavior: a model cannot know everything at once.
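The idea that a simulated eye restricts what a model can know at any moment can be sketched in a few lines. This is an illustrative Python sketch, not the lab's SLGMS implementation; the fovea radius, object layout, and function names are all assumptions made for the example.

```python
# Minimal sketch (not the lab's SLGMS eye/hand): only objects within the
# fovea around the current fixation are visible, so information reaches
# the model serially, one fixation at a time, rather than all at once.

FOVEA_RADIUS = 2.0  # hypothetical radius, in arbitrary display units

def visible(objects, fixation, radius=FOVEA_RADIUS):
    """Return labels of objects within the fovea centered on `fixation`."""
    fx, fy = fixation
    return [label for label, (x, y) in objects.items()
            if (x - fx) ** 2 + (y - fy) ** 2 <= radius ** 2]

# A toy display of labeled interface objects at (x, y) positions.
display = {"button": (0, 0), "menu": (5, 0), "icon": (5, 1)}

seen = set()
for fixation in [(0, 0), (5, 0)]:   # each gaze shift costs the model time
    seen.update(visible(display, fixation))

# After two fixations the model has seen everything,
# but it never saw everything at once.
```

The point of the sketch is the constraint itself: any model wired to such an eye must interleave looking and thinking, which is exactly the kind of influence of perception on cognition described above.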
Cognitive models tend not to have hysteresis in their behavior; that is, for most models, behavior does not depend much on what the model has done before. For example, models tend not to give up, they tend not to fatigue, and they tend not to be connected to, let alone sensitive to, their physical environment.
Ritter, F. E., & Avraamides, M. N. (2000). Steps towards including behavioral moderators in human performance models in synthetic environments (Tech. Report No. 2000-1). Applied Cognitive Science Lab, School of Information Sciences and Technology, Penn State. [view .pdf file]
The source code for the ACT-R/A/C model can be obtained here.
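One simple way to give a model history-dependent behavior is a fatigue moderator that slows responses as trials accumulate. The sketch below is illustrative only; the linear scaling rule and both parameter values are assumptions for the example, not taken from the ACT-R/A/C model or the technical report above.

```python
# Hedged sketch of hysteresis via a fatigue moderator: response time grows
# with the number of completed trials, so current behavior depends on the
# model's past behavior. Parameters and the linear rule are illustrative.

BASE_RT = 0.5        # hypothetical base response time, in seconds
FATIGUE_RATE = 0.01  # hypothetical fractional slowdown per completed trial

def response_time(trials_completed):
    """Response time after `trials_completed` prior trials."""
    return BASE_RT * (1.0 + FATIGUE_RATE * trials_completed)

fresh = response_time(0)    # a fresh model: 0.5 s
tired = response_time(100)  # the same model after 100 trials: 1.0 s
```

A model with no such moderator produces the same response time on trial 1 and trial 100; adding the moderator is the minimal change that makes behavior depend on the model's own history.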
We are helping Soar Technology design and test an interface to make the mental state of a cognitive model (its situation awareness) more visible to experts asked to understand and extend the model. The panel (shown below) should also make the internal state of the model more visible to analysts (expert system programmers), which will help develop it. We are doing this by creating a task analysis of what users need to know in order to understand the model. This task analysis is driven by our previous work with Soar and by interviews with local experts in HCI, group work, military operations, and geographic information systems. Funded by Soar Technology and the Office of Naval Research until September 2001.
We've released a spreadsheet for psychology called Dismal. It runs as part of GNU Emacs. Details, including manuals and the code, are available at ritter.ist.psu.edu/dismal/dismal.html. The spreadsheet provides keystroke logs, commands to assist in semi-automatically coding protocols, and functions to automatically align columns based on regular expressions. The automatic alignment is useful for comparing model predictions with human data.
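The idea behind regex-based column alignment can be illustrated with a short sketch. This is Python rather than Dismal's Emacs Lisp, and the function name, pattern, and data are assumptions for the example; Dismal's own alignment is more general than this.

```python
# Illustrative sketch (not Dismal's implementation) of aligning two columns
# by regular expression: rows of model predictions are paired with rows of
# observed data whenever both match the same pattern.
import re

def align(model_rows, data_rows, pattern):
    """Pair rows from two columns whose text matches `pattern`, in order."""
    regex = re.compile(pattern)
    matched_model = [row for row in model_rows if regex.search(row)]
    matched_data = [row for row in data_rows if regex.search(row)]
    return list(zip(matched_model, matched_data))

model = ["press-key A", "wait", "press-key B"]
data = ["subject pressed A", "pause 200ms", "subject pressed B"]

pairs = align(model, data, r"[Pp]ress")
# pairs -> [("press-key A", "subject pressed A"),
#           ("press-key B", "subject pressed B")]
```

Once prediction and data rows are paired up this way, computing fit measures (e.g., how often the model's action matches the subject's) becomes a simple pass over the aligned pairs.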
We maintain the Soar Frequently Asked Questions list (or FAQ). Soar is an AI programming language used as a cognitive modeling language. The Soar FAQ is large enough that it has two sections: the frequently asked questions and the less frequently asked questions. The less frequently asked questions are more obscure and cover more esoteric topics. This has been ongoing work since about 1995. See the Soar FAQ and the Soar less-FAQ (choose the latest ones in the directory).
The project seeks to bring learners of foreign languages into extensive and intensive contact with native users of the language via telecommunication, in order to create contexts for interactive communication and task-based collaboration. The underlying rationale is to set up partner classes in English, French, German, and Spanish and to provide each class with access to, and engagement with, a community of students who are native speakers of the "foreign" language being studied by the other class.
The project's interest for IST's ACS/HCI lab is that it helps us gain deeper insight into exactly what students are doing as they engage in chat, email, and other synchronous and asynchronous communication, especially in the context of learning a foreign language. By using the eye tracker, video camera, and keystroke capture, we will be able to accurately record students' behaviors, including how they negotiate among multiple texts (or textual interfaces) during this language-learning and communication process.