Models of information processing

Process models of information processing

HCI context for use by designers

Integrate important psychological factors, provide a single environment for addressing multiple tasks, instead of 'one theory per task'

1. Model Human Processor (MHP: Card, Moran and Newell, 1983)

2. Norman's Gulfs of Evaluation and Execution (POET)

3. Interacting Cognitive Subsystems (ICS: Barnard)

Focus on the processes by which information is processed in a task context

4. Programmable User Models (PUMs: Young & Green)

NB. PUMs also consider mechanisms, knowledge and address issues of learning through experience (do not yet deal with knowledge acquisition)


Model Human Processor (MHP)

Card, Moran and Newell (1983)

Believed applied information processing psychology should be based on task analysis, calculation and approximation

The MHP offers an integrated description of psychological knowledge about human performance (relevant to HCI)

Oversimplification, but the aim was to offer a model around which discussion can centre


(1) a set of memories and processors together with (2) a set of principles, the "principles of operation"

Three interacting subsystems:

(1) perceptual system

(2) the motor system

(3) the cognitive system

each with their own memories and processors

the perceptual processor (consists of sensors and associated buffer memories, the most important being a Visual Image Store and an Auditory Image Store, which hold the output of the sensory system while it is being symbolically coded)

the cognitive processor (receives symbolically coded information from the sensory image stores into its Working Memory and uses previously stored information in Long-Term Memory to make decisions about how to respond)

the motor processor (carries out the specified response)

For some tasks the human being responds like a serial processor (e.g. pressing a key in response to a light); for other tasks (like typing, reading, simultaneous translation) integrated, parallel operation of the three subsystems is possible, in the manner of three pipe-lined processors: information flows continuously from input to output with a characteristically short time lag, showing that all three processors are working simultaneously.

Memories are described by a few parameters: storage capacity in items (mu), the decay time of an item (alpha), and the main code type, i.e. physical, acoustic, visual or semantic (gamma). The most important parameter of a processor is its cycle time (pi).

The analyst generates time predictions by analysing a task into the constituent operations executed by the three subsystems, and from there calculates how long the task will take and how much processing is involved. This can be done within three bands of performance: Fastman, Slowman and Middleman, thus allowing predictions at least at the central and extreme points along the behavioural continuum.
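The three performance bands can be sketched as a small calculation. The cycle-time values below are the nominal Fastman/Middleman/Slowman figures commonly quoted from Card, Moran and Newell (1983); treat them as illustrative assumptions. The example task is the classic one of pressing a key as soon as a light appears, which takes one cycle of each processor:

```python
# Nominal MHP processor cycle times in msec, as (Fastman, Middleman, Slowman).
# Figures are the commonly quoted values from Card, Moran & Newell (1983);
# treat them as assumptions for the sketch.
CYCLE_TIMES_MSEC = {
    "perceptual": (50, 100, 200),
    "cognitive":  (25,  70, 170),
    "motor":      (30,  70, 100),
}

def simple_reaction_time():
    """Predict the time to press a key when a light appears:
    one cycle each of the perceptual, cognitive and motor processors."""
    return tuple(sum(band[i] for band in CYCLE_TIMES_MSEC.values())
                 for i in range(3))

fast, middle, slow = simple_reaction_time()  # 105, 240 and 470 msec
```

The spread from 105 to 470 msec is the point of the three bands: a design that is acceptable for Middleman may still be too slow for Slowman.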

Using the Model Human Processor to make predictions

parameters for each operation offer averaged time to carry out some action

In analysing a task and making time predictions, consider the following worked examples.

MHP: Example 1: perception, visual

Compute the frame rate at which an animated image on a video display must be refreshed to give the illusion of movement

Consider: cycle time of the Perceptual Processor: closely related images which appear nearer in time than the processing time will be fused into a single image. Therefore

frame rate > 1/cycle time of processor = 1/(100 msec/frame)

= 10 frames/second

The frame rate should be faster than this. An upper bound on how fast the rate needs to be can be found by redoing the calculation for Fastman:

max frame rate for fusion = 1/(50 msec/frame)

= 20 frames/sec
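The fusion calculation is just a reciprocal of the perceptual processor's cycle time; a minimal sketch (the function name and unit handling are mine):

```python
def fusion_frame_rate(cycle_time_msec):
    """Minimum frame rate (frames/sec) at which successive images fuse
    into apparent motion, given a perceptual processor cycle time."""
    return 1000.0 / cycle_time_msec

middleman = fusion_frame_rate(100)  # 10 frames/sec
fastman = fusion_frame_rate(50)     # 20 frames/sec
```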

MHP: Example 2: motor skills, calculator

Motor skill - calculator

On a certain pocket calculator, the heavily used gold F button, employed to shift the meaning of keys, is located on the top row. How much time would be saved if it were located in a more convenient position just above the numbers?

Assume that the position of the 5 button is a fair representation of where the hand is just before pressing the F button. From the diagram, the distance from the 5 button to the present F button is 2 inches; to the proposed location, 1 inch. The button is .25 inch wide. By a version of Fitts's Law, movement time is Im log2 (D/S + .5), where Im is expected to be about 100 msec/bit. So the difference in time between the two locations is

T = 100 [log2 (2/.25 + .5) - log2 (1/.25 + .5)]

= 100(3.09 - 2.17)

≈ 90 msec

Note redesign entails trade-offs!
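The comparison above can be reproduced directly. This is a sketch of the Welford variant of Fitts's Law used in the text, with Im = 100 msec/bit as given; the function name is mine:

```python
import math

def movement_time_msec(distance, target_size, im_msec_per_bit=100.0):
    """Welford variant of Fitts's Law as used in the MHP examples:
    T = Im * log2(D/S + 0.5)."""
    return im_msec_per_bit * math.log2(distance / target_size + 0.5)

# Time saved by moving the F button from 2 inches away to 1 inch away,
# with a button .25 inch wide: roughly 90 msec per press.
saving = movement_time_msec(2, 0.25) - movement_time_msec(1, 0.25)
```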

MHP: Example 3: motor skills, typing behaviour

A manufacturer is considering whether to use an alphabetic keyboard on his small business computer system. Among several factors influencing his decision is the question of whether experienced users will find the keyboard slower for touch-typing than the standard Sholes (QWERTY) keyboard arrangement. What is the relative typing speed for expert users on the two keyboards?

Typing rate (Sholes) = 152 msec/keystroke (72 words/min)

Typing rate (alphabetic) = 164 msec/keystroke (66.5 words/min)
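The keystroke rates convert to words per minute assuming the conventional 5.5 characters per word; the conversion factor is my assumption, though it reproduces the figures above:

```python
def words_per_minute(msec_per_keystroke, chars_per_word=5.5):
    """Convert an inter-keystroke interval into a typing speed in wpm.
    Assumes 5.5 characters (keystrokes) per word, a common convention."""
    return 60000.0 / msec_per_keystroke / chars_per_word

sholes = words_per_minute(152)      # about 72 wpm
alphabetic = words_per_minute(164)  # about 66.5 wpm
```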


MHP: Example 4: cognitive, working memory

A programmer is told verbally the one-syllable file names of a dozen files to load into his programming system. Assuming all the names are arbitrary, in which order should the programmer write down the names so that he remembers the greatest number of them (i.e. has to ask for the fewest to be repeated)?

Twelve arbitrary file names means the programmer has to remember 12 chunks (assuming one chunk per name), which is larger than the storage capacity of Working Memory, so some of the file names will be forgotten. The act of trying to recall the file names will add new items to Working Memory, interfering with the previous names. The items likely to be in Working Memory but not yet in Long-Term Memory are those from the end of the list. If the user tries to recall the names from the end of the list first, he can snatch some of these from Working Memory before they are displaced. The probability of recalling the first names will not be affected, since they are in Long-Term Memory. Thus the programmer should recall the last names first and then the others.

Model Human Processor: conclusions


Norman's Theory of action: Gulfs of Evaluation and Execution

An approximate theory of action which distinguishes among different stages of activity: not necessarily always used, nor applied in that order, but different kinds of activity that appear to capture the critical aspects of doing things. Useful for analysing systems and guiding design.

There is a discrepancy between psychological and system terms (cf. Moran's ETIT analysis, which maps the external task specification onto the internal one). This gives rise to two gulfs.

There is a discrepancy between the person's psychologically expressed goals and the physical controls and variables of the task. The person starts with goals and intentions. These are psychological variables: they exist in the mind of the person and they relate directly to the needs and concerns of the person. However, the task is to be performed on a physical system, with physical mechanisms to be manipulated, resulting in changes to the physical variables and system state. Thus, the person must interpret the physical variables in terms relevant to the psychological goals and must translate the psychological intentions into physical actions upon the mechanisms. There must therefore be a stage of interpretation that relates physical and psychological variables, as well as functions that relate the manipulation of the physical variables to the resulting change in physical state.

Stages of user activities (Norman's seven stages):

  1. establishing the goal
  2. forming the intention
  3. specifying the action sequence
  4. executing the action
  5. perceiving the system state
  6. interpreting the system state
  7. evaluating the system state with respect to the goals and intentions


Example of a letter drafted in emacs (an editor), which will be passed through a formatting package like troff. It doesn't look right: the intention is "improve the appearance of the letter". Call this the first intention, intention1. Note that this intention gives little hint of how the task is to be accomplished. Some problem solving is required, perhaps ending with intention2: "change the indented paragraphs to block paragraphs". To do this requires intention3: "change the occurrences of .pp in the source code for the letter to .sp". This in turn requires the person to generate an action sequence appropriate for the text editor, and then finally to execute the actions on the computer keyboard. Now, to evaluate the results of the operation requires still further operations, including generation of a fourth intention, intention4: "format the file" (in order to see whether intention2 and intention1 were satisfied).


Bridge the gap between the goal and the system. Sometimes this will mean making things more visible, while avoiding clutter. Empirical work will help us discover what actually works, but it is good to have a basic set of guidelines.


Principles derived from Norman's analysis:

  1. use both the knowledge in the world and the knowledge in the head
  2. simplify the structue of tasks
  3. make things visible: bridge the gulfs of Execution and Evaluation
  4. get the mappings right
  5. exploit the power of constraints, both natural and artificial
  6. design for error
  7. when all else fails, standardise

(and, inverted, Norman's rules for deliberately making things difficult:)

  8. hide critical components: make things invisible
  9. use unnatural mappings for the execution side of the action cycle, so that the relationship of the controls to the things being controlled is inappropriate or haphazard
  10. make the action physically impossible to do
  11. require precise timing and physical manipulation
  12. do not give any feedback
  13. use unnatural mappings for the evaluation side of the action cycle, so that the system state is difficult to interpret


Interacting Cognitive Subsystems (Barnard, 1987)

Goes for breadth rather than depth, without attempting to account for every low level of detail

being developed to elaborate on the job that the MHP does; this is to produce substantive qualitative behavioural predictions on the basis of a similarly qualitative analysis.

ICS represents the nature of processing more strongly and in a less parameterised fashion than MHP

able to capture wider variations in behaviour due to performance factors, including some classes of errors

requires less input effort than MHP, as a version has been implemented in an expert system shell which actually elicits the task analysis from the analyst (cf. knowledge elicitation)



What is the point of these models?

Which one you choose depends on what you want