2. Learning is defined as a relatively permanent
change in behavior as a result of experience.
A reflex is an immediate response to some
stimulus, over which we have little, if any,
control.
Sensitization refers to an increase in the
frequency or amplitude of some response to a
stimulus as a consequence of presentation of
either that stimulus or, more frequently, some
other stimulus.
PSYCHOPEDIA
3. Conditioning is the basic type of multiple
contingency learning. Assuming that higher
mental processes are an assemblage of simple
conditioned responses, behaviorists tried to study
learning in as controlled an environment as
possible. Most of the conditioning experiments
were done on animals where better control was
possible.
As a result of these experiments, they postulated
two basic types of conditioning – classical and
instrumental.
4. An animal is able to detect a stimulus and then to
predict what is likely to happen, simply because it
has happened several times before. The animal
associates two events that occur together.
If you have pets and you feed them with canned
food, what happens when you start the can opener?
Sure, the animals come running even if you are
opening a can of green beans. They have
associated the sound of the opener with their food.
5. Classical conditioning was pioneered by a physiologist in the
beginning of the twentieth century – Ivan Pavlov (1849 - 1936).
He was a Russian physiologist who was awarded the Nobel Prize
for his investigation of glandular and neural factors of digestion.
Pavlov developed an apparatus that made it possible to collect
and measure the amount of saliva secreted by a dog under various
conditions of feeding. A calibrated glass tube was inserted through
the dog’s cheek and he was put in a harness in a relatively isolated
room with recording devices outside.
Pavlov’s striking discovery was that the animal gave the salivary
response even before the food reached its mouth, e.g., upon
seeing the food dish or at the approach of the attendant.
7. The most famous experiment by Watson is his
conditioning of Little Albert. Albert was a small
child who was conditioned to fear a white rat because
the rat was again and again presented together with a
loud bang. Eventually, the fear reaction naturally
given to the loud bang was also given to the
rat. Albert also generalized this reaction to
rabbits and other white furry objects, so much so that he also
started fearing his mother’s white fur coat even
when his mother was wearing it.
8. There are four essential elements of the
situation in classical conditioning that need
description:
US – The unconditional stimulus is a natural,
innate stimulus that reliably elicits a response
without prior training. Experimenters do their
best to control the past history of the animal to
eliminate or control learning that takes place
before coming to the laboratory.
9. UR – The unconditional response is the response
elicited by the US. Often it is a highly reflexive
response, one that happens quickly and
automatically whenever the US occurs.
CS – The conditional stimulus is the neutral
stimulus that comes to evoke a response by
being paired with the US. It has two essential
qualities. Firstly, it must be within the sensory
range of the organism, i.e., it must be audible,
visible, tastable, and so on. Secondly, at the
outset of the experiment, it should not produce a
response that resembles the UR.
10. CR – The conditional response is the response
that arises due to the pairing of the CS and the
US.
The CR is a preparatory or anticipatory response,
whereas the UR is a consummatory response.
The CR may be a thought, whereas the UR is a motor
response.
11. Acquisition – Repeated pairing of the CS and the US
increases the strength of the connection until a point is
reached where no further observable gain is produced. If the
test phase is introduced, it is seen that the CR is given to the
CS alone. There are two important determinants of
acquisition:
Pairing schedule – The CS and the US may be paired on each
trial in a continuous pairing schedule; or they may be paired
only on some of the trials in an intermittent schedule. Ross
(1959), studying human eyelid conditioning to a light flash,
found that an intermittent schedule reduced the final level of
performance to the CS as compared to continuous pairing.
More time is also taken for acquisition in the intermittent
schedule. However, it also requires a greater time for
extinction (Hartman and Grant, 1960).
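The growth of response strength toward an asymptote, and the difference between continuous and intermittent pairing, can be sketched with a simple associative-strength update. This is a toy model; the learning rate and asymptote are illustrative assumptions, not values from Ross’s data:

```python
# Toy acquisition model (illustrative, Rescorla-Wagner style update):
# each CS-US pairing moves associative strength toward an asymptote.
def acquire(n_trials, paired, learning_rate=0.3, asymptote=1.0):
    strength = 0.0
    for trial in range(n_trials):
        if paired(trial):                       # CS and US presented together
            strength += learning_rate * (asymptote - strength)
    return strength

continuous = acquire(20, paired=lambda t: True)          # pairing on every trial
intermittent = acquire(20, paired=lambda t: t % 2 == 0)  # pairing on half the trials
# continuous ends up higher than intermittent, mirroring the lower
# final performance Ross (1959) observed under intermittent pairing.
```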
12. Temporal relations or time factor – There is an
optimum time between the presentation of the CS
and the US. If the time between the bell and food
is too long or too short, the subject takes too
many trials for acquisition. Usually, an interval of
0.5 seconds is used. But the optimum interval
depends on the stimuli and responses involved.
Garcia and Ervin (1968) showed that rats could
learn taste aversion even when the CS – US
interval was several hours. Backward
conditioning, i.e., presentation of the food before the
bell, generally does not lead to acquisition.
13. Extinction – The presentation of the CS without the
US results in a progressive decrease in the response.
The dog no longer salivates at the sound of the bell if
the bell has been rung a number of times without the
food being given. The omission of the US is a
necessary and sufficient condition for extinction.
However, extinction is not a reversal of the acquisition
process that simply reduces the
strength of the association between the CS and the US or
the CS and the CR. Phenomena such as spontaneous
recovery, external inhibition, and disinhibition led
Pavlov and others to the conclusion that extinction
occurs due to a distinct inhibitory process in the
nervous system.
14. Spontaneous recovery – One source of evidence that
extinction is not a simple reversal of prior conditioning is the
phenomenon of spontaneous recovery. It refers to the
recovery of response strength of the CR that takes place after
a rest pause during extinction trials. Pavlov found that a rest
period of as little as 20 minutes was enough to restore the
salivary response to a considerable extent. The longer the
delay following extinction, the greater the amount of
spontaneous recovery, up to a maximum of about 50% of the
response strength after a day’s rest. Even if total extinction
has taken place, it does not mean that the CR is totally destroyed.
If, from the next trial, the US is again paired with the CS, the
animal takes fewer trials for conditioning than
it took initially for learning. Such savings are further
evidence that extinction does not destroy the original learning.
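One way to see why a rest pause restores the CR is to model extinction as a separate inhibitory process layered on top of an intact excitatory association. This is an assumption for illustration, not Pavlov’s own formulation, and all the numbers are arbitrary:

```python
# Toy model: extinction builds inhibition I against an intact excitatory
# strength V; a rest pause lets I decay, so the net CR = V - I partially
# returns without any retraining.
def extinguish(v, n_trials, rate=0.25):
    inhibition = 0.0
    for _ in range(n_trials):                  # CS presented without the US
        inhibition += rate * (v - inhibition)  # inhibition grows toward V
    return inhibition

V = 1.0                          # excitatory strength left intact by extinction
I = extinguish(V, 12)            # net CR = V - I is now close to zero
I_after_rest = I * 0.5           # rest pause: inhibition decays (illustrative factor)
recovered_cr = V - I_after_rest  # a sizable CR reappears after the rest
```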
15. External inhibition – Pavlov’s students often
found that after setting up a CR in a dog, they
could not show it to Pavlov. This was because
the presence of any extraneous stimulus
inhibited the conditioned response. Pavlov himself,
being an extraneous stimulus in the conditioning
situation, disrupted the dog’s CR. This
phenomenon is called external inhibition.
16. Disinhibition – An extraneous stimulus presented
during the extinction of a CR tends to reduce the
extent of extinction and to restore to some extent
the performance of the CR. Pavlov suggested that
both external inhibition and disinhibition have a
common underlying mechanism because both
have the common action of partially reversing or
counteracting the ongoing learning process,
whether it is acquisition or extinction. However,
the nature or existence of this mechanism is far
from clear.
17. Conditioned inhibition: This phenomenon is of great interest
to many psychologists. It was originally demonstrated in Pavlov’s
laboratory: in one experiment, an automobile horn served as
the CS for a salivary CR. After conditioning to the horn had
been established, trials were introduced in which the horn
was accompanied by the ticking of a metronome and the
food US was withheld. After a number of such trials, the
salivary CR to the compound CS decreased, though the horn
CS alone could still evoke salivation. Thus a conditioned
inhibitor prevents the performance of the CR. Pavlov also
demonstrated that the conditioned inhibitor will function as
an inhibitor of the CR even when it is paired with other CS
associated with the same CR though it had never itself been
paired with any of these CS. In other words it will inhibit the
response in any and all situations.
18. Stimulus generalization – In the initial phases
of conditioning, the organism responds not
only to the exact CS but also to a variety of
stimuli similar to the CS. However, the
response is greatest to the CS and
progressively less to stimuli that are
more and more dissimilar to the actual CS. A
plot of the strength of CR as a function of
distance of the test CS from the original CS is
known as the stimulus generalization gradient.
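Such a gradient can be sketched numerically. The Gaussian falloff, the width value, and the 800-unit original CS below are assumptions for illustration, not an empirical law:

```python
import math

# Illustrative generalization gradient: CR strength is greatest at the
# original CS and falls off with distance (Gaussian shape assumed).
def cr_strength(test_cs, original_cs, width=100.0):
    distance = abs(test_cs - original_cs)
    return math.exp(-(distance / width) ** 2)

# e.g. test stimuli around a hypothetical original CS of 800 units
gradient = {x: cr_strength(x, 800) for x in (600, 700, 800, 900, 1000)}
```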
19. Discrimination – As conditioning proceeds, the range of
stimuli to which the subject responds becomes progressively
reduced. After a while, an animal conditioned to a tone of 800
cycles per second responds only to that stimulus and not to any
other frequency of the tone.
Higher order conditioning – Once a CR has been established,
the CS may be used instead of the US and may be paired with
another CS for the purpose of higher order conditioning. E.g.
When the dog has learnt to salivate to the bell, he can be
further trained to salivate to a light which is repeatedly
paired with the bell. Generally, higher order conditioning
beyond the third or the fourth order is difficult to establish
in an experimental situation.
20. Summation – Two stimuli that have been conditioned separately to
the same response tend to evoke a larger response if they are
presented together. E.g. If a dog is conditioned separately to light
and to bell and if both light and bell are presented together, they
evoke a greater salivary response than the response they evoke
when presented alone.
Overshadowing – Sometimes, if two stimuli are presented together,
one stimulus overshadows the other stimulus due to innate
tendencies of the animal i.e. the animal gives a greater response to
one stimulus than the other or he is conditioned more easily to
one stimulus and not conditioned to the other stimulus. Pavlov
himself felt that the salience due to intensity of the stimuli was
responsible for overshadowing. Recent researchers however feel
that novelty, predictive value of the stimulus or innate tendencies
of the organism determine which stimulus will overshadow the
other.
21. Blocking – Blocking is learnt overshadowing i.e. one CS becomes
more important than the other CS because the organism has been
initially trained with the first stimulus. Kamin (1969) was the first
to demonstrate blocking in the laboratory, arguing that the
blocked stimulus (CS2) never attains stimulus control because it is
redundant or irrelevant to the organism.
Sensory preconditioning: When two stimuli CS1 and CS2 are paired
together before conditioning of CS1 with the US takes place, it is
found that the response acquired by CS1 is also given to CS2. This is
evidence for sensory preconditioning. Rizley and Rescorla (1972)
point out that sensory preconditioning is far more successful than
first order backward conditioning, and that it seems to be
restricted to higher order animals, and cannot be demonstrated
with lower vertebrate species such as fish and amphibians.
22. Configuring – Configuring implies conditioning with a
complex CS so that only the total configuration evokes the CR,
while a part of the CS alone does not. There are
two procedures to obtain configuring. The first is differential
contrasting that involves pairing the compound CS with the
US and presenting each component without the US on other
trials. The second is prolonged conditioning of the compound
CS to the CR without contrasting. Configuring has been
demonstrated for both simultaneously present components
and successively presented components, though the latter is
more difficult. It is also directly related to size and
complexity of the brain, higher animals showing more rapid
configuring than lower animals, with the general conclusion
that fish and amphibians do not configure (Razran, 1971).
23. Counter conditioning – This implies substituting the
opposite response for the initial response. This
frequently happens in clinical situations. Perhaps
the best example is the pioneering work of
Watson and Rayner (1920), who used a loud bang as
a US and a white rat as the CS to condition a child to
fear the rat. Later, such a conditioned fear was counter
conditioned by associating the same kind of CS with the
opposite response of relaxation (Jones, 1924).
Wolpe (1958) uses counter conditioning in his
therapy Systematic Desensitization in which the
relaxation response is substituted for the anxiety
response to various phobic stimuli or anxiety evoking
situations.
24. Instrumental conditioning is the kind of
conditioning that allows the organism to act
upon and change the environment. According
to Gagne (1977) the term instrumental has
two advantages: (1) it emphasizes the precise,
skilled nature of the responses involved, as in
“using instruments” (2) it implies that the
learnt connection is instrumental in satisfying
some motive. The response is instrumental in
attaining the reinforcement.
25. Instrumental conditioning occurs because the
reinforcement is linked to the response.
Instrumental conditioning is also called
stimulus response learning or trial and error
learning. Skinner’s term for this kind of
conditioning is “operant conditioning”
emphasizing the fact that the organism has to
operate on the environment to get the
reinforcement.
26. Instrumental conditioning is the process by which
many of us train our pets. A dog is rewarded with a
biscuit when he stands on his hind legs, and therefore
repeats this behavior. However, he is punished with
a whack of a newspaper on his snout if he tries to
climb onto the bed.
It was Skinner (1904 – 1995) who explicitly
distinguished between respondent behavior and
operant behavior and consequently two kinds of
conditioning:
Type S conditioning
Type R conditioning
27. Examples of instrumental conditioning with
negative reinforcers are found in the studies of
the Yale psychologists Miller, Mowrer, and others.
Mowrer’s experiments in the 1940s were done in
a special apparatus called the shuttle box.
29. A shuttle box has two compartments separated by a door
that can be dropped partway through a slot in the floor,
creating a hurdle over which the dog can jump from the first
compartment to the second. Both compartments have grid
floors through which current can be passed. To demonstrate
instrumental conditioning, the dog is placed in one
compartment. At the same time that the door drops he is
given a mild electric shock through the grid floor. To escape
the shock, the dog jumps over the hurdle to the safety of the
second compartment. The door closes and the dog rests until
the experimenter drops the door and turns on the shock in
the second compartment. The dog must then jump the
hurdle again to reach the first compartment, which is now
the safe compartment.
31. If an apparatus is arranged so that an
organism receives an electric shock until it
performs a specified response, it will quickly
learn to make that response as soon as it is
shocked. Such a procedure is called escape
conditioning; the response is learned because
it is followed by pain reduction.
32. With avoidance conditioning, a signal such as a
light reliably precedes the onset of an aversive
stimulus such as an electric shock.
With avoidance conditioning, the organism
gradually learns to make the appropriate
response when the signal light comes on, thus
avoiding the shock altogether.
33. Reinforcement is the most important
component of operant conditioning. A
reinforcer is anything that leads to an increase
in the probability of occurrence of the
response.
34. Primary reinforcer: A primary reinforcer is naturally
reinforcing to the organism and is related to survival
e.g. Food or water.
Secondary reinforcer: A secondary reinforcer is any
neutral stimulus, which is associated with the
primary reinforcer and thus acquires reinforcing
characteristics. Examples are light, sound etc.
Generalized reinforcer: A generalized reinforcer is a
secondary reinforcer, which has been paired with
more than one primary reinforcer. Its greatest
advantage is that it does not depend upon conditions
of deprivation to be effective. E.g. money, awards,
praise etc.
35. Positive reinforcers: A positive reinforcer is something which,
when added to a situation by a response, increases the
probability of occurrence of that response.
Negative reinforcers: A negative reinforcer is something which,
when removed from a situation by a certain response,
increases the probability of occurrence of that response, e.g.,
If a shock is discontinued when the rat jumps over to
another box, the response of jumping to the other box
increases.
Thus, reinforcement occurs when the response either adds
something to the situation (positive reinforcer) or removes
something (negative reinforcer). In each case, the
probability of occurrence of the response is increased.
37. It was Skinner who thoroughly investigated
the topic and, with Ferster, wrote the book
‘Schedules of Reinforcement’ in 1957.
The contingencies that can be designed to
couple behavior and reinforcement are
collectively known as “schedules of
reinforcement”.
38. Simple schedules are those in which a single
type of reinforcement contingency is in force
throughout the experimental session.
Compound schedules are combinations of two
or more simple schedules in the same
experiment.
39. Continuous reinforcement schedule: Every
correct response is reinforced during
acquisition. It is usually the basic schedule
used to establish the response because it is
difficult to bring about acquisition of any
response if partial schedules are used during
initial training.
40. Partial reinforcement schedule: In this case, the response is
not reinforced every time it occurs. The reinforcement is
given only for some of the responses, not all. It not only
leads to a slower acquisition of response but it also leads to
slower extinction. The fact that partial reinforcement leads
to slower extinction is known as the partial reinforcement
effect or Humphreys’ rule after the researcher Humphreys
(1939), who first demonstrated it. There are two
dimensions upon which the reinforcement may be made
contingent – time and behavior. An interval schedule implies
that the reinforcement is set up in terms of the time that has
elapsed from the delivery of the last reinforcement. A ratio
schedule implies that the reinforcement is based on the
number of responses emitted by the organism since the last
reinforcement.
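The two contingency dimensions reduce to two different tests, one on elapsed time and one on the response count. A minimal sketch (the function names are illustrative):

```python
# Sketch of the two dimensions of contingency: an interval schedule
# gates reinforcement on time elapsed since the last reinforcement,
# a ratio schedule on the number of responses emitted since then.
def interval_satisfied(seconds_since_last_reinforcement, interval):
    return seconds_since_last_reinforcement >= interval   # time-based

def ratio_satisfied(responses_since_last_reinforcement, ratio):
    return responses_since_last_reinforcement >= ratio    # count-based

# FI 10: reinforcement becomes available once 10 s have elapsed.
# FR 20: reinforcement is delivered at the 20th response.
```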
41. Fixed interval schedule: In this case, the organism is
reinforced only after a fixed interval of time e.g. A response
is reinforced after every 5 minutes. The organism generally
learns to anticipate the reinforcement and gives the
response only as the interval is about to end.
Fixed ratio schedule: In this case, the number of responses is
important, e.g., every fifth response may be reinforced.
Theoretically, the organism must respond at a very high rate
to get the reinforcement. However, as soon as he gets the
reinforcement, there is a slight depression in the rate of
responding. This is called the post reinforcement pause.
Perhaps, this occurs because the organism learns that the
responses immediately following a reinforced response are
never reinforced.
42. Variable interval schedule: In this case, rather than having a
fixed time interval, the organism is reinforced on an average
interval of, say, 3 minutes, but the actual reinforcement may be given
immediately after the prior reinforcement, or after 30 seconds, or
7 minutes, or any other time interval. This schedule
produces a steady, moderately high response rate.
Variable ratio schedule: In this case, the organism is
reinforced after a particular number of responses
on the average, e.g., on the average every fifth response
is reinforced. However, in practice, it might receive 2 or 3
reinforcements in a row, or it may make 10 to 15 responses
without being reinforced. This schedule produces the
highest rate of responding among all the four schedules.
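A variable ratio requirement can be sketched by drawing each requirement at random around the nominal mean. The uniform draw below is an assumption for illustration; real schedules use various distributions:

```python
import random

# Sketch of a VR 5 requirement list: each reinforcement comes after a
# randomly drawn number of responses whose average is 5 (uniform draw
# over 1..9 assumed for illustration).
def vr_requirements(mean_ratio, n_reinforcements, rng):
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(n_reinforcements)]

rng = random.Random(0)            # fixed seed for a reproducible example
reqs = vr_requirements(5, 1000, rng)
average = sum(reqs) / len(reqs)   # close to the nominal ratio of 5
```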
43. Differential reinforcement of low rate: This schedule is
based on the rate of the response: an inter-
response time must be equal to or exceed a
certain value to secure the reinforcement. For
example, a reinforcement may be given only 10
seconds after the previous response. If a response
occurs before 10 seconds, the experimenter
“resets the clock” and then counts 10 seconds
again before giving the reinforcement. Obviously
such schedules lead to a very low response rate.
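The “reset the clock” rule can be written as a small filter over response times (in seconds). In this sketch the first response is not itself reinforceable, since there is no prior response to measure an inter-response time from; that is a definitional choice:

```python
# Sketch of DRL: a response is reinforced only if the inter-response
# time since the previous response is at least `irt` seconds; any
# earlier response resets the clock.
def drl_reinforced(response_times, irt=10):
    reinforced, last = [], None
    for t in response_times:
        if last is not None and t - last >= irt:
            reinforced.append(t)   # waited long enough since last response
        last = t                   # every response resets the clock
    return reinforced

drl_reinforced([0, 5, 16, 31, 38, 49])  # → [16, 31, 49]
```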
44. Differential reinforcement of high rate: In this
case high response rates are reinforced. It teaches
an organism to consistently perform at a very
high rate.
Differential reinforcement of other behavior: This
is a unique schedule because it makes the
reinforcement contingent on the failure of the
response to occur for some specified period. For
example, a rat may be given a pellet of food after
every 30 seconds if and only if he has not given a
response during that period.
45. Sequential schedules: In this case two or more
schedules run one after the other during the
experimental session, but only one is in effect at
any one time. Reinforcement may be available
after one, both, or only after the last component.
Some of them are:
› Tandem schedules: Here two or more component
schedules must be completed in sequence, before the
reinforcement is given. For example: In tand FI 10 FR
20, the animal must respond only after the first 10
seconds, and then give 20 responses before the
reinforcement will be given. Such schedules teach the
organism to delay its responses.
46. › Chained schedules: It is similar to tandem schedules, except
that a separate discriminative stimulus is associated with
each component. In chain FI 10 FR 20, for example, a light
may be present in the initial 10 second interval, but absent
after that. This schedule is useful to investigate the
properties of secondary reinforcers.
› Mixed schedules: Reinforcement is available to the subject in
each component, and each of the components may be
presented to the subject in alternating order. For example: In
mix FI 10 FR 20, the animal must respond only after the first
10 seconds, get the reinforcement, and then give 20
responses before the reinforcement will be given again. The
organism comes to know which schedule is operating after
experiencing the reinforcement, and quickly learns to match
his responses to the schedule in operation.
47. › Multiple schedules: It is similar to the mixed
schedule, except that a separate discriminative
stimulus is associated with each component. In
mult FI 10 FR 20, for example, a light may be
present in the initial 10 second interval, and a tone
may be present after that for the next phase. Again,
this schedule is useful to study the properties of
secondary reinforcers.
48. › Alternative schedules: Two or more components run
simultaneously, and reinforcement is given when the
organism responds to any one of the components. For
example: In alt FI 10 FR 20, the animal will get the
reinforcement for a response after 10 seconds or after 20
responses, whichever is earlier.
› Conjunctive schedules: Here all components must be
completed before reinforcement occurs. For example: In conj
FI 10 FR 20, the animal will get the reinforcement after 20
responses and only if at least one response occurs after 10
seconds.
› Concurrent schedules: The component schedules run at the
same time, but they are completely independent of each
other. They are useful to study choice behavior in organisms.
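The alternative and conjunctive rules above amount to OR and AND over the component requirements. A sketch using the FI 10 / FR 20 examples (the conjunctive case here simplifies the rule to “both requirements met”):

```python
# alt FI 10 FR 20: reinforce when EITHER requirement is met,
# whichever comes first.
def alt_fi10_fr20(elapsed_seconds, responses):
    return elapsed_seconds >= 10 or responses >= 20

# conj FI 10 FR 20: reinforce only when BOTH requirements are met
# (a simplification of the conjunctive rule described above).
def conj_fi10_fr20(elapsed_seconds, responses):
    return elapsed_seconds >= 10 and responses >= 20
```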
50. Deprivation – The experimental animal is
initially put on a deprivation schedule, e.g., if
food is to be used as a reinforcer, the animal is
deprived of food for 23 hrs prior to the
experiment. Skinner does not say that these
procedures motivate the animal or produce a
drive state. Deprivation is simply a set of
procedures that is related to better
measurable performance.
51. Magazine training – It implies that the feeder mechanism
(also called the magazine) is triggered periodically, making
sure that the animal is not near the food cup when this
happens. Gradually, the animal associates the click of the
magazine with the presence of a food pellet.
Lever pressing – Now the animal can be left in the Skinner
box on its own. Eventually, it will press the lever, which
would trigger the food magazine producing the click that
signals to the animal to go to the food cup where it is
reinforced by food. According to operant conditioning
principles, the lever press response, having been reinforced,
will tend to be repeated and when it is repeated, it is again
reinforced further increasing its probability of occurrence
and so on.
52. Shaping – The process of operant conditioning described
above takes a long time. There is another approach that can
shorten this time period. This is the process of shaping. In
this case, the experimenter gives the reinforcement for
responses, which bring the organism closer to the desired
response. E.g. Initially, he gives reinforcement if the rat is in
the half of the Skinner box which contains the lever.
Gradually, he reinforces the organism only as it comes closer to
the lever, next only when it puts pressure on the lever, and
finally only when the animal is actually pressing the lever. This method
is also called the method of successive approximation or
differential reinforcement. It refers to the fact that only those
responses are reinforced which become increasingly similar
to the one which the experimenter wants.
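The method of successive approximation can be sketched as a sequence of progressively stricter criteria; the distance units and step count below are illustrative assumptions:

```python
# Sketch of shaping: each stage reinforces behavior that meets a
# stricter criterion of closeness to the lever (units illustrative;
# a criterion of 0 means the rat is actually pressing the lever).
def shaping_criteria(start_distance, steps):
    step = start_distance / steps
    return [start_distance - step * i for i in range(1, steps + 1)]

criteria = shaping_criteria(start_distance=50.0, steps=5)
# reinforce whenever the rat's distance to the lever falls below the
# current criterion, then move on to the next, stricter one
```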
53. Discriminative operant – After we have conditioned the animal to press
the lever, we can make the situation more complex. E.g. we can arrange it
so that the animal receives a pellet of food only when the light in the
Skinner box is on but not when it is off. Thus, light becomes the
discriminative stimulus. In this case, the animal learns to press the lever
when the light is on and not to press it when the light is off. Thus, light
becomes a signal for the lever press response. We have developed a
discriminative operant whereby an operant response is given in one set of
circumstances, but not in the other.
Secondary reinforcement – Any neutral stimulus paired with a primary
reinforcer also acquires reinforcing properties. It follows that every
secondary reinforcer must also be a discriminative stimulus because it is
consistently paired with the primary reinforcement. Once established, a
secondary reinforcer is independent of the initial response i.e. it not only
strengthens that particular response, but also can condition a new
unrelated response.
54. Generalization – A generalized reinforcer is a secondary reinforcer
that has been paired with more than one primary reinforcer. For
example: money is a generalized reinforcer because it is associated
with many primary reinforcers like food etc.
Extinction – It implies omitting the reinforcement. Skinner holds
that this is necessary and sufficient for extinction.
Spontaneous recovery – After extinction, if the animal is returned
to his home cage for a period of time and then brought back to the
experimental situation, it will again begin to press the lever for a
short period of time without any additional training. This is called
spontaneous recovery.
External inhibition – If a strong extraneous stimulus such as a tone
or light is presented to a rat that is pressing a lever to get food, the
response decreases during the presence of the extraneous
stimulus.
55. Disinhibition – Similar to classical conditioning,
stimuli that can produce external inhibition can
also produce disinhibition if they are present
during extinction. In fact, the more effective
external inhibitor is also the more effective
disinhibitor (Brimer, 1972).
Conditioned inhibition – The discriminative
stimuli present during extinction trials can inhibit
any response, whenever they are present. To
establish a conditioned inhibitor, the response is
not reinforced when the stimulus is present, but it is
reinforced when the stimulus is absent.
56. Superstitious behavior – If the organism is
reinforced periodically regardless of what it is
doing, it acquires strange ritualistic responses.
E.g. It may run in circles, stand up on its hind
legs, or bob its head, i.e., it performs the
actions which were reinforced by the
experimenter accidentally. This behavior is
similar to superstitious behavior in humans.
57. Chaining – One response can bring the organism into contact with stimuli that act
as a discriminative stimulus for another response, which in turn causes it to
experience stimuli that lead to a third response and so on. This process is called
chaining. In fact, most behaviors involve some form of chaining. The stimuli in the
Skinner box cause the animal to turn towards the lever; the sight of the lever
causes him to approach it and then to press it. The firing of the food magazine is an
additional stimulus, which elicits the response of going to the food cup.
Consuming the food pellet is another stimulus to the animal to return to the lever
and press it again. This sequence of events is a chain held together by the food
pellet that is the primary reinforcer. It can be said that the various elements of the
chain are held together by secondary reinforcers, but the entire chain depends on
the primary reinforcer. If the primary reinforcement is withdrawn, the chain
breaks down. It is also important to realize that the chain is developed backwards
from the primary reinforcer. Animals have been trained to perform complex
responses with the help of this principle. E.g. Skinner trained a rat to climb a
staircase, ride a cart, cross a bridge, play a note on a toy piano, enter a small
elevator, pull a chain, ride the elevator down, and press a lever to get the pellets of
food.
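Because the chain is developed backwards from the primary reinforcer, the training order is the reverse of the performance order. A minimal sketch using a shortened version of the rat example (the list items are illustrative labels):

```python
# The chain is performed front-to-back but trained back-to-front,
# starting with the link nearest the primary reinforcer (the food).
performance_order = ["climb staircase", "ride cart", "cross bridge",
                     "play piano note", "ride elevator down", "press lever"]
training_order = list(reversed(performance_order))
# training begins with "press lever", the link closest to the food,
# and each earlier link is then added in front of the existing chain
```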