Learning is defined as a relatively permanent change in behavior as a result of experience. This definition excludes changes that might occur solely as a result of maturation, injury, or disease. To learn is to adapt. A child might stick his or her finger in a light socket, but not more than once. Sea lions in an aquarium will learn to bark and slap the water if these behaviors prompt people to toss them food. Changes that occur as a result of learning are not always positive. We may acquire bad (maladaptive) habits, as well as good ones. Three basic kinds of learning have been studied extensively by psychologists: classical conditioning, operant conditioning, and observational learning.
The pioneer of the study of classical conditioning was Ivan Pavlov. While studying salivation in dogs as part of his research on digestion, Pavlov discovered an interesting phenomenon. Dogs that had been repeatedly given meat in order to induce salivation began to salivate before the presentation of the meat. The sight of the pan containing the meat, or the sound of the experimenter's footsteps coming toward the laboratory was enough to initiate salivation. This was curious. Dogs do not normally salivate to the sound of footsteps, thus they must have acquired this response as a result of experience. In other words, learning had taken place.
Pavlov recognized the potential importance of the dogs' behavior and subsequently turned his attention to the study of what we now know as conditioned reflexes. By carefully scrutinizing the dogs' behavior under controlled laboratory conditions, Pavlov discovered and described the principles of classical conditioning. A few key terms need to be explained in order to understand its operation. An unconditioned stimulus is a thing or event that triggers a response (change) reflexively or automatically. This response is referred to as an unconditioned response; it is produced automatically, and no learning is needed for it to occur. A neutral stimulus is a stimulus that elicits no response (or at least not the response being studied). When a neutral stimulus is repeatedly paired with an unconditioned stimulus, it comes to produce an effect similar to that of the unconditioned stimulus. This transformed neutral stimulus is referred to as a conditioned stimulus, and the response it produces is called a conditioned response. The conditioned response, unlike the unconditioned response, is learned. Each pairing of an unconditioned stimulus with a conditioned stimulus is referred to as reinforcement, because the pairing strengthens, or reinforces, the conditioned response. In classical conditioning it is important to remember that the initial stimulus and its response (i.e., the unconditioned stimulus and response) occur naturally; they are instinctual, so to speak.
HOW CLASSICAL CONDITIONING WORKS. In the first stage, the unconditioned (natural) response to an unconditioned stimulus occurs automatically. It is a natural, reflexive reaction. For example, eating meat will make a dog salivate to aid in digestion. In the second stage, a neutral stimulus is paired with the natural or unconditioned stimulus. Using our example of the dog and meat, suppose we ring a bell just before the meat is given to the dog. If we do this repeatedly the bell alone will cause the dog to salivate and this represents the third stage of classical conditioning. In other words the conditioned stimulus now produces a conditioned response. This response was not present before the conditioning process (or learning) took place. Conditioning occurs most quickly and effectively when the conditioned stimulus immediately precedes the unconditioned stimulus.
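The three stages described above can be sketched as a toy simulation. This is only an illustration: the learning rate, response threshold, number of pairings, and the simple incremental update rule are assumptions chosen for the sketch, not values or mechanisms given in the text.

```python
# Toy simulation of the three stages of classical conditioning.
# The bell's associative strength grows a little with each bell+meat
# pairing: dV = LEARNING_RATE * (MAX_STRENGTH - V). All values illustrative.

LEARNING_RATE = 0.3   # assumed speed of conditioning
MAX_STRENGTH = 1.0    # asymptotic strength supported by the meat (US)
THRESHOLD = 0.5       # assumed strength needed to trigger salivation

def salivates(strength: float) -> bool:
    """Does a stimulus of this associative strength elicit the response?"""
    return strength >= THRESHOLD

# Stage 1: the unconditioned stimulus (meat) triggers the unconditioned
# response (salivation) automatically -- no learning required.
meat_strength = MAX_STRENGTH
assert salivates(meat_strength)

# Before conditioning, the bell is a neutral stimulus: no response.
bell_strength = 0.0
assert not salivates(bell_strength)

# Stage 2: repeatedly pair the bell (CS) with the meat (US).
for pairing in range(10):
    bell_strength += LEARNING_RATE * (MAX_STRENGTH - bell_strength)

# Stage 3: the bell alone now produces the conditioned response.
assert salivates(bell_strength)
print(f"bell strength after 10 pairings: {bell_strength:.3f}")
```

Note that the conditioned response emerges gradually over repeated pairings rather than all at once, which matches the observation that each pairing "strengthens or reinforces" the conditioned response.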
Because of classical conditioning, certain events can produce unwanted distress for reasons that are largely unrelated to the event itself. Young children, for example, often become fearful during their first visit to a barber. Barbers often wear white smocks, similar to those worn by doctors. There are also numerous metallic instruments (scissors, razors) in plain sight in the barbershop. Unpleasant experiences at the doctor's office (e.g., an injection) could become associated with accompanying stimuli (the doctor's white coat, silver instruments) in such a way that similar stimuli (in other settings) could trigger an anxiety response. Some children's barbers make a point of wearing colored (as opposed to white) jackets, and take pains to reduce any similarities between their work areas and doctors' examining rooms.
On the basis of his research, Pavlov assumed that the basic associations established through classical conditioning were universal. In other words, he believed that all animals would show conditioning and that any natural response could be conditioned to any and all neutral stimuli. More recent research has shown that there are restrictions on the kinds of associations that are amenable to conditioning. For example, when rats were exposed to tastes, sounds, and visual stimuli before being given a nausea-inducing drink, they very quickly learned to associate the taste with illness, and forever after avoided similar-tasting food. This happened even if the onset of the illness was delayed for hours after the tasting.
If the sole mechanism of learning were classical conditioning only a very limited number of responses could be learned. A dog may learn to salivate at the sound of a bell but how are new, voluntary responses learned? How does the animal learn to operate on its environment?
Operant conditioning provides some insight. In classical conditioning the animal is relatively passive. In operant conditioning the animal is an active part of its environment. It operates on the environment. Two pioneers of this approach were Edward L. Thorndike (1874-1949) and B. F. Skinner (1904-1990). At about the same time that Pavlov was performing his experiments with dogs, Thorndike began experimenting with cats. He devised a box from which a cat could escape only if it performed a particular action. For example, the cat would have to press a lever, which would, in turn, cause a rope to pull a bolt from the door and thus allow the cat to escape. Through trial and error the cat would eventually escape from the box. Thorndike noticed that over successive trials, it took progressively less time for the cat to solve its problem. Thorndike reasoned that the gratifying experience of being released from the box caused the correct response (pressing the lever) to occur more rapidly on subsequent trials.
Skinner's research extended and elaborated this simple fact of life: behavior that is rewarded is more likely to recur.
Much of Skinner's research utilized laboratory rats and pigeons. He designed the now famous Skinner Box—a soundproof chamber with a bar or key, which, if pressed or pecked, would dispense a reward of food or water. Once the rat was placed into the box, the experimenter had total control over its environment. The equipment could be programmed to deliver positive or negative reinforcement. For example, the box could be rigged with a lever that, when pressed, turned off a mild electric shock (negative reinforcement). A negative reinforcer is one that strengthens a response by removing an aversive or unpleasant stimulus.
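The distinction between positive and negative reinforcement is easy to misread, so a minimal sketch may help: both kinds of reinforcement strengthen a response; they differ only in whether something pleasant is added or something aversive is removed. The baseline probability and increment below are illustrative assumptions, not experimental values.

```python
# Toy sketch: positive and negative reinforcement both strengthen behavior.
# Positive reinforcement ADDS a pleasant stimulus (a food pellet);
# negative reinforcement REMOVES an aversive one (the shock switches off).
# Either way, the probability of pressing the bar goes up.

press_probability = 0.10   # assumed baseline rate of bar-pressing

def reinforce(probability: float, amount: float = 0.15) -> float:
    """Any reinforcer, positive or negative, makes the response more likely."""
    return min(1.0, probability + amount)

# Positive reinforcement: bar press -> food is delivered.
after_food = reinforce(press_probability)

# Negative reinforcement: bar press -> the mild shock is turned off.
after_shock_removed = reinforce(press_probability)

assert after_food > press_probability
assert after_shock_removed > press_probability
```

The symmetry in the code mirrors the definition in the text: negative reinforcement is not punishment; it strengthens a response just as food does.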
Before a response can be reinforced, it must first occur. Suppose you wanted to teach a dog to climb a ladder. Because this action has virtually no probability of occurring spontaneously, you could wait forever for it to occur so that it could be reinforced. What to do? The solution is to use a procedure known as shaping. When we shape a behavior, we define some ultimate target behavior and then reinforce all actions that are even remotely related to the target behavior. Thus the dog might receive a reward for placing a paw on the bottom rung of the ladder. The trainer then requires responses that are more and more similar to the final, desired response. These responses that are rewarded on the way to the final target behavior are called successive approximations. With shaping (and patience) various animals can be taught to produce extraordinary sequences of behaviors. There are bears in the Russian circus that drive motorcycles. Seeing-eye dogs act as the "eyes" of the blind, and can also be taught to assist people with spinal cord injuries by turning on light switches or opening doors. The basic principles of operant conditioning have important practical implications. These principles are at the heart of behavior modification therapy—a treatment approach that has demonstrated some impressive successes in schools, prisons, mental hospitals, and rehabilitation wards.
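The logic of shaping can be sketched as a toy simulation. The list of intermediate steps, the baseline probabilities, and the size of each reinforcement boost are all illustrative assumptions; the only claims taken from the text are that reinforcement makes a response more likely and that the trainer rewards successive approximations in order.

```python
# Toy sketch of shaping by successive approximations (illustrative values).
# Each approximation to the target is reinforced until it occurs reliably,
# and only then does the trainer require the next, closer approximation.

approximations = [
    "looks at ladder",
    "approaches ladder",
    "places paw on bottom rung",
    "climbs one rung",
    "climbs to the top",          # the final target behavior
]

# Assumed baseline: each step is initially very unlikely to occur.
probability = {step: 0.05 for step in approximations}

def reinforce(step: str) -> None:
    """Reward a response, making it more likely to recur."""
    probability[step] = min(1.0, probability[step] + 0.25)

# Shape the behavior: reinforce each approximation in order, moving on
# only once the current step is reliably produced.
for step in approximations:
    while probability[step] < 0.8:
        reinforce(step)

print({step: round(p, 2) for step, p in probability.items()})
```

The key design point is the ordering: the final behavior ("climbs to the top") is never demanded outright; it becomes attainable only because each earlier approximation has already been made reliable.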
While classical conditioning and reinforcement principles are powerful and ubiquitous determinants of behavior, they do not tell the whole story, especially when it comes to human learning. We do not always learn through direct experience. Indeed, we would not survive for very long if we could not learn from watching others. Observational learning plays a role in almost every aspect of our activities, from learning how to hold a fork or drive a car to learning how to smoke a cigarette or have sex. Observational learning occurs in fish, birds, and mammals too. For example, if given a choice, rats prefer to eat food that they have seen other rats choose. Research has demonstrated that children imitate their parents' food aversions. After the first few months of The Simpsons television show, many young girls across the country began expressing an interest in playing the baritone saxophone—Lisa Simpson's instrument of choice.
The observational learning perspective emphasizes that what is learned is 'knowledge' about behavior, in addition to the behavior itself. Role models can be quite influential. If you want to encourage a child to read, read to him or her, and surround the child with books and with people who read them. Not surprisingly, modeling effects cut both ways. Antisocial role models can cultivate negative patterns of behavior in the observer. Children who grow up in households where wife abuse is common are "learning" that physical assaults and intimidation are effective ways of controlling others. Models are most likely to be imitated when they have status, when their actions are rewarded, when the modeled behaviors are in the observer's repertoire, and when the observer is motivated to perform the behavior.
Timothy E. Moore