The last four chapters have emphasized learning based on associations between stimuli. This chapter is concerned with learning and motivational changes based on events that follow behavior and generally are a result of the behavior. A worker receives his salary following completion of a certain number of hours of work. A student receives a particular grade on a test as a result of achieving a certain test score. A child is reprimanded for using certain words. In these cases there is some relationship, called a contingency, between the person’s behavior (working a number of hours, achieving a test score, using certain words) and some resultant or contingent event (salary, grade, reprimand). Operant conditioning (also called instrumental conditioning) is the learning model based on the effects on behavior of contingent events and the learning of the nature of the contingency. Skinner is the current major authority on operant conditioning.
If the contingent event makes it more probable that the person will behave in a similar way when in a similar situation, the event is called a reinforcer.
Occasionally, when Bobby was put to bed before he wanted to go, he would cry. His parents dealt with this by reading him a story to quiet him down. Over time, Bobby cried more often when put to bed. In this situation, the parents’ reading him a story was a reinforcement for Bobby’s crying. On the other hand, if the contingent event makes the behavior less probable, then the event is called a punisher. For a while, Susan did all her banking at the neighborhood bank. However, because of poor service there, she gradually shifted most of her business to another bank. Here the poor service is a punishment for using the neighborhood bank.
Following the behavior, the contingent event may come on or increase (positive); or the contingent event may go off or decrease (negative). This produces four combinations: positive reinforcement, negative reinforcement, positive punishment, and negative punishment.
Positive reinforcement is an increase in the probability of a behavior due to an increase in the contingent event. Carol, a new manager in a company, began praising workers for submitting their reports on time. In a couple of weeks, this reinforcement by praise greatly increased on-time reports. Positive reinforcement, when appropriately used, is one of the most powerful of all behavior change tools.
Negative reinforcement is an increase in the probability of a behavior due to a decrease in the contingent event. A person learns to use his relaxation skills to offset anxiety, with the decrease in anxiety being a negative reinforcer. A client in aversive counterconditioning (see Chapter 6) is reinforced for putting out his cigarette by the negative reinforcer of the offset of the hot smoke in his face. Thus negative reinforcement is based on the decrease of something undesired such as pain or anxiety. Negative reinforcement is not punishment; reinforcement is an increase in the probability of behavior, while punishment is a decrease.
Negative reinforcement is the basis of escape conditioning, learning to escape an aversive situation and being reinforced by the decrease in aversion. Scotty may learn to leave a neighbor’s house when the neighbor gets drunk and obnoxious. Escape conditioning may lead to avoidance conditioning, in which the person learns to avoid the aversive situation. Scotty may learn to avoid going to his drinking neighbor’s house. Many politicians avoid important political issues on which, no matter what position they take, a moderate number of people will get mad and perhaps later vote against them. Votes and money are two strong reinforcers accounting for much political behavior.
Positive punishment is a decrease in the probability of a behavior due to an increase in the contingent event. This is what most people mean when they use the word “punishment.” If every time Al tells his algebra teacher he is having trouble keeping up with the class, he is then given extra remedial work, then the extra work may act as a punisher, resulting in a decrease in asking for help.
Negative punishment is a decrease in the probability of a behavior due to a decrease in the contingent event. This corresponds to a decrease in something desirable following some behavior. If every time a person stutters it briefly turns off a movie he is watching and describing and if this results in a decrease in stuttering, then the offset of the movie is a negative punisher for stuttering.
These four types of contingent events are shown in Figure 3. Note that the onset and offset of the same event may function differently depending on what behaviors they are contingent on. Thus if the onset of a pleasing event results in positive reinforcement, its offset often results in negative punishment. If the onset of an aversive event produces positive punishment, its offset will often produce negative reinforcement. (This is why negative reinforcement is so often confused with punishment.)
If we record how probable a behavior is, such as how often it occurs, before we establish one of the above four contingencies, this initial probability is called a baseline. Operant conditioning is the establishing of a behavior-event contingency that alters the probability of a behavior away from its baseline. If we now terminate the operant contingency, then the behavior will often return toward the baseline level, a process called extinction. Thus reinforcing a behavior increases its probability from the baseline, while later withholding the reinforcement extinguishes the behavior back toward baseline. Punishing a behavior reduces its probability from the baseline, while withholding the punishment lets the behavior extinguish back toward the baseline. Extinction may be slowed or prevented if other variables, such as other reinforcement, come to support the behavior at its new level. For example, an unassertive person may learn to be more assertive with the help of social reinforcement and encouragement from the members of his assertive training group. If the client’s new assertive behavior is useful and pleasing (reinforcing) to him in his daily life, then it may continue without the group support.
Sometimes after a behavior has been extinguished its probability drifts back in the pre-extinction direction. This is called spontaneous recovery. For example, on Monday and Tuesday David may get his teacher’s attention (reinforcement) by playing with the books in the case near his desk. On Wednesday through Friday the teacher extinguishes this behavior. Then on Monday David gives the books another try (spontaneous recovery). Fortunately, this can then be easily extinguished.
From an operant position it is important for the behavior modifier to learn to ask questions such as the following: What is the function of the behavior? What supports or reinforces the behavior? In what situations is the behavior most likely to occur? Learning to identify sources of reinforcement is one of the most powerful skills a behavior modifier can cultivate. Sources of reinforcement are often unexpected. For example, you might be using desensitization (Chapter 5) to reduce a fear in a client and having little success or having trouble getting sufficient motivation or cooperation from the client. Then when pursuing the function of this fear in the client’s life, you may find that having the fear is itself reinforcing, which accounts for the resistance. Perhaps having the fear results in the client receiving special attention or favors from his peers. Or perhaps having the fear keeps him from having to deal with more difficult problems you were not aware of. In such a case these sources of reinforcement and other problems may have to be dealt with before removing the fear. Often this involves helping the client learn other ways to get the reinforcement now received for the undesired behavior. In the clinical literature the expression secondary gain is used when discussing reinforcing aspects of apparently undesired behaviors.
Now we turn to behavior change strategies that are based on operant conditioning. These include altering the stimulus situations in which behaviors occur (stimulus control), getting desirable behaviors to occur and reinforcing them, extinguishing and/or punishing undesired behaviors, reducing the reinforcing effects of events that support undesired behaviors, and combining operant procedures with other approaches.
Operant behaviors do not occur in a vacuum; they occur more in some situations than others and are triggered by external and internal cues. That is, for all operant behaviors there are stimuli, called discriminative stimuli and abbreviated SD, which tend to cue the response. Discriminative stimuli do not elicit the behavior, as the CS elicits the CR, but rather set the occasion for the behavior, making it more or less probable the behavior will occur. Thus we can often alter operant behavior by altering discriminative stimuli.
One approach is to remove discriminative stimuli that cue undesired behaviors. As part of a program to reduce smoking we might remove those stimuli that increase the tendency to smoke, such as ashtrays on the table. When trying to lose weight we might change the route from work to home so it does not pass the doughnut shop.
A second stimulus control approach, called narrowing, involves restricting behaviors to a limited set of stimuli. A person who overeats probably is eating in many situations. This results in many discriminative stimuli (e.g., reading, watching TV, having a drink, socializing) cuing the tendency to eat. To cut back on this, we might restrict the eating to one place and certain times. Or in reducing smoking, we might restrict smoking to when the client is sitting in a particular chair in the basement.
Eliminating cues and narrowing are often combined. For example, in improving study habits an important component is establishing good study areas. If a student sits on the sofa when studying, eating, listening to music, and interacting with dates, then the sofa will cue thoughts, feelings, and behavior tendencies that may be incompatible with studying. It is preferable to set up an area in which nothing takes place except studying (perhaps a desk in a corner), get out of the area when doing things like daydreaming, and remove from the area stimuli (e.g., pictures, food) that cue behaviors incompatible with studying. Similarly, treatment of insomnia might involve only going to bed when sleepy; leaving the bed when not falling asleep; and not reading, eating, or watching TV when in bed.
A third stimulus control approach involves introducing stimuli that tend to inhibit the undesired behavior and/or cue behaviors incompatible with the undesired behavior. A person trying to lose weight might put signs and pictures on the refrigerator door. Or a person who has quit smoking may tell all his friends he has quit. Then the presence of one of his friends may be a stimulus to not smoke.
Because a person’s behavior gets tied into the stimuli and patterns of his daily life, it is often desirable to alter as many of these cues as possible. This stimulus change may involve a wide range of things such as rearranging furniture, buying new clothes, painting a wall, eating meals at different times, having sexual intercourse at different times and places, or joining a new club. Stimulus change is useful in situations such as marriage counseling or when a client is ready to significantly alter his life-style. Similarly, removing a person from his usual life situation until the change program is accomplished is often useful, particularly if coupled with stimulus change of the environment the client returns to.
Stimulus control deals with the antecedent side of operant behavior; the following sections deal with the consequence side.
The most common operant approach consists of reinforcing desirable behaviors. And this should generally be a component of all operant programs, even when the emphasis is on some other approach, such as extinction.
Nature of reinforcement
There is no theoretical agreement on the nature of reinforcement (see Mikulas, 1974b, p. 130). It is also not clear whether reinforcement affects learning and/or motivation. That is, does the reinforcement somehow strengthen the learning, such as facilitating the physiological changes that underlie the learning, and/or does the reinforcement change the person’s motivation, such as providing incentives for certain behaviors? The areas of learning and motivation subtly blend together; so in this text I have related behavior modification to learning-motivation rather than just learning.
Fortunately, these theoretical issues do not impede practical application. For in behavior modification we can take an empirical approach to reinforcement, an approach favored by Skinner. Here we merely identify events that function as reinforcers and use them. An important, but surprisingly often overlooked, point is that we must identify what actually is reinforcing to the person, not what we expect should be reinforcing to him. A good approach to determining reinforcers is to ask the person what is reinforcing, as with a Reinforcement Survey Schedule (Cautela & Kastenbaum, 1967). Similarly, events we may consider not to be reinforcing may in fact be reinforcing. A common example is the teacher who yells at a student as an intended punishment, when really the teacher may be reinforcing the student with attention and/or causing the student to receive social reinforcement from his peers for getting the teacher mad.
Sometimes something will not be reinforcing to the client unless he has had some moderately recent experience with it. Talking on the telephone to a relative may not be reinforcing to a mental patient who has not used the telephone for years. Playing a game may not be reinforcing to an elementary student who is unfamiliar with the game. In these cases, it is often desirable to prime the client by giving him some free experience with the reinforcer before the operant contingencies are established. This procedure is called reinforcer sampling (Ayllon & Azrin, 1968a). Sampling of the reinforcer may be increased by having the client observe another person doing the sampling.
Praise is a common and powerful reinforcer. When appropriately used, it has made dramatic changes in a variety of settings, including elementary classrooms and businesses. Money is another powerful reinforcer already affecting much of our behavior. One study used money as a reinforcer to reduce litter in a park in Utah (Powers et al., 1973). A sign notified visitors that for each bag of litter turned in they would receive a choice of $.25 or a chance to win $20.00 in a weekly lottery. Over 12 weeks, $200.00 in lottery money and $8.50 in quarters were paid out and more than twice as much litter was turned in as before the reinforcement contingency. Another study used money to increase the punctuality of six workers who were chronically late to work in a Mexican manufacturing company (Hermann et al., 1973). For each day they arrived on time, the workers were given small daily bonuses, about $.16.
Reinforcers for patients on a mental ward may include a visit with the social worker, choice of whom to eat with, a trip to town, candy, cigarettes, new clothes, or gradually earning more privileges. Reinforcers for students may include longer recess, opportunity to be the teacher’s aide, field trips, dances, or time in a special reward area filled with different things to do. To date there have only been a few applications of behavior modification in business settings and related organizations (e.g., Luthans & Kreitner, 1975; Mager & Pipe, 1970; Whyte, 1970); but this is changing rapidly. Potential reinforcers in these settings include recognition and praise, bonuses, equipment and supplies, additional staff, added privileges, participation in decision making, option for overtime, and days and hours off.
One theory of reinforcement that has had some impact on behavior modification is that of Premack (1965). Basically, this theory suggests that high-probability behaviors can be used to reinforce lower-probability behaviors. (More formally: If the onset or offset of one response is more probable than the onset or offset of another response, the former will reinforce the latter, positively if the superiority is for the “on” probability and negatively if for the “off” probability.) Thus telling a child he must eat his vegetables (low probability) before he can go out and play (high probability) is using Premack’s principle, also sometimes called “Grandma’s rule,” because grandmothers and others have been using this approach for a long time. The historical importance of this theory in behavior modification is that it focused attention on opportunity to engage in various activities as sources of reinforcement. And some of these activities may be desirable in themselves. For example, students may work on math problems (low probability) in order to work on an ecology project in the library (high probability). Here we not only motivate the students to do more math, but we use a reinforcer that is educationally desirable and perhaps was already part of the program. Goldstein (1974) found that for Navajo children reinforcing activities included learning to weave, silversmithing, leather working, traditional dancing, and storytelling.
To date, however, most of the research related to Premack’s theory has been animal studies; research with humans is incomplete and inconclusive, particularly applied studies (Danaher, 1974; Knapp, 1976). The Premack theory also predicts many non-obvious sources of reinforcement from high probability behaviors such as answering a telephone when it rings, opening a door whose handle you have your hand on, and drinking from a glass you have lifted to your mouth. Although these have been used as reinforcers in some behavior modification programs (i.e., coverant control discussed in Chapter 9), little evidence exists relative to their reinforcing effects.
A variation of reinforcement, called covert reinforcement (Cautela, 1970b), involves the client imagining a pleasing scene, such as skiing down a mountain, as the reinforcement. Cautela uses covert reinforcement to reinforce behaviors which are also imagined. (Note the parallels to covert sensitization.) Cautela has his client imagine a sequence of steps of the desired behavior. As the client imagines each step Cautela says “reinforcement” and the client then imagines his pleasant scene. Later the client learns how to do this on his own. Research on the effectiveness of this procedure is mixed and lends itself to a variety of explanations (see Mahoney, 1974a, p. 104). Many of the studies seem to be best interpreted in terms of counterconditioning. For example, in one study (Marshall et al., 1974) covert reinforcement was used to successfully reduce fear of snakes in female subjects. Treatment involved having the subjects imagine snake scenes, then relax, and then imagine their reinforcing scene. The relaxation and imagining the pleasant scene may countercondition some of the anxiety associated with snakes. Other studies include successful treatment of test anxiety (Guidry & Randolph, 1974) and a rodent phobia (Blanchard & Draper, 1973). Ladouceur (1974) reduced fears of rats, but the group that imagined reinforcement after the anxiety response was not significantly better than the group that imagined the reinforcement before, a result contrary to an operant interpretation.
Cautela (1970a) has also suggested covert negative reinforcement, which is the same as covert reinforcement except that the client terminates imagining an aversive scene contiguous with imagining a desired behavior. This, however, results in the reinforcement preceding the desired behavior, rather than following it as is required by operant conditioning. An example of treatment would be a homosexual imagining a snake approaching his neck (aversive scene) and then shifting to a scene of hugging a naked girl. Again this may be primarily counterconditioning (e.g., aversion relief). There is currently little evidence on the effectiveness of covert negative reinforcement. One study (Marshall et al., 1974), mentioned earlier, found that covert negative reinforcement was not as effective as covert positive reinforcement in reducing fear of snakes.
A final variation of reinforcement is self-reinforcement, reinforcement people give themselves. This may be a form of covert verbal reinforcement (e.g., “That was good work.”) or a more tangible reinforcer such as buying yourself some treat. Self-reinforcement is often an important part of self-control processes in which people reinforce themselves for desired behaviors (e.g., Bandura, 1971b; Kanfer, 1971; Mahoney, 1974b).
To reinforce desirable behavior the behavior must first occur. If a catatonic has not said anything for five years, it would not be an effective approach to wait for him to say something to reinforce his talking. Thus an important part of the operant approach is to use ways to help initiate the behaviors to be reinforced. There are many ways to do this, including shaping, modeling, fading, punishment, and guidance.
Shaping, also called successive approximation, is the reinforcing of behaviors that gradually approximate the desired behavior. The key to shaping is the use of successive approximations that are small enough steps so that there is an easy transition from one step to the next. If one is cultivating the ability to meditate for long periods of time, it may not be desirable to start trying to meditate for an hour. An alternative would be to begin at ten minutes and add one minute every other day, gradually shaping meditation for longer periods of time.
Ayllon (1963) treated a female schizophrenic who wore an excessive amount of clothing, including several sweaters, shawls, and dresses. Before each meal the patient was weighed to determine the weight of the clothing (total weight minus patient’s body weight). To receive her meal, the reinforcement, the weight of the clothing had to be less than a set value. At first the patient was allowed 23 pounds of clothing, but this was gradually decreased until she was only wearing 3 pounds of clothing.
The following is a common sequence in shaping language in non-verbal children (Harris, 1975): The child is taught to attend to the teacher. The child learns non-verbal imitative behaviors, going from gross movements such as clapping to more refined movements including use of the mouth. The child learns verbal imitation; first all vocalizations are reinforced, then vocalizations that more and more closely match those of the teacher. Finally, the child’s vocalizations are shaped toward functional speech.
Shaping involves starting where the client is, taking small enough steps so the client’s behavior smoothly changes, providing reinforcement and support for the changes, and catching mistakes or problems early because of the small steps. Practitioners often also need to use shaping when trying to change the philosophy or programs of the agency or organization where they work.
Modeling, discussed in Chapter 8, involves a change in a person’s behavior as a result of observing the behavior of another person, the model. Thus a way of initiating a behavior, particularly with a child, is to have the person observe someone doing the desired behavior and encourage imitation of the behavior. A client who is learning how to interview for a job may first watch the practitioner model appropriate behaviors in a simulated job interview. Or a teacher who praises one student for good behavior may find other students imitating this behavior.
Modeling and shaping combine together well. For example, in model-reinforcement counseling the client listens to a tape recording of a counseling interview in which another person is reinforced by a counselor for making a certain class of statements. Then the client is reinforced for making these types of statements. This approach has been used to increase information seeking of high school students engaged in career planning (Krumboltz & Schroeder, 1965) and deliberation and deciding about majors by college students (Wachowiak, 1972).
Azrin and Foxx have shown how the operant approach can dramatically facilitate toilet training in retarded (Foxx & Azrin, 1973b) and “normal” children (Azrin & Foxx, 1974). Their approach with normal children, involving modeling and shaping, includes these components: A wetting doll is used as a model; the child teaches the doll to potty in the same way the child is learning to potty. The child is given extra drinks to increase urination and then through instructions and shaping learns complete toilet procedures, including removing and putting on clothes, use of the toilet, and cleaning up. The child is continually reinforced with praise and treats for maintaining dry pants. Wet pants lead to disapproval, toilet practice, and the child changing the pants. It is important that the child be ready to learn such skills (usually about 20 months old), and Azrin and Foxx give specific tests for this readiness. It is also important that the parent devote himself full time to the program to facilitate shaping and catching accidents immediately. When testing the effectiveness of this program, Azrin and Foxx found that most children could learn the complete toilet-training skills and procedures in less than one day, with the average amount of time being less than four hours. After training, pants inspection and reinforcement are continued for a few days.
Fading involves taking a behavior that occurs in one situation and getting it to occur in a second situation by gradually changing the first situation into the second. A small child might be relaxed and cooperative at home, but frightened and withdrawn if suddenly put into a strange classroom. This fear can be circumvented if the child is gradually introduced to situations that approximate the classroom. Fading is particularly important when a client learns new behaviors in a restricted environment, such as a clinic, hospital, or half-way house. Taking a person out of such a setting and putting him directly back into his home environment may result in a loss in many of his new behaviors and skills. It is preferable to gradually fade from the therapeutic environment to the home environment. Shaping involves approximations on the response side, while fading involves approximations on the stimulus side. And both are similar to the use of a hierarchy in counterconditioning (Chapter 3).
Punishment of one behavior suppresses that behavior and results in other behaviors occurring. Perhaps one of these other behaviors is a desirable behavior that can be reinforced. This is not a particularly efficient or desirable approach in most cases.
Guidance consists of physically aiding the person to make some response. Thus as part of contact desensitization (Chapter 5) or flooding (Chapter 4), the client may be guided to touch a feared object. Guidance may be used to help a client learn a manual skill or help a child who is learning to talk how to form his lips to make specific sounds.
Several variables affect the effectiveness of reinforcement. The three most important are amount of reinforcement, delay of reinforcement, and schedule of reinforcement.
Amount of reinforcement refers to both the quality and quantity of reinforcement. Within limits, and with many exceptions, as the amount of reinforcement is increased, the effect of the reinforcement increases.
Delay of reinforcement refers to the amount of time between the person’s behavior and the reinforcement for that behavior. As a general rule, you get the best results if the reinforcement occurs right after the behavior. Praising a child for sharing with a friend is generally more effective if the praise occurs right after the sharing than if it is mentioned later in the day. A strength of the toilet-training program described above is that the reinforcers and punishers occurred right after the behaviors. This facilitates the child learning exactly which responses are reinforced and which are punished.
As the delay of reinforcement increases, the effectiveness of the reinforcement decreases. If a student turns in an essay and two weeks later gets it back with the grade of A, the reinforcing effects of the A on the student’s paper writing behavior are much less than if the paper were returned the next day. If a child is required to do specified tasks around the home for his allowance on Friday, we may find the child lax in doing the chores at the beginning of the week, but working well by Thursday or Friday.
Learning to do things that have a long delay of reinforcement is a complex part of the social learning in our culture. We start as children who want immediate gratification and are gradually socialized to function under long delays of reinforcement, such as working for two weeks before getting a paycheck or going to school for many years before reaching a desired position. Learning to respond to long-term contingencies over short-term contingencies is a major aspect of self-control (Rachlin, 1974). You do not eat the extra piece of cake now for better weight and health later. You do not finish the bottle of rum now to avoid the hangover tomorrow. Many people, such as some juvenile delinquents, have not adequately learned to respond to long delays of reinforcement, and their behavior is often under the control of more immediate gratification, which is often undesirable in the long run. Treatment involves helping the person learn to respond to longer-range contingencies. Contingency contracting, discussed later, is a powerful behavioral tool to help bridge long delays of reinforcement.
Schedule of reinforcement refers to the pattern by which reinforcers are related to responses. The primary distinction between schedules of reinforcement is based on whether every correct response is reinforced (continuous reinforcement) or whether only some correct responses are reinforced (intermittent reinforcement). Learning is faster with continuous reinforcement than with intermittent reinforcement, but time to extinction is longer with intermittent reinforcement. Therefore, it is often strategic first to teach the behavior under continuous reinforcement and then gradually switch to intermittent reinforcement to maintain it.
Facilitating generalization and maintenance
Often an operant program will be established in a specific setting, such as a clinic, half-way house, or classroom. Yet we usually want the behaviors and skills supported and acquired in this setting to carry over and be maintained in other settings. Hopefully, our programs are establishing behaviors with general usefulness. The behaviors usually will generalize, to some degree, from our specific setting to other settings; but it is usually desirable to facilitate this carry-over. Fading, discussed earlier, is one way of accomplishing this. Other ways to facilitate generalization and maintenance of behaviors include the following: Phase the client off the behavior change reinforcements onto more “natural” forms of reinforcement. Thus we start with a specific set of reinforcers and contingencies, as with mental patients in a half-way house or children in a classroom, and gradually switch to the types of reinforcers that should support the behaviors in the everyday environment, reinforcers such as social approval and self-reinforcement. A related approach involves gradually exposing the clients to the types of reinforcement contingencies that occur in the natural social environment. This is accomplished by switching from continuous schedules of reinforcement to intermittent schedules and by gradually helping the clients learn to function under long delays of reinforcement. Finally, we may wish to reprogram the other environments or enlist the help of others to support the newly acquired behaviors. For example, a school counselor and a teacher may set up a program in one classroom that helps Bobby learn social skills that improve his ability to get along with his peers and experience less conflict in the classroom. To facilitate these skills occurring in settings other than this one classroom, the counselor may talk with Bobby’s parents and his other teachers about ways to support these new behaviors in various settings.
There are many criticisms of programs that use reinforcement, particularly when used in classrooms (O’Leary et al., 1972). For many critics it seems inappropriate to be reinforcing people for something they should be doing; to some critics, this smacks of bribery. Another common criticism is that people will come to expect rewards for everything they do and will not work otherwise. This may foster greed or teach the person to be bad in order to be rewarded for being good.
There are a number of problems with these arguments. First, everyone operates under reinforcement contingencies. How do the students earning a reinforcement in a classroom differ from their parents working for their paychecks or the students in another classroom receiving stars or certificates for good work or good behavior? The issue should be what the student is learning and the nature of the contingencies, not whether contingencies exist. To avoid reinforcing people for behaving in some way, because they should behave in this way without reinforcement, is impractical and often to the detriment of those involved. To take the position that students should learn simply for the sake of learning will lose many students to an unrealistic ideal. An alternative is to use an operant program to provide the initial motivation for learning such things as social and academic skills. If these skills are useful to the person, they will eventually be supported by more natural forms of reinforcement. A 15-year-old special education student may never have learned to read and not want to learn. You may establish an operant program in which the student is reinforced for learning to read, being aware of the ethics of all such decisions. At first, the student may only be learning to read to be reinforced. But if things go well and he learns to read, he may find that the skill of reading and what he can do with it becomes reinforcing in itself. Finally, in all such programs, we phase the person off our reinforcement contingencies onto social and self-reinforcement.
Another criticism is based on the fact that some mixed data exist suggesting that in some situations the use of extrinsic reinforcement may reduce intrinsic motivation (Levine & Fasnacht, 1974; Notz, 1975). That is, reinforcing people for doing something may reduce their motivation to do it when not being reinforced. If children enjoy playing certain games and then we begin reinforcing them for playing the games, when we remove the reinforcement their interest in the games may be less than it was prior to reinforcement.
This is certainly important research and points out the need for more studies on intrinsic motivation and self-reinforcement. But it is not that damaging to operant behavior modification programs. First, most of the research involves situations in which the subjects are reinforced for performing behaviors that are already high-probability behaviors. But these are not the types of behaviors we usually need to reinforce in applied settings. Also, we can minimize the suggested problems by such approaches as reinforcing a person only until the behavior becomes intrinsically reinforcing, phasing from extrinsic reinforcement to social and self-reinforcement, and supporting the development of intrinsic motivation.
A variation of operant procedures is contingency contracting, a program in which the operant contingencies are well-specified and clearly understood by everyone involved. These contingencies—reinforcements and punishments that can be expected for different behaviors—are formalized into a contract which is often written. Sometimes the contract is imposed on people; but often the best approach is to negotiate, as much as possible, with all people involved about the nature of the contract. Thus the role of the behavior modifier is often consultant and negotiator about contracting.
Benjamin Franklin employed many procedures for self-development that have a behavior modification flavor to them (Knapp & Shodahl, 1974). Franklin also introduced a simple form of contingency contracting when on a fort building expedition. The chaplain had low attendance at prayer meetings so Franklin suggested that the chaplain give the men their rum after prayers. This greatly increased attendance and punctuality. Franklin considered this method “preferable to the punishment inflicted by some military laws for non-attendance on divine service.”
Gupton and LeBow (1971) worked with two telephone solicitors who sold service contracts on household and garden appliances. They preferred to sell renewal contracts, as opposed to new service contracts, since renewal calls produced more sales. A contract was set up in which each solicitor had to make one new service sale to be given five renewal customers to call. This resulted in an increase in sales of both types of contracts. Removing the contingency resulted in a decline in sales for both types of contracts, particularly new service contracts.
Often when running a program, such as a smoking clinic or weight loss program, it is important that the clients attend the meetings and/or do homework assignments. One way to provide the necessary motivation is to have the clients deposit money or valuables, which they earn back by fulfilling a contract they agreed to (e.g., Mann, 1972). Thus a person may pay $50 for a clinic on how to stop smoking and be able to earn $30 of it back by attending meetings (e.g., $5 back for each of six meetings). Or a person may give the practitioner some records and photos that can only be earned back by losing specified amounts of weight.
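The deposit-refund arithmetic above can be sketched in a few lines of code. This is only an illustration of the contingency described in the text; the function name and default values are invented, not part of any actual program.

```python
# Sketch of a deposit-refund contract: a client deposits money and
# earns a fixed installment back for each meeting attended.
# (Function name and defaults are illustrative assumptions.)

def refund_earned(deposit, refundable, meetings_attended, total_meetings=6):
    """Return the refund earned so far.

    Of the `deposit`, only `refundable` dollars can be earned back,
    in equal installments, one per meeting attended.
    """
    per_meeting = refundable / total_meetings      # e.g., $30 / 6 = $5
    attended = min(meetings_attended, total_meetings)
    return per_meeting * attended

# The text's example: $50 deposit, $30 refundable, $5 per meeting.
print(refund_earned(50, 30, 4))   # attended 4 of 6 meetings -> 20.0
```

The unearned remainder (here, $20 plus any missed installments) stays with the clinic, which is what makes attendance the reinforced behavior.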
Therapists may also work out contracts with their clients in which such things as procedures, goals, and expenses are carefully specified (Goldiamond, 1974). This is a good way to come to grips with legal and ethical issues. Much of therapy would dramatically change if all therapists were paid for results, specified in a contract, rather than for time spent.
Contingency contracting is powerful in classroom situations (Hayes, 1976; Homme et al., 1969; Litow & Pumroy, 1975; Mikulas, 1974a). The teacher sets up a contract, perhaps with the help of the behavior modifier, specifying what is expected of the students, academically and non-academically, and what reinforcements they may expect for behaving these ways. Thus the students may be required to bring specified supplies, abide by a list of well-specified classroom rules, and turn in their homework completed to a specified degree. Reinforcements may include opportunity to spend a certain amount of time in a reward area or opportunity to work on a special project. Ideally the teacher has negotiated all aspects of the contract with the students and all students fully understand the contract.
Consider the contingencies operative in many classrooms below the college level. Teachers have a certain amount of material they wish to cover and work they wish completed. For the students the contingent event for completing some work is more work. Hence the students learn to work well below capacity, the teachers push for more to be done, and a certain amount of antagonism develops between teachers and students. Now with contingency contracting the teacher presents the work that needs to be done and asks the students what reinforcements they would like for completing the work and what sort of classroom rules can be established to facilitate this program. This results in the students and teacher working together to establish a mutually satisfactory contract. Such an approach generally results in a decrease in behavior problems, an increase in the students liking the classroom setting, and the students doing the work much faster than would be expected. Most teachers, particularly with younger children, spend most of their time being policemen. Contingency contracting provides a behavior management system that frees the teachers to do more teaching.
Although most classroom contracts, at least at first, emphasize non-academic behaviors, such as being in your seat by the time the second bell rings, academic behaviors can also be built into a contract. Thus a student might earn a reward for improvement in his mathematics skills, independent of his absolute level of proficiency (which may be reflected in his grade). Or the teacher may specify exactly what must be done to achieve a particular grade, an approach currently popular at the college level.
Contracts, such as those in the classroom, may apply to all the individuals as a unit, a group contract. If all students turn in their homework, the whole class gets five extra minutes of recess. This results in social pressure by the group to conform and the whole group being affected by the behavior of a few. A second approach is to have a single contract, which is applied to the people individually. A third approach is to gradually evolve individualized contracts in which each person has a personal contract geared toward his specific skills, needs, and problems. In classrooms, this is the point at which we can begin to truly individualize instruction.
Consistency is a critical aspect of most behavior change programs, while inconsistency can generate many problems. If a parent or teacher is consistent in dealing with a child, the child can easily learn what contingencies are operative and feels comfortable understanding how part of the world works. Inconsistency, on the other hand, may produce uncertainty, anxiety, tantrums, psychosomatic illness, learned helplessness (discussed later), and related problems. A parent or teacher who responds to a child more on the adult’s temporary mood than on the child’s behavior is more difficult for the child to understand than a parent or teacher whose behavior is more consistently related to the child’s behavior. Children and others also engage in rule-testing, the intentional breaking of a rule to determine if the contingency is in effect. If the system is consistent, there will be some rule-testing. If inconsistent, there will be much rule-testing. Although consistency is perhaps most important with children, it is also important with others. For example, inconsistency in a business setting may result in a drop in morale, feelings of favoritism, feeling powerless to control events, and not knowing what to expect.
A major strength of contingency contracting is that it teaches and requires people to be consistent. If one person fulfills his part of the contract, the other person must fulfill his part. This needs to be true even if the first person is taking advantage of an oversight or loophole in the contract, which will be altered later. This makes contracting in classrooms and homes popular with children because they can hold their teachers or parents responsible to an agreement, while before they may have felt at the mercy of the person in power. In classrooms, this often increases the motivation of students who may otherwise feel the teacher is biased against them.
All operant conditioning involves reciprocity, a mutual interchange of contingent events, usually reinforcements. Even teaching a rat to press a bar has this reciprocity, for the rat is reinforced with food for pressing the bar and the experimenter is reinforced for giving the rat food by the rat pressing the bar, since the experimenter wanted and is pleased by the pressing. The same is true of most human interaction situations; there is usually a mutual interchange of reinforcements. For example, in the classroom the teacher reinforces the students for various accomplishments and in turn is reinforced by these accomplishments. Contingency contracting is a way of establishing a level of reciprocity that is most satisfying for the various people involved. Thus it has proved a useful tool in marriage counseling (Azrin et al., 1973; Glisson, 1976; Hops, 1976; Jacobson & Martin, 1976; Stuart, 1969; Wieman et al., 1974) and families in general (Mikulas, 1976b; Stuart, 1971; Stuart & Lott, 1972; Weathers & Liberman, 1975).
People who live together, such as a married couple or parents and children, need a fair interchange of reinforcements. Often the reciprocity gets out of balance and a standoff develops with various people holding back what is reinforcing to others. For example, during marriage counseling a husband may say he has no desire to rush home from work to a complaining woman dressed as a slob; instead he often goes for drinks with various friends. The wife, on the other hand, reports she does not care how she looks for a man who comes home when he wants and then immediately turns on the television. Or a mother may report that her son does not let her know where he goes, does not do his chores around the home, and is generally too irresponsible to be allowed to do what he wants. The son, on the other hand, sees no reason to cooperate with his mother, since she does not let him do things all his friends are allowed to do. Situations such as these lend themselves to contingency contracting, emphasizing problem-solving rather than fault-finding. Thus a contract involving the mother and her son would involve clear specification of the son’s chores around the house and privileges the mother agrees to let him earn by doing the chores.
Basically, the behavior modifier acts as a negotiator discussing with the various people involved what they would like and expect from the others. This is combined, discussed, and negotiated into a formal, written, well-specified contract in which the various people agree to behave in specific ways. The contract provides a powerful way to get a fair reciprocity reinstated. As the various ways of interacting catch hold and support each other, the contract is gradually phased out. Through contracting the people learn when they are rewarding others and when they are being rewarded, how to provide feedback to each other, and how to negotiate with each other. Negotiation can be facilitated by the practitioner arranging hypothetical situations the clients can practice negotiating (Kifer et al., 1974).
In most cases of contracting the contract needs to be altered over time, adding new provisions or qualifications, plugging up loopholes, or renegotiating. However, the contract should usually not be changed retroactively, but only for the future. Contracts often have to be altered to find a good balance between behaviors and reinforcement. If too little reinforcement is given for a behavior, the behavior may not occur; if too much reinforcement is given, the system is inefficient and perhaps wasteful. Contracting is often most effective when accompanied by graphs, signs, reminders, and checklists posted in conspicuous places. Nothing should be left to memory. All aspects of the contract should be written and whenever someone completes part of the contract, it should be marked off or indicated in some written manner. This eliminates disagreements based on people’s different memories or perceptions of what is expected or took place.
Since contracting generally involves behavior change in all the people involved, it is an effective way of changing a person’s behavior even when that person sees most of the fault being with the others. For example, it is not uncommon for a teacher or parent to bring or refer a child to a practitioner because the child is misbehaving in some sense. Assessment may show that it is the teacher or parent who is responsible for much of the child’s misbehavior. Contracting then is an effective way to reasonably and honestly alter the adult’s behavior even though the adult sees it primarily as a way to change the child’s behavior.
Throughout this text remember that approaches such as desensitization and covert sensitization are being discussed independently, when in fact most actual problems will involve a combination of approaches and procedures. This is certainly true of contracting. Thus contracting may be a critical part of marriage counseling, but the practitioner may also need to deal with such things as sexual dysfunctions, communication problems, or difficulty in handling finances. Or a teacher may use contracting to handle basic classroom behavior and motivation, but still deal separately with many students’ problems or individual needs. A strength of contracting is that it provides a motivational framework into which other change programs can be fitted. For example, you may be doing contracting with a family. In addition, you may be helping the mother to stop smoking and desensitizing the daughter. Here the various aspects of the smoking program and desensitization can be incorporated into the contract.
Extrapolating from the discussion of families it can be seen that contracting could be a useful component of experimental communities, such as Twin Oaks (Kincaid, 1973), which was basically founded on Skinner’s Walden Two (1948), a novel of a utopian community using operant procedures. Of the many different types of experimental communities that rise and fall, a major cause of failure is not getting the work done (e.g., “I can’t plow the field until I get my head straight about Sally”). This leads to interpersonal problems and some people doing more than their share of work. Through contracting each person can agree to do a certain number of units of work in exchange for community resources and privileges. The system can be made broad enough to handle individual differences in skills and interests and allow flexibility in when the work is done. Miller (1976) describes a community in which contracting is the basis for a variety of activities, including sharing work, leadership, and self-government. This helps create a truly democratic self-governing system in which roles such as coordinator do not become power positions.
Contracting is also applicable in institutions such as prisons, mental hospitals, and halfway houses, although most of these use token economies, which are discussed later.
Finally, contracting can often be done by people with themselves, perhaps as a component of self-control (e.g., Epstein & Peterson, 1973; Mikulas, 1976a). Here even simple contracts are often effective, requiring the completion of one activity before engaging in a preferred activity. For example: first, I will finish the work in the yard, then I will go for a bike ride. For each set of five shirts I iron, I get to read a chapter in the novel. The reason this is effective is that many people have a tendency to do the opposite. Thus a person may have a tendency to watch television until in the mood to study, when contracting would require studying before television.

More complex contracting may involve rewards for reaching specific points along the path to the goal, such as buying a new record when 20 chapters of a text have been read and outlined in a specified manner. Contracting provides a source of motivation for whatever program is set up, and this motivation may or may not be sufficient for behavior change. For example, contracting may be sufficient to get the windows around the house washed, but not sufficient for weight loss. In the latter case, we need to add various behavior modification procedures to change the eating habits, with the contracting providing the motivation for doing the program.

Although operant procedures in general, including contracting, should emphasize reinforcement, some people find they need a punishment contract to motivate them. That is, the contract specifies a punishment, such as doing extra chores, if the person does not do what is required. For example, a graduate student having motivational problems completing his thesis may give his professor several checks in stamped envelopes made out to an organization the student dislikes. Each time the student turns in part of his paper to a specified criterion by a specified date he gets back one of the checks to destroy. Each time the student misses, his check is mailed to the organization.
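Both self-contract forms described above reduce to simple bookkeeping, which can be sketched as follows. The 5:1 shirts-to-chapters ratio comes from the text; the function names and the deadline bookkeeping are illustrative assumptions, not a prescribed procedure.

```python
# Sketches of the two self-contract forms described above.
# (Function names and details are illustrative assumptions.)

def chapters_earned(shirts_ironed, shirts_per_chapter=5):
    """Reward contract: each five shirts ironed earns one novel chapter.
    The preferred activity is gated on the required activity, not vice versa."""
    return shirts_ironed // shirts_per_chapter

def settle_checks(deadlines_met):
    """Punishment contract: one pre-written check per thesis installment.
    Returns (checks returned to destroy, checks mailed to the disliked
    organization)."""
    returned = sum(deadlines_met)
    return returned, len(deadlines_met) - returned

print(chapters_earned(12))                       # 12 shirts -> 2 chapters
print(settle_checks([True, True, False, True]))  # met 3 of 4 -> (3, 1)
```

Note that the reward contract only permits the preferred activity after the required units are complete, reversing the usual "television first, study later" ordering.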
In some contingency contracting programs the client is reinforced with tokens (e.g., poker chips, marks on a chart, punch holes in a special card) that can later be exchanged for a choice of reinforcers. Contingency contracting programs using tokens are called token economies. There are now a large number of such programs in a wide variety of settings (Kazdin & Bootzin, 1972). The tokens a person earns by completing his part of the contract are eventually exchanged for a choice of reinforcers from a reinforcement menu. By having a large number of items and privileges on this menu the tokens are reinforcing for most of the people most of the time, even though people will buy different things at different times. This reduces problems of a person satiating on any particular reinforcer or continually trying to determine what is currently reinforcing to any person.
A strength of token systems is that they deal with the issue of delay of reinforcement discussed earlier. The tokens are often easily dispensed and can be given fairly immediately after the desired behavior. For example, a teacher may walk around a classroom putting checks on each student’s small clipboard for appropriate behavior and accomplishment. These checks are immediately reinforcing, even though they will not be cashed in until later. They can also be dispensed without greatly disrupting the student’s work. Token systems are often used in home situations (e.g., Christophersen et al., 1972). A child may earn tokens every day, which maintains his behavior, even though his purchased reinforcement does not come until the weekend. Or the child may use some of his tokens for small daily rewards (e.g., staying up an extra half hour) and save others over a period of time for a larger reward (e.g., a new model airplane).
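The mechanics of a token economy, as described above, amount to a ledger plus a reinforcement menu. The following is only a sketch of that bookkeeping; the class design, menu items, and token prices are invented for illustration.

```python
# A minimal token-economy ledger: tokens are dispensed immediately
# after desired behavior and later exchanged for back-up reinforcers
# from a menu. (Class design, items, and prices are illustrative.)

class TokenEconomy:
    def __init__(self, menu):
        self.menu = menu        # reinforcement menu: item -> token price
        self.balances = {}      # client -> tokens currently held

    def earn(self, client, tokens):
        """Dispense tokens right after the desired behavior occurs."""
        self.balances[client] = self.balances.get(client, 0) + tokens

    def spend(self, client, item):
        """Exchange tokens for a back-up reinforcer; fails if the
        client has not yet saved enough."""
        price = self.menu[item]
        if self.balances.get(client, 0) < price:
            return False        # keep saving toward the larger reward
        self.balances[client] -= price
        return True

menu = {"extra half hour up": 5, "model airplane": 40}
home = TokenEconomy(menu)
home.earn("Bobby", 7)
print(home.spend("Bobby", "extra half hour up"))  # True; 2 tokens remain
print(home.spend("Bobby", "model airplane"))      # False; must save more
```

The small daily reward and the saved-for weekend reward from the text fall out of the same ledger: frequent small purchases or accumulation toward an expensive menu item.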
Most situations in which contracting is applicable also lend themselves to token economies. This includes classrooms, businesses, mental hospitals, prisons, half-way houses, homes, communities, and the military. The staff of an institution using a token system may also be on a token system.
Token economies in classrooms have been effective with a wide range of academic and social behaviors (O’Leary & Drabman, 1971; Payne et al., 1975; Walker & Buckley, 1974). This is particularly true if the teacher has positive expectations about the system and has been trained how to use praise, attention, and reprimands to aid in the shaping of behavior (O’Leary & Drabman, 1971). In addition, there are often changes in behaviors not specifically treated, such as increases in attention and class attendance. The tokens themselves may be used to teach math or simulate aspects of the real economy. Token systems are particularly useful when working with students who are behavior problems or have little motivation. Similarly, they are useful when working with retarded children (e.g., Welch & Gist, 1974).
A classic example of a token economy in a half-way house is Achievement Place, a family-style residential treatment program for pre-delinquent youths (Phillips et al., 1971, 1973). This is a home with two adults and six to eight boys who have gotten into trouble with the law. The boys live on a token economy in which they can earn tokens for learning social skills, academic skills, self-help skills, and pre-vocational skills. The tokens can be exchanged for such things as games, snacks, allowance, permission to go downtown, and special privileges. While the boys live in Achievement Place they go to regular school; and the practitioners consult and work with the boys’ parents and teachers. Eventually the boys are phased back into their homes. Follow-up suggests that as a result of this program there is a decrease in the probability the boys will later get in trouble with the law and an increase in the probability they will continue in school. Achievement Place has provided a model for similar programs elsewhere (e.g., Liberman et al., 1975).
Token economies have also been established in prisons (Musante, 1975), a domain of great potential significance, for they could provide the basis for truly rehabilitative programs. To date, unfortunately, most of these programs have been quite poor and have often merely been new names for standard, questionable disciplinary procedures, such as putting a person in solitary and requiring him to earn his way out by conforming to the guards’ wishes.
The best-known and one of the most important applications of token economies is in mental hospitals (Atthowe & Krasner, 1968; Ayllon & Azrin, 1968b; Carlson et al., 1972; Foreyt, 1975; Kazdin, 1975b, 1977; Schaefer & Martin, 1975; Ulmer, 1976). In many mental hospitals there is an inadequate number of staff to deal with all the patients, particularly if therapy is a long process only carried out by a few of the staff. This often results in the hospital being more of a custodial institution in which most of the patients are kept on drugs and receive little therapy. On many wards the patients do little more than sit, pace about, or watch television, for the contingencies are such that there is much they can do that will result in punishment, but little they can do for reinforcement. A token economy can dramatically change all of this.
With a token economy the patients can be gradually shaped to do more and more—such as taking care of themselves, learning social and vocational skills, attending and participating in physical or psychological therapy, and generally taking control of their lives. With their tokens they may buy such things as recreational opportunities or commissary items. The ward attendants and other staff can be trained to implement the program, thus providing considerable treatment for all the patients. This also frees the practitioner to supervise the overall program and tend to specific needs of individual patients. One report (Greenberg et al., 1975) suggests that such programs can be made even more effective by having the patients involved in decision making about treatment procedures.
As a patient gradually improves he may be moved to situations or wards where he has greater responsibilities and greater privileges. Eventually the client may be phased out of the hospital and phased off the tokens onto more natural sources of reinforcement. By this time self-reinforcement and the reinforcement from improvement may be sufficient and the tokens are more for back-up support. Transition into the real world needs to be gradual and carefully considered. Such a transition may be aided by a half-way house, a living situation in the community whose living conditions are half-way between the hospital system and the outside community. Or a community-based program may help the transition (e.g., DeVoge & Downey, 1975).
Although token economies have made dramatic and successful changes in mental hospitals, there are many problems in evaluation of such programs (see Carlson et al., 1972; Gripp & Magaro, 1974). For example, in addition to the token system, the patients also may receive more attention or better physical environments. Although these effects can be factored out in controlled experiments, little such research has been done. The controls that have been used are often superficial and/or not well specified. We need research factoring out the effects of different components of token systems, the effects of different parameters of these components, and comparisons with various different types of treatment.
Although there have been many successful token economies with psychiatric patients, there have also been many problems and failures (see Atthowe, 1973; Hall & Baker, 1973; Kazdin, 1973b). These problems include the following: There is often an enormous heterogeneity of patients, making it difficult to devise a program complex enough to help them all. The program may be missing important needs and problems of the patients and thus the program should be more individualized. Some patients remain unresponsive; we need more information about such patients. Some of the staff may be uncooperative, be responding incorrectly to the patients, or need more training. Similarly, antagonistic or uncooperative administrators or outside communities may hurt the program. Finally, it is important to pay more attention to the global system and look at it in terms of basic economic principles such as wages, prices, and savings. All these problems can be seen to apply, in varying degrees, to other types of token economies.
In any behavior change program there are always important ethical, and sometimes legal, issues to be considered. In the case of token economies, particularly in mental hospitals and prisons, legal constraints are related to the person’s personal rights (see Wexler, 1973). The courts have decided, and will be deciding for a while, that patients and prisoners have basic constitutional rights, including a comfortable bed and adequate meals and the opportunity to attend religious services, receive visitors, interact with members of the opposite sex, and go on regular trips outdoors. Also, a patient who does work related to hospital functioning must be paid minimum wage, even if such work is considered therapeutic. Thus a patient cannot be required to earn tokens to buy a meal; the patient has a basic right to the meal.
Operant reinforcement strategies are some of the most powerful behavior change approaches available. Contingency contracting and token economies are ways of formalizing these approaches and thus often making them more effective. Now I turn to operant approaches for decreasing undesired behaviors. But remember that in most situations in which you are decreasing one behavior, you should be reinforcing and increasing another so that desired behaviors are encouraged and the person continues receiving reinforcement.
Establishing a contingency between a behavior and a contingent event is operant conditioning; terminating this contingency is operant extinction. Reinforcing a behavior increases the probability of that behavior; withholding the reinforcement decreases the probability. A patient in a mental hospital may learn to emit psychotic talk because it gets him extra attention from the staff and other patients. Not reinforcing this type of talk may cause it to extinguish and thus occur less. Williams (1959) reported the case of a 21-month-old male whose tantrums were reinforced by parental attention. After he was put to bed, if the parents left before he went to sleep, he would scream until they returned to the room. This tantrum behavior was easily extinguished by simply letting him scream and rage at night without reinforcing him—that is, by not returning to the room. Eventually, there were no more nighttime tantrums.
However, a person does not learn a single behavior in response to a stimulus, but rather a whole hierarchy of behaviors. The behavior at the top of the hierarchy is the most likely to occur, the second behavior the next most likely, and so on down. The position on the hierarchy and the distance between items on the hierarchy are functions of how many times the behaviors have been reinforced. If the top behavior is extinguished, then the second behavior will occur. And if this behavior is considered undesirable, it too will have to be extinguished. Thus the problem with the extinction procedure is that considerable time may be spent working down the hierarchy until a desirable behavior is reached. For this reason the extinction procedure is generally inefficient unless the hierarchy is small, as with many problems with children. It is generally better to emphasize reinforcing a desired behavior in place of the undesired behavior.
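The hierarchy idea can be sketched as a table of behaviors ordered by reinforcement history: the most-reinforced behavior is emitted, and extinguishing it lets the next one down emerge. The behaviors and counts below are invented for illustration, and dropping an entry is of course only a crude stand-in for the gradual process of extinction.

```python
# Sketch of a response hierarchy: behaviors ranked by how often each
# has been reinforced. Extinguishing the top behavior lets the next
# behavior in the hierarchy emerge. (All names and counts are invented.)

def most_probable(hierarchy):
    """Return the behavior with the largest reinforcement count."""
    return max(hierarchy, key=hierarchy.get)

hierarchy = {"tantrum": 40, "whining": 15, "asking politely": 3}
print(most_probable(hierarchy))   # tantrum occurs first

hierarchy.pop("tantrum")          # crude stand-in for extinguishing it
print(most_probable(hierarchy))   # whining now occurs
```

This also shows why extinction alone is inefficient when the hierarchy is long: each undesirable rung must be extinguished in turn before "asking politely" finally surfaces, whereas directly reinforcing the desired behavior moves it up the hierarchy immediately.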
Another problem is that it may be difficult or undesirable not to attend to some behaviors, such as destructive or disruptive behaviors. Extinction may also have emotional side effects such as frustration, anger, or confusion. These side effects are minimized if we are simultaneously reinforcing alternative behaviors.
Cautela (1971) has suggested covert extinction in which the client imagines doing the undesired behavior and not being reinforced. At the present there is little evidence on the effectiveness of this approach and when it would be most applicable. Cautela suggests it would be useful when you cannot control the environmental contingencies or when the client will not cooperate with regular extinction.
Many people have a nervous habit such as a tic, biting fingernails, some forms of stuttering, and some typing errors including repeating a letter. Two ways of dealing with these habits are negative practice and habit-reversal.
Negative practice is the reduction of a nervous habit by continually repeating the response in as realistic a way as possible (Dunlap, 1932). A person with a nervous twitch in the mouth would intentionally make this twitch repeatedly until fatigued. Wooden (1974) described the case of a 26-year-old man who for 25 years had been banging his head into his pillow while asleep, resulting in restless sleep and damage to the skin of his forehead. Negative practice consisted of banging his head over and over in the manner he did when asleep, as observed and photographed by his wife. The negative practice was done before he went to sleep and done to the point of being aversive. Four such sessions basically eliminated the habit and resulted in peaceful sleep and less fatigue and anxiety during the day.
The data on the effectiveness of negative practice are mixed (see Rimm & Masters, 1974, p. 325). There are many reports of successful and unsuccessful cases. It is also not clear why it works. My bias is that it is primarily discrimination learning. The practice causes the person to learn to detect the stimuli associated with the habit. Later, when the habit is occurring or beginning to occur, the person will be more able to stop or reduce it. Other explanations and components include extinction, due to the habit occurring without being reinforced, and the suppressive effects of punishment and fatigue that result from the excessive practice.
Habit-reversal is a more complicated program for dealing with nervous habits (Azrin & Nunn, 1973). The client is first taught to be aware of each occurrence of the habit. Then the client is taught to make a response that is incompatible with the undesired response; for example, clenching the fists at one’s sides is incompatible with nail-biting. This incompatible response is made whenever the undesired habit occurs or is about to occur, and the client is taught how to do this in everyday situations. Finally, the practitioners increase the client’s motivation to decrease the habit and carry out the program. This involves increasing social support for the change and reducing any reinforcement supporting the habit. Habit-reversal was reported as effective with habits such as nail-biting, thumb-sucking, and head-jerking. The habits were reduced by an average of 95 percent after the first day of training, with no recovery during the several months of follow-up. A variation of this approach has been successfully used with stuttering (Azrin & Nunn, 1974).
The most common approach people use to reduce undesired behaviors, particularly in others, is punishment. This consists of applying a contingent event to a behavior that results in a decrease in the probability of the behavior. As mentioned earlier, there are two types of punishment, positive and negative.
Positive punishment is a contingent event whose onset or increase results in a decrease in the probability of the behavior it is contingent upon. If each time Richard starts eating his mother’s house plants she shows disapproval and if this disapproval reduces the probability of Richard eating the plants in the future, then the disapproval is positive punishment. Disapproval, criticism, pain, and fines are common forms of punishment.
There are many theories about punishment and its effects (Church, 1963; Johnston, 1972; Solomon, 1964). The effects of punishment include the following: By definition, the punishment has a suppressive effect on behavior: it reduces its probability of occurrence. This does not necessarily mean the behavior will extinguish more readily, only that it is suppressed. The punishment elicits various emotional reactions and possible motor reactions. The punished person may learn whatever behavior is associated with the offset of the punishment (negative reinforcement). And the punished person associates the effects of the punishment with the situations and people involved with the punishment (respondent conditioning). Varying importance is given to these factors in the different theoretical accounts of punishment.
As a behavior change procedure, punishment has many disadvantages and possible bad side effects: Punishing an undesirable behavior does not necessarily result in desirable behaviors. Punishing a child in a classroom for throwing things during self-work time does not necessarily result in the child shifting to working alone. Perhaps self-work behaviors are not in the child’s repertoire. Punishment may condition reactions such as fear, anxiety, or hatred toward the people who administer the punishment or the situations in which it occurs. Thus children may fear their parents, students may dislike school, criminals may resent society, and workers may not fully cooperate with their foreman. Related to this, the person may learn to escape or avoid these people or situations, resulting in such possibilities as a school phobia or an increase in absenteeism from work. Attempted punishment of an escape or avoidance response may instead increase the strength of the avoidance. Punishing a child with a fear of the dark for not going into the basement at night alone may actually increase the fear. The punished person may spend some time making up excuses and passing the blame to others. The punishing agents may act as models (Chapter 8) for aggressive behavior. Children may model after their parents and learn to hit people when mad; workers may model their supervisors and become overcritical of the errors of their subordinates. Finally, punished people may become generally less flexible and adaptable in their behaviors.
For reasons such as these, it is usually desirable to minimize or avoid the use of punishment. However, our culture is very punishment oriented. One reason is that people often punish out of their own anger or inability to handle a situation. Also the immediate suppressive effects of the punishment are reinforcing to the punishing agent, even though the long-term effects of the punishment may be undesirable. This is another example of how a short delay of reinforcement has a greater effect on behavior than do longer delays. You will run across many situations, particularly with parents and teachers, in which they want to know effective ways of stopping undesired behaviors, such as more effective forms of punishment. In most of these situations you need to turn it around and emphasize ways of increasing desirable behaviors, as with reinforcement procedures.
If punishment is to be used, it needs to be applied immediately after the behavior and applied consistently. The earlier in the response chain the punishment occurs the better, for then it may stop or disrupt a sequence of undesired behaviors. Punishment should generally be coupled with extinction and reinforcing of alternative behaviors. If possible the punishment should be viewed, by all people involved, as part of a contractual agreement rather than a personal attack. Despite all my qualifications about punishment, many situations exist in which it seems effective and desirable (Baer, 1971; Lovibond, 1970).
Lang and Melamed (1969) worked with a nine-month-old child weighing 12 pounds whose persistent vomiting prevented weight gain. Various types of treatment (e.g., dietary changes, use of antinauseants, small feedings at a time, establishing a warm secure feeling in the child) had been ineffective and there was a chance the child would die. Lang and Melamed used an electromyogram (EMG), which measures the activity of muscles, to determine the beginning of vomiting. The child received shock to the leg when the EMG showed vomiting beginning, and the shock went off when the vomiting stopped. A total of nine such punishment sessions ended the problem, and one month later the child weighed 21 pounds. In a similar case, a six-month-old child was punished for vomiting by squirting lemon juice in her mouth (Sajwaj et al., 1974). This effectively stopped the vomiting.
Kushner (1968) worked with a 17-year-old girl who could not stop sneezing, averaging a sneeze about once every 40 seconds. Neurologists, allergists, psychiatrists, hypnotists, and others had been no help. Kushner hooked her up to a device which gave her electric shock to the fingers every time she sneezed. After four and a half hours of treatment the uncontrolled sneezing was gone for good.
Punishment is often used more for its disruptive effects than suppressive effects. As part of a self-control program a person may wear a rubber band around his wrist which he snaps on the underside of his wrist to disrupt unwanted thoughts or feelings. (A parallel of this is thought stopping discussed in Chapter 9.) Also just wearing the rubber band then acts as a reminder about his behavior. This disruptive effect of punishment is a key in most behavioral treatments of autism.
Childhood autism is a poorly defined diagnostic category, but in its extreme includes behavioral characteristics such as the following: The child has little or no speech; some children will imitate sounds, some will not. Similarly, they do not respond to language or other social cues. People often seem to be just objects to the autistic child. Part of the problem may be overselective attention. The child may appear deaf or visually impaired when he is simply not responding to that sense mode. Autistic children generally engage in some type of self-stimulating behavior, such as whirling or flapping of arms. Autistic children may also engage in tantrums and self-mutilating behaviors, such as chewing their shoulders or biting off fingers. Such children are often kept bound spread-eagle on a bed. Many autistic children will spend the rest of their lives in institutions.
Lovaas and his associates have probably made the most progress in the treatment of autism (Lovaas et al., 1973; Schreibman & Koegel, 1975). They use basically an operant approach utilizing shaping, modeling, and guidance to gradually teach the child to imitate, speak, read, and write. This then leads to learning more complex personal and social behaviors. At first they have to use very basic reinforcers, such as food and hugs, until the child is responding to social reinforcers like approval. Punishment—in the form of slaps or electric shock—is necessary to disrupt tantrums or self-mutilating behaviors. That is, the punishment disrupts these behaviors so that the practitioner has the opportunity to shape and reinforce desirable behaviors. All children improved as a result of this treatment program, some much more than others. After eight months of treatment some children showed spontaneous use of language and spontaneous social interactions. Children who were returned to parents who had been trained in behavior modification continued to improve. Children whose parents sent them to institutions regressed to their old behaviors.
Similarly, Tanner and Zeiler (1974), working with a 20-year-old autistic woman who injured herself, reduced her slapping herself with the punishment of fumes from ammonia capsules. On the other hand, self-injurious behavior may be reduced by building in alternative behaviors (Azrin et al., 1975). And some people working with autistic children are teaching them sign language as a goal in itself and as a first step to possible normal speech (Offir, 1976).
Azrin and his associates have been experimenting with a form of punishment they call overcorrection (e.g., Foxx & Azrin, 1973a). In positive practice overcorrection the client is required to practice correct behaviors each time an episode of the undesired behaviors occurs. A child marking on the wall might be required to copy a set of patterns with pencil and paper. In the case of an autistic or hyperactive child who is pounding objects or himself, he would be told that his behavior was inappropriate, and the behavior would be stopped. Then the child would be given verbal instructions, and physical guidance if necessary, for the overcorrection behavior; in this case, a few minutes of instruction in putting his hands at his sides, then over his head, then straight out, and so forth.
In restitutional overcorrection or restitution, clients must correct the results of their misbehavior to a better-than-normal state. A child who marks on the wall may be required to erase the marks and wash the entire wall as well. A child who turns over chairs may be required to set up those chairs and straighten up the rest of the furniture. Screaming may require a period of exceptional quiet. Creative judges sometimes use restitution in their sentences. Thus if two juveniles vandalized the home of an elderly couple, a good sentence might involve the offenders repairing what they did, as well as doing other work around the vandalized house. This would make the juveniles more aware of the results of their misdeeds on others.
Azrin and Wesolowski (1974) used restitution to stop food stealing by retarded adults. If a client were caught stealing, he not only had to return the stolen object, but also give the victim an additional object of the same kind. This stopped food stealing in three days and was more effective than a simple correction procedure in which the person only returns the stolen object. Also working with institutionalized retardates, Webster and Azrin (1973) found that an effective way of treating agitative-disruptive behavior was to require the client to spend two hours relaxing in bed. If the client was disruptive during the last 15 minutes, 15 additional minutes were added to the two hours. This resulted in a rapid reduction in such things as self-injury, threats, physical aggression, and screaming.
Covert punishment would consist of carrying out the punishment in the imagination. There is almost no information on such an approach. Moser (1974) worked with a 24-year-old male “paranoid schizophrenic” who had auditory and visual hallucinations of his deceased brother and mother. The hallucinations were eliminated by teaching the client to punish them with thoughts of eating cottage cheese which the client disliked. Also Cautela’s conceptualization of covert sensitization (Chapter 6) is covert punishment.
The discussion of punishment so far has emphasized positive punishment. I turn now to negative punishment, a contingent event whose offset or decrease results in a decrease in the behavior it is contingent on. This generally consists of taking away something that is reinforcing from a person when he misbehaves. The procedure of negative punishment generally also results in positive punishment and/or extinction. Hence at the present it is not possible to specify exactly what effects are specifically due to negative punishment. In behavior modification there are two major forms of negative punishment, response cost and time out.
Response cost is the withdrawal or loss of a reinforcement contingent on a behavior. This may be the loss or fine of tokens in a token system, such as a fine for the use of the word “ain’t” in Achievement Place. Response cost has been used to suppress a variety of behaviors such as smoking, overeating, stuttering, psychotic talk, aggressiveness, and tardiness (Kazdin, 1972). Possible advantages of response cost are that it may have fewer aversive side effects than positive punishment and it leaves the person in the learning situation, which time out does not. But much more research is needed in this area.
Time out (or time out from reinforcement) is the punishment procedure in which the punishment is a period of time during which reinforcement is not available. For example, time out has been an effective punishment procedure in classrooms. If a child misbehaves, he may be sent to spend ten minutes in a time-out area, perhaps a screened-off corner in the back of the classroom. For time out to be effective, the area the client is removed from must be reinforcing to him. The classroom should be a reinforcing place, and being in time out may result in a period of time in which the student cannot earn tokens. Also the time-out area should not be reinforcing. In a home, sending a child to his room may not be a good time out, as the room may be filled with reinforcers. Usually just a few minutes in time out is sufficient; and it often gives the punished person a chance to cool off.
Cayner and Kiland (1974), working with three hospitalized patients diagnosed as chronic schizophrenics, used a time out which consisted of five minutes in a ward bedroom that only had a bed in it. This time out effectively eliminated behaviors such as screaming and swearing, tantrums, and self-mutilation. Again, much research is needed on time out. MacDonough and Forehand (1973) have suggested the following parameters that need to be investigated: whether a reason is given for time out, whether the person was first given a warning, ways of getting the person into the time-out area, duration of time out, presence or absence of a signal to indicate onset or offset of time out, whether the time-out area is isolated from where the misbehavior occurred, schedule of time out (e.g., continuous versus intermittent), and whether the person must behave in some way to be released from time out.
Finally, there is also the possibility of covert negative punishment, negative punishment carried out in the imagination. But there is currently almost no information on this. One study reported reducing some eating behaviors by having the clients imagine the loss of something reinforcing, such as having a car stolen (Tondo et al., 1975).
So far in this chapter I have discussed two major ways of reducing undesired behaviors, extinction and punishment. A third way is to reduce the reinforcing effects of the events supporting the undesired behavior. Aversive counterconditioning (Chapter 6) is a way to do this. A related approach is stimulus satiation in which the client is flooded with the reinforcer repeatedly until it loses much or all of its reinforcing effect. A child who keeps playing with matches might be sat down with a large number of matches to strike and light. This would be continued until lighting matches lost their reinforcing effect. It is not known how or why stimulus satiation works, but it seems to contain components of aversive counterconditioning and respondent extinction of reinforcing effects.
Ayllon (1963) worked with a 47-year-old, hospitalized female diagnosed as a chronic schizophrenic. One of her problems was hoarding towels; she had 19 to 29 towels in her room at any one time, with the nurses removing towels twice a week. Treatment consisted of intermittently giving her towels during the day, starting with 7 per day and increasing to 60 per day by the third week, and not removing towels from her room. When the number of towels in her room reached 625, she started taking them out and no more were given her. During the next year, she only averaged 1.5 towels in her room per week.
Stimulus satiation has been used in the treatment of smoking by dramatically increasing the number of cigarettes smoked (Resnick, 1968) and/or the rate of smoking the cigarettes (Lichtenstein et al., 1973). In one study a metronome was used to have the clients smoke every six seconds (Lichtenstein et al., 1973). This stimulus satiation produced a significant reduction in smoking, with 60 percent of the subjects abstinent at six months. The treatment was as effective as aversive counterconditioning, using hot cigarette smoke blown in the face, and as a combination of stimulus satiation and this aversive counterconditioning. Others (e.g., Lando, 1975) have not been as successful using a variation of stimulus satiation with smokers. Also, rapid smoking should not be used with some clients with respiratory or cardiac problems.
Many operant procedures have been discussed separately in this chapter. However, it must be remembered that in any operant program or operant analysis of a situation it is necessary to consider and combine the range of operant variables and procedures discussed in this chapter. This includes stimulus control, reinforcing desirable behaviors, contracting, extinguishing and punishing undesirable behaviors, and changing the reinforcing effects of some events. Even more important is the frequent necessity of combining operant procedures with other approaches and procedures, including those in the rest of this book. For example, let us think about operant conditioning together with respondent conditioning (Chapters 3-6).
We begin with the stimuli, external and internal, as perceived and interpreted by the person. Internal stimuli include thoughts and cues associated with emotions and bodily activity. Some of the external and internal stimuli will be conditioned stimuli eliciting a range of conditioned responses of various strengths. Some of the stimuli will be discriminative stimuli cuing various possible operants. Part of our job is identifying and perhaps altering these different types of stimuli. Next is the motivation of the person. Part of the motivation may be based on conditioned responses, such as anxiety or anger, which can be altered respondently. Part of the motivation may be based on anticipation of reinforcement and punishment, which can be altered operantly. In the presence of specific stimuli and specific motivation, the person will behave in some way based largely on past learning. Here we can provide training in alternative ways to behave in such situations. Finally, there are certain consequences to people because of their behavior, including reinforcement and punishment. Dealing with the contingencies of these consequences is operant conditioning, while altering the reinforcing or punishing effects of an event may involve respondent conditioning.
Operant conditioning is based on the effects of contingent events, events contiguous with some behavior. Now we distinguish between two different types of contingent events, dependent and non-dependent. A contingent event is a dependent event when it occurs only if a specified behavior occurs first; otherwise it is a non-dependent event. That is, dependent events occur only if the person acts a certain way, while non-dependent events occur independent of what the person does. Operant conditioning only requires that the event be contingent, whether dependent or non-dependent. However, most examples of operant conditioning, most of this chapter, and perhaps all applied operant programs are based on dependent contingent events. Here I consider the effects of non-dependent events.
If the non-dependent event is a reinforcement, the person may be reinforced for doing something not causally related to the reinforcement. Such behavior is called superstitious behavior (Herrnstein, 1966). For example, a therapist may decide to try some new therapy on his clients. And the clients may improve for reasons other than the specific form of therapy, perhaps because of placebo effects or personal changes outside of therapy. Here the improvement of the clients may be a reinforcement for the therapist’s superstition of doing the new therapy. Because superstitions are often maintained on an intermittent schedule of reinforcement, they are difficult to extinguish.
If the non-dependent event is a punishment, the result may be learned helplessness, a passive-resigned state resulting from learning the independence of behavior and consequences (Seligman, 1975). That is, if the person learns that things happen to him regardless of how he behaves, he may become passively resigned to simply take what happens, with little effort to influence the outcomes. This is true of uncontrollable reinforcers as well as uncontrollable punishment; but the latter is the area in which most of the research has been done. A child in a classroom or a patient on a hospital ward who perceives that his behavior has little effect on what happens to him may develop learned helplessness. This is one reason to have consistency in our operant programs. Learned helplessness may be a component in a wide range of behavior problems, including the child who is withdrawn, the adult who is unassertive and indecisive, some forms of depression, and perhaps the acceleration of death in some old people.
It is useful to keep in mind that all of operant conditioning is a subset of the general area of feedback, information to individuals about the effects of their behavior (see Mikulas, 1974b, chap. 6). To move your arm requires feedback from the muscles of your arm about the effects of movement. Speech utilizes feedback from the tongue and lips, as well as auditory feedback from hearing your own voice. Education is guided by feedback on tests and papers. Political positions are sometimes altered because of feedback from voters via polls or mail. Every time we do something—from a simple movement to a complex social interaction—we receive varying amounts of feedback about what effects our behavior had on ourselves, others, and our environment. This feedback guides our current and future behavior.
Feedback may have one or more of these effects: (1) The feedback may be a reinforcement or punishment. Receiving an A on a test may be rewarding to a student so that he maintains the same approach to studying for the next tests. (2) The feedback may produce changes in motivation, such as the goals a person sets for himself. Receiving a D on a test may motivate the student to work harder in the class. (3) Feedback may provide informative cues that guide learning and performance. A person who does poorly on a test may see that it is because the test emphasized the class lectures which the student ignored. (4) Feedback may provide a new learning experience or rehearsal of previous learning. When getting a test back a student may learn the correct answers to questions that he did not know.
Keeping in mind that operant conditioning is part of feedback keeps us from overlooking the other important effects of feedback. When parents punish their children, they should also give them feedback about exactly why they are being punished (the punishment contingencies) and what the preferable alternatives are. Managers should not simply praise their workers, but also point out what the workers did that is praiseworthy.
One study used operant conditioning to reduce phobias by encouraging the subjects to spend more and more time in the feared situation to extinguish the fear (Leitenberg et al., 1975). There was not much initial progress using only contingent praise. However, there was dramatic improvement when the subjects were given precise feedback about their performance.
Another subset of feedback is the area of biofeedback, use of mechanical devices to provide knowledge of the activity of a body function for which the person has inadequate feedback (Brown, 1975; DiCara et al., 1975; Yates, 1975, chap. 8). For example, a person may be hooked up to a device that provides him continuous feedback about his blood pressure. Through such biofeedback the person may learn to raise or lower his blood pressure at will. Biofeedback has been used for a wide range of applied problems, including improving reading by decreasing subvocalization via biofeedback from the Adam’s apple, reducing tension headaches by relaxing muscles in the neck and head measured by biofeedback, reducing migraine headaches by decreasing the relative flow of blood to the head, and generating specific brain waves that may facilitate relaxing. Biofeedback is a useful tool, but it is often inferior to procedures that do not require or depend on mechanical devices. For example, a person with tension headaches may profit more from extensive muscle relaxation training (Chapter 3) for these specific muscles. This way the person can discriminate and regulate these muscles without a mechanical device. On the other hand, the biofeedback may facilitate the early stages of muscle relaxation training.
Feedback is one of the major sources of variables affecting human behavior. Altering feedback is one way a behavior modifier can alter behavior. And operant conditioning deals with some powerful alterations in feedback.
Human behavior is strongly affected and guided by feedback, information about the consequences of one’s behavior. Feedback produces motivation and learning changes, including those of operant conditioning. The emphasis of operant conditioning is on changes in the probability of a behavior in the presence of specific stimuli as a result of events contingent on the behavior. A reinforcer increases the probability of a behavior it is contingent on; a punisher decreases the probability. The contingent event is usually dependent on the behavior and occurs because of the behavior. Non-dependent events may lead to superstitious behavior and/or learned helplessness. Behavior modification procedures based on operant conditioning include altering the stimuli that cue operant behaviors, reinforcing desired behaviors, punishing and/or extinguishing undesired behaviors, and changing the reinforcing or punishing effect of contingent events. Stimulus control, including narrowing and stimulus change, involves removing or altering stimuli that cue undesired behaviors and/or introducing stimuli that cue alternative behaviors. The first step in reinforcing behaviors is determining a reinforcer. This may involve observing or asking the client about reinforcers and perhaps letting the client try the reinforcer (reinforcer sampling). Reinforcers include tangible items, opportunities to do things such as high-probability behaviors, social approval and recognition, pleasing thoughts, and self-reinforcement. Procedures to get a behavior to occur to reinforce it include shaping, modeling, fading, punishment, and guidance. Initial learning is usually best when the reinforcer occurs immediately after every example of the correct behavior (short delay of reinforcement, continuous schedule of reinforcement). Extinction is the return of the probability of a behavior toward its initial value (baseline) after the contingent events have been removed.
Use of an intermittent schedule of reinforcement increases resistance to extinction. Punishment as a change procedure should generally be avoided because of undesirable side effects; but it can be used effectively to disrupt or suppress an undesired behavior while a desired alternative is being strengthened. Positive punishment procedures include administering an aversive event and overcorrection, while negative punishment includes a withdrawal or loss of a reinforcer (response cost) and a period of time during which reinforcers cannot be acquired (time out). The reinforcing effects of an event can be reduced by aversive counterconditioning or stimulus satiation. Nervous habits can be reduced by negative practice and habit reversal. Contingency contracting is a formalized operant program in which the contingencies are well specified and usually negotiated. Contracting facilitates people learning to respond consistently with each other and the development of a reasonable reciprocity of expectations and demands. A token economy is contingency contracting in which the reinforcers are tokens that can later be exchanged for a choice of rewards.
1. Give three reasons why it would be advantageous to establish a baseline before beginning an operant program. (Some reasons are given in Chapter 2.)
2. List two different types of tokens and three different possible reinforcers that could be used in each of the following settings: a nursery school, an automobile assembly plant, the army.
3. In your life: (a) What are the three most important types of reinforcement? (b) What is an unusual reinforcer? (c) How did these events come to be reinforcing?
4. Give examples of secondary gain from two different hypothetical cases.
5. Design and describe a training exercise you would use to help teachers identify the sources of reinforcement affecting classroom behavior.
6. How should grades be used in college and high school? As reinforcers? As a measure of accomplishment, regardless of how long it took? As a measure of accomplishment within a set time period? As an estimate by the instructor of the student’s basic skills and knowledge in the area? Why? What are the effects of these different grading approaches on student behavior and on people (e.g., graduate schools, businesses) that use grades in their selection processes?
7. In general, would you expect reinforcer sampling to be more useful with mental patients or college students? Why? What are the implications of your answer?
8. Design and outline a program you might set up in a prison to help the inmates learn to postpone immediate gratification and respond to contingencies with a much longer delay of reinforcement.
9. Contingency contracting generally ensures that everyone understands the nature of the operant contingencies. What are the advantages of this? When might this be disadvantageous in an operant program? What ethical issues are involved?
10. Consider a mental patient with no physical disabilities who for the last five years has always been bathed, dressed, and fed by others. Outline a shaping program to help this person become more self-sufficient.
11. What are the relationships among discriminative stimuli, fading, narrowing, and operant extinction?
12. Outline a program for improving study habits that uses stimulus control, shaping, and contracting.
13. Describe an ideal high school in which all learning is totally individualized via contracts. What traditional social and educational ideas and values would be challenged by such an approach?
14. Draw up a contract for a hypothetical couple you have been working with in marriage counseling.
15. Assuming you are the head of a token-economy halfway house for drug abusers, describe some of the things you would do to facilitate the behavior changes from the halfway house carrying over to the real world to which the clients are returned.
16. Outline the steps you would go through in establishing a token economy in a kindergarten.
17. What are some important considerations in establishing a token economy in a prison? Would it be desirable, practical, and socially acceptable to allow prisoners to earn their way out of prison by acquiring personal, social, and vocational skills?
18. Set up a contract for yourself for at least one week. What did you do? How did it work? What would you do differently next time?
19. What are the procedural differences between the operant punishment of this chapter and the aversive counterconditioning of the last chapter? Describe a situation in which these differences would be significant.
20. Distinguish between positive punishment and negative reinforcement. Give an applied example in which the same event is used for both.
21. Give a classroom example of positive-practice overcorrection and a business example of restitutional overcorrection.
22. What are the practical similarities and differences between response cost and time out? When would you use each one? Give examples of each for an elementary classroom and a ward in a mental hospital.
23. What are the implications of having a punishment-oriented culture? What may be done to change this? How about punishing people who use too much punishment?
24. Give three different examples, other than those in the text, of situations in which you would use operant extinction as your major change approach.
25. Describe a habit-reversal program for nail-biting.
26. Give two examples, other than those in the text, of situations in which you would use stimulus satiation. Design and describe a self-control approach we might call “covert stimulus satiation.” Give an example of how this would be used.
27. Outline the operant components in a program for an alcoholic.
28. What is the relationship between phobias and avoidance conditioning? Give an example and show the interrelationships between operant and respondent variables.
29. In the context of reducing fear, what are the similarities between operant procedures (shaping, fading, reinforcement) and respondent procedures (use of a hierarchy, incompatible responses)? What does this mean in terms of separating operant and respondent variables? What are the practical implications?
30. Design and outline a general self-control strategy that incorporates the ideas of covert reinforcement, covert punishment, covert extinction, and covert sensitization. When would this approach be part of your general program?
31. Including ideas of learned helplessness, briefly describe the genesis of extreme social withdrawal in a hypothetical case of a 10-year-old girl. If not corrected, how might this problem lead to depression in later life?
32. Make up and briefly describe a “feedback therapy” in which all therapeutic approaches are conceptualized in terms of feedback.
Gentry, W. D. (ed.). Applied behavior modification. St. Louis: C. V. Mosby, 1975.
Kazdin, A. E. Behavior modification in applied settings. Homewood, Ill.: Dorsey Press, 1975.
Malott, R. W., Ritterby, K., & Wolf, E. L. C. (eds.). An introduction to behavior modification. Kalamazoo, Mich.: Behaviordelia, 1973.
Schaefer, H. H. & Martin, P. L. Behavioral therapy. 2d ed. New York: McGraw-Hill, 1975.
Skinner, B. F. Science and human behavior. New York: Macmillan, 1953. Free Press paperback, 1965.
Skinner, B. F. Walden Two. New York: Macmillan, 1948. Macmillan paperback, 1962.
Sundel, M. & Sundel, S. S. Behavior modification in the human services: A systematic introduction to concepts and applications. New York: Wiley, 1975.
Whaley, D. L. & Malott, R. W. Elementary principles of behavior. New York: Meredith Corporation, 1971.