William M. Baum
University of California, Davis and University of New Hampshire
I became convinced of the inadequacy of the standard model of operant behavior while still in graduate school in the 1960s. The notion of discrete responses strengthened by immediately following reinforcers failed to explain even laboratory phenomena, let alone everyday behavior. For example, try to use the model to explain why activity rates are maximal on ratio schedules or why people work for wages. Something different is required.
The matching law, which Herrnstein discovered in 1961 and which held that relative response rates equal or match relative reinforcer rates, opened a new way of thinking. Instead of contiguity, one could conceive of relations between activity rates and reinforcer rates. Instead of punctate events, one could think of flow—both behavior and feedback from the environment as continuous flow. Since activities take up time, the universal metric of activity is time. Discussing this idea one day, Rachlin and I conceived of an experiment measuring only time allocation, which resulted in the 1969 paper “Choice as time allocation.”
A new conceptual framework started to become clear. In 1972, I submitted a long paper to the Journal of the Experimental Analysis of Behavior attempting to summarize the developments so far. The editor, Catania, suggested I break it into two papers, and the first came out in 1973, “The correlation-based law of effect.” The second half, which was more theoretical, came out in 1981.
Nowadays I describe the Law of Allocation as a fundamental law of behavior. The reason is simple: (a) an organism’s activities take time; (b) time is limited (e.g., 24 hours in a day); and (c) therefore activities must compete with one another for time. We all experience this competition in our lives—for example, the competition between work and family or between socializing and sleeping. The following equation (1) expresses this necessary law:
T_i / (T_1 + T_2 + … + T_N) = V_i / (V_1 + V_2 + … + V_N)        (1)
where Ti represents the time spent in Activity i, and Vi represents the competitive weight of Activity i, which is the activity’s ability to compete for time against other activities. The equation states that the proportion of time taken by an activity equals (matches) its relative competitive weight.
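As a minimal numerical sketch of Equation 1, the following Python snippet allocates a fixed budget of time in proportion to competitive weights. The activity names and weight values are invented for illustration, not taken from any experiment:

```python
# Illustration of the Law of Allocation (Equation 1): the proportion
# of time an activity receives equals its relative competitive weight.
# Activity names and weights below are invented examples.

def allocate_time(weights, total_time=24.0):
    """Return time allocated to each activity: T_i = total_time * V_i / sum(V)."""
    total_weight = sum(weights.values())
    return {name: total_time * v / total_weight for name, v in weights.items()}

# Invented competitive weights for three everyday activities.
hours = allocate_time({"work": 4.0, "family": 3.0, "sleep": 5.0})
print(hours)  # work: 8.0 h, family: 6.0 h, sleep: 10.0 h
```

Because the allocations are proportions of a fixed total, raising one activity's weight necessarily reduces the time the other activities receive, which is exactly the competition for time that the law expresses.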
What determines an activity’s competitive weight? It depends on features of its consequences and antecedents: their rate, magnitude, and immediacy, for example, as given in this equation (2):
V_i = c · x_1^S1 · x_2^S2 · … · x_K^SK        (2)
which states that an activity’s competitive weight equals the product of several power functions of variables labeled x (rate, amount, etc.) with exponents S.
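The product of power functions in Equation 2 can be sketched in a few lines of Python. The variable names (rate, amount, immediacy), the coefficient, and the exponent values here are illustrative assumptions, not fitted parameters:

```python
# Sketch of Equation 2: competitive weight as a product of power
# functions of consequence variables (rate, amount, immediacy, ...).
# The coefficient c and the exponents S_k are illustrative assumptions.

def competitive_weight(variables, exponents, c=1.0):
    """V = c * product over k of x_k ** S_k."""
    v = c
    for name, x in variables.items():
        v *= x ** exponents[name]
    return v

# Hypothetical values for one activity's consequences.
V = competitive_weight(
    {"rate": 60.0, "amount": 2.0, "immediacy": 0.5},
    {"rate": 0.8, "amount": 0.6, "immediacy": 0.3},
)
print(round(V, 2))
```

Exponents below 1.0 make each factor's contribution grow less than proportionally, which is one way deviations from strict matching (undermatching) can arise.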
This framework raises at least two questions. First, what about Skinner’s concept of the operant, exemplified by the lever press? The operant, as a noun, is a discrete event, whereas operant activity is a process extended in time. The number of lever presses during a session estimates the time taken by the activity, lever pressing, because each operation of the lever represents a certain amount of time in the activity. Attaching a switch to a lever, key, or button offers a way to measure approximately the amount of time spent interacting with the lever, key, or button. Although the switch is a fairly blunt instrument—in that it may fail to capture all of the activity—it has proven useful in a host of experiments. An experiment published in 1976 demonstrated the equivalence of presses and time directly.
The second question arises from Equation 2: why should the factors determining competitive weight follow power functions? When I first proposed Equation 1 in 1974, power functions just seemed necessary to account for deviations from a strict matching relation. Any number of factors might affect the coefficients and exponents; they might almost be considered dependent variables. My view of the power functions changed when they began to turn up in experiments about single activities. They gave a quantitative basis to the concept Evelyn Segal put forward in 1972: induction. Induction is the process by which the events called reinforcers, punishers, and unconditional stimuli generate activities just by their presence in an environment. Moreover, the rate of these activities varies directly with the rate of the inducer.
After moving to California in 1999, I became involved with two research groups, one composed of evolutionary biologists and anthropologists at UC Davis and one composed of philosophers of biology at the California Academy of Sciences. I became convinced that no adequate account of behavior is possible without evolutionary theory. The environmental events that behavior analysts were calling reinforcers, punishers, and unconditional stimuli all impacted fitness or reproductive success. They are Phylogenetically Important Events—food, mates, predators, injury, and so on—PIEs. They gain their inducing power from phylogeny, because individuals in a population that fail to respond to PIEs leave fewer descendants. As a result of natural selection, organisms respond vigorously to PIEs, and a PIE induces activities that either enhance or mitigate the PIE’s effects, depending on the particular PIE. Food induces appetitive and consummatory activities. Electric shock induces defensive and aggressive activities.
Segal applied the concept of induction only to non-operant activities such as adjunctive activities. “PIE-related,” however, applies to operant activities too, and in a 2012 paper called “Rethinking reinforcement,” I extended the concept of induction to cover operant activities as well as non-operant activities, thereby doing away with the concept of reinforcement as increasing “strength” of a response. Acquisition and maintenance of an operant activity are the result of induction combined with correlation or covariance between the activity and a PIE. For example, if lever pressing produces food and no other activity does so, then food induces lever pressing selectively. Lever pressing evolves and then is maintained by a closed loop, in which the food induces pressing, pressing produces the food, which induces pressing, and so on. (When I sent Segal a draft of “Rethinking reinforcement,” she wrote back that I had taken the concept of induction to “its logical conclusion.”)
Figure 1 illustrates the way this molar view sees acquisition and maintenance of an activity. On the left, a PIE (e.g., food) in the environment (E) impinges on the organism (O) at a rate r and induces various activities (arrows), one of which is labeled B and the others B0. The notations B = f1(r) and B0 = f2(r) indicate that the rates of B and B0 are induced by r according to the functions f1 and f2. On the right, the PIE is now contingent on activity B (e.g., interacting with a lever), resulting in covariance between B and r according to r = f3(B), a relation commonly referred to as a “feedback function.” Notably, the PIE continues to induce B0, which still competes with the now-operant activity B. Although this competition usually goes unrecorded, recent studies have confirmed it. As already mentioned, B and B0 are maintained by the closed loop depicted. As the activities continue, consequences become antecedents and antecedents become consequences; the distinction collapses.
When experiments began showing single activities increasing with food rate (and later shock rate; shown in a 2020 paper on avoidance) according to power functions, I realized that the power functions in Equation 2 should be thought of as induction, and that the competitive weight of an activity (Equation 1) is the extent to which it is induced (Equation 2).
The molar view replaces contiguity with covariance, as illustrated in Figure 1. Covariance between an activity and a PIE transforms the activity into operant activity that both interacts with the environment and sustains the organism.

Figure 1
Covariance closes a loop to maintain effective (operant) activity
Note: Left: a PIE (e.g., food or shock) from the environment (E) induces various activities (B and B0) in the organism (O). Right: covariance imposed between B and the PIE results in a loop, in which the activity B produces the PIE and the PIE induces the activity, which again produces the PIE, and so on.
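The closed loop depicted on the right of Figure 1 can be sketched as a simple iteration: induction of the activity by the PIE (a power function, as in Equation 2) alternating with a feedback function by which the activity produces the PIE. All parameter values here are invented, and the ratio-schedule feedback function (r = B / n) is one simple choice among many:

```python
# Minimal sketch (invented parameters) of the closed loop in Figure 1:
# the PIE rate r induces the activity B through a power function
# (induction: B = c * r**s, with s < 1), while B produces the PIE
# through a ratio-schedule feedback function (r = B / n).

def closed_loop(c=10.0, s=0.5, n=5.0, steps=50):
    """Iterate induction and feedback until B settles at a fixed point."""
    B = 1.0                      # arbitrary starting rate of activity
    r = B / n
    for _ in range(steps):
        r = B / n                # feedback: PIE rate produced by the activity
        B = c * r ** s           # induction: activity rate induced by the PIE
    return B, r

B, r = closed_loop()
print(B, r)  # converges toward B = 20, r = 4 for these parameters
```

With an induction exponent below 1.0, the loop settles at a stable equilibrium rather than running away, which is consistent with an operant activity being maintained at a steady rate.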
A second kind of covariance relates a feature of the environment (a stimulus S) to a PIE. When positive covariance exists between S and the PIE, we may say that S predicts the PIE. Then S becomes a conditional inducer and induces some of the same activities that the PIE induces, most notably the operant activity that the PIE induces. When someone works for wages, the work is in long-term covariance with surviving, which has parts like eating, sheltering, and maintaining health. This relation maintains working, but working itself is an activity or process composed of parts like driving to work, completing tasks, and driving home, all of which are activities induced by the continued small-scale feedback from employers, including paychecks. (See my 2024 book, Introduction to behavior: An evolutionary approach, for an introduction to this way of thinking about behavior, and my 2022 book, Science and philosophy of behavior, for a collection of papers with commentary.)
Figure 2 illustrates the basics of operant behavior according to the molar view. It expands on the right-hand diagram in Figure 1. Broken arrows indicate covariance, and solid arrows indicate induction. The activity B stands in positive covariance with a PIE. A stimulus or context S stands in positive covariance with the PIE. Both context S and the PIE induce activity B. Working (B) is induced by the features that constitute context S and also is induced ultimately by the PIEs that facilitate surviving (food, mate, friends, shelter, etc.).

Figure 2
Relations among a signal, an activity, and positive activity-PIE covariance
Note: Dashed two-headed arrows indicate positive covariance between the signal S and the PIE and between the activity B and the PIE. Solid arrows indicate induction. S induces activity B in the short term, and the PIE induces the activity B in the long term.
The conceptualization shown in Figure 2 replaces the traditional molecular notation S^D : R → S^R. The advantages of the molar view are many. For example, the conception in Figures 1 and 2 solves the problem of the first instance (that an action must occur before it can be reinforced) and the problem of the origin of the variation in action needed for shaping, both of which the concept of reinforcement left unsolved. The molar view affords a plausible account of avoidance, something at which the molecular view failed, because successful avoidance is followed by nothing. Most importantly, the molar view: (a) affords plausible and elegant accounts of everyday behavior; and (b) connects behavior analysis with evolutionary theory.