
Valuation as a Mechanism of Self-Control and Ego Depletion

E.T. Berkman, ... J.L. Livingston, in Self-Regulation and Ego Control, 2016

Diminishing Marginal Utility

Diminishing marginal utility refers to the phenomenon that each additional unit of gain leads to an ever-smaller increase in subjective value. For example, three bites of candy are better than two bites, but the twentieth bite does not add much to the experience beyond the nineteenth (and could even make it worse). This effect is so well established that it is referred to as the “law of diminishing marginal utility” in economics (Gossen, 1854/1983), and is reflected in the concave shape of most subjective utility functions (eg, Kahneman & Tversky, 1979; Rabin, 2000; see Fig. 13.2). An important consequence of diminishing marginal utility is that subjective value changes most dynamically near the zero point, and quickly levels off as gains (or losses) accumulate.


Figure 13.2. Diminishing marginal utility of gains. Given a concave relationship between objective gains (x-axis) and subjective value (y-axis), each one-unit gain produces a smaller increase in subjective value than the previous gain of an equal unit. The marginal utility, or the change in subjective value above the existing level, diminishes as gains increase (shown on the y-axis to the right).
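The concave shape in Fig. 13.2 can be sketched numerically. The square-root utility function below is purely illustrative, chosen only because it is concave:

```python
import math

def utility(gain):
    """A hypothetical concave utility function, u(x) = sqrt(x)."""
    return math.sqrt(gain)

# Marginal utility of each successive one-unit gain.
marginals = [utility(x + 1) - utility(x) for x in range(5)]
print([round(m, 3) for m in marginals])

# Each successive increment adds less subjective value than the one before,
# and the increments are largest near the zero point.
assert all(a > b for a, b in zip(marginals, marginals[1:]))
```

Any other concave function (e.g., a logarithm) shows the same qualitative pattern: the change in subjective value is steepest near zero and levels off as gains accumulate.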

Within the psychology literature, diminishing marginal utility is akin to the phenomena of affective habituation (Dijksterhuis & Smith, 2002) and hedonic adaptation (Brickman & Campbell, 1971). The general finding in these lines of work is that people quickly adjust to affective experiences, so that repeated exposures to the same stimulus are less potent (habituation) and major events do not generally change people’s baseline affect (adaptation). Psychological models of adaptation have been updated and qualified in recent years, and may apply differently to short-term and long-term changes (see, for example, Diener, Lucas, & Scollon, 2006), but the general pattern holds: humans acclimate to events, particularly small events, as their novelty subsides. From a psychology perspective, diminishing marginal utility of short-term gains can be understood in terms of habituation—the first bite of chocolate tastes better than the second, and so forth—and long-term gains can be understood in terms of adaptation to a new zero point—winning the lottery does not permanently increase happiness but instead resets one’s reference point so that the subjective value of stimuli is evaluated with respect to that new starting point.

The tendency for returns on subjective value to diminish with repetition is relevant to how self-control plays out over time. By our definition, self-control is required when there is a conflict between a high-level goal and a low-level goal or impulse. Both options have some degree of subjective value, even if that value is derived from a different source (such as the immediate physical gratification associated with a positive experience or the sense of accomplishment that accompanies goal completion). Therefore, recent positive experiences that diminish the subjective value of one but not the other goal will influence self-control. Take, for example, the case of a smoker who is trying to quit and experiences a self-control conflict between the desire to smoke a cigarette (hedonic value) and the desire to quit (abstract goal value). According to the law of diminishing marginal utility, the subjective value of smoking an additional cigarette will be diminished if the smoker has just had a cigarette. Indeed, smokers are less likely to light up if they have recently smoked than if they had been abstinent (and this effect also holds for food for most people; Epstein, Bulik, Perkins, Caggiula, & Rodefer, 1991). Apart from physiological factors such as dependency, the subjective value of a temptation (such as a cigarette for a smoker) is diminished with sequential consumption. Similarly, the subjective value of abstinence should be lower when a quitter makes progress than when he or she feels that he or she is falling short. As diminishing marginal utility predicts, motivation to attain a goal decreases if one focuses on the progress made toward that goal, particularly for goals to which an individual is highly committed (Fishbach, Eyal, & Finkelstein, 2010).

The law of diminishing marginal utility also applies to the case of ego depletion, a claim we will argue in a later section. First, however, we present neuroscientific data in support of the valuation model of self-control.




Basic Methods from Neoclassical Economics

Andrew Caplin, Paul W. Glimcher, in Neuroeconomics (Second Edition), 2014

The Ordinal Revolution and the Logic of Choice

Just as the Marginal Revolution seemed to hinge on diminishing marginal utility, so too did the logarithmic utility function, proposed by Bernoulli. Taken together, these two sets of ideas may be seen as implying that utility can somehow be measured, and that in any reasonable such method of scaling and measuring, there will be some form of diminishing marginal utility.

Taking their lead from these insights, the major economists of the nineteenth century began to focus their energies on understanding how use-value, costs-of-production and exchange-value were related to the utilities experienced by decision makers. Their implicit goal was to understand how changes in society impacted the net utilities (also often called the welfare) of citizens in an attempt to design better societies. By the middle of the century many of these theories had become quite byzantine, with very detailed explanations of how the internal utility-specifying processes of decision makers interacted with the outside world to yield choices and hence individual levels of welfare. But critically, all of these theories were brought to a crashing halt at the end of the nineteenth century by the next revolution in the economic theory of choice, the Ordinal Revolution initiated by Vilfredo Pareto (1906).

Pareto noted that while contemporary theories rooted in diminishing marginal utility relied on the very precise details of the functions that related things like use-value and scarcity to utility, there was no independent evidence that utility existed, nor was there any evidence to support the assumption that people were acting to maximize their utilities. He even went so far as to prove mathematically that there was, in principle, no way to directly derive a unique value for utility from the choice behavior of a subject – a critical point the economists of the Marginal Revolution had entirely missed. All of these theories were, he proved, houses of cards built on layers of assumptions that could not be proven. What he stressed emphatically was that the only things traditional economists could observe and measure were choices and prices. Focusing on anything else was, he argued, placing theory before data. Even worse, as he developed this logic he was able to show that the precise numerical scaling of utilities on which these theories rested was almost unconstrained by actual data on choices and prices due to this critical flaw in their reasoning.

To understand this issue, consider a decision maker who is empirically observed to prefer Apples to Oranges, Oranges to Grapes and Apples to Grapes. In the language of economics we represent those observed preferences in the following way:

Apples ≻ Oranges, Oranges ≻ Grapes, Apples ≻ Grapes
In this standard notation, the curly greater than (or less than) sign should be read as meaning “prefers.” With these observed preferences in hand, let us assign our decision maker a utility of 3 to Apples, 2 to Oranges, and 1 to Grapes. Certainly, with these numbers we can rationalize the observed pattern of preferences as being based on a desire for the item offering the highest utility – in a way much like the pricing curves did for David Ricardo. Unfortunately, and this is the critical thing that Pareto recognized, the same pattern could be explained if we squared all the utility numbers, or if we halved or doubled them. The numbers themselves seem superfluous to the observed pattern of preference, and indeed, as Pareto was the first to realize, they are.

Choice data tells us how subjects rank the objects of choice in terms of desirability. We can talk about utilities as ways to describe this ranking, but we must always remember that utilities are only really good for ordering things. Treating utilities as discrete or precise numbers that can be added or subtracted either for one individual or across individuals goes way too far.

What Pareto went on to stress, to say this another way, was that utility functions are only about ordering, not about discrete numerical values described by abstract mathematical functions. Mathematicians refer to numerical scales that only provide information about ordering as ordinal scales and thus what Pareto argued was that utility must be considered an ordinal quantity. If one good has a utility of 4 and another good has a utility of 2 (for a given chooser) then we know that the first good is better, but we do not really know how much better. This stands in contrast to numerical systems in which 4 really is twice the size of 2. These are systems of numbers referred to as cardinal. Pareto thus pointed out that ordinal utility is all that is needed for the scientific theory of choice.
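Pareto's point, that the utility numbers carry no information beyond their ordering, can be demonstrated in a few lines. The goods and the utility assignment (3, 2, 1) come from the example above; the transformations are arbitrary strictly increasing functions:

```python
# Hypothetical utility numbers rationalizing the observed preferences
# Apples ≻ Oranges ≻ Grapes.
utilities = {"Apples": 3, "Oranges": 2, "Grapes": 1}

def ranking(u):
    """Order goods from most to least preferred under utility assignment u."""
    return sorted(u, key=u.get, reverse=True)

original = ranking(utilities)
squared  = ranking({k: v ** 2 for k, v in utilities.items()})  # 9, 4, 1
halved   = ranking({k: v / 2 for k, v in utilities.items()})   # 1.5, 1.0, 0.5

# Any strictly increasing transformation leaves the ranking unchanged,
# so choice data alone cannot pin down a unique cardinal scale.
assert original == squared == halved == ["Apples", "Oranges", "Grapes"]
```

This is exactly the ordinal view: the utility function is identified only up to a monotone transformation, so differences and ratios of utilities are meaningless.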




Diversity and the Good

Gregory M. Mikkelson, in Philosophy of Ecology, 2011

History of the general framework employed in this paper

Far from being foreign to economics, the theoretical framework introduced above actually originated there. In the 1920s, Pigou spelled out an implication of the diminishing marginal utility of money. One's first $10,000 worth of monetary wealth, or yearly income, increases one's well-being immensely—by protecting one from starvation, exposure, etc. The second $10,000 also comes in very handy. However, with each successive increment, the additional utility declines. For instance, it is hard to imagine a billionaire even noticing an additional $10,000, let alone being made much happier or healthier by it. Intuitively, that same $10,000 would do a pauper far more good. These considerations provide one rationale for ensuring a reasonable degree of economic equality [Putnam, 2002].
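Pigou's observation can be made concrete with a logarithmic utility function, a standard concave form used here purely for illustration:

```python
import math

def utility(wealth):
    """Log utility: a standard concave form (an illustrative assumption)."""
    return math.log(wealth)

increment = 10_000
# Marginal utility of the same $10,000 at very different wealth levels.
gain_pauper      = utility(10_000 + increment) - utility(10_000)
gain_billionaire = utility(1_000_000_000 + increment) - utility(1_000_000_000)

print(gain_pauper)       # ln 2, roughly 0.693: the pauper's wealth doubles
print(gain_billionaire)  # roughly 0.00001: imperceptible to the billionaire
assert gain_pauper > 1000 * gain_billionaire
```

Under any concave utility function the same transfer is worth more, in utility terms, to the poorer party, which is the compositional argument for redistribution sketched in the text.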

Seventy years later, public-health researchers began finding empirical evidence supporting, but also pointing beyond, this kind of rationale. Average lifespans are higher, and health is generally better, in more egalitarian societies. This pattern is partly—but only partly—explicable in terms of a diminishing-returns relationship between individual income and health. Other factors also come into play. For example, levels of trust between members of a society, the functioning of public institutions (e.g., public-health agencies), and a number of other social properties and processes are enhanced by economic equality. These factors mediate social-level contextual effects of equality on health, that operate in addition to the compositional effect brought about by diminishing returns at the individual level [Kawachi and Kennedy, 2002].

Negative contextual effects of equality on health are also conceivable—e.g., because more egalitarian societies experience slower economic growth [Partridge, 2005]. However, the strong overall equality-health relationship proves that these are outweighed by the positive compositional and contextual effects of equality. The importance of positive contextual effects indicates that, were income re-distributed from the rich to the poor, even the rich would end up healthier despite their drop in income, because of improvements to the society in which they live [Wilkinson and Pickett, 2009].




Bentham, Jeremy

Gilbert Geis, in Encyclopedia of Social Measurement, 2005

Money as a Measure of Pleasure

Money became for Bentham one of the more readily measurable items in his felicity calculus, because its possession strongly correlates with happiness. He was well aware of what later came to be called the Law of Diminishing Marginal Utility, that is, that the more wealth an individual possesses the less total happiness an increment is likely to bring. A multimillionaire would hardly be moved greatly by the addition of $100,000 to his fortune, but a poor person undoubtedly would derive great pleasure obtaining this amount of money. Though Bentham might not be able to calculate precisely the quantities of pleasure involved in such a situation, he was able to use it to advocate public policies favoring more equitable distribution of wealth, basing his advocacy on utilitarian principles.




Time Preference and Discounting

M. Paulden, in Encyclopedia of Health Economics, 2014

The social rate of time preference for health

The standard explanations for society's time preference for consumption also apply to society's time preference for health. As society's health improves over time, it may have a preference for earlier health benefits over later health benefits, because of the diminishing marginal utility of health. Society may also prefer earlier health benefits because of catastrophe risk or pure time preference.

The social rate of time preference for health generally differs from the social rate of time preference for consumption. One reason is that the relative value of health and consumption might change over time. Dave Smith and Hugh Gravelle have suggested that the consumption value of health might grow over time, since it is positively correlated with increasing incomes.

The social rate of time preference for health may be estimated using the Ramsey formula. It may also be implicitly revealed by the allocation of health budgets across time (this is returned to in the final section).
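A minimal sketch of the Ramsey formula in its common form r = ρ + ηg, where ρ captures pure time preference (and catastrophe risk), η is the elasticity of marginal utility, and g is the expected growth rate of health (or consumption). The parameter values below are illustrative assumptions, not estimates:

```python
def ramsey_rate(rho, eta, g):
    """Social rate of time preference via the Ramsey formula r = rho + eta * g.

    rho : pure time preference plus catastrophe risk
    eta : elasticity of marginal utility (how fast utility diminishes)
    g   : expected growth rate of health or consumption
    """
    return rho + eta * g

# Illustrative inputs: 1.5% pure time preference, unit elasticity, 2% growth.
r = ramsey_rate(rho=0.015, eta=1.0, g=0.02)
print(f"{r:.3f}")  # about 0.035, i.e., 3.5% per year under these assumptions
```

Note how diminishing marginal utility enters through η: the faster marginal utility declines (larger η), and the healthier society is expected to become (larger g), the higher the rate at which future health benefits are discounted.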




Positivism, History of

S. Fuller, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Antithesis: Machian Positivism

Ernst Mach symbolized the second moment of positivism's dialectic: he gave the movement a dose of its own medicine, namely, a scientific sense of science's own limitations. Mach's counsel emerged in the midst of the ironic, perhaps perversely self-defeating, character of fin-de-siècle Viennese culture, whereby Karl Kraus's quip that psychoanalysis is the disease for which it is the cure was eventually incorporated into positivistic lore as Ludwig Wittgenstein's dark saying that philosophy is the ladder that must be climbed in order to be discarded (Janik and Toulmin 1973).

In the long term, this sense of scientific self-restraint has made its strongest impression in the dual doctrines of ‘value neutrality’ and ‘academic freedom,’ which are most closely associated in the social sciences with Max Weber. Although these doctrines normally are treated as separate, it is difficult to motivate either without examining their historical interconnection. The missing link is the idea that, absent further empirical demonstration, the value of theoretical discourse—the most obvious epistemic marker distinguishing experts from the lay public—is purely heuristic. Theory enables those who already know something to know much more; however, it obscures the vision of those who have yet to know anything.

Thus, the various Mach-inspired positivist projects of ‘reducing’ theoretical discourse to its empirical bases have been motivated by a keen sense of the contingency of when and where new scientific insights arise. The distinction that Hans Reichenbach and Karl Popper drew between the contexts of discovery and justification canonized this point for contemporary philosophy of science. The unique means by which a discovery is made is neither necessary nor sufficient for demonstrating its validity. Elementary logic textbooks demonize the failure to abide by this point as the ‘genetic fallacy.’ For Mach, the contingent nature of discovery places a special burden on scientists to render their insights as widely accessible as possible, without denying themselves the right to follow them up in their own way. The Vienna Circle positivists may be seen as having compromised this legacy by limiting the context of justification to the received canons of inductive and deductive reasoning. But perhaps they had good reason.

Mach may have been too keen to let the lay public draw whatever conclusions they wished from their studies of science. The second moment of positivism enabled the conversion of scientific knowledge into a pure instrument detached from any prior theoretical or normative commitment. In practice, this ‘neutral’ stance made science available to the highest bidder, who in turn could hold a monopoly over its use (or nonuse). Consider the fate of the law of diminishing marginal utility (LDMU), which states that if there is enough of a good to satisfy a basic level of need, then each additional increment of the good satisfies the need less than the previous increment. LDMU first appeared in John Stuart Mill's normative political economy, the source of modern welfare economics. The German translation of Mill's System of Logic in 1849 had introduced the first phase of positivism to Mach and such contemporaries as Wilhelm Wundt and Wilhelm Dilthey, initiating the long-standing debates over the reducibility of the methods of the Geisteswissenschaften (Dilthey's rendering of Mill's ‘moral sciences’) to those of the Naturwissenschaften. While Mill's later work, On Liberty, would inspire Mach's own views linking academic freedom to the right of dissent, Mach refused to follow Mill in drawing normative conclusions from putatively scientific facts, even though Mill's aim of removing the accumulated advantage of the rich was close to Mach's own heart.

Mill treated LDMU as a scientific basis for redistributionist social policies. He inferred that each additional increment of income earned by the rich would improve their lives less than the same increment transferred to the poor. Mill presumed a common standard of interpersonal utility and the idea that different interpersonal endowments could be explained as reversible historical accidents. Moreover, he modeled LDMU on a certain interpretation of Newton's laws, which holds them to be true in an ideal physical medium but not in actual physical reality. Thus, Mill made the ideality of the scientific laboratory stand for the normative basis of social reform. What is now called the ‘marginalist revolution’ in economics reinterpreted the Newtonian precedent for LDMU in the value-neutral terms that captured the social situation of Mach and other second generation positivists who called a university home.

In the 1870s, Mill's academic nemesis, William Stanley Jevons, had begun interpreting LDMU as the emergent product of interactions between agents whose respective utilities are in principle incalculable, that is, an empirically stable resolution of unknowable constitutive forces. Today we would say that LDMU is a macrolevel market effect, in which the parameters of the relevant market are left unspecified. Thus, cases that fail to conform to LDMU do not force a difficult normative judgement about whether the law or the case is at fault; rather, one casts around for an appropriately defined market to which LDMU might apply, since one may not have considered all the effects over a large enough scale, over a long enough period.

Second generation positivists came to regard LDMU as holding no clear implications for social policy. This opened the door to the curious twentieth century ‘noninterventionist’ policies that Milton Friedman and others have used to oppose scientific justifications of the welfare state. Here positivism's antitheoretical and antinormative stance is explicitly identified with the inductively self-organizing order of the so-called invisible hand. The shift from Jevons to Friedman is indicative of what logicians call the ‘modal fallacy’: the denial that X implies Y is interpreted to mean that X implies not-Y, where X = LDMU and Y = state-managed redistribution of income. Thus, an economic principle that was seen originally as not necessarily licensing a particular policy intervention came to be seen as an outcome that obviates the need for any policy intervention whatsoever.

Nevertheless, the transition between the first two moments of positivism was neither smooth nor clearly separated from the third. Free market liberals continued to invoke Comte's original sense of positivism to stigmatize large-scale, usually Marx-inspired, social planning of the economy (e.g., Hayek 1952). A target of these studies was Vienna Circle organizer, Otto Neurath, who supposed that a specialist understanding of the persistent underlying structures of political economy provides a better handle on social policy than the spontaneously aggregated experiences of individual producers and consumers. While Neurath's Marxist orientation to social planning was close to Comte's privileging of expert over lay judgement, he also accepted the Machian view that useful knowledge should be spread as widely as possible. In that sense, Neurath's interest in designing an ideographic language (‘Isotype’) to enable the populace to understand capitalism's inequities was born of the same impulse as his debates with Rudolf Carnap over the construction of the ‘protocol statements’ that constitute the fundamental language of science. Here the second moment of positivism naturally shades into the third.




Classical theories of consumption

Bingxin Wu, in Consumption and Management, 2011

Samuelson’s Economics

Samuelson’s Economics (Samuelson 1948) also gives great enlightenment to the systematic study of consumption theory. Samuelson was the first American economist to win the Nobel Prize in Economics. A polymath, he did research in economics, statistics, mathematics, and other fields. He combined Keynesianism with traditional microeconomics to found the ‘neo-classical synthesis’ – the modern framework of Western economics in which every school is given a clear explanation and a fair evaluation. In 1970 the Royal Swedish Academy awarded him the Nobel Prize for his development of static and dynamic economic theory, and his research spanned nearly every field of economics.

Samuelson combined Keynes’s macroeconomic theory with neoclassical microeconomic theory to form the neoclassical synthesis school. From the first edition in 1948 to the current 19th edition, Economics has continuously absorbed the latest research of modern economists, and it became the most influential economics textbook of the second half of the 20th century. Its consumption theory integrates the thought of various schools. On the microeconomic side of consumer behavior, it inherits and absorbs the marginal utility analysis of Jevons and Böhm-Bawerk and the demand theory of Hicks. Ordinal utility theory is used in place of cardinal utility theory, and the concavity of indifference curves, diminishing relative marginal utility, and the income budget line are adopted to illustrate the marginal principles that rational consumers follow. A price change generates an income effect and a substitution effect for consumers, which gives rise to the law of demand. Demand elasticity and supply elasticity decide the proportions of the benefit gained by consumers and producers, respectively. In this way, consumption theory has become the starting point of research for all modern microeconomics. As to the macroeconomic part, the consumption rate is analyzed for both the long term and the short term. In Economics, Samuelson (1948) argued that, over the long term, a low consumption rate or low consumption propensity is beneficial for economic growth: ‘Higher consumption relative to income can decrease investment and slow down economic growth; lower consumption relative to income can generate high investment and rapid economic growth’. Over short periods of time, the relation is uncertain:

The interactive relation between consumption and income plays different roles within short periods of time, especially during the expansion or contraction period of the economic cycle; when the economic situation promotes the rapid development of consumption and investment, the overall expenditure or overall demand will increase, short-term output and employment rate will improve. However, when high taxation rates or low consumption confidence leads to the decrease of consumption, the overall consumption will decrease and the economy will probably enter into a recession period.

All in all, Samuelson’s consumption theory is the combination of classical thought and Keynesianism.

In the sixth chapter of Economics, Samuelson gives an excellent account of consumption. The theories of choice and utility, diminishing marginal utility, the substitution effect, individual demand, market demand, and consumer surplus in the book have greatly enlightened the author of this book. For example, the theory of choice and utility tells us that selective consumption means consumers tend to choose the goods and services they consider most valuable, where utility describes how a consumer ranks different goods and services. Marginal utility is the new, extra utility produced when the consumer uses one more unit of a commodity, while diminishing marginal utility means that as the amount consumed of a good increases, the marginal utility of each additional unit decreases. All these theories greatly informed the author’s research and management.




Health Insurance: Economic and Risk Aspects

C.G. McLaughlin, M.E. Chernew, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Risk and the Demand For Insurance

The relationship between risk and insurance demand is derived from the relationship between utility and wealth (for additional discussion see Arrow 1963). Utility is a function of wealth, with more wealth leading to higher levels of utility or satisfaction. Although utility rises with wealth, it is generally assumed to do so at a decreasing rate. This property is called diminishing marginal utility. Each incremental increase in wealth provides a smaller incremental increase in utility. It therefore follows that the gain in utility associated with any incremental gain in wealth is less than the loss in utility associated with an equivalent loss of wealth.

The property of diminishing marginal utility implies that individuals are risk averse. That is, individuals would prefer to have any level of wealth with certainty rather than a gamble providing the same level of wealth on average. For example, a risk averse individual would prefer $100 with certainty to a gamble with a 50 percent chance of winning $200 and a 50 percent chance of winning $0. Risk aversion is an inherent property of a concave utility function. Because risk averse individuals prefer certainty, the premium they are willing to pay for an insurance policy that removes risk exceeds the actuarially fair premium (AFP) of the insurance, which is the amount that the insurer would have to pay out, on average, for this policy. The gap between the premium individuals are willing to pay and the AFP is termed the risk premium (Phelps 1997).
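The $100-versus-gamble example can be checked with any concave utility function; square-root utility is an illustrative assumption standing in for the general concave case:

```python
import math

def utility(wealth):
    """Concave (risk-averse) utility; sqrt is an illustrative assumption."""
    return math.sqrt(wealth)

certain = utility(100)                           # u($100) = 10
gamble  = 0.5 * utility(200) + 0.5 * utility(0)  # expected utility ≈ 7.07

# Both options have the same expected wealth of $100, yet a risk-averse
# individual strictly prefers the certain $100.
assert certain > gamble
```

The gap between the two utilities is exactly the gain from eliminating risk, and it is what a risk-averse buyer is willing to pay an insurer above the actuarially fair premium.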

The risk premium allows insurers to cover expenses above the medical payout, e.g., claims processing. Several factors determine the size of the risk premium an individual will be willing to pay. First, risk aversion is likely to vary with wealth. For very different reasons, both individuals with very high and those with very low levels of wealth are less willing to pay a risk premium (Feldstein 1999). The marginal disutility of an incremental decrease in wealth falls at high levels of wealth, reducing any utility gain to avoiding risk. In contrast, the marginal disutility becomes very large at very low levels of wealth, making the opportunity cost of purchasing insurance too high.

Second, the probability of a loss will influence the size of the risk premium. As the probability of the loss approaches 1, the willingness to pay for insurance rises, but more slowly than the increase in the AFP for that individual. Thus the risk premium falls. In the extreme case, when the probability of a loss equals 1, the risk premium goes to zero. In this case, there is no risk and individuals would not be willing to pay any risk premium. Similarly as the probability of a loss goes to zero, both the willingness to pay and AFP fall, but the willingness to pay falls faster and eventually the risk premium equals zero. Individuals would not pay for insurance if the probability of a loss equaled zero.
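The behavior of the risk premium at the two extremes can be traced with the same kind of concave (square-root) utility; the wealth and loss figures are assumptions chosen for simplicity:

```python
import math

WEALTH, LOSS = 100.0, 100.0  # illustrative: the individual loses everything if ill

def risk_premium(p):
    """Risk premium at loss probability p: willingness to pay minus the AFP."""
    eu = (1 - p) * math.sqrt(WEALTH) + p * math.sqrt(WEALTH - LOSS)
    certainty_equivalent = eu ** 2       # wealth giving the same utility for sure
    wtp = WEALTH - certainty_equivalent  # maximum premium worth paying
    afp = p * LOSS                       # actuarially fair premium
    return wtp - afp

premiums = {p: risk_premium(p) for p in (0.0, 0.25, 0.5, 0.75, 1.0)}

# No risk at either extreme, so no risk premium; it peaks in between.
assert premiums[0.0] == premiums[1.0] == 0.0
assert premiums[0.5] > premiums[0.25] > 0
```

Under these assumptions the premium works out to 100·p·(1−p): zero when the loss is impossible or certain, and largest when the outcome is most uncertain, matching the argument in the text.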

Third, the magnitude of the loss affects the risk premium individuals are willing to pay. A greater loss represents an increase in the variance of income. An individual will be willing to pay a higher risk premium for a higher cost illness.

The utility curve depicted in Fig. 1 can be used to analyze the demand for insurance described above. Throughout the analysis we assume individuals know the probability they will suffer a loss (or benefit from a gain). Because greater wealth leads to higher utility, the utility at point C, U(C), is greater than the utility at points A or B. Diminishing marginal utility implies that if point B is equidistant from points A and C, the utility gained from moving from B to C is less than the utility lost from moving from B to A.


Figure 1. Expected utility and risk

Consider an individual at wealth B evaluating a gamble with outcomes A and C, each of which has a 50 percent probability of occurring (e.g., tossing a fair coin). If point B is equidistant from points A and C, the expected wealth resulting from the gamble, i.e., where on average the individual could expect to be if tossing the coin, is the initial level of wealth, B. The expected utility of such a gamble, EU (gamble), is the probability weighted sum of the two outcomes A and C and, because each outcome has a 50 percent chance of occurring, can be determined by finding the midpoint on the chord connecting A and C. Because the utility curve is concave, even though the initial level of wealth is the same as the expected wealth of the gamble, the utility of B, U(B), will exceed the expected utility of the gamble, EU (gamble). This utility gain of avoiding risk is key to the demand for insurance.

Models of health insurance fit exactly into this framework. For simplicity, assume a world in which there is only one type of adverse health event and spending in the unhealthy state is unaffected by the presence of insurance. Individuals are assumed to start with wealth level C and remain there if healthy during the year. If they suffer an illness shock, they will then spend C–A on health care services. Assuming the probability of a loss is 50 percent, the expected loss is C–B. Expected wealth, under uncertainty, is B. Without insurance the individual would have expected utility of EU (gamble).

Now imagine individuals could purchase an insurance contract that would pay the costs of medical care in the event of an illness. With a 50 percent chance of each individual incurring the loss, the expected cost to the insurer of each enrollee is C–B. The AFP therefore is C–B. If individuals pay the AFP, their wealth level is B, regardless of whether the illness occurs. In other words, the risk of financial loss has been eliminated. The individual has turned a potentially large loss in income into a smaller known loss of income (in the form of the AFP). In this situation, the utility would be U(B) because the income level B is achieved with certainty. As is clear from the figure, the individual prefers to purchase insurance relative to self-insuring. The amount of the utility gain is equal to the vertical distance between U(B) and EU (gamble). The greater the concavity of the utility curve, the greater the risk aversion and the greater the utility gain from insurance.

The utility curve depicted in Fig. 1 can also be used to assess the amount an individual would be willing to pay for insurance. There is some wealth level x at which utility held with certainty, U(x), equals the expected utility of remaining uninsured (i.e., of facing the risk of a loss that leaves the individual at wealth level A). An individual would therefore be indifferent between remaining uninsured and paying a premium of C–x for insurance. This premium is the maximum willingness to pay (WTP) for insurance, with C–B the AFP and B–x the risk premium.
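A minimal sketch of this decomposition, again assuming a square-root utility function and illustrative numbers (neither comes from the chapter). With sqrt utility, the certainty-equivalent wealth x can be recovered analytically as the square of the expected utility:

```python
import math

def utility(wealth):
    # Assumed concave utility; sqrt keeps the algebra invertible.
    return math.sqrt(wealth)

C = 90_000          # wealth if healthy
A = 40_000          # wealth after an uninsured illness shock
p = 0.5             # probability of the illness

afp = p * (C - A)   # actuarially fair premium = expected loss (C - B)
B = C - afp         # expected wealth under uncertainty

eu_uninsured = p * utility(A) + (1 - p) * utility(C)

# Certainty equivalent x: the sure wealth with U(x) = EU(uninsured).
# For sqrt utility, the inverse is U^{-1}(u) = u**2.
x = eu_uninsured ** 2

wtp = C - x              # maximum willingness to pay for insurance
risk_premium = B - x     # the part of WTP above the fair premium

print(afp, wtp, risk_premium)  # → 25000.0 27500.0 2500.0
```

With these numbers the individual would pay up to US$27,500 for coverage whose actuarially fair price is US$25,000; the US$2,500 difference is the risk premium the insurer could in principle capture.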


Social Exchange

K.S. Cook, A. Gerbasi, in Encyclopedia of Human Behavior (Second Edition), 2012

Types of Exchange

Before specifying the major types of exchange that have been investigated, we present the key assumptions that underpin predictions about behavior in exchange structures. Molm and Cook specify four assumptions shared by most current variants of the theory (with the exception of research based on elementary theory): (1) Actors are motivated to increase rewarding outcomes and to avoid loss. (2) Exchange structures emerge as a result of the actors' mutual dependence on one another for valued resources. Without mutual dependence there would be no need for exchange; that is, both parties must have some reason to engage in exchange. (3) Actors engage in mutually contingent exchanges with their partners over time in a series of interactions (i.e., the theory focuses on relations that emerge over time). Finally, (4) the resources exchanged obey the law of diminishing marginal utility, such that each additional unit of a valued resource is of less value to the individual once they are 'satiated.' These behavioral assumptions derive from the earlier theoretical work of Homans, Blau, and Emerson.

The earliest work on exchange focused primarily on two forms of exchange: direct two-party exchange, and indirect exchange in which parties are connected through a third party. For both Homans and Blau these were the main types of exchange under consideration. Emerson expanded his treatment of forms of exchange to deal not only with direct, dyadic exchange, but also with indirect forms of exchange (e.g., generalized exchange) and what he termed productive exchange. In subsequent work, primarily the research program developed by Molm, two major forms of direct dyadic exchange were distinguished: negotiated and reciprocal exchange, which differ in the nature of the social process involved in the exchange relation. Anthropologist Ekeh also writes about the nature of the underlying differences between types of exchange (e.g., restricted vs. generalized exchange), though his discussion is generally more polemical than empirical.

The key distinctions are between direct and indirect exchange and between negotiated and reciprocal exchange. Direct exchange connects two actors in mutual exchange (which may or may not be asymmetrical in terms of power). Indirect exchange connects actors through their mutual ties with another party or parties (typically called 'third' parties). These could be considered second-order exchange connections. Two employees in an organizational unit may thus be connected through their mutual employment and supervision by a third party, perhaps a boss. Indirect ties may lead to the formation of direct ties over time, as when two employees connected through mutual employment by a third party meet and form their own direct exchange relation in which they may exchange relevant information or assistance. Such forms of exchange were the cornerstone of Blau's initial interest in the analysis of exchange relations in organizations.

The second distinction, which Molm's work has made prominent, focuses on the social process involved in the exchange. Many direct exchanges are negotiated: the two parties actually negotiate over the terms of trade, the exchange of resources or services of value. The transaction is consummated when the two parties reach an agreement. This form of exchange is common in economics as well as in social exchange and requires mutual agreement for completion. When much is at stake, such exchanges are often enshrined in contracts to minimize risk, though in many circumstances a handshake is sufficient.

But many exchanges occur without explicit bargaining or negotiation. These exchanges are referred to by Molm as reciprocal exchange relations and they most often entail the mutual performance of services (or transmission of resources) of value over time, such that one actor initially performs a behavior of value that then is reciprocated at a later time by the recipient to create a two-party ‘transaction’ in which the terms of trade are not negotiated, but are implicit in the act of reciprocity. They often involve greater risk of nonreciprocity since one does not know when an exchange is initiated whether it will be reciprocated. Examples include the reciprocal exchange of gifts, dinner party invitations, or even taking turns baby-sitting, mowing lawns, or other acts of service that carry an implicit obligation of return. Failure to return the favor, gift, or service would be viewed as a violation of the norm of reciprocity, which both Blau and Emerson viewed as the hallmark of social exchange. (In fact for Emerson it was a defining characteristic of exchange, thus he did not treat it as theoretically problematic.) For Molm and for Blau this type of reciprocal exchange is distinctive of social exchange in general and is viewed as a key factor differentiating this form of exchange from negotiated forms of exchange.

In research on power in exchange networks, Molm has provided the most extensive empirical work comparing negotiated and reciprocal exchange. In studies of power use in these two types of exchange, Molm demonstrates that power use is more muted in reciprocal than in negotiated exchange, while the salience of conflict is stronger in negotiated than in reciprocal exchange. In addition, reciprocal acts of exchange (which are not explicitly negotiated) provide a stronger signal of trustworthiness and relational intent, referred to as affective regard, in part because they carry a higher risk of nonreciprocity. Because of the inherent risk in reciprocal exchange, actors are more likely to attribute a partner's positive behaviors to personal traits and intentions, which results in the emergence of stronger positive feelings and affective commitment in reciprocal than in negotiated exchange. Given that the terms of a negotiated exchange are agreed on during negotiations, there is little uncertainty about reciprocity and less room for the individuals involved to gain information about their partner's trustworthiness. Attributions are thus more likely to be situational than personal in negotiated exchange.


Moral Hazard

T. Rice, in Encyclopedia of Health Economics, 2014

Traditional Economic Theory

Before addressing moral hazard, it is useful to consider the traditional concept of consumer demand more broadly. If some key assumptions – for example, that consumers are rational and well-informed – are deemed to be true (or are ignored), then what people demand (that is, what they are willing to pay for goods at different prices) is a barometer of social welfare. This is because in asserting these demands, they 'reveal themselves' to prefer one set of goods over another. It is a short leap to conclude that whatever people choose will make society as a whole best off.

Not everyone, of course, agrees that demand curves can be used in such a way. American economists Ellis and McGuire (1993) take a much less value-laden approach, asserting that, “[W]e are skeptical that the observed demand can be interpreted as reflecting ‘socially efficient’ consumption, [so] we interpret the demand curve in a more limited way, as an empirical relationship between the degree of cost sharing and quantity of use demanded by the patient” (p. 142). Nevertheless, not only is the first interpretation by far the most common one, but it also underlies the entire notion of welfare loss discussed below.

To understand that theory it is useful to begin with the concept of ‘consumer surplus.’ This is defined as “[t]he difference between what a consumer pays for a good or service and the maximum they would pay rather than go without it” (Culyer, 2010). The former is set by the marketplace, the latter by the consumer's own preferences. To illustrate, suppose a pound of apples costs US$2 and a consumer is willing to buy 4 lb at that price. This fourth pound, however, is probably of less value to him or her than the previous pounds (unless a pie is being baked that requires that much). This is because of another economics concept, ‘diminishing marginal utility.’ In fact, the consumer might be willing to pay US$5 for the first pound, US$4 for the second, and US$3 for the third. Fortuitously, they do not have to, as the market price is only US$2. As a result, in this example they have generated US$6 worth of consumer surplus: for each pound of apples, the difference between how much they are willing to pay and how much they actually have to pay. The term, incidentally, was first used in the mid-nineteenth century by a French engineer named Jules Dupuit as a way of calculating the value of railroad bridges (Ng, 1979). (A history of Dupuit's contribution – and, notably, the lack of attribution by Alfred Marshall, who popularized the concept to the English-speaking world – can be found in Houghton (1958).)
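The apples arithmetic can be written out directly. The figures are those of the example above; the willingness to pay of US$2 for the fourth pound is an assumption consistent with the consumer buying exactly 4 lb at the US$2 market price:

```python
# Consumer surplus in the apples example: willingness to pay for each
# successive pound falls (diminishing marginal utility), while the
# market price stays at US$2 per pound.
price = 2
willingness_to_pay = [5, 4, 3, 2]  # pounds 1-4; the 4th assumed worth exactly the price

# Surplus = sum over units of (willingness to pay - price paid).
surplus = sum(wtp - price for wtp in willingness_to_pay)
print(surplus)  # → 6
```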

Public policymakers are not so much interested in the individual consumer as in the aggregation of all consumers. By summing consumer surplus across individuals, we can derive the value to society of a particular commodity or investment over and above its costs. This is useful to know in and of itself, but it also can help policymakers choose among alternative projects in which to invest.

Pauly (1968) focused on the concept of moral hazard in critiquing a famous article by Kenneth Arrow (1963). Although Arrow raised the issue, he nevertheless argued, “The welfare case for insurance policies of all sorts is overwhelming. It follows that the government should undertake insurance in those cases where this market, for whatever reason, has failed to emerge” (p. 961).

Pauly showed that this is not necessarily the case because it fails to take into account moral hazard, which can chip away at consumer surplus. In essence, with full insurance, people would demand more services, even ones that had only marginal value. Because these services would cost (perhaps) as much to produce as others, society would suffer a welfare loss from this excessive amount of health insurance coverage. The welfare loss would equal the difference between how much it costs to produce the services and how much people are willing to pay for them. Suppose that a medical service costs US$10 and a person would be willing to pay that much for up to three doctor visits per year. If, however, they had full insurance and had to pay nothing, they might demand six visits. Suppose for the fourth visit they would be willing to pay US$7, the fifth US$4, and the sixth US$1 (each still costing US$10 to produce). The total welfare loss would be US$3 + US$6 + US$9 = US$18.
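Pauly's welfare-loss arithmetic from this example, written out with the same illustrative numbers as the text:

```python
# Welfare loss from ex post moral hazard: with full insurance the
# patient pays nothing at the point of use and so demands visits
# valued below their US$10 production cost.
cost = 10
value_of_extra_visits = [7, 4, 1]  # willingness to pay for visits 4, 5, 6

# Each extra visit's loss = production cost - value to the patient.
welfare_loss = sum(cost - v for v in value_of_extra_visits)
print(welfare_loss)  # → 18
```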

Because people use more services when they have full insurance, it costs more to provide medical care than it otherwise would. Pauly's point with regard to Arrow's comment is critical. Arrow said that government should provide insurance if it is not available. Pauly shows that this is not necessarily true: people will have to pay (in taxes) for the insurance program, but much of the spending will go toward services that they would not have chosen to purchase in the absence of insurance – services that, he argued, are by definition of less value. Stated more bluntly, the individual, and therefore society as a whole, could very well be better off with no insurance than with government-provided insurance, owing to the concept of moral hazard. Or as Robert Evans (1984) states disparagingly of this line of reasoning, “The welfare burden is minimized when there is no insurance at all” (p. 49).

The word ‘could’ in the previous paragraph is there advisedly. Although Pauly argues that there is a welfare loss to health insurance, there is also a gain: people obtain utility from being protected against large medical expenses. The issue, then, is determining which is larger: the welfare gain from this security, or the welfare loss described above. Feldman and Dowd (1991) took both elements into account, and concluded that the loss was far greater than the gain.

The policy implication that is generally taken away from this analysis is that consumers should share in the cost of services, or, put more graphically, ‘have some skin in the game.’ Patient cost sharing will reduce service usage; it is assumed that the services forgone will be those that bring the lowest utility (a concept returned to in the section The RAND Health Insurance Experiment). Although the RAND HIE has not yet been discussed, its authors touted the societal savings that they argue were generated by the increase in cost-sharing requirements in the US that followed publication of the study results. The study cost US$285 million in 2010 dollars; the authors argue that this cost was recouped in only a week from the savings associated with the increased cost sharing (Manning et al., 1987).

Before going on, it should be pointed out that the discussion in this article focuses on ‘ex post moral hazard.’ This is the phenomenon that occurs when the out-of-pocket price of medical care is reduced through the possession of insurance, such that the quantity of services demanded subsequently increases. There is another type of moral hazard, known as ex ante. According to Culyer, this “refers to the effect that being insured has on behavior, generally increasing the probability of the event insured against occurring” (p. 331). For example, if you are insured you may be less likely to engage in preventive behaviors – or may take up skydiving – because of the financial protection afforded by insurance. Because ex ante moral hazard has received much less consideration in the health care literature, it is not discussed further here. It is more salient in other types of insurance, such as fire insurance: by possessing such coverage, businesses and homeowners may take less care in maintaining electrical wiring, installing fireproofing, and so on.
