
Born Obligated: A Place for Quantitative Methods in Ethics


Behavioral economics methods may be more reliable than unsupported, sweeping assumptions in understanding the degree to which being born is okay.


Obviousness

That being born is a good thing is treated as axiomatic by the majority of thinkers who consider the issue.

Thomas Nagel, for instance, states that “All of us, I believe, are fortunate to have been born,” even while affirming that not having been born is no misfortune (Mortal Questions, “Death,” p. 7). Bryan Caplan has said, regarding IVF, “How can I neglect the welfare of the children created by artificial means? But I’m not ‘neglecting’ children’s welfare. I just find it painfully obvious that being alive is good for them [emphasis in original].”

There are two elements to this kind of thinking. First, it represents a judgment that life is, on the whole, worth getting and having; but second, all the talk of “obviousness” also implies that there is something wrong with even asking the question.

I want to address here how quantitative methods, rather than intuition and assumption, might be used to measure the downside of existence. I argue that there is a need to analyze quantitatively the obligations we are all born with and the inherent pain of life, which, if our lives are to be worth having on the whole, must be made up for by valuable experiences.

Work and Leisure

We might characterize the central unpleasant obligation in our lives as the obligation to “work” (broadly construed) in order to meet the salient and potentially misery-inducing needs we are born with or naturally develop. These needs include not only food, clothing, shelter, and medical care, but also status, love, sex, attention, and company.[1] We can even quantify these needs by quantifying the work done to satisfy them, for which we have a great deal of data.

Some of these needs, of course, may actually be satisfied by working – the need to belong, to feel valuable, to not be a burden. At the same time, some of these needs are actually increased by working – that is, work may create disutility as well as utility. How can you tell the difference between what people do merely to ease the pain and discomfort of existence, and what people actually want to be doing?

Many economists have addressed the question of the difference between work and leisure, and how we may quantify and measure them. One crude-but-tempting measure of the value of leisure time is merely a person’s wage. But as Larson & Shaikh (2004) explain, this is much too crude to get at the true nature of work and leisure:

Assuming the average wage is the appropriate opportunity cost of time presumes that the individual faces no constraints on hours worked, derives no utility or disutility from work, and has a linear wage function…. This is unlikely to be true for many people…. [A]n individual’s average wage does not necessarily reveal anything about the shadow value of discretionary leisure time, either as an upper or lower bound.

The question of the value of leisure time is intimately related to the question of quantifying the unpleasant obligations placed on us by virtue of existence; answering it would give us a starting point for a meaningful comparison of life’s costs and life’s benefits.
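
To make the naive approach concrete, here is a minimal sketch in Python (the function and the numbers are my own illustration, not anything from Larson & Shaikh) of the wage-as-opportunity-cost calculation and the assumptions it silently relies on:

    # Naive valuation of leisure time at the average wage. This is only valid
    # under the assumptions Larson & Shaikh criticize:
    #   1. the person can freely choose hours worked (no constraints),
    #   2. work itself yields no utility or disutility,
    #   3. the wage function is linear (no overtime premia, fixed salaries, etc.).

    def naive_leisure_value(hourly_wage: float, leisure_hours: float) -> float:
        """Value leisure at the average wage, given assumptions 1-3."""
        return hourly_wage * leisure_hours

    # Example: 30 hours of weekly leisure at a $20/hour wage is "worth" $600/week
    # under the naive model. If any assumption fails, this figure is neither an
    # upper nor a lower bound on the true shadow value of leisure time.
    print(naive_leisure_value(20.0, 30.0))  # 600.0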

How do we characterize “work”? What is the difference between “work” and “leisure”?

Intuitively, we know the difference – or at least, there exist clear cases of “work” and clear cases of “leisure.” Operating a cash register is work. Washing dishes is work. Doing bong rips is leisure. Reading novels is leisure. Watching television and having sex are generally leisure (unless you’re in advertising or a prostitute). For most people, child care and lawn care qualify as work – whether paid or unpaid – but for some people, these may qualify as leisure some of the time.

These examples suggest that leisure is that which is done for the sake of the experience itself, whereas work is done with some goal in mind other than the experience itself, and is done only in service of that goal.[2] Running ten miles is leisure for me, because I do it for the pleasure of the experience; running those same ten miles might be work for someone else, because he does it to lose weight, not for the pleasure of running. A third person might run for both reasons, in which case the action has aspects of both leisure and work. We should not necessarily expect that every action and every hour can be neatly categorized as “work” or “leisure,” even for a particular individual.

This should give us pause when considering the definition of “leisure” preferred by Mark Aguiar and Erik Hurst in their 2006 paper “Measuring Trends in Leisure: The Allocation of Time Over Five Decades”: an hour-by-hour tally of time not spent in market or non-market work (that is, time not spent at a job or doing unpaid work around the house or around town). In reality, a single hour may have substantial aspects of both work and leisure.
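
To see how much is at stake in that choice, here is a small sketch (the day, the activities, and the leisure shares are all invented for illustration) comparing a binary Aguiar-Hurst-style tally with a weighted tally that lets a single hour be partly work and partly leisure:

    # Each entry: (activity, hours, leisure_share), where leisure_share is the
    # fraction of the time done for the sake of the experience itself.
    # The activities and shares below are invented purely for illustration.
    day = [
        ("cash register shift", 8.0, 0.0),
        ("cooking dinner",      1.0, 0.3),  # partly a chore, partly enjoyed
        ("child care",          2.0, 0.5),  # work for some, leisure for others
        ("reading a novel",     1.5, 1.0),
    ]

    # Binary tally in the Aguiar-Hurst style: an hour counts as leisure only if
    # the activity is not market or non-market work at all.
    binary_leisure = sum(hours for _, hours, share in day if share == 1.0)

    # Weighted tally: each hour contributes its leisure share.
    weighted_leisure = sum(hours * share for _, hours, share in day)

    print(binary_leisure)    # 1.5 hours of "leisure"
    print(weighted_leisure)  # 2.8 hours, counting mixed activities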

Aguiar and Hurst remark on a potentially definitional characteristic of leisure: the degree to which market inputs (money, technology) are consumed to reduce the amount of time spent in the activity. They say:

one definition of whether an activity is “leisure” may be the degree of substitutability between the market input and the time input in the production of the commodity. That is, the leisure content of an activity is a function of technology rather than preferences. In the examples above, one can use the market to reduce time spent cooking (by getting a microwave or ordering takeout food) but cannot use the market to reduce the time input into watching television (although innovations like VCRs and Tivo allow some substitution). [Emphasis mine.]

Let me give a definition of my own, to fit my question:

Work is any action (or omission, perhaps) that we undertake in order to prevent or remedy some unpleasant state, and that we would not undertake if the unpleasant potential state were not a factor. An activity has a strong work component if technology is demanded by individuals to reduce the amount of time they spend in the activity.

In other words, work is what you do only because you have to eat, and you spend as little time doing it as is possible to satisfy your (present and projected future) needs.
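
One crude way to operationalize this definition would be to ask how much people will pay to avoid an hour of the activity, relative to their wage. The index below is entirely my own sketch, not a measure from the literature, and the numbers are invented:

    # Hypothetical "work component" index: willingness to pay to avoid an hour
    # of the activity, as a fraction of the hourly wage. Values near or above 1
    # suggest a strong work component (people pay dearly to not do it); values
    # near 0 suggest leisure (there is no demand for time-saving technology).

    def work_component(wtp_per_hour_avoided: float, hourly_wage: float) -> float:
        return wtp_per_hour_avoided / hourly_wage

    # Invented numbers for illustration:
    print(work_component(18.0, 20.0))  # 0.9 -- e.g., paying for takeout instead of cooking
    print(work_component(0.0, 20.0))   # 0.0 -- nobody pays to watch less television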

Many studies since the 1980s have found that physicians’ demand for leisure directly affects the prevalence of cesarean sections. Cesarean rates correlate strongly with time variables associated with doctors wanting to get the hell out of there, although (further strengthening the theory) the correlation depends on the type of insurance covering the patient.

Instead of relying on the “imaginary survey justification” to “prove” that coming into existence is a good thing, economists and ethicists could use more creative, quantitative methods to examine the question of how bad (and how good) life is. Specifically, we need to figure out how to tell the difference between suffering people attempting to remedy their shitty situation and happy people chilling out – both of which may describe any of us at different times in our lives, or even in a single day. “Are you glad you were born?” is an unsubtle, all-or-nothing approach that relies heavily on people knowing the answer to questions they may have only a limited capacity to understand. Analyzing behavior in smaller chunks would give us a better idea of just how happy people are to be here.
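
Here is a minimal sketch of what analyzing behavior in smaller chunks could look like, in the spirit of experience-sampling and day-reconstruction studies; the pings, classifications, and affect scores are all invented:

    # Ping people at random moments; record what they are doing, whether they are
    # doing it to remedy an unpleasant state or for its own sake, and momentary
    # affect on a -5..+5 scale. All data below are invented for illustration.
    import statistics

    pings = [
        ("commuting",           "remedial", -2),
        ("cash register",       "remedial", -1),
        ("cash register",       "remedial", -2),
        ("lunch with coworker", "own sake",  1),
        ("errands",             "remedial", -1),
        ("dinner with friends", "own sake",  3),
        ("reading",             "own sake",  2),
    ]

    remedial = [affect for _, why, affect in pings if why == "remedial"]
    own_sake = [affect for _, why, affect in pings if why == "own sake"]

    print(len(remedial) / len(pings))   # share of moments spent remedying (~0.57)
    print(statistics.mean(remedial))    # mean affect while "working" (-1.5)
    print(statistics.mean(own_sake))    # mean affect during leisure (2.0)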

Poverty and Pain

Behavioral economics is a powerful tool for understanding ourselves and each other. However, many behavioral economists, consciously or unconsciously, rely heavily on the “imaginary survey justification,” and no economist, to my knowledge, has attempted to use behavioral economics methods to figure out how bad, or how good, life is to individuals.

Bryan Caplan published a fascinating, even audacious paper in 2007 entitled “Behavioral Economics and Perverse Effects of the Welfare State.” In it, he argues that giving the poor more life choices through charitable assistance seems to actually harm them, because they are irrational and fail to choose the best option for them. From his abstract:

Critics often argue that government poverty programs perversely make the poor worse off by encouraging unemployment, out-of-wedlock births, and other “social pathologies.” However, basic microeconomic theory tells us that you cannot make an agent worse off by expanding his choice set. The current paper argues that familiar findings in behavioral economics can be used to resolve this paradox. Insofar as the standard rational actor model is wrong, additional choices can make agents worse off. More importantly, existing empirical evidence suggests that the poor deviate from the rational actor model to an unusually large degree. The paper then considers the policy implications of our alternative perspective.

The option Caplan fails to consider is this: the lives of the poor are unacceptably bad without charitable aid.

We don’t think it irrational, exactly, when a person in extreme pain does something to relieve his pain that may have negative future consequences. A shrieking, sweating patient in horrible pain might be perfectly aware of the potential for developing a long-term addiction to opiates, but we do not consider his decision to take opiate medication to be irrational. His pain is so bad that we think it makes sense for him to use any means to stop it, even if they harm his future interests.

Connecting to my discussion of work vs. leisure, I think it a valid hypothesis that poverty is actually dreadfully painful – not only physically, but emotionally and socially. There is only so much pain we can expect a being to endure before his attempts to relieve it through future-damaging means become perfectly understandable and, in fact, rational.

The Demand for Pain Relief

An economic theory of rationality, to be in touch with human ethical reality, must include an account of pain. We must attempt to define and study pain (in the broad sense) in a behavioral economics context, rather than to define it away, as Caplan attempts to do.

Karl Smith notes that studies consistently show that health care consumers do not seem to take mortality data into account when choosing between health care providers, even when very good mortality data are widely available in a user-friendly format. Perhaps the demand for life is not as high as we might think. People seem to like spending money on health care, but not to care about outcomes. One approach this suggests is to study revealed preferences, that is, willingness to pay, for death risk reduction and for pain relief (broadly defined) in different contexts and populations.
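
The arithmetic behind this kind of revealed-preference study is simple; the sketch below shows its shape with invented numbers. The mortality version is the textbook value-of-a-statistical-life calculation, and applying the same division to pain relief is my own extension of the idea:

    # Implied value per unit of harm avoided: if people will pay w per person to
    # reduce some harm by delta units, the implied value is w / delta. Applied to
    # mortality risk this is the textbook "value of a statistical life" (VSL);
    # applied to pain it gives a price per unit of pain avoided. Numbers invented.

    def implied_value(wtp_per_person: float, units_avoided: float) -> float:
        return wtp_per_person / units_avoided

    # Paying $500/year for a 1-in-10,000 reduction in annual death risk:
    print(implied_value(500.0, 1e-4))   # 5,000,000 -- an implied VSL of $5 million

    # Paying $300 for treatment expected to avert ten "pain-days":
    print(implied_value(300.0, 10.0))   # 30.0 -- $30 per pain-day avoided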

Is Loss Aversion Irrational?

A recent paper on behavioral economics, using tufted capuchin monkeys as subjects, demonstrated that the monkeys exhibit what is considered a typical human departure from rationality, “loss aversion.” That is, monkeys trained to use metal discs as money preferred to buy fruit from a graduate student who would give them a smaller food reward but sometimes add a few grapes to it, rather than from a graduate student who would give them a larger food reward but then maybe remove a few grapes. The monkeys weren’t maximizing the number of grapes they got; they specifically exhibited a preference to have things added, rather than have things taken away.

This does not, I think, exactly illustrate irrationality in the capuchins: it illustrates that they are utility maximizers, not grape maximizers. Monkeys experience a loss of utility from losing grapes that is greater than the utility produced by those grapes. Losing grapes, we might say, is painful. Doing the resource-maximizing thing does not necessarily equate with doing the utility-maximizing thing.
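
Here is a minimal sketch of that reading, using a Kahneman-Tversky-style value function with a loss-aversion parameter; the payoffs are invented, and treating the initially displayed portion as the reference point is my assumption, not something reported in the study:

    # Prospect-theory-style value function: outcomes are coded as gains or losses
    # relative to a reference point, with losses weighted by lam > 1.
    def value(x: float, lam: float = 2.25, alpha: float = 0.88) -> float:
        return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

    # Seller A displays 1 grape and adds a second one half the time.
    # Seller B displays 2 grapes and removes one half the time.
    # Expected grapes are identical, but relative to the displayed reference
    # point A delivers only gains and B delivers only losses.
    expected_grapes_A = 0.5 * 1 + 0.5 * 2    # 1.5
    expected_grapes_B = 0.5 * 2 + 0.5 * 1    # 1.5

    utility_A = 0.5 * value(0) + 0.5 * value(+1)   # 0.5
    utility_B = 0.5 * value(0) + 0.5 * value(-1)   # -1.125

    print(expected_grapes_A == expected_grapes_B)  # True: a grape maximizer is indifferent
    print(utility_A > utility_B)                   # True: a loss-averse utility maximizer prefers A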

A Place for Quantitative Methods

Caplan’s conclusion is that we must not treat the poor as rational actors, because they deviate so heavily (compared to the wealthy) from being long-term best-interest maximizers. Therefore, he says, we should not expect to solve their problems by giving them money or other charitable aid.

An equally supported conclusion would be that being poor is so awful it is unendurable, like severe physical pain, and that poor people actually are rational, taking this into account. Caplan also gives us a hint at what might be an indicator of painfulness: the degree to which the actor deviates from resource maximization. He says, “The behavioral literature has documented that the average person frequently violates neoclassical assumptions. But it rarely investigates variation in the tendency to violate neoclassical assumptions. Casual empiricism and limited formal evidence suggest that the poor do deviate more. A great deal more could be learned at low cost if new behavioral studies collected information on participants’ income and education to test for heterogeneity. [Citations omitted.]” Analyzing LOTS of factors for correlation with deviation from resource-maximization rationality, not just income, education, and intelligence, could help us understand the circumstances under which life is so painful that we act irrationally.
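
A minimal sketch of the shape such an analysis might take; the covariates, the “deviation” score, and the data are all simulated, so this is just the form of the regression, not a real study:

    # Regress a per-subject "deviation from resource maximization" score on many
    # candidate correlates, not just income and education. Data are simulated
    # purely to show the form of the analysis.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    covariates = {
        "income":           rng.normal(size=n),
        "education":        rng.normal(size=n),
        "chronic_pain":     rng.normal(size=n),
        "social_isolation": rng.normal(size=n),
        "sleep_deficit":    rng.normal(size=n),
    }
    X = sm.add_constant(np.column_stack(list(covariates.values())))

    # Simulated "truth": deviation rises with pain and isolation, falls with income.
    deviation = (0.5 * covariates["chronic_pain"]
                 + 0.4 * covariates["social_isolation"]
                 - 0.3 * covariates["income"]
                 + rng.normal(scale=1.0, size=n))

    model = sm.OLS(deviation, X).fit()
    print(model.summary(xname=["const"] + list(covariates)))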


1. The extreme seriousness of the basic human need for affiliation and belonging is not widely acknowledged, even though data is available to that effect from a wide variety of sources. Kipling Williams’ meta-studies, Ostracism: The Early Detection System and Ostracism: Consequences and Coping, are a good place to start reviewing the literature on the consequences of failed belonging. For instance, Williams explains experiments using Cyberball, an interactive computer game that can be used to give test subjects the impression of being ostracized in a controlled way. He says experimenters have “found strong negative impact on mood and need levels for those participants who were ostracized” in the Cyberball game, and when the experiment was conducted under fMRI, participants “showed significant increases in activity in their anterior cingulate cortexes, where people also show significant activity when enduring physical pain.” Further, he states that “In all of these Cyberball studies, the effect sizes of the ostracism manipulation are over 1.00 (often above 1.50) indicating strong effects, and subsequent meta-analyses indicate it takes only three people per condition to reach standard levels of significance. [Citations omitted.]” See pp. 17-19 of Ostracism: The Early Detection System. What’s especially amazing is that the effect is clearly not rational – it holds even when ostracized participants have been explicitly told that they’re only playing against a computer (NPCs).

Thomas Joiner’s book Why People Die by Suicide (see my review here) is a book-length treatment of an empirically tested theory of the causes of suicide, and concludes that three factors are the best predictors of suicidality: failed belonging, feelings of burdensomeness, and competence (ability to physically do it). Two of the three factors are measures of failed social affiliation. Other kinds of sadness (including sadness for other reasons and clinical depression) are not very predictive of suicide. And Philippe Rochat’s excellent book Others in Mind details the formation of the human “self” through child development studies and other empirical research, concluding that what he terms the Basic Affiliation Need is not only an extremely critical need, but one that is primordial to, and directly causes, the formation of the self. The need to belong and to have a place in society is not a luxury, but a basic need the absence of which is more painful than prolonged hunger or injury.

2. Yesterday, I overheard two high school girls having a conversation. One revealed to her friend that although she realized it meant giving up one’s life, she could see the upside to a diagnosis of terminal cancer – a kind of peace, and an exemption from the future-oriented unpleasantness we must all endure if we are to be considered socially responsible. “You could just have fun in school,” she said. “I work my ass off every day with work and schoolwork, but if you were going to die anyway, you could just relax. You wouldn’t have to worry.” Her friend agreed, but said she wanted to see what it was like to be an adult anyway. “I’m not sure I do,” said the first little girl. School is generally work, not leisure.


Written by Sister Y

April 27, 2011 at 4:02 pm

2 Responses


  1. Under your view, could it be that neoclassical assumptions are, arguably and perhaps in a strange way, ultimately vindicated? In other words, why not expand on the prevailing economic understanding of “utility” to account for a broader (and perhaps more subjectively attenuated) spectrum of heterogeneous preferences? Whatever circumstantial (or native/biological) contingencies delimit the pain- or loss-averse choices that seem most compelling to a subset of less fortunate economic actors, it seems that actions that are initially assumed to be irrational frequently yield to rational interpretation once a more rigorously factored scheme of metrics is applied. If a poorly designed yardstick is causing us to discern deviations that collapse when additional criteria are considered, then maybe the concept of utility should be elasticized to better account for what's actually going on.

    (I do realize that my ignorance of economic theory makes me vulnerable to the lure of Austrian-scented subjectivity where potentially useful distinctions can be lost in spiraling sleights of atomized post-hoc qualification, but it still seems like the kinds of choices that you distinguish are undertaken on a rational basis, and that the question of utility follows).

    Chip

    April 29, 2011 at 7:38 pm

  2. On a related note concerning the myriad factors that might shed light on variation across socioeconomic groups, I have often wondered how economists interpret differences in time preference. Here again, it seems that there is a danger of value-laden misconstruction, with shorter-term time preferences (which I understand to track IQ and income in the expected direction) being assumed as evidence for comparatively irrational action. There's a cable TV show about obsessive “couponers” that might illustrate the potential problem here. The people featured on the program collect, trade, and redeem voluminous quantities of coupons for grocery items and end up with truly incredible savings (like spending $50 for $1000 worth of groceries). Now it seems to me that there is a very real sense in which the ends sought (and gained) by these dedicated couponers are rational, if not hyper-rational. Yet, though anyone can play the game, most people don't find “extreme couponing” to be worth the effort. Why not? Presumably, it's because most people would rather invest their time in ways that they find more satisfying (or more subjectively loss-averting, I suppose), even if their choices entail the sacrifice of potential gains in one measurable dimension such as mega-savings at the checkout lane. If we posit an imaginary baseline where coupon gamers are held to be prime exemplars of homo-economicus in action, it's easy to imagine how other heterogeneously determined variations in time-preference (like accepting the terms of a high interest credit card and maxing it out to finance a huge bareback sex party in a hotel penthouse) confute – or significantly undermine – other standardized assumptions about utility maximization. Again, I know this treads precipitously close to “just-so” Austrian school subjectivity, but I have sincere difficulty understanding how economists can ascribe comparative weight to choice A over choice B without knowing and weighing the subjective value of radically individuated priorities.

    To cut a bit closer to your broader point regarding pain minimization (which I didn't miss – sorry if it seems that I did), most people would probably intuit (along with the participants in your overheard conversation) that a terminal diagnosis makes sense as a game-changer, but it seems crucial to note that the game has ALREADY changed for those of us who, in a salient way, view life itself as a form of terminal cancer. For such people, time preferences and consequent choices may differ in marked ways – and in ways that will seem, from a certain conceited vantage, irrationally deviant.

    Also (stipulating once again that I'm not well-read in micro or behavioral econ), I wonder if Caplan fails to appreciate the significance of the term “perversely” in the passage that you cite. Wouldn't a critic reply that, rather than “expanding” the set of options available to the poor, social programs “pervert” – and limit perforce – the options and incentives that would otherwise be in play? Without sufficient presuppositional footing, it may be that his engagement with a purported paradox is premature, if not superfluous.

    Chip

    April 29, 2011 at 7:40 pm

