The View from Hell

Inflicting Harm and Inflicting Pleasure on Strangers

On ecstasy, peanuts, and how we take care of strangers.


A 2008 report from the United Kingdom’s Home Office Advisory Council on the Misuse of Drugs concluded that ecstasy (at least, MDMA) is not nearly as dangerous as was previously thought, either in deadliness or in long-term health consequences. The Council even recommended changing the classification of MDMA from its present status as Class A (heroin, crack, and amphetamines prepared for injection are Class A) to the less-dangerous Class B (which includes marijuana and Ritalin). (The recommendation was, of course, rejected.)

A February 2009 editorial in the New Scientist took the logic a step further:

Imagine you are seated at a table with two bowls in front of you. One contains peanuts, the other tablets of the illegal recreational drug MDMA (ecstasy). A stranger joins you, and you have to decide whether to give them a peanut or a pill. Which is safest?

You should give them ecstasy, of course. A much larger percentage of people suffer a fatal acute reaction to peanuts than to MDMA.[1]

The implication is that, when acting upon a stranger, we should minimize his risk of death.[2]

The lovely and talented Caledonian has a slightly different take: we should focus on the relative likelihood of harm, he says, rather than the relative likelihood of death.

Both of these goals – acting to minimize the risk of death to a stranger, and acting to minimize his risk of harm – are laudable and widely shared. But there’s a glaring aspect of the utilitarian calculus that almost no one seriously considers in making the decision to administer a peanut or some ecstasy. This is the differential positive utility to be gained by the stranger in each case. A peanut is marginally sustaining, but unless it’s been boiled with star anise and Sichuan peppercorns, it’s not particularly enjoyable. Ecstasy, on the other hand, is fucking awesome. Why doesn’t anybody consider the relative benefit to the stranger along with the relative harm?[3]

While many of us would certainly consider the pleasure of ecstasy in deciding whether to eat the pill or the peanut ourselves, it's proper and coherent not to consider the pleasurable effects of a potentially harmful action when it will be inflicted upon a non-consenting stranger whose values we do not know. This illustrates Seana Shiffrin's principle that, while it's morally acceptable to harm a stranger without his consent in order to prevent worse harm (e.g., to administer ecstasy in order to avoid administering a peanut or to break someone's arm in order to pull him from a burning car), it's not morally acceptable to harm a stranger without his consent in order to provide a pure benefit. But the ecstasy example supports a stronger inference: when evaluating actions that will harm a non-consenting stranger, his potential pleasure doesn't count. When we're acting toward someone whose values we do not know, we should not think in terms of maximizing his utility, but in terms of minimizing our harm to him.
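
To put the contrast schematically (the notation here is mine, not Shiffrin's): let $B_S(a)$ and $H_S(a)$ be the benefit and the harm that an action $a$ inflicts on a stranger $S$.

\[
\text{ordinary utilitarian rule:}\quad a^{*} = \arg\max_{a}\,\big(\mathbb{E}[B_S(a)] - \mathbb{E}[H_S(a)]\big)
\]
\[
\text{rule for non-consenting strangers:}\quad a^{*} = \arg\min_{a}\,\mathbb{E}[H_S(a)]
\]

Under the second rule the benefit term drops out entirely: the fact that ecstasy is enjoyable simply never enters the calculation.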

The distinction between acting toward a non-consenting stranger whose values we do not know, and acting toward ourselves (or toward someone whose values we know), is one that is ignored by S. D. Baum in his article "Better to exist: a reply to Benatar" (J. Med. Ethics 2008;34:875–876). Baum's "reply" (to David Benatar's position that it is always better not to bring people into existence) is, in relevant part, as follows:

The benefits/harms asymmetry is commonly manifested (including in Benatar’s writing) in the claim that no amount of benefit, however large, can make up for any amount of harm, however small. This claim comes from an intuition that while we have a duty to reduce harm, we have no duty to increase benefit. The corresponding ethical framework is often called “negative utilitarianism”. Negative utilitarianism resembles maximin in its resolute focus on the worst off—as long as some of those worst off are in a state of harm, instead of just in a state of low benefit. Like maximin, negative utilitarianism can recommend that no one be brought into existence—and that all existing people be euthanised. I find negative utilitarianism decidedly unreasonable: our willingness to accept some harm in order to enjoy the benefits of another day seems praiseworthy, not mistaken. I thus urge the rejection of this manifestation of the benefits/harms asymmetry. [Emphasis mine; citations omitted.]

Our own willingness to accept suffering in the interest of pleasure (or any other value) is no reason to think that it is right to inflict that same suffering on a non-consenting stranger. Negative utilitarianism may not be the proper course to take in our own lives, but thought experiments like mine suggest that negative utilitarianism is the proper course to take toward the lives of others who do not consent to our interference.[4]
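
For readers unfamiliar with the positions Baum names, they can be stated schematically (the notation is mine): where $b_i$ and $h_i$ are the benefit and harm accruing to person $i$,

\[
\text{classical utilitarianism: maximize } \sum_i (b_i - h_i)
\]
\[
\text{negative utilitarianism: minimize } \sum_i h_i
\]
\[
\text{maximin: maximize } \min_i\,(b_i - h_i)
\]

Baum objects to the second rule as a personal ethic; my claim is that it is nevertheless the right rule to apply to the lives of non-consenting others.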

Many people think it’s morally acceptable to have babies, despite the fact that the babies will certainly suffer a great deal during their lifetimes and may suffer an exceptional amount (that is, bringing someone into existence does him some harm). Pronatalists generally want to point out the good things in life – the pleasant effects of puppies and sunsets – and to balance them against life’s harms. But bringing a child into the world necessarily entails harming a stranger (for one doesn’t know the values of one’s child prior to procreation). It is no different from dosing a stranger with ecstasy for no reason, except that the harms of life massively exceed the harms of ecstasy, and the pleasure of life, for many, is much less. Considering the non-consenting stranger’s pleasure in the ecstasy/peanut case is unthinkable; procreation advocates need to explain why considering his pleasure in coming into existence is just fine.

The peanut/ecstasy example functions as a thought experiment that may be closer to real life than Shiffrin’s ingenious example in which a wealthy person drops gold bars from an airplane, thereby benefiting some of the people below but also occasionally breaking their arms.

The only case in which it is widely accepted to inflict unconsented harm in order to provide a pure benefit is when acting toward one’s children. This is an aspect of viewing one’s children as property rather than persons. (Proprietariness is also the best explanation for why parents sometimes kill their natural children – and why men sometimes kill their wives or wife-equivalents – when they decide to commit suicide.)


1. Actually, the New Scientist is oversimplifying; there are two risks of death in each case. The first kind of risk is the risk that the stranger S has particular characteristics which will make any peanut, or any MDMA, lethal for him. The second kind of risk is that a particular ecstasy tablet or peanut will be lethal for any given stranger (e.g., the tablet purporting to be E is really, say, buprenorphine, or the peanut is somehow infected with lethal levels of Salmonella). The latter type of risk probably isn't that significant, though. UK studies don't seem to be finding lethal chemicals in street ecstasy. In Australia, the most common "fake ecstasy" is methamphetamine, which is not particularly lethal. As for peanuts, the CDC reports that the death rate from nontyphoidal Salmonella like the S. typhimurium that recently caused peanut recalls is about 0.78%.
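
To make this footnote's decomposition explicit (the additive approximation for small probabilities is mine):

\[
P(\text{death}) \approx \underbrace{P(\text{fatal idiosyncratic reaction})}_{\text{first kind of risk}} + \underbrace{P(\text{item tainted}) \times P(\text{death} \mid \text{tainted})}_{\text{second kind of risk}}
\]

For peanuts, even a Salmonella-tainted nut leaves $P(\text{death} \mid \text{tainted})$ at roughly 0.0078, so the second term is dwarfed by the first (fatal allergy); for ecstasy, the common adulterant is methamphetamine rather than anything deadlier, so the same holds.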

2. I have to point out that the Mounties claim that “peanut” is a street name for ecstasy. I’ve never heard this in my life, but I don’t go clubbing in Canada much.

3. We might also consider our own willingness to endure, on the one hand, a stranger’s slight peanut breath, and on the other, a stranger clinging to our leg like a baby macaque for three hours, but that is a separate calculus.

4. Baum also assumes, contrary to Benatar’s express position, that death is not a harm to already-existing people. In fact, Benatar’s claims do not rest on any simplistic pleasure/pain conception of value; Benatar argues that death is a harm, even a painless death. It is, in fact, one of the great harms of life – every born person will suffer the harm of death.

Written by Sister Y

March 8, 2009 at 7:42 am

Is Coming Into Existence an Agent-Neutral Value?

David Benatar argues that bringing someone into existence is always a harm, and grounds his argument in a particular asymmetry – the “goodness” of absent pain, versus the mere neutrality of absent pleasure where no one is thereby deprived.

Seana Shiffrin, on the other hand, doesn't argue that procreation is always a harm, but does refuse to characterize procreation as a "morally innocent endeavor" and argues for a more equivocal view of bringing people into existence. While procreation is not necessarily always a harm, it is often a harm, and procreators should bear moral responsibility for the harm they do. (Shiffrin, Seana Valentine. "Wrongful Life, Procreative Responsibility, and the Significance of Harm." Legal Theory 5 (1999), 117–148.) Shiffrin defends her view with a different asymmetry – that, while it is fine to harm someone in order to prevent a greater harm to him, even without his consent (the rescue case), it is not fine to harm a person without his consent merely to provide him a benefit. Her core example involves a wealthy recluse, Wealthy, with no other way to help others, dropping $5 million cubes of gold from the air on a neighboring island. Many receive his presents with no complications, but one recipient (Unlucky) is hit by a cube and breaks his arm. While the recipient might, after the fact, be glad to have been hit with the gold cube, and consider the broken arm worth it, intuition suggests that dropping $5 million gold cubes on people is wrong. Unlucky

admits that all-things-considered, he is better off for receiving the $5 million, despite the injury. In some way he is glad that this happened to him, although he is unsure whether he would have consented to being subjected to the risk of a broken arm (and worse fates) if he had been asked in advance; he regards his conjectured ex-ante hesitation as reasonable. Given the shock of the event and the severity of the pain and disability associated with the broken arm, he is not certain whether he would consent to undergo the same experience again.

Shiffrin goes on to flesh out the intuition that Wealthy has wronged Unlucky – for instance, we would say that Wealthy owes Unlucky an apology, and if Wealthy refused to pay for Unlucky’s corrective surgery, Unlucky would properly have a cause of action against Wealthy for the cost of his injuries.

Shiffrin's focus on unconsented harm accords well with my thinking on procreation. I wish to question, though, whether it is the benefit/harm distinction that matters when motivating an unconsented harm. In my view, Shiffrin's benefit/harm distinction is unnecessarily confusing and subject to contrary individual interpretations of harm and benefit; the very ideas of harm and benefit are, in my view, too subjective to form the basis for the rightness or wrongness of inflicting unconsented harm. I think it is both more correct and more general to say that unconsented harm may only be done in the service of a genuinely agent-neutral value.

Shiffrin considers, as a possible objection to her framework, that the real reason that a rescue is morally right, while Wealthy’s action toward Unlucky is morally wrong, is that in the rescue case, hypothetical consent may be said to exist, whereas not even hypothetical consent exists in Unlucky’s case (he is not sure he would have consented ex ante). Shiffrin argues that it is the asymmetry between harm and benefit that grounds our intuition on hypothetical consent, rather than the other way around. She argues that

there seems to be a harm/benefit asymmetry built into our approaches to hypothetical consent where we lack specific information about the individual’s will. We presume (rebuttably) its presence in cases where greater harm is to be averted; in the cases of harms to bestow greater benefits, the presumption is reversed.

My view is that we can be clearer than this. It is not the harm/benefit distinction that is driving the willingness to infer hypothetical consent; it is the different level of agent-neutrality of the inflicted harm’s consequence.

Thomas Nagel introduces the concept of agent-relative and agent-neutral value in The View from Nowhere. Agent-relative values are values which an agent holds, but which no one but the agent has much reason to promote. Agent-neutral values are values which anyone has reason to promote, whether or not the promotion of the values would benefit him directly. An agent’s desire to climb Mount Everest would be an agent-relative value; he may place genuine value on it, but I have no reason to assist him in his endeavor. However, relieving pain may be said to be an agent-neutral value; if someone is suffering severe pain, I have good reason to alleviate his pain.

In the rescue case, the rescuer causes harm to a person in order to prevent greater harm – to save his life, or to prevent more serious physical injury. Both saving life and preventing physical injury would probably be classified as agent-neutral values. In Unlucky’s case, however, the $5 million gold cube could well be seen as something with only agent-relative value. Shiffrin specifies that inhabitants of Unlucky’s island are well provided for even without the gold. While there might be an agent-neutral reason to provide people with a certain minimum level of money or material comfort, beyond this, there is not much reason to give substantial gifts to strangers. A person might want $5 million, but I have no particular reason to see that he gets it, while I do have a reason to ensure that his basic nutritional needs are taken care of.

A major problem with the agent-neutral/agent-relative classification is whether agent-neutral values exist at all. Eric Mack, for example, argues that there are no agent-neutral values ("Against Agent-Neutral Value," Reason Papers 14 (Spring 1989), 76–89). Mack argues that an agent-neutral value must necessarily be an "agent-external" value – something that is valuable in itself, even if no one is ever in a relationship with it so as to value it. Otherwise, all such values are "reducible to [their] value for someone," that is, they are agent-relative (emphasis mine). Few are prepared to claim that there are truly agent-external values in this sense (things that would be valuable even if there were never any sentient beings in the world). I find the possibility of the nonexistence of agent-neutral values disturbing, calling to mind as it does relativism/subjectivism, though I could imagine an ethical system that recognized only agent-relative values while also recognizing reasons other than personal preference for taking the values of others seriously. Interestingly, Mack refers to the possibility of agent-relative values that are nevertheless, in his words, objective; as long as there can be reasons for taking the (agent-relative) values of others seriously, then the project of ethical philosophy doesn't fall into dust.

George R. Carlson (in "Pain and the Quantum Leap to Agent Neutral Value," Ethics, Vol. 100, No. 2 (Jan., 1990), pp. 363–367), while not exactly precluding the possibility of agent-neutral value, argues that Nagel's chief example, pain, fails to be a genuinely agent-neutral value. He argues that while a person might have reason to alleviate the pain of another, these are not agent-neutral reasons. Rather, they are grounded in the perceptions and empathy of the agent.

What I find most concerning about the benefit/harm classification, as well as about claims of agent-neutral value, is that any of the examples so far examined may, depending on the individual circumstances, be either a harm or a benefit. Saving a life would generally be seen as an "agent-neutral" value; however, since I am a suicide, a rescuer saving my life would do only harm to me. Preventing pain is seen as an agent-neutral value; however, hiding my friend's car keys so he cannot drive to a club and get beaten up by his dominatrix friend (and thereby preventing him physical pain) would certainly do him harm, not good. And studies of lottery winners seem to indicate that even loads of unnecessary money can do harm. (As J. David Velleman points out, even choice can be a harm.) Can these values really be agent-neutral if they are so often harms? Is it not more appropriate to call them the agent-relative values of the majority, rather than genuinely agent-neutral values?

Shiffrin points out a "related asymmetry," from Thomas Scanlon ("Preference and Urgency," 72 J. Phil. (1975), 655–69). This is the asymmetry between the harm that it is morally correct to inflict on another, and the "harm" that a person may inflict on himself. In Shiffrin's words (summarizing Scanlon),

One may reasonably put much greater weight on a project from the first-person perspective than would reasonably be accorded to it from a third-party’s viewpoint. A person may reasonably value her religion’s mission over her health, but the state may reasonably direct its welfare efforts toward her nutrition needs rather than to funding her religious endeavors.

This “related asymmetry” is, it seems to me, concerned with both the problem of consent and, indirectly, with the idea of agent-neutral versus agent-relative values. A person may consent to “harm” for any reason whatever, agent-relative or otherwise; but in order to inflict harm on another without consent, we must either (a) have such a good model of the person’s values that we can infer hypothetical consent based on agent-relative values, or (b) act in furtherance of genuinely agent-neutral values.
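
Schematically (this formalization is mine, not Shiffrin's or Scanlon's): for a harm $h$ inflicted on a person $S$,

\[
\text{Permissible}(h, S) \iff \text{Consent}(S, h) \;\lor\; \text{HypConsent}(S, h) \;\lor\; \text{AgentNeutral}(\text{end}(h))
\]

where $\text{HypConsent}(S, h)$ may be inferred only from a reliable model of $S$'s agent-relative values, and $\text{end}(h)$ is the value in whose service the harm is inflicted.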

The ultimate question, of course, is whether coming into existence is the kind of value that it is morally acceptable to inflict harm on others, without their consent, in order to procure for them. Pain, suffering, illness, unrequited love, shame, sexual frustration, sorrow, disappointment, fear, and death are all guaranteed (or nearly so) by the fact of being brought into existence; these are certainly harms. The pronatalist might argue that despite these certain harms, it is not wrong to bring others into existence, because the unconsented harm is in the service of an agent-neutral value: coming into existence. (I find the "hypothetical consent" argument unpersuasive, because we have no model, much less a reliable model, of the agent's future agent-relative values when we contemplate bringing that agent into existence. This is my core problem with R. M. Hare's "Golden Rule" argument that we should bring into existence those who will be happy to exist and not bring into existence those who won't. How do we tell the difference ex ante?)

Is coming into existence an agent-neutral value? The problem we run into at this stage is that we have little theory of what qualifies as an agent-neutral value. Carlson's chief criticism of Nagel seems to be the lack of a theory for determining what counts as an agent-neutral value versus an agent-relative value (other than the unsatisfying "pain is awful"). Indeed, there seems to be a genuine question as to the degree to which agent-neutral values exist at all.

Actually, even under Mack's restrictive definition, I think there is, in some sense, a clear example of a genuinely agent-neutral value, a peculiar value that would retain its value even if no sentient beings ever came into existence to appreciate it. This is the value of no sentient being coming into existence. If no beings exist, no suffering can occur; this is good, even though (and precisely because) no being ever comes into existence to appreciate this pleasant state of affairs. The alternative would be worse; it is good that this worse option does not obtain, even though the only way anyone could perceive its betterness would be by the worse alternative coming to pass.

There may be disagreement over whether coming into existence is an agent-neutral value. I certainly think that it is not, but I think that an argument could be made in good faith that it is. I think there is a stronger argument, however, that no one coming into existence is an agent-neutral value – perhaps the only such peculiar value – and, under my theory, an agent-neutral value is one in the service of which unconsented harm may be countenanced.

Written by Sister Y

August 15, 2008 at 5:02 am