The View from Hell

Just another site


The Practice of Euphemism


Powerful, generally undetected euphemistic processes in language give us a falsely optimistic model of the world.

The Origin of Euphemistic Distortions

The formation and use of euphemism is a powerful, inevitable process in human language. Every day, subjects must be discussed or alluded to that could cause discomfort in the parties to the conversation, detracting from both the informative purpose of the conversation and the (generally more important) social bonding function. To avoid the discomfort, taboo subjects are discussed in a circuitous manner, removed as much as possible from the disturbing aspect of the topic. Disturbing aspects are ignored, reframed, treated symbolically, or otherwise elided.

On the level of diction, words and phrases are found to bring to mind the relevant aspects of a topic, while minimizing the disturbing or irrelevant aspects. Metaphor and metonymy are common mechanisms for euphemism, but there are many such methods, with not just new euphemisms, but new euphemistic mechanisms, being invented all the time.

But euphemism does not only happen on the level of word choice. From micro- to macro-, from the foundational narrative/legend of a society to the way social relationships are cognized, human language users and language-using communities and even nature (via evolution) are acting on language to orient human thought in euphemistic directions. How our brains conceive of the world (including language) is not related to what’s actually important in a universal sense, but to what was important to organisms’ fitness goals in the environment of evolutionary adaptedness. We do not perceive all wavelengths of light or sound, but only those that (a) were relevant to survival in the EEA and (b) for which a perceptive apparatus was evolutionarily available. (And we do not perceive things like X-rays at all.) Similarly, language does not give us a picture of what is, but only a picture of what was relevant to survival in the EEA.

Artists Explode Euphemism

The project of artists (and of phenomenology) is often to explode euphemistic ways of thinking. In “Dulce et Decorum est,” Wilfred Owen does so for the romantic idea of glorious death in combat. Patriotism and a euphemized conception of those fighting may be more comfortable and politically expedient for the folks back home, but here’s how it really is, says Owen, here is what is elided: the boy who doesn’t get his gas mask on in time, “guttering, choking, drowning,” “the white eyes writhing in his…hanging face, like a devil’s sick of sin,” at every jolt of the wagon they “flung him in,” the “blood/come gargling from his froth-corrupted lungs.” Happy Memorial Day.

No Counter-Process

What does this tell us about the accuracy of the model of the world we have from language? Is our conception accurate? Too rosy? Too negative?

We might expect our visual picture of the world to be “too rosy” if we found that our instrument for detecting red light (eyes, brain) were set too high compared to the mechanism for detecting other kinds of light. Analogously, an understanding of the linguistic phenomenon of euphemism might lead us to suspect that our conception of the world may be too optimistic – unless, of course, there were a countervailing, dysphemistic process. However, a moment’s reflection shows us that the effect of any dysphemistic process is only a tiny fraction of that of euphemism, at best.

A main function of euphemism is to avoid social discomfort. The idea of suffering is always socially uncomfortable – we should expect it to be edited out. There is rarely any reason to add pain (or social awkwardness) to already-comfortable language – this is the task of the artist and the philosopher alone.

The Mistaken Notion of Pure Language

Less subtle thinkers than Wilfred Owen have hoped for a world of clean language, without euphemism. This is a mistaken hope.

All language has connotation as well as denotation – an emotional message as well as an informative one, even if that emotional message is one of blank neutrality. We do not think without emotion; in a practical sense, we are incapable of doing so. Without the swift functioning of our emotions, we are crippled at such “simple” cognitive tasks as making decisions (see, e.g., “The role of emotion in decision-making: Evidence from neurological patients with orbitofrontal damage,” by Antoine Bechara, Brain and Cognition 55 (2004) 30–40). Why should language not take advantage of this fast system of cognition whose output is chemicals?

Language “cleaned of its emotional message” is not purer or realer or truer language – it is systematically distorted language. Making a project of eradicating euphemism immediately begs the question of the objectively correct word or conception for a given thing or act. “Crack baby” or “drug-exposed infant”? “Crack baby” one might call politically incorrect, vernacular, plainspoken, suggesting that “drug-exposed infant” is the euphemism. The latter, though, is the term used by child welfare professionals (nurses, social workers) to indicate that a horrible violation happened to this innocent infant child, emphasizing the wrong done to the child. Can you hear the screams more from “crack baby” or “drug-exposed infant”? See the tubes and the shaking and the tiny hands? And which better expresses that the mother of said infant used drugs to ease her pain from having been viciously sexually abused during her childhood?

All words are euphemisms. All language is euphemism – selection of relevant, comfortable aspects, and elision of pangs of empathetic pain so far as possible.

“Rape” is a euphemism. “Prison” is a euphemism. Even “prison rape” is a euphemism. Words indicate concepts, but cannot ever express how bad these experiences are for those who suffer them.

Memento Mori. (Population and Reproduction: A Modern Euphemistic Process)

Check this out: my article Living in the Epilogue: Social Policy as Palliative Care is discussed in the piece Porn and Not Being Cheery. It’s dope. NSFW!!! There are pictures of nekkid people OMG!

Written by Sister Y

May 23, 2011 at 3:08 pm

Press: Traumatic Brain Injury Makes Suicide Rational


From a story on a professional athlete who committed suicide, suspecting he had traumatic brain injury:

BOSTON — The suicide of the former Chicago Bears star Dave Duerson became more alarming Monday morning, when Boston University researchers announced that Duerson’s brain had developed the same trauma-induced disease recently found in more than 20 deceased players.

What is amazing about this story is this: there is no recommendation for greater mental health screening, detection, and services among former professional athletes. There are recommendations, however, to actually SOLVE THE PROBLEM that made the guy’s life hell in the first place.

Duerson shot himself Feb. 17 in the chest rather than the head so that his brain could be examined by Boston University’s Center for the Study of Traumatic Encephalopathy, which announced its diagnosis Monday morning in Boston.

In this case, the reporter seems to clearly accept the proposition that the former athlete’s suicide was caused by his traumatic brain injury – but NOT because his traumatic brain injury made him insane. Rather, it seems that his traumatic brain injury made his life bad enough that it’s impossible to completely reject the notion that he committed suicide rationally.

The medical model of suicide – the idea that suicide is a pathological symptom of a curable medical condition – has always been dubious, but it is clear from accounts like this that not even the media (repeatedly warned by well-meaning bullies to self-censor) fully buy the story. Everyone knows that there are good reasons to commit suicide. What few acknowledge is that most genuinely good reasons to commit suicide are not as easy to verify as this former athlete’s brain injury.

As David Foster Wallace describes it in Infinite Jest:

Think of it this way. Two people are screaming in pain. One of them is being tortured with electric current. The other is not. The screamer who’s being tortured with electric current is not psychotic: her screams are circumstantially appropriate. The screaming person who’s not being tortured, however, is psychotic, since the outside parties making the diagnoses can see no electrodes or measurable amperage. One of the least pleasant things about being psychotically depressed on a ward full of psychotically depressed patients is coming to see that none of them is really psychotic, that their screams are entirely appropriate to certain circumstances part of whose special charm is that they are undetectable by any outside party. [Emphasis mine.]

Written by Sister Y

May 2, 2011 at 5:17 pm

Born Obligated: A Place for Quantitative Methods in Ethics


Behavioral economics methods may be more reliable than unsupported, sweeping assumptions in understanding the degree to which being born is okay.


That being born is a good thing is treated as axiomatic by the majority of thinkers who consider the issue.

Thomas Nagel, for instance, states that “All of us, I believe, are fortunate to have been born,” even while affirming that not having been born is no misfortune (Mortal Questions, “Death,” p. 7). Bryan Caplan has said, regarding IVF, “How can I neglect the welfare of the children created by artificial means? But I’m not ‘neglecting’ children’s welfare. I just find it painfully obvious that being alive is good for them [emphasis in original].”

There are two elements to this kind of thinking. First, it represents a judgment that life is, on the whole, worth getting and having; but second, all the talk of “obviousness” also implies that there is something wrong with even asking the question.

I want to address here how quantitative methods, rather than intuition and assumption, might be used to measure the downside of existence. I argue that we need to quantitatively analyze the obligations we are all born with and the inherent pain of life – burdens that, if our lives are to be worth having on the whole, must be made up for with valuable experiences.

Work and Leisure

We might characterize the central unpleasant obligation in our lives as the obligation to “work” (broadly construed) in order to meet the salient and potentially misery-inducing needs we are born with or naturally develop. These needs include not only food, clothing, shelter, and medical care, but also status, love, sex, attention, and company.[1] We can even quantify these needs, by quantifying work done to satisfy these needs, for which we have a great deal of data.

Some of these needs, of course, may actually be satisfied by working – the need to belong, to feel valuable, to not be a burden. However, at the same time, some of these needs are actually increased by working – that is, work may create disutility as well as utility. How can you tell the difference between what people do merely to ease the pain and discomfort of existence, and what people actually want to be doing?

Many economists have addressed the question of the difference between work and leisure, and how we may quantify and measure them. One crude-but-tempting measure of the value of leisure time is merely a person’s wage. But as Larson & Shaikh (2004) explain, this is much too crude to get at the true nature of work and leisure:

Assuming the average wage is the appropriate opportunity cost of time presumes that the individual faces no constraints on hours worked, derives no utility or disutility from work, and has a linear wage function…. This is unlikely to be true for many people…. An individual’s average wage does not necessarily reveal anything about the shadow value of discretionary leisure time, either as an upper or lower bound.

The question of the value of leisure time is intimately related to the question of quantifying the unpleasant obligations placed on us by virtue of existence, so that we may have a starting point for a meaningful comparison of life’s costs and life’s benefits.

How do we characterize “work”? What is the difference between “work” and “leisure”?

Intuitively, we know the difference – or at least, there exist clear cases of “work” and clear cases of “leisure.” Operating a cash register is work. Washing dishes is work. Doing bong rips is leisure. Reading novels is leisure. Watching television and having sex are generally leisure (unless you’re in advertising or a prostitute). For most people, child care and lawn care qualify as work – whether paid or unpaid – but for some people, these may qualify as leisure some of the time.

These examples suggest that leisure is that which is done for the sake of the experience itself, whereas work is done with some goal in mind other than the experience itself, and is done only in service of that goal.[2] Running ten miles is leisure for me, because I do it for the pleasure of the experience; running those same ten miles might be work for someone else, because he does it to lose weight, not for the pleasure of running. A third person might run for both reasons, in which case the action has aspects of both leisure and work. We should not necessarily expect that every action and every hour can be neatly categorized as “work” or “leisure,” even for a particular individual.

This should give us pause when considering the definition of “leisure” preferred by Mark Aguiar and Erik Hurst in their 2006 paper “Measuring Trends in Leisure: The Allocation of Time Over Five Decades,” an hour-by-hour tally of time not spent in market or non-market work (e.g., at work, or doing unpaid work around the house or around town). In reality, a single hour may have substantial aspects of both work and leisure.

Aguiar and Hurst remark on a potentially definitional characteristic of leisure: the degree to which market inputs (money, technology) are consumed to reduce the amount of time spent in the activity. They say:

one definition of whether an activity is “leisure” may be the degree of substitutability between the market input and the time input in the production of the commodity. That is, the leisure content of an activity is a function of technology rather than preferences. In the examples above, one can use the market to reduce time spent cooking (by getting a microwave or ordering takeout food) but cannot use the market to reduce the time input into watching television (although innovations like VCRs and Tivo allow some substitution). [Emphasis mine.]

Let me give a definition of my own, to fit my question:

Work is any action (or omission, perhaps) that we undertake in order to prevent or remedy some unpleasant state, and that we would not undertake if the unpleasant potential state were not a factor. An activity has a strong work component if technology is demanded by individuals to reduce the amount of time they spend in the activity.

In other words, work is what you do only because you have to eat, and you spend as little time doing it as is possible to satisfy your (present and projected future) needs.

Many studies since the 1980s have found that physicians’ demand for leisure directly affects the prevalence of cesarean sections. Cesarean sections are highly correlated with time variables associated with doctors wanting to get the hell out of there, although (further strengthening the theory) this correlation depends on the type of insurance covering the patient.

Instead of relying on the “imaginary survey justification” to “prove” that coming into existence is a good thing, economists and ethicists could use more creative, quantitative methods to examine the question of how bad (and how good) life is. Specifically, we need to figure out how to tell the difference between suffering people attempting to remedy their shitty situation, and happy people chilling out – both of which may describe any of us at different times in our life, or even our day. “Are you glad you were born?” is unsubtle, an all-or-nothing approach that relies heavily on people knowing the answer to questions they may have only limited capacity to understand. Analyzing behavior in smaller chunks would give us a better idea of just how happy people are to be here.

Poverty and Pain

Behavioral economics is a strong tool for understanding ourselves and each other. However, many behavioral economists, consciously or unconsciously, rely heavily on the “imaginary survey justification,” and no economist, to my knowledge, has attempted to use behavioral economics methods to figure out how bad, or how good, life is to individuals.

Bryan Caplan published a fascinating, even audacious paper in 2007 entitled “Behavioral Economics and Perverse Effects of the Welfare State.” In it, he argues that giving the poor more life choices through charitable assistance seems to actually harm them, because they are irrational and fail to choose the best option for them. From his abstract:

Critics often argue that government poverty programs perversely make the poor worse off by encouraging unemployment, out-of-wedlock births, and other “social pathologies.” However, basic microeconomic theory tells us that you cannot make an agent worse off by expanding his choice set. The current paper argues that familiar findings in behavioral economics can be used to resolve this paradox. Insofar as the standard rational actor model is wrong, additional choices can make agents worse off. More importantly, existing empirical evidence suggests that the poor deviate from the rational actor model to an unusually large degree. The paper then considers the policy implications of our alternative perspective.

The option Caplan fails to consider is this: the lives of the poor are unacceptably bad without charitable aid.

We don’t think it irrational, exactly, when a person in extreme pain does something to relieve his pain that may have negative future consequences. A shrieking, sweating patient in horrible pain might be perfectly aware of the potential for developing a long-term addiction to opiates, but we do not consider his decision to take opiate medication to be irrational. His pain is so bad that we think it makes sense for him to use any means to stop it, even if they harm his future interests.

Connecting to my discussion of work vs. leisure, I think it a valid hypothesis that poverty is actually dreadfully painful – not only physically, but emotionally and socially. There is only so much pain we can expect a being to endure before his attempts to relieve it through future-damaging means become perfectly understandable and, in fact, rational.

The Demand for Pain Relief

An economic theory of rationality, to be in touch with human ethical reality, must include an account of pain. We must attempt to define and study pain (in the broad sense) in a behavioral economics context, rather than to define it away, as Caplan attempts to do.

Karl Smith notes that studies consistently show that health care consumers do not seem to take into account mortality data when choosing between health care providers, even when very good mortality data is widely available in a user-friendly format. Perhaps the demand for life is not as high as we might think. People seem to like spending money on health care, but not to care about outcome. One approach suggested by this is to study revealed preferences/willingness-to-pay for death risk reduction and pain relief (broadly defined), respectively, in different contexts and populations.

Is Loss Aversion Irrational?

A recent paper on behavioral economics, using tufted capuchin monkeys as subjects, demonstrated that the monkeys exhibit what is considered a typical human departure from rationality, “loss aversion.” That is, monkeys trained to use metal discs as money preferred to buy fruit from a graduate student who would give them a smaller food reward but sometimes add a few grapes to it, rather than from a graduate student who would give them a larger food reward but then maybe remove a few grapes. The monkeys weren’t maximizing the number of grapes they got; they specifically exhibited a preference to have things added, rather than have things taken away.

This does not, I think, exactly illustrate irrationality in the capuchins: it illustrates that they are utility maximizers, not grape maximizers. Monkeys experience a loss of utility from losing grapes that is greater than the utility produced by those grapes. Losing grapes, we might say, is painful. Doing the resource-maximizing thing does not necessarily equate with doing the utility-maximizing thing.
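The distinction between resource maximization and utility maximization can be made concrete with a toy loss-averse value function in the style of prospect theory. (The loss-aversion coefficient of 2.25 is Kahneman and Tversky’s estimate for humans; the grape numbers are invented for illustration, not taken from the capuchin study.)

```python
# A minimal sketch of loss aversion as utility (not grape) maximization.
# Changes are valued relative to a reference point: the amount of fruit
# the vendor initially shows.

def value(change, lam=2.25):
    """Subjective value of a change from the reference point; losses
    loom larger than gains by the factor lam."""
    return change if change >= 0 else lam * change

def expected_utility(shown, outcomes):
    """Average value of the change from what was shown, over equally
    likely final outcomes."""
    return sum(value(final - shown) for final in outcomes) / len(outcomes)

# Vendor A shows 1 grape and half the time adds one: outcomes 1 or 2.
# Vendor B shows 2 grapes and half the time removes one: outcomes 2 or 1.
u_add = expected_utility(shown=1, outcomes=[1, 2])     # 0.5
u_remove = expected_utility(shown=2, outcomes=[2, 1])  # -1.125

assert u_add > u_remove  # the loss-averse monkey prefers Vendor A
```

Both vendors deliver 1.5 grapes on average, so a grape maximizer would be indifferent; the loss-averse agent strictly prefers the vendor who adds. Nothing in that preference is inconsistent with maximizing utility.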

A Place for Quantitative Methods

Caplan’s conclusion is that we must not treat the poor as rational actors, because they deviate so heavily (compared to the wealthy) from being long-term best-interest maximizers. Therefore, he says, we should not expect to solve their problems by giving them money or other charitable aid.

An equally supported conclusion would be that being poor is so awful it is unendurable, like severe physical pain, and poor people actually are rational, taking this into account. Caplan also gives us a hint at what might be an indicator of painfulness: the degree to which the actor deviates from resource maximization. He says, “The behavioral literature has documented that the average person frequently violates neoclassical assumptions. But it rarely investigates variation in the tendency to violate neoclassical assumptions. Casual empiricism and limited formal evidence suggest that the poor do deviate more. A great deal more could be learned at low cost if new behavioral studies collected information on participants’ income and education to test for heterogeneity. [Citations omitted.]” Analyzing LOTS of factors for correlation to deviation from resource-maximization rationality, not just income, education, and intelligence, could help us understand the circumstances under which life is so painful that we act irrationally.

1. The extreme seriousness of the basic human need for affiliation and belonging is not widely acknowledged, even though data is available to that effect from a wide variety of sources. Kipling Williams’ meta-studies, Ostracism: The Early Detection System and Ostracism: Consequences and Coping are a good place to start to review the literature on the consequences of failed belonging. For instance, Williams explains experiments using Cyberball, an interactive computer game that can be used to give test subjects the impression of being ostracized in a controlled way. He says experimenters have “found strong negative impact on mood and need levels for those participants who were ostracized” in the Cyberball game, and when the experiment was conducted under fMRI, participants “showed significant increases in activity in their anterior cingulate cortexes, where people also show significant activity when enduring physical pain.” Further, he states that “In all of these Cyberball studies, the effects sizes of the ostracism manipulation are over 1.00 (often above 1.50) indicating strong effects, and subsequent meta-analyses indicate it takes only three people per condition to reach standard levels of significance. [Citations omitted.]” See pp. 17-19 of Ostracism: The Early Detection System. What’s especially amazing is that the effect is clearly not rational – it holds even when ostracized participants have been explicitly told that they’re only playing against a computer (NPCs).

Thomas Joiner’s book Why People Die by Suicide (see my review here) is a book-length treatment of an empirically-tested theory of the causes of suicide, and concludes that three factors are the best predictors of suicidality: failed belonging, feelings of burdensomeness, and competence (ability to physically do it). Two of the three factors are measures of failed social affiliation. Other kinds of sadness (including sadness for other reasons and clinical depression) are not very predictive of suicide. And Philippe Rochat’s excellent book Others in Mind details the formation of the human “self” through child development studies and other empirical research, concluding that what he terms the Basic Affiliation Need is not only an extremely critical need, but one that is primordial to, and directly causes, the formation of the self. The need to belong and to have a place in society is not a luxury, but a basic need the absence of which is more painful than prolonged hunger or injury.

2. Yesterday, I overheard two high school girls having a conversation. One revealed to her friend that although she realized it meant giving up one’s life, she could see the upside to a diagnosis of terminal cancer – a kind of peace, and an exemption from the future-oriented unpleasantness we must all endure if we are to be considered socially responsible. “You could just have fun in school,” she said. “I work my ass off every day with work and schoolwork, but if you were going to die anyway, you could just relax. You wouldn’t have to worry.” Her friend agreed, but said she wanted to see what it was like to be an adult anyway. “I’m not sure I do,” said the first little girl. School is generally work, not leisure.

Written by Sister Y

April 27, 2011 at 4:02 pm

Mistakenly Glad


Alex, a vegetarian, is glad to eat the vegetable soup at a restaurant because he mistakenly believes it is made with vegetable broth. Actually, it is made with beef broth. If Alex knew the truth, he would be disgusted. He is mistakenly glad to eat the soup. (This is true regardless of whether he ever finds out the truth.)

Martin is glad to be married. However, he mistakenly believes that his wife is sexually faithful, when in fact, she has been having sex with his business partner for many years. Martin values sexual fidelity such that if he knew the truth, he would be devastated to the degree that he would not be glad to be married. Martin is mistakenly glad to be married.

Emily is glad to have been given a diamond ring, because she believes it came from an ethical source. In fact, the diamond comes from a source that causes significant suffering to innocent people. If she knew the truth, she would be horrified and insulted at receiving the gift. She is mistakenly glad to receive the diamond.

Joyce is glad to have a son. However, she mistakenly believes her son is not murdering people and eating them. In fact, he is murdering people and eating them. If she knew this, she would regret having a son. Joyce is mistakenly glad to have a child.

The most common response people give upon hearing about philanthropic antinatalism is to ask why we haven’t killed ourselves (yet). The second most common, in my estimate, is what I call the “imaginary survey justification” – to assert that most people would be expected to report that they are glad to be alive (imaginary survey), therefore it is a good thing that they are alive, therefore it is a good thing to make new people.

I find this justification problematic not only because the empirical data are imaginary, but because it fails to address the phenomenon of being mistakenly glad. Just as ordinary “gladness” is subject to being mistaken if it is the product of incorrect beliefs, “gladness to be alive” is similarly problematic and subject to factual error. But is there any reason to be particularly worried about this in the context of “gladness to be alive”? Here are a few:

  • From an evolutionary standpoint, it would be incredibly dangerous to “allow” one’s organism to realize that life is not a great deal. We should expect human brains to embrace beliefs that promote gladness to be alive (and other survival-promoting mental states) regardless of their truth.
  • A high percentage of the world’s population is religious. I would suspect many people would subscribe to the statement, “I am happy to be alive because God created me and has a special plan for my life.” Thus, many people’s primary reason for being glad to be alive is patently false.
  • Many people believe in an afterlife. Same issue.
  • A high percentage of the world’s population lacks the capability for the kind of abstract thinking necessary to consider the question and all the prior beliefs one’s purported gladness may be based on.
  • The phenomenon of “meaningfulness” (commonly spoken of in the context of gladness-to-be-alive) seems to be a function of a specific kind of self-deception.

Similarly, more Americans than Europeans or South Americans seem happy to participate in their economic system, despite inequality, because they believe either (a) that the system gives them a “fair chance” at one day having high material wealth and status, or (b) that they personally are likely one day to have high status and material wealth. If it is merely procedural fairness (reason (a)) that motivates them, they are mistaken only if the economic system is in fact unfair. If (b) is the reason, however, the belief is necessarily mistaken for most who hold it: only a small percentage of people will ever achieve high status and material wealth, so a majority expectation of personal future wealth is demonstrably incorrect.
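The arithmetic that makes this kind of belief self-defeating in aggregate is simple and worth making explicit. (The 60% figure below is hypothetical, not survey data.)

```python
def min_share_mistaken(believers, top_tier_size):
    """Lower bound on the fraction of the population whose belief in
    personal future success cannot come true, regardless of who in
    particular ends up succeeding."""
    return max(0.0, believers - top_tier_size)

# Hypothetical numbers: if 60% of people each expect to reach the top
# 10% of wealth/status, then at least 50% of the population holds a
# belief that is guaranteed to be false.
shortfall = min_share_mistaken(0.60, 0.10)
assert abs(shortfall - 0.50) < 1e-9
```

Whenever the share of believers exceeds the size of the tier they expect to occupy, the excess must be mistaken; no facts about who succeeds can rescue them all.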

Exploiting other people’s false beliefs in a way that harms them is, ya know, fraud.

Written by Sister Y

March 10, 2011 at 10:31 pm

The Empirical Nature of “Meaning”


A body of research suggests that the subjective experience of “meaning” is a response to one’s becoming aware of negative wellbeing.

Put another way, the phenomenon of meaning is reducible to a psychological response to suffering – suffering that cannot, for some reason, be remedied in the outside, extrapsychological world.

Studies have reported for years that parents report less happiness than those without children. However, some studies have shown an allegedly counterbalancing feature: parents report that their lives are more “meaningful” than do non-parents. So parents trade off happiness for meaning; seems rational.

That’s not the whole picture, though. An ultra-recent study finds that meaningfulness is a function not just of parenthood, but of how much parenthood sucks. “Parents who had the high costs of children in mind were much more likely to say that they enjoyed spending time with their children, and they also anticipated spending more leisure time with their kids,” say the study authors. When children still had economic value – were still a “good deal,” we might say – parents had no need of a subjective sense of the meaningfulness of parenting. “As the value of children has diminished, and the costs have escalated, the belief that parenthood is emotionally rewarding has gained currency. In that sense, the myth of parental joy is a modern psychological phenomenon,” the authors continue.

This same phenomenon is at work regarding hazing and group membership. The more suffering one endures in becoming part of a group, the more one subjectively values the group – whether it’s an American street gang or a Japanese university. Suffering is a quantifiable predictor of the subjective experience of meaning.

Similarly, having a crappy life (low SES) is a good predictor of religiosity. Religion is a technology that allows suffering people in a very shitty, unfair situation to continue living and producing children – in the interest of nature, but against their own interests.

The Book of Job is an example of this technology. It demonstrates a way to respond to the uncompensated sufferings of life: love God (the system) even more. Find “meaning.” Even the author of Job is disingenuous, though; he posits a little reward at the end for the ever-loyal, meaning-finding Job. As Ted Chiang shows in his story “Hell is the Absence of God” (and says explicitly in his story notes),

It seems to me that the Book of Job lacks the courage of its convictions: If the author were really committed to the idea that virtue isn’t always rewarded, shouldn’t the book have ended with Job still bereft of everything?

The Book of Job is not logical or consistent; instead, it demonstrates a fitness-promoting response to unrecompensable suffering, and, on another level, promises that this response will ultimately be rewarded on a non-psychological level. Therefore, I think the Book of Job is evil on two levels: it coaxes people into an irrational psychological response that lets them ignore unfairness, and at the same time promises them that this defense mechanism will eventually get them compensated in the manner they really care about. It’s like the con artist who preys on victims of his previous cons, promising to get their money back.

My predictions from this meaning-as-quantifiable-response-to-suffering theory:

  • Women who experience horrific body changes from pregnancy will report finding more meaning in childrearing than those whose bodies are less affected.
  • Women whose husbands leave them shortly after the birth of a baby will report finding more meaning in childrearing than matched controls whose husbands do not leave them.
  • This need not be limited to personal suffering of the parents. Parents whose children suffer major birth defects or illness will report more meaning in childrearing and in the child’s life than matched controls whose children are healthy.
  • A sudden drop in SES will be a good predictor of the adoption of evangelical Christianity.

Written by Sister Y

March 6, 2011 at 12:44 am

Study: The Source of Parental Joy is Self-Deception

The more it sucks to have children, the more parents tell themselves how great it is:

Parents rationalize the economic cost of children by exaggerating their parental joy

Any parent can tell you that raising a child is emotionally and intellectually draining. Despite their tales of professional sacrifice, financial hardship, and declines in marital satisfaction, many parents continue to insist that their children are an essential source of happiness and fulfillment in their lives. A new study published in Psychological Science, a journal of the Association for Psychological Science, suggests that parents create rosy pictures of parental joy as a way to justify the huge investment that kids require.

The study found that the more parents were primed to think about the realistic drawbacks of parenting, the more those parents felt conflicted and bad about parenthood. But when given an opportunity to idealize parenting, they gladly took it – and the negative feelings disappeared. Parents primed with a more balanced view of parenthood were less likely to feel conflicted or negative about parenting, and less likely to idealize.

Parents reminded of how bad parenting really is actually predicted that they would spend more of their leisure time with their children in an upcoming weekend than did matched controls primed with a more balanced view of parenting!

From the press release:

Eibach and Mock put their findings into a historical perspective: In an earlier time, kids actually had economic value; they worked on farms or brought home paychecks, and they didn’t cost that much. Not coincidentally, emotional relationships between parents and children were less affectionate back then. As the value of children has diminished, and the costs have escalated, the belief that parenthood is emotionally rewarding has gained currency. In that sense, the myth of parental joy is a modern psychological phenomenon. [Emphasis mine.]

Thanks Rob!

Written by Sister Y

March 4, 2011 at 10:52 pm

Theories of Punishment

with 2 comments

Suicide is the only action that is not a crime that may be prevented by force.

Criminal justice is the formal practice of preventing and punishing proscribed behaviors.

There are five generally recognized theories of punishment, in criminal justice terms:

  • General deterrence means making an example of a criminal so that the population at large will be deterred from committing a crime.
  • Specific deterrence refers to punishing an individual criminal so that he or she will “think twice” and be deterred from committing a crime in the future.
  • Incapacitation means isolating and/or restraining a criminal so that he or she will not be able to commit a crime for the duration of the incapacitation.
  • Rehabilitation refers to providing assistance to a criminal so that he or she will not want or need to commit a crime in the future.
  • Retribution involves taking revenge on a criminal for the crime that he or she committed.

Deterrence, incapacitation, and rehabilitation models aim to prevent crime. Deterrence and rehabilitation models operate on the criminal’s mind, whereas the incapacitation model operates only on his body.

Suicidality is often considered to be a mental illness, properly within the purview of medicine; however, the interventions commonly undertaken in cases of suicidality demonstrate that suicide is, in practice, treated under the criminal justice model.

The key feature of suicide: it is the only action that is not a crime that may be prevented by force.[1]

The prevention of suicide generally takes punitive, rather than medical, form. Generally, the methods used are incapacitative:

Because [preventing a determined person from committing suicide] is impossible, psychiatrists enjoy (if that is the right word) virtually unlimited professional discretion to employ the most destructive suicide-prevention measures imaginable, provided the measures are called “treatments.” The authoritative American Handbook of Psychiatry (1959 edition) endorsed lobotomy “for patients who are threatened with disability or suicide and for whom no other method seems likely to relieve or restore them.” In the 1974 edition, lobotomy was replaced by electroshock treatment administered in sufficient doses to destroy the subject’s will to kill himself: “[W]e do advocate its initial use for one type of patient, the agitated patient, often middle-aged and usually a man, who presents frank suicidal intention. We give ECT [electroconvulsive therapy] to such a patient . . . daily until mental confusion supervenes and reduces the ability of the patient to carry out his suicidal drive.” Thomas Szasz, Fatal Freedom: The Ethics and Politics of Suicide, pp. 56-57 (citations omitted). [Emphasis mine.]

However, the methods used are often so obviously unpleasant that they fall under the deterrence models as well – if not the retributive model!

In the Army, anyone reporting suicidal ideation is made to wear a bright orange vest and rubber bands in place of his shoelaces – not to mention being watched 24/7 by a “buddy.” As reported by Elspeth Reeve:

Suicide watch (also called unit watch, buddy watch, or command interest profile) is how the Army deals with soldiers in garrison who express suicidal thoughts but don’t appear to be in immediate danger of harming themselves. It’s been around in some form since the 1980s, and generally involves a suicidal soldier being watched by one or two fellow soldiers around the clock, and having his gun, shoelaces, and belt taken away, so he can’t kill himself.

. . . . “You’re in an isolated state,” [a recruit who was under suicide watch] says. The orange vest makes you a pariah. “You’ve got the reason you’re on suicide watch to begin with on top of the fact that you stick out like a sore thumb,” he says. “It’s like you’re walking around in a zoo, and you’re the animal.”

. . . . The purpose of the vest is, ostensibly, to make it easy for others to keep an eye on a suicidal soldier, but forcing a soldier to advertise his own depression creates a powerful stigma. “When you see what happens to someone on suicide watch—the orange vest, the trips to the chaplain, the drill sergeant talking about them when they’re not there, saying they can’t handle the military. … When you see that, you’re going to think twice about speaking up and saying you need some help. It makes you not want to talk to someone. You don’t want to be like that guy,” the recruit from Benning says. [Emphasis mine.]

The Army’s treatment of suicidality is clearly punitive. Indeed, there is a strong incentive for soldiers to express insincere suicidality – namely, removal from combat duty. This would make it seem rational for the Army to institute counterincentives (conceding, implicitly, that suicidal behavior is rational in that it responds to incentives). But, as Reeve indicates, the punishment also dissuades genuinely suicidal soldiers from disclosing their suicidal ideation.

At any rate, the “treatment” is clearly not rehabilitative, but punitive. General and specific deterrence are at work here, as well as incapacitation.

Similarly, from prisons to mental hospitals, disgusting and punitive “interventions” are used to prevent suicide. This is “mental health treatment” only in the crudest, most obsolete behaviorist sense. Humiliating heavy dresses/smocks, presumably worn without underwear, are placed on male and female prisoners (of hospitals and prisons) to prevent them from committing suicide.[2] Again, general and specific deterrence are operative, as well as incapacitation. The smock is awful and undesirable, in addition to preventing one from enacting one’s suicidal wishes.

If suicide is a symptom of a mental illness, though, wouldn’t the distress be treated – not the action? People with trichotillomania do not have their hands forcibly restrained from touching their heads. Rather, the distressing compulsion to pull one’s hair is treated – and that only if it distresses the patient in the first place. In the case of suicide, however, the distress of everyone except that of the suicidal person is considered. If suicidal ideation does not cause one marked distress, why is it a mental illness?

The truth is that, despite the ostensible decriminalization of suicide, modern society still approaches suicide under a criminal model. The extreme position of Justice Scalia is, unfortunately, the one tacitly held by our government in general:

“At common law in England, a suicide – defined as one who “deliberately puts an end to his own existence, or commits any unlawful malicious act, the consequence of which is his own death,” 4 W. Blackstone, Commentaries *189 – was criminally liable. Ibid. Although the States abolished the penalties imposed by the common law (i.e., forfeiture and ignominious burial), they did so to spare the innocent family, and not to legitimize the act.” Cruzan v. Director, MDH, 497 U.S. 261 (1990).

Thanks Rob Sica.

1. I realize it may be necessary to distinguish civil injunctions, and civil contempt actions, here. Civil injunctions are ordered only in the case of irreparable harm to others. And, to be punished – by fine or jail – a contempt action must be proved beyond a reasonable doubt. Neither of these safeguards is in place in the case of suicide. And, to be clear, civil injunctions are very much an exceptional case; money damages are by far the preferred remedy, when they are at all applicable.

2. Gawker says, “It’s weird these models don’t get more work! They are really selling the look. ‘Show me ‘I sure wish I could kill myself but this smock is impossible to rip into strangle-friendly strips’! Perfect.'”

Written by Sister Y

May 16, 2009 at 3:57 am