Literary Politick

Ever curious

The Ice-Bucket Challenge and Giving in America


This week my Facebook feed has been bombarded with friends doing the Ice-Bucket Challenge. I've seen some pretty interesting adaptations (including a traveling friend who had no bucket, so jumped off a 60-foot cliff into the ocean instead – kudos Rob). I've also seen a number of what some people might call "haters" questioning the merits of the whole endeavor. With feelings running strong on both sides, I couldn't help but join the fray. What should we make of this fad?

The Results

I think the first thing we should notice is just that: it's a fad. That said, it's a fad I wish I had thought of. Working for a small non-profit myself, I would be the hero of the century if I could increase our revenue by such a huge percentage with a random viral challenge. So let's recognize up front: there is absolutely nothing wrong with using crazy marketing stunts like this to raise money for a worthy cause.

With the exception of the many who seem to have trouble dumping water on themselves, there is no harm in the challenge, and it will hopefully do significant good for ALS research.

So before I move on, I want to emphasize the importance of the outcome. The outcome is very positive, so from that perspective, the ice bucket challenge is great. And I applaud anyone who has participated with the goal simply of increasing funds going to this research.

Everything Else

But regardless of the outcome, why has this particular endeavor been so successful? And what does the answer to that question tell us about American society? Anything that goes viral tells us something about the deep desires of the society hosting the “virus.” What is this “virus” exploiting so effectively in our society?

A Google search tells me that, among all taxpayers (not just those who itemize tax deductions), charitable giving in the US averaged between 2 and 2.5% of income in 2008 for all those with incomes under $500,000 (the most recent data I could find covering this entire population). With a median US income of $51,000, 2% amounts to just over $1,000 per year given to charitable causes, including religious organizations. Why do we spend so little on causes we claim to care about, when we are willing to spend so much of our income on frivolity? (Consider the hordes of low-salaried young people who will spend $100-200 on alcohol in a single night, multiple times per month.)

The rather obvious fact is, people want to spend money on themselves, regardless of what they say about their beliefs or goals. That is why it takes a gimmick to bring out donations in any sizable amount from the population at large. That doesn’t make the ice bucket challenge bad; it makes it savvy.

But there have been other gimmicks. Why has this particular gimmick been so effective?

Some of the response is chance – the right influencer dumps water on his head at the right time, and it takes off. But it never would have gone viral if not for the fact that we all want to appear as good, generous people online. No one is going to applaud my generosity—or my well-apportioned swimsuit physique—if I am asked to give money for ALS research and I just do it, privately.

An immediate clarification is needed – I know that many people who are participating in the challenge are extremely generous, and give regularly to all sorts of good causes. But even if the truly generous join in, the reason something like this goes viral is, sadly, appearances. Requests to privately increase support for any cause will never be as effective as requests to increase support for a cause that also bolsters your image, even if the generous donate in both instances.

But I would argue that private giving is what counts, especially because it is often more durable, not being motivated by social performance. Very few of those who give $10 for ALS research will continue supporting the cause on an ongoing basis, simply because of the lack of any continuing social payoff. But sustainable change only happens with sustainable support, most of which does not afford the opportunity for Facebook posting.

The wild popularity of the ice bucket challenge is sad evidence that as a society, we have missed the whole point of generosity, which is not really generosity if the goal is self-aggrandizement. There is a good reason that the ideal of Christian generosity is captured in Jesus’ command to not let your left hand know what your right hand is doing when you give to the poor.

Wait – I can’t see your abs!

My Challenge to You

The world is full of problems, and the world is full of people trying to solve those problems. Making any sort of sustainable impact on those problems requires more than generosity that appears only when a viral trend demands it.

So rather than challenging you to dump water on your head and give two Starbucks lattes' worth of money to ALS research, I challenge you instead to pick an organization that you think is making a lasting, positive difference in the world, and commit to support it for at least a year, with at least $100 per month, or whatever amount makes it hurt just a little.

And don’t tell me about it.


Written by jonathanwaldroup

August 21, 2014 at 9:14 pm

The Problems of GDP (and what Inigo Montoya has to say about them)


Inigo Montoya, of The Princess Bride, was a very wise man. Take a moment and revel in his genius:

[Video: Inigo Montoya's "You keep using that word" scene]

If only he knew how widely his wisdom could be applied. In this case, the word that we do not properly understand (or rather, the acronym) is GDP.

Nearly everyone in the US reflexively considers an increase in GDP to be a good thing. It is understandable why this association exists: most people do think of themselves as better off when they have more income, which GDP purports to measure on a national level (roughly), and generally when the country as a whole is doing better, people assume they have a better chance of personal success. Also, to the extent that we find pleasure in our nation’s wellbeing, we are made better off by the collective improvement of income, regardless of the personal outcome. And in many countries of the world, GDP growth plays an important role in helping reduce poverty, which is certainly positive.

But the positive associations with GDP growth are so firmly baked into our cultural consciousness that most people judge not only our nation's economic health by the measure, but also our overall wellbeing. If GDP is up, we are doing well as a country; we have succeeded. If it's down, someone needs to be kicked out of office. You will struggle to find a politician who has anything negative to say about GDP growth, ever.

However, there are many reasons to believe that increasing GDP does not necessarily improve our country on the whole. At this point we could take this discussion in a number of directions. One problem might be if GDP growth is funneled to an elite few, which would increase inequality and perhaps fuel political instability. Some would say this is exactly what is beginning to happen in the US, and indeed, income and wealth inequality have increased dramatically over the past few decades. Check out this video on wealth inequality and this chart of income inequality in the US.

Another reason might be that as a society, we don’t actually need more income per se, having reached a level of affluence such that other non-financial goods should be emphasized more than pure income (such as the cohesion of families, good health, strong communities, the enjoyment of the arts and nature, etc.). John Kenneth Galbraith, in his 1958 classic The Affluent Society, makes a similar (though not identical) point even at a time when the US was far from its current level of aggregate wealth.

This last point touches on what I see as the fundamental problem (which is also discussed by Galbraith): GDP is a very poor measure of wellbeing. But our society as a whole has co-opted GDP—which was never intended to be more than an indicator of economic activity—to serve a role for which it is not useful. That is why I got excited when I heard that the Bureau of Economic Analysis was going to change the way GDP was measured.

The new methodology now includes expenditures on research and development and creative work, among other things, adding perhaps 2% to annual GDP (quite a lot). And while this is a welcome change, the improvements unfortunately don't go nearly far enough. Specifically, there are three mammoth problems with the measurement of GDP that still need to be tackled:

1) For GDP, there is no such thing as a bad transaction (unless it is illegal). GDP adds up all the money that changes hands (that it can track), but it doesn’t matter if these transactions are beneficial for society or not. This is most visible after a natural disaster – because GDP only measures money changing hands, reconstruction efforts after disasters often prove to be a boon to GDP. Obviously natural disasters are terrible for the communities affected, but GDP often shows the opposite. Another example would be the GDP generated from the ever-expanding divorce industry (think lawyer fees, the need for two homes where previously one was sufficient, etc.), though I think few people would actually believe divorce is good for our country.

2) GDP ignores the informal and volunteer economy. In particular, it ignores the work done by stay-at-home parents and volunteering of all kinds. These activities save money for the families involved and also often contribute positive social value by building up strong households and communities. And yet, GDP ignores them. In fact, the more people contribute to such activities, the lower GDP will become, since that will mean people are not paying money for childcare, housecleaning, etc. Plus, any positive social contribution is also bad for GDP, since it undoubtedly harms those industries that cater to disintegrating families and communities (especially lawyers).

3) GDP ignores the value of the environment. This, in my mind, is one of the most significant failings of GDP. For centuries the value of natural resources and services has been taken for granted because we did not think humanity could significantly reshape global ecology. Unfortunately, we are discovering that we can (and do) have such an impact. GDP takes no account of how we are harming the environment's ability to deal with our waste, provide our food, clean our water, make our oxygen, etc. Perhaps even more problematic (and clearly ridiculous), GDP does not take into consideration the fact that we are depleting natural resources. If we drill an oil well and empty it, this shows up only as income, with no accounting for the fact that the natural resource is gone (i.e., our wealth is reduced). This is like someone draining their retirement account at age 65 to splurge on a lavish Caribbean vacation and thinking they have done themselves no harm – after all, that was a really great resort! Obviously, using our natural resources has costs attached, especially if the resources are non-renewable.
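To make the depletion problem concrete, here is a toy calculation of my own (the numbers are invented purely for illustration; the adjustment shown is just the general idea behind "green" accounting, not any official methodology):

```python
# Toy numbers, invented for illustration -- not real data.
consumption = 800        # all other spending in our toy economy
oil_revenue = 200        # income from emptying an oil well this year
oil_depleted = 180       # market value of the oil removed from the ground

gdp = consumption + oil_revenue   # GDP counts the oil sale as pure income
adjusted = gdp - oil_depleted     # a depletion-adjusted measure subtracts the lost wealth

print(f"Conventional GDP:    {gdp}")       # 1000 -- looks like a boom
print(f"Depletion-adjusted:  {adjusted}")  # 820  -- much of the "income" was wealth drawn down
```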

Simon Kuznets, who developed the system of national accounts that includes GDP in the 1930s, knew better than most the limits of GDP. In 1934, Kuznets clearly articulated the need to keep GDP out of the business of measuring overall wellbeing: “The welfare of a nation can, therefore, scarcely be inferred from a measurement of national income as defined above.”

It’s not GDP’s fault, really – we have forced it to take more responsibility than it can shoulder. But even though there are plenty of separate indicators to tell us how we are doing as a country on things like health and happiness, the emotional and political weight attached to GDP is so strong that we must work to change it. Otherwise we will continue to be swayed in negative ways by a number that doesn’t tell us what we want it to. So next time someone uses GDP as a proxy for wellbeing, remember the wise words of Inigo Montoya: “You keep using that word. I do not think it means what you think it means.”

Further reading:

- My two main sources (mainly on the history of GDP and its failings): “If GDP is up, why is America Down?” and “Our Phony Economy”

- Alternatives to GDP: a Google search is very effective, but here is one link with links to a variety of alternatives.

Essay on Climate Change and Poverty Reduction


It has been many moons since I last posted. My apologies. I hope to be posting more regularly soon.

But in the meantime, I won an essay contest and finally have my work published by a legit, external party! Check out my essay on climate change and poverty reduction (it’s short): http://www.brettonwoods.org/sites/default/files/documents/Henry_Owen_Award_Essay_Waldroup_1stPlace.pdf

(Here is an explanation of the contest: http://www.brettonwoods.org/article/inaugural-owen-award-celebrates-graduate-students)

Written by jonathanwaldroup

May 24, 2013 at 4:58 pm

Why you should ignore Strunk and White (and other writing advice)…mostly


The Elements of Style, by Will Strunk and E.B. White, has been the Bible of English style for two generations. The book provides all sorts of advice, including the now famous dictum "Omit needless words!" as well as the rule of adding 's to singular nouns to make them possessive, with the exceptions of Jesus, Moses, and other ancient names, which only warrant an apostrophe. While several of the book's points are well-founded, it also has more than its share of absurdities. And having recently come across several other lists of writing tips from famous authors, I find that the majority of them are full to the brim with ridiculous rules and suggestions.

Will Strunk – not so feisty in black and white

So last night I went looking for my copy of Strunk and White in order to denounce it. To my great consternation, my copy of this little book had been stolen! My wife apparently took it to school, where, as an English teacher, she supposedly has some use for it. I am deeply skeptical. (She claims to have had it at school for a year. Apart from my skepticism, she is also violating the role of a private library as a working tool, as outlined in a previous post.) But not to worry, I have several other works that refer to the book extensively, so there is still plenty of material for my diatribe.

The problem with many of these style guides is that "good" style changes dramatically with time. As languages change, so too do the expectations of readers and the way in which words are best able to communicate their intended purpose. What one style guide said was best in 1959 (when the Strunk and White edition of The Elements of Style first appeared) may not apply today. That said, many of the most basic suggestions from Strunk and White are still worth following:

- "Omit needless words."

- "Do not join independent clauses by a comma."

- "Use the active voice."

These are all legitimate pieces of advice that are likely to improve the readability of any text. But other rules come across as attempts to create laws out of pet peeves:

- Spell out dates in quotations (e.g., "August ninth") but not when the author is using a date (August 9th)

- Use "which" for nonrestrictive clauses and "that" for restrictive clauses

- Only use "hopefully" to mean "in a hopeful way" rather than the more common usage

- Do not use "claim" as a substitute for declare, maintain, or charge

- Do not use "due to" to mean "through, because of, or owing to"

Such rules are utter nonsense, and are based entirely on a prescriptivist desire to keep language from changing, to maintain “purity.” These types of rules are all predicated on the understanding that whenever the rules are being written down (say, in 1959), that era’s language is the way the language ought to be. But since language is constantly changing, any such claim is absurd, for there was always an earlier, “purer” language from which the current manifestation evolved. While some standards tend to last through the ages, such as concision, petty proscriptions of definitions which have crept into new areas, such as with “claim” or “due to” above, are bound to be ignored by the masses as the new definitions become part of the next generation’s idea of “pure” language.

The wonderful website Brain Pickings recently posted some other writing tips from long-dead writers. Here are selections from the twenty most common writing mistakes in the eyes of H.P. Lovecraft, sci-fi and fantasy author, writing in 1920:

- "Barbarous compound nouns, as viewpoint or upkeep."

- "Use of nouns for verbs, as 'he motored to Boston,' or 'he voiced a protest.'"

- "Errors in moods and tenses of verbs, as 'If I was he, I should do otherwise,' or 'He said the earth was round.'"

- "False verb-forms, as 'I pled with him.'"

- "Use of words in wrong senses, as 'The book greatly intrigued me,' 'Leave me take this,' 'He was obsessed with the idea,' or 'He is a meticulous writer.'"

I too would call "viewpoint" barbarous. Only pirates use that word, in my experience. Of course, we say many of the other things forbidden in this list, and no one in his right mind would correct them. In fact, the dictionary now even includes "pled" as a perfectly endorsed past tense of "to plead." Were he still alive today (check out that subjunctive, Lovecraft!), I bet Lovecraft would start a petition to undo the deleterious effects of google's entry into the dictionary as a verb.

Some of the most famous stylistic rules are in fact vestiges of other languages that need not apply in English at all. For instance, the injunction against split infinitives, as "to quickly run" or "to fervently believe," is based on a former incarnation of the idea of a "pure" language. Back in the day, many believed Latin to be the perfect language. In Latin it is impossible to put anything in between the two components of an infinitive ("to" and "run," for instance) because Latin infinitives are a single word ("to run" in Latin is "currere"). Great English writers have ignored this rule in every century since the 1300s.

Another oft-cited rule is to never end a sentence with a preposition. John Dryden, in 1672, apparently criticized Shakespeare-contemporary Ben Jonson for such a sin, and it has been brought up in every generation since then. Once again, this construction is impossible in Latin, and Dryden was known to first write in Latin, as he thought it the superior language, and then translate into English, explaining his opposition to dangling prepositions. But there is no sensible reason to follow such a rule, and, again, great writers and even very large newspapers like the New York Times have routinely ignored this rule.

The list of erroneous rules and suggestions for writing could go on and on. In every age, writers think that their own writing is the height of language, but every new age heralds new feats of language and writing that add new dimensions to the English repertoire. In their better moments, many of the same rule-wielding writers quoted above still saw the larger picture. So after having bashed their many pieces of bad advice, here are a few sage comments worth considering:

“Style rules of this sort are, of course, somewhat a matter of individual preference, and even the established rules of grammar are open to challenge. Professor Strunk, although one of the most inflexible and choosy of men, was quick to acknowledge the fallacy of inflexibility and the danger of doctrine.”

- E.B. White, in one of his better moments – Essays of E.B. White, 325

“All attempts at gaining literary polish must begin with judicious reading, and the learner must never cease to hold this phase uppermost. In many cases, the usage of good authors will be found a more effective guide than any amount of precept. A page of Addison or of Irving will teach more of style than a whole manual of rules, whilst a story of Poe’s will impress upon the mind a more vivid notion of powerful and correct description and narration than will ten dry chapters of a bulky textbook.”

- H.P. Lovecraft, from the article on Brain Pickings

"If there is a magic in story writing, and I am convinced there is, no one has ever been able to reduce it to a recipe that can be passed from one person to another. The formula seems to lie solely in the aching urge of the writer to convey something he feels important to the reader. If the writer has that urge, he may sometimes, but by no means always, find the way to do it. You must perceive the excellence that makes a good story good or the errors that makes it a bad story. For a bad story is only an ineffective story."

- John Steinbeck, from another article on Brain Pickings

James Bond Now and Then: Three Societal Changes from Dr. No to Skyfall


Flipping through the channels over Christmas break, I stumbled upon Dr. No, the first James Bond film, from 1962. Never having seen the film, and intrigued by its antique feel, I ended up watching a sizable chunk of it. It was immediately clear that this early Bond was very different from his more recent resurrections. To check the latest evidence (and for fun), last week my wife and I finally saw the most recent Bond installment, Skyfall. Comparing the two films, some differences were obvious – the awkwardly moving backgrounds when Bond is driving, for instance. But there were a number of more subtle, more significant changes as well, mirroring fundamental ways our society has changed since the 1960s. Here I will focus on three themes: the perception of women, the use of technology, and the notion of heroism.

The Perception of Women

If there is one theme around which James Bond is constantly criticized, it is the way he treats women. I cannot help but agree that Bond has consistently used and abused women, treating them as playthings which can be thrown away at a moment's notice. In the most recent Bond movie, this trend is still noticeable, and in some ways, his treatment of a character whom he knows to be a sex slave may be a new low in the Bond world.

That said, and I admit it is a rather large caveat, the depiction of women in Bond films has nonetheless improved in other ways since Dr. No. First and foremost, women have much more volition in the new films. One of the most interesting things I noticed watching Dr. No is how often Bond grabs the wrists of the women he is interacting with. Bond's interactions with the first Bond girl, Honey Ryder (played by Ursula Andress), are consistently of this nature. For instance, I remember a scene where they are on a beach (shortly after Andress's character is introduced) and need to run away quickly. Bond grabs Andress's wrist and they run the length of the beach in that pose, as if the woman were incapable of realizing that running away from bad guys with guns was a prudent idea. Couldn't Bond just have said "follow me" or grabbed her hand instead?

No, both of these would have given the woman too much volition, more than was good for her in those days. Today, while Bond is clearly still the dominant character, many female characters, especially in the movies starring Daniel Craig, are portrayed as strong and willful. They have their own goals and aspirations that they tend to carry out on their own (especially in the recent Quantum of Solace). Bond is no longer in the habit of grabbing wrists and showing other overt forms of domination. In fact, Bond is shot and (ostensibly) killed by a woman at the very beginning of the latest movie.

What? A man?

Furthermore, it should certainly be pointed out that casting the character “M” as a woman for the past two decades was a huge step up for the depiction of women. After all, M is officially superior to Bond and had always been cast as a man before Judi Dench stepped in. Although (spoiler alert!) what does it say that M will be played by a man going forward? Until the issue of using women purely for sex is resolved, the Bond critics will still have plenty of ammo. But while there is still room for improvement, the depiction of women has improved in many subtle ways since the days of Dr. No.

The Use of Technology

Guns, explosions, and epic chase scenes have always played an important role in Bond movies. But the use of technology in recent movies has changed dramatically, most of all in Skyfall. In this movie, Bond is given a gun and a radio by Q, head tech wizard. That’s it. No bazookas. No cars that can launch guided missiles or turn into submarines. In many ways, this use of technology has actually come full circle. In Dr. No, there were some decidedly low-tech death mechanisms. I mean, really, who sends a spider to kill James Bond? Utterly outrageous. Unless, of course, the spider has a laser mounted to his head and doubles as a land mine. Unfortunately, Sean Connery did not have to deal with either of these possibilities. He just had to wake up and smash the spider. Add the spider to the ridiculous dragon-tank in Dr. No and the tech bonanza of the 1990s and 2000s looks positively magical.

But today we have reverted to those low-tech days. Sure there are still explosions, but only when explosions would actually happen. (As a side note, check out my favorite movie explosion from Steve Martin’s Pink Panther. Biker + fruit stand = explosion!!!)

Fruit stands are known to spontaneously combust.

In fact, in Skyfall the filmmakers seem to make a special point of exhibiting just how low-tech Bond has become, with only his gun and his radio. Q points out that “we don’t go for that sort of thing anymore,” referring to the glitzy gadgets of yore.

Bond is not amused.

Nor are most people in Generation Y. In the past, technology was more of an aspiration than a reality. Society longed for that next cool product that would magically ease its troubles. Those gizmos continued to appear – refrigerators, microwaves, cassette tapes, CDs, computers, etc. But Generation Y has grown up with all of this and more – specifically the internet and all of the Apple products that seem to be beyond this world. Technology is no longer aspirational; it is embedded very deeply in everyday life. We can no longer think of life without smartphones and mp3s and high definition and wireless internet. They are integral to our understanding of reality.

This embeddedness of technology is so strong that we are also much harder to impress. New gadgets appear every day, and very few of them are actually radically new. Our expectations are too high. Accordingly, Bond has lost all of the fancy gizmos – very little the filmmakers could create would make much of a difference to us anyway. But what does make a difference is the human ability to control situations as Bond does. He still uses cool technology from time to time, but he impresses us with his finesse, his suavity, and his ability to incorporate technology into his strategy for defeating the enemy. No longer do people go to Bond movies to see the latest in technology. Apple and Google do that for us. Now they actually go to see Bond.

And that brings me to my final point.

The Nature of Heroism

Bond is still a hero, but he is a more complicated hero than he used to be. Until the latest Daniel Craig version, Bond was merely sexy, suave, and extremely lucky. Now Bond is all those things and more: frail, flawed, emotional, complex(ish). That is to say, Bond is becoming a real character, god forbid.

In Casino Royale, this new Bond begins to make an appearance. That movie, more than any other recent Bond movie, focuses on Bond’s ability to outwit his opponents rather than just outgun them. The movie is slower to develop and has more of a dark tinge to it as well – so much so that some people were turned off by the lack of normal Bond action. (I thought it was terrific.) In Quantum of Solace the new Bond adds some emotional depth as he seeks revenge after being betrayed.

In Skyfall, though, the new Bond truly arrives. He gets shot and goes off the grid for months, depressed, fallen, resorting to drink and silly spectacles with scorpions to retain his manhood. He returns when MI6 is bombed – grizzled, incapable of controlling his emotions, completely un-suave. On his first mission to gain information from an assassin, he fails miserably, both in terms of physical performance and at achieving his goal. He is only spared by that fortuitous Bond luck.

We finally get a glimpse of Bond’s past in the movie too, and we see it is dark – dark enough that he finds considerable pleasure in literally burning down his past. He even exhibits some emotion for the dying M. But the key is that he overcomes all of these things. He gets past his frailty; he overcomes his tragic past; he masters his emotions.

This is much more akin to the nature of heroism we see in most good literature. Heroes are not flat and unidimensional, born of greatness and living in greatness. That was the old Bond, who was always in control, always suave, never frail, never hurting. And certainly never emotional. But in an age where terrorism lurks in the recesses of our minds and financial markets crash on a semi-regular basis and no one seems to know what to do, the age of the superman has come to an end. We no longer want our heroes to reflect utter dominance in all situations because we no longer feel dominant in all situations (as we, especially in America, did for much of the 20th century). We need a hero who makes mistakes, who is haunted by the past, but who can still triumph over adversity. Bond is slowly becoming that type of hero.

Keep it up.

Written by jonathanwaldroup

January 10, 2013 at 8:18 pm

Guns, Violence, and International Evidence: More Guns, More Crime


The tragedy at Sandy Hook Elementary is still fresh in our minds, and the debate on what should be done in response, if anything, is at full tilt. There are so many angles on the issue: gun ownership as a protection against tyranny, limiting the freedom of responsible members of society to rein in the harmful few, gun ownership as a crime deterrent or enabler, to name a few. I have talked about the first item in previous blogs (most recently concerning the Arab Spring and previously with respect to Gabrielle Giffords), arguing that guns no longer serve as any sort of protection against tyranny.

The second issue—limiting the rights of all to prevent the unstable few from committing atrocities—can be addressed rather quickly. We already do this in many areas of life, including weaponry. We do not allow people to buy bazookas, tanks, jets, missiles, etc., despite the fact that this is limiting the freedom of people who mean no harm, because the weapons are capable of causing such vast devastation. On a more mundane level, we limit the speed of cars on the road because of the safety hazard they pose, and we require people to wear seat belts (in most places) because it saves lives. In all of these cases we have chosen to forego some liberties to promote the greater good (saved lives). This is the nature of the social contract – give up some rights to gain greater benefits.

This simple acknowledgement of the way modern states function shows the absurdity of the “guns don’t kill people; people do” argument. The same could be said of speeding cars or bazookas, and yet we still place limits on such things.

That said, my primary purpose in this blog is to address the last issue in my list: does gun ownership increase or decrease violent crime, especially homicide? I am particularly interested in what gun ownership rates around the world have to do with homicide rates and the common arguments on the topic. To that end, you may have seen this graph floating around the internet recently (or others like it; here’s another one):

[Chart: homicide rate vs. civilian gun ownership, all countries]

This looks pretty convincing. You’ve got some really high homicide rates in countries with low gun ownership, and then much lower homicide rates as you move to the right. But the problem with data-based arguments is that they tend to be accepted without any critical analysis.

Here we have several problems.

First, and most importantly, the graph implies that civilian-owned firearms cause lower homicide rates. Clever economist that he is, Mr. Davies (who produced the graph) knows he can claim no such thing and merely states that countries with higher firearm ownership rates also have lower homicide rates. Just because two variables are correlated does not mean either one causes the other. But the clear implication of causation in producing the graph at all is very misleading on his part.

Second, sometimes it does not make sense to compare every country in the world. Economists are big fans of global data, and sometimes such data can be very helpful, but other times global data actually confuses the matter by making unjust comparisons. This is one of those cases. El Salvador's homicide rate, the highest in this graph, is due almost entirely to gang violence, which has raged for years in that country (here is a recent article about gangs there). The gangs there have access to heavy weaponry, including assault rifles and grenades, which they use to prosecute their conflicts. In such a case, guns for self-protection are not the point – the violence is occurring between people who already have plenty of guns. When gang violence is the key driver of violence, what is needed is stronger law enforcement and rule of law, along with an attempt to address the roots of gang conflict.

Similarly, Ivory Coast has been in and out of civil war since 2000, the main source of its violence. More guns will not solve the problem – violence from wars only stops when one side wins and sets up a legitimate and stable government. In Honduras and Jamaica, drug trafficking and gang violence contribute to the high rates of homicide, again, unrelated to the rate of civilian gun ownership. In fact, in the data I use below, which was trying to approximate the data used for the above graph, 18 of the top 20 countries by homicide rate are in the Caribbean, Central America, or major drug production areas of South America.

When drug cartels and gangs are the main source of violence in a country, normally the bulk of the violence occurs between members of those groups, who already have plenty of guns. More guns for the civilian population who live around the conflict (but are not key players) will not end the violence. That is, if Gang A and Gang B are fighting and already have plenty of guns, arming group C (the civilian population) will not reduce conflict between A and B. Group C just wants to stay out of it. Fighting will continue until the root causes of the conflicts are addressed (e.g., the lack of economic opportunities for gang members, the high price and demand for drugs, etc.).

The problem with all of this is that murder in some countries has very different causes than in others. In wealthy countries like the US, murder is not due to civil war, country-wide gang violence, or massive drug cartel armies battling it out. So comparing such countries with the US and other wealthier countries is completely absurd and tells us nothing significant about gun ownership.

So I set out to look at the data from a different perspective. First, I tried to recreate the data from Mr. Davies’ graph above, using this data, so that I could compare the results with his. The data is not precisely the same, as Davies did not link to his precise sources, but I used similar time frames (2007). The graph I created is below:

[Chart: my recreation – homicide rate vs. civilian gun ownership, all countries, 2007]

It looks very similar. The US is still way to the right. Honduras is up top. My main point here is to show that the data is similar to Mr. Davies’ (I think the other differences are due to fluctuation year to year in some of the worst conflict zones).
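For the curious, here is a minimal sketch of how a plot like this comes together. The file and column names are my own placeholders, not the actual sources I used:

```python
# Placeholder sketch: merge per-country gun ownership and homicide data,
# then scatter-plot one against the other.
import pandas as pd
import matplotlib.pyplot as plt

guns = pd.read_csv("firearm_ownership_2007.csv")   # columns: country, guns_per_100
homicide = pd.read_csv("homicide_rates_2007.csv")  # columns: country, homicides_per_100k

df = guns.merge(homicide, on="country")  # keep only countries present in both datasets

plt.scatter(df["guns_per_100"], df["homicides_per_100k"])
plt.xlabel("Civilian firearms per 100 people")
plt.ylabel("Intentional homicides per 100,000 people")
plt.title("Gun ownership vs. homicide, all countries (2007)")
plt.show()
```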

Then I decided to add GNI per capita (similar to GDP per capita, a simple measure of average income; data from the World Bank) and graph that against homicide. The results are below:

[Chart: homicide rate vs. GNI per capita, all countries]

The thing to notice here is that the graph looks very similar to the plots of firearm ownership and homicide. My point is that it is not hard to create a graph that shows murder rates going down as some other variable goes up. I'm sure we could produce similar graphs with variables like years of education, lifespan, and many others. Except in the case of income and homicide, there is some theoretical reason to think the relationship may be causal. After all, drug cartels and gang violence do not thrive in rich countries, because those countries have stronger law enforcement, better prospects for their citizens (so fewer people turn to gangs), and diversified economies that provide many legal sources of income. Many problems improve as countries get richer. Nonetheless, the relationship looks weak. So now let's consider something else.

[Chart: homicide rate vs. civilian gun ownership, wealthy OECD countries]

Now this is a much different picture, isn't it? This graph shows firearm ownership and homicide rates in OECD countries (a club of rich countries) that also have a GNI per capita of more than $25,000. As you can see, among wealthy nations similar to the US, as firearm ownership increases, so does homicide. This is completely different from the depiction of the data when looking at the whole world, comparing incomparable countries. And of course, that nation on the upper right, with very high firearm ownership and murder rates, is the US. The implication is obvious: even if guns are sometimes used to protect the innocent, any deterrence of homicide is outweighed by an even larger increase in homicides due to the availability of guns.
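Continuing the placeholder sketch from above, the restriction to wealthy OECD countries is a one-line filter (again, the oecd flag and gni_per_capita column are assumed extras merged in from World Bank data, not the actual files):

```python
# Restrict the comparison to countries actually comparable to the US.
import matplotlib.pyplot as plt

rich = df[df["oecd"] & (df["gni_per_capita"] > 25_000)]

plt.scatter(rich["guns_per_100"], rich["homicides_per_100k"])
plt.xlabel("Civilian firearms per 100 people")
plt.ylabel("Intentional homicides per 100,000 people")
plt.title("Gun ownership vs. homicide, wealthy OECD countries")
plt.show()

# A quick correlation check on the restricted sample -- the claim here is
# that it comes out positive, unlike the all-country picture.
print(rich["guns_per_100"].corr(rich["homicides_per_100k"]))
```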

So if anything, we should expect that more guns will lead to more violence, not the other way around.

But, as I have discussed, correlation is not causation. So, in this new case, are the higher gun ownership rates causing higher homicide rates?

This possibility is supported by a number of studies, particularly by David Hemenway, Matthew Miller, and Deborah Azrael. A good summary of some of their main findings (on this topic and related ones) and a few of their articles are cited here: Harvard Injury Control Research Center. There are some fairly obvious candidates for why this relationship may exist. For instance, the availability of a gun may lead to fatal escalation in moments of anger and passion, when otherwise fists or less lethal knives would have been the weapons of choice. Guns may be taken by other members of a household (besides the gun owner) and used contrary to the owner’s will, simply because they are available. Attempts at deterrence by a gun-owner untrained to actually fight in such situations may lead to an even greater use of lethal force by criminals. These are just some of my own hypotheses, but I think the first one is probably the most likely.

My main point in this article is to show that the oft-cited data about guns and crime internationally is often misread. However, I also acknowledge that there are opposing views, most especially by John Lott and his collaborators (one of whom was my former professor, for whom I have immense respect). Lott and Mustard’s original article that started this debate can be found here, but it only deals with the right to carry concealed weapons, not overall ownership, and not in comparison to other countries. Lott’s later book that fleshes out his argument is called More Guns, Less Crime, and he wrote another book later again responding to critics. A good critique of his second book is here. Finally, another good summary of info on gun violence in the US is here. Overall, I think that research in recent years has largely shown that more guns does NOT equal less crime, contrary to Lott.

I do not believe all guns should be banned, nor do I think that such an outcome is at all possible in the US any time soon. But strict controls on who can buy guns and the types of weapons and accessories available for purchase would reduce gun violence in this country (if also combined with efforts to reduce the vast quantity of guns already floating around the US).

As many have pointed out, on the same day as the Newtown shootings, a man attacked young students at an elementary school in China with a knife. While it was also horrible, without the gun the outcome was very different. The man in China wounded 23, but none died. If only that had been the outcome in Connecticut.

The Electoral College – Ridiculous


In 1984, Ronald Reagan absolutely destroyed Walter Mondale in the presidential election. A map from Dave Leip's Election Atlas shows the result well (strangely, it uses blue for Republicans, red for Democrats):

[Map: 1984 electoral results by state]

That’s right, Mondale won a whopping 13 electoral votes. Spectacular.

You might think that practically everyone in the US voted for Reagan. But you would be wrong.

In 1984, Walter Mondale won 41% of the popular vote while receiving only 2% of the Electoral College (EC) vote. Nothing could be further from the representative aspirations held by our nation. 59% of the popular vote commanded 98% of the electoral vote, determining the presidential outcome.

Four times in US history, the person who received the greatest number of popular votes has not become president: in 1824, 1876, 1888, and 2000. Most of us remember the 2000 result, which was decided by the Supreme Court. Fewer people know about the bizarre outcome in 1824. Unlike today, when two parties dominate the polls, the 1824 election pitted four regionally popular men against each other: Andrew Jackson, John Quincy Adams, William Crawford, and Henry Clay. The results came out as follows:

Candidate            Popular Vote    % of Total    EC Vote    % of Total
Andrew Jackson       151,363         43%           99         38%
John Quincy Adams    113,142         32%           84         32%
William Crawford     41,032          12%           41         16%
Henry Clay           47,545          13%           37         14%

Unfortunately, according to the rules of the Electoral College, which are laid out in Article II, Section 1 of our Constitution (plus Amendments 12 and 23), if no single candidate receives at least 50% of the EC vote, then the task of choosing the president falls to the House of Representatives. Since Andrew Jackson only received 38% of the EC vote in 1824, the House chose the president, electing John Quincy Adams instead of the more popular Jackson. (It is important to note that in such a situation, the Representatives cast a single vote for each state, rather than voting by individual person.)

Prior to the 12th Amendment, outcomes could be even worse because the electors from each state did not cast separate ballots for president and vice president. The crazy outcomes of the 1796 and 1800 elections thankfully prompted the 12th Amendment, which rectified this issue (though it left many others unfixed). I will leave you to investigate those cases on your own. Also, check out this video depicting how crazy it gets if there is a tie in the EC!

Original Intent of the Electoral College 

One might wonder why it is that we have this strange system in the first place. Why is it that we don't actually vote for an individual candidate, rather than the candidate's electors? This system was the result of a compromise between competing groups among the Framers. There were those who wanted to have Congress elect the president, and others who wanted a direct popular vote (including James Madison, Father of the Constitution). The most important issue that led to the current set-up, in which each state receives the same number of electors as its number of Senators and Representatives (thus, at least three), was that the small states feared the control that large states would hold over presidential elections (due to their large populations). So the reasoning for the EC system was very similar to the reasoning behind the Senate's construction.

Another argument commonly cited about the reasoning behind the EC system was to avoid the “tyranny of the masses.” In this argument, the Framers basically thought the common folk of their day were ignorant and could be easily duped by a tyrannical but charismatic candidate. Putting a step between the masses and the actual office of president would prevent such a tyrant from coming to power. While I personally think the first reason (small state vs. big state) was the driving force behind the compromise, there are a number of statements from Framers that support this other argument as well.

So are these viable reasons to maintain the Electoral College?

Tyranny of the Masses

Let’s begin with the tyranny of the majority argument. First of all, establishing an elite class which the Framers actually thought was superior to the rest of the citizens is clearly not democratic. This is totally out of whack with the principles we espouse today, despite the fact that the Framers almost certainly did think of themselves as superior to everyone else around them (what aristocrats don’t?).

But more importantly, the current EC system as well as election laws effectively eradicate the ability to prevent the tyranny of the masses, if such an outcome were indeed preventable in the first place. First, the argument is premised on the notion that if the citizens at large somehow voted for a tyrant, the electors, wise and exalted, would realize our error and correct it by voting for non-tyrannical types. However, electors very seldom cast their ballots for candidates other than those they pledge to vote for up front. There is no reason to believe that the parties would ever intentionally select an elector who would vote for someone other than the candidate in question (surely a future tyrant would be wise enough to pick electors who were loyal to him).

Also, these days many states have laws that punish those who contravene their pledges. Such electors are called “faithless” and can be punished with fines in a number of states. In New Mexico faithlessness is actually a felony. In all of US history there have been 150 faithless ballots cast. Of these, 71 were due to the death of a candidate (Horace Greeley—he of “go west, young man” fame—in 1872). 23 others were the result of a conspiracy among Virginia electors in 1836 (they are just so wise and admirable!). Only nine faithless ballots have been cast since 1900, a trifling sum considering the 7295 electoral ballots cast for presidential candidates since 1900 (if I did the math right).

All this to say, the electors do not stand as a check between the will of the masses and tyranny. Given that electors have historically been infrequently faithless (and will likely continue to be faithful, given the incentives of choosing electors wisely), the winner-takes-all system of the EC actually encourages tyranny. Assuming electors chosen by a tyrant would still cast their ballots for that tyrant, then the tyrant does not actually need a popular majority to impose tyranny on the entire nation. Even if the elitist argument was ever correct, it makes no sense today.

Small State v. Big State

The more formidable historical argument is the argument for maintaining some balance between small and large states. Unfortunately, this argument is also rather outdated, given the changing nature of the United States. When the system was established, there were a total of 162 electors in the EC. The smallest state was Delaware, which received six electors. Since the number of electors is the number of Representatives plus Senators, this means that 1/3 of Delaware’s electors were the portion coming from the Senators (i.e., the portion not based on population). With only 162 electors, the 2 that did not depend on population represented more than 1% of the total EC. Thus, in Delaware’s case, this was a significant help in maintaining relevance in the presidential election (though still not huge).

However, even just four years later in 1792, new states meant an EC with 270 electors. Those two “free” electors now meant significantly less. Today, with 538 electors, the two non-population-determined electors mean practically nothing. Giving small states an allowance of two does not help them maintain relevance in the presidential election. If we really were concerned about balancing the interests of small and large states (population wise), we would need to allocate a much larger number of electors to every state regardless of population – on the order of 5 or 6 per state (similar to the 1% of the total EC that 2 electors represented in the first EC). So even though voters in small states command more electoral votes per person (as will be seen below), those states are still mostly irrelevant.
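The arithmetic here is simple enough to check directly, using the EC sizes just cited:

```python
# The two senate-based electors as a share of the whole Electoral College,
# at the EC sizes cited above.
for label, total_electors in [("First EC", 162), ("1792", 270), ("Today", 538)]:
    print(f"{label}: 2/{total_electors} = {2 / total_electors:.2%}")

# First EC: 2/162 = 1.23%
# 1792:     2/270 = 0.74%
# Today:    2/538 = 0.37%
```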

Further, while we know big states matter, the evidence that small states do not matter much is how little money is spent in them. The vast bulk of campaign money is spent in a few critical swing states with more than just 3, 4, or 5 electors. The video below explains it well.

http://www.npr.org/blogs/itsallpolitics/2012/11/01/163632378/a-campaign-map-morphed-by-money

So, I would argue that we are not really helping the small states stay relevant. We see this in nearly every presidential election – the states with three votes receive absolutely no attention. The candidates don’t visit unless it just happens to be convenient. The issues of these states are not addressed. These states are irrelevant.

Representation…or the Lack Thereof

By contrast, consider a state like California. In 2004, John Kerry easily won the state, taking all of its 55 electoral votes. He received 6.7 million popular votes vs. George W. Bush’s 5.5 million. That year, Montana (3 electors) cast around 450,000 ballots in total, Idaho (4 electors) cast about 600,000, Rhode Island (4 electors) cast about 440,000, South Dakota (3 electors) cast about 400,000, and so on.

I show these numbers because in California, 5.5 million individuals were disenfranchised. Their voices had absolutely no effect on the presidential election, whereas in the four states I just mentioned, much smaller numbers controlled 14 electoral votes. Why is it that we are ok with completely silencing the voices of millions around our country just because they disagree with the bulk of people in their state?
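The per-voter weights behind these numbers are easy to work out. A rough sketch, using the approximate ballot totals above (California approximated as Kerry's 6.7 million plus Bush's 5.5 million, ignoring third parties):

```python
# Back-of-the-envelope: popular votes cast per electoral vote, 2004.
ballots = {
    "California":   12_200_000,
    "Montana":         450_000,
    "Idaho":           600_000,
    "Rhode Island":    440_000,
    "South Dakota":    400_000,
}
electors = {"California": 55, "Montana": 3, "Idaho": 4, "Rhode Island": 4, "South Dakota": 3}

for state, votes in ballots.items():
    print(f"{state:>12}: ~{votes // electors[state]:>7,} votes per elector")

# California works out to roughly 220,000 votes per elector, while the
# small states fall between about 110,000 and 150,000.
```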

This holds true in 48 states, but it is most egregious in the large ones and when the race is tight in swing states. Republicans in New York and California are unheard. Democrats throughout the south are silenced. Millions of Americans are disenfranchised in every presidential election. This year, 1.7 million Republicans in Virginia had their votes essentially thrown away because a mere 100,000 more citizens voted Democratic in the state, winning all 13 electoral votes. 1.2 million Democratic votes were shunned in Missouri when 250,000 more Republicans showed up to the polls.

This system is ridiculous. While there may have been something to the small state argument originally, such a system no longer really helps the small states, and even if it did, helping small states at the expense of millions of voters elsewhere would be unjust. This is especially true because electing the president by popular vote would not remove the voice of the small states. Individuals there would merely have the same amount of power as those everywhere else.

This disenfranchisement should be reason enough to change the EC.

Other Issues and Solutions

Some say the EC makes elections clearer, preventing ambiguity on election night and limiting recounts to state-wide endeavors rather than national ones. This is true most of the time, but it is not a reason to disenfranchise so many. The US is the master of logistics—notice our military—and we should have faith in our own ability to figure out how to more quickly process and record the popular vote.

One method would be online voting. I get skeptical looks every time I mention such a proposal. Of course this system would need significant security measures, but it can certainly be done – we pay our taxes online! Other countries already use this method, Estonia being the prime example; it has used online voting since 2005.

However, changing to a popular vote would require a constitutional amendment, which is extremely difficult. Apparently, there have been more than 700 attempts to change the system, so I realize I’m facing long odds here. If a popular vote is out pragmatically, what else could we do?

Another method for solving the problem is the National Popular Vote Interstate Compact. States that pass this compact pledge to allocate their electors to the winner of the national popular vote, provided enough states have passed the compact that their cumulative electors constitute at least 50% of the EC. This would effectively give the presidency to whoever won the popular vote without requiring an amendment (though there may be a court challenge). So far eight states plus DC have passed this compact, amassing 132 electoral votes between them.

If even this is a bridge too far, the final compromise would be to change the allocation of electors as Nebraska and Maine have. In these states, two statewide electors are allocated in the usual way, and then one elector is allocated based on the outcome in each Congressional district. So the electoral vote is not a winner-take-all system, which is a much more (though not entirely) proportionally representative method.

In any case, the EC is terrible in my opinion. Feel free to critique; I’ll do my best to respond.

Written by jonathanwaldroup

November 8, 2012 at 11:11 pm
