When Jesus was born, was there really “no room in the inn?”
I love the Christmas season more than any other, in both its darker advent shades and its brighter Christmas morning celebration. So I was deeply surprised when earlier this year I had my mental image of Jesus’ birth upended by some fascinating observations, built on both a deeper cultural understanding of the times as well as a better understanding of a particular Greek word.
So let’s dive right in. All of the following is based on work by Kenneth Bailey, originally published as a standalone article and more recently in shortened form in his book Jesus Through Middle Eastern Eyes. The full chapter on this topic is available here.
Luke 2:7 (ESV) says:
“And she gave birth to her firstborn son and wrapped him in swaddling cloths and laid him in a manger, because there was no place for them in the inn.”
So what is actually going on here? Bailey makes a few observations that help us understand the context leading up to this verse:
- Joseph and Mary did not arrive in the middle of the night on the night of Jesus’ birth, as is commonly imagined, creating a sense of urgency to accept any accommodation. Instead, Joseph and Mary went up to Bethlehem in 2:4, and then “while they were there, the time came for her to give birth” (2:6). They had arrived in Bethlehem at some point in the past, which would give Joseph time to sort out accommodation suitable for them.
- Bethlehem was the village of Joseph’s origin, and in Jewish custom, when he arrived and told people who he was, the community would have found a place for him to stay as a guest, most likely with a relative. This is particularly true because he was also of the House of David, after which the town was called. To turn Joseph away from his ancestral village, particularly with a pregnant wife, would have caused deep shame for the community.
- Mary had relatives (Elizabeth and Zechariah) nearby in Judea (Bethlehem is in central Judea). Since Mary and Joseph arrived in Bethlehem prior to the night of Jesus’ birth, if there really had been no room available anywhere, they certainly would have gone a few extra miles to Mary’s relatives’ home.
Given these three points, and the likely assumption that Joseph and Mary would have stayed with relatives, why do we still read that there was “no room in the inn?”
This is where the Greek comes in. The word rendered “inn” in many translations is kataluma. Importantly, this is not the word typically used for a commercial inn, which was pandocheion. Pandocheion is the word used in the parable of the Good Samaritan, for example, where the Samaritan pays for the beaten man’s room (Luke 10) and is commonly used elsewhere with the same meaning. The word literally means “all-receiving.”
Since Luke uses both words in his gospel, we must wonder why he chose kataluma here if he intended a commercial inn, as kataluma simply signifies “a place to stay,” and could mean an inn, a guest room, or even a house.
Fortunately, Luke gives us a good idea of which meaning he intended because he uses the word elsewhere in his gospel. When Jesus is giving his disciples instructions on the upcoming Passover meal in Jerusalem (Luke 22), he refers to the “kataluma where I am to eat the Passover with my disciples” (22:11). Of course, we know this to be the “upper room” mentioned in the next verse, where Jesus has his Last Supper.
So we see that Luke chose the word kataluma to describe a guest room in a private home, not an inn. Along with the other context discussed earlier, this suggests that Luke never intended us to picture an inn as part of his story, but rather a private home. Still, there was “no room,” and Jesus was laid “in a manger,” which seems to signify a barn-type setting, right?
Perhaps not. As it happens, houses in ancient Israel (and even to this day in some remote parts of the Middle East) often had areas where animals could come in at night, and this area included mangers! Check out the diagrams provided by Bailey below:
In the first figure you can see that a typical home would have a guest room (kataluma), a family room, and a “stable” area. This stable area was beneath the floor of the main room so animals could come into the house at night without getting in the way of the family. At the edge of the family area, where the animals’ heads would jut over into the family room, would be mangers cut out for the animals to feed if needed. See Bailey’s article for further notes confirming that this type of house would have been assumed in the story.
As one final point, this also fits with how the shepherds respond to what they find. Again, if they had come and found Jesus in a barn, they surely would have offered one of their own homes, however meager, to house the mother and child. This is the reality of communities where honor and shame are powerful motivating factors.
All of this suggests that when Jesus was born, Mary and Joseph were staying in a private home, but the guest room they would have usually been offered was already full for some reason. Since there was “no room in the guest room,” Jesus was born in the family area and was laid in the manger at the end of the family room. No inn, no barn – just a typical birth in a private home that was perhaps a bit more full than usual.
Even though some translations get the “guest room” right, the mentality of the inn has so captured us that we tend to think of it as a guest room at an inn. Instead, Bailey provides a more compelling picture:
- It fits the local reality of a community that would have taken in one of its own
- It shows Joseph as a husband who would have had enough wits about him to arrange for a decent place for his wife to give birth.
- Importantly, it suggests Jesus was not rejected by everyone at his birth, but rather lovingly welcomed, not as a royal would have been, but as a valued part of a community.
For the most part, this example does not have large ramifications theologically, even though I find it fascinating. But it does significantly alter our understanding of the Jewish community into which Jesus was born. With the traditional view, the community is heartless and cold, and Joseph is a terrible father. Bailey’s understanding presents a very different picture. Jesus would have to wait until later in his life to be rejected.
So while this is not of enormous theological consequence, I hope you find the matter as interesting as I did. At the least, this example highlights the huge cultural and historical barriers confronting us when we read scripture. I hope it inspires you to dig deeper to figure out what is really being said.
For further reading on the topic, including possible sources for our traditional understanding of “no room in the inn,” see the following:
- The chapter in Bailey’s Jesus Through Middle Eastern Eyes on this topic
- A web posting of Bailey’s original article on the topic
- A more recent discussion in a similar vein (summary with a link to academic article)
Kenneth Bailey lived and taught New Testament studies for forty years around the Middle East, and much of his writing has focused on understanding the gospels by recovering the cultural context of peasants in the time of Jesus, based in part on practices that have survived to the current day in rural areas of the Middle East. I would highly recommend his Poet and Peasant & Through Peasant Eyes, which discuss the cultural context, as well as other literary observations, of the parables in Luke.
Earlier this year I decided to start what I have been calling “seminary at home,” in an attempt to gain a deeper understanding of the Bible without paying for another degree (which my wife strictly forbade). Instead of continuing to read only mainstream Christian books (Tim Keller, John Piper, etc.), I have shifted more of my reading towards critical, academic works (not that the mainstream books are bad, with the exception of Joel Osteen and the rest of the prosperity gospel ilk). This project has been very rewarding, and I would like to begin sharing some of the things I have been exploring with you, both as a way for me to process the information and because I believe they are simply interesting topics. Expect more in this vein in the future.
So far, most of my reading has been focused on two things:
- Biblical theology – that is, theology at the meta-level, looking at the whole sweep and scope of the Bible from beginning to end. What is the Bible as a whole trying to tell us?
- Historical and cultural context – that is, what is the background for any particular story, parable, poem, or letter, and how does that relate to the meaning of the text? In particular, what assumptions would the author expect his reader/hearer to have, most of which will never be stated anywhere?
While my focus on these two elements was largely because I fell in love with NT Wright, who focuses heavily on both things in both his scholarly and lay writings, I also think they are an appropriate place to start a more serious study of the Bible.
This is because “a text cannot mean what it never meant,” as Gordon Fee and Douglas Stuart write in their surprisingly good How to Read the Bible for All Its Worth. That is to say, our starting point should always be what a text meant from the pen of the original author to the original audience. If we do not ground our understanding of the text in its initial context, we can make it say whatever we want and there is no fixed authority for what the text really means, which is where every good cult begins. So examining both the immediate historical and cultural context as well as the larger backdrops and narratives of the writers and audiences (i.e., the Biblical theology) provides a strong framework for understanding what any particular passage means.
This is the heart of exegesis: critical interpretation of a text based on the types of information described above. The “ex-“ at the beginning of the word suggests the interpretation is coming “out of” the text as it was originally intended. Exegesis is opposed to eisegesis, which is interpretation that (erroneously) incorporates the reader’s own assumptions and presuppositions, with “eis-“ signifying a reading “into” the text of our own thoughts.
This latter way of approaching scripture is unfortunately very common and can have large ramifications. On a very personal level, it is evident in the practice of some who may be encountering an important decision and are looking for a verse to confirm their predispositions. Someone who is lacking the discipline of commitment may find justification for not marrying in Paul’s discussion of singleness, for example. Or someone who finds their worth solely in romantic relationships may make the opposite mistake, supposing any marriage is better than none from the many positive discussions of marriage in the Bible. Both miss the larger Biblical picture of marriage as a blessing that requires self-denial and sacrifice.
But eisegesis can have consequences at a systemic level as well. I personally believe that one of the areas where eisegesis systematically distorts the Bible—and a whole host of important theological views—is in the area of eschatology, that is, the study of the “end times.” Many people bring so much baggage to this topic that the entire fundamental Christian hope is obscured in their interpretations. Two of the most important and erroneous assumptions that many Christians bring to the Bible are:
- The idea stretching back to Plato that the physical world is meaningless, and only the spiritual has value (completely unbiblical).
- That all scripture is intended to be read “literally,” meaning Revelation and other apocalyptic literature is essentially historical narrative written before the fact.
With these ideas in the background, many Christians get to a book like Revelation and conclude that:
- Heaven is a disembodied reality, lacking any sort of physical dimension.
- “This world is gonna burn, so who cares how much we pollute,” and similar notions
But these resulting interpretations deny two of the most critical theses of the Bible. First, they deny that God made creation as good and delights in it (see the Psalms, Job, etc.), and that we were made specifically to care for it, steward it, and husband it into its full potential. Humanity was intended to be God’s vice-regents of the whole creation, and we are still supposed to be (one day, we will fulfill that role again).
Second, it denies the entire Christian hope, which is of course bodily resurrection into a renewed creation. Not just some piddling future where we sit around on clouds playing harps, but new bodies in a renewed creation, where we will pursue physical tasks. For the longest time when I was growing up, I had the disembodied, world-is-gonna-burn image in my head for how heaven would work, and it made no sense. I have to give it to this crowd – at least they are logical. If heaven really is a disembodied future and the earth is going to be annihilated, this life just doesn’t matter at all, nor does any of our work or how we treat the planet.
But that is not the hope of the Christian. Resurrection is our hope. If our body dies and all that remains is spirit, how is that the defeat of death? All that means is that the spirit doesn’t die, but the body still does, and we don’t get a new one. Resurrection, on the other hand, says that our bodies will be renewed and perfected. That is the defeat of death – physical death undone. And the Bible also describes how, somehow, what we do for good now will remain in the renewed creation, perfected by Christ. That means that we have hope for the future, and purpose in the present.
So all this to say, context matters, and what we bring in the back of our mind to any particular text is likely to mislead us unless we consciously put it aside and examine the text for what it is. That is what I am trying to do more systematically now during my seminary at home, and what I hope to share with you going forward.
So with that background, check back again soon for my first topical discussion on a Christmas theme. Specifically, when Jesus was born, was there really “no room in the inn?”
For those of you who might be interested, I have just had a blog post published on Oxford University’s Skoll Centre for Social Entrepreneurship blog, describing the career path that has led me from dropping out of Oxford to where I am now. You can read it here: http://skollcentreblog.org/2014/09/09/finding-career-purpose-in-social-enterprise-2/
This week my Facebook feed has been bombarded with friends doing the Ice Bucket Challenge. I’ve seen some pretty interesting adaptations (including a traveling friend who had no bucket, so he jumped off a 60-foot cliff into the ocean instead – kudos, Rob). I’ve also seen a number of what some people might call “haters” questioning the merits of the whole endeavor. With feelings running strong on both sides, I couldn’t help but join the fray. What should we make of this fad?
I think the first thing we should notice is just that: it’s a fad. That said, it’s a fad I wish I had thought of. Working for a small non-profit myself, I would be the hero of the century if I could increase our revenue by such a huge percentage with a random viral challenge. So the next thing we should recognize is: there is absolutely nothing wrong with using crazy marketing stunts like this to raise money for a worthy cause.
With the exception of the many who seem to have trouble dumping water on themselves, there is no harm in the challenge, and it will hopefully do significant good for the state of research for ALS.
So before I move on, I want to emphasize the importance of the outcome. The outcome is very positive, so from that perspective, the ice bucket challenge is great. And I applaud anyone who has participated with the goal simply of increasing funds going to this research.
But regardless of the outcome, why has this particular endeavor been so successful? And what does the answer to that question tell us about American society? Anything that goes viral tells us something about the deep desires of the society hosting the “virus.” What is this “virus” exploiting so effectively in our society?
A Google search tells me that, among all taxpayers (not just those who itemize tax deductions), charitable giving in the US averaged between 2 and 2.5% of income in 2008 for all those with income less than $500,000 (the most recent I could find for this total population data). With a median income in the US of $51,000, 2% would amount to just over $1,000 per year given to charitable causes, including religious organizations. Why do we spend so little on causes we claim to care about, when we are willing to spend so much of our income on frivolity (consider the hordes of low-salaried young people who will spend $100-200 in a single night on alcohol, and that multiple times per month)?
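The back-of-the-envelope arithmetic above can be checked in a few lines. The income and giving-rate figures are the ones cited in the paragraph; the nights-out comparison uses hypothetical numbers for illustration:

```python
# Rough check of the giving arithmetic cited above.
median_income = 51_000   # approximate US median income, per the figure above
giving_rate = 0.02       # ~2% of income given to charity

annual_giving = median_income * giving_rate
print(f"Annual charitable giving at 2%: ${annual_giving:,.0f}")  # → $1,020

# Hypothetical comparison: $150 on a night out, twice a month.
annual_nights_out = 150 * 2 * 12
print(f"Annual spending on nights out: ${annual_nights_out:,.0f}")  # → $3,600
```

On these (admittedly rough) numbers, casual entertainment spending alone can run to several times a typical year’s charitable giving.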
The rather obvious fact is, people want to spend money on themselves, regardless of what they say about their beliefs or goals. That is why it takes a gimmick to bring out donations in any sizable amount from the population at large. That doesn’t make the ice bucket challenge bad; it makes it savvy.
But there have been other gimmicks. Why has this particular gimmick been so effective?
Some of the response is chance – the right influencer dumps water on his head at the right time, and it takes off. But it never would have gone viral if not for the fact that we all want to appear as good, generous people online. No one is going to applaud my generosity—or my well-apportioned swimsuit physique—if I am asked to give money for ALS research and I just do it, privately.
An immediate clarification is needed – I know that many people who are participating in the challenge are extremely generous, and give regularly to all sorts of good causes. Even if the truly generous join in, the reason something like this goes viral is, sadly, because of appearances. Requests to privately increase support for any cause will never be as effective as requests to increase support for a cause that also bolsters your image, even if the generous donate in both instances.
But I would argue that private giving is what counts, especially because it is often more durable, not being motivated by social performance. Very few of those who give $10 for ALS research will continue supporting the cause on an ongoing basis, simply because of the lack of any continuing social payoff. But sustainable change only happens with sustainable support, most of which does not afford the opportunity for Facebook posting.
The wild popularity of the ice bucket challenge is sad evidence that as a society, we have missed the whole point of generosity, which is not really generosity if the goal is self-aggrandizement. There is a good reason that the ideal of Christian generosity is captured in Jesus’ command to not let your left hand know what your right hand is doing when you give to the poor.
My Challenge to You
The world is full of problems, and the world is full of people trying to solve those problems. To make any sort of sustainable impact on those problems, more is needed than generosity only when a viral trend demands it.
So instead of challenging you to dump water on your head and give two Starbucks lattes worth of money to ALS research, I challenge you instead to pick an organization that you think is making a lasting, positive difference in the world, and commit to support it for at least a year, with at least $100 per month, or whatever amount makes it hurt just a little.
And don’t tell me about it.
Inigo Montoya, of The Princess Bride, was a very wise man. If only he knew how widely his wisdom could be applied. In this case, the word that we do not properly understand (or rather, the acronym) is GDP.
Nearly everyone in the US reflexively considers an increase in GDP to be a good thing. It is understandable why this association exists: most people do think of themselves as better off when they have more income, which GDP purports to measure on a national level (roughly), and generally when the country as a whole is doing better, people assume they have a better chance of personal success. Also, to the extent that we find pleasure in our nation’s wellbeing, we are made better off by the collective improvement of income, regardless of the personal outcome. And in many countries of the world, GDP growth plays an important role in helping reduce poverty, which is certainly positive.
But the positive associations with GDP growth are so firmly baked into our cultural consciousness that most people judge not only our nation’s economic health by the measure, but also our overall wellbeing. If GDP is up, we are doing well as a country; we have succeeded. If it’s down, someone needs to be kicked out of office. You will struggle to find a politician who has anything negative to say about GDP growth, ever.
However, there are many reasons to believe that increasing GDP does not necessarily improve our country on the whole. At this point we could take this discussion in a number of directions. One problem might be if GDP growth is funneled to an elite few, which would increase inequality and perhaps fuel political instability. Some would say this is exactly what is beginning to happen in the US, and indeed, income and wealth inequality have increased dramatically over the past few decades. Check out this video on wealth inequality and this chart of income inequality in the US.
Another reason might be that as a society, we don’t actually need more income per se, having reached a level of affluence such that other non-financial goods should be emphasized more than pure income (such as the cohesion of families, good health, strong communities, the enjoyment of the arts and nature, etc.). John Kenneth Galbraith, in his 1958 classic The Affluent Society, makes a similar (though not identical) point even at a time when the US was far from its current level of aggregate wealth.
This last point touches on what I see as the fundamental problem (which is also discussed by Galbraith): GDP is a very poor measure of wellbeing. But our society as a whole has co-opted GDP—which was never intended to be more than an indicator of economic activity—to serve a role for which it is not useful. That is why I got excited when I heard that the Bureau of Economic Analysis was going to change the way GDP was measured.
The new changes will now include expenditures on research and development and creative work, among other things, adding perhaps 2% to annual GDP (quite a lot). And while this is great, unfortunately, the improvements don’t go nearly far enough. Specifically, there are three mammoth problems with the measurement of GDP that still need to be tackled:
1) For GDP, there is no such thing as a bad transaction (unless it is illegal). GDP adds up all the money that changes hands (that it can track), but it doesn’t matter if these transactions are beneficial for society or not. This is most visible after a natural disaster – because GDP only measures money changing hands, reconstruction efforts after disasters often prove to be a boon to GDP. Obviously natural disasters are terrible for the communities affected, but GDP often shows the opposite. Another example would be the GDP generated from the ever-expanding divorce industry (think lawyer fees, the need for two homes where previously one was sufficient, etc.), though I think few people would actually believe divorce is good for our country.
2) GDP ignores the informal and volunteer economy. In particular, it ignores the work done by stay-at-home parents and volunteering of all kinds. These activities save money for the families involved and also often contribute positive social value by building up strong households and communities. And yet, GDP ignores them. In fact, the more people contribute to such activities, the lower GDP will become, since that will mean people are not paying money for childcare, housecleaning, etc. Plus, any positive social contribution is also bad for GDP, since it undoubtedly harms those industries that cater to disintegrating families and communities (especially lawyers).
3) GDP ignores the value of the environment. This, in my mind, is one of the most significant failings of GDP. For centuries the value of natural resources and services has been taken for granted because we did not think humanity could significantly reshape global ecology. Unfortunately, we are discovering that we can (and do) have such an impact. GDP takes no account for how we are harming the environment’s ability to deal with our waste, provide our food, clean our water, make our oxygen, etc. Perhaps even more problematic (and clearly ridiculous), GDP does not take into consideration the fact that we are depleting natural resources. If we drill an oil well and empty it, this only shows up as income and takes no account of the fact that the natural resource is gone (i.e., our wealth is reduced). This is similar to someone draining their retirement account at age 65 to splurge on a lavish Caribbean vacation and thinking they have not done themselves any harm – after all, that was a really great resort! Obviously, using our natural resources has costs attached to it, especially if the resources are non-renewable.
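The three blind spots above can be made concrete with a toy sketch. All the figures here are made up for illustration, and the “adjusted” measure is only in the spirit of green-accounting proposals (such as the World Bank’s adjusted net savings), not any official methodology:

```python
# Toy national accounts (hypothetical figures, in $billions) showing why
# raw GDP can diverge from a welfare-oriented reading.
ordinary_output = 1000        # goods and services people actually want
disaster_rebuilding = 50      # spending forced by a natural disaster
volunteer_labor_value = 80    # unpaid household and volunteer work GDP never sees
resource_depletion = 60       # value of oil pumped out and gone for good

# GDP simply sums market transactions, beneficial or not:
gdp = ordinary_output + disaster_rebuilding

# A crude welfare-style adjustment: don't credit forced rebuilding,
# do credit unpaid work, and subtract the wealth drawn down.
adjusted = ordinary_output + volunteer_labor_value - resource_depletion

print(f"Raw GDP:  {gdp}")        # → 1050
print(f"Adjusted: {adjusted}")   # → 1020
```

The point of the sketch is the direction of the gap: the disaster and the oil drawdown push raw GDP up, while everything the volunteers contributed never appears in it at all.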
Simon Kuznets, who developed the system of national accounts that includes GDP in the 1930s, knew better than most the limits of GDP. In 1934, Kuznets clearly articulated the need to keep GDP out of the business of measuring overall wellbeing: “The welfare of a nation can, therefore, scarcely be inferred from a measurement of national income as defined above.”
It’s not GDP’s fault, really – we have forced it to take more responsibility than it can shoulder. But even though there are plenty of separate indicators to tell us how we are doing as a country on things like health and happiness, the emotional and political weight attached to GDP is so strong that we must work to change it. Otherwise we will continue to be swayed in negative ways by a number that doesn’t tell us what we want it to. So next time someone uses GDP as a proxy for wellbeing, remember the wise words of Inigo Montoya: “You keep using that word. I do not think it means what you think it means.”
- Alternatives to GDP: a Google search is very effective, but here is one link with links to a variety of alternatives.
It has been many moons since I last posted. My apologies. I hope to be posting more regularly soon.
But in the meantime, I won an essay contest and finally have my work published by a legit, external party! Check out my essay on climate change and poverty reduction (it’s short): http://www.brettonwoods.org/sites/default/files/documents/Henry_Owen_Award_Essay_Waldroup_1stPlace.pdf
(Here is an explanation of the contest: http://www.brettonwoods.org/article/inaugural-owen-award-celebrates-graduate-students)
The Elements of Style, by William Strunk Jr. and E.B. White, has been the Bible of English style for two generations. The book provides all sorts of advice, including the now-famous dictum “Omit needless words!” as well as the rule of adding ’s to singular nouns to make them possessive, with the exceptions of Jesus, Moses, and other ancient names, which warrant only an apostrophe. While several of the book’s points are well-founded, it also has more than its share of absurdities. And having recently come across several other lists of writing tips from famous authors, I find that the majority of them are full to the brim with ridiculous rules and suggestions.
So last night I went looking for my copy of Strunk and White in order to denounce it. To my great consternation, my copy of this little book had been stolen! My wife apparently took it to school, where, as an English teacher, she supposedly has some use for it. I am deeply skeptical. (She claims to have had it at school for a year. Apart from my skepticism, she is also violating the role of a private library as a working tool, as outlined in a previous post.) But not to worry, I have several other works that refer to the book extensively, so there is still plenty of material for my diatribe.
The problem with many of these style guides is that “good” style changes dramatically with time. As languages change, so too do the expectations of readers and the ways in which words best communicate their intended purpose. What one style guide says is best in 1959 (when Strunk and White’s edition of The Elements of Style was first published) may not apply today. That said, many of the most basic suggestions from Strunk and White are still worth following:
- “Omit needless words.”
- “Do not join independent clauses by a comma.”
- “Use the active voice.”
These are all legitimate pieces of advice that are likely to improve the readability of any text. But other rules come across as attempts to create laws out of pet peeves:
- Spell out dates in quotations (e.g., “August ninth”) but not when the author is using a date (August 9th)
- Use “which” for nonrestrictive clauses and “that” for restrictive clauses
- Only use “hopefully” to mean “in a hopeful way” rather than the more common usage
- Do not use “claim” as a substitute for declare, maintain, or charge
- Do not use “due to” to mean “through, because of, or owing to”
Such rules are utter nonsense, based entirely on a prescriptivist desire to keep language from changing, to maintain its “purity.” These types of rules are all predicated on the assumption that whenever the rules are being written down (say, in 1959), that era’s language is the way the language ought to be. But since language is constantly changing, any such claim is absurd, for there was always an earlier, “purer” language from which the current manifestation evolved. While some standards, such as concision, tend to last through the ages, petty proscriptions against words creeping into new senses, as with “claim” or “due to” above, are bound to be ignored by the masses as the new definitions become part of the next generation’s idea of “pure” language.
The wonderful website Brain Pickings recently posted some other writing tips from long-dead writers. Here are selections from the twenty most common writing mistakes in the eyes of H.P. Lovecraft, the horror and weird-fiction author, writing in 1920:
- “Barbarous compound nouns, as viewpoint or upkeep”
- “Use of nouns for verbs, as ‘he motored to Boston,’ or ‘he voiced a protest.’”
- “Errors in moods and tenses of verbs, as ‘If I was he, I should do otherwise,’ or ‘He said the earth was round.’”
- “False verb-forms, as ‘I pled with him.’”
- “Use of words in wrong senses, as ‘The book greatly intrigued me,’ ‘Leave me take this,’ ‘He was obsessed with the idea,’ or ‘He is a meticulous writer.’”
I too would call “viewpoint” barbarous; only pirates use that word, in my experience. Of course, we say many of the other things forbidden in this list, and no one in his right mind would correct them. In fact, the dictionary now even endorses “pled” as a perfectly acceptable past tense of “to plead.” Were he still alive today (check out that subjunctive!), I bet Lovecraft would start a petition to undo the deleterious effects of “google” entering the dictionary as a verb.
Some of the most famous stylistic rules are in fact vestiges of other languages that need not apply in English at all. For instance, the injunction against split infinitives, as “to quickly run” or “to fervently believe,” is based on a former incarnation of the idea of a “pure” language. Back in the day, many believed Latin to be the perfect language. In Latin it is impossible to put anything in between the two components of an infinitive (“to” and “run,” for instance) because infinitives were a single word (“to run” in Latin is “currere”). Great English writers have ignored this rule in every century since the 1300s.
Another oft-cited rule is to never end a sentence with a preposition. John Dryden, in 1672, apparently criticized Shakespeare-contemporary Ben Jonson for such a sin, and it has been brought up in every generation since then. Once again, this construction is impossible in Latin, and Dryden was known to first write in Latin, as he thought it the superior language, and then translate into English, explaining his opposition to dangling prepositions. But there is no sensible reason to follow such a rule, and, again, great writers and even very large newspapers like the New York Times have routinely ignored this rule.
The list of erroneous rules and suggestions for writing could go on and on. In every age, writers think that their own writing is the height of language, but every new age heralds new feats of language and writing that add new dimensions to the English repertoire. In their better moments, many of the same rule-wielding writers quoted above still saw the larger picture. So after having bashed their many pieces of bad advice, here are a few sage comments worth considering:
“Style rules of this sort are, of course, somewhat a matter of individual preference, and even the established rules of grammar are open to challenge. Professor Strunk, although one of the most inflexible and choosy of men, was quick to acknowledge the fallacy of inflexibility and the danger of doctrine.”
- E.B. White, in one of his better moments – Essays of E.B. White, 325
“All attempts at gaining literary polish must begin with judicious reading, and the learner must never cease to hold this phase uppermost. In many cases, the usage of good authors will be found a more effective guide than any amount of precept. A page of Addison or of Irving will teach more of style than a whole manual of rules, whilst a story of Poe’s will impress upon the mind a more vivid notion of powerful and correct description and narration than will ten dry chapters of a bulky textbook.”
- H.P. Lovecraft, from the article on Brain Pickings
“If there is a magic in story writing, and I am convinced there is, no one has ever been able to reduce it to a recipe that can be passed from one person to another. The formula seems to lie solely in the aching urge of the writer to convey something he feels important to the reader. If the writer has that urge, he may sometimes, but by no means always, find the way to do it. You must perceive the excellence that makes a good story good or the errors that make a bad story. For a bad story is only an ineffective story.”
- John Steinbeck, from another article on Brain Pickings