Saturday, December 30, 2017


Dali: Nativity, 1967


In the morning we found we’d been
in each other’s dream.
Our dreams would disappoint
both Jung and Freud:
you were with me at a party,

I with you in Salt Lake City.
The parallel fact
beat in us like a heart:
untouching, we touched,
silent, we talked,
unmoving, we walked together.

I invented instant metaphysics:
death as a dream
in which we still
go to parties, drive to Salt Lake City.
But I could be wrong:

in this dreamtime,
not long, let the tongues
taste and tell,
let the dreamers’ arms
repeat the gesture, gathering
the beloved body.

Tonight an orange half-moon
rises over the coyote hills
like one half of a wish.
Some eternal I do
is said without our speaking.

The moon’s mottled lamp
lights our good-nights.
Let us now turn it off, until
we meet in another dream,
beyond the white rain of the stars.

~ Oriana



Wonderful dream, wonderful poem. I like those metaphysics — “untouching, we touched” — as if no amount of space or time can separate these lovers, who will keep meeting like this, in parallel dreams!


One of the sweetest experiences of my life, unforgettable — even if the dreams themselves were forgettable except for the parallel presence of the partner in them. (And yes, we are still together.)

Funny, I suddenly remembered that my mother used to call me every time she happened to have a dream about me. I remember only one of those dreams . . . But more than anything, I remember that she had them, and cared enough to call me the morning after — yet another reason to love mothers and forgive them for being overprotective and for always seeing us as children (of course!).

(A shameless, unromantic partial denial of “unforgettable”: or, as Samuel Beckett remarked, “What was that unforgettable line?”)

I can’t discount the fact that I wrote a poem about this experience. Would I spontaneously remember it if I never wrote the poem? I can’t say for sure. But we are in part created by our own creations.

O look at all the fire-folk sitting in the air!
The bright boroughs, the circle-citadels there!

~ Gerard Manley Hopkins



This morning, the rude ringing of the phone (Home Depot) exiled me from the earthly paradise I saw in a dream. A campus in pine woods (is there a tree more glorious than the Jeffrey pine?). Buildings with great bay windows, a campus practically of glass so there is but a membrane between the inside and the trees. I turn to the area that is just lawn — but what green! We don’t get such green in California, except along mountain streams — and I say to my guide (a dean?): “We need to plant more trees.”

Always a more northern, rain-rich landscape, but especially the woods. The dream of living in the woods, ever since in childhood I saw Jurata on Hel Peninsula, the villas among the pines.

“Want to make sure you haven’t wasted your life? Plant a tree.”


“Want to be happy? Think like an old person,” read a recent headline in the New York Times.

Fortunately, one doesn’t have to be really old to realize that it’s simply too late in life not just for depression, but for all kinds of noxious nonsense.

Too late to whine and grumble.

Too late to hold back forgiveness.

Too late to waste time on debates with religious people: our premises are so different, talking leads nowhere. The wise refuse to talk with creationists, Holocaust deniers, or climate-change deniers. Creationists and deniers care nothing about the earthly paradise, only the pie in the sky and/or the purity of their ideas, no matter how at odds with reality.

Time only to share beauty, poetry, and whatever life wisdom I have learned through both pain and joy. Suffering isn’t the only teacher. Each relationship has been a profound learning experience. Even the bad ones — I simply can’t repent having loved. Even if spoken to a stranger, I want my last words to be “I love you.”


~ “The murder rate in most countries has fallen significantly in the past 15 years. That's the reality, but most people don't believe it — fewer than one in 10 thinks there are fewer murders.

Deaths from terrorist attacks around the world were lower in the past 15 years than in the previous 15 — but only a fifth of us think that's the case.

Even when it comes to other areas of public life, people's assessments can tend to be incorrect.

For example, people overestimate the number of teenage pregnancies by what the researchers call staggering amounts.

In some countries, they think about half of teenage girls get pregnant every year: in reality, the highest figure for any country is 6.7%, and the rate across all 38 countries is just 2%.

One reason for this tendency to assume the worst of the world, say the Ipsos-Mori people, is that we're genetically programmed to believe bad news more readily than good.

Our brains process negative information in a different way and store it more accessibly than positive stuff.

One neuroscientist demonstrated this by showing people pictures of things known to arouse positive feelings — which apparently include pizzas and Ferraris — and others known to arouse negative ones, such as a mutilated face or a dead cat.

As he did so, he measured the electrical activity in the brain. It turns out we react more strongly to the negative images.

The world is getting healthier and wealthier, which is good news, but headlines about that sort of thing just don't cut it when there's a terrorist attack or a war to report.

“If it bleeds, it leads,” is said to be the tabloid news editor's mantra. Whoever coined the phrase clearly had a profound insight into human nature.

Critics talk about "fear-based media". If we're fed such a relentlessly negative diet, they ask, is it any wonder we end up thinking the world is a terrible place?

Except, it turns out, we already thought that — or at least were predisposed to think it.

All those negative news stories are just reinforcement, feeding us what we're programmed to want — because it may save our lives.

This hypersensitivity to negative information — or bad news — apparently served an important function as human beings evolved.

Having the kind of brain that reacted more strongly to information about possible dangers meant, quite simply, that you were likely to live longer.

And those who didn't have that kind of brain? Well, as one scientist delicately put it, they “got edited out of the gene pool.”


A topic that threads through several subjects of the blog this week is the lack of congruence between reality, the facts of a situation, and our perceptions and beliefs about that reality. Interesting that we react differently to good news and bad, and that the tendency is to pay more attention, and give more weight, to bad news. This bias may once have helped us survive: that readiness, even expectation, of danger and threat. But it seems much less useful now, when the overwhelming emphasis on the negative can lead to hopelessness, depression, and despair, and, perhaps even worse, to misdirecting energy and resources in ways that squander them, while real needs are ignored, overlooked, or given scant attention.


Not that I mean to blame the media — they need support at this point. But anyone with half a brain can understand that a mass shooting will be reported in a way that takes precedence over a thousand acts of kindness that took place during the same time. It’s just unavoidable, but it does distort our view of reality (we live in the safest time in history) and of human nature (not innately evil). This feeds the right-wing ideology.

By the way, every time there is a mass shooting, the stock prices of the gun industry go up. A mass shooting stirs up the fear that gun control will finally be enacted, so gun enthusiasts rush to buy more guns. I don’t think there is the slightest bit of empathy for the victims, despite all the “thoughts and prayers” (I think the phrase has become morally obscene). Nor is there any REALISTIC concern about one’s own safety. Studies show that having a gun in the home greatly increases the chance of accidental or intentional homicide. (A bit of black humor here: “Without a gun, how could I defend myself against my family?”)

I call it "Ganesha Dancing"


But here is something fascinating:

“Gerontologists call this the paradox of old age: that as people’s minds and bodies decline, instead of feeling worse about their lives, they feel better. In memory tests, they recall positive images better than negative; under functional magnetic resonance imaging, their brains respond more mildly to stressful images than the brains of younger people.” ~ New York Times, 12-29-2017

Arik Brauer: My Father in Winter, 1983-84

~ “Glory to God in the highest heaven,
and on earth peace to those on whom his favor rests.”
~ Luke 2:14, New International Version

~ “You know that favorite Christmas verse that says “Peace on earth, good will toward men”? Well, that's not really what it says in the Greek. What it really says is “On earth, peace toward those whom he favors.” You can look it up in almost any English version besides the KJV and see it translated more accurately.

It may look like a small difference, but it's not really small at all. The correct rendering tracks better with the rest of the Bible, which always presents God's blessings as discriminatory, not equally doled out to all in the same way. If any God created "all men equal," it most certainly wasn't the God of the Bible.

But that's not what people today read when they get to that verse. They read what they want to read there, and telling them what the original says doesn't change their minds one bit. Cognitive dissonance is a powerful thing, and the religious mind typically doesn't change directions, even if new information arises from an already approved source of authority. I find that fascinating.”

~ Neil Carter (Godless in Dixie)

Parmigianino, Self-Portrait in a Red Hat, 1540

Neil Carter knows New Testament Greek, and I trust his accuracy. It’s true that Yahweh plays favorites in the most blatant way. But in this case, I favor the inclusive mistranslation in the King James Version. Inaccurate translation is one way a religion can reform.

Equality is a very new idea, not yet a century old (I don’t mean “equal before the law,” which is older, but true “equal rights”), and not accepted by most of the world . . . and yet I sense it shall prevail (if humanity survives this century, that is — a century that began with 9/11, not a good omen; and yet, for now, ISIS is losing).


~ “It will soon be that time of year where many of us set ourselves up for failure. Make a resolution or don’t make a resolution; you will regret either. Or so the Danish philosopher Soren Kierkegaard might quip. One estimate suggests that almost half of Americans make New Year’s resolutions, and yet less than 10 percent successfully follow through. Maybe we forget about them long before our snow boots dry out. Maybe life takes us on a different path. Maybe we stop caring. Maybe we simply fail. It might be tempting to do away with this farce altogether, but before we commit to being noncommittal about the New Year, it’s worth thinking through some of the options.

The tradition of making New Year’s resolutions is at least four thousand years old. The ancient Babylonians celebrated their new year—the rebirth of the sun god Marduk—in spring, to coincide with barley-sowing season. Akitu was a twelve-day festival in which the king would promise to fulfill an extensive list of duties. To seal the king’s commitment, the high priest would slap him hard across the face. The slap had to be firm enough to draw tears: proof of the king’s dedication and a reminder to him to be humble. As part of the festival, other people also pledged their allegiance to the king and the gods and promised to repay their debts.

It may be tempting to overthrow this ancient tradition, to make no resolutions, and to go along with the flow of life like a carefree leaf on the surface of a happily bubbling stream. But Kierkegaard would argue that such a metaphor is deceptive: we would be akin to a stone hurled across the surface of the water, which “skips lightly for a time, but as soon as it stops skipping, instantly sinks down into the depths.” Without commitments, we risk disappearing into the existential abyss. A life that lacks purpose creates anxiety. A meaningful life, Kierkegaard suggests, is one in which we actively assert ourselves in order to live more fully.

It’s all well and good to make promises, but there’s still the challenge of keeping them. Friedrich Nietzsche suggests that what differentiates humans from other creatures is that we have “the right to make promises.” Making promises addresses a fundamental aspect of our humanity: that each of us is and is not the person we will become in the future. This is confusing, so let’s get concrete: Are you the same person you will be next year? Well, not exactly. Gray hair may sprout, wrinkles may emerge, your voice may deepen and thicken, your joints begin to ache. Your physical characteristics will objectively change, even if minutely. Your emotional and psychological identity may also shift; you might get a new job or a new partner, a new hobby or a new therapist.

A promise is a way of laying claim to an uncertain future. It is a way of projecting oneself into the coming months, protecting a commitment that may be impossible to keep. It is also a means of guarding or binding one’s identity—the I in “I promise.” Why does a nonhuman animal not make promises? Most don’t have a conception of themselves as individuals or a vested sense of identity. Yes, some animals may experience guilt, but guilt is not the same as the shame of breaking a longstanding promise. Nietzsche’s suggestion is that we ought to keep making resolutions—heartfelt, honest-to-God promises—lest we devolve into an animal-like state.

Nietzsche does not say, however, that we must keep our resolutions. Sometimes, many times, the cost is simply too high. To fulfill all promises unconditionally may be unwise, if not pig-headed and arrogant. For example, perhaps you committed to shedding a few pounds, but it turns out that your blood sugar plummets every time you go for more than two hours without a snack and you’re constantly on the verge of passing out. So that wasn’t a great resolution after all. Or you resolved not to go on any new dates and to focus on your career, but every morning you bump into the same lovely person at your favorite café. With new information, you just might need to leave some commitments behind. There’s no reason to feel guilty about that. The Romantic view of the self is that there’s no need to feel enslaved to an idea of ourselves that we wanted in the past. The self is forever in flux, changing, growing. The Romantic self is one that is ready to annihilate itself over and over again. As Nietzsche’s most famous protagonist Zarathustra says, “You must be ready to burn yourself in your own flame: how could you become new, if you had not first become ashes?”

For an existentialist an unwillingness to “burn yourself in your own flame,” to overcome or break a promise, can be a sign of “bad faith.” “Bad faith” is a situation in which you disavow the immediate free will that is always at your disposal. Bad faith is “bad” because it denies the hard, metaphysical core of being human—radical freedom. Radical freedom means we are radically responsible both for keeping and for transgressing promises. The fragility of our promises is what makes them meaningful.

So go ahead: make your resolutions. You have the right to make promises. And you have the right to break them. But you don’t have to make them during an evening of late-night drunkenness. That is what the rest of your sober life is for.” ~


I think most people attempt too much. Resolve to do something very small that would nevertheless improve your life. Go ahead, nothing is too tiny — it’s so sweet to succeed.



~ “The first line of Epictetus’ manual of ethical advice, the Enchiridion — “Some things are in our control and others not” — made me feel that a weight was being lifted off my chest. For Epictetus, the only thing we can totally control, and therefore the only thing we should ever worry about, is our own judgment about what is good. If we desire money, health, sex, or reputation, we will inevitably be unhappy. If we genuinely wish to avoid poverty, sickness, loneliness, and obscurity, we will live in constant anxiety and frustration. Of course, fear and desire are unavoidable. Everyone feels those flashes of dread or anticipation. Being a Stoic means interrogating those flashes: asking whether they apply to things outside your control and, if they do, being “ready with the reaction ‘Then it’s none of my concern.’ ”

Reading Epictetus, I realized that most of the pain in my life came not from any actual privations or insults but, rather, from the shame of thinking that they could have been avoided. Wasn’t it my fault that I lived in such isolation, that meaning continued to elude me, that my love life was a shambles? When I read that nobody should ever feel ashamed to be alone or to be in a crowd, I realized that I often felt ashamed of both of those things. Epictetus’ advice: when alone, “call it peace and liberty, and consider yourself the gods’ equal”; in a crowd, think of yourself as a guest at an enormous party, and celebrate the best you can.

Epictetus also won me over with his tone, which was that of an enraged athletics coach. If you want to become a Stoic, he said, “you will dislocate your wrist, sprain your ankle, swallow quantities of sand,” and you will still suffer losses and humiliations. And yet, for you, every setback is an advantage, an opportunity for learning and glory. When a difficulty comes your way, you should feel proud and excited, like “a wrestler whom God, like a trainer, has paired with a tough young buck.” In other words, think of every unreasonable asshole you have to deal with as part of God’s attempt to “turn you into Olympic-class material.” This is a very powerful trick.

Much of Epictetus’ advice is about not getting angry at slaves. At first, I thought I could skip those parts. But I soon realized that I had the same self-recriminatory and illogical thoughts in my interactions with small-business owners and service professionals. When a cabdriver lied about a route, or a shopkeeper shortchanged me, I felt that it was my fault, for speaking Turkish with an accent, or for being part of an élite. And, if I pretended not to notice these slights, wasn’t I proving that I really was a disengaged, privileged oppressor? Epictetus shook me from these thoughts with this simple exercise: “Starting with things of little value—a bit of spilled oil, a little stolen wine—repeat to yourself: ‘For such a small price, I buy tranquillity.’ ”

Born nearly two thousand years before Darwin and Freud, Epictetus seems to have anticipated a way out of their prisons. The sense of doom and delight that is programmed into the human body? It can be overridden by the mind. The eternal war between subconscious desires and the demands of civilization? It can be won. In the nineteen-fifties, the American psychotherapist Albert Ellis came up with an early form of cognitive-behavioral therapy, based largely on Epictetus’ claim that “it is not events that disturb people, it is their judgments concerning them.” If you practice Stoic philosophy long enough, Epictetus says, you stop being mistaken about what’s good even in your dreams.” ~


“For such a small price, I buy tranquillity” — just that statement has affected me deeply.

And also, for such a small price we can make others happy — a larger tip, sometimes simply a smile. And when we make someone else happy, we automatically feel good too. 


~ “Leisure … is not the privilege of those who can afford to take time; it is the virtue of those who give to everything they do the time it deserves to take. In fact, work ought to be done with leisure, if it is to be done well.

The heart is a leisurely muscle. It differs from all other muscles. How many push-ups can you make before the muscles in your arms and stomach get so tired that you have to stop? But your heart muscle goes on working for as long as you live. It does not get tired, because there is a phase of rest built into every single heartbeat. Our physical heart works leisurely. And when we speak of the heart in a wider sense, the idea that life-giving leisure lies at the very center is implied. Never to lose sight of that central place of leisure in our life would keep us youthful.” ~ David Steindl-Rast (a Benedictine monk)


“Work ought to be done with leisure, if it is to be done well.” We rarely hear about it: the ideal of giving to everything the time it deserves to take.

M. Scott Peck, author of The Road Less Traveled, said something similar: people fail at various tasks mainly because they are not willing to take the time it takes to do them well. He spoke in terms of self-discipline rather than leisure. Note the power of words: the term “leisure” makes it sound a lot more pleasant.



~ “It was true that the Germans had more planes than anyone else. But, as the historian Victor Davis Hanson explains, in “The Second World Wars: How the First Global Conflict Was Fought and Won,” the Luftwaffe had a number of weaknesses, some very fundamental. A lack of four-engine bombers, for example, made it hard for Germany to conduct truly devastating long-range strategic-bombing campaigns against enemies overseas. (The Nazis never succeeded in mass-producing an equivalent to America’s B-17 Flying Fortress, which was in the air before the war.) The German Navy had no aircraft carriers, which made air supremacy during naval battles impossible. (In total, the Axis fielded only sixteen carriers; the Allies, a hundred and fifty-five.) Germany had limited access to oil, and thus to aviation fuel, and this constrained the number of missions the Luftwaffe could fly. Unlike the Allies, who excelled at building tidy, concrete runways from scratch as the front shifted, the Germans relied on whatever slapdash rural runways they could find, resulting in more wear and tear on their planes.

The Nazis were slower than the Allies to replace downed aircraft (they had less experience with high-volume manufacturing); they were also slower to replace fallen pilots (their aircraft were harder to operate). Over time, this lower replacement rate eroded, then reversed, their initial numbers advantage. They also lagged behind in various other areas of aviation technology: “navigation aids, drop tanks, self-sealing tanks, chaff, air-to-surface radar.” Some of these factors emerged only during the war. But others were clear beforehand, and analysts could have noticed them. In truth, Hanson writes, Lindbergh and many others were “hypnotized by Nazi braggadocio and pageantry.” The Nazis were apparently hypnotized, too. As a land-based power with a small navy, they needed the Luftwaffe to perform miracles (for instance, bombing Britain into submission). They did not see the Luftwaffe realistically; they deluded themselves into believing it could do the impossible.

“The Second World Wars” takes an unusual approach to its subject. The book is not a chronological retelling of the conflict but a high-altitude, statistics-saturated overview of the dynamics and constraints that shaped it. Hanson is unusual, too: he is a classicist and a specialist in military history at Stanford’s Hoover Institution, where he edits Strategika, “an online journal that analyzes ongoing issues of national security in light of conflicts of the past”; he’s also an almond farmer and a conservative polemicist whose articles on race, immigration, and the decline of agrarian values appear regularly on National Review’s Web site and other places. I’ve long found his political commentary tiresome—but his deeply researched and detailed military analyses are fascinating. “The Second World Wars” confines itself to the latter subject, with spectacular results. Hanson starts with the idea that the Axis powers were more or less destined to lose, then works backward to understand the reasons for their defeat. The book revolves around a question highly relevant to our own brewing confrontation with North Korea: Why, and how, do weaker nations convince themselves, against all evidence to the contrary, that they are capable of defeating stronger ones?

Hanson begins by putting the Second World War in a “classical context.” Although it was a high-tech conflict with newly lethal weapons, he writes, it still followed patterns established over millennia: “British, American, Italian, and German soldiers often found themselves fortifying or destroying the Mediterranean stonework of the Romans, Byzantines, Franks, Venetians, and Ottomans.” In many instances, military planners on both sides ignored the lessons of the past. Some lessons were local: it’s always been hard to “campaign northward up the narrow backbone of the Italian peninsula,” for example, which is exactly what the Allies struggled to do. Others were universal. Small countries have difficulty defeating big ones, because—obviously—bigger countries have more people and resources at their disposal; Germany, Italy, and Japan, therefore, should have been more concerned about their relatively small size compared to their foes. History shows that the only way to win a total war is to occupy your enemy’s capital with infantrymen, with whom you can force regime change. Hitler should have paused to ask how, with such a weak navy, he planned to cross the oceans and sack London and, later, Washington. At a fundamental level, it was a mistake for him to attack countries whose capitals he had no way to reach.

In terms of management and logistics, the Axis powers were similarly, and sometimes quite conspicuously, disadvantaged. Before the war, the United States produced a little more than half of the world’s oil; Axis leaders should have known this would be a decisive factor in a mechanized conflict involving tanks, planes, and other vehicles. (The Nazis may have underestimated the importance of fuel because—even though they planned to quickly conquer vast amounts of territory through blitzkrieg—many of their supply lines remained dependent upon horses for the duration of the war.) In general, Allied management was more flexible—British planners quickly figured out the best way to place radar installations, for example—while the Axis powers, with their more hierarchical cultures, tended toward rigidity. Axis leaders believed that Fascism could make up the difference by producing more fanatical soldiers with more “élan.” For a brief time at the beginning of the war, Allied countries believed this, too. (There was widespread fear, especially, of Japanese soldiers.) They soon realized that defending one’s homeland against invaders turns pretty much everyone into a fanatic.

In any event, Hanson shows that the Second World War hinged to an unprecedented extent upon artillery (“At least half of the combat dead of World War II probably fell to artillery or mortar fire”): the Allies had bigger, faster factories and could produce more guns and shells. “The most significant statistic of the war is the ten-to-one advantage in aggregate artillery production (in total over a million large guns) enjoyed by the British Empire, the Soviet Union, and the United States over the three Axis powers.” Russia, meanwhile, excelled at manufacturing cheap, easily serviceable, and quickly manufactured tanks, which, by the end of the war, were better than the tanks the Nazis fielded. Many Allied factories remained beyond the reach of Axis forces. There were a few possible turning points in the war: had Hitler chosen not to invade Russia, or not to declare war on the United States, he might have kept his Continental gains. Similarly, Japan might have contented itself with a few local conquests. But temperance and Fascism do not mix, and the outsized ambitions of the Axis powers put them on a collision course with the massive geographical, managerial, and logistical advantages possessed by the Allies, which, Hanson suggests, they should have known would be insurmountable.

The Axis powers fell prey to their own mythmaking: they were adept at creating narratives that made exceedingly unlikely victories seem not just plausible but inevitable. When the Allies perceived just how far Fascist fantasy diverged from reality, they concluded that Axis leaders had brainwashed their citizens and themselves. They began to realize that “the destruction of populist ideologies, especially those fueled by claims of racial superiority,” would prove “a task far more arduous than the defeat of a sovereign people’s military”:

Sober Germans, Italians, and Japanese, in the Allied way of thinking, had to be freed from their own hypnotic adherence to evil, even if by suffering along with their soldiers. . . . Death was commonplace in World War II because fascist zealotry and the overwhelming force required to extinguish it would logically lead to Allied self-justifications of violence and collective punishment of civilians unthinkable in World War I.

Hanson explores the specific decision-making processes behind the most merciless Allied decisions—“the firebombing of the major German and Japanese cities, the dropping of two atomic bombs, the Allied-sanctioned ethnic cleansing of millions of German-speaking civilians from Eastern Europe, the absolute end of the idea of Prussia”—while, from a higher altitude, pointing out that the delusional ideological fervor that shaped the beginning of the war shaped its end, too.

One of the tragic elements of war, in Hanson’s view, is that it often uncovers a reality that might have been comprehended in advance and by other means. Unfortunately, in the years before the Second World War, confusion reigned. The Axis countries lived in a fantasy world—they believed their own propaganda, which argued that, for reasons of race and ideology, they were unbeatable. The Allies, meanwhile, underestimated their own economic might in the wake of the Great Depression. They allowed themselves to be intimidated by Fascist rhetoric; justifiably horrified by the First World War, they wanted to give pacifism a chance, and so refrained from the flag-waving displays of aggression that might have revealed their true strength, while hoping, despite his proclamations to the contrary, that Hitler might be satisfied with smaller, regional conquests. 

“Most wars since antiquity can be defined as the result of such flawed prewar assessments of relative military and economic strength as well as strategic objectives,” Hanson writes. “Prewar Nazi Germany had no accurate idea of how powerful were Great Britain, the United States, and the Soviet Union; and the latter had no inkling of the full scope of Hitler’s military ambitions. It took a world war to educate them all.”

Sadly, a detailed examination of exactly when and how deterrence averts conflict is beyond the scope of “The Second World Wars.” Instead, with an extraordinary array of facts and statistics, the book offers an account of the fatalism of war. Until it begins, war is a matter of choice. After that, it’s shaped by forces and realities which dwarf the individuals who participate.” ~

Berlin near Hitler's bunker, May 1945


The assertion that neither Germany nor Japan could ever have won WWII was startling. However, the limitations of resources for both countries clearly support this argument, and we can only conclude that the long, arduous, bitter struggle was due to factors other than strength of arms. I think the essence of this “other” strength is fierce dedication to “the purity of their ideas, no matter how at odds with reality.” This is part of what allowed a prejudice to become an efficient engine of destruction, dedicated to the total obliteration of the scapegoated group.

The irrational, outrageous, unbalanced extreme, housed in the machinery created for it, became unquestionable, unstoppable, impervious to all but the most extreme acts against it. The same argument holds for the war against Japan: the fanaticism, the delusion, the refusal to countenance defeat led to the extremities of ending that war. In Europe, Dresden and saturation bombing; in the Pacific, Hiroshima and Nagasaki. As though defeat could only be achieved by horrific, massive destruction — a terrible, undeniable, inescapable reality check.

Taking these ideas into our current situation with, for instance, the alternative right or the evangelicals, suggests that argument, and certainly discussion, are useless. Some minds may only be changed by some kind of scorched-earth campaign. Perhaps the best we can do will be to resist, and be persistent in the pursuit of social justice and scientific fact.

Of course I enjoyed all of the blog topics, but this was the big one for me this time — it invited a different way of thinking about WWII.


In the next blog, I’ll have an article on the same phenomenon regarding the American Civil War: there was no way the South could have won. But crazy ideologies, fueled by selective Bible quotations on both sides, made it a Holy War — and those are the worst and longest wars, defeating logic — though one could argue that the defeat of logic and realistic perception was there at the very inception.

All of Hitler’s generals begged him not to invade Russia — but could not prevail against this one mediocre man’s delusions. Contrary to myth, Hitler was not an “evil genius” — the word “genius” doesn’t apply. He had an actor’s gift for giving dramatic speeches, and some talent in the visual arts (not great talent, but some). His intelligence, though, was mediocre at best, his education limited, his writing ability pathetic (try reading Mein Kampf) — and there is the distinct possibility that his brain was malfunctioning due to mustard-gas injury during WW1, and later, due to the drug cocktail his feel-good doctor was administering. So, on top of all the other disadvantages Germany was dealing with, Hitler was a drug junkie.

Also — and I'm surprised that the article doesn’t mention it — the extermination of the Jews was actually a costly project that diverted a lot of resources. It was sheer insanity. But that is just the point of the article: Germany could have gained a lot of territory in Europe had it gone about it in a rational way, but what we see here is insanity. Now, it may seem that insanity carries with it its own ultimate defeat — but only if nuclear weapons are not being used, which makes our era more terrifying.



~ “Christianity was in chaos in its early days, with some sects declaring the others heretics. And then, in the early 300s, Emperor Constantine of Rome declared he had become a follower of Jesus, ended his empire’s persecution of Christians and set out to reconcile the disputes among the sects. Constantine was a brutal sociopath who murdered his eldest son, decapitated his brother-in-law and killed his wife by boiling her alive, and that was AFTER he proclaimed that he had converted from worshipping the sun god to being a Christian. Yet he also changed the course of Christian history, ultimately influencing which books made it into the New Testament.”

“Things that are today accepted without much thought were adopted or reinforced at Nicaea. For example, the Old Testament was clear in declaring that God rested on the seventh day, making it the Sabbath. The seventh day of the week is Saturday, the day of Jewish worship and rest. (Jesus himself invoked the holiness of the Jewish Sabbath.) The word Sunday does not appear in the Bible, either as the Sabbath or anything else. But four years before Nicaea, Constantine declared Sunday as a day of rest in honor of the sun god.
At Nicaea, rules were adopted regarding the proper positions for prayer on Sundays—standing, not kneeling; nothing was said of the Jewish Sabbath or Saturday. Many theologians and Christian historians believe that it was at this moment, to satisfy Constantine and his commitment to his empire’s many sun worshippers, that the Holy Sabbath was moved by one day, contradicting the clear words of what ultimately became the Bible. And while the Bible mentioned nothing about the day of Jesus’s birth, the birth of the sun god was celebrated on December 25 in Rome; Christian historians of the 12th century wrote that it was the pagan holiday that led to the designation of that date for Christmas.”


Then there is what many fundamentalist Christians hold to be the most important of all elements of the Bible: the Second Coming of Christ and the end of the world. What modern evangelicals want to believe cannot be reconciled with the Bible. In the Gospel of Mark, Jesus says of the Apocalypse, “This generation shall not pass, till all these things be done”—in other words, the people alive in his time would see the end of the world. Paul in 1 Corinthians is even clearer; he states, “The time is short.” He then instructs other Christians, given that the end is coming, to live as if they had no wives, and, if they buy things, to treat them as if they were not their own. Some evangelicals counter these clear words by quoting 2 Peter as saying that, for God, one day is like 1,000 years.

Two problems: That does nothing to counter what either Jesus or Paul said. And even in ancient times, many Christian leaders proclaimed 2 Peter to be a forgery, an opinion almost universally shared by biblical scholars today.


“The Barna Group, a Christian polling firm, found in 2012 that evangelicals accepted the attitudes and beliefs of the Pharisees—religious leaders depicted throughout the New Testament as opposing Christ and his message—more than they accepted the teachings of Jesus.”

The Trinity—the belief that Jesus and God are the same and, with the Holy Spirit, are a single entity—is a fundamental, yet deeply confusing, tenet. So where does the clear declaration of God and Jesus as part of a triumvirate appear in the Greek manuscripts?

Nowhere. And in that deception lies a story of mass killings.

The Sociopath Emperor

Why would God, in conveying his message to the world, speak in whispers and riddles? It seems nonsensical, but the belief that he refused to convey a clear message has led to the slaughter of many thousands of Christians by Christians. In fact, Christians are believed to have massacred more followers of Jesus than any other group or nation.

Those who believed in the Trinity butchered Christians who didn’t. Groups who believed Jesus was two entities—God and man—killed those who thought Jesus was merely flesh and blood. Some felt certain God inspired Old Testament Scriptures, others were convinced they were the product of a different, evil God. Some believed the Crucifixion brought salvation to humankind, others insisted it didn’t, and still others believed Jesus wasn’t crucified.

Constantine convened a meeting in the lakeside town of Nicaea. Invitations were sent around the world to bishops and leaders of various sects, although not all of them. The group included the educated and the illiterate, zealots and hermits. Constantine arrived wearing jewels and gold on his scarlet robe and pearls on his crown, eager to discuss the true essence of a poor carpenter who had died 300 years before.


Constantine sided with those who believed Jesus was both God and man, so a statement of belief, called the Nicene Creed, was composed to proclaim that. Those who refused to sign the statement were banished. Others were slaughtered. After they had returned home and were far from Rome, some who signed the document later sent letters to Constantine saying they had only done so out of fear for their lives.

About 50 years later, in A.D. 381, the Romans held another meeting, this time in Constantinople. There, a new agreement was reached—Jesus wasn’t two, he was now three—Father, Son and Holy Ghost. The Nicene Creed was rewritten, those who refused to sign it were banished, and another wholesale slaughter began, this time of those who rejected the Trinity, a concept that is nowhere in the original Greek manuscripts and is often contradicted by them.

To this day, congregants in Christian churches at Sunday services worldwide recite the Nicene Creed, which serves as affirmation of their belief in the Trinity. It is doubtful many of them know the words they utter are not from the Bible, and were the cause of so much bloodshed. (Some modern Christians attempt to use the Gospel of John to justify the Trinity—even though it doesn’t explicitly mention it—but they are relying on bad translations of the Greek and sentences inserted by scribes.)

To understand how what we call the Bible was made, you must see how the beliefs that became part of Christian orthodoxy were pushed into it by the Holy Roman Empire. By the fifth century, the political and theological councils voted on which of the many Gospels in circulation were to make up the New Testament. With the power of Rome behind them, the practitioners of this proclaimed orthodoxy wiped out other sects and tried to destroy every copy of their Gospels and other writings.

And recall that they were already working from a fundamentally flawed document. Errors and revisions by copyists had been written in by the fifth century, and several books of the New Testament, including some attributed to Paul, are now considered forgeries perpetrated by famous figures in Christianity to bolster their theological arguments. It is small wonder, then, that there are so many contradictions in the New Testament. Some of those contradictions are trivial, but some create huge problems for evangelicals insisting they are living by the word of God.” ~


I’ve been familiar with much of this stuff for some years now — forgeries, additions to the text by medieval scribes, problems with translation, contradictions, etc. What I find most interesting of all is that there is so much discussion of religion, a veritable explosion of it. I think it started not long after 9/11, which was a traumatic awakening on all kinds of levels. Now politics has moved ahead, but there is no going back to the old-time polite lies in the discussion of religion.

One important function of religion has been to support those in power, but I'm grateful to Newsweek for having provided the scriptural reference: Romans 13:1-2, which in the International Standard Version says, “The existing authorities have been established by God, so that whoever resists the authorities opposes what God has established, and those who resist will bring judgment on themselves.”


“There are no last words. Any sacred text lasting millennia will have long since drifted far from its original meaning.” ~ Jeremy Sherman


Biologists have puzzled over the resilience of the germline for 130 years, but the phenomenon is still deeply mysterious.

Over time, a cell’s proteins become deformed and clump together. When cells divide, they pass that damage to their descendants. Over millions of years, the germline ought to become too devastated to produce healthy new life.

“You take humans — they age two, three or four decades, and then they have a baby that’s brand new,” said K. Adam Bohnert, a postdoctoral researcher at Calico Life Sciences in South San Francisco, Calif. “There’s some interesting biology there we just don’t understand.”

On Thursday in the journal Nature, Dr. Bohnert and Cynthia Kenyon, vice president for aging research at Calico, reported the discovery of one way in which the germline stays young.

Right before an egg is fertilized, it is swept clean of deformed proteins in a dramatic burst of housecleaning.

Clumping proteins are involved in many diseases of old age, such as Alzheimer’s. Dr. Kenyon and Dr. Bohnert set up an experiment using a special strain of C. elegans worms in which clumping proteins glowed.

It begins with a chemical signal released by the sperm, which triggers drastic changes in the egg. The protein clumps within the egg “start to dance around,” said Dr. Bohnert.

The clumps come into contact with little bubbles called lysosomes, which extend fingerlike projections that pull the clumps inside. The sperm signal causes the lysosomes to become acidic. That change switches on the enzymes inside the lysosomes, allowing them to swiftly shred the clumps.

“It’s a huge, coordinated shift,” said Dr. Bohnert.

The germline may not be the only place where cells restore themselves in this way.

Throughout our lives, we maintain a supply of stem cells that can rejuvenate our skin, guts and brains. It may be that stem cells also use lysosomes to eradicate damaged proteins.

“That would have huge implications,” Dr. Conboy said. It might be possible, for example, to treat diseases by giving aging tissues a signal to clean house.

Calico, founded by Google in 2013, is searching for drugs to counter aging. But Dr. Kenyon doesn’t see new medicine emerging from this research anytime soon.


~ "What happens when you first start drinking," Tabakoff explains, "is that a hormone that controls your water balance, an anti-diuretic hormone, is suppressed." And this leaves us heading for the ladies' or men's room — which can precipitate a pounding headache in the morning.

But Tabakoff says dehydration is not the only reason we get a headache.

"High levels of alcohol in the brain have fairly recently been shown to cause neuro-inflammation, basically, inflammation in the brain," he says.

This is why taking aspirin or other anti-inflammatory medicines, such as ibuprofen, can help us feel better.

Now, alcohol isn't the only headache-producing culprit in our drink glasses. Many alcoholic beverages, such as wines and beers, contain toxic byproducts of fermentation, such as aldehydes. And Tabakoff says if you drink too much, you can feel the effects.

"If these compounds accumulate in the body, " explains Tabakoff, "they can release your stress hormones, like epinephrine and norepinephrine, and as such can alter function in a stresslike way" — paving the way for a hangover.

Tabakoff says distilled spirits contain fewer of these toxic compounds than other types of booze, which explains why some people report feeling fewer hangover effects if they stick with vodka or gin.

Obviously, the only sure way to avoid a hangover is to not drink alcohol. But if you are going to indulge, Tabakoff says the tried-and-true advice — eat something before you drink, and while you drink — makes good sense.

"Food is very good for the purpose of slowing the absorption of alcohol," he says.

Adding liquid calories to your cocktails — say, Coke, ginger ale or sugary punch as a mixer — is a good way to slow absorption, too. In fact, a study we reported on back in 2013 determined that a diet soda and rum will make you drunker than rum mixed with sugary Coke.

Cecile Marczinski, a cognitive psychologist who authored that study, found that the average breath alcohol concentration was .091 (at its peak) when subjects drank alcohol mixed with a diet drink. By comparison, BrAC was .077 when the same subjects consumed the same amount of alcohol but with a sugary soda.

And here's another self-evident tip when it comes to drinking: Pace yourself.

"We can get rid of most of the alcohol we drink if we [limit] drinking to one drink per hour," Tabakofff says. This way, "our blood alcohol levels don't start accumulating."

One drink per hour is a rule of thumb, but that can vary depending on height or body size. Bigger people tend to be able to handle a little more alcohol, and smaller people a little less.

A single drink is less than you might think. It's 5 ounces of wine, 12 ounces of beer, or a shot of liquor.
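Those three pours count as "one drink" each because they all work out to roughly the same amount of pure ethanol — about 14 grams, the usual US standard-drink definition. A quick back-of-the-envelope check (my own illustration, not from the article; the 12% wine, 5% beer, and 40% spirits ABV figures are typical assumed values, and a shot is taken as 1.5 ounces):

```python
# Back-of-the-envelope check that 5 oz of wine, 12 oz of beer, and a
# 1.5 oz shot of spirits each contain about one US standard drink
# (~14 g of pure ethanol). Typical ABV values are assumed.

OZ_TO_ML = 29.5735        # fluid ounce -> milliliters
ETHANOL_DENSITY = 0.789   # grams per milliliter of ethanol
STANDARD_DRINK_G = 14.0   # grams of ethanol in one US standard drink

def ethanol_grams(volume_oz, abv):
    """Grams of pure ethanol in a pour of the given size and strength."""
    return volume_oz * OZ_TO_ML * abv * ETHANOL_DENSITY

def standard_drinks(volume_oz, abv):
    return ethanol_grams(volume_oz, abv) / STANDARD_DRINK_G

for name, oz, abv in [("wine", 5, 0.12), ("beer", 12, 0.05), ("spirits", 1.5, 0.40)]:
    print(f"{name}: {ethanol_grams(oz, abv):.1f} g ethanol = "
          f"{standard_drinks(oz, abv):.2f} standard drinks")
```

All three pours land at about 1.00 standard drink, which is why the one-drink-per-hour rule of thumb applies regardless of which beverage you choose.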

ending on beauty:

Ring out the old, ring in the new,
Ring, happy bells, across the snow:
The year is going, let him go;
Ring out the false, ring in the true.

~ Tennyson 

Sunday, December 24, 2017


 Boatmen dressed as Santa on Venice's Grand Canal


I did not choose California. It was given to me.
What would a man of the north
have to do with parched emptiness?
Grayish clay, dry streambeds,
hills the color of straw and clumps of boulders
like prehistoric reptiles — that’s for me
the soul of these regions.
And fog creeping out of the ocean,
begetting greenness in the ravines.
And spiny oak and thistles.

Where was it said that we would possess
the earth like a beloved,
and plunge into her deep, clear rivers,
and flow on fertile currents?

~ Czeslaw Milosz


WHAT! Milosz didn’t feel grateful for the privilege of living in California? Sure, sure, he suffered because of losing his homeland, most people I know would be willing to admit (having no idea just how intense this suffering could be — the greatest loss a human being can experience, Milosz claimed), but — here a non-immigrant might make a wide gesture with his arm — “he gained all this!”

And Milosz would know better than to say anything. He’d just smile that smile reserved for those who can never understand because they haven’t experienced things that for someone else have been the core experiences of life. For Milosz, that was not so much the loss of Poland, as the loss of Lithuania; after that, the loss of Poland, since that meant (to a great extent) the loss of the language and the culture; and still after that, the loss of Europe, that larger homeland.

Most of the poem, however, is devoted to lamenting the dryness of California, its “parched emptiness.” Milosz felt close to nature, but it was the meadows and lakes and forests of Lithuania, and later the northern French countryside, which reminded him of Lithuania; he had trouble feeling at home in California’s landscape — which he experienced as the landscape of scarcity and struggle for each drop of water — “hills the color of straw” except for the two or fewer months of the wet season (if we are lucky to have it) — so dramatically different from the lush green of the north. Add to this “spiny oaks and thistles,” and you have the cursed earth after the loss of Eden.

But then there was of course irrigation (“desert landscaping” wasn’t yet part of local consciousness) — and the deer that came to his yard, treating it as their salad bar, as Milosz’s friends noted. One time a doe gave birth to twin fawns on his lawn, and decided to stay there for a while. So as for “parched,” he didn’t have it all that bad there on Grizzly Peak in Berkeley.

Worse by far was the near-complete lack of recognition (until 1980 and the Nobel Prize) and the loneliness he felt among the leftist academics (though he knew from experience that France was even more hostile toward those who left Eastern Europe for political reasons; as one [American] professor explained some students’ hostility toward me, “They feel you betrayed the revolution.” “This is the real revolutionary country,” I replied — of course that was long before the current political climate).

And there were Milosz’s metaphysical wrestlings, now reading Swedenborg, now Simone Weil, now trying to feel at home again in the Catholic church (if my own experience is any guide, once you leave, you can’t quite return; once you have had a certain perception with the force of insight, there is no going back. On the other hand, as priests love to claim, you can never fully leave: the emotional imprint of a Catholic childhood cannot be deleted — whether the affection for Mary and/or a favorite saint, or the chronic sense of being a bad boy/bad girl — god is disappointed with you).

The result is that homeless feeling. You don’t belong here, but then you don’t belong anywhere. As an immigrant you are called an “alien”; this seems ironic, because it’s everything around you that feels alien, wrong, unreal.

About the feeling of unreality: this is part of what goes with exile, with living in a place that’s very unlike your country of origin. The first landscape also establishes neural circuits that dictate what reality is supposed to be like. The differentness of another place creates a pleasant feeling of novelty as long as it’s a vacation and you have a home to go back to. Once that home is lost, the loss of familiarity is traumatic: the limbic system, wired for early attachment, goes into a shock of grief.

And Milosz’s reconciliation? In one of his poems, he tries to resolve the problem by bravely asserting: “Here and everywhere is my homeland.” But in this poem, he resigns himself to homelessness:

Where was it said that we would possess
the earth like a beloved,
and plunge into her deep, clear rivers,
and flow on fertile currents?

Indeed, once we were exiled from Eden, we were doomed to wander among thorns and thistles — or the armored chaparral plants of Southern California, or the spiny-leaved live oaks that dot those straw-colored hills in Northern California. (I can’t forget how ugly I found chaparral when I first went hiking in the local hills.) In the biblical myths (which don’t transplant well to forest cultures) the “promised land” sounded more lush, but only by contrast with the deadly desert around the cultivated areas.

Think of Hagar banished just outside the settlement, beginning to run out of water. Indeed, there is nothing in the Judeo-Christian scripture that might give us the idea we would “possess the earth like a beloved.” The eroticism of that image is reinforced by the image of plunging in her rivers and flowing on fertile currents. Instead: dry streambeds, reptilian boulders — not covered in moss, no. “Parched emptiness.”

And we here near the Mexican border want to scream that he didn’t know how good he had it in the greener north, not in the actual semi-desert. Oh, the ingratitude of a man who hadn’t experienced real drought.

“I did not choose California.” Milosz seems to have had no idea how lucky he was — how nobody was going to pity him for living in California.

And he knew better than to say he feels like a dispossessed aristocrat, sentenced to dwelling in a pioneer shack after living in a palazzo.

“I did not choose California.” But that’s the immigrant trauma speaking. It takes a long time to pass. But pass it does. In his old age, Milosz did note that emigré poets run out of nostalgia. Some have nothing else to write about. Again, Milosz was lucky.

Bougainvillea near Monterey, California; Andrena Zawinski



Speak softly, God! It could mean to someone
that the trumpets of your kingdom called;
for their sound no depth is deep enough:
then all times rise out of the stones,
and all the long-lost appear
in faded linen, brittle skeletons,
crooked from the weight of their soil.
That will be a miraculous return
into a wondrous homeland.

~ Rilke, from “The Last Judgment”

Here, that homeland beyond all price is simply the earth — the whole earth. But we should also remember that most people used to get buried in the towns and villages where they lived; the “wondrous homeland” was the familiar trees and grasses, the same river, the same meadows of clouds in the sky. In Wuthering Heights, Catherine didn't want to stay in heaven; she wanted to return to the moors. All readers understand this at the deepest level; the real heaven we want is the place we already love, or used to love in childhood and youth — our first great love.

As Jack Gilbert put it, “We have already lived in the real paradise.”

What? I can already hear the outraged chorus of those who can recite a million things wrong with just about any place on earth: the climate, the bugs, too much rain or not enough — and that’s just the nature part of it, before we get to human-caused problems. But none of it is especially relevant when measured against what really counts: FAMILIARITY.

Low Tide; Michele Arnesen


Homefulness (a coinage meant to encompass the opposite of homesickness) may take many years to develop. The first two years are the hardest. Even minor things come back as bits of grief, such as my literally gut-level thinking, during the first weeks, “This is not real milk”; “This is not real food”; “Why does the meat have no taste?” etc.; and later, in California, much as I appreciate both palm trees and eucalypts, “These are not real trees.” Even Catholic churches were not real churches but shabby substitutes. I did fall in love with Los Angeles, but fully knowing that this was not a real city. Let me share a short poem from long ago:


I miss real trees. The eucalypts
are not enough, not even
their incense after rain. 
I’m outraged that their bark peels off
in untidy tubular patches.
There’s a right-wing homunculus

in the middle of my brain, screaming
that everything looks wrong.
The houses look wrong,
the schools, the stores.
The streets look wrong,
their lunar emptiness.

I try to appease my right-wing
homunculus with December roses;
yawning with contempt, he’ll say,
“What happened to the scent?
These are not the real roses.
And if your heart flips over that

scentless fabrication of false petals,
that’s not your real heart.”

~ Oriana


I hasten to add that the local pine trees, with their magnificent long needles, did win my false heart, as did bougainvilleas (in spite of not being real flowers). The sense of unreality would come and go, and finally came over me less and less often. I’ve managed to get attached even to Chula Vista. But I perfectly understand why Milosz settled in Krakow for the last years of his life. He went to Krakow to die because that city most reminded him of Vilnius (always Wilno to him).

We love the familiar: a little trickle of a waterfall in the local mountains, not Niagara Falls. Wait — did I just say “a little trickle of a waterfall”? Like those in the Cuyamaca Mountains, when they trickle at all, in a relatively wet year? Amazingly enough, I have made the journey from homelessness to homefulness.

My bird of paradise doesn’t get enough sunlight to develop to full glory, but it does contribute its piece to paradise.


Thinking about the homesickness that occurs when the well-loved, familiar landscape of one's homeland is lost, along with all that entails, the language, the customs, the culture, I realize I have not suffered this kind of loss. I have lived all my life until the last year in the same geographical area, where even the changes, in seasons, in the city, in the neighborhoods, were still modulations of the familiar, not erasures, not changes into something too foreign to recognize.

Truly we do all long for that landscape we first knew, the landscape of our childhood, and it is in that sense that we are all exiles, separated forever from that first loved world, because, even if we could go back, it wouldn't be there. It is not simply space but time that separates us forever from that first home; we can never recapture completely what it was, in its fullness, not only in its physicality, but in the place we occupied there, the way it fit us, the way it looked, smelled, tasted, sounded, and what it meant to our younger selves. That is why nostalgia is a form of grief.

And while the relief and sense of belonging of "homefulness" are very real, home is at once more and less than a particular place. It persists in the most essential things: food, language, family. People will go to great lengths to obtain, reproduce and maintain a beloved and familiar food culture, no matter how far fortune has delivered them from that first home. And one's own first language is itself a 'home': sometimes deliberately replaced so as to 'fit in', sometimes deliberately punished and suppressed by the authorities of the 'new' home, but its loss is always experienced as a grievous separation, a terrible loneliness. Family, of course, is obvious. I think of all those Holocaust survivors whose families were completely, or almost completely, obliterated. Without family, you are lost in the world, no matter where you go.


You are right about time also separating us from what we got used to, that first (or even second or third) emotional home. I even wonder if that’s perhaps one of the reasons we get sick and die — at some point the world becomes just too alien a place. As I write in “From the New World,”

Only we stand still,
immigrants approaching port,
our precious, useless past

in our arms. Ludicrous,
the luggage we take,
the old photographs. The future

will be exile, a new world.


Re: family. For me, just one special person is enough. I’ve learned to do without family, strange as that felt during the first years in the US. But then it’s different being without family at a later age, when you have your own home and other resources, than when you are still quite young, poor, and prone to feeling helpless.

Also, being a writer changes things — your work substitutes for all kinds of other connections. And having grown up as an only child helps too. That said, I do wish there were a group where I could get a bit of the sense of having a family — of being unconditionally accepted, even my foibles welcomed as yet another part of my unique being.

more on homefulness:


~ “Tiffany Watt Smith: One of the emotions I became really interested in when researching the book was homesickness. In the mid to late 18th century, it was diagnosed as a fatal condition called nostalgia—from nostos, “homecoming,” and algia, “pain.” There was an outbreak of people who were experiencing a longing for home that was so intense it produced a melancholy and an exhaustion, but also sores, pustules, and fevers. People who suffered from it couldn’t eat. They’d end up fading away and dying.

Nowadays we think of homesickness as something kids have on sleepovers. It certainly hasn’t appeared on a death certificate for 100 years. The last person who was diagnosed with nostalgia as a cause of death died in 1918.

Lofthouse: How did it become so much less serious, then? Why has the idea of homesickness changed so much?

Smith: With modernity came a different set of values. It’s not just that it got easier to travel and go back home and communicate through telephones and the Internet and so on. It’s about frontier spirit—in [the current] cultural atmosphere, longing for something that’s comforting and reassuring might seem unambitious. If I feel homesick today, I might think I should grow out of it and enjoy the adventure.

We used to have these words for the feeling of wanting to be home, the feeling of wanting to be in one place for a very long time, which have now disappeared. There’s a wonderful word: “homefulness,” which is the feeling you get when you turn the corner of your road or your airplane lands and you know you're near home. It’s a lovely combination of relief and belonging.

Lofthouse: Are there any emotions our culture takes more seriously than it used to?

Smith: We give happiness a lot of space in our discussions. But it’s a relatively recent phenomenon that happiness is something you’d want to aim for. If you look back to 16th-century Renaissance Europe, there's a fascination with sadness that’s almost the equivalent of today’s fascination with happiness.
You start seeing a lot of authors writing about how to be sad better, and what the appropriate sort of sadness is. It’s seen as valuable because it brings you closer to God. It makes you more humble and more serious. In some cases, a more severe form of sadness, melancholia, was aligned with genius. I think the way we valorize happiness today is problematic. It creates pressure to feel upbeat and cheerful all the time.

Lofthouse: Have you found yourself experiencing emotions differently since doing the research for your book?

Smith: Definitely. There’s some interesting research being done at the moment about the relationship between words and emotions. Learning new words for emotions means you might be able to identify those emotions as they come up in your own experience. And the more emotions you can identify and translate from vague, amorphous things into concrete terms, the easier time you have of it. I now enjoy feeling homefulness. I might have had glimmers of it before, but I don’t think I felt it in the same way.


So true about the fascination with melancholy as the opposite of our self-help craze about happiness. In most languages, the word for happiness is the same as the word for “luck.” It definitely wasn’t something you could “choose” or cultivate. I have only too much personal experience with how easy it is to cultivate sadness.

I’ve always felt wild joy on coming home from a trip. It was the best part of any trip — with one exception: coming back to Torrance (part of Los Angeles) from Europe. Torrance happens to be exceptionally ugly (the Mobil Oil refinery is there), and I instantly detested that ugliness while missing the beauty of Europe. I was actually seized with grief looking at the streets in my neighborhood.

San Diego is beautiful, and while I missed the greenness of Poland, the great parks, the rivers, the woods — and the warmth of the culture etc — after coming back from my two trips to Poland, I wasn’t heart-broken. At that point I already had more of that “homeful” feeling. I was returning to a beautiful city, and beauty makes me happy.


I think we do experience emotions we don't have words for — and it is frustrating if we try to describe them. Sometimes it can be done with poetry, or dance or painting or music.


Nor can a word ever adequately describe an emotion, which has a lot to do with the body. I think William James was right: emotions are of the body first and foremost, with the mind bravely trying to interpret and often to control the emotions. Think of a preverbal child: the emotions tend to be very intense. Labeling them is a way to “chill.”


Just a thought on our cultural preoccupation with “happiness” — I find it as irritating as the push to always “be positive,” to have what is annoyingly dubbed “positivity.” That, and the nonsense that we all must work toward having and fostering high “self-esteem.” These qualities are seen as absolute goods, but are more what I would call narcissistic delusions, usually unfounded in reality and actually in opposition to fact. They seem to me likely to produce nothing more than a kind of blissful idiocy.

Hope that doesn't sound too harsh, but now more than ever I think we need to see clearly and act wisely.

Asian Paradise Flycatcher: What paradise means to me: lots of flies

~ “Growing up, my favorite superhero was definitely Superman. Hands down, no one else captured my childhood imagination more perfectly than he did. I was also taught to idolize Jesus growing up, and it always delighted me that my boyhood idol so closely mirrored the object of my religious devotion as well.

Both of them were, in a sense, born “from above” but came to earth to be our savior. Both somehow had ongoing relationships with their real fathers through a kind of communication that was indirect and atypical, and both struggled with their identity to some degree as hybrids living in a world that didn’t fully understand what they were about.

Curiously, they also disappear after their earliest years only to reappear again as fully grown adults, ready to dive into their life’s calling as saviors of the world, leaving the rest of us wondering what happened to them during all those lost years. Stories have been written (and shows have been produced) exploring the adventures of young Clark Kent, but none of those are, strictly speaking, canonical; or at least they weren’t to me. The Superman I grew up with was played by Christopher Reeve, and his boyhood history remained a mystery to all of us.

Same thing is true for Jesus. He shows up as a baby in a story we celebrate year after year with pageants and sales, then he disappears for nearly thirty years, save for a single story of a trip to Jerusalem when he was twelve years old. It always bugged me that we didn’t know more about his childhood, especially since I was taught that children are to emulate Jesus as much as adults are supposed to, yet we are never told what Jesus was like as a child.

How are we supposed to know how we are to act as children? Doesn’t that strike you as a significant oversight on the part of the Holy Spirit, the Author of the Bible? If the kingdom of heaven belongs to little ones such as these, why does the New Testament do so little to address the spiritual lives of children?

Incidentally, there were stories written about the younger Jesus, only they didn’t make it into the Bible because they portrayed him as kind of a mischievous imp. One story tells of him breaking the Sabbath by fashioning pigeons out of clay (you weren’t supposed to make clay pigeons on the Sabbath because that’s too much like “work”), but then he claps his hands and makes them real and they fly away so that he doesn’t get in trouble. Other stories tell of people teasing him or correcting him only to have him slay them with a single word. Some of the stories have him bringing them back to life, but at that point I figure the damage is done, so the keepers of the canon decided those really oughta stay out of the official collection.

I should probably add here, as I always feel I must, that I don’t subscribe to the mythicist camp which so many of my fellow agnostic/atheist friends seem to have joined. I have to say that because a number of my readers will have already pushed back from what I’ve said thus far, protesting that any treatment of the life of Jesus which doesn’t discount his entire existence is of no use to them. I don’t have the time or energy to fight them on this. But I will say that where the birth narratives (aka “The Nativity”) are concerned, I’m as thorough a mythicist as they come.

Even the former Archbishop of Canterbury, Rowan Williams, once admitted he was persuaded against believing that the nativity stories were true the way they were being told. The details of wise men coming “from the east” following a star that moved across the sky finally to hover above the place where Jesus was born just didn’t add up, even to this learned man of faith. He admitted it was most likely “a legend,” and insisted instead that people of faith can still find deeper meaning in the stories without having to believe that they actually happened the way they were recorded.

I would go several steps further and note that the earliest gospel we have on hand, the gospel of Mark, says nothing at all about Jesus being born of a virgin, nor anything about his birth being special. It would seem that whichever community produced the collection of stories we now call Mark had no awareness of a birth narrative attached to the object of their affections. Growing up, I always assumed it wasn’t included simply because that particular author didn’t see it as his job to include that part. Now that I’m older, however, I’m noticing a few more things that earlier escaped my attention.

It’s odd enough that Matthew’s gospel tells one story while Luke’s tells a completely different one. Matthew’s gospel makes it sound like Joseph and Mary were from Bethlehem (it records no census trip, and two years later, when the wise men show up, Bethlehem seems to be their permanent residence), while Luke’s insists that they were really from Nazareth and were only in Bethlehem because of a “worldwide” census. Never mind that we have no extant record of such a census ever being taken, nor would it even make practical sense to demand that everyone return to the original place of their ancestors’ births.

Luke doesn’t even name the right governor for the region, and that despite presenting his gospel as the most thoroughly detailed and accurate account of the events depicted therein. It should have caught my attention as a young Bible student that Luke even felt that need, which would indicate he believed there already were untrustworthy gospels floating around out there (could he have meant Matthew’s, for example? We have good reasons to believe it predates Luke’s writing, yet Luke includes nothing of Matthew’s nativity story in his own). But that aside, all historical records invalidate Luke’s naming of Quirinius as the governor of Syria during this time period, placing him at least a full decade later than the time of Herod’s death (the same Herod who appears, still very much alive, at the time of Jesus’s birth).

The Miraculous Christ Child (Christ Kindl), Steyr, Austria. In some European countries, it's the Christ Child who brings gifts to children; in the US, Christ Kindl became "Kris Kringle" and got fused with Santa Claus.

One more thing struck me as a grown-up that I never noticed as a child: It says in Luke 2:19 that after a large choir of angels appeared in the sky, proclaiming the birth of the savior of the world, telling a group of shepherds exactly where to find the baby, “Mary treasured these things, pondering them in her heart.” A few days later when she and Joseph made their way into Jerusalem for his circumcision, not one but two old prophets independently approached them and declared that their child was a promised savior of mankind. Surely this all would have made a deep impression on the parents of Jesus, as would the later story of his debating the teachers in the Temple at age twelve.

But in Mark 3 we learn that as soon as Jesus had begun preaching publicly, his family—including his mother Mary—sought to take him away because, and I quote, “He is out of his mind” (see Mark 3:20-35, and note the inclusion of his mother at the end of the story). Does that even make sense? Would this woman, who was chosen precisely because of her receptivity to the leadership of the Spirit, and who was witness to all these awe-inspiring divinely inspired proclamations, decide when his chosen time had finally arrived that he had lost his mind and needed to be taken away?

I honestly don’t know what to think about that. I only know that this little detail tucked away in the gospel of Mark indicates to us today that Jesus’s family doesn’t seem to have believed he was something special sent from God, and that’s remarkable considering the stories contained in the first couple of chapters of Matthew and Luke. One is tempted to believe that these stories only appeared many years after the death of Jesus.

Like the stories people tell about catching that One Big Fish, this tale just kept growing every time it got told until it became what it is today.

Dead Sea Salt Formation

The Earliest Christians Didn’t Have Christmas
Students of the Bible know that the gospels weren’t the earliest Christian writings. That honor belongs to the Pauline epistles, the letters Paul wrote to the surviving churches in his care before he died in the mid-60s C.E. Turning to check his letters to see what he says about the first Christmas, we find an awkward silence on the matter.

Paul never says a word about a virgin birth, nor does he say anything about the events surrounding [Jesus’] miraculous entrance into the world, hailed by kings and angels alike. This is a pretty big deal, frankly, and it should have bothered me more than it did when I was younger.

A virgin birth is kind of a big deal. And if this was the fulfillment of a prophecy from hundreds of years before, someone else should have said at least something about it at some point. Either Paul, or Peter, or James, or John — somebody should have brought it up again at some point, but they don’t. They don’t repeat any of the stories that we later find in the gospels of Matthew and Luke, and the most likely reason for this is that those stories hadn’t yet been concocted by the pious imaginations of the communities which later created these stories.

In short, the earliest Christians just didn’t have Christmas. That tradition appears to be one of the last additions to make it into the canon, and it would appear that most of the earliest apostles (and their imitators) knew nothing of the stories that believers today accept with little question. Believers today wouldn’t want to dispense with those stories; they’ve become too precious, too meaningful and too inspiring to the church after all these years.

Christmas is still everyone’s favorite, because who wouldn’t love the swaddling baby cooing in the manger, surrounded by cute animals and adoring angels? Who doesn’t love the idea of rich men coming from out of nowhere to give expensive gifts to this poor child whom they’ve never even met, following a star that moves across the sky, guiding them across a desert to find this family tucked away in a cave? It’s a beautiful story, and the modern celebration of it has honored it well.

I’m just pretty convinced it never actually happened. And for people like me, that overrides our ability to celebrate its significance, whether we’re theists or not.” ~


Neil Carter’s insights can be delightful: Jesus and Superman, so similar! Just these two paragraphs say so much:

~ Both of them were, in a sense, born “from above” but came to earth to be our savior. Both somehow had ongoing relationships with their real fathers through a kind of communication that was indirect and atypical, and both struggled with their identity to some degree as hybrids living in a world that didn’t fully understand what they were about.

Curiously, they also disappear after their earliest years only to reappear again as fully grown adults, ready to dive into their life’s calling as saviors of the world, leaving the rest of us wondering what happened to them during all those lost years? Stories have been written (and shows have been produced) exploring the adventures of young Clark Kent, but none of those are, strictly speaking, canonical. ~

And Carter seems right on about the notorious “silence of Paul” — the man regarded by scholars as the real founder of Christianity seemed to have no idea about the Virgin Birth and the rest of the nativity narrative (nor about the miracles, teachings, or most other events in the Gospels). The only event of true interest to Paul was the alleged resurrection, with its promise of immortality for the true believers. The manger, the three kings? That couldn’t be less relevant.

But the story has its charm; you don’t have to be a child to love the idea that the divine child would be born with animals looking on. Pope Benedict’s injunction (fortunately ignored) that the animals be removed from portrayals of the Nativity for lack of “scriptural evidence” shows a complete lack of understanding of the emotional power of tradition. Besides, ultimately, only the animals are real.

Botticelli’s Nativity. Note that Botticelli gives little puffs of clouds as footholds for the angels. 


Love Neil Carter's article. Superman was also my favorite TV show. I wonder if the creators of Superman saw the similarities with Jesus. Probably not, since they were Jewish: writer Jerry Siegel and artist Joe Shuster, high school students living in Cleveland, Ohio, created him in 1933 and sold him to DC Comics in 1938.

The stories about the Virgin Birth and angels in the sky seem to get bigger and bigger as time goes on.


Ah, Superman and Jesus: so both are Jewish conspiracies :)

Both are savior archetypes, originating during a time of relative helplessness (what high school boy doesn’t dream of being Superman?)

We have mainly the painters to thank for the extreme popularity of the Nativity narrative, starting with the Annunciation. Thousands of paintings! But once mainstream painting became secular, we’ve been witnessing the secularization of Christmas — and Santa Claus and the reindeer taking the place of religious figures.

WHERE DID THE WISE MEN COME FROM? (for now, never mind it’s a myth)

Where did the Magi come from? The usual answer is from Persia. They are identified with a caste of Zoroastrian astrologers and philosophers known to be active in Persia from the sixth century BC. The term “magi” is derived from the Greek magos, which in turn was derived from the Persian term for these philosopher-astrologer-priests, who were active during the empire of the Medes. But did the wise men really come from Persia? By the time of the Roman Empire the Medes were long gone.

Whether there was an active Zoroastrian caste of astrologer-priests at the time of Christ’s birth is debatable. That they had an interest in whether a newborn king of the Jews would appear is also debatable.

I’m increasingly interested in the idea that they came from the Arabian peninsula, from the Kingdom of Sheba, in today’s Yemen. Why Yemen? Archeologists increasingly agree that the ancient and powerful kingdom of Sheba was located at the southern tip of the Arabian peninsula, and that its reach extended deep into Eastern Africa. The three gifts of the magi point to an origin in Sheba: first, the kingdom was known for its vast wealth from the gold mines of Africa; second, the Boswellia tree, from which the gum used to make frankincense is tapped, is native only to the Arabian peninsula and Somalia; and third, the commiphora tree, from which the resin used to make myrrh is derived, also grows only in the Arabian peninsula.

The Kingdom of Sheba therefore grew rich on these three unusual, rare and precious commodities: gold, frankincense and myrrh. Of course in the ancient Middle East wise men from Persia could have brought these gifts too, but if they were seen not only as rich gifts but also as a sort of diplomatic offering, kings bringing the best produce and commodities of their own country in homage to a neighboring king, then identifying the country of origin with the gifts makes sense.

Apparently you no longer HAVE TO believe in it. You can be admitted to heaven even though you don’t believe in the Virgin Birth. I certainly applaud not being told what to believe, no questions asked, the way I was told — it was compulsory, like attending mass on Sunday — but doesn’t the church realize that this is a slippery slope? Is belief in the changing of water into wine also optional now, or will it be soon? What about walking on water? And the rest of the supernaturalism?

The Resurrection will probably be the last to go. All this is fine with me, even when I remember how I agonized trying to force myself to believe that all that mythological stuff was literally, historically true, and now they tell me . . .

In any case, it’s fascinating to watch the religion in the process of secularizing itself.

Note that all the angels are female. Parthenogenesis (“virgin birth” with no contribution from the male) does happen in nature, but the offspring is exclusively female. Thus, it would make sense if Baby J were a girl. 

Nativity, Caravaggio, 1609. Note the stark realism.  


I love the Caravaggio.


It certainly dares to be different: instead of beauty and elegance and the angels, we get a realistic portrayal of poverty.

speaking of animals . . .  if the gentleness of Jesus is so difficult to accept . . .


~ “Children’s books, like children themselves, come in for a fair amount of scolding, whether it’s the periodic “family values” attacks on books like “Heather Has Two Mommies” or the international stir kicked up just last month when an English mum argued that the non-consensual wakeup kiss at the end of “Sleeping Beauty” reinforces rape culture. You might think that “The Story of Ferdinand,” about a gentle bull who refuses to fight in either pasture or bullring, only wanting to sit under his favorite tree and smell the flowers, would be immune from such content-shaming. But the eighty-one-year-old book, which was written by Munro Leaf and illustrated by Robert Lawson and is the basis for the new animated film “Ferdinand,” opening on December 15th, was caught in the culture-war crossfire of its own era. Mahatma Gandhi and Eleanor Roosevelt were on Team Ferdinand. Adolf Hitler and Francisco Franco were not. But the battle lines weren’t drawn quite as neatly as those rosters suggest.

Set in the country somewhere outside Madrid, “The Story of Ferdinand” had the good or bad fortune to be published in September, 1936, three months after the start of the Spanish Civil War, when Fascist military forces began rebelling against the leftist Republic. In the book, the peaceful Ferdinand is mistaken for the “toughest, meanest bull in all of Spain” after he gets stung by a bee and starts “bucking and jumping and acting like he was crazy.” Carted off to the bullring, however, he reverts to languid form, sitting down and smelling “all of the beautiful flowers worn by the ladies in the crowd.” The picadors, the banderilleros, and the matador do their best, but “no one could get Ferdinand to fight,” and so he returns to his beloved pasture and tree. A sweet tale. But with Spain at war and the rest of Europe on the verge, Ferdinand’s pacifism conveyed a loaded message if looked at the right, or wrong, way. The book’s publisher, Viking Press, had wanted to hold it back until “the world settles down,” according to a reminiscence by Margaret Leaf, Munro’s widow, written on the book’s fiftieth anniversary. The author and illustrator insisted on going ahead, which the publisher did—though apparently without much faith, putting all its advertising muscle behind another picture book on its list that year: “Giant Otto,” by William Pène du Bois, which centered on a floppy dog the size and shape of a four-story burial mound. “ ‘Ferdinand’ is a nice little book,” Viking’s president reportedly proclaimed, “but ‘Giant Otto’ will live forever.”

“The Story of Ferdinand” sold respectably, at first, moving fourteen thousand copies in its first year. But it took off, in 1937, for reasons no one was quite sure of. By its first anniversary it had sold eighty thousand copies, a phenomenal number for a picture book during the Depression. By that Christmas, as this magazine reported, sales were “running slightly behind Dale Carnegie and well ahead of Eleanor Roosevelt.” The following December, the book “nudged ‘Gone With the Wind’ off the bestseller lists,” Margaret Leaf notes. Ferdinand merchandise began turning up in stores—and not just the usual toys, dolls, pajamas, and cereal boxes, but also women’s scarves, hats, and a Cartier brooch that sold for fifty dollars (roughly eight hundred and fifty dollars today). The bull ambled down Broadway as a balloon in the Macy’s Thanksgiving Day Parade and enjoyed the flowers on a float in the Rose Parade, in Pasadena. The story was adapted for radio by “The Royal Gelatin Hour” with Rudy Vallée, and on film by Walt Disney, in 1938, winning an Oscar for Best Animated Short. Life magazine proclaimed the book “the greatest juvenile classic since ‘Winnie the Pooh,’ ” while asserting that three out of four copies were bought by “grownups . . . largely for their own pleasure and amusement.”

That may well have been true. Munro sounded near-wistful when he wrote, in 1937, that he had published a book “I thought was for children, but now I don’t know.” Munro had finished his text long before the Spanish Civil War broke out, and always maintained he had nothing in mind but a funny story—he had only chosen a bull as the main character because mice and cats and bunnies were played out, he claimed. But as Viking had feared, the juxtaposition of a brutal Spanish war and a peaceable Spanish bull seemed more than coincidence to many observers—and fears of a coming wider conflict no doubt fuelled such readings. As a writer for The New Yorker observed, in a January, 1938, Talk of the Town story, “Ferdinand has provoked all sorts of adult after-dinner conversations. Some say he’s a rugged individualist, some say he’s a ruthless Fascist who wanted his own way and got it, others say the tale is a satire on sit-down strikes—you see the idea.” Leaf told the Times, in a piece that ran under the headline “writer for young tells of new woes,” “Letters complained that ‘Ferdinand’ was Red propaganda, others said it was Fascist propaganda, while a number protested it was subversive pacifism. On the other hand, one woman’s club resolved that it was an unworthy satire of the peace movement.” Publisher’s Weekly reported that Munro had received a complaint from a Geneva-based diplomat of undisclosed nationality who pointed out that “the real fate of any little bull who would not fight was a tragic trip to the butcher shop.” The implication was that Munro had dismissed the challenges facing professional peacemakers, and that Ferdinand must be some kind of dupe or quisling. The book was reportedly burned in Nazi Germany and wouldn’t be published in Spain until after Franco’s death.

Another strain of criticism worried about Ferdinand’s effect not on the League of Nations or impressionable Brown Shirts but on its originally intended audience. “Ever since Munro Leaf wrote his story about the mal-adjusted bull, our nurseries have been flooded with pieces about locomotives tired of the track, lambs who have lost their wool,” and so on, a writer for The New Yorker reported in yet another Talk of the Town piece on Ferdinand, with tongue perhaps halfway in cheek. The unsigned article accused lesser children’s authors of being “forced by a literary trend to play Cassandra with a lisp. We have no idea what, good or bad, will come of introducing futility so near the dawn of life, but it was only last week that our little nephew, waking from an anxious dream, stirred and sighed and said, ‘Oh God, another day!’ ” A columnist for the Cleveland Plain Dealer, parroting the book’s more excitable critics, noted that “something uncomfortably alarming is being engineered under the specious cloak of juvenile publication” and insisted that “something ought to be done about it.” This writer, too, was having a bit of fun with all the hand-wringing. But he also seemed to put his finger on something real when he wrote, “Certain irate fathers assert that the book is a deliberate attempt to make mollycoddles out of little boys.”
 “Mollycoddles,” “Cassandra with a lisp.” To some benighted eyes of the late nineteen-thirties, Ferdinand’s passivity clearly signified suspect masculinity. In 1938, the novelty jazz duo Slim and Slam—the guitarist Slim Gaillard and the bassist Slam Stewart, best known for the hit “Flat Foot Floogee (with a Floy Floy)”—recorded a song called “Ferdinand the Bull,” in which Slim sang:

Ferdinand, Ferdinand
The bull with the delicate ego. . . .
Ferdinand got his hands on his hips
Look at Ferdinand swishing. . . .
When the picador missed him
What did Ferdinand do?
He kissed him!

The song concludes: “Ferdie is a sissy, yes, yes.” Leaf himself fretted over that gloss on his hero. The Los Angeles Times, recounting a 1939 conversation with the author, wrote that he was less worried about the misperception of his book as political propaganda than “the fact that because Ferdinand only smelled the flowers and wouldn’t fight he, Leaf, must bear some resemblance to one of the softest-petaled and most delicate of garden flowerets. So he likes to have it known that he is a lacrosse player and won a boxing championship at Harvard—which, he admits, isn’t so much of a championship.”

 Psychoanalysts took their own swings. According to a 1940 interpretation published in American Imago, a prominent Freudian journal, Ferdinand is “an eternal child. . . . He does not regress; he simply remains locked in his happy innocence, nursing himself with the abundance of infantile pleasure.” The author describes the bull inhaling the scent of his beloved flowers with “his nostrils widened and his eyes closed or even worse, half-closed, like the eyes of a woman in ecstasy,” and concludes that the story is “a clear cut castration threat.”

As you’d expect where bulls and silly ideas about manhood are concerned, Ernest Hemingway also had an opinion. In 1951—the dust hadn’t yet settled, apparently—he published a short, fable-like story in Holiday magazine, of all places, titled “The Faithful Bull.” It begins:

One time there was a bull whose name was not Ferdinand and he cared nothing for flowers. He loved to fight and he fought with all the other bulls of his age, or any age, and he was a champion. . . . He was always ready to fight and his coat was black and shining and his eyes were clear.

Hemingway’s fable ends: “He fought wonderfully and everyone admired him and the man who killed him admired him the most.” Personally, I’ll stick with the original Ferdinand, whose own story concludes: “And for all I know he is sitting there still, under his favorite cork tree, smelling the flowers just quietly. He is very happy.”

Today, Ferdinand is hailed as an icon of gender nonconformity, his tale a celebration of “difference”—a shift that serves as not a bad yardstick for how much the culture has evolved over the last eight decades. Myself, I like to think of Ferdinand—despite the Iberian setting of his story—as a proud American refusenik in a continuum that begins with Bartleby, the Scrivener, or maybe Thoreau, and goes on to include Benjamin Braddock, the hero of “The Graduate,” and, for younger audiences, Maurice Sendak’s “Pierre,” of “I don’t care” fame. At the same time, I wonder if there are now parents and teachers who object to Ferdinand’s introducing toddlers to blood sports. A subject for further litigation.” ~

 How “The Story of Ferdinand” Became Fodder for the Culture Wars of Its Era, by Bruce Handy, The New Yorker, December 15, 2017

Disney's Ferdinand, 1938


“But only the memory of letters is left to us. We can no longer rely on letters! It is a nightmare to never again receive a letter.” ~ Erin Moure



As James Madison wrote in his account of the Constitutional Convention, slave states were concerned that the direct election of a president would diminish their sway in federal affairs. Why? Because a huge proportion of most slave states’ populations — enslaved people — was unable to vote. Slave states had crafted one work-around for this issue by pushing through the Three-Fifths Clause, which counted slaves as three-fifths of a person for the purposes of apportioning seats in the House of Representatives. But a popular vote for the president — based strictly on the population of white men who were allowed to vote — might prevent proslavery presidents from perpetually assuming control of the executive branch.

As Madison explained, the solution to this concern was the Electoral College. The Constitution apportioned electors based on the number of senators and representatives a state had. Each state got two senators regardless of population, but the number of representatives a state received depended upon its population. And since slaves qualified as three-fifths of a person in counting a state's population, slave states received extra representatives—which translated into extra electors. Through this constitutional calculus, slave states were able to maintain their grasp on the presidency for decades.

As for Hamilton’s notion that the wise electors would judiciously deliberate and choose the most qualified candidate? It’s utterly irrelevant today, since most states bind electors to their state’s popular vote. Some progressives have filed lawsuits to free electors from these laws and allow them to “vote their conscience,” a shamefully anti-democratic and myopic rationalization. More than 100 million people vote in each presidential election. Why should electors—mostly party loyalists with no notable qualifications for choosing a commander-in-chief—be able to nullify these votes? What gives them the right to toss out the democratic choice of the people in their individual states and pick their favorite candidate instead?

But wait, the counterargument goes, Clinton won more votes than Trump. Shouldn’t electors be able to vote for the candidate who won the popular vote? Yes, they should be, and if enough states pass the National Popular Vote Interstate Compact, they will be obligated to do so. This agreement would bind states’ electors to the winner of the popular vote; it would take effect the moment enough states—those whose electoral votes total 270—have passed it. The NPVIC is a precise, rational tool to address the Electoral College’s obsolescence. The fight to free electors from voting for their state’s popular vote winner is ad hoc silliness and a recipe for future disaster.

~ “[The Electoral College] is an excellent example of the system’s absolute failure to serve any positive purpose in modern American democracy.” ~

from another source:

~ “Standard civics-class accounts of the Electoral College rarely mention the real demon dooming direct national election in 1787 and 1803: slavery.

At the Philadelphia convention, the visionary Pennsylvanian James Wilson proposed direct national election of the president. But the savvy Virginian James Madison responded that such a system would prove unacceptable to the South: “The right of suffrage was much more diffusive [i.e., extensive] in the Northern than the Southern States; and the latter could have no influence in the election on the score of Negroes.” In other words, in a direct election system, the North would outnumber the South, whose many slaves (more than half a million in all) of course could not vote. But the Electoral College—a prototype of which Madison proposed in this same speech—instead let each southern state count its slaves, albeit with a two-fifths discount, in computing its share of the overall count.

Virginia emerged as the big winner—the California of the Founding era—with 12 out of a total of 91 electoral votes allocated by the Philadelphia Constitution, more than a quarter of the 46 needed to win an election in the first round. After the 1800 census, Wilson’s free state of Pennsylvania had 10% more free persons than Virginia, but got 20% fewer electoral votes. Perversely, the more slaves Virginia (or any other slave state) bought or bred, the more electoral votes it would receive. Were a slave state to free any blacks who then moved North, the state could actually lose electoral votes.

If the system’s pro-slavery tilt was not overwhelmingly obvious when the Constitution was ratified, it quickly became so. For 32 of the Constitution’s first 36 years, a white slaveholding Virginian occupied the presidency.” ~

James Madison

~ “Why does the sun set so slowly around the solstice? At the December (or June) solstice, the sun rises and sets farthest south (or north) of due east and due west. The farther the sun sets from due west along the horizon, the shallower the angle of the setting sun. That means a longer duration for sunset at the solstices.

Meanwhile, at an equinox, the sun rises due east and sets due west. That means – on the day of an equinox – the setting sun hits the horizon at its steepest possible angle.

The sunset duration varies by latitude, but let’s just consider one latitude, 40° north, the latitude of Denver or Philadelphia in the United States, or Beijing in China. At that latitude, on the day of solstice, the sun sets in about 3 and 1/4 minutes.

On the other hand, at 40° north latitude, the equinox sun sets in roughly 2 and 3/4 minutes.

At more northerly temperate latitudes, the sunset duration is greater; and at latitudes closer to the equator, the sunset duration is less. Near the Arctic Circle (65° north latitude), a solstice sunset lasts about 15 minutes; at the equator (0° latitude), the solstice sun takes a little over 2 and 1/4 minutes to set. Regardless of latitude, however, the duration of sunset is always longest at or near the solstices.” ~
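The figures quoted above can be checked with a standard spherical-astronomy approximation. This is a sketch only: it assumes an apparent solar diameter of about 0.533° and ignores atmospheric refraction, which stretches real sunsets slightly.

```python
import math

SUN_DIAMETER_DEG = 0.533  # apparent angular diameter of the Sun

def sunset_duration_min(latitude_deg, declination_deg):
    """Approximate minutes for the Sun's disk to cross the horizon.

    The Sun's vertical sinking rate at the horizon is
    15 deg/hr * cos(lat) * cos(decl) * sqrt(1 - (tan(lat)*tan(decl))**2);
    duration = angular diameter / rate. Refraction is ignored.
    """
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    rate = 15 * math.cos(lat) * math.cos(dec) * math.sqrt(
        1 - (math.tan(lat) * math.tan(dec)) ** 2)
    return SUN_DIAMETER_DEG / rate * 60

print(round(sunset_duration_min(40, 23.44), 1))  # solstice at 40N: 3.3 (~3 1/4 min)
print(round(sunset_duration_min(40, 0.0), 1))    # equinox at 40N: 2.8 (~2 3/4 min)
print(round(sunset_duration_min(65, 23.44), 1))  # 65N solstice: ~15 min
print(round(sunset_duration_min(0, 23.44), 1))   # equator solstice: ~2.3 min
```

At the solstices the declination term shrinks the sinking rate, so the same solar disk takes longer to slip below the horizon—exactly the latitude-by-latitude pattern the excerpt reports.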


1) Oleic Acid Lowers Cholesterol

Patients at risk of heart disease were often prescribed a low-fat, high-carbohydrate diet (Step 1 Diet). Both the high-carbohydrate diet and a high monounsaturated fat diet lowered total blood cholesterol levels. However, patients on the high monounsaturated fat diet saw lower LDL-cholesterol and triglycerides than those on the carbohydrate diet.

The cells of the small intestine absorbed less cholesterol when oleic acid was present, because fewer proteins to transport cholesterol were produced.

2) Oleic Acid Protects Against Heart Disease

The famous “seven countries study” followed Mediterranean men and women over many decades and compared them to their counterparts in northern Europe, Japan, and the US. Those with diets rich in monounsaturated fats, including oleic acid from olive oil, had lower rates of heart disease.

The seven countries study was the first to definitively link saturated fat intake to total cholesterol levels and heart disease. Many of the modern dietary recommendations about fat intake are based on this study. Men and women in the “seven countries study” with elevated cholesterol levels had higher risks of death due to heart disease.

These protective effects are largely due to a decrease in total blood cholesterol levels, especially “bad” LDL-cholesterol. Limited evidence from an 8-week study of 23 patients with high heart disease risk connects a high monounsaturated fat diet to other heart-protective roles (blood clotting and circulation).

Because oleic acid also acts on insulin sensitivity, blood pressure, inflammatory markers, and blood vessel function, those who consume it may gain further protection against heart disease. Oleic acid also lowered cholesterol levels in a study of 180 patients randomly assigned to high- and low-monounsaturated fat diets over 2 years.

3) Oleic Acid Lowers Blood Pressure

In a study of 23 patients with elevated blood pressure, those assigned to high monounsaturated fat diets all had significantly reduced blood pressure after 6 months. Eight patients were able to stop taking blood pressure medication entirely while on the diet [R].

However, the drop in blood pressure observed in that 23-patient study was also linked to a reduced saturated fat intake, and to increased nitric oxide levels stimulated by polyphenols present in olive oil, not just to the high monounsaturated fat content.

Rats with high blood pressure fed a different form of oleic acid (bioactive 2-hydroxyoleic acid) saw their blood pressure drop to normal levels after 7 days of treatment. These effects were attributed to changes in the production of proteins that control blood vessel contraction (increase in PKA and decrease in Rho A kinase proteins).

Integration of oleic acid into cellular membranes can alter their structure to allow certain receptors to be present or absent at the membrane (G-protein coupled receptors, specifically members of the adrenergic receptor family).

Increased levels of oleic acid in the vessels of rats fed a high olive oil diet were associated with an increase in receptors that lower blood pressure (via PKA).

4) Oleic Acid Improves Insulin Sensitivity

When a group of 162 healthy adults was put on a 3-month high saturated fat diet, their insulin sensitivity decreased compared to a group on a high monounsaturated fat diet.

Ten overweight patients with non-insulin-dependent diabetes improved their glycemic profiles (blood glucose and insulin value correlation) when placed on a high monounsaturated fat diet for 15 days.

In mice with diet-induced diabetes and obesity, substitution of oleic acid for saturated fats in the diet improved symptoms (hypothalamic inflammation, insulin resistance, and body fat).

Eleven pre-diabetic patients were fed 3 diets, each for 28 days, one diet high in monounsaturated fats, another high in saturated fats, and a third high in carbohydrates. These patients had less belly fat and better insulin sensitivity on the high monounsaturated fat diet.

5) Oleic Acid Prevents Obesity

According to the World Health Organization, a high monounsaturated fat diet was the best predictor of low obesity rates worldwide [R].

A 28-day diet high in monounsaturated fats decreased belly (central) fat, which is associated with obesity, in 11 insulin-resistant patients.

6) Oleic Acid May Improve the Immune System and Resolve Inflammation

Oleic acid is incorporated into cell membranes and can directly interact with the immune cells (neutrophils) responsible for controlling the duration and intensity of tissue inflammation [R].

Part of the inflammatory response in the human body requires the formation of reactive oxygen species by neutrophils at the site of inflammation. This recruits other molecules necessary for healing. This response is increased in the presence of oleic acid, leading to faster resolution of inflammation, including the release of cytokines (IL-1β).

Neutrophils are also responsible for pathogen identification and defense. They more efficiently engulfed (phagocytosed) and killed microorganisms when incubated with oleic acid [R].

Recruitment of neutrophils to the site of inflammation increased when oleic acid was present in lung-injury model mice but decreased when tested in a second study in cells outside the body. So, it remains unclear when or where oleic acid may assist with this early response to inflammation.

7) Oleic Acid May Help with Rheumatoid Arthritis

Forty-three rheumatoid-arthritis patients given a dietary supplement of fish oil (high in omega-3) and olive oil (high in oleic acid) had the best improvements in both pain and mobility compared to fish oil alone or a placebo (soy oil).

8) Oleic Acid May Decrease Chronic Nerve Pain

Oleic acid inhibits a receptor (TRPV1) involved in pain perception (sensing of spiciness, hot temperatures, and itchiness). This is part of oleic acid’s natural role in inflammation [R].

Injection of oleic acid and albumin at the injury site in mice reduced the pain and involuntary movements associated with paralysis after spinal injury.

Albumin and oleic acid also promoted new nerve cell (dendritic) growth in normal mice and in mice genetically modified to have human TRPV1.

Injections of oleic acid in a mouse pain-model reduced pain and inflammation similar to those observed in human arthritis patients.

9) Oleic Acid Is Essential for Brain Function

Oleic acid is produced during the repair of nerves (mature axons) and plays a role in the production of myelin.

Adrenoleukodystrophy, a rare genetic disorder that leads to the breakdown of myelin, can be treated with a mixture of fatty acids, including oleic acid, to slow the disease and reduce brain inflammation.

10) Oleic Acid Improves Mood

A small 3-week study (14 to 18 people per test group) associated an oleic acid-rich diet with higher resting energy expenditure (mitochondrial activity), lower anger levels, and increased physical activity.

In 20 adolescent boys with ADHD, oleic acid levels in the blood were positively linked with brain plasticity and with the openness and extraversion of the boys’ personalities [R].

11) Oleic Acid Decreases Age-Related Cognitive Decline

High monounsaturated fat intake was correlated with reduced risk of cognitive decline in a survey of over 5,000 elderly Italians (over age 65).

This decreased risk is probably due to oleic acid’s role in the maintenance of neuron structural integrity.

The brain’s need for monounsaturated fatty acids increases with age in rats.

12) Oleic Acid May Slow Aging

Long-lived species like humans typically have higher levels of oleic acid in their membranes than shorter-lived species like rodents.

Aging is often linked with oxidative stress in cellular membranes and with DNA damage caused by free radicals released during energy production (glycolysis and electron transport). Rats that consumed more olive oil, and probably had more oleic acid in their membranes, had less age-related oxidative stress, because these fatty acids are less susceptible to free radical damage.

13) Oleic Acid May Prevent Cancer

Oleic acid’s ability to decrease oxidative stress in the cell and thus to protect DNA from oxidative damage also lowers cancer risk.

Two studies interviewed more than 5,000 women with and without breast cancer about their dietary habits. Women with high levels of oleic acid in their diets were less likely to have cancer.

Mice with induced lung cancer (adenocarcinoma) fed an oleic acid-rich diet had increased rates of survival and longer disease-free periods.

However, mice with salivary gland tumors fed an oleic acid-rich diet showed increased tumor progression, possibly due to a lack of other monounsaturated fats in the diet.

ending on beauty:

I don’t have time for dying, leaving
these glorious sunsets and even small
things like that trail of ants and I have
yet to coax a butterfly to sit on my hand.

~ Una Nichols Hynum, “Aunt Betty”