Saturday, February 24, 2018

ERASMUS VERSUS LUTHER: GLOBALISM VERSUS NATIONALISM; THE OPTIMISM BIAS; LETHAL WOUNDS OF AR-15; OLIVE OIL EXTENDS LIFE SPAN

Natural Bridge Cave, Calaveras County, California

GLASS MOUNTAIN

Beyond seven rivers,
beyond seven seas,
there was a princess who lived
on a glass mountain.

She had mirrors for companions.
She had see-through dreams.
The knights’ horses’ hooves
kept slipping.

But a hero, the youngest, the fool,
in return for a crumb to a crone
at the crossroads, with the usual one tooth,
won a horse with diamond hooves.

*

Every morning I slid
my hands into solitude
as into a basin of water,
and waited for the knight.

Only diamond like a dancing star
could carve a stairway
up the mirrored slope.
Ice-blue windows blew open,

crystal doors slammed shut.
An owl flew to the top:
“If not you, then who?
Who else will be a fool?”

*

The mountain is slippery and steep.
Its sheen half-blinds me with reflections,
a glassy, insomniac light.
I carve one step, then another.

It may take me a lifetime, I know.
But the princess in the tower of cloud
can hear — could always hear —
starry beat of diamond hooves.

~ Oriana

I offer this poem in puzzlement and frustration. Can a poet love her own poem when it fails with the English-language audience? And one that no longer reflects the speaker’s current concerns or beliefs?

Completely unknown here, the Glass Mountain is a popular fairy tale in Poland. The version I know does not feature a tree with golden apples or the guardian eagle. Rather, the princess’s jealous father has her imprisoned in a castle on top of a steep, slippery glass mountain to keep her safe from men. But they keep trying, only to slip down the glass slope. Finally a good guy, the youngest son, an underdog who shows his kindness by sharing his meager food with a crone, is rewarded with a steed that has diamond hooves — and those hooves work!

The phrase “the glass mountain” has entered the language to mean an extreme challenge — perhaps an impossible one. “Trying to make Marek stop drinking is my glass mountain,” a woman may say while her listeners sigh heavy sighs (I wonder if “Codependent No More” has been translated into Polish).

Whoever is trying to climb a glass mountain is usually mistaken about the Princess (the prize), and especially about having the horse with the diamond hooves (so to speak). An aspiring artist may think that talent is his diamond hooves in his climb toward recognition, while in fact it would take even more talent and strangeness (or, let’s face it, pathology) — plus connections (location! location!), enormous energy, a willingness to make huge sacrifices, and superhuman hard work.

In the broadest sense, life itself is a glass mountain we keep trying to scale, in a somewhat Sisyphean mode. But let’s say we do make it to the top. We can linger there only so long before it’s time to descend — and then we’re told we’re “over the hill”! (A saving feature: those later, “over-the-hill” years tend to be happier than youth.)

But the poem is strangely dear to me — perhaps because it reflects my starry-eyed younger self — both as the imprisoned princess and as the knight who starts coming to her rescue (note my brave attempt at a “liberated” solution — I try to be my own Prince Charming!). In the broader sense perhaps that younger, trusting self never dies — we continue to count on luck, or on what believers would call “grace.” It’s the “optimism bias.”

 
There are times when it’s best to cut one’s losses and stop trying to climb the glass mountain (or, for a country, to stop fighting an unwinnable war). This is difficult because we are apparently hard-wired for optimism — no matter how poor the odds.

(A shameless digression, a personal side issue: at this time I don’t have any Polish-speaking friends, so I literally have no one who grew up familiar with the fairy tale and thus understands the meaning of “glass mountain.” If I called a problem “my glass mountain,” it would elicit blank stares. If I called trying to eliminate the Electoral College or repeal the Second Amendment a glass mountain, I would again encounter sheer incomprehension. This is the unavoidable loneliness of someone who comes from another culture. There are worse fates.)


Mary:

Your “glass mountain” tale seemed very familiar to me. I specifically remember the princess on the mountain story as being in one of my grade school readers. Though I remember the mountain as made of ice, not glass — maybe one of those “revisions” memory is prone to. But of course since the story is an obscure one in this culture, references to it would not be understood in the way you knew.

Yes, the kind of isolation you speak of, no one there to speak with you in your beloved native tongue, is a very lonely thing. After all, I am sure those first words we learn are for each of us the “real” words, connecting us to the “real” world, and not this strange translation.

You also say, more than once this week, "There are worse fates." And there are indeed — with this phrase I think we at once comfort and scold ourselves, voicing both the pain and the mother's admonition to "count your blessings" in four brief words.


Oriana:

Very keen observation on the first words being the “real words.” One reason is the emotional conditioning we acquire. But that’s also the liberating aspect of writing and speaking in a non-native language — its words are abstract chunks of letters and sounds, so you can use profanities and/or discuss matters like sex without feeling anything (or not much). That’s why one of my poems has the title “I Can Be a Poet Only in English.” Likewise, it’s not that offensive if someone uses vulgar language in your presence — again, because these are not “real words.”
 

 
THE OPTIMISM BIAS
 
~ “We like to think of ourselves as rational creatures. We watch our backs, weigh the odds, pack an umbrella. But both neuroscience and social science suggest that we are more optimistic than realistic. On average, we expect things to turn out better than they wind up being. People hugely underestimate their chances of getting divorced, losing their job or being diagnosed with cancer; expect their children to be extraordinarily gifted; envision themselves achieving more than their peers; and overestimate their likely life span (sometimes by 20 years or more).

The belief that the future will be much better than the past and present is known as the optimism bias. It abides in every race, region and socioeconomic bracket. Schoolchildren playing when-I-grow-up are rampant optimists, but so are grownups: a 2005 study found that adults over 60 are just as likely to see the glass half full as young adults.

Collectively we can grow pessimistic. But private optimism, about our personal future, remains incredibly resilient. A survey conducted in 2007 found that while 70% thought families in general were less successful than in their parents' day, 76% of respondents were optimistic about the future of their own family.

Even if that better future is often an illusion, optimism has clear benefits in the present. Hope keeps our minds at ease, lowers stress and improves physical health. In fact, a growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain.

Hardwired for Hope?

I would have liked to tell you that my work on optimism grew out of a keen interest in the positive side of human nature. The reality is that I stumbled onto the brain's innate optimism by accident. After living through Sept. 11, 2001, in New York City, I had set out to investigate people's memories of the terrorist attacks. I was intrigued by the fact that people felt their memories were as accurate as a videotape, while often they were filled with errors. A survey conducted around the country showed that 11 months after the attacks, individuals' recollections of their experience that day were consistent with their initial accounts (given in September 2001) only 63% of the time. They were also poor at remembering details of the event, such as the names of the airline carriers. Where did these mistakes in memory come from?

Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future — to enable us to prepare for what has yet to come. The system is not designed to perfectly replay past events, the researchers claimed. It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted.

To test this, I decided to record the brain activity of volunteers while they imagined future events — not events on the scale of 9/11, but events in their everyday lives — and compare those results with the pattern I observed when the same individuals recalled past events. But something unexpected occurred. Once people started imagining the future, even the most banal life events seemed to take a dramatic turn for the better.  


Mundane scenes brightened with upbeat details as if polished by a Hollywood script doctor. You might think that imagining a future haircut would be pretty dull. Not at all. Here is what one of my participants pictured: "I was getting my hair cut to donate to Locks of Love [a charity that fashions wigs for young cancer patients]. It had taken me years to grow it out, and my friends were all there to help celebrate. We went to my favorite hair place in Brooklyn and then went to lunch at our favorite restaurant."

I asked another participant to imagine a plane ride. "I imagined the takeoff — my favorite! — and then the eight-hour-long nap in between and then finally landing in Krakow and clapping for the pilot for providing the safe voyage," she responded. No tarmac delays, no screaming babies. The world, only a year or two into the future, was a wonderful place to live in.

The Human Time Machine

To think positively about our prospects, we must first be able to imagine ourselves in the future. Optimism starts with what may be the most extraordinary of human talents: mental time travel, the ability to move back and forth through time and space in one's mind. Although most of us take this ability for granted, our capacity to envision a different time and place is in fact critical to our survival.

It is easy to see why cognitive time travel was naturally selected for over the course of evolution. It allows us to plan ahead, to save food and resources for times of scarcity and to endure hard work in anticipation of a future reward. It also lets us forecast how our current behavior may influence future generations. If we were not able to picture the world in a hundred years or more, would we be concerned with global warming? Would we attempt to live healthily? Would we have children?

While mental time travel has clear survival advantages, conscious foresight came to humans at an enormous price — the understanding that somewhere in the future, death awaits. Ajit Varki, a biologist at the University of California, San Diego, argues that the awareness of mortality on its own would have led evolution to a dead end. The despair would have interfered with our daily function, bringing the activities needed for survival to a stop. The only way conscious mental time travel could have arisen over the course of evolution is if it emerged together with irrational optimism. Knowledge of death had to emerge side by side with the persistent ability to picture a bright future.

 Using a functional magnetic resonance imaging (fMRI) scanner, we recorded brain activity in volunteers as they imagined specific events that might occur to them in the future. Some of the events that I asked them to imagine were desirable (a great date or winning a large sum of money), and some were undesirable (losing a wallet, ending a romantic relationship). The volunteers reported that their images of sought-after events were richer and more vivid than those of unwanted events.

This matched the enhanced activity we observed in two critical regions of the brain: the amygdala, a small structure deep in the brain that is central to the processing of emotion, and the rostral anterior cingulate cortex (rACC), an area of the frontal cortex that modulates emotion and motivation. The rACC acts like a traffic conductor, enhancing the flow of positive emotions and associations. The more optimistic a person was, the higher the activity in these regions was while imagining positive future events (relative to negative ones) and the stronger the connectivity between the two structures.

The findings were particularly fascinating because these precise regions — the amygdala and the rACC — show abnormal activity in depressed individuals. While healthy people expect the future to be slightly better than it ends up being, people with severe depression tend to be pessimistically biased: they expect things to be worse than they end up being. People with mild depression are relatively accurate when predicting future events. They see the world as it is. In other words, in the absence of a neural mechanism that generates unrealistic optimism, it is possible all humans would be mildly depressed.

 
Even when the incidents that befall us are the type of horrific events we never expected to encounter, we automatically seek evidence confirming that our misfortune is a blessing in disguise. No, we did not anticipate losing our job, being ill or getting a divorce, but when these incidents occur, we search for the upside. These experiences mature us, we think. They may lead to more fulfilling jobs and stable relationships in the future. Interpreting a misfortune in this way allows us to conclude that our sunny expectations were correct after all — things did work out for the best.

It seems that our brain possesses the philosopher's stone that enables us to turn lead into gold and helps us bounce back to normal levels of well-being. It is wired to place high value on the events we encounter and put faith in its own decisions. This is true not only when forced to choose between two adverse options (such as selecting between two courses of medical treatment) but also when we are selecting between desirable alternatives. 


Imagine you need to pick between two equally attractive job offers. Making a decision may be a tiring, difficult ordeal, but once you make up your mind, something miraculous happens. Suddenly — if you are like most people — you view the chosen offer as better than you did before and conclude that the other option was not that great after all. According to social psychologist Leon Festinger, we re-evaluate the options postchoice to reduce the tension that arises from making a difficult decision between equally desirable options.

In a brain-imaging study I conducted with Ray Dolan and Benedetto De Martino in 2009, we asked subjects to imagine going on vacation to 80 different destinations and rate how happy they thought they would be in each place. We then asked them to select one destination from two choices that they had rated exactly the same. Would you choose Paris over Brazil? Finally, we asked them to imagine and rate all the destinations again. Seconds after picking between two destinations, people rated their selected destination higher than before and rated the discarded choice lower than before.

In our experiment, after a decision was made between two destinations, the caudate nucleus rapidly updated its signal. Before choosing, it might signal "thinking of something great" while imagining both Greece and Thailand. But after choosing Greece, it now broadcast "thinking of something remarkable!" for Greece and merely "thinking of something good" for Thailand.

The Puzzle of Optimism

While the past few years have seen important advances in the neuroscience of optimism, one enduring puzzle remained. How is it that people maintain this rosy bias even when information challenging our upbeat forecasts is so readily available? Only recently have we been able to decipher this mystery, by scanning the brains of people as they process both positive and negative information about the future. The findings are striking: when people learn, their neurons faithfully encode desirable information that can enhance optimism but fail at incorporating unexpectedly undesirable information. When we hear a success story like Mark Zuckerberg's, our brains take note of the possibility that we too may become immensely rich one day. But hearing that the odds of divorce are almost 1 in 2 tends not to make us think that our own marriages may be destined to fail.

Why would our brains be wired in this way? It is tempting to speculate that optimism was selected by evolution precisely because, on balance, positive expectations enhance the odds of survival. Research findings that optimists live longer and are healthier, plus the fact that most humans display optimistic biases — and emerging data that optimism is linked to specific genes — all strongly support this hypothesis. Yet optimism is also irrational and can lead to unwanted outcomes. The question then is, How can we remain hopeful — benefiting from the fruits of optimism — while at the same time guarding ourselves from its pitfalls?

I believe knowledge is key. We are not born with an innate understanding of our biases. The brain's illusions have to be identified by careful scientific observation and controlled experiments and then communicated to the rest of us. Once we are made aware of our optimistic illusions, we can act to protect ourselves. The good news is that awareness rarely shatters the illusion. The glass remains half full. It is possible, then, to strike a balance, to believe we will stay healthy, but get medical insurance anyway; to be certain the sun will shine, but grab an umbrella on our way out — just in case.” ~

http://content.time.com/time/health/article/0,8599,2074067-5,00.html


Oriana:

While unable to fight off our own optimism bias — and perhaps we shouldn’t, given the health benefits — we may find the optimism of others excessive, childish, downright crazy, and just deeply annoying in its cheerful, chirping idiocy. Well, that’s life. Eighty percent of the population think they are above average.

To me the annoying person is not the typical optimist, or the besotted lottery player. It’s the cheerleader who assures you “You can do it!” even though past a certain age you do know your abilities and your limitations, and no, you are not going to become an astrophysicist or a piano virtuoso or a business tycoon no matter how hard you try. Some situations really are glass mountains, and the princess in the castle on the top is not even your type.

But on the whole we can’t help being optimistic. There are worse fates.


Mary:

The article on our ability to imagine the future, and on the evidence that optimism may be a bias hardwired into our brains, is very interesting. Extending the argument a bit, perhaps the failure of that function, the inability to imagine anything but darkness ahead, culminating in the absolute darkness of death, may result not in mild but in the most severe depressions, the kind that may end in suicide. Without hope, there will be no survival, no push toward any future at all.


Oriana:

The optimism bias is so widespread that I wonder why it ever fails. But fail it does — reality can be just too compelling, and the higher the hope, the greater the despair in case of a crash — and the greater the inability to spin that crash (although normally we are fantastic spin doctors).

I reluctantly admit that TS Eliot was right when he said that we can bear only so much reality. Well, tough — so we can’t afford to be completely realistic and have to either kid ourselves or keep ourselves distracted (my main solution — but am I kidding myself without knowing it? I try not to form any expectations — but how many hidden, optimistic expectations is my life actually based on?) 


Glass Roman hydria, 4th century CE

**

Yet overdoing positive fantasies can backfire:

HIP-SURGERY PATIENTS RECOVERED MORE SLOWLY WHEN THEY DWELLED ON POSITIVE FANTASIES OF WALKING WITHOUT PAIN

 
Yes, you read this correctly: they recovered MORE SLOWLY. This is almost as startling as the results of the famous 2006 STEP prayer study, in which those patients who knew they were being prayed for experienced significantly more post-surgery complications. But perhaps we shouldn’t be surprised. We have several studies by now that show that cultivating fantasies of already having accomplished the goal tends to lessen your efforts to achieve the goal. I think it’s related to the phenomenon well-known to writers: “talking out the book” may prevent them from ever writing the book. The drive to write is gone.

What also caught my attention is one of the readers’ comments:

“Positive thinking demands ego, demands your mental hard drive. Not that you shouldn't think — please, reality test things. But letting go of thinking and willing and pushing can have something to do with activating intuitive thinking, which can be much more powerful and useful.”

Here is an excerpt from the New Yorker article:

~ “Since publishing “The Secret,” in 2006, the Australian author Rhonda Byrne has been writing self-help manifestos based on the idea that people who think positive thoughts are rewarded with happiness, wealth, influence, wisdom, and success.

There’s no denying that many people have found comfort in Byrne’s ideas. Like religion, they offer an appealing, non-technical solution to life’s biggest problems while demanding nothing more of their adherents than faith. (Indeed, “The Secret” features verses from Matthew and Mark, promoting the idea that people receive in life what they seek in prayer.) But while many people give religion a pass because it claims to focus on questions that can’t be answered with science, the same is not true of success. Though Byrne presents her ideas without evidence, we can measure their worth with data.

According to a great deal of research, positive fantasies may lessen your chances of succeeding. In one experiment, the social psychologists Gabriele Oettingen and Doris Mayer asked eighty-three German students to rate the extent to which they “experienced positive thoughts, images, or fantasies on the subject of transition into work life, graduating from university, looking for and finding a job.” Two years later, they approached the same students and asked about their post-college job experiences. Those who harbored positive fantasies put in fewer job applications, received fewer job offers, and ultimately earned lower salaries. The same was true in other contexts, too. Students who fantasized were less likely to ask their romantic crushes on a date and more likely to struggle academically. Hip-surgery patients also recovered more slowly when they dwelled on positive fantasies of walking without pain.

Heather Barry Kappes, a management professor at the London School of Economics, has published similar research with Oettingen. I asked Kappes why fantasies hamper progress, and she told me that they dull the will to succeed: “Imagining a positive outcome conveys the sense that you’re approaching your goals, which takes the edge off the need to achieve.” Oettingen and Kappes asked two groups of undergraduates to imagine the coming week. One group fantasized that the week would go as well as possible, whereas the other group conjured a more neutral version of the week. One week later, when the students returned to the lab, the positive fantasizers felt that they had accomplished less over the previous week.

I asked Oettingen whether positive fantasies might sometimes be useful. She suggested that they might, if a person considered the specific steps that he would take to overcome the barriers to success. Kappes said that fantasies might be useful when you’re unable to satisfy a need—when you’re famished and hours from eating, for example—because they temporarily blunt the pang. There’s nothing wrong with getting lost in fantasy, as long as you aren’t ultimately hoping to indulge in the real thing.” ~

https://www.newyorker.com/business/currency/the-powerlessness-of-positive-thinking


An "unclean spirit" or a medieval Yoda with a punk haircut?

Oriana:

There is also a connection with false memory — the brain starts believing you've already done something, so no more effort is needed. The drive to do it dissipates.

 
I know from my own experience how easy it is to create a false memory, to fool the brain into the sense of “mission accomplished!” Hence also the danger of praying instead of doing.

I love the statement that “you succeed in your head, but not in the world.” That's substituting positive thinking (“an oxymoron,” as someone observed) for action, and some people do just that. They imagine that saying “I'm rich” for 15 minutes a day will make them rich! That's the harm of it — when people substitute fantasy and/or affirmations for action, e.g. substitute the fantasy of walking for the hard work of physical therapy.

I've learned to “think at the keyboard” because if I think it out in my head with perfect clarity, there's a chance I won't type it out. But salvation lies in the fact that if I do sit down at the keyboard, something else emerges, something not originally thought up.

“Just do it!” is perhaps the most useful mantra there is — especially for women, who are more prone to hesitating and overthinking. “Life rewards action,” is another very useful saying.

Still, there can be a lot of pleasure in a fantasy when the real thing is impossible but the grief over that impossibility is no longer acute. A friend of mine knew at one point she would never again experience a real romance — so she indulged in fantasies as a deliberate substitute. It’s possible that it spared her bitterness. She remained sweet-tempered, and, to everyone’s astonishment, in the last months of her life, in spite of her age, obesity, and disability, she did end up with a companion! Another instance of “you never know” — life has infinite surprises.

Likewise, going over some bad scenarios may be a very positive action — it shows us that those might not be actual huge disasters — somehow or other, we would find ways to cope. And what really happens is generally never quite what we expect — so that simple faith: somehow we’ll cope — is a lot more useful than fantasizing about wonderful outcomes.

The comment about intuitive thinking as opposed to “positive thinking” is also worth noting. 


Mary:

That push [to do things], that drive, is what enables accomplishment. The constant drivel about "being positive" and rejecting "negativity" is not only annoying, it reminds me of the whole scenario we have created where, with our children, "everyone's a winner," everyone gets a trophy, no matter what they have or have not accomplished. The same with the extreme value put on "self-esteem," which we are instructed is of essential value and must always be maintained and supported. But what qualities, virtues, attributes, actions, or productions exist as the basis for all this glorious self-esteem? It seems to me often there is nothing there at all, and the prevailing culture doesn't even require it.

If you are a winner and a champion simply for existing, why try? You already have it, deserve it all, so there is no need to strive, to learn, to work, to do any of the hard things that lead to true accomplishment.

I have seen this with some post-surgical patients, who do not work at their rehabilitation, perhaps because they imagined the surgery as some sort of miracle that would produce the desired results without any effort of their own, other than their dream of magically induced recovery.

This is very like those believers in prayer and positive thinking who think such things are surefire pathways to the future outcomes they desire. Have faith, pray, trust you will get what you want — or even something better — because god loves you and you deserve it.

Oriana:

One of the culprits here is the advertising industry. “You deserve the best” has become a major slogan, used to push anything from toothpaste and soda pop to luxury cars. It’s certainly fine to buy the best if you can afford it, but “deserve” doesn’t enter into this.

But the greatest damage has come with the New Age delusion of believing that it’s enough to “put it to the universe” that you desire something and the universe will magically comply with that wish — or else “something even better” is coming to you. Your secret super-power is simply repeating affirmations!

Alas, the way the brain works, affirmations may create an unconscious false memory of “mission accomplished,” and a consequent lessening or even absence of the drive to take a concrete action. That’s one of the first lessons writers learn: keep your mouth shut about a current project, or you’ll “talk it out” and end up with zero drive to do the writing. 


* *

On the frozen Neva, St. Petersburg, ca. 1910

"Trump? Putin? Internet? Soviet Union? Lenin, Stalin, Hitler? FOX News? Lady Gaga?

Life is life. Death is death. Without the former, there can be no latter, and vice versa. Or maybe not. It’s a brutally cold day today, but what would you expect. Winter is winter. Russia is Russia. Life goes on. We’ve all been dead for a long time now. The ice on the Neva is strong." ~ M. Iossel

**

“Actions are held to be good or bad, not on their own merits, but according to who does them. There is almost no kind of outrage — torture, imprisonment without trial, assassination, the bombing of civilians — which does not change its moral color when it is committed by ‘our’ side. The nationalist not only does not disapprove of atrocities committed by his own side, he has a remarkable capacity for not even hearing about them.” ~ George Orwell

This is so applicable to the current puzzlement over how Trump can get away with any outrage.


 
~ That's because to his base he's “One of Us.” That's all that matters — you root for your team.

Here one would expect an image of one kind of atrocity or another . . . but let us instead detox by looking at a beautiful animal.



NOT AN ORDINARY GUNSHOT WOUND — LETHAL INJURIES INFLICTED BY AR-15
 
~ “I was looking at a CT scan of one of the victims of the shooting at Marjory Stoneman Douglas High School, who had been brought to the trauma center during my call shift. The organ looked like an overripe melon smashed by a sledgehammer, with extensive bleeding. How could a gunshot wound have caused this much damage?


The reaction in the emergency room was the same. One of the trauma surgeons opened a young victim in the operating room, and found only shreds of the organ that had been hit by a bullet from an AR-15, a semi-automatic rifle which delivers a devastatingly lethal, high-velocity bullet to the victim. There was nothing left to repair, and utterly, devastatingly, nothing that could be done to fix the problem. The injury was fatal.

A year ago, when a gunman opened fire at the Fort Lauderdale airport with a 9mm semiautomatic handgun, hitting 11 people in 90 seconds, I was also on call. It was not until I had diagnosed the third of the six victims who were transported to the trauma center that I realized something out-of-the-ordinary must have happened. The gunshot wounds were the same low velocity handgun injuries as those I diagnose every day; only their rapid succession set them apart. And all six of the victims who arrived at the hospital that day survived.

Routine handgun injuries leave entry and exit wounds and linear tracks through the victim's body that are roughly the size of the bullet. If the bullet does not directly hit something crucial like the heart or the aorta, and they do not bleed to death before being transported to our care at a trauma center, chances are, we can save the victim. The bullets fired by an AR-15 are different; they travel at higher velocity and are far more lethal. The damage they cause is a function of the energy they impart as they pass through the body. A typical AR-15 bullet leaves the barrel traveling almost three times faster than, and imparting more than three times the energy of, a typical 9mm bullet from a handgun. An AR-15 rifle outfitted with a magazine holding 50 rounds allows many more lethal bullets to be delivered quickly without reloading.

With an AR-15, the shooter does not have to be particularly accurate. The victim does not have to be unlucky. If a victim takes a direct hit to the liver from an AR-15, the damage is far graver than that of a simple handgun shot injury. Handgun injuries to the liver are generally survivable unless the bullet hits the main blood supply to the liver. An AR-15 bullet wound to the middle of the liver would cause so much bleeding that the patient would likely never make it to a trauma center to receive our care.

One of my ER colleagues was waiting nervously for his own children outside the school. While the shooting was still in progress, the first responders were gathering up victims whenever they could and carrying them outside the building. Even as a physician trained in trauma situations, though, there was nothing he could do at the scene to help to save the victims who had been shot with an AR-15. Most of them died on the spot, with no fighting chance at life.

A medical professor taught me about the dangers of drawing incorrect conclusions from data with the example of gum chewing, smokers, and lung cancer. He said smokers may be more likely to chew gum to cover bad breath, but that one cannot look at the data and decide that gum chewing causes lung cancer. It is the same type of erroneous logic that focuses on mental health after mass shootings, when banning the sale of semi-automatic rifles would be a far more effective means of preventing them.” ~

https://www.theatlantic.com/politics/archive/2018/02/what-i-saw-treating-the-victims-from-parkland-should-change-the-debate-on-guns/553937/
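The velocity-to-energy arithmetic in the passage above can be sanity-checked with the kinetic energy formula E = ½mv². The bullet masses and muzzle velocities below are my own ballpark assumptions for typical loads, not figures taken from the article:

% illustrative ballpark figures only, not from the article:
% typical 9mm handgun round: m ≈ 0.008 kg, v ≈ 360 m/s
% typical AR-15 (5.56 mm) round: m ≈ 0.004 kg, v ≈ 950 m/s
\[
E_{9\,\mathrm{mm}} \approx \tfrac{1}{2}\,(0.008\ \mathrm{kg})(360\ \mathrm{m/s})^2 \approx 520\ \mathrm{J},
\qquad
E_{\mathrm{AR}\text{-}15} \approx \tfrac{1}{2}\,(0.004\ \mathrm{kg})(950\ \mathrm{m/s})^2 \approx 1800\ \mathrm{J}
\]

On those assumed numbers, the rifle bullet leaves the barrel roughly 2.6 times faster and carries roughly 3.5 times the energy, consistent with the article's "almost three times faster" and "more than three times the energy": energy grows with the square of velocity even as the bullet mass drops by half.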

 

LUTHER VERSUS ERASMUS; EUROPE’S FIRST POPULISM
 
Desiderius Erasmus of Rotterdam, the leading figure of the Northern Renaissance, is widely considered the greatest of early humanists. Five hundred years ago, he faced a populist uprising led by a powerful provocateur, Martin Luther, that resulted in divisions no less explosive than those we see in America and Europe today.

Between 1500 and 1515, Erasmus produced a small library of tracts, textbooks, essays, and dialogues that together offered a blueprint for a new Europe. The old Europe had been dominated by the Roman Church. It emphasized hierarchy, authority, tradition, and the performance of rituals like confession and taking communion. But a new order was emerging, marked by spreading literacy, expanding trade, growing cities, the birth of printing, and the rise of a new middle class intent on becoming not only prosperous but learned, too.

Erasmus became the most articulate spokesman for this class. Moving from city to city in search of good libraries, fine wine, sparkling conversation, and skilled printers, he produced a new “design for living” based on the principles of tolerance, pluralism, concord, and virtuous conduct. In his 1515 essay Dulce bellum inexpertis (“War is sweet only to those who have not experienced it”), he denounced the ceaseless wars waged by rash princes. In The Education of a Christian Prince (1516), he offered a guide to good governance, urging sovereigns to pursue not their own interests but those of the people. In The Praise of Folly (1511), he mocked the pretensions and delusions of kings and courtiers, popes and theologians—part of his campaign to discredit the ruling class and open the way for renewal.

At the heart of Erasmus’s program was his revision of the New Testament. To reform Christendom, he felt, the text on which it was based had to be purified. This was the Vulgate, the Latin translation of the Bible. For a thousand years, this document had served as the scriptural foundation of the Roman Church. Many of its doctrines and institutions were based on specific words and phrases in the Vulgate. Yet a close inspection of the text raised many questions about its sacred status. It was marred by spelling mistakes, grammatical errors, clumsy constructions, and scribal blunders.

In 1500, Erasmus set out to learn Greek so that he could read the Gospels and Epistles in the language in which they had originally been written. (After the fall of Rome, knowledge of Greek had more or less disappeared from the Latin West.) He also began hunting down old manuscripts of the Greek New Testament; by comparing and collating them, he hoped to conjecture what their authors truly meant.

In early 1516, after months of exhausting writing, editing, and proofreading in the print shop of Johann Froben in Basel, Switzerland, the work was done. In addition to providing a revised Latin translation of the New Testament and a parallel Greek text (the first ever printed), Erasmus offered hundreds of annotations explaining the changes he had made. In them, he argued for a new way of reading the Bible—not as a collection of miracles, prophecies, and supernatural acts, but as the story of a transcendent being whose simplicity, humility, and compassion could encourage readers to change their ways and follow a more pious path.  


Erasmus of Rotterdam, an engraving by Albrecht Dürer, 1526
 
The publication of Erasmus’s revised New Testament was a milestone in biblical studies. It gave scholars the tools to read the Bible as a document that, while divinely inspired, was a human product that could be deconstructed and edited in the same manner as a text by Livy or Seneca. As copies began circulating, the magnitude of Erasmus’s achievement was immediately recognized. Not since Cicero had an intellectual figure so dominated Western discourse as Erasmus did in that enchanted spring of 1516. “Everywhere in all Christendom your fame is spreading,” wrote John Watson, a rector in England with whom he was friendly. “By the unanimous verdict of all scholars, you are voted the best scholar of them all, and the most learned in both Greek and Latin.”

The term “Erasmian” came into use to describe those who shared his vision. But those Erasmians represented only a small sliver of society. Erasmus wrote exclusively in Latin, for the highly educated, Latin-speaking elite. Dazzled by his readings in ancient Greek, Erasmus began promoting knowledge of that language as no less essential than Latin. “Almost everything worth learning is set forth in these two languages,” he wrote in one of his many educational texts. In these, Erasmus proposed a new curriculum for Europe, with instruction in Latin and Greek at its core.

Around the same time that the Erasmians were celebrating the dawn of a new enlightened era, a very different movement was gathering in support of Martin Luther. An Augustinian friar then in his early thirties, Luther had developed his own, unique gospel, founded on the principle of faith. Man, he thought, can win divine grace not through doing good works, as the Latin Church taught, but through belief in Christ. No matter how sincerely one confessed, no matter how many alms one gave, without faith in the Savior, he reasoned, no one can be saved. When Luther made this “discovery” [sola fide, “faith alone”] in around 1515, he felt that he had become “altogether born again and had entered paradise itself through open gates.”

In his famous 1520 tract To the Christian Nobility of the German Nation, Luther (writing in the vernacular) offered his own reform program. Along with a piercing attack on Rome’s oppressive practices, he proposed twenty-seven measures to protect both the souls and pocketbooks of the German people. He also rejected the idea that the clergy make up a separate spiritual class superior to the laity. All Christians, he declared, are priests of equal standing, free to read and interpret the Bible for themselves. Such attacks on privileged elites endeared him to Herr Omnes, “Mr. Everyman.”

Because of such defiance, Luther was ordered to appear before Charles V, the Holy Roman Emperor, at the Diet of Worms in April 1521. Refusing to recant his writings, Luther made his famous stand on behalf of his conscience as a Christian. For that, he could have been seized on the spot and burned as a heretic, but with the German people mobilizing behind him, any effort to arrest him would have caused a riot. So Luther was able to leave Worms, resume his writing, and set in motion the Reformation.

Initially, Luther admired Erasmus and his efforts to reform the Church, but over time Luther’s inflammatory language and his stress on faith instead of good works led to a painful separation. The flashpoint was the debate over whether man has free will. In dueling tracts, Erasmus suggested that he does, while Luther vehemently objected; after that, the two men considered each other mortal enemies.

Beyond that immediate matter of dispute, however, their conflict represented the clash of two contrasting world views — those of the Renaissance and the Reformation. Erasmus was an internationalist who sought to establish a borderless Christian union; Luther was a nationalist who appealed to the patriotism of the German people. Where Erasmus wrote exclusively in Latin, Luther often used the vernacular, the better to reach the common man. Erasmus wanted to educate a learned caste; Luther, to evangelize the masses. For years, they waged a battle of ideas, with each seeking to win over Europe to his side, but Erasmus’s reformist and universalist creed could not match Luther’s more emotional and nationalistic one; even some of Erasmus’s closest disciples eventually defected to Luther’s camp. Erasmus became an increasingly marginal figure, scorned by both Catholics, for being too critical of the Church, and Lutherans, for being too timid. In a turbulent and polarized age, he was the archetypal reasonable liberal.

Even as his reputation faded, Erasmus worked to complete his blueprint for Europe. In The Complaint of Peace, he decried the nationalist enmities that were splitting the continent. “The English are hostile to the French, for no other reason than that they are French,” he wrote. “The Scots are disliked by the British, solely for being Scots. Germans don’t agree with French, Spaniards don’t agree with either. What perversity—for the mere name of a place to divide people when there is so much which could bring them together!”

Disturbed by the growing bitterness between Catholics and Protestants, Erasmus called on Christians to put aside their private hatreds and bitter quarrels, and instead nurture a spirit of accommodation so that peace could reign. On Mending the Peace of the Church, as he titled the tract, was a resonant appeal for religious tolerance—a formative document in the development of that tradition.

As his end approached, Erasmus sought to warn his fellow Christians of the catastrophe he saw looming — in vain. After his death, in 1536, Europe descended into a century of religious-fueled violence, culminating in the Thirty Years’ War (1618–1648) — the continent’s most destructive conflict before World War I. Erasmus’s ideas about tolerance, peace, and clemency were ruthlessly suppressed. Both Catholics and Protestants dismissed him as a weak, vacillating man who lacked ardor and conviction, and whose commitment to an irenic form of Christianity founded on the Gospels was as objectionable as it was obsolete.

Yet Erasmus’s vision of a united Europe in which people of differing beliefs share a common citizenship would live on, providing an intellectual haven amid the eruptions of nationalism, xenophobia, racism, and nihilistic violence that periodically ravaged the continent. Despite its snobbism and elitism, Erasmian humanism offered an alternative to the apocalypse.

Luther underwent his own reverses. When, in 1524–1525, the German peasants—inspired in part by his writings—rose up against their spiritual and secular overlords, Luther, fearing anarchy, denounced them as mad dogs who deserved to be stabbed, smitten, and slayed. With that, the common man turned irrevocably against Luther. A wrenching dispute over whether the body of Christ is present in the bread of communion led to an irreparable breach with the Swiss branch of the Reformation. And Luther’s uncompromising insistence on the rectitude of his own beliefs alienated many moderates, and not just Catholic ones.

By the time of his death, in 1546, Luther had become an isolated reactionary, his work eclipsed by a younger and more dynamic reformer, John Calvin. Even so, Luther would go down in history as the founder of Protestantism, the man who broke the spiritual stranglehold of the Roman Church. Luther’s brand of Bible-based ardor founded on pure faith would exercise a profound influence on Western culture, not least in America.

http://www.nybooks.com/daily/2018/02/20/luther-vs-erasmus-when-populism-first-eclipsed-the-liberal-elite/?utm_medium=email&utm_campaign=NYR%20Varoufakis%20Carrington%20Black%20Panther&utm_content=NYR%20Varoufakis%20Carrington%20Black%20Panther+CID_caf75038ff25213c0caf63226b4f932a&utm_source=Newsletter


from Wiki:

~ “Free will does not exist,” according to Luther in his letter De Servo Arbitrio to Erasmus, translated into German by Justus Jonas (1526), in that sin makes human beings completely incapable of bringing themselves to God. Noting Luther's criticism of the Catholic Church, Erasmus described him as "a mighty trumpet of gospel truth" while agreeing, "It is clear that many of the reforms for which Luther calls are urgently needed.” He had great respect for Luther, and Luther spoke with admiration of Erasmus's superior learning.

[This mutual admiration didn’t last.] In a letter to Nikolaus von Amsdorf, Luther objected to Erasmus’ Catechism and called Erasmus a "viper," "liar," and "the very mouth and organ of Satan.”

Monks opposed to the Reformation charged that Erasmus had:

    “prepared the way and was responsible for Martin Luther. Erasmus, they said, had laid the egg, and Luther had hatched it. Erasmus wittily dismissed the charge, claiming that Luther had hatched a different bird entirely.” ~

Oriana:

Interesting to learn that the Thirty Years’ War was the worst until World War I — and that toward the end of his life Luther had become an "isolated reactionary." Apparently he was also an ardent believer that the end of the world was “at hand.” (I still admire Luther’s courage, but he was a very flawed human being.)

 “Sin away,” he allegedly said, confident that faith alone (the correct version of the faith, that is) was sufficient for entry to paradise. I can see how this would relieve the stress of trying to “earn” heaven by being perfectly good. At the same time, sola fide can imply a dangerous extreme of taking no responsibility. 


If our deeds, good or bad, don’t count — on our own, we deserve only hellfire, and besides, predestination rules — then so much for the much quoted — and wise — verse, “Without works, faith is dead.” That also happens to be the wisdom of Judaism: the important thing is not our beliefs, but our actions. 

In the end, whether or not we believe in the perpetual virginity of Mary matters very little. And yet wars were fought over imaginary problems of this sort. Oh human folly indeed! Oh gentle Erasmus, too civilized and broad-minded for your times — and alas, even for our own.



BECAUSE SO MUCH DEPENDS ON THE TRANSLATION

 
~ “It was subtle of God to learn Greek when he wished to become an author, and not to learn it better.” ~ Friedrich Nietzsche on the New Testament

~ “[The latest translator, David Bentley Hart] clearly agrees with Nietzsche on the quality of the book’s koine Greek. He finds the Gospel of Matthew “rarely better than ponderous,” that of Mark “awkwardly written throughout,” and that of John “syntactically almost childish,” while Paul’s letters are “maladroit, broken, or impenetrable” and Revelation is “almost unremittingly atrocious.” Sometimes he does convey the original’s sheer goofiness: “Fallen, fallen, Babylon the Great who has given all the gentiles to drink from the wine of the vehemence of her whoring” (Rev. 14:8). No wonder Hunter Thompson said he did not have to worry about running out of LSD in a hotel. He could trip on the Gideon Bible’s Revelation.” ~

But the latest translator runs into a serious problem. “Hart claims that he will not let his own theological views color his translation. But he clearly does not believe in hell — at least not in a permanent hell . . . Rather than rely on . . . common sense, he labors to out hell from the text of the Bible.”

And this leads to all kinds of awkwardness, including this:

~ “When Jesus at the Second Coming (Matt. 25:46) divides the damned from the saved, [Hart] says, “These will go to the chastening of that Age, but the just to the life of that Age” (for KJV “everlasting fire . . . life eternal”).

The devil, without a hell to tend, is demoted by Hart to “the Slanderer.” . . . Thus we get at Matthew 25:41: “Go from me, you execrable ones, into the fire of the Age prepared for the Slanderer and his angels” (for Tyndale and KJV, “Depart from me, ye cursed, into everlasting fire which is prepared for the devil and his angels”).” ~

The lead image for this article in the New York Review of Books is the notorious painting of the Madonna spanking Baby J by Max Ernst:

Note the halo dropping to the floor.

Finally, I can’t resist sharing an example of Paul’s traffic-jam sentences as translated by Hart:

“But because of false prophets secretly brought in, who stole in so as to spy upon freedom, which we have in the Anointed One Jesus, so that they could enslave us — to whom we did not yield in subordination for even an hour, so that the truth of the good tidings might remain with you and from those who were esteemed as something — precisely what sort of something at that time does not matter to me (God does not take a man at his face) — for to me these estimable men had nothing to add; rather, to the contrary, seeing that I have been entrusted with the good tidings for those of the foreskin, just as Peter for a mission for those of the circumcision — for he who was operating in Peter for a mission to those of the circumcision was also operating in me for the gentiles — and, recognizing the grace given to me, James and Cephas and John — who appeared to be the pillars — gave their hands in fellowship to me and to bar-Nabas, that we should go to the gentiles and they to the circumcision, if only we should remember the poor — the very thing, indeed, that I was eager to do (Gal. 2:4-13)”

“If only we should remember the poor” — then indeed we can gloss over gentiles and genitals. 

 
What is more important, however, is the reviewer’s conclusion:

~ “Fresh translations of familiar texts are useful because they make us reexamine what we thought we knew. Hart has certainly made me think more deeply about the centrality of the world’s end to the entirety of the New Testament. (. . . ) Every aspect of the New Testament should be read in light of this “good news” that the world will shortly be wiped out.” ~

http://www.nybooks.com/articles/2018/02/08/a-wild-and-indecent-book/


Domenichino: The Ecstasy of St Paul
 

Oriana:

Poor Paul! The reviewer ascribes his “traffic-jam sentences” to Paul’s being in a hurry — Paul really believed that the end of the world was imminent. I suspect that it’s rather that Paul’s growing blindness (related to his epilepsy) forced him to dictate, so we get a transcript of speech rather than true writing.

This review is important because, among other things, it brings up the issue that’s typically overlooked when discussing translations: the translator’s own beliefs have a way of influencing the translation. Specifically, the reviewer shows that Hart’s non-belief in hell makes a ludicrous mess of his attempts to deal with passages that mention hell by twisting words so as to erase hell from the text.

Obviously a translator is wrong to twist a text so that it seems to support his beliefs — though I agree that as long as hell (and thus the need to be “saved” from it) is the foundation of Christianity, then Christianity is not a “religion of love.” Who knew that a translator's word choice could have such huge implications . . . Not that this latest translation will have any impact. For one thing, it's too late for true impact.

Another thing that occurred to me is that, if it’s the actual word of god (or even if it’s just “divinely inspired”), we’d expect the bible to be stylistically exquisite. And the King James Version has been praised as such. But as I think back to the words I so often heard in Polish, they struck me as awkward and old-fashioned rather than beautiful. My guess is that the KJV English translation is a lucky exception. Also, there exists an allegedly more accurate translation of the Hebrew Bible by Robert Alter — and the text is supposed to be more “forceful,” but also dry and largely devoid of beauty.

Since there can be no “objective” translation, devoid of the translator’s biases and theology, perhaps, at least when it comes to “holy” scriptures, we should stop trying?

One last thing: Bart Ehrman (and possibly Neil Carter too) said that it’s precisely reading the New Testament in the original Greek that leads some seminarians to leave the faith. The usual reason given for this is that these more critical seminarians see that a particular term could be translated in several different ways, so we can’t be sure what the text really means. What they were sure about suddenly becomes fuzzy at best. Likewise, the contradictions between the different gospels become more blatant. But I suspect that without the seductive beauty of the KJV, the text itself disappoints just too many times.

(Ehrman: “It often proves difficult enough to establish what the words of the NT mean; the fact that in some instances we don't know what the words actually were does more than a little to exacerbate the problem. I say that many interpreters would like to ignore this reality; but perhaps that isn't strong enough. In point of fact, many interpreters, possibly most, do ignore it, pretending that the textual basis of the Christian scriptures is secure, when unhappily, it is not.”)

(A shameless digression: I remember being shaken when I learned that Homer describes Penelope’s hand as “thick” — pachos. I recognized the root in pachyderm, and “thick” indeed fits better than “strong” [presumably because it’s muscular] or any other word. All of Homer’s translators are desperate to soften the original — not one dares say simply “thick.” No, translation is not neutral.)


**


“Things got so bad we couldn't lower our standards fast enough.” ~ Carrie Fisher, Wishful Drinking


 
 

EXTRA VIRGIN OLIVE OIL APPEARS TO EXTEND LIFE SPAN AND REDUCE THE RISK OF ALZHEIMER’S

~ “Unrefined extra virgin olive oil, a chief component of the Mediterranean diet, has been given significant credit for the diet’s health-promoting ability, especially with its rich polyphenol content.

Today, substantial new findings further validate extra virgin olive oil’s benefits for cardiovascular, bone, and brain health. Several of these studies were large-scale clinical trials on humans.

One study in particular caught mainstream media attention. This study, with nearly 19,000 participants, showed that those who consumed the highest quality foods, and who most closely adhered to a true Mediterranean diet, were the ones who were most likely to derive the benefits, including sharp reductions in coronary heart disease and stroke.

The high oleic acid (monounsaturated fat) content of olives was initially thought to be the main source of olive oil’s health benefits. Today, more researchers contend that the health benefits stem from olive oil’s high polyphenol content, which includes oleuropein, tyrosol, and hydroxytyrosol.

Increasing evidence suggests that the polyphenol hydroxytyrosol should be given the most credit as it makes up approximately 50% of extra virgin olive oil’s total polyphenol content.

In 2017, the American Journal of Clinical Nutrition published a study that evaluated the effects of hydroxytyrosol on a cohort of 1,851 men and women.

To be eligible for this study all participants had to be at high-risk for cardiovascular disease.

The participants averaged age 67 and had either type II diabetes or at least three or more major risk factors: smoking, hypertension, dyslipidemia, overweight/obesity, or a family history of premature cardiovascular disease.

The subjects were divided randomly into one of three different intervention groups:

    Group 1: Traditional Mediterranean diet supplemented with extra virgin olive oil.
    Group 2: Traditional Mediterranean diet supplemented with nuts.
    Group 3: Control, low-fat diet.

To measure hydroxytyrosol ingestion, urinary levels of its metabolite (homovanillyl alcohol) were measured.

Results showed that higher urinary levels of homovanillyl alcohol resulted in sharply lower risks of cardiovascular events and mortality.

Compared to those in the lowest quintile, individuals in the third or higher quintiles of homovanillyl alcohol had at least a 56% reduced risk of a cardiovascular event (heart attack, stroke, or death from a cardiovascular cause). In addition, subjects in the highest quintile of homovanillyl alcohol lived, on average, 9.5 years longer after the age of 65.

 
The highest urinary levels of homovanillyl alcohol were obtained by the subjects whose intervention included a traditional Mediterranean diet with the addition of extra virgin olive oil.
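
For readers curious about the mechanics, a figure like "56% reduced risk compared to the lowest quintile" comes from splitting the cohort into fifths by biomarker level and comparing event rates across those fifths. The Python sketch below only illustrates that quintile arithmetic with invented numbers; the actual study used survival models adjusted for many covariates, which this does not attempt.

    import numpy as np
    import pandas as pd

    # Illustration only: invented data, not the trial's measurements or results.
    rng = np.random.default_rng(0)
    n = 1851  # cohort size mentioned in the article

    df = pd.DataFrame({"hva": rng.lognormal(mean=1.0, sigma=0.5, size=n)})  # urinary homovanillyl alcohol, arbitrary units
    df["cv_event"] = rng.random(n) < 0.25 / (1.0 + df["hva"])  # events invented to get rarer as the marker rises

    # Quintile 1 = lowest biomarker level, quintile 5 = highest
    df["quintile"] = pd.qcut(df["hva"], 5, labels=[1, 2, 3, 4, 5])

    # Event rate per quintile, and risk reduction relative to the lowest quintile
    rates = df.groupby("quintile", observed=True)["cv_event"].mean()
    reduction = 1.0 - rates / rates.iloc[0]
    print(reduction.round(2))  # a value of 0.56 would read as "56% reduced risk"

The published numbers are hazard ratios from adjusted models; the sketch only shows where a "relative to the lowest quintile" comparison comes from.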

Research on olive oil has been primarily focused on its cardiovascular support. However, a growing amount of research in the last decade has shown that it also reduces the risk of Alzheimer’s disease.

In a revealing animal study, researchers examined the effects of extra virgin olive oil on mice genetically prone to develop neurodegenerative changes typical of Alzheimer’s disease, such as amyloid plaque. These six-month-old mice were divided into two groups, one fed a standard diet, and the other the same standard diet plus extra virgin olive oil. After six months, the researchers found significant differences in behavior and neuropathology. They concluded that extra virgin olive oil exerted a beneficial effect on all major characteristics of Alzheimer’s disease, including behavior and neuropathology.

To test their neuropathology, all major biomarkers for Alzheimer’s disease (beta-amyloid, tau proteins, and synaptophysin) were recorded.

Beta-amyloid and tau are deleterious proteins that, through many mechanisms, cause cellular dysfunction and death. Synaptophysin is a protein marker of synaptic integrity.

The research found that in the mice fed extra virgin olive oil, there was a significant decrease in these deleterious proteins and an increase in the beneficial synaptophysin.

The researchers credited these beneficial effects to extra virgin olive oil’s ability to increase autophagy.

 
Autophagy is how cells rid themselves of debris that interferes with normal healthy cellular function.



http://www.lifeextension.com/Magazine/2018/1/Olive-Oil-Markedly-Extends-Human-Lifespan/Page-01?sourcecode=CVM701E&utm_source=zMag&utm_medium=email&utm_content=Article&utm_campaign=CVM701E

ending on beauty:

My song is snow in March,
in May. My song is eighty degrees
the next day. My song is, I couldn’t decide

what to get you, so here, everything.

~ Chen Chen





Saturday, February 17, 2018

MASS SHOOTINGS: ANGER, NOT MENTAL ILLNESS; WHY WE FALL IN LOVE; THE 2-SANTA GOP STRATEGY; WHO’S AT RISK FOR DOG BITES

Vermeer: Mistress and Her Maid with a Letter, 1667. It's interesting that we get to see more of the maid than of the mistress.
 
Let us forget with generosity those who cannot love us. ~ Pablo Neruda

*

MUSIC   

Ten years after your suicide,
this is  the moment I love best:
in silence you take my hand
and put your arm around my waist.

We take narrow steps as though
on a crowded dance floor,
our rhythm perfect,
the same silence leading us both.

We turn in tight circles,
we are almost formal. No
kissing, no: we dance as if
still only dreaming of each other.

We feel each other’s breathing,
our bodies’ boundaries of warmth. 
Slowly we dance without music —
unless we are the music —

How else can I explain
that in such silence we don’t hear
the shot that travels farther and farther
into the past, while we dance.

~ Oriana

**

I hate to start with the trite “This really happened,” but I feel this statement is important. One time the long-ago lover who later committed suicide did take my hand and put his arm around my waist, and we began to dance — in silence. It was brief and magical. Yet because of the tragedy that eventually took place, it took a while for this memory to rise up in my mind — and when it did, it made me smile with pleasure. And I realized that it would always bring me pleasure, even though I knew what happened later.

This was quite a psychological discovery: that an enchanting moment could be retrieved and enjoyed in spite of the knowledge of the unhappy ending. Ultimately, the pain could not cancel that unique moment. It has its own being, inviolate, untouched by the tragedy, the shock, the grief. It has its own kind of eternity — at least for as long as my memory lasts.

And perhaps most important of all, it’s a memory of tenderness. 


 A peach orchard in Georgia; Hayley Hyatt

WHY WE FALL IN LOVE

 
~ “Because we all want to expand beyond ourselves. Psychologist Arthur Aron at Stony Brook University has conducted studies suggesting that a primary motive for us as humans is to “expand the self and to increase our abilities and our effectiveness.” [what I elsewhere called “personality enlargement” — we explore the interests and knowledge of our new partner, learning new things, changing in surprising ways]

Good eye contact. Arthur Aron again (see #1). He conducted a study that encouraged strangers of the opposite sex to discuss intimate details about themselves for 90 minutes. At the end of that time, each couple stared into each other’s eyes for four minutes in silence. The results? Many of the couples said they felt a deep attraction to each other, even though they’d never met before. Two of the couples ended up married.

Because of inner and outer synchronicity. We fall in love, says psychologist Mark B. Kristal in the University at Buffalo College of Arts and Sciences, when processes in our bodies align with appropriate triggers from the outside world. He speaks of “visual, regular olfactory, auditory and tactile cues” happening in “the proper time, order and place.” He told softpedia.com:

    There are several types of chemistry required in romantic relationships. It seems like a variety of different neurochemical processes and external stimuli have to click in the right complex and the right sequence for someone to fall in love.

Because we like the way they smell. Many studies have shown that smell plays a role in love. Plus we’re not just talking about the ordinary smell of your lover’s dirty T-shirts (dirty T-shirts, by the way, have been the stock-in-trade of smell studies), but also those other, perhaps odorless, signals that enter the brain through the olfactory system. That’s right, pheromones. Volumes have been written on the subject of smell and pheromones in attraction, love and marriage, and don’t we all know it’s true?

Because we like the way they kiss. Kissing has an element of smell to it, obviously, but kissing all by itself can determine if the relationship holds promise. Sheril Kirshenbaum, author of the book The Science of Kissing, told EarthSky that a kiss, and especially a first kiss, plays a big role in determining the future of a relationship, according to scientific studies. She said:

    Fifty-nine percent of men and 66 percent of women say they have ended a budding relationship because a kiss didn’t go well. It’s your body’s way of saying, look elsewhere.

Because of our hormones. You know how your heart pounds and your mouth goes dry when your new lover rings the doorbell? It’s basically a stress response. Romantic, eh? Adrenaline, dopamine and serotonin all come into play in love’s early stages. Love-struck couples also have high levels of the neurotransmitter dopamine, which stimulates an intense rush of pleasure, essentially the same effect on the brain as taking cocaine.

Because sex is good for us. Sex relieves stress, boosts immunity, burns calories, boosts heart health, improves intimacy … and so much more.

To make and raise babies, together. Martie Haselton, a psychologist at UCLA, believes love is a “commitment device,” a mechanism that encourages two humans to form a lasting bond to ensure the “long-term health of children.” Haselton and her colleagues conducted experiments, asking people to think about how much they love their partners while suppressing thoughts of other attractive people. They then had the same people think about how much they sexually desire their partners while suppressing thoughts about others. It turns out that love does a much better job of pushing out potential rivals than sex does. This is what you’d expect, Haselton says, if love were a drive to form a long-term commitment.

Because love is a drug. Neuroscientist Thomas Insel and colleagues at Emory University in Atlanta conducted studies showing that monogamous pair bonding among prairie voles (small rodents that mate for life) affects the same brain reward circuits that are responsible for addiction to cocaine and heroin.

Okay, so now we know some of what the world of science has to offer on the subject of falling in love. Meanwhile, what’s the best way to stay in love? Psychologist Arthur Aron says the best predictor for lasting long-term relationships is kindness.” ~

http://earthsky.org/human-world/for-you-valentine-top-10-reasons-we-fall-in-love?utm_source=EarthSky+News&utm_campaign=e62fcfb145-EMAIL_CAMPAIGN_2018_02_02&utm_medium=email&utm_term=0_c643945d79-e62fcfb145-394935141

Oriana:

Only some people's kisses feel just right — delicious. I wonder if chemical/genetic compatibility is involved in some indirect way — or if it’s accidental. Sometimes the partner is appealing in terms of personality, intelligence, kindness, etc —  and then the kissing doesn’t work. And at other times there are red flags — but the kissing is ecstatic. This is perverse, so I suspect physiology.

Still, even with imperfect kissing, what I call “personality enlargement” is probably the best part of falling in love. We start exploring the new partner's knowledge and interests, learn new things, try new foods, meet new people —  our world expands.



WHY SO MANY MASS SHOOTINGS IN THE U.S.? (redux)

Of course in the eyes of the world, the answer is obvious: easy access to assault weapons. No need for long articles on the psychopathology of the shooters or the “toxic culture of violence” when this answer blazes its stark truth. But since there is little hope for change, we might as well delve into the psychology and culture . . .

~ “Obsessed with revenge, those aspiring to mass murder draw from the archetypal American hero who relies on gun violence to right wrongs and overturn oppressive institutions. Those who transition from fantasy to action are those who rationalize no other option than murder-suicide by ‘going out in a blaze of glory’. No doubt this rationalization represents a distinct kind of tunnel vision, distorting the traditional US hero into an anti-hero who regards society as the enemy.

In psychiatry, a ‘culture-bound syndrome’ is an idiosyncratic, locale-specific pattern of behavior that represents a culturally sanctioned expression of distress if not a mental illness per se. In Malaysia, for example, the culture-bound syndrome amok involves episodes of mass violence committed by an individual following a period of brooding. Unfortunately, in addition to borrowing the word amok in our own lay speech, it would appear that the US, along with other Western societies, has developed our own brand of running amok in the form of mass shootings. Once the cultural mythology of such mass murder has been firmly planted into public consciousness, a select few distressed individuals will look to this model to guide their own behavior, creating the problem of copycat killings.

Perhaps we need to look at these elements within the context of the culture itself. The US was born out of violent revolt, and the idea of the underdog responding with force to defeat an aggressor has been an archetype for the US hero ever since. As a nation, Americans see themselves as promoters of armed rebellion in the name of freedom and democracy around the globe.

In defiance of stereotypes, most mass shooters are not psychotic, delusional, ‘crazy’, or ‘insane’. A 2002 US Secret Service report found that the majority of school shooters have had a history of ‘feeling extremely depressed or desperate’ (not the same as having a clinical diagnosis of major depression) and nearly 80 per cent had considered or attempted suicide in the past. Almost all had experienced a major loss such as a perceived failure, loss of a loved one or romantic relationship, or a major illness prior to the shooting, and about 70 per cent perceived themselves as wronged, bullied or persecuted by others.

Revenge was a motive in the majority of incidents. Christopher Ferguson, a psychologist at Stetson University in Florida whose work has contributed to the debunking of the link between violent video games and violence, recently summarized the most salient features of a typical mass shooter, noting that risk factors for mass murder are similar for both adults and children. These include antisocial traits, depressed mood, recent loss, and a perception that others are to blame for their problems.

And herein lies the rub – while this kind of profile implies that mental illness could be an important risk factor, what we’re really talking about are negative emotions, poor coping mechanisms and life stressors that are experienced by the vast majority of us at one time or another. These risk factors are not necessarily the domain of mental illness, but rather the ‘psychopathology of everyday life’.

Therefore, it appears that the most important risk factors aren’t those that set mass murderers apart from the rest of us; instead, they are simply appropriated from culturally sanctioned patterns of aggression.

If mass shootings are difficult to predict, potentially self-perpetuating, and result not from easily eliminated sources but rather from untimely interactions between normal instincts, culturally sanctioned patterns of behavior and entrenched features of modern society, is there a rational approach to prevention? Inasmuch as marginalization seems to lie at the heart of the mass murderer’s grievances, further attempts to screen, identify, remove and effectively punish those with the potential to commit such violence are doomed to fail. [Instead,] we should reach out to those who have fallen away from mainstream society, bringing them back to the herd before they come to see only a single, deadly alternative.

Let’s also consider re-assessing some of our cultural values and teach our children about different kinds of heroes, how to resolve conflicts, and cope with loss. And, as a recent report from the Making Caring Common Project suggests, let’s prioritize raising children who are kind. The real solution is not about blame, but opportunity. According to the 2002 Secret Service report, mass shootings are not sudden, impulsive acts. They occur with planning that is known to at least one other person in more than 80 per cent of cases. This means that there’s time to reach out — not to a murderer, loser or weirdo; but to someone’s son, student, classmate and neighbor”.

http://aeon.co/magazine/psychology/what-explains-mass-shootings-in-the-us/


Richard Pousette-Dart, The Blood Wedding, 1958 
 
BUT CAN WE STOP GLORIFYING VIOLENCE?

 
~ “Over two decades ago, I traveled to a city in the Russian provinces called Rostov-On-Don to interview a psychiatrist named Alexander Bukhanovsky.

Bukhanovsky, now deceased, was famous. If you've seen the movie Citizen X, about the capture of serial killer Andrei Chikatilo, Bukhanovsky was the guy played by Max Von Sydow. He was the Soviet Union's first criminal profiler.

One of the first things he said was that both Russia and America produced disproportionate shares of mass killers.

"Giant militarized countries," he said, "breed violent populations.”

 
Bukhanovsky at the time was treating a pre-teen who had begun killing animals. He told me this young boy would almost certainly move on to killing people eventually. He was seeing more and more of these cases, he said.

The people who point at pop culture as the reason disturbed kids and lone-wolf madmen go on killing sprees are half right. But images of violence are less the problem than the messages behind them, which are profoundly intertwined with deep-seated cultural ideas about the virtue of military supremacy and the political efficacy of violence.” ~

https://www.rollingstone.com/politics/taibbi-parkland-florida-school-shooting-gun-control-nra-w516850


 DOES THE ANSWER LIE IN IMPROVED MENTAL HEALTH SERVICES?

~ “According to research, the sorts of individuals who commit mass murder often are either not mentally ill or do not recognize themselves as such. Because they blame the outside world for their problems, mass murderers would likely resist therapies that ask them to look inside themselves or to change their behavior.

A study of convicted murderers in Indiana found that just 18 percent had a serious mental-illness diagnosis. Killers with severe mental illnesses, in that study, were actually less likely to target strangers or use guns as their weapon, and they were no more likely than the mentally healthy to have killed multiple people.

“If we were able to magically cure schizophrenia, bipolar disorder, and major depression, that would be wonderful,” Jeffrey Swanson, a professor of psychiatry and behavioral sciences at the Duke University School of Medicine, told ProPublica. “But overall violence would go down by only about 4 percent.”

After studying mass shooters for decades, Northeastern University criminologist James Alan Fox concluded that the killers have more mundane motivations: revenge, money, power, a sense of loyalty, and a desire to foment terror.

“Revenge motivation is, by far, the most commonplace. Mass murderers often see themselves as victims—victims of injustice. They seek payback for what they perceive to be unfair treatment by targeting those they hold responsible for their misfortunes. Most often, the ones to be punished are family members (e.g., an unfaithful wife and all her children) or coworkers (e.g., an overbearing boss and all his employees).”

“The thing about mass killers is that they externalize blame,” Fox told me. “All the disappointments, all the failures, the broken relationships, are because other people treated them wrong. They don’t see themselves as being inadequate and flawed.

Other experts have echoed Fox’s view. Michael Stone, a forensic psychiatrist at the Columbia College of Physicians and Surgeons and author of The Anatomy of Evil, on the personalities of murderers, recently conducted a study that found that a fifth of mass killers had a serious mental illness. “The rest had personality or antisocial disorders or were disgruntled, jilted, humiliated, or full of intense rage,” as The Washington Post’s Michael S. Rosenwald wrote last year. “They were unlikely to be identified or helped by the mental-health system.”

As Fox notes, mass killers tend to share a few characteristics—“depression, resentment, social isolation, the tendency to externalize blame, fascination with graphically violent entertainment, and a keen interest in weaponry”—that are common in the general population. Attempting to flag so many angsty, un-self-aware young males as potential future killers might push them closer toward violence, rather than away from it.

Instead, a better way of predicting whether someone might be predisposed to violence is if they have a history of violence, as Swanson told ProPublica. For example, Spencer Hight, who killed his ex-wife and seven others at a football-watching party in Plano, Texas, earlier this month, had been violent at least twice, reportedly slamming his wife’s face against a wall.

Compared to those with no criminal record, handgun purchasers who have at least one misdemeanor conviction are seven times more likely to be charged with a new offense after they buy their gun. Right now, only 23 states restrict people with a history of violent misdemeanors from owning firearms.

https://www.theatlantic.com/health/archive/2017/10/why-better-mental-health-care-wont-stop-mass-shootings/541965/?utm_source=atlfb



from another source:

VIOLENCE IS NOT A PRODUCT OF MENTAL ILLNESS. IT'S A PRODUCT OF ANGER.

~ “Violence is not a product of mental illness. Nor is violence generally the action of ordinary, stable individuals who suddenly “break” and commit crimes of passion. Violent crimes are committed by violent people, those who do not have the skills to manage their anger. Most homicides are committed by people with a history of violence. Murderers are rarely ordinary, law-abiding citizens, and they are also rarely mentally ill. Violence is a product of compromised anger management skills.

In a summary of studies on murder and prior record of violence, Don Kates and Gary Mauser found that 80 to 90 percent of murderers had prior police records, in contrast to 15 percent of American adults overall. In a study of domestic murderers, 46 percent of the perpetrators had had a restraining order against them at some time. Family murders are preceded by prior domestic violence more than 90 percent of the time. Violent crimes are committed by people who lack the skills to modulate anger, express it constructively, and move beyond it.” ~

http://www.slate.com/articles/health_and_science/medical_examiner/2014/04/anger_causes_violence_treat_it_rather_than_mental_illness_to_stop_mass_murder.html


Mary:

Why so many mass killings in the US? It is fairly obvious that it's not due to mental illness, a red herring that allows the interests of the gun industry and the NRA, backed by the substantial contributions they pour into the coffers of politicians, to remain in place and undisturbed even as the massacres continue, and their frequency accelerates. In fact, the gun industry and its lobbyists actually propose MORE guns as a solution. Arm the teachers! Allow students to carry guns! That this is insane is both obvious and unacknowledged.

We will only really understand the positions of these groups by examining how very, very profitable the gun industry is, and how powerful all that money makes them as they use it to control politicians and lawmakers. They have no shame, and the piles of bodies mean nothing weighed against profit and power. They aren't even embarrassed about tossing the sop of "thoughts and prayers" to the grieving relatives.

The single gunman with an assault weapon is not insane, he's angry — he's looking for revenge, on as big a scale as he can manage, a bloody vengeance, too big for anything but the most efficient weapons, ones made to kill as many as possible as quickly as possible. The gunman will have his "blaze of glory”— his moment at the center of the world’s attention — his apotheosis.

As to why this is a particularly American thing — who are our heroes, and what are our stories? There is that mythos of the Wild West, with its outlaws and gunslingers and shootouts. Skill with a gun was idolized, guns were power, the “equalizers.” We all grew up playing cowboys and watching westerns. The violence was always there, it has just become more graphic, more extreme . . . movies, TV, videogames.

However, I think it's going at things backwards to say the violence creates the culture — it is more that the violence is created by, and is a reflection of, the culture. Violent movies don’t make people violent. Violent culture will create violent movies. Of course I'm not speaking of individuals, but of the violence embedded in the culture itself. We relish our heroes, who resolve their dilemmas by eliminating them with superior firepower.

But now this bloody fantasy is being acted out in the real world  again and again. The victims pile up, sacrifices on the altar of narcissistic rage and our most lethal fantasies. The fear of losing our guns trumps the fear of losing our children. We have long ago decided that the mentally ill are disposable, and relegated them to the streets and prisons. Not only is better care and regulation of the mentally ill a false solution, it is unlikely to lead to much of anything. There is no profit in it.

I have so much frustration and grief and anger about these issues it is overwhelming. The deliberate refusal to see that this is a problem here and nowhere else has convinced me nothing useful will be done.

Oriana:

As the article states, if all mental illness were magically eliminated, we’d get only 4% less violence. Besides, there is as much or more mental illness in other countries, but — isn’t this strange? — no problem with mass shootings.

I’d like to learn more about what’s happening in Russia, since the statement “Giant militarized countries breed violent populations” makes intuitive sense. But we are not likely to get accurate information. Still, if American-style mass shootings were happening, there’d be leaks. And Putin would not tolerate civilian access to military-style weapons. So easy access to the most efficient killing firearms remains the most likely answer. Combine it with anger and raging desire for vengeance, and in some cases extreme ideology, and it’s just a matter of time until the next shooting, while we stand by helpless.

Still, anger, “toxic masculinity,” glorification of violence, vicious ideology, and whatever other reasons have been given for the shootings don’t result in multiple dead bodies within minutes unless the right lethal weapon is available. And it wouldn’t be available without profits.


If at least liability insurance were required . . .



I don’t have any hope either. But think of the effect on young people as they see corrupt adults willing to sacrifice their lives for the sake of continued NRA and gun-industry donations. Even individuals on the no-fly list — i.e. suspected terrorists — can legally buy high-capacity weapons! The evil of it is beyond words. And we live with it, hoping that one day at the mall or office, campus or church (church!!), it won’t be us who get mowed down.


 

“Have no fear, folks. The Republican Congress is praying for us.” ~ Stuart Balcomb 




THE TWO-SANTA GOP STRATEGY (long but enlightening)

~ “Republican strategist Jude Wanniski’s 1974 “Two Santa Clauses Theory” has been the main reason why the GOP has succeeded in producing our last two Republican presidents, Bush and Trump (despite losing the popular vote both times). It’s also why Reagan’s economy seemed to be “good.”

Here’s how it works, laid out in simple summary:

First, when Republicans control the federal government, and particularly the White House, they spend money like a drunken sailor and run up the US debt as far and as fast as possible.  This produces three results — it stimulates the economy thus making people think that the GOP can produce a good economy, it raises the debt dramatically, and it makes people think that Republicans are the “tax-cut Santa Claus.”

Second, when a Democrat is in the White House, they scream about the national debt as loudly and frantically as possible, freaking out about how “our children will have to pay for it!” and “we have to cut spending to solve the crisis!” This will force the Democrats in power to cut their own social safety net programs, thus shooting their welfare-of-the-American-people Santa Claus.


Think back to Ronald Reagan, who more than tripled the US debt from a mere $800 billion to $2.6 trillion in his 8 years. That spending produced a massive stimulus to the economy, and the biggest non-wartime increase in the debt in history. Nary a peep from Republicans about that 218% increase in our debt; they were just fine with it.
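
A quick check on the arithmetic behind figures like these, as a minimal Python sketch: it uses only the rounded numbers quoted above ($800 billion to $2.6 trillion), which give about 225%; the article’s 218% presumably comes from unrounded debt totals. Either way, “more than tripled” holds.

    def pct_increase(start_dollars: float, end_dollars: float) -> float:
        """Percent growth from a starting debt level to an ending one."""
        return (end_dollars - start_dollars) / start_dollars * 100

    # Rounded figures quoted in the text for the Reagan years
    start, end = 0.8e12, 2.6e12
    print(f"Growth: {pct_increase(start, end):.0f}%")  # 225% with these rounded inputs
    print(f"Ratio:  {end / start:.2f}x")               # 3.25x, i.e. "more than tripled"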

And then along came Bill Clinton. The screams and squeals from the GOP about the “unsustainable debt” of nearly $3 trillion were loud, constant, and echoed incessantly by media from CBS to NPR.  Newt Gingrich rode the wave of “unsustainable debt” hysteria into power, as the GOP took control of the House for the first time lasting more than a term since 1930, even though the increase in our national debt under Clinton was only about 37%.

The GOP “debt freakout” was so widely and effectively amplified by the media that Clinton himself bought into it and began to cut spending, taking the axe to numerous welfare programs (“It’s the end of welfare as we know it” he famously said, and “The era of big government is over”).  Clinton also did something no Republican has done in our lifetimes: he supported several balanced budgets and handed a budget surplus to George W. Bush.

When George W. Bush was given the White House by the Supreme Court (Gore won the popular vote by over a half-million votes) he reverted to Reagan’s strategy and again nearly doubled the national debt, adding a trillion in borrowed money to pay for his tax cut for GOP-funding billionaires, and tossing in two unfunded wars for good measure, which also added at least (long term) another $5 to $7 trillion. 

There was not a peep about the debt from any high-profile in-the-know Republicans then; in fact, Dick Cheney famously said, essentially ratifying Wanniski’s strategy, “Reagan proved deficits don't matter. We won the midterms [because of those tax cuts]. This is our due.” Bush and Cheney raised the debt by 86% to over $10 trillion (although the war debt wasn’t put on the books until Obama entered office).

Then comes Democratic President Barack Obama, and suddenly the GOP is hysterical about the debt again. So much so that they convinced a sitting Democratic president to propose a cut to Social Security (the “chained CPI”). Obama nearly shot the Democrats’ biggest Santa Claus program. And, Republican squeals notwithstanding, Obama only raised the debt by 34%.

Now we’re back to a Republican president, and once again deficits be damned. Between their tax cut and the nearly-trillion dollar spending increase passed on February 8th, in the first year-and-a-month of Trump’s administration they’ve spent more stimulating the economy (and driving up debt by more than $2 trillion, when you include interest) than the entire Obama presidency. 

Consider the amazing story of where this strategy came from, and how the GOP has successfully kept their strategy from getting into the news; even generally well-informed writers for media like the Times and the Post – and producers, pundits and reporters for TV news — don’t know the history of what’s been happening right in front of us all for 37 years.

Republican strategist Jude Wanniski first proposed his Two Santa Clauses strategy in 1974, when Richard Nixon resigned in disgrace and the future of the Republican Party was so dim that books and articles were widely suggesting the GOP was about to go the way of the Whigs.  There was genuine despair across the Party, particularly when Jerry Ford began stumbling as he climbed the steps to Air Force One and couldn’t even beat an unknown peanut farmer from rural Georgia for the presidency.

Wanniski was tired of the GOP failing to win elections.  And, he reasoned, it was happening because the Democrats had been viewed since the New Deal as the Santa Claus party (taking care of people’s needs and the General Welfare), while the GOP, opposing everything from Social Security to Medicare to unemployment insurance, was widely seen as the party of Scrooge.

The Democrats, he noted, got to play Santa Claus when they passed out Social Security and Unemployment checks – both programs of the New Deal – as well as when their "big government" projects like roads, bridges, and highways were built, giving a healthy union paycheck to construction workers and making our country shine.

Democrats kept raising taxes on businesses and rich people to pay for things, which didn't seem to have much effect at all on working people (wages were steadily going up, in fact), and that added to the perception that the Democrats were a party of Robin Hoods, taking from the rich to fund programs for the poor and the working class.

Americans loved the Democrats back then. And every time Republicans railed against these programs, they lost elections.

Wanniski decided that the GOP had to become a Santa Claus party, too.  But because the Republicans hated the idea of helping working people, they had to figure out a way to convince people that they, too, could have the Santa spirit.  But what?

“Tax cuts!” said Wanniski.

To make this work, the Republicans would first have to turn the classical world of economics – which had operated on a simple demand-driven equation for seven thousand years – on its head. (Everybody understood that demand – aka “wages” – drove economies because working people spent most of their money in the marketplace, producing demand for factory output and services.)

In 1974 Wanniski invented a new phrase – "supply side economics" – and suggested that the reason economies grew wasn't because people had money and wanted to buy things with it but, instead, because things were available for sale, thus tantalizing people to part with their money.

To help, Arthur Laffer took that equation a step further with his famous napkin scribble. Not only was supply-side a rational concept, Laffer suggested, but as taxes went down, revenue to the government would go up!  Neither concept made any sense – and time has proven both to be colossal idiocies – but together they offered the Republican Party a way out of the wilderness.

Ronald Reagan was the first national Republican politician to fully embrace the Two Santa Clauses strategy.  He said straight out that if he could cut taxes on rich people and businesses, those tax cuts would cause them to take their surplus money and build factories, and that the more stuff there was supplying the economy the faster it would grow.

But Wanniski had been doing his homework on how to sell “voodoo” supply-side economics.

In 1976, he rolled out to the hard-right insiders in the Republican Party his "Two Santa Clauses" theory, which would enable the Republicans to take power in America for the next forty years.

Democrats, he said, had been able to be "Santa Clauses" by giving people things from the largesse of the federal government. From food stamps to new schools to sending a man to the moon, the people loved the “toys” the Democrats brought every year.

Republicans could do that, too, the theory went — spending could actually increase without negative repercussions. Plus, Republicans could be double Santa Clauses by cutting people's taxes!


For working people it would only be a small token – a few hundred dollars a year on average – but would be heavily marketed. And for the rich, which wasn’t to be discussed in public, it would amount to hundreds of billions of dollars in tax cuts. 

 
The rich, Reagan, Bush, and Trump told us, would then use that money to import or build more stuff to market, thus stimulating the economy and making average working people richer. (And, of course, they’d pass some of that money back to the GOP, like the Kochs giving Paul Ryan $500,000.00 right after he passed the last tax cut that gave them billions.)

There was no way, Wanniski said, that the Democrats could ever win again. They'd be forced into the role of Santa-killers by raising taxes, or anti-Santas by cutting spending. Either one would lose them elections.

When Reagan rolled out Supply Side Economics in the early 80s, dramatically cutting taxes while exploding spending, there was a moment when it seemed to Wanniski and Laffer that all was lost. The budget deficit exploded and the country fell into a deep recession – the worst since the Great Depression – and Republicans nationwide held their collective breath.

But David Stockman came up with a great new theory about what was going on – they were "starving the beast" of government by running up such huge deficits that Democrats would never, ever in the future be able to talk again about national health care or improving Social Security.

And this so pleased Alan Greenspan, the Fed Chairman, that he opened the spigots of the Fed, dropping interest rates and buying government bonds, producing a nice, healthy goose to the economy.

Greenspan further counseled Reagan to dramatically increase taxes on people earning under $37,800 a year by doubling the Social Security (FICA/payroll) tax, and then let the government borrow those newfound hundreds of billions of dollars off-the-books to make the deficit look better than it was.

Reagan, Greenspan, Wanniski, and Laffer took the national debt from under a trillion dollars in 1980 to almost three trillion by 1988, and back then a dollar could buy far more than it buys today. They and George HW Bush ran up more debt in eight years than every president in history, from George Washington to Jimmy Carter, combined.



Clinton was the anti-Santa Claus, and the result was an explosion of Republican wins across the country as Republican politicians campaigned on a platform of supply-side tax cuts and pork-rich spending increases. State after state turned red, and the Republican Party rose to take over, ultimately, every single lever of power in the federal government, from the Supreme Court to the White House.

Looking at the wreckage of the Democratic Party all around Clinton by 1999, Wanniski wrote a gloating memo that said, in part: "We of course should be indebted to Art Laffer for all time for his Curve... But as the primary political theoretician of the supply-side camp, I began arguing for the 'Two Santa Claus Theory' in 1974. If the Democrats are going to play Santa Claus by promoting more spending, the Republicans can never beat them by promoting less spending. They have to promise tax cuts…”

Two Santa Clauses had gone mainstream. Never again would Republicans worry about the debt or deficit when they were in office; and they knew well how to scream hysterically about it as soon as Democrats took power.

George W. Bush embraced the Two Santa Claus Theory with gusto, ramming through huge tax cuts – particularly a cut to the capital gains tax rate on people like himself who made their principal income from sitting around the mailbox waiting for their dividend or capital gains checks to arrive – and blew out federal spending.

Bush, with his wars, even out-spent Reagan, which nobody had ever thought would again be possible. And it all seemed to be going so well, just as it did in the early 1920s when a series of three consecutive Republican presidents cut income taxes on the uber-rich from over 70 percent to under 30 percent.

In 1929, pretty much everybody realized that instead of building factories with all that extra money, the rich had been pouring it into the stock market, inflating a bubble that — like an inexorable law of nature — would have to burst.

In reality, Bush’s tax cuts did what they have always done over the past 100 years — they initiated a bubble economy that would let the very rich skim the cream off the top just before the ceiling crashed in on working people. Just like today.

The Republicans got what they wanted from Wanniski's work. They held power for thirty years, made themselves trillions of dollars, and cut organized labor's representation in the workplace from around 25 percent when Reagan came into office to around 6 percent of the non-governmental workforce today.

Over time, and without raising the cap, Social Security will face an easily-solved crisis, and the GOP’s plan is to force Democrats to become the anti-Santa, yet again. If the GOP-controlled Congress continues to refuse to require rich people to pay into Social Security (any income over $128,000 is SS-tax-free), either benefits will be cut or the retirement age will have to be raised to over 70.
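
To make the cap concrete, here is a minimal Python sketch of a wage-capped payroll tax. The $128,000 cap is the article’s figure; the 6.2% rate is the standard employee share of the Social Security tax; the employer share and self-employment rules are ignored, so this is an illustration, not tax advice.

    from typing import Optional

    SS_RATE = 0.062      # standard employee share of the Social Security payroll tax
    WAGE_CAP = 128_000   # cap figure cited in the article

    def ss_tax(wages: float, cap: Optional[float] = WAGE_CAP) -> float:
        """Social Security tax owed on wages, with an optional wage cap."""
        taxable = wages if cap is None else min(wages, cap)
        return SS_RATE * taxable

    for wages in (50_000, 128_000, 500_000, 5_000_000):
        capped = ss_tax(wages)
        # The effective rate falls as income rises above the cap
        print(f"${wages:>9,}: pays ${capped:>8,.0f} ({capped / wages:.2%} of income); "
              f"without the cap it would be ${ss_tax(wages, cap=None):>10,.0f}")

Everything above the cap owes nothing, which is why the effective rate on very large incomes is a fraction of the rate paid on a median wage; raising or removing the cap changes only the min() in this sketch.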


When this happens, Democrats must remember Jude Wanniski, and accept neither the cut to disability payments nor the entree to Social Security “reform.” They must demand the “cap” be raised, as Bernie Sanders proposed and the Democratic Party adopted in its 2016 platform.

And, hopefully, some of our media will begin to call the GOP out on the Two Santa Clauses program. It’s about time that Americans realized the details of the scam that’s been killing wages and enriching billionaires for nearly four decades.” ~

https://www.alternet.org/right-wing/two-santa-clauses-or-how-gop-conned-america-nearly-40-years


 

“THROUGH THE EYE OF A NEEDLE” — CHRISTIANITY AND THE PROBLEM OF WEALTH

~ “One of the central embarrassments of Christianity arises from one of the most central errors of its founding figurehead. Jesus Christ was convinced that the next world — a radically different world from the observable reality of Roman Judea in which he found himself — was, as he continuously put it, “at hand.” He was the prophet of this change in the exact same way John the Baptist had been the prophet of his own coming — that is, as a roadside herald, trumpet in hand, declaring the coming of something extremely imminent. Jesus repeatedly tells his listeners that he is a divisive figure, an enemy of complacency. He repeatedly tells people they must choose sides, this dusty live-a-day world all around them, or the next world, which is just about to dawn and change everything.

The problem with this particular mistake (the world didn’t change, the kingdom of Heaven didn’t arrive, the Romans kept nailing troublemakers to scaffolding) is that it elicits some of Jesus’ most straightforward comments — none more so than Matthew 19:21, when the Master is confronted by a rich young man who is righteous and God-abiding (when he’s given a list of commandments, he comments that he’s been following them his whole life – in other words, crucially, he’s not a sinner). The young man asks what he must do to gain eternal life, and Jesus’ answer hits him right between the eyes: “If thou wilt be perfect, go and sell that thou hast, and give to the poor, and thou shalt have treasure in heaven: and come and follow me.”

The young man refuses and goes away disappointed, and that’s when Jesus utters his famous imprecation that it’s easier for a camel to pass through the eye of a needle than for a rich man to enter the kingdom of Heaven.

Hardly any rich Christians have wanted to do what their Savior explicitly commands them to do. The text from Matthew provides the title of Peter Brown’s dense, magnificent new book (with its gigantic sub-title), Through the Eye of a Needle: Wealth, the Fall of Rome, and the Making of Christianity in the West, 350-550 AD, and the subject — the way early Christians got around the embarrassment of not wanting to be poor — is explored in 500 pages of fascinating, engaging prose and 100 pages of close-packed and amazingly comprehensive notes. The conflict between the sacred calling of Christianity and the more mundane concerns of spes saeculi, the hope of advancement in this world, is here given an examination like it’s never had before, with money at the heart of it all.

Also at the heart of it all is that pivotal figure, St. Augustine, and readers who’ve already encountered Brown’s justly revered Augustine of Hippo will know to expect fine writing and fine insight into the figure who, more than anybody, tried to work out a theocratic framework that would allow his congregation to be wealthy if only they avoided avarice. Blatant double-talk like that would come in very handy to Christians of every subsequent century.

~ Augustine’s justification of wealth came at the right time. In a world that had been unexpectedly shaken by renewed civil war and by barbarian invasion, there was no point in denouncing the rich for the manner in which they had gained their wealth. Those whose wealth had survived the shocks of this new crisis were unlikely to feel guilty about what little of it was left to them. The radical critiques of wealth and the wealthy associated with the preachings of Ambrose and with the Pelagian De divitiis were out-of-date. Such radicalism had been the product of an age of affluence. It had played on the disquiet of the comfortable rich of the fourth-century age of gold. It had less effect on persons who now faced the prospect of losing everything.” ~

https://www.openlettersmonthly.com/book-review-through-the-eye-of-a-needle/


Oriana:

Since the world was about to end, it made sense to divest oneself of wealth. The end of the world certainly makes the pursuit of money not just irrelevant, but downright sinful. There was time only for acts of generosity and kindness.

But time kept passing, and the central promise of early Christianity turned out to be false — or, as true believers insist, delayed (indefinitely, it seems). Hence the need for a heavy spin on the question of wealth and the pesky problem of passing through the eye of a needle. And sure enough, such spin has been found — already by St. Augustine, himself no stranger to the comforts of wealth.

St. Augustine, 6th century fresco. A “doctor of the church,” he was a real “Dr. Spin.”


HOW CHRISTIANITY REALLY DIES




WHO IS MOST LIKELY TO GET BITTEN BY A DOG?

 
Dog bites are a problem. According to the American Veterinary Medical Association, 4.5 million Americans are bitten by dogs each year, and every day, nearly 1,000 individuals show up in hospital emergency rooms because of dog attacks. The annual cost of medical treatments for dog bites (including 27,000 reconstructive surgeries) is over $250 million, and insurance companies fork out $530 million a year in dog bite claims. Then there are the 26 Americans who were killed by dogs last year.

How Many Dog Bites?

But how many people are really bitten by dogs and who is most likely to be bitten by a dog? Researchers at the University of Liverpool realized that a lot of dog bite victims do not actually see a doctor. They figured that the best way to estimate rates of dog bites would be to ask everyone in a community if they had ever been bitten by a dog. Their results have just been published in the Journal of Epidemiology and Community Health, and there are some surprises. (You can read the full text of their article here.)

Led by Dr. Carri Westgarth of the University of Liverpool, the research team attempted to contact people living in all 1,280 households in a semi-rural town near Liverpool. While they did not get everyone, they did have a high degree of cooperation and were able to obtain information from 767 residents. In addition to questions about dog bites, the researchers also asked about basic demography (sex, age, etc.) and the participants took a short test which measures the well-known Big Five personality traits.

Here’s what the researchers found:

    25 percent of the participants had been bitten by a dog.
    Only one in three victims received medical attention.
    Men were nearly twice as likely to have been bitten as women.
    People who owned multiple dogs were three times more likely to be bitten than non-dog owners.
    Children are at higher risk: 44 percent of the bites occurred when the victim was younger than 16.
    In 55 percent of cases, the person had never before seen the dog that bit them.
    But the most interesting finding was related to personality: People with higher scores on the Big Five trait of emotional stability were 22% less likely to have been bitten by a dog than were individuals who were less emotionally stable.

What Is The Link Between Personality and Dog Bites?

This is the first study to link dog attacks to the personalities of victims. Low emotional stability is also called neuroticism, and it is associated with insecurity, fear, self-consciousness, anxiety, and being temperamental. But why is this personality trait related to dog bites? Neuroticism is linked to a slew of mental and physical health problems. These include drug and alcohol dependency, panic disorders, cardiovascular disease, asthma, and irritable bowel syndrome. In their article, Westgarth and her colleagues suggested it is possible that some unknown pattern of behavior in emotionally unstable people makes them especially prone to dog bites. But they also point out that other factors might be involved. For example, anxious people might be more likely to have nervous dogs. Or the causal arrow could even point in the other direction: being bitten by a dog could make people more fearful and anxious.
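
As a rough illustration of where a number like "22% less likely" comes from, here is a minimal Python sketch with fabricated survey responses and a crude two-group comparison. The published analysis presumably used regression adjusted for sex, age, and dog ownership; none of that is modeled here, and every number below is invented.

    import numpy as np
    import pandas as pd

    # Illustration only: fabricated responses, crude unadjusted comparison.
    rng = np.random.default_rng(1)
    n = 767  # respondents reported in the article

    df = pd.DataFrame({"stability": rng.uniform(1, 7, size=n)})  # emotional stability on an arbitrary 1-7 scale
    df["ever_bitten"] = rng.random(n) < (0.35 - 0.03 * df["stability"])  # bites invented to be rarer at high stability

    median = df["stability"].median()
    more_stable = df.loc[df["stability"] >= median, "ever_bitten"].mean()
    less_stable = df.loc[df["stability"] < median, "ever_bitten"].mean()

    rr = more_stable / less_stable
    print(f"Bite rate, more stable half: {more_stable:.1%}")
    print(f"Bite rate, less stable half: {less_stable:.1%}")
    print(f"Relative risk {rr:.2f}, i.e. roughly {1 - rr:.0%} lower")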

https://www.psychologytoday.com/blog/animals-and-us/201802/personality-affects-your-chances-being-attacked-dog


Oriana:

But the effect of “personality” isn't that large. People can be emotionally stable and still get bitten.

What should have gotten emphasis is that men are twice as likely to get bitten as women, and owners of multiple dogs three times as likely. Children are also at a higher risk.

Advice on how to prevent being bitten:

    “Don’t approach an unfamiliar animal.
    Do not run from a dog, panic or make loud noises.
    If an unfamiliar dog approaches you, remain motionless. Do not run or scream. Avoid direct eye contact.
    Don’t disturb a dog while they’re eating, sleeping, or taking care of their puppies.
    Allow a dog to sniff and smell you before you attempt to pet it. Afterward scratch the animal under the chin, not on the head.
    Report strays or dogs displaying strange behavior to your local animal control.
    If knocked over by a dog, roll into a ball and remain motionless. Be sure to cover your ears and neck with your hands and arms. Avoid eye contact and remain calm.
    Don’t encourage your dog to play aggressively.”

https://www.caninejournal.com/dog-bite-statistics/


The breed that does most biting: the chihuahua. Bulldogs and pit bulls come next. Of course chihuahua bites never killed anyone. That’s unfortunately not true about the larger breeds. Most deaths are caused by pit bulls — more than all the other breeds combined.



LEAFY GREENS LINKED TO SLOWER COGNITIVE DECLINE

 
The latest good news: A study recently published in Neurology finds that healthy seniors who had daily helpings of leafy green vegetables — such as spinach, kale and collard greens — had a slower rate of cognitive decline, compared to those who tended to eat little or no greens.

"The association is quite strong," says study author Martha Clare Morris, a professor of nutrition science at Rush Medical College in Chicago. She also directs the Rush Institute for Healthy Aging.

The research included 960 participants of the Memory and Aging Project. Their average age is 81, and none of them have dementia. Each year the participants undergo a battery of tests to assess their memory. Scientists also keep track of their eating habits and lifestyle habits.

To analyze the relationship between leafy greens and age-related cognitive changes, the researchers assigned each participant to one of five groups, according to the amount of greens eaten. Those who tended to eat the most greens comprised the top quintile, consuming, on average, about 1.3 servings per day. Those in the bottom quintile said they consume little or no greens.

After about five years of follow-up/observation, "the rate of decline for [those] in the top quintile was about half the decline rate of those in the lowest quintile," Morris says.
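
For the mechanically minded, a hedged Python sketch of the kind of quintile comparison behind "about half the decline rate": participants are grouped into fifths by daily servings and the average annual change in a cognitive score is compared across groups. The cohort below is invented; the real analysis used models adjusted for education, lifestyle, and overall health, which this ignores.

    import numpy as np
    import pandas as pd

    # Illustration only: invented cohort, not the Memory and Aging Project data.
    rng = np.random.default_rng(2)
    n = 960  # participants mentioned in the article

    df = pd.DataFrame({"greens_per_day": rng.gamma(shape=2.0, scale=0.4, size=n)})  # daily servings of leafy greens
    # Invented annual change in a composite cognitive score: negative = decline,
    # less negative for people who eat more greens
    df["annual_change"] = (-0.10
                           + 0.03 * df["greens_per_day"].clip(upper=1.5)
                           + rng.normal(0.0, 0.02, size=n))

    df["quintile"] = pd.qcut(df["greens_per_day"], 5,
                             labels=["Q1 (fewest greens)", "Q2", "Q3", "Q4", "Q5 (most greens)"])

    # Mean annual change per quintile: values closer to zero mean slower decline
    print(df.groupby("quintile", observed=True)["annual_change"].mean().round(3))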

So, what's the most convenient way to get these greens into your diet?

"My goal every day is to have a big salad," says Candace Bishop, one of the study participants. "I get those bags of dark, leafy salad mixes."

A serving size is defined as a half-cup of cooked greens, or a cup of raw greens.

Many factors play into healthy aging — this study does not prove that eating greens will fend off memory decline. With this kind of research, Morris explains, scientists can only establish an association — not necessarily causation — between a healthy diet and a mind that stays sharp.

Still, she says, even after adjusting for other factors that might play a role, such as lifestyle, education and overall health, "we saw this association [between greens and a slower rate of cognitive decline] over and above accounting for all those factors.”

Some prior research has pointed to a similar benefit. A study of women published in 2006 also found that high consumption of vegetables was associated with less cognitive decline among older women. The association was strongest with greater consumption of leafy vegetables and cruciferous vegetables — such as broccoli and cauliflower.

What might explain a benefit from greens?

Turns out, these vegetables contain a range of nutrients and bioactive compounds, including vitamins E and K, lutein, beta carotene and folate.

"They have different roles and different biological mechanisms to protect the brain," says Morris. More research is needed, she says, to fully understand their influence, but scientists know that consuming too little of these nutrients can be problematic.

For instance, "if you have insufficient levels of folate in your diet you can have higher levels of homocysteine," Morris says. This can set the stage for inflammation and a buildup of plaque, or fatty deposits, inside your arteries, which increases the risk of stroke. Research shows elevated homocysteine is associated with cognitive impairment among older adults.

Another example: Getting plenty of Vitamin E from foods in your diet can help protect cells from damage and also has been associated with better cognitive performance.

"So, when you eat leafy greens, you're eating a lot of different nutrients, and together they can have a powerful impact," Morris says.

https://www.npr.org/sections/health-shots/2018/02/05/582715067/eating-leafy-greens-daily-may-help-keep-minds-sharp



Oriana:

Don’t forget to put a generous amount of extra virgin olive oil on your leafy greens. The oil will actually help you absorb the micronutrients, besides having neuroprotective benefits of its own.

ending on beauty:

To be alive: not just the carcass
But the spark.
That's crudely put, but . . .

If we're not supposed to dance,
Why all this music?

    ~ Gregory Orr 

 
Downtown San Diego; Gwyn Henry