Saturday, November 20, 2021

FIRST BOOK BANNED BY PURITANS; WHY THE SECOND WAVE OF THE 1918 FLU WAS SO DEADLY; THE ONE PERSONALITY TRAIT THAT MOST DISTINGUISHES GIFTED PERSONS; IS THE BIBLE HISTORICALLY ACCURATE? HOW THE CRUSADES WERE “SOLD” AS AN ACT OF LOVE; IRON DEFICIENCY AND HEART DISEASE

Volcanic lightning; Sergio Tapiro

*
AT THE DOUBLE WATERFALL
Twin Lakes, near Mammoth, California

In Eastern Sierra, in the divine
ion-charged mountain air,
next to the white water

braiding and unbraiding,
a woman is smoking —
eyes closed as for a kiss,

a slow inhale, long ecstatic
exhale — giving herself
to the poison as to a lover.

“Life cannot offer
what drugs can offer,”
an expert declared. We don’t

have a prayer unless we are
artists, addicted to music,
the huge, merciless

music of the world.
O sparrow, sparrow, whose fall
is counted by God,

remind me always and now:
there is a price for bliss.
The waterfall’s fluent arms

embrace me, but the slap
of the water below
counts those who also tried

to make music and sank
like Icarus into brightness.  

~ Oriana

*
Oriana:

It always struck me as somewhat odd how Icarus is the subject of poems and visual arts, while his brilliant father gets no glory. I guess people love the romantic over-reacher, no matter how foolish, rather than a hard-working, practical engineer. 

Mary:

Yes, the "merciless music of the world " enacts a steep "price for bliss," and there are few who refuse to pay. I imagine those lab rats pulling the lever wired to their pleasure center until they starve and die. And that's physical pleasure, emotional pleasures, psychological pleasures,  the pleasure w take in beauty, as "artists, addicted to music," is even stronger, more compelling, more liable to be fatal. And yet without that music, without art, "we don't have a prayer.”… The music, the ecstasy, is our dangerous salvation.


 Hendrik Goltzius: Icarus, 1588 (from the series The Four Disgracers)

*

CONTINUAL PRACTICE

Oriana:

In Hamlet there is a little-noticed moment when Horatio expresses worry that Hamlet does not have sufficient skill at fencing to stand a chance in a sword fight. Hamlet replies, “I have been in continual practice.” It’s not a famous line, but for some reason it touches me to the core. I too have been in continual practice, the practice of paying attention and being astonished. I can’t quite say for what purpose, or have faith that a grand occasion to exercise those skills will ever arise. But purpose may be beside the point. At least once a day, I reflect on the paradoxes of the world and instantly I am in “immense amazement.”

“What use are you? In your writings there is nothing except
immense amazement.” ~ Milosz, “Consciousness”

And thus I remain if not in continual practice, then in continual astonishment.

*
I think that I am here, on this earth,
To present a report on it, but to whom I don’t know.
As if I were sent so that whatever takes place
Has meaning because it changes into memory.

~ Milosz, “Consciousness”

My quick response to the first two lines: “to your readers, silly.” But then two lines of sheer wisdom, and the reason I keep reading Milosz. Changing events into memory is also Keats’s “soul-making.”

Vladimir Kush, Sunrise

*
“I prefer winter and fall, when you feel the bone structure of the landscape — the loneliness of it, the dead feeling of winter. Something waits beneath it; the whole story doesn't show.” ~ Andrew Wyeth


*
“UNSTUCK IN TIME” — A DOCUMENTARY ABOUT KURT VONNEGUT

~ Early on in Kurt Vonnegut: Unstuck in Time, the beloved writer has returned to his Indianapolis high school. This is a familiar move in cinematic biography: Vonnegut is there at the behest of the filmmakers to reminisce about those few perfect years of youth before the Second World War, there to conjure the innocence that will soon encounter a cascade of tragedy.

It is an obvious device, a bit manipulative, even, but who cares: the images of teenage Vonnegut alongside his friends, virtuosic in their happiness, are woundingly poignant in light of what’s to come, and they should be. “The Second World War was fought by children,” he says, towards the visit’s end, as he approaches the school’s panic doors… Panic doors? Yes. Vonnegut, leaving melancholy behind, snaps into delight as he tells us his ancestor invented the easy-to-open safety bars now ubiquitous on institutional exits, and cackles as he demonstrates their use.

Here then, are all the elements of a Kurt Vonnegut novel: direct and disarming tenderness, joy surrounded by shadow, absurd coincidence, narrative digression, and—most importantly—the omnipresent feeling you might slip through time at any moment, as if it were a door you just happened to lean on.

*
Captured just before Christmas in 1944, the 22-year-old Vonnegut was in Dresden for the notorious Allied fire-bombing that killed 25,000 civilians; as a POW, Vonnegut was made to toil in the ruins of the blasted city, pulling bodies from the rubble in what he described as a “terribly elaborate Easter egg hunt.” The war, as it would for most of his generation, changed something in Vonnegut, and it was an experience he’d reckon with again and again for most of his life. Says the film’s co-director, Robert Weide: “It’s not like he’s just trying to get a bead on [Slaughterhouse-Five], it’s like he’s trying to purge the whole Dresden experience from his soul.” Indeed, Vonnegut’s writerly struggle to commit the story to paper, in what would become Slaughterhouse-Five (1969), is, if not heroic, truly epic.

His sixth novel, Slaughterhouse-Five, took Vonnegut years of untold rewrites, trying it out in first person, third person, as a play—sometimes he’d get halfway through a draft and scrap it to start all over again. In one version, a middle-aged Billy Pilgrim (the main character) gets a drunken crank call from Vonnegut the Author telling him he’s just a character in a book. Says Vonnegut, in mournful voiceover: “I would hate to tell you what this lousy little book cost in money and anxiety and time.”

Slaughterhouse-Five would establish Vonnegut as a major American writer of the 20th century, shifting him from a literary sci-fi cult figure to a household name—particularly if that household contained a teenager. “The current idol of the country’s sensitive and intelligent young people is 47 years old,” as one TV news anchor describes Vonnegut’s newfound fame.

One of those young fans was director Robert Weide, who discovered Vonnegut in 1975 as a 16-year-old in Fullerton, California. For Weide, the gateway drug was Breakfast of Champions, given to him by a high school English teacher (Valerie Stevenson, who makes a charming cameo and muses aloud that she’s “horrified” she actually assigned the book). This is what distinguishes Unstuck in Time from conventional literary biography: it is as much the story of a fan’s life as it is the story of a writer’s life.

Weide would go on to immerse himself in Vonnegut, at one point teaching a small class to his fellow high school seniors. After producing a documentary for PBS on the Marx Brothers, Weide made his move: in the summer of 1982 he wrote a letter to his hero, asking if he would be the subject of a documentary; to Weide’s delight, Vonnegut replied a month later, with a generosity characteristic of most of his fan interactions. He wrote: “I am honored by your interest in my work, and I will talk to you some, if you like, about making some sort of film based on it.”

It’s important, here, to note that this was 39 years ago. From 1988 until Vonnegut’s death in 2007, Weide would amass hundreds of hours of interview footage with the idol of his teenage years (who would become, over time, his dear friend). And though Weide says early on in the film that he doesn’t like documentaries that feature the documentarians themselves, he concedes that “When you take almost 40 years to make a documentary you owe some kind of an explanation.”

And what follows explains a lot.

Though Unstuck in Time follows a rough chronology based on its subject’s bibliography, its making-of metanarrative necessitates temporal jumps worthy of a Vonnegut story: footage of Weide’s wedding is paired with the renewal of vows decades later; Vonnegut, wry and cantankerous, appears onstage at 50, then 75, then 60; reels of Vonnegut’s children fly by, and they are teenagers, then fortysomethings, then older than their father was in the film’s first interviews. It can be a little dizzying (the film doesn’t provide a lot of temporal anchors) but its dual effect of compression and continuity is deeply moving, and is somehow illustrative of the hard-earned humanism Vonnegut never quite abandoned, even in the face of so much human cruelty. As Vonnegut does so often in his writing, so too does this documentary remind us: though our lives may be unbearably full of heartache and happiness, of contradiction, they are so very brief; as such, we are morally bound to pay attention to—and note—whichever small moments of grace and joy we are lucky enough to encounter.

Vonnegut’s daughters Edie and Nanette get the most screentime of his children, providing counterpoint, as a loving but skeptical chorus, to Weide’s fandom. We learn from them that the kindness that animates so much of Vonnegut’s writing, even at its darkest, wasn’t always readily available to those closest to him. When working—particularly in those lean years before the success of Slaughterhouse-Five—Vonnegut could be an absolute bear to be around, leaving all matters of the material world to his hugely supportive first wife, Jane, who comes across as a veritable saint. Even after Vonnegut left her, in the midst of Slaughterhouse’s astonishing success, eventually marrying the photographer Jill Krementz, Jane remained both a fan of his work, and a friend.

It’s at this point, though, that the advantages of Weide’s friendship become limitations. It seems clear that over the many years he spent documenting the life of their father, Weide developed a relationship of warmth and respect, if not outright friendship, with Vonnegut’s children. This perhaps suggests why Krementz is treated more as “the other woman” than as Vonnegut’s partner for the last thirty years of his life.

But if the purpose of a literary biography is to unpack the whys and whens and hows that animate a writer’s work, Unstuck in Time is a brilliant success. Like much of its subject’s writing, it is tender, smart, funny, candid, and dark. Crucially, it reminds us of what, for me at least, has been so important about Vonnegut’s writing: the idea that kindness is not the same as weakness, and that it is the institutions we create that do us the most harm, not one another.

Despite the unkindnesses wrought upon Vonnegut’s generation, and upon his own life—the Depression, the war, so much death—he gives us permission to believe there is goodness yet in this world, all around us, if only we choose to look. ~

https://lithub.com/a-fans-notes-the-kurt-vonnegut-documentary-40-years-in-the-making/

Mary:

The documentary on Vonnegut demonstrates a crucial force that shaped the experience of everyone living in the 20th century. The two world wars changed more than the maps — they put what it means to be human radically into question. The experience of the war was the primary force in the lives not just of Vonnegut, but of many, many soldiers, and not just of soldiers but of the millions of victims of the Holocaust, and the millions who witnessed its genocides. It left many fearful about what limits, if any, there were on human depravity. And finally, it stunned with the deadly power of its weapons, whose primary effect was mass casualties among noncombatant civilians: Dresden, Guernica, Hiroshima, Nagasaki, the wholesale slaughter of whole populations.

These horrors for many spelled the death of god, the failure of any orthodoxy in light of our own assumption of the godlike power to annihilate. Who could now rescue us from ourselves? That is the core of fear, not only in the Other, but in its reflection in ourselves.

*
MYLES STANDISH AS “CAPTAIN SHRIMP”? THE FIRST BOOK BANNED BY THE PURITANS

~ Thomas Morton, an English businessman, arrived in Massachusetts in 1624 with the Puritans, but he wasn’t exactly on board with the strict, insular, and pious society they had hoped to build for themselves. “He was very much a dandy and a playboy,” says William Heath, a retired professor from Mount Saint Mary’s University who has published extensively on the Puritans. Looking back, Morton and his neighbors were bound to butt heads sooner or later.

Within just a few short years, Morton established his own unrecognized offshoot of the Plymouth Colony, in what is now the town of Quincy, Massachusetts (the birthplace of presidents John Adams and John Quincy Adams). He revived forbidden old-world customs, faced off with a Puritan militia determined to quash his pagan festivals, and wound up in exile. He eventually sued and, like any savvy rabble-rouser should, got a book deal out of the whole affair.

Published in 1637, his New English Canaan mounted a harsh and heretical critique of Puritan customs and power structures that went far beyond what most New English settlers could accept. So they banned it—making it likely the first book explicitly banned in what is now the United States. A first edition of Morton’s tell-all—which, among other things, compares the Puritan leadership to crustaceans—recently sold at auction at Christie’s for $60,000.

The Puritans’ move across the pond was motivated by both religion and commerce, but Morton was there only for the latter reason, as one of the owners of the Wollaston Company. He loved what he saw of his new surroundings, later writing that Massachusetts was the “masterpiece of nature.” His business partner—slave-owning Richard Wollaston—moved south to Virginia to expand the company’s business, but Morton was already deeply attached to the land, in a way his more religious neighbors likely couldn’t understand. “He was extremely responsive to the natural world and had very friendly relations with the Indians,” says Heath, while “the Puritans took the opposite stance: that the natural world was a howling wilderness, and the Indians were wild men that needed to be suppressed.”

After Wollaston left, Morton enlisted the help of some brave recruits—both English and Native—to establish the breakoff settlement of Ma-Re Mount, also known as Merrymount, preserved today in the Quincy neighborhood and park of the same name. Morton essentially asked his neighbors, “What if we just throw [Wollaston] out and start our own utopian colony based on Plato’s Republic, and also as a society of the Native Americans?” explains Rhiannon Knol, a specialist in the Books & Manuscripts department at Christie’s in New York. “And that sounded a lot better to them.” Some of them, at least.

The Puritan authorities didn’t see Merrymount as a free-wheeling annoyance; they saw it as an existential threat. The problem wasn’t only that Morton was taking goods and commerce away from Plymouth, but that he was giving that business to the Native Americans, including trading guns to the Algonquins. With Plymouth’s monopoly dissolved and its perceived enemies armed, Morton had perhaps done more than anyone else to undermine the Puritan project in Massachusetts. Worse yet, in the words of Plymouth’s governor William Bradford, Morton condoned “dancing and frisking together” with the Native Americans—activities that were banned even without Native American participation. It was basically an early colonial version of Footloose. Governor Bradford nicknamed Morton the “Lord of Misrule,” and it’s not hard to imagine him wearing that title like a crown.

There could be no greater symbol of such misrule than Morton’s maypole. Reaching 80 feet into the air, the structure conjured all the vile, virile vices of Merry England that the Puritans had hoped to leave behind. Throughout medieval Europe, maypoles had been a popular installation for May Day (or Pentecost or midsummer, in some regions)—encouraging human fertility as the land itself sprang back to life after winter. Now that was a tradition that Morton could get behind, and he gladly called upon the residents of Merrymount to drink, dance, and frolic around the pole. The establishment of Merrymount had been a provocation, but Morton’s May Day celebrations meant war.


Maypole by Brueghel

During the 1628 festivities, a Puritan militia led by Myles Standish invaded Merrymount and chopped down the maypole. (The incident later inspired Nathaniel Hawthorne’s short story “The May-Pole of Merry Mount,” first published in 1832.) Morton was tried for supplying arms to the Natives, and expelled to an island off the coast of New Hampshire to be left for dead. Somehow, he managed to hitch passage on a ship back to England, where he sued the Massachusetts Bay Company. The trial provided him with the basis for his book, much of which was composed at London’s Mermaid Tavern with a little help from his friends, including famed poet and playwright Ben Jonson.

Heath is careful to stress that the book is not a literary masterwork, but he acknowledges that it has its moments. Knol says she was particularly struck by the nicknames Morton threw at his Puritan foes, whom he called “cruell Schismaticks.” It’s hard to know who got it worse between Standish and John Endecott, governor of the Massachusetts Bay Colony (Plymouth’s neighbor to the north): Endecott is known in the book as “Captaine Littleworth,” Standish as “Captaine Shrimp.”

Even more radical than his belittling appellations were Morton’s subversive policy ideas, which went so far as to recommend “demartializing” the colonies. Unsurprisingly, the Puritans were appalled. Bradford, Plymouth’s governor, called New English Canaan “an infamous and scurrilous book against many godly and chief men of the country, full of lies and slanders and fraught with profane calumnies against their names and persons and the ways of God.”

It’s likely that the book scandalized England as well. The book’s title page names Amsterdam as the place of publication rather than London—but that’s hard to believe, since the Amsterdam publisher named there was in fact a well-known purveyor of Puritan books. Knol says Amsterdam was likely a false imprint, listed to protect the actual publisher in London.

After publishing the book, Morton braved a venture back to his beloved Massachusetts, only to be turned right back around upon arrival. He tried to cross the Atlantic once again in 1643, and was this time exiled to Maine, where he died. His maypole may have been chopped down and his book banned, but Morton’s legacy lives on in Quincy, though sadly there’s no maypole in Merrymount Park. ~

https://getpocket.com/explore/item/america-s-first-banned-book-really-ticked-off-the-plymouth-puritans

Mary: LIKE WOODSTOCK BEING SHUT DOWN BY SOUTHERN BAPTISTS

The marvelous story about Morton and the Pilgrims, with the cutting down of his maypole, around which he had invited natives and settlers to "frisk" and "frolic," sounds like Woodstock being shut down by Southern Baptists. And then this man who sold guns to the natives comes to the idea of "demartializing" the colonies. I can see him, this "Lord of Misrule," joining in the escapades of Kesey and his Merry Pranksters, the banning of his book another feather in his cap, a badge of honor. It's such an American story, representative of a long chain of such stories, and such heroes, in American life and history. Even now, centuries later, we recognize the characters and plot instantly, and know where we stand.

Joe: THE GOD OF PUNISHMENT AND COVID

Is it possible that the conservative Christian reaction to the COVID-19 epidemic indicates the degree of Puritan influence on the United States? The Puritan settlers believed more in God’s punishment than in His love. Jonathan Edwards, a minister born in 1703, illustrated this in his 1741 sermon Sinners in the Hands of an Angry God.

As cultural descendants of the Puritans, today’s Christians demonstrate their lack of belief in God’s love by refusing vaccination. If they believed in His love, they would call the super-fast development of the vaccine a miracle. Then the refusal of vaccination would be seen as a rejection of God’s blessing, not as support of His vengeance.

Like the Puritans, the modern Fundamentalist Christian believes in God’s punishment, and those who contract COVID are sinners. The infected survive if He forgives them, and the unforgiven die. This inheritance of the Puritan belief system contributes to 70 percent of the unvaccinated being conservative Christian.

On the other hand, Asia is becoming the most Covid-vaccinated area in the world. The reason the Asian nations were behind was the unavailability of the vaccine. Recently, Japan reached an 80 percent vaccination rate. Could it be that the popularity of Buddhism has something to do with this? Buddhists practice honoring their ancestors.

They do this by respecting all forms of life, including their human neighbors. Maybe that is why mask-wearing is so acceptable in Asia. It may partially explain the high rate of vaccination in the East. It seems that behaving as if your neighbor’s health is essential is easier if you honor your ancestors than if you believe your neighbors are sinners.


Oriana:

Buddhists also believe in science. They have no problem grasping how vaccines work, and how the protection you gain is amply worth what minor side effects you may experience (I experienced none, which startled me).

Thank you for the insight that believing in the god of punishment as opposed to the god of love may be an important factor in a person’s attitude toward vaccination. And yes, we know that vaccination rates are lowest where bible literalism prevails. These are the same people who, if they happen to be in a hospital for any reason and end up having a successful surgery or any other treatment, give all the credit to god and the “power of prayer,” and none to the doctors and nurses.


*

AMERICA AS A RETRO-PROGRESSIVE NATION

We’re a paradoxically retro-progressive nation, on the pragmatic cutting edge but founded by uptight reactionary Puritans, nostalgic for less pragmatic religious dogmas (a recipe for lie buying). It's like if Silicon Valley had been founded by Druids. ~ Jeremy Sherman


*
IS OURS THE TRUE AGE OF ANXIETY?

~ Though anxiety disorders are now considered the most common type of psychiatric disorders in the United States – affecting up to 31 per cent of adults at some point in their lifetime – anxiety hasn’t always stood out as a well-recognized mental health problem. In the US and elsewhere, the concept of anxiety has evolved over time in ways that have better allowed it to be seen as a major clinical concern.

Historically, anxiety has often been mixed with other symptoms in a way that has masked its significance. For example, in American Nervousness (1881), the American neurologist George Miller Beard outlined the causes of what he regarded as an epidemic level of fear in US culture. His specific diagnosis was ‘neurasthenia’. A significant part of the diagnosis included anxiety, but it also featured a variety of other psychological and physical symptoms, catalogued in long lists, including insomnia, heart palpitations and back pain. In part through Beard’s promotional efforts and popular writings, neurasthenia achieved considerable cultural cachet.

The idea resonated as rapid social change was underway; Beard laid much of the blame at the feet of Thomas Edison and his inventions. As a medical diagnosis, though, neurasthenia quickly fell out of favor. Medical professionals began to doubt the seriousness of nervousness per se; they were inclined to regard other symptoms associated with it, such as cardiovascular complaints, as worthier of treatment. Beard himself contributed to this decline by arguing that this nervousness would subside as American culture grew more sophisticated.

In the following decades, Sigmund Freud did much to renew the profile of anxiety, starting with an attempt to cleave it from the remnants of neurasthenia. He saw promise in the study of fear and anxiety, casting fear (and, by extension, anxiety) as the problem whose solution would throw a floodlight on mental life writ large. His followers took up this mantle, too, describing, among other things, some of the social circumstances that increase anxiety.

In the middle of the 20th century, anxiety would again re-emerge as a significant concern and a cultural idiom of unease, the lens artists and authors used to talk about change. Perhaps the most famous statement in this regard was the book-length poem The Age of Anxiety (1947) by W H Auden – his locution persists to this day – though the poet was hardly alone in casting anxiety as the signature disorder of the era. In books such as The Meaning of Anxiety (1950) by Rollo May, psychologists and others saw much to worry about in the US, and the special value of talking about anxiety.

Anxiety also fit well within an emerging medical ecology. Miltown, an anxiolytic drug, was launched in the 1950s, ushering in a new era of seriously treating ‘nerve problems’, including the ‘nervous breakdown’ (of which anxiety was thought to be a key symptom). As a minor tranquilizer, Miltown was fast-acting and effective in calming nerves in a way that could seem miraculous. Advertisements focused on its ability to treat stress and anxiety, encouraging consumers to see their everyday unease in a new way – as a treatable condition.

But the mid-century age of anxiety would be short-lived. The rise and fall of Miltown was quick. Although much of the backlash focused on the drug itself, especially the potential for abuse, resistance ultimately circled back to the more elementary question of whether anxiety ought to be regarded as a problem to be treated with medication. Why treat something that was so common – and perhaps simply reflected the strains of an era in which anxiety really ought to be common? Evolutionary accounts, after all, begin with the idea that fear and anxiety enhance fitness by alerting people to potential threats.

The creation of psychiatric disorder categories in manuals like the DSM is not merely an academic matter. The concepts that psychiatrists create tend to assume a life of their own once they are enshrined in diagnostic instruments and articulated as scientific tools. Due in part to the criteria provided in the DSM, the late 20th century could rightly be regarded as the age of depression. With the ascent of selective serotonin reuptake inhibitors (SSRIs) such as Prozac starting in the late 1980s, major depressive disorder assumed a special significance. By the DSM’s criteria, many people met the threshold for a major depressive disorder. And SSRIs seemed especially well suited to treating it. Around the time Prozac came on the market, the total number of doctors’ office-based visits per year for depression increased significantly, going from 10.99 million in 1985 to an average of 20.43 million in 1993 and 1994. It’s not that instances of depression suddenly multiplied. Instead – in an echo of the advent of early anti-anxiety drugs – depression was suddenly regarded and talked about by more people as a treatable medical condition, rather than as an everyday trouble that could be ignored.

Of course, anxiety never went away. Depression might have seemed ubiquitous, but people in the late 20th century hardly had less to be anxious about or more to be depressed about. Indeed, anxiety disorders frequently co-occur with major depression. Therapists have certainly recognized the importance of anxiety as a dimension of suffering in their patients: alleviating a patient’s fear and anxiety is the better part of making them well, even if targeting depression with SSRIs is the focus of much treatment. Furthermore, reported anxiety, as a basic emotional experience, began rising across birth cohorts during the 20th century – an increase that, I argue in my book Unnerved (2021), is due in part to changes in the family, a rise in income inequality and economic uncertainty, and increasingly fraught social attachments. If we’re in the midst of a new age of anxiety, the designation might very well be accurate this time.

As a therapeutic target, anxiety has risen in prominence again both because it is common and because it lends itself well to the 21st-century treatment armamentarium. Anxiety medications tend to be fast-acting, and the use of benzodiazepines, a powerful class of medications first prescribed decades ago, has increased over time in outpatient settings.

Anxiety is also responsive to other kinds of treatment. It can be treated effectively with cognitive behavioral therapy (CBT), for example. And CBT can be administered in a variety of settings, without necessarily requiring extensive training. In school settings, for instance, anxiety interventions can be administered effectively by nurses and teachers. Patients presenting psychiatric symptoms to doctors have increasingly been presenting anxiety.

Some of the long-standing uncertainty about whether anxiety is worthy of treatment has been resolved as well. It is increasingly clear that even though a degree of anxiety might be natural and perhaps even essential to a well-adapted species, anxiety also has negative consequences with respect to role performance and well-being. Anxiety can undermine school performance in children and adolescents. Anxious workers are often less productive. Anxious athletes might not perform up to their own expectations. Over time, anxiety could lead to worse physical health, too.

Although there remains considerable stigma attached to most psychiatric disorders, for anxiety, it is different and shifting. It is possible to regard anxiety as both treatable and not at all unusual. Much of the enduring stigma surrounding psychiatric disorders centers on a fear of violence. But in the mind of the public, anxiety is less associated with violence than, for instance, schizophrenia is. 

The lingering stigma related to anxiety is partly due to the idea that it reflects weakness. An old theme that has fed into the ambivalence over treating anxiety as a clinical problem is that people can overcome it with the right mindset and might even learn from it. Yet there is growing acceptance that psychiatric disorders are largely genetic in origin, diminishing the stigma once attached to disorders that were previously regarded as a matter of weak character. It is likely easier now to admit to others that one is anxious.

The idea of an age of anxiety is rarely intended to be a specific psychiatric claim. But there has been an increase in the seriousness with which anxiety is taken as a clinical concern among both the public and treatment providers. If we’re more anxious now than we used to be, we’re also more inclined to treat our anxiety. In that sense, the age of anxiety has been slow in coming but it might be here. ~

https://psyche.co/ideas/after-many-false-starts-this-might-be-the-true-age-of-anxiety

Oriana:

So it’s not so much what’s really happening inside the patient’s brain, as what medicine thinks can be treated. When the expensive new type of antidepressants arrived, we lived in an age of depression. Now, with benzodiazepines restored to grace, we again appear to suffer chiefly from anxiety.

I suspect that cognitive-behavioral therapy could also be useful for some cases of anxiety. So many of our fears and worries are irrational. Or, even when they are rational, if there is nothing we can do about the situation, then there is still no point in feeling anxious. It's usually pointless to ruminate. Let's do something useful instead. I've learned the hard way and over many years that the best antidote is deep breathing and action, action, action.

Mary:

In regard to anxiety and depression, the perception that "treatability" matters more than what the individual is experiencing is a telling insight. The trend has been to expand the category of “illness,” steadily increasing the pool of patients who must be, or can be, treated. Things once seen as nothing more than personality traits, bad habits, or personal choices are increasingly labeled treatable illnesses, widening the scope and influence of the dispensers of treatment, from psychiatrists to social workers and therapists. And of course, the biggest winner in this game is the pharmaceutical industry.

Are there more reasons for anxiety and depression in the modern world? In terms of the speed and rate of social and technological change, which can be dizzying, I think yes. But even more significant may be the loss of strong family and community structure and support. Lives become more and more fragmented; people don’t stay in one place, keep one job, or even one career. They don’t maintain ties with the family group, now separated in both time and space.

As much as we feel threatened by violence now, past ages were at least as violent, if not more so, so that's not new. I think what’s new is the fragmentation, the loss of strong family ties and support.

*

“The men the American people admire most extravagantly are the most daring liars; the men they detest most violently are those who try to tell the truth.” ~ H.L. Mencken

Oriana:

I'm not a Mencken fan, but now and then he hits on something true or close to it. Well, lying and politics — this is universal, not specifically American. However, Americans may be somewhat more likely to admire “daring liars” because the country has the dimension of myth so strongly embedded in it. To Jewish immigrants America (and not Palestine) was the “Goldene medine” — the “golden country.”

But I agree with Jeremy: the number one factor in the propensity to buy lies is probably religiosity, and especially the religious extremists’ yearning for a religious utopia that would of course be a nightmare for the rest of us. (The Puritans were an example; now it's the Evangelicals.)

New York, Mulberry Street, 1900

*
THE MYSTERIES OF THE HUMAN BODY

The human body is a treasure trove of mysteries, one whose workings still confound doctors and scientists. It's not an overstatement to say that every part of your body is a miracle. Here are fifty facts about your body, some of which will leave you stunned…

1. It’s possible for your body to survive without a surprisingly large fraction of its internal organs. Even if you lose your stomach, your spleen, 75% of your liver, 80% of your intestines, one kidney, one lung, and virtually every organ from your pelvic and groin area, you wouldn't be very healthy, but you would live.

2. During your lifetime, you will produce enough saliva to fill two swimming pools. Actually, saliva is more important than you realize. If your saliva cannot dissolve something, you cannot taste it.

3. The largest cell in the human body is the female egg, and the smallest is the male sperm. The egg is actually the only cell in the body that is visible to the naked eye.
 
4. The strongest muscle in the human body is the tongue and the hardest bone is the jawbone.

5. Human feet have 52 bones, accounting for one quarter of all the human body's bones.

6. Feet have 500,000 sweat glands and can produce more than a pint of sweat a day.

7. The acid in your stomach is strong enough to dissolve razor blades. The reason it doesn't eat away at your stomach is that the cells of your stomach wall renew themselves so frequently that you get a new stomach lining every three to four days.

8. The human lungs contain approximately 2,400 kilometers (1,500 mi) of airways and 300 to 500 million hollow cavities, having a total surface area of about 70 square meters, roughly the same area as one side of a tennis court. Furthermore, if all of the capillaries that surround the lung cavities were unwound and laid end to end, they would extend for about 992 kilometers. Also, your left lung is smaller than your right lung to make room for your heart.

9. Sneezes regularly exceed 100 mph, while coughs clock in at about 60 mph.

10. Your body gives off enough heat in 30 minutes to bring half a gallon of water to a boil.

11. Your body has enough iron in it to make a nail 3 inches long.

12. Earwax production is necessary for good ear health. It protects the delicate inner ear from bacteria, fungus, dirt and even insects. It also cleans and lubricates the ear canal.

13. Everyone has a unique smell, except for identical twins, who smell the same.

14. Your teeth start growing 6 months before you are born. This is why one out of every 2,000 newborn infants has a tooth when they are born.

15. A baby's head is one-quarter of its total length, but by the age of 25 will only be one-eighth of its total length. This is because people's heads grow at a much slower rate than the rest of their bodies.

16. Babies are born with 300 bones, but by adulthood the number is reduced to 206. Some of the bones, like skull bones, get fused into each other, bringing down the total number.

17. It's not possible to tickle yourself. This is because when you attempt to tickle yourself you are totally aware of the exact time and manner in which the tickling will occur, unlike when someone else tickles you.

18. Less than one third of the human race has 20-20 vision. This means that two out of three people cannot see perfectly.

19. Your nose can remember 50,000 different scents. But if you are a woman, you have a better sense of smell than men do, and will keep it throughout your life.

20. The human body is estimated to have 60,000 miles of blood vessels.

21. The three things pregnant women dream most of during their first trimester are frogs, worms and potted plants. Scientists have no idea why this is so, but attribute it to the growing imbalance of hormones in the body during pregnancy.

22. The life span of a human hair is 3 to 7 years on average. Every day the average person loses 60-100 strands of hair. But don't worry, you must lose over 50% of your scalp hairs before it is apparent to anyone.

23. The human brain cell can hold 5 times as much information as an encyclopedia. Your brain uses 20% of the oxygen that enters your bloodstream, and is itself made up of 80% water. Though it interprets pain signals from the rest of the body, the brain itself cannot feel pain.

24. The tooth is the only part of the human body that can't repair itself. (apparently not true for small cavities)

25. Your eyes are always the same size from birth but your nose and ears never stop growing. [Oriana: this is also true for the prostate gland]

26. By 60 years of age, 60% of men and 40% of women will snore.

27. We are about 1 cm taller in the morning than in the evening, because during normal daily activities the cartilage in our knees and other areas slowly compresses.

28. The brain operates on the same amount of power as a 10-watt light bulb, even while you are sleeping. In fact, the brain is much more active at night than during the day.
 
29. Nerve impulses to and from the brain travel as fast as 170 miles per hour. Neurons continue to grow throughout human life. Information travels at different speeds within different types of neurons.

30. People who dream more often and more vividly have, on average, a higher Intelligence Quotient.

31. The fastest growing nail is on the middle finger.

32. Facial hair grows faster than any other hair on the body. This is true for men as well as women.

33. There are as many hairs per square inch on your body as on a chimpanzee.

34. A human fetus acquires fingerprints at the age of three months.

35. By the age of 60, most people will have lost about half their taste buds.

36. About 32 million bacteria call every inch of your skin home. But don't worry, a majority of these are harmless or even helpful bacteria.

37. The colder the room you sleep in, the higher the chances are that you'll have a bad dream.

38. Human lips have a reddish color because of the great concentration of tiny capillaries just below the skin.

39. Three hundred million cells die in the human body every minute.

40. Like fingerprints, every individual has a unique tongue print that can be used for identification.

41. A human head remains conscious for about 15 to 20 seconds after it has been decapitated.

42. It takes 17 muscles to smile and 43 to frown.

43. Humans can go longer without food than without sleep. Provided there is water, the average human could survive a month to two months without food, depending on body fat and other factors. Sleep-deprived people, however, start experiencing radical personality and psychological changes after only a few sleepless days. The longest recorded time anyone has ever gone without sleep is 11 days, at the end of which the subject was awake, but stumbled over words, hallucinated and frequently forgot what he was doing.

44. The most common blood type in the world is Type O. The rarest blood type, A-H or Bombay blood, named for the place of its discovery, has been found in fewer than a hundred people since it was discovered.

45. Every human spent about half an hour as a single cell after being conceived. Shortly afterward, the cells begin rapidly dividing, forming the components of a tiny embryo.

47. Your ears secrete more earwax when you are afraid than when you aren't.

48. Koalas and primates are the only animals with unique fingerprints.

49. Humans are the only animals to produce emotional tears.

50. The human heart creates enough pressure to squirt blood 30 feet in the air.

Oriana:

The strongest muscle is the tongue? That works metaphorically as well . . . 

I removed #46, which stated: "Right-handed people live, on average, nine years longer than left-handed people do." I looked it up: this is apparently not true, but rather a statistical error.

Leonardo: Anatomical Sketches of the arm

*
HOW THE CRUSADES WERE PRESENTED AS AN ACT OF LOVE

~ Seven hundred years after Augustine’s conversion, Pope Urban stood before a packed hall at Clermont—in what was then the Duchy of Aquitaine—filled with dozens, probably hundreds, of the most powerful and influential people in Europe, including archbishops, abbots, knights, and noblemen from across the region. It was November 1095, and if he was going to make an impact, now was the time to do it. Augustine’s ideas about love and emotion still dominated Christianity at the time, and Urban, a skilled rhetorician, knew how to use them. He began his speech.

Most beloved brethren: Urged by necessity, I, Urban, by the permission of God chief bishop and prelate over the whole world, have come to these parts as an ambassador with a divine admonition to you, the servants of God.

That the “beloved brethren” term was used as a way to get everyone on the same page is important. Urban and his chroniclers were tapping into the crowd’s brotherly, uti [meaning approximately "utilitarian"] love for fellow Christians who were up against a common enemy. The most explicit example of this is found in the account by Balderic of Dol. After listing the horrors inflicted by Islamic forces on his fellow Christians living at the edges of the Byzantine Empire—they were flogged, driven from their homes, enslaved, robbed of their churches, and so on—Urban is said to have addressed the crowd directly:

“You should shudder, brethren, you should shudder at raising a violent hand against Christians; it is less wicked to brandish your sword against Saracens. It is the only warfare that is righteous, for it is charity to risk your life for your brothers.”

The word used in most of the primary sources for charity was caritas — that Augustinian right sort of love. But Urban and his chroniclers were also tapping into the direct and powerful frui love [enjoyment; love of something for its own sake] you should feel for Christ himself. Robert the Monk had Urban use this notion to pry people away from those they loved on earth:

But if you are hindered by love of children, parents and wives, remember what the Lord says in the Gospel, “He that loveth father or mother more than me, is not worthy of me . . . Every one that hath forsaken houses, or brethren, or sisters, or father, or mother, or wife, or children, or lands for my name’s sake shall receive an hundredfold and shall inherit everlasting life.”

The chance of everlasting life in the presence of God was the key. Linking it to a frui love for Christ and fellow Christians was powerful.

Much of the crusader rhetoric tapped into uti love for the Holy Land itself. The ever-polemical Balderic of Dol echoed Psalm 79:1 by having Urban say:

We weep and wail, brethren, alas, like the Psalmist, in our inmost heart! We are wretched and unhappy, and in us is that prophecy fulfilled: “God, the nations are come into thine inheritance; thy holy temple have they defiled; they have laid Jerusalem in heaps; the dead bodies of thy servants have been given to be food for the birds of the heaven, the flesh of thy saints unto the beasts of the Earth. Their blood have they shed like water round about Jerusalem, and there was none to bury them.”

This uti love for the Holy Land wasn’t just a legend built up by Crusade writers. Islamic accounts of the Crusades put similar words into the mouths of crusaders. Writing about the Islamic reconquest of Jerusalem in 1187, the Persian scholar Imad ad-Din al-Isfahani reported hearing terrified crusaders prepare for a final battle with the words:

“We love this place, we are bound to it, our honor lies in honoring it, its salvation is ours, its safety is ours, its survival is ours. If we go far from it we shall surely be branded with shame and just censure, for here is the place of the crucifixion and our goal, the altar and the place of sacrifice.”

The impetus for crusading seems to have been a deep sense of uti love in the Augustinian sense. The problem is, Augustine didn’t mean “love only thy neighbors whom you agree with,” and that raises a question about how an 11th-century man might reconcile violence against others with neighborly love. Thankfully, at least from the crusader’s point of view, Augustine also had an answer for that in his concept of a just war.

Augustine saw war as an act of correction, a bit like disciplining a child who has misbehaved. He wrote:

“They who have waged war in obedience to the divine command, or in conformity with His laws, have represented in their persons the public justice or the wisdom of government, and in this capacity have put to death wicked men; such persons have by no means violated the commandment, ‘Thou shalt not kill.’”

As long as you are fighting for the right reasons—that is, for God—and not for personal gain or hatred, then the war is just. More than that, it can be an act of uti love. Killing a sinner is to remove sin from the face of the earth, and that, to Augustine, was a good thing. It was also a good thing for the crusaders. ~

https://lithub.com/how-christian-leaders-made-the-case-for-the-crusades-as-an-act-of-love/

Oriana:

During the Middle Ages, the struggle for the possession of Jerusalem was strictly between Islam and Christianity. Somehow no one suggested that the city be returned to the Jews — even though both sides knew that Jerusalem was the capital of ancient Israel. Somehow that didn't count, and the descendants of the ancient Israelites were not thought to have any right whatsoever to their former homeland. Such are the peculiar ironies of the place also known as the Holy Land.

*
“HOMES” VERSUS “TENTS”: IS THE BIBLE HISTORICALLY ACCURATE?

~ Erez Ben-Yosef wasn’t interested in the Bible. His field was paleomagnetism, the investigation of changes in the earth’s magnetic field over time, and specifically the mysterious “spike” of the tenth century B.C., when magnetism leapt higher than at any time in history for reasons that are not entirely understood. With that in mind, Ben-Yosef and his colleagues from the University of California, San Diego unpacked their shovels and brushes at the foot of a sandstone cliff and started digging.

They began to extract pieces of organic material—charcoal, a few seeds, 11 items all told—and dispatched them to a lab at Oxford University for carbon-14 dating. They didn’t expect any surprises. The site had already been conclusively dated by an earlier expedition that had uncovered the ruins of a temple dedicated to an Egyptian goddess, linking the site to the empire of the pharaohs, the great power to the south. This conclusion was so firmly established that the local tourism board, in an attempt to draw visitors to this remote location, had put up kitschy statues in “walk like an Egyptian” poses. 

But when Ben-Yosef got the results back from Oxford they showed something else—and so began the latest revolution in the story of Timna. The ongoing excavation is now one of the most fascinating in a country renowned for its archaeology. Far from any city, ancient or modern, Timna is illuminating the time of the Hebrew Bible—and showing just how much can be found in a place that seems, at first glance, like nowhere.

If you were a rising young archaeologist in the 1970s, you were skeptical of stories about Jewish kings. The ascendant critical school in biblical scholarship, sometimes known by the general name “minimalism,” was making a strong case that there was no united Israelite monarchy around 1000 B.C.—this was a fiction composed by writers working under Judean kings perhaps three centuries later. The new generation of archaeologists argued that the Israelites of 1000 B.C. were little more than Bedouin tribes, and David and Solomon, if there were such people, weren’t more than local sheikhs. This was part of a more general movement in archaeology worldwide, away from romantic stories and toward a more technical approach that sought to look dispassionately at physical remains.

In biblical archaeology, the best-known expression of this school’s thinking for a general audience is probably The Bible Unearthed, a 2001 book by the Israeli archaeologist Israel Finkelstein, of Tel Aviv University, and the American scholar Neil Asher Silberman. Archaeology, the authors wrote, “has produced a stunning, almost encyclopedic knowledge of the material conditions, languages, societies, and historical developments of the centuries during which the traditions of ancient Israel gradually crystallized.” Armed with this interpretative power, archaeologists could now scientifically evaluate the truth of biblical stories. An organized kingdom such as David’s and Solomon’s would have left significant settlements and buildings—but in Judea at the relevant time, the authors wrote, there were no such buildings at all, or any evidence of writing. In fact, most of the saga contained in the Bible, including stories about the “glorious empire of David and Solomon,” was less a historical chronicle than “a brilliant product of the human imagination.”

At Timna, then, there would be no more talk of Solomon. The copper mines were reinterpreted as an Egyptian enterprise, perhaps the one mentioned in a papyrus describing the reign of Ramses III in the 12th century B.C.: “I sent forth my messengers to the country of Atika, to the great copper mines which are in this place,” the pharaoh says, describing a pile of ingots he had placed under a balcony to be viewed by the people, “like wonders.”

The new theory held that the mines were shut down after Egypt’s empire collapsed in the civilizational cataclysm that hit the ancient world in the 12th century B.C., perhaps because of a devastating drought. This was the same crisis that saw the end of the Hittite Empire, the famed fall of Troy, and the destruction of kingdoms in Cyprus and throughout modern-day Greece. Accordingly, the mines weren’t even active at the time Solomon was said to exist. Mining resumed only a millennium later, after the rise of Rome. “There is no factual and, as a matter of fact, no ancient written literary evidence of the existence of ‘King Solomon’s Mines,’” wrote Beno Rothenberg, the archaeologist who had led the earlier expedition at the site.

That was the story of Timna when Erez Ben-Yosef showed up in 2009. He had spent the previous few years excavating at another copper mine, at Faynan, on the other side of the Jordanian border, at a dig run by the University of California, San Diego and Jordan’s Department of Antiquities.

The dig quickly took an unexpected turn. Having assumed they were working at an Egyptian site, Ben-Yosef and his team were taken aback by the carbon-dating results of their first samples: around 1000 B.C. The next batches came back with the same date. At that time the Egyptians were long gone and the mine was supposed to be defunct—and it was the time of David and Solomon, according to biblical chronology. “For a moment we thought there might be a mistake in the carbon dating,” Ben-Yosef recalled. “But then we began to see that there was a different story here than the one we knew.”

Accommodating himself to the same considerations that would have guided the ancient mining schedule, Ben-Yosef comes to dig with his team in the winter, when the scorching heat subsides. The team includes scientists trying to understand the ancient metallurgical arts employed here and others analyzing what the workers ate and wore. They’re helped by the remarkable preservation of organic materials in the dry heat, such as dates, shriveled but intact, found 3,000 years after they were picked.

A few years ago the team produced one of those rare archaeology stories that migrates into pop culture: The bones of domesticated camels, they found, appear in the layers at Timna only after 930 B.C., suggesting that the animals were first introduced in the region at that time. The Bible, however, describes camels many centuries earlier, in the time of the Patriarchs—possibly an anachronism inserted by authors working much later. The story was picked up by Gawker (“The Whole Bible Thing Is B.S. Because of Camel Bones, Says Science”) and made it into the CBS sitcom “The Big Bang Theory” when Sheldon, a scientist, considers using the finding to challenge his mother’s Christian faith.

In the past decade, Ben-Yosef and his team have rewritten the site’s biography. They say a mining expedition from Egypt was indeed here first, which explained the hieroglyphics and the temple. But the mines actually became most active after the Egyptians left, during the power vacuum created by the collapse of the regional empires. A power vacuum is good for scrappy local players, and it’s precisely in this period that the Bible places Solomon’s united Israelite monarchy and, crucially, its neighbor to the south, Edom.

The elusive Edomites dominated the reddish mountains and plateaus around the mines. In Hebrew and other Semitic languages, their name literally means “red.” Not much is known about them. They first appear in a few ancient Egyptian records that characterize them, according to the scholar John Bartlett in his authoritative 1989 work Edom and the Edomites, “as bellicose by nature, but also as tent-dwellers, with cattle and other possessions, able to travel to Egypt when necessity arose.” They seem to have been herdsmen, farmers and raiders. Unfortunately for the Edomites, most of what we do know comes from the texts composed by their rivals, the Israelites, who saw them as symbols of treachery, if also as blood relations: the father of the Edomites, the Bible records, was no less than redheaded Esau, the twin brother of the Hebrew patriarch Jacob, later renamed Israel. With the Egyptian empire out of the picture by 1000 B.C., and no record of Israelite activity nearby, “The most logical candidate for the society that operated the mines is Edom,” says Ben-Yosef. 

But archaeologists had found so few ruins that many doubted the existence of any kingdom here at the time in question. There were no fortified cities, no palaces, not even anything that could be called a town. The Edom of Solomon’s time, many suspected, was another fiction dreamed up by later authors.

But the dig at the Faynan copper mines, which were also active around 1000 B.C., was already producing evidence for an organized Edomite kingdom, such as advanced metallurgical tools and debris. At Timna, too, the sophistication of the people was obvious, in the remains of intense industry that can still be seen strewn around Slaves’ Hill: the tons of slag, the sherds of ceramic smelting furnaces and the tuyères, discarded clay nozzles of the leather bellows, which the smelter, on his knees, would have pumped to fuel the flames. 

These relics are 3,000 years old, but today you can simply bend down and pick them up, as if the workers left last week. (In an animal pen off to one corner, you can also, if so inclined, run your fingers through 3,000-year-old donkey droppings.) The smelters honed their technology as decades passed, first using iron ore for flux, the material added to the furnace to assist in copper extraction, then moving to the more efficient manganese, which they also mined nearby.

The archaeologists found the bones of fish from, astonishingly, the Mediterranean, a trek of more than 100 miles across the desert. The skilled craftsmen at the furnaces got better food than the menial workers toiling in the mine shafts: delicacies such as pistachios, lentils, almonds and grapes, all of which were hauled in from afar. 

A key discovery emerged in a Jerusalem lab run by Naama Sukenik, an expert in organic materials with the Israel Antiquities Authority. When excavators sifting through the slag heaps at Timna sent her tiny red-and-blue textile fragments, Sukenik and her colleagues thought the quality of the weave and dye suggested Roman aristocracy. But carbon-14 dating placed these fragments, too, around 1000 B.C., when the mines were at their height and Rome was a mere village.

In 2019, Sukenik and her collaborators at Bar-Ilan University, working a hunch, dissolved samples from a tiny clump of pinkish wool found on Slaves’ Hill in a chemical solution and analyzed them using a high-performance liquid chromatography device, which separates a substance into its constituent parts. She was looking for two telltale molecules: monobromoindigotin and dibromoindigotin. Even when the machine confirmed their presence, she wasn’t sure she was seeing right. The color was none other than royal purple, the most expensive dye in the ancient world. Known as argaman in the Hebrew Bible, and associated with royalty and priesthood, the dye was manufactured on the Mediterranean coast in a complex process involving the glands of sea snails. People who wore royal purple were wealthy and plugged into the trade networks around the Mediterranean. If anyone was still picturing disorganized or unsophisticated nomads, they now stopped. “This was a heterogeneous society that included an elite,” Sukenik told me. And that elite may well have included the copper smelters, who transformed rock into precious metal using a technique that may have seemed like a kind of magic.

Purple wool, a symbol of wealth, around 1000 B.C.

More pieces of the puzzle appeared in the form of copper artifacts from seemingly unrelated digs elsewhere. In the Temple of Zeus at Olympia, Greece, a 2016 analysis of three-legged cauldrons revealed that the metal came from the mines in the Arava Desert, 900 miles away. And an Israeli study published this year found that several statuettes from Egyptian palaces and temples from the same period, such as a small sculpture of Pharaoh Psusennes I unearthed in a burial complex at Tanis, were also made from Arava copper. The Edomites were shipping their product across the ancient world.

It stands to reason, then, that a neighboring kingdom would make use of the same source—that the mines could have supplied King Solomon, even if these weren’t exactly “King Solomon’s mines.” But did Solomon’s kingdom even exist, and can archaeology help us find out? Even at its height, Timna was never more than a remote and marginal outpost. But it’s on these central questions that Ben-Yosef’s expedition has made its most provocative contribution. 

Looking at the materials and data he was collecting, Ben-Yosef faced what we might call the Timna dilemma. What the archaeologists had found was striking. But perhaps more striking was what no one had found: a town, a palace, a cemetery or homes of any kind. And yet Ben-Yosef’s findings left no doubt that the people operating the mines were advanced, wealthy and organized. What was going on?

The mining operation, in Ben-Yosef’s interpretation, reveals the workings of an advanced society, despite the absence of permanent structures. That’s a significant conclusion in itself, but it becomes even more significant in biblical archaeology, because if that’s true of Edom, it can also be true of the united monarchy of Israel. Biblical skeptics point out that there are no significant structures corresponding to the time in question. But one plausible explanation could be that most Israelites simply lived in tents, because they were a nation of nomads. In fact, that is how the Bible describes them—as a tribal alliance moving out of the desert and into the land of Canaan, settling down only over time. (This is sometimes obscured in Bible translations. In the Book of Kings, for example, after the Israelites celebrated Solomon’s dedication of the Jerusalem Temple, some English versions record that they “went to their homes, joyful and glad.” What the Hebrew actually says is they went to their “tents.”) These Israelites could have been wealthy, organized and semi-nomadic, like the “invisible” Edomites. Finding nothing, in other words, didn’t mean there was nothing. Archaeology was simply not going to be able to find out.

The veteran Israeli archaeologist Aren Maeir, of Bar-Ilan University, who has spent the last 25 years leading the excavation at the Philistine city of Gath (the hometown, according to the Bible, of Goliath), and who isn’t identified with either school, told me that Ben-Yosef’s findings made a convincing case that a nomadic people could achieve a high level of social and political complexity. He also agreed with Ben-Yosef’s identification of this society as Edom. Still, he cautioned against applying Ben-Yosef’s conclusions too broadly in order to make a case for the accuracy of the biblical narrative. “Because scholars have supposedly not paid enough attention to nomads and have over-emphasized architecture, that doesn’t mean the united kingdom of David and Solomon was a large kingdom—there’s simply no evidence of that on any level, not just the level of architecture.”

A visitor walking through the eerie formations of the Timna Valley, past the dark tunnel mouths and the enigmatic etchings, is forced to accept the limits of what we can see even when we are looking carefully. We like to think that any mystery will yield in the end: We just have to dig deeper, or build a bigger magnifying glass. But there is much that will always remain invisible.

What Ben-Yosef has produced isn’t an argument for or against the historical accuracy of the Bible but a critique of his own profession. Archaeology, he argues, has overstated its authority. Entire kingdoms could exist under our noses, and archaeologists would never find a trace. Timna is an anomaly that throws into relief the limits of what we can know. The treasure of the ancient mines, it turns out, is humility. ~

https://www.smithsonianmag.com/history/archaeological-dig-reignites-debate-old-testament-historical-accuracy-180979011/

Timna arches. Deuteronomy describes Israel as a “land out of whose hills you can dig copper.”

*
THE PERSONALITY TRAIT THAT MOST DISTINGUISHES GIFTED INDIVIDUALS

~ According to the Davidson Institute, “profoundly gifted” people exhibit the following tendencies: rapid comprehension, intuitive understanding of the basics, a tendency toward complexity, the need for precision, high expectations, divergent interests—and a quirky sense of humor. They usually show “asynchronous development,” being remarkably ahead in some areas while average or behind in others. It’s hard to know where they fit in, and educational settings typically are not designed to accommodate their differences. Especially for younger children, youthful appearance clashes with advanced ability, making it harder for some teachers to be responsive.

While many things contribute to giftedness, including various types of intelligence, genetic factors, and upbringing, one key area of interest is personality. Do gifted people look different in terms of personality compared to “non-gifted” individuals? In the journal High Ability Studies, researchers Ogurlu and Özbey (2021) conduct a meta-analysis of the literature on personality and giftedness to see where the Big 5 personality traits of Extraversion, Conscientiousness, Openness to Experience, Neuroticism and Agreeableness fit in.

They reviewed multiple databases to find research articles meeting stringent criteria to include in their pooled analysis, whittling 103 citations down to a final group of 13 high-quality studies for review. They identified 83 factors related to giftedness, age, gender, and personality in the final pooled sample of almost 8,000 people, including 3,244 gifted individuals.

Using meta-analytic statistical methods, they compared personality measures between gifted and non-gifted groups to see which traits significantly correlated with giftedness. There were no significant differences between the groups for Agreeableness, Extraversion, Conscientiousness, or Neuroticism. Openness to Experience, however, was significantly higher among the gifted, with a moderately strong pooled effect size. In addition, they found that age, gender, individual study sample, and geographical location did not account for the relationship between Openness and giftedness.
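
For readers curious what “pooling” an effect across 13 studies involves: each study contributes a standardized mean difference (gifted vs. non-gifted), weighted by its precision, and a random-effects model allows the true effect to vary between studies. Below is a minimal sketch in Python of the standard DerSimonian-Laird procedure; the per-study numbers are invented for illustration and are not taken from Ogurlu and Özbey’s paper.

    import math

    # Hypothetical per-study effects: (standardized mean difference g, variance of g)
    # for Openness, gifted vs. non-gifted. These numbers are made up for illustration.
    studies = [(0.70, 0.02), (0.30, 0.03), (0.65, 0.05), (0.25, 0.04)]

    g = [e for e, _ in studies]
    w = [1 / v for _, v in studies]          # fixed-effect weights: inverse variances
    fixed = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)

    # DerSimonian-Laird estimate of the between-study variance (tau^2)
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, g))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)

    # Random-effects weights add tau^2 to each study's variance
    w_re = [1 / (v + tau2) for _, v in studies]
    pooled = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))

    print(f"pooled g = {pooled:.2f}, "
          f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")

A pooled g in the neighborhood of 0.5 is what researchers conventionally call a moderate-to-strong effect.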

Openness to Experience is a key component of intelligence, contributing to creativity and the capacity to consider multiple options and perspectives in approaching life, solving problems, and understanding complex situations. Openness fits with the observed proclivity gifted people have for complexity and divergent thinking, and the remarkable and sometimes astonishing knack gifted people have for seeing things others would never notice or even imagine. Not to mention the quirky sense of humor, which can be a double-edged sword.

Another important implication of this study is that while gifted people are at times stereotyped as having awkward or maladaptive personalities, less-social traits, including lower extraversion, lower agreeableness, and higher neuroticism, were not correlated with giftedness.

Conscientiousness, interestingly, was not associated with giftedness, although it is independently associated with performance in work and academic settings. Being gifted does not guarantee success, but it contributes when properly wielded.

Though correlation is not causation, it is tempting to wonder whether one could increase Openness. Research suggests that it is possible to change personality in desired directions, and many enriched educational approaches include pedagogy designed to cultivate imagination, creativity, and lateral thinking. Can adults choose to broaden their horizons or to keep a narrow view, or is that choice itself a function of Openness? External motivations, such as dating someone more open-minded or wanting to advance professionally, might lead individuals to try new things more than they would if left to their own devices.

Future research can test interventions to learn whether open-mindedness, if desired, can be acquired. Research on giftedness matters: it can dispel the myths and stigma that keep gifted individuals from thriving across the lifespan, help develop the resources society needs to benefit from these individuals, inform educational policy and practice, and deepen our understanding of the causes of, and remedies for, underachievement. ~

https://www.psychologytoday.com/us/blog/experimentations/202111/one-personality-trait-distinguishes-gifted-people


*
JESUS: THE STRANGEST METAMORPHOSIS

Oriana:

This is an example of how culture can completely twist the original messages of a religion. It’s as if Christian nationalists were completely unaware that Jesus was a pacifist and a champion of the poor. The reversal appears to be complete.

*
NO COMMANDMENT TO BELIEVE IN GOD

Some years ago I was startled when I realized that there is no commandment to believe in god. The commandments concern conduct, not belief. "No other gods before me" meant the actual gods of the region, Asherah (Yahweh's wife -- "But the Mother went with them" [i.e. with Adam and Eve]), Isis, Baal, the Hellenistic gods, etc. Yahweh had to be #1. You could worship Baal AFTER having sacrificed to Yahweh, but not before. A Jewish professor pointed that out to me: "no other gods BEFORE me" does not exclude gods AFTER me. But there is no commandment that says "Thou shalt believe in me." Words for "belief" and "religion" didn't even exist in archaic Hebrew, nor did "mind" or "thought" or "imagine." Again, what counted was conduct.

Asherah

Oriana:

I agree that if religion disappeared we would still have all kinds of problems. But with religion out of the way, especially the part that justifies violence and the subjugation of women, we would have less insanity to deal with, less passionate viciousness, less killing in the hope of being rewarded in paradise -- and more mental space for common sense . . . maybe even a shimmer of understanding that we are all human, and can help one another to make this world closer to paradise.

*
BUT AT LEAST IT INSPIRED ART

Dürer: Saint Jerome in his study, 1514. What wonderful detail! Note the lion, St. Jerome's pet ever since he pulled a thorn from the lion's paw.

St. Jerome's other pet, the skull; painting by Caravaggio, 1605

*
“We all have two lives. The second begins when you realize you only have one.” ~ attributed to Confucius

*
IRON DEFICIENCY IN MIDDLE AGE LINKED TO HEART DISEASE

~ Approximately 10% of new coronary heart disease cases occurring within a decade of middle age could be avoided by preventing iron deficiency, suggests a study published in ESC Heart Failure, a journal of the European Society of Cardiology (ESC).

“This was an observational study and we cannot conclude that iron deficiency causes heart disease,” said study author Dr. Benedikt Schrage of the University Heart and Vasculature Center in Hamburg, Germany. “However, evidence is growing that there is a link and these findings provide the basis for further research to confirm the results.”

Previous studies have shown that in patients with cardiovascular diseases such as heart failure, iron deficiency was linked to worse outcomes including hospitalizations and death. Treatment with intravenous iron improved symptoms, functional capacity, and quality of life in patients with heart failure and iron deficiency enrolled in the FAIR-HF trial. Based on these results, the FAIR-HF 2 trial is investigating the impact of intravenous iron supplementation on the risk of death in patients with heart failure.

The current study aimed to examine whether the association between iron deficiency and outcomes was also observed in the general population.

The study included 12,164 individuals from three European population-based cohorts. The median age was 59 years and 55% were women. During the baseline study visit, cardiovascular risk factors and comorbidities such as smoking, obesity, diabetes and cholesterol were assessed via a thorough clinical assessment including blood samples.

Participants were classified as iron deficient or not according to two definitions: 1) absolute iron deficiency, which only includes stored iron (ferritin); and 2) functional iron deficiency, which includes iron in storage (ferritin) and iron in circulation for use by the body (transferrin).

Dr. Schrage explained: “Absolute iron deficiency is the traditional way of assessing iron status but it misses circulating iron. The functional definition is more accurate as it includes both measures and picks up those with sufficient stores but not enough in circulation for the body to work properly.”
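
To make the two definitions concrete, here is a minimal sketch in Python. The cutoff values are illustrative assumptions based on commonly used clinical thresholds, not figures taken from the study itself.

    def absolute_iron_deficiency(ferritin_ug_l: float) -> bool:
        # Absolute deficiency: depleted iron stores (low ferritin) only.
        # The 100 µg/L cutoff is an illustrative assumption.
        return ferritin_ug_l < 100

    def functional_iron_deficiency(ferritin_ug_l: float, transferrin_sat_pct: float) -> bool:
        # Functional deficiency: low stores, OR adequate stores but too little
        # circulating iron (low transferrin saturation) for the body to use.
        # The 20% cutoff is likewise an illustrative assumption.
        return ferritin_ug_l < 100 or transferrin_sat_pct < 20

    # A person with adequate stores but little circulating iron is missed by
    # the absolute definition and caught by the functional one.
    print(absolute_iron_deficiency(150))           # False
    print(functional_iron_deficiency(150, 15))     # True

This is exactly the gap Dr. Schrage describes: the functional definition catches people whose stores look sufficient but who lack usable iron in circulation.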

Participants were followed up for incident coronary heart disease and stroke, death due to cardiovascular disease, and all-cause death. The researchers analyzed the association between iron deficiency and incident coronary heart disease, stroke, cardiovascular mortality, and all-cause mortality after adjustments for age, sex, smoking, cholesterol, blood pressure, diabetes, body mass index, and inflammation. Participants with a history of coronary heart disease or stroke at baseline were excluded from the incident disease analyses.

At baseline, 60% of participants had absolute iron deficiency and 64% had functional iron deficiency. During a median follow-up of 13.3 years there were 2,212 (18.2%) deaths. Of these, a total of 573 individuals (4.7%) died from a cardiovascular cause. Incident coronary heart disease and stroke were diagnosed in 1,033 (8.5%) and 766 (6.3%) participants, respectively.

Functional iron deficiency was associated with a 24% higher risk of coronary heart disease, 26% raised risk of cardiovascular mortality, and 12% increased risk of all-cause mortality compared with no functional iron deficiency. Absolute iron deficiency was associated with a 20% raised risk of coronary heart disease compared with no absolute iron deficiency, but was not linked with mortality. There were no associations between iron status and stroke.

The researchers calculated the population attributable fraction, which estimates the proportion of events in 10 years that would have been avoided if all individuals had the risk of those without iron deficiency at baseline. The models were adjusted for age, sex, smoking, cholesterol, blood pressure, diabetes, body mass index, and inflammation. Within a 10-year period, 5.4% of all deaths, 11.7% of cardiovascular deaths, and 10.7% of new coronary heart disease diagnoses were attributable to functional iron deficiency.

“This analysis suggests that if iron deficiency had been absent at baseline, about 5% of deaths, 12% of cardiovascular deaths, and 11% of new coronary heart disease diagnoses would not have occurred in the following decade,” said Dr. Schrage.
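
As a rough sanity check, the population attributable fraction can be approximated from the figures already quoted using Levin’s classic formula, PAF = p(RR − 1) / (p(RR − 1) + 1), where p is the prevalence of the exposure and RR the relative risk. The study’s own estimates come from adjusted models, so this back-of-the-envelope version won’t match them exactly; the sketch below, in Python, only shows the logic.

    def paf(prevalence: float, relative_risk: float) -> float:
        # Levin's formula: the share of cases that would not have occurred
        # if no one had been exposed, assuming the risk ratio is causal.
        excess = prevalence * (relative_risk - 1)
        return excess / (excess + 1)

    # 64% of participants had functional iron deficiency, and their coronary
    # heart disease risk was 24% higher.
    print(f"{paf(0.64, 1.24):.1%}")   # ~13.3%, in the ballpark of the reported 10.7%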

“The study showed that iron deficiency was highly prevalent in this middle-aged population, with nearly two-thirds having functional iron deficiency,” said Dr. Schrage. “These individuals were more likely to develop heart disease and were also more likely to die during the next 13 years.” ~

https://www.escardio.org/The-ESC/Press-Office/Press-releases/Iron-deficiency-in-middle-age-is-linked-with-higher-risk-of-developing-heart-disease

Oriana:

Eggs, red meat, liver and other organ meats are the best sources of well-absorbed heme iron. Seafood and poultry are also good sources.

Plants such as spinach and lentils provide non-heme iron. Heme iron is absorbed and utilized 3-4 times more efficiently than non-heme iron.

Calcium inhibits the absorption of both heme and non-heme iron. Vitamin C increases the absorption of iron, as does apple cider vinegar. Cinnamon and ginger also increase iron absorption.

~ Runners tend to struggle with anemia and iron deficiency at a slightly higher prevalence than other athletes, but why is that? Anemia, or iron deficiency anemia, is fairly common in runners, and even more so in female runners. This is partly due to blood loss during menstruation, but it can also be driven by what is called “foot strike hemolysis”: each time your foot hits the ground on a run, tiny blood vessels break and red blood cells are destroyed, so iron is lost from the system. Iron is also lost through sweat.

But there is another, and less well known reason that iron deficiency and anemia can occur. And it can hit anyone, athlete or not.

A new area of focus and research is the anti-nutrient PHYTATE, or IP6. Phytate binds strongly to iron, zinc, calcium and magnesium and pulls them out of your body before you have a chance to absorb them.

Those of us who consume large amounts of plant foods (grains, beans, nuts and seeds, and all products made from them) may be unknowingly taking in large amounts of phytate, which then impairs the body’s ability to absorb iron from food. Vegetarians, vegans, and anyone who eats large amounts of pasta, whole grains, crackers, breads, beans, nuts, and nut butters or other plant-based foods are at risk. The World Health Organization estimates that about 2 billion people worldwide are anemic, and that one third of all women of reproductive age are anemic. Much of this can be attributed to diets high in phytate (grain-, corn- or soy-based). ~ (oops, I seem to have lost the link)

*

WHY THE SECOND WAVE OF THE 1918 FLU PANDEMIC WAS MUCH WORSE

~ The three teenagers—two boys and a girl—could not have known what clues their lungs would one day yield. All they could have known, or felt, before they died in Germany in 1918 was their flu-ravaged lungs failing them, each breath getting harder and harder. Tens of millions of people like them died in the flu pandemic of 1918; they happened to be the three whose lungs were preserved by a farsighted pathologist.

A century later, scientists have now managed to sequence flu viruses from pea-size samples of the three preserved lungs. Together, these sequences suggest an answer to one of the pandemic’s most enduring mysteries: Why was the second wave, in late 1918, so much deadlier than the first wave, in the spring? These rediscovered lung samples hint at the possibility that the virus itself changed to better infect humans.

This might sound familiar. The no-longer-so-novel coronavirus is also adapting to its human host. With modern tools, scientists are tracking the virus’s evolution in real time and finding mutations that have made the virus better at infecting us. More than 1.4 million coronavirus genomes have now been sequenced. But the database for the 1918 flu is much smaller—so much so that the comparison feels unfair. This new study brings the number of complete 1918 flu genomes to a grand total of three, plus some partial genomes.

Hundred-year-old lung tissue is incredibly hard to find. Sébastien Calvignac-Spencer, a virologist at the Robert Koch Institute, in Berlin, came across the samples in this newest study in a stroke of luck. A couple of years ago, he decided to investigate the collections of the Berlin Museum of Medical History of the Charité. He wasn’t looking for anything in particular, but he soon stumbled upon several lung specimens from 1918, a year he of course recognized as a notable one for respiratory disease. Despite the flu pandemic’s notoriety, the virus that caused it is still poorly understood. “I thought, Well, okay, so it’s right here in front of you. Why don’t you give it a try?” he told me. Why not try to sequence influenza from these lungs? (This work is not dangerous: The chemically preserved lung specimens do not contain intact or infectious virus; sequencing picks up just fragments of the virus’s genetic material.)

Calvignac-Spencer and his colleagues ultimately tested 13 lung specimens and found evidence of flu in three. One was from a 17-year-old girl who died in Munich sometime in 1918. The two others were from teenage soldiers who both died in Berlin on June 27, 1918. 

The team was able to recover a complete flu-virus genome from the 17-year-old girl’s lung tissue—only the third ever found. The two other full 1918 flu genomes both came from the United States, from the lungs of a woman buried in Alaska and from a paraffin-wax-embedded lung sample of a soldier who died in New York. With another genome in hand, the researchers moved to investigate how they differed. Several changes showed up in the flu’s genome-replication machinery, a potential evolutionary hot spot because better replication means a more successful virus. The team then copied just the replication machinery of the 17-year-old’s virus—not the entire virus—into cells and found it was only half as active as that of the flu virus found in Alaska.

The obvious caveats should apply here: tiny sample size, the limits of extrapolating from test tube to human body. The exact date of the girl’s death in 1918 is also unknown, but this finding hints at the possibility that the virus’s behavior did change during the pandemic. Scientists have long speculated about why the 1918 pandemic’s second wave was deadlier than the first. Patterns of human behavior and seasonality could explain some of the difference—but the virus itself might have changed too.

The lungs of the two young soldiers in Berlin provide another clue. The teenagers’ June 1918 deaths were squarely in the pandemic’s first wave. These two samples yielded only partial genomes, but the team was able to reconstruct enough to home in on changes in nucleoprotein, one of the proteins that make up the virus’s replication machinery. Nucleoproteins act like scaffolds for the virus’s gene segments, which wind around the protein like a spiral staircase. They are also extremely distinctive, which can be a weakness: the human immune system is very good at recognizing and sabotaging them.

Indeed, the 1918 flu virus’s nucleoprotein seems to have mutated between the first and second waves to better evade the human immune system. The first-wave viruses’ nucleoproteins looked a bit like those in flu viruses that infect birds—which makes sense because scientists suspect that the 1918 flu originated in birds. But bird viruses are attuned to bird bodies. “When it jumps to humans, the virus is not evolved to be optimally resistant” to the human immune system, Jesse Bloom, a virologist at Fred Hutchinson Cancer Research Center, in Seattle, told me. Bloom and others have identified specific mutations that make the nucleoprotein better at resisting the human immune system. The first-wave flu viruses did not have them, but the second-wave ones did, possibly because they had had the time to adapt to infecting humans.

Unfortunately, many historical samples have been lost as pathology collections have fallen out of fashion over the past century. “If we had started these kinds of studies in the ’60s, we would have had no problems finding thousands and thousands of specimens,” Calvignac-Spencer said. “And now we’re really fighting to assemble a collection of 20.” He’s been in touch with more than 50 museum collections around the world in the hunt for more pandemic-flu samples. He recently found one from Australia, but the work is slow. Calvignac-Spencer has also looked for other viruses, including measles, which he and his colleagues previously found in a 100-year-old lung from the same medical collection in Berlin.

The further back in time researchers must go, the harder the samples are to find—but Bloom told me he’s especially intrigued by the possibility of finding pre-1918 flu genomes in the archives. When the 1918 pandemic swept through the world, it apparently completely replaced whatever flu existed before. Its modern-day descendants continue to infect us today as seasonal flu. In this way, the 1918 flu is familiar to us and our immune systems. What came before is still a mystery.

https://www.theatlantic.com/science/archive/2021/05/pandemic-virus-mutations-1918-flu/618972/?utm_campaign=the-atlantic&utm_medium=social&utm_source=facebook&fbclid=IwAR3QoVW4CfP8YCEngMlBzDPv9-lzSsM578mkHPQTT8ZhoatDh5QR9KZZ0Uo

And of course there's a "noser" here, her nostrils exposed. The whole point is to breathe through the mask, blocking the virus from entry through the nose -- or if not entirely blocking, then at least significantly reducing the viral load.

*

ending on beauty:

And God said to me: Write.
Leave the cruelty to kings.
Without that angel barring the way to love,
there would be no bridge for me
into time.

~ Rainer Maria Rilke


Creation and Expulsion from Paradise; Giovanni di Paolo, 1445
