Friday, November 26, 2021


Dante Gabriel Rossetti: Pandora, 1869

Forget the shining city on the hill, or the new
Jerusalem, Eden, Promised Land—you’re reading from
the wrong text, wrong translation, the non-canonical
source, the exquisite forgery stuffed in the turkey-
shaped urn. This is what the disembarking Pilgrims found:
a place of Briars and Thorns, of Troubles and Sorrows,
Valley of the Shadow of Death, where God is no more
a cause of our sin than the sun of a Dunghill’s stench.
Life is indeed strenuous, but the more desperate
the situation, then the more actions it evokes.
We are like soldiers landed in a hostile country
whose commander has burned their ships behind them and told
them to eat up their enemies or drink up the sea.

~ Leonard Kress, American Literary Review

I’ve posted this poem before, maybe even more than once. It goes straight to the heart of the issue: the early Puritans as religiously deluded colonists who thought of themselves as the New Chosen People coming to take possession of the New Canaan. The Elect, while the rest of humanity was predestined for hellfire. The cruelty and arrogance are beyond words.

On the other hand, the very real difficulties the settlers encountered meant a high mortality rate. For a while, this was indeed the Valley of the Shadow of Death, a place of Briars and Thorns. 

By the way, the so-called Pilgrims who came on the Mayflower never referred to themselves as mere Pilgrims. That's a later appellation. Those whom we now call the Pilgrims saw themselves as Saints.

“Our great American philosopher William James has said - 'We have as many personalities as there are people who know us.' To which I would add 'We have no personalities unless there are people who know us. Unless there are people we hope to convince that we deserve to exist.’”~ Joyce Carol Oates, A Widow's Story


~ The Pilgrims and the Wampanoags did indeed share a harvest celebration together at Plymouth in fall 1621, but that moment got forgotten almost immediately, overwritten by the long history of the settlers’ attacks on their Indigenous neighbors.

In 1841, a book that reprinted the early diaries and letters from the Plymouth colony recovered the story of that three-day celebration in which ninety Indigenous Americans and the English settlers shared fowl and deer. This story of peace and goodwill among men who by the 1840s were more often enemies than not inspired Sarah Josepha Hale, who edited the popular women’s magazine Godey’s Lady's Book, to think that a national celebration could ease similar tensions building between the slave-holding South and the free North. She lobbied for legislation to establish a day of national thanksgiving.

And then, on April 12, 1861, southern soldiers fired on Fort Sumter, a federal fort in Charleston Harbor, and the meaning of a holiday for giving thanks changed.

Southern leaders wanted to destroy the United States of America and create their own country, based not in the traditional American idea that “all men are created equal,” but rather in its opposite: that some men were better than others and had the right to enslave their neighbors. In the 1850s, convinced that society worked best if a few wealthy men ran it, southern leaders had bent the laws of the United States to their benefit, using it to protect enslavement above all.

In 1860, northerners elected Abraham Lincoln to the presidency to stop rich southern enslavers from taking over the government and using it to cement their own wealth and power. As soon as he was elected, southern leaders pulled their states out of the Union to set up their own country. After the firing on Fort Sumter, Lincoln and the fledgling Republican Party set out to end the slaveholders’ rebellion.

The early years of the war did not go well for the U.S. By the end of 1862, the armies still held, but people on the home front were losing faith. Leaders recognized the need both to acknowledge the suffering and to keep Americans loyal to the cause. In November and December, seventeen state governors declared state thanksgiving holidays.

New York Governor Edwin Morgan’s widely reprinted proclamation about the holiday reflected that the previous year “is numbered among the dark periods of history, and its sorrowful records are graven on many hearthstones.” But this was nonetheless a time for giving thanks, he wrote, because “the precious blood shed in the cause of our country will hallow and strengthen our love and our reverence for it and its institutions…. Our Government and institutions placed in jeopardy have brought us to a more just appreciation of their value.”

The next year Lincoln got ahead of the state proclamations. On July 15, he declared a national day of Thanksgiving, and the relief in his proclamation was almost palpable. After two years of disasters, the Union army was finally winning. Bloody, yes; battered, yes; but winning. At Gettysburg in early July, Union troops had sent Confederates reeling back southward. Then, on July 4, Vicksburg had finally fallen to U.S. Grant’s army. The military tide was turning.

President Lincoln set Thursday, August 6, 1863, for the national day of Thanksgiving. On that day, ministers across the country listed the signal victories of the U.S. Army and Navy in the past year and reassured their congregations that it was only a matter of time until the United States government put down the southern rebellion. Their predictions acknowledged the dead and reinforced the idea that their sacrifice had not been in vain.

In October 1863, President Lincoln declared a second national day of Thanksgiving. In the past year, he declared, the nation had been blessed. In the midst of a civil war of unequaled magnitude and severity, he wrote, Americans had maintained their laws and their institutions and had kept foreign countries from meddling with their nation. They had paid for the war as they went, refusing to permit the destruction to cripple the economy. Instead, as they funded the war, they had also advanced farming, industry, mining, and shipping.

Immigrants had poured into the country to replace men lost on the battlefield, and the economy was booming. And Lincoln had recently promised that the government would end slavery once and for all. The country, he predicted, “with a large increase of freedom,” would survive, stronger and more prosperous than ever. The president invited Americans “in every part of the United States, and also those who are at sea, and those who are sojourning in foreign lands” to observe the last Thursday of November as a day of Thanksgiving.

In 1863, November’s last Thursday fell on the 26th. On November 19, Lincoln delivered an address at the dedication of a national cemetery at Gettysburg, Pennsylvania. He reached back to the Declaration of Independence for the principles on which he called for Americans to rebuild the severed nation:

“Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.”

Lincoln urged the crowd to take up the torch those who fought at Gettysburg had laid down. He called for them to “highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.”

The following year, Lincoln proclaimed another day of Thanksgiving, this time congratulating Americans that God had favored them not only with immigration but also with the emancipation of formerly enslaved people. “Moreover,” Lincoln wrote, “He has been pleased to animate and inspire our minds and hearts with fortitude, courage, and resolution sufficient for the great trial of civil war into which we have been brought by our adherence as a nation to the cause of freedom and humanity, and to afford to us reasonable hopes of an ultimate and happy deliverance from all our dangers and afflictions.”

In 1861, Americans went to war to keep a cabal from taking control of the government and turning it into an oligarchy. The fight against that rebellion seemed at first to be too much for the nation to survive. But Americans rallied and threw their hearts into the cause on the battlefields even as they continued to work on the home front for a government that defended democracy and equality before the law.

And they won.

My best to you all for Thanksgiving 2021.

~ Heather Cox Richardson, Facebook

Battle of the Ironclads during the Civil War (the bloodiest war in American history)


~ When the Pilgrims set out for the New World, they had no idea what they were getting into. The Pilgrims were working-class Christians who left England to avoid religious persecution. They were not experienced farmers, hunters, and fishermen. And they were completely unprepared to deal with New England's cold, harsh climate.

Only half of the original 102 Pilgrim settlers survived their first winter. They were so starved, they even resorted to robbing the houses and graves of a deserted Indian habitation.

Tisquantum (better known as Squanto) was from the Patuxet tribe in modern-day Massachusetts. In 1614, Squanto was kidnapped by English sailors and taken to Spain to be sold as a slave. He was rescued by local Catholic priests but fled to London. When Squanto returned to America, he discovered that his tribe had been almost wiped out by disease — probably viral hepatitis — brought by European settlers.

During his search for surviving tribe members, Squanto met Massasoit, the leader of the Wampanoag tribe. Massasoit took Squanto as his captive.

Since the Wampanoag had also been devastated by disease, Squanto tried to convince Massasoit to ally with the groups of English settlers that had arrived in Massachusetts. An alliance would help the Wampanoag fend off the threat of the Narragansett, an enemy tribe that hadn't been devastated by disease.

After the Pilgrims suffered their first winter in 1620, Massasoit decided to follow Squanto's advice. With Squanto's help as translator, Massasoit signed a peace treaty with the Pilgrims. Squanto stayed with the English settlers to teach them to fish, hunt, and cultivate corn — and to avoid captivity under Massasoit.

By the next fall, the Pilgrims hosted a giant feast to celebrate their first successful harvest.
It wasn't exactly peaceful.

On the feast day, Massasoit showed up with 90 men. Many of them carried weapons. Some say Massasoit crashed the party because he was alarmed by celebratory gunfire at the Pilgrim camp; others say the Pilgrims responded with gunfire after Massasoit arrived.

Since the armed Wampanoags outnumbered the Pilgrims almost 2-1, you can bet the feast was pretty awkward.

The Pilgrims and the Wampanoags weren't friends. They were wary allies. The Pilgrims saw Native Americans as uncivilized savages, while the Native Americans saw Europeans as short, weak, and smelly.

Despite their antagonism, both groups needed each other to survive. The Pilgrims were desperate to avoid starvation and aggression from hostile Native American tribes; the Wampanoags were desperate for guns.

Some historians think the first Thanksgiving happened in late September or early October, when the fall crops had just been harvested. If this is true, the "first Thanksgiving" would have resembled more of a harvest festival than a religious gathering.

And harvest festivals were definitely not invented by the Pilgrims. Ancient Egyptians, Greeks, and Romans held celebratory festivals to honor the gods. Even Native American tribes observed thanksgiving celebrations, long before the Pilgrims arrived.

Unfortunately, the "real" Thanksgiving might have much bloodier origins than the Pilgrims' first big feast.

In 1637, English settlers (a group of Puritans, not Pilgrims) raided the village of the Pequot tribe. They burned 700 men, women, and children alive. John Winthrop, governor of the Puritans' Massachusetts Bay Colony, proclaimed a day of thanksgiving to celebrate the return of the colonists who had carried out the massacre.

Then, in 1789, George Washington proclaimed a "day of Thanksgiving" to express gratitude for American independence and the successful ratification of the U.S. Constitution.

In 1863, President Abraham Lincoln proclaimed Thanksgiving a national holiday to heal a nation ravaged by the Civil War.

Ever since, most Americans have associated Thanksgiving with a day of family, food, and gratitude.

However, for Americans who view the Pequot massacre as the origin of Thanksgiving, the national holiday is seen as a sham. Since 1970, Thanksgiving protesters have gathered at Cole's Hill, which overlooks Plymouth Rock, to honor a "National Day of Mourning" for the indigenous people who were massacred by the Puritans and other European colonists.

There's a lot we can't confirm about the real Thanksgiving. What we do know, however, is the English settlers weren't as virtuous as we thought. (But then again, your sister's Thanksgiving dinner guest might not be either.)

Even if the real Thanksgiving story isn't a happy one, you can still be grateful for your good fortune.

from another source:

In 1620, a small group of English separatists packed up and headed for the New World in search of religious freedom. Calling themselves “Saints” (the term “Pilgrims” wouldn’t be used to describe the settlers for another 200 years), they headed for the mouth of the Hudson River but landed in Plymouth in December after being blown off course by storms. The colonists first encountered the peaceful yet cautious Wampanoag the following spring.

At the time, the two disparate groups were attempting to find common ground. In April 1621, both had signed a treaty pledging to come to the aid of the other in case of attack. After losing nearly half of their settlers to sickness during their first winter in America, the English were teetering on extinction. The Wampanoag weren’t far from that reality themselves: Between 1616 and 1619, diseases introduced by European colonizers killed up to 90 percent of New England’s Native population in an epidemic now referred to as the Great Dying. Greatly weakened, the tribe also needed help fending off incursions from the Narragansett, a rival Native group.

Massasoit smoking a ceremonial pipe with Plymouth Colony Governor John Carver in 1621

Because it was held outdoors, Tom Begley, a historian at Plimoth Patuxet Museums, likens the gathering to a political potluck picnic. Communication was difficult, as only Tisquantum—remembered today as Squanto—and a few other Native American guests spoke English and could act as translators.

“It was a diplomatic event between these two communities,” he says. “Despite the language barrier, it’s still pretty interesting that they were gathering together for three days. We always talk about how the relationship between the Indigenous people and settlers changed over time, and this is one of the earliest examples of relationship building.”

While that first feast was likely festive, what happened after it adds a darker tone to the holiday for many Native Americans, some of whom observe Thanksgiving as a National Day of Mourning, an annual commemoration that began in 1970. ~

Squanto, a 19th century textbook illustration


~ In 1986 the New York Times review of Robert Bly’s Selected Poems was headlined “Minnesota Transcendentalist”. It was perceptive to note his link with the New England poets of the 19th century, which was strong, but within a few years it would look absolutely prescient. For although he was one of the outstanding poets of his generation, Bly, who has died aged 94, may be remembered, like the two most enduring of the original Transcendentalists, for facets of his work other than poetry.

Just as Ralph Waldo Emerson’s legacy is as an essayist, the influence of Bly’s essays on poetic theory and his many translations have resonated with readers and his fellow poets. But Bly is more likely to be seen as a 20th-century parallel to Henry David Thoreau. Like Thoreau, he made his mark with civil disobedience, and later with a hugely popular prose work concerned with the denaturing effects of civilization.

Bly’s early poetry in the 60s was his best [Silence in the Snowy Fields], although its quality was often subsumed by controversy surrounding his anti-war positions. In 1966, he co-founded American Writers Against the Vietnam War. The following year, when he won the National Book award for The Light Around the Body, he donated the prize money to draft resistance. But his entire poetic career was thrown into the shadows by the remarkable success of Iron John: A Book About Men (1990).

A meditation on his vision of American manhood being torn from its natural roots because fathers fail to initiate their sons properly into masculinity, Iron John spawned a movement combining encounter-group sensitivity with primal tree-hugging survivalism. Yet with his imagistic, often spiritual, poetry, his deep interests in mysticism, his rustic dress and his nasal, high-pitched voice, Bly often seemed an unlikely prophet of masculinity.

Bly called his poetic technique “deep image”, and his highly visual, quietly surreal poems, often in rural settings, reflected his upbringing in Scandinavian-settled Minnesota. He was born in Lac qui Parle county, where his parents, Alice (nee Aws) and Jacob Bly, Norwegian immigrants, were farmers. At 18, after graduating from high school in Madison, he enlisted in the US navy.

Discharged in 1946, he enrolled at St Olaf’s College in Northfield, Minnesota, but after a year transferred to Harvard, where he joined a precocious group of undergraduate writers, including John Ashbery, Richard Wilbur, John Hawkes, George Plimpton and, at Radcliffe, Adrienne Rich. It was at Harvard that he read a poem by WB Yeats, and resolved to “be a poet for the rest of my life”.

After graduation in 1950, he moved to New York, writing and struggling to support himself with a succession of menial jobs and meager disability payments for the rheumatic fever he contracted while in the navy.

In 1954, he returned to the Midwest, as a graduate student in the University of Iowa’s writers’ program, teaching to pay his way. Again he found himself in a writer’s hothouse; his fellow students included Philip Levine, Donald Justice and WD Snodgrass, with Robert Lowell and John Berryman on the faculty. The proliferation of creative writing programs on American campuses today owes much to the collective success of this group, the level of which, it could be argued, has never been repeated.

He married the writer Carol McLean in 1955, and returned to Minnesota. The next year, he received a Fulbright grant to travel to Norway to translate poetry. There he discovered not only such Swedish poets as Tomas Tranströmer, Gunnar Ekelöf and Harry Martinson, but also, in translation, other writers relatively unknown in English: Georg Trakl, Pablo Neruda and César Vallejo. His translations of Tranströmer continued throughout both their careers, and the affinity between their poetry makes these some of the most effective ever done.

On his return to America, Bly started a magazine to publish such writers. The Fifties, co-edited with William Duffy, would change its name decade by decade, and had an immense effect on American poetry, defining the deep image style. Through the magazine, Bly became close to a similarly inclined poet, James Wright, and with him translated Twenty Poems of Georg Trakl (1961). He also translated Knut Hamsun’s novel Hunger from the Norwegian in 1967.

Deep image arose from the way the poets Bly admired drew on almost subconscious imagery, yet used it in a very deliberate way. He called it “leaping” poetry, once describing it as surrealism with a center holding it all together. Out of these influences, in 1962, came Bly’s first book of poems, Silence in the Snowy Fields, whose bonding with the countryside would be echoed by later generations of creative writing professors in poems about chopping wood in denim shirts. But in Bly’s hands, the quiet of the northern landscape provided a deep, personal beauty. It was an immediate success, and led to a Guggenheim fellowship.

Those poems gave no hint of the despair that became evident in The Light Around the Body, which not only reflected his feelings about the Vietnam war, but also his years of struggle in New York. They drew on the same imagery as his first book, but used it in a far more ferocious way. Studying Jung’s theories of mythic archetypes led to Bly’s mixing them into his politics in Sleepers Joining Hands (1973), whose long poem, The Teeth Mother Naked at Last, is a powerful condemnation of war as an affront to the Great Mother Culture. He placed a long essay, I Came Out of the Mother Naked, at the center of this book, and prose poems would soon become an integral part of his poetics, culminating in This Body Is Made of Camphor and Gopher Wood (1977).

After a divorce from Carol in 1979, in 1980 he married Ruth Ray, a Jungian psychologist, and moved to Moose Lake, Minnesota. He began working with men’s and women’s groups, producing books of poetry that reflected the transactional experience, most notably the love poems in Loving a Woman in Two Worlds (1985).

After PBS Television’s Bill Moyers produced a documentary, A Gathering of Men, about those men’s groups, Iron John became an immediate bestseller. It was followed by The Sibling Society (1996), which lamented the “perpetual adolescence of modern American men”, and The Maiden King: The Reunion of Masculine and Feminine (with Marion Woodman, 1998). At the same time his translations expanded to include the 15th-century Sufi mystic Kabir and the Urdu poet Ghalib. Bly encapsulated his poetic career in the moving Meditations on the Insatiable Soul (1994) and Morning Poems (1997), and published his second “selected poems” collection, Eating the Honey of Words, in 1999. The US invasion of Iraq inspired the collection The Insanity of Empire (2004).

In 2013 Airmail, selections from Bly’s decades of correspondence with Tranströmer, was published in English. It revealed both a deep friendship and a contrast in the way the poetry of this homespun American mystic and the Swedish psychologist made its “leaps”. Stealing Sugar From the Castle: Selected and New Poems was published in the same year, and a last Collected Poems appeared in 2018.

Bly is survived by Ruth, by four children, Mary, Bridget, Micah and Noah, from his first marriage, and by nine grandchildren. ~


My favorite volume by Robert Bly is the one that’s hardly ever mentioned: Morning Poems. Now that Silence in the Snowy Fields has become rather clichéd (beginning with the overuse of the word “silence”), Morning Poems seems refreshingly simple and unpretentious. And of course I'm in awe of the fact that he could write a poem every morning.

Breathing seemed frail and daring in the morning.
To pull in air was like reading a whole novel.

The angleworms, turned up by the plow, looked
Uneasy, like shy people trying to avoid praise.

For a while we had goats. They were like turkeys
Only more reckless. One butted a red Chevrolet.

~ Robert Bly, from Walking on the Farm

Nothing conclusive has yet taken place in the world, the ultimate word of the world and about the world has not yet been spoken, the world is open and free, everything is still in the future and will always be in the future. ~ Mikhail Bakhtin


Bakhtin is critical of what he calls the monologic tradition in Western thought that seeks to finalize humanity, and individual humans. He argues that Dostoyevsky always wrote in opposition to ways of thinking that turn human beings into objects (scientific, economic, social, psychological etc.) – conceptual frameworks that enclose people in an alien web of definition and causation, robbing them of freedom and responsibility: "He saw in it a degrading reification of a person's soul, a discounting of its freedom and its unfinalizability... Dostoyevsky always represents a person on the threshold of a final decision, at a moment of crisis, at an unfinalizable, and unpredeterminable, turning point for their soul.”


Truth is not born nor is it to be found inside the head of an individual person. It is born between people collectively searching for truth, in the process of their dialogic interaction.  


~ As adults, we spend a lot of time talking about all of the things that we have to do. 

You have to wake up early for work. You have to make another sales call for your business. You have to work out today. You have to write an article. You have to make dinner for your family. You have to go to your son’s game.

Now, imagine changing just one word in the sentences above.

You don’t “have” to. You “get” to.

You get to wake up early for work. You get to make another sales call for your business. You get to cook dinner for your family. By simply changing one word, you shift the way you view each event. You transition from seeing these behaviors as burdens and turn them into opportunities.

The key point is that both versions of reality are true. You have to do those things, and you also get to do them. We can find evidence for whatever mind-set we choose.

I once heard a story about a man who uses a wheelchair. When asked if it was difficult being confined, he responded, “I’m not confined to my wheelchair—I am liberated by it. If it wasn’t for my wheelchair, I would be bed-bound and never able to leave my house.” This shift in perspective completely transformed how he lived each day.

I think it’s important to remind yourself that the things you do each day are not burdens, they are opportunities. So often, the things we view as work are actually the reward.

Embrace your constraints. Fall in love with boredom. Do the work.

You don’t have to. You get to. ~


I’ve tried it: it does make a difference. It diminishes a negative attitude in a powerful way — and more efficiently, it seems to me, than trying to “think positive” (maybe because you can’t really completely fool yourself; the guffaws of laughter from the Inner Realist stand in the way). But “get to” doesn’t violate reality, even if it’s “I get to go to the hospital.” You’re going to meet a wide variety of people. It’s going to be an adventure.

I think this is also somewhat in line with the “in reverse” kinds of advice. Want to really learn something? Teach it. Want to get yourself motivated? Try to motivate someone else.


~ Defining the concept of the mind is a surprisingly slippery task. The mind is the seat of consciousness, the essence of your being. Without a mind, you cannot be considered meaningfully alive. So what exactly, and where precisely, is it?

Traditionally, scientists have tried to define the mind as the product of brain activity: The brain is the physical substance, and the mind is the conscious product of those firing neurons, according to the classic argument. But growing evidence shows that the mind goes far beyond the physical workings of your brain.

No doubt, the brain plays an incredibly important role. But our mind cannot be confined to what’s inside our skull, or even our body, according to a definition first put forward by Dan Siegel, a professor of psychiatry at UCLA School of Medicine and the author of the 2016 book, Mind: A Journey to the Heart of Being Human.

He first came up with the definition more than two decades ago, at a meeting of 40 scientists across disciplines, including neuroscientists, physicists, sociologists, and anthropologists. The aim was to come to an understanding of the mind that would appeal to common ground and satisfy those wrestling with the question across these fields.

After much discussion, they decided that a key component of the mind is: “the emergent self-organizing process, both embodied and relational, that regulates energy and information flow within and among us.” It’s not catchy. But it is interesting, and with meaningful implications.

The most immediately shocking element of this definition is that our mind extends beyond our physical selves. In other words, our mind is not simply our perception of experiences, but those experiences themselves. Siegel argues that it’s impossible to completely disentangle our subjective view of the world from our interactions.

“I realized if someone asked me to define the shoreline but insisted, is it the water or the sand, I would have to say the shore is both sand and sea,” says Siegel. “You can’t limit our understanding of the coastline to insist it’s one or the other. I started thinking, maybe the mind is like the coastline—some inner and inter process. Mental life for an anthropologist or sociologist is profoundly social. Your thoughts, feelings, memories, attention, what you experience in this subjective world is part of mind.”

The definition has since been supported by research across the sciences, but much of the original idea came from mathematics. Siegel realized the mind meets the mathematical definition of a complex system in that it’s open (it can influence things outside itself), chaos capable (which simply means it’s roughly randomly distributed), and non-linear (a small input can lead to a large and difficult-to-predict result).

In math, complex systems are self-organizing, and Siegel believes this idea is the foundation to mental health. Again borrowing from the mathematics, optimal self-organization is: flexible, adaptive, coherent, energized, and stable. This means that without optimal self-organization, you arrive at either chaos or rigidity—a notion that, Siegel says, fits the range of symptoms of mental health disorders.

Finally, self-organization demands linking together differentiated ideas or, essentially, integration. And Siegel says integration—whether that’s within the brain or within society—is the foundation of a healthy mind.

Siegel says he wrote his book now because he sees so much misery in society, and he believes this is partly shaped by how we perceive our own minds. He talks of doing research in Namibia, where people he spoke to attributed their happiness to a sense of belonging.

When Siegel was asked in return whether he belonged in America, his answer was less upbeat: “I thought how isolated we all are and how disconnected we feel,” he says. “In our modern society we have this belief that mind is brain activity and this means the self, which comes from the mind, is separate and we don’t really belong. But we’re all part of each other’s lives. The mind is not just brain activity. When we realize it’s this relational process, there’s this huge shift in this sense of belonging.”

In other words, even perceiving our mind as simply a product of our brain, rather than relations, can make us feel more isolated. And to appreciate the benefits of interrelations, you simply have to open your mind. ~


That the mind is more than the brain can be discussed in many ways, certainly in terms of the mind/body relationship. What we have been discovering is more and more evidence that there is no mind/body split, but a continuous organized system of electrochemical interactions governing much of what we thought of as brain activity, emotions, proclivities and personality. The psychoactive drugs used in treating mental illness erase any presumption of separation, as does recent study of the effects of the gut microbiome on emotion and mental states. And then there are those spooky instances of parasites literally "changing the minds" and behaviors of their hosts, as is suspected in toxoplasmosis infections.


My first major experiential lesson in the unity of body and mind was menopause. A lot of my younger self-image turned around “intelligent, sexy, has a great memory, eager to learn, energetic, hard-working.” And suddenly I was none of those things; as for my great memory, I couldn’t recall Shakespeare’s first name, and finally my own address (it came back, but that momentary black-out was unnerving). And though the worst is over, having my mind pulled out from under me, so to speak, was a very humbling experience, and it certainly confirmed that body and mind are one system, intricately connected, possibly beyond our capacity to completely disentangle which is which.

Add to this all the complex ways in which we interact with others and with the environment, and it’s obvious that nothing whatsoever is an isolated unit. We are part of a whole beyond our comprehension. Like robots in old science fiction, we’re in danger of overloading our circuits. It’s humbling but necessary to stick to one small thing at a time, one foot in front of the other — that’s still the most reliable way to get anywhere.



~ More than 150 years ago Victorian biologist Charles Darwin made a powerful observation: that a mixture of species planted together often grows more strongly than species planted individually.

It has taken a century and a half — ironically about as long as it can take to grow an oak to harvest — and a climate crisis to make policymakers and landowners take Darwin’s idea seriously and apply it to trees.

There is no human technology that can compete with forests for the take-up of atmospheric carbon dioxide and its storage. Darwin’s idea of growing lots of different plants together to increase the overall yield is now being explored by leading academics, who research forests and climate change.

Scientists and policymakers from Australia, Canada, Germany, Italy, Nigeria, Pakistan, Sweden, Switzerland, the UK, and the US came together recently to discuss if Darwin’s idea provides a way to plant new forests that absorb and store carbon securely.

Planting more forests is a potent tool for mitigating the climate crisis, but forests are like complex machines with millions of parts. Tree planting can cause ecological damage when carried out poorly, particularly if there is no commitment to diversity of planting. Following Darwin’s thinking, there is growing awareness that the best, healthiest forests are ones with the greatest variety of trees — and trees of various ages.

Forests following this model promise to grow two to fourfold more strongly, maximizing carbon capture while also maximizing resilience to disease outbreaks, rapid climate change, and extreme weather.

Why we should plant more forests

In mixed forests, each species accesses different sources of nutrients from the others, leading to thicker stems and higher yields overall. And those thicker stems are made mostly of carbon.

Mixed forests are also often more resilient to disease by diluting populations of pests and pathogens, organisms that cause disease.

Darwin’s prescient observation is tucked away in chapter four of his famous 1859 book On the Origin of Species. Studies of this “Darwin effect” have spawned a vast ecological literature. Yet it is still so far outside mainstream thinking on forestry that, until now, little major funding has been available to prompt the use of this technique.

Darwin also famously described evolution by natural selection, a process by which genes evolve to be fit for their environment. Unfortunately for the planet, human-induced environmental change outstrips the evolution of genes for larger, slower reproducing organisms like trees.


Healthier trees capture more carbon

At our meeting we discussed a study of Norbury Park estate in central England, which describes how — using the Darwin effect and other climate-sensitive measures — the estate now captures over 5,000 tonnes of carbon dioxide per year, making it quite possibly the most carbon-negative land in the UK. Such impressive statistics don’t happen by accident or by sticking some trees in the ground and hoping; care and ecological nous are needed.

Trees of different ages also continuously provide harvestable timber and so steady jobs, in stark contrast to other methods of forestry, where large areas are felled and cleared at the same time.

The UK government, like other administrations, has laid down requirements for responsible large-scale tree planting. These requirements continue to be revised and improved. There are still vital questions about which trees we should plant, where we should plant them, and what to do with them once they’ve grown.

It has been said that it is impossible to plant a forest, but it should certainly be possible to design a plantation that will blossom into a forest for future generations. We need forests to be a practical, dependable, and just response to our climate and biodiversity crises, and Darwin has shown us the way.


It would be wise to put into practice Darwin's idea that diverse plants grown together are more productive and more sustainable. Yes, planting trees is a great idea, but maybe not so great if what we plant is a monoculture — and that's pretty much the norm. A forest is a complex community of living creatures: a variety of trees of various ages, undergrowth plants, fungi and the vast underground connecting web of fungal mycelium and tree roots — also insects, mammals, amphibians and birds. A living forest has resiliency and staying power; a monoculture, any monoculture, does not, and must be nurtured and supported and protected just to keep it going. Another lesson to learn and implement soon, to help avert the coming climate catastrophe.



~ There is every reason to conclude, as art historians have, that an absorbing self-portrait by a gifted young Flemish Renaissance painter by the name of Caterina van Hemessen, painted in 1548, is likely the first self-portrayal of an artist, male or female, at work at the easel. Such attributions are a risky business, of course. Just ask the endless succession of nominees for inventor of abstract painting (now Kandinsky, now Hilma af Klint, now JMW Turner…). There's always a chance that an earlier example, unfairly forgotten by time, will come to light.

But in the case of Hemessen's transfixing masterpiece, it isn't simply the posture – the young woman depicting herself in meta-mid-brushstroke as she sets out to create the very same painting that we see before us – that distinguishes the work as one of the most pioneering in the history of image-making. The depth and complexity of the small, oil-on-oak panel's reflection on the very nature of creativity and self-invention is incontestably ground-breaking and changed forever the way artists presented themselves to the world.

At first glance, it's the slightly unsettling, unrequited stare of the prim sitter, gazing past us to a mirror that sits somewhere outside the frame, that piques our attention. That her plush velvet sleeves are at odds with the grubby task at hand – smudging pigment and oil on a sloppy palette – adds to the curious sense of staging.

It isn't long before our eyes are pulled deeper into the painting's mystery by the teasing inscription that Hemessen has inserted. In the murky void between the larger likeness of herself that dominates the right half of the image and the smaller self-portrait that the painter-within-the-painting has begun to create on the primed oak panel that rests on the easel on the left, it reads: "Ego Caterina de Hemessen me pinxi 1548 Etatis suae 20" (or "I, Caterina of the Hemessens, painted me in 1548 at the age of 20").

Though it was customary for portraitists to inscribe their works with captions identifying their sitters, in this instance, the language is anything but clarifying in its function and serves ingeniously to intensify the panel's visual verve with a level of semantic, psychological, and philosophical intrigue. Who, after all, is speaking these wafting, weightless words? Are we to imagine that they are being breathed ghostily down the centuries from the departed lips of the artist herself – a gifted stylist who, in an era when few female artists made much headway, so distinguished herself that the queen consort of Hungary and Bohemia, Mary of Austria, retained her services? 

Or is this declaration, "I Caterina…", the ventriloquised whisper from the motionless mouth of the artist's alter ego in the painting – a silent semblance of self whose absent eyes stare out assertively but refuse to meet ours? Or does the "me" in "I … painted me" attach instead to that ever-emerging almost-self on the panel-within-the-panel who is, if we follow the logic of the painting's depiction to its conclusion, the eventual, irreducible "me" that will ultimately be created?

Hemessen's portrait presumes the existence of three distinct selves, refracted like a ray of white light in a prism into the bright spectrum of the painter, the painted, and the yet-to-be-painted – a trio locked forever in a spinning phantasmagoria of identity.

There can be little doubting that Hemessen deliberately hinged so much of the work's intensity on the impenetrable poetry of her riddling inscription. Trained by her father, Jan Sanders van Hemessen, a leading figure of the Romanist School (16th-Century Low Country artists who'd traveled to Rome) in the Flemish Renaissance, she knew her art history well. The patterning of the language of her floating caption was an unambiguous allusion to what is still, to this day, one of the most arresting self-portraits ever made: Albrecht Dürer's Self-Portrait at Twenty-Eight (1500).

Amplifying the significance of the mirror in the imagination of the time, and of even deeper resonance to Hemessen's work, are the writings of the 14th-Century Italian mystic, St Catherine of Siena, whose teachings had been in popular circulation in Europe since the beginning of the 16th Century. As if presciently sanctioning the visual verve of Hemessen's painting, in which the artist dares to see herself not merely performing a function typically assigned to men (painting), but assuming aspects of the male Christ, Catherine of Siena challenged the notion that women were not equally summoned to see themselves as mirrors of Christ. Marshalling the metaphor of the looking glass, she asserts that Christ is "a mirrour that needis I moste biholde, in the which myrrour is representid to me that I am thin ymage & creature”.

Caught in a crossfire of ricocheting reflections – religious and feminist, optical and artistic – Hemessen's inexhaustible panel deserves credit for tracing the cultural and psychological axes against which all subsequent self-portraiture will plot itself. Her underappreciated painting in many ways establishes the themes that far-better-known self-portraits from Rembrandt to Cindy Sherman, Artemisia Gentileschi to Picasso, would explore in the ensuing centuries, works that have come to define not merely the respective oeuvres of those exceptional artists, but the story of art itself in the last half millennium. What keeps our eyes transfixed on those masterpieces by Van Gogh and Frida Kahlo is the poignancy of their comprehension, their hope, that perhaps some element of ourselves can survive the fleeting dabs and strokes of our moments in time, can survive as an energy that echoes across ages – an instinct that Caterina van Hemessen gave deft voice to in the confounding physics of her mysterious masterpiece. ~


Alas, she looks so sad and unhealthy. So unloved, I dare guess. We can only imagine how difficult it must have been for a woman to be an artist back in that era. Not that it's ever easy to be a professional artist.




~ A few years ago, when the advocacy group Coltura called on America to stop using gasoline, it prompted mockery.

Coltura had been waging a war against gasoline for a few years by this point, but its primary weapons were things like music and performance art. One piece featured actors inside a clear plastic bubble panicking as it filled with simulated exhaust.

Then in 2017, Coltura's co-executive director, Matthew Metz, published an op-ed calling for Washington state to phase out gas-powered cars completely. A Seattle columnist wrote an article about Metz, with the word "crazy" featuring prominently.

A lot has changed in four years. Tesla is now the world's most valuable automaker. Multiple automakers say they will cease production of gas- and diesel-powered cars within the next two decades.

And what was once a fringe idea is now part of a global trend: momentum is building for the idea that zero-emission vehicles, primarily electric ones, are the future of the auto industry.

"More and more countries are announcing targets to phase out internal combustion engine vehicles at the national level," Sandra Wappelhorst, who has tracked this trend for the International Council on Clean Transportation, told NPR earlier this year.

The climate talks that recently wrapped up in Glasgow featured a non-binding call for all vehicles sold worldwide to be zero-emission by 2040. The European Union is considering a zero-emission mandate that would kick in five years earlier, in 2035.

The idea is percolating from the heads of government down to individuals. A recent poll commissioned by Coltura, conducted by well-regarded national polling groups, found that more than 50% of U.S. voters support requiring all new cars to be electric within a decade.

"In, like, 10 years, you probably won't even have gas cars anymore. Right?" asked Elle King, as she looked at an electric vehicle on display at a mall in Northern Virginia this week. "And good thing, because gas is expensive."

In the United States, the federal government has not embraced a full phaseout, instead calling for 50% of new cars sold to be electric. But California, Massachusetts and New York have all set plans to end gas car sales within 15 years.

And these state proposals to transform our automotive lives have not prompted a widespread political backlash – despite Americans' obsession with cars and the country's huge dependence on gasoline. 

A Rivian electric pick-up truck is already on the market.


Eventually may be the key word here. Phasing out gas cars by 2035 — the date under consideration by the EU and many states — may feel far away, which could help explain why people are not up in arms about the policies.

That could be a problem, says Jasmine Sanders, the executive director of OurClimate. Actually ending gas car sales by 2035 would require a tremendous amount of change over the next 15 years — from infrastructure investments to shifts in consumer thinking and behavior.

"We have to go ahead and start doing this now," Sanders says. "We cannot wait until 2034 and then [start] telling people, 'No, you can't buy that gas vehicle.' "

And the scale of the proposed transformation is immense. Right now, gas and diesel vehicles make up 97% of the U.S. auto market. Electric vehicles still cost more upfront, and America doesn't have the electric grid or charging infrastructure to support a fully electric fleet. 

Automakers are increasingly accepting the idea that electric vehicles are the future, but they are also acutely aware of the scale of change involved, and there is no consensus on how quickly it will actually happen.

Environmentalists are pushing for a gas car phaseout as early as 2030, while some skeptical automakers think even 2040 is too ambitious.

In short, America has not yet broken up with gasoline. A few Democratic-led states setting targets is no guarantee that it will happen.

But what's clear is that in just a few years, the idea of having no more gas cars has moved from the fringes to the center of attention.

Today, Coltura isn't just writing op-eds about the end of gas cars. It's helping to write legislation to make that a reality, state by state.

Coltura's shift from the outskirts to the halls of power also shows up in unexpected ways. A woman named Jennifer Granholm made a cameo appearance in one of the anti-gasoline music videos Coltura released a few years ago.

At the time, she was the former governor of Michigan and a noted electric vehicle enthusiast. Today, she's the U.S. secretary of energy.


On the transition away from gas-powered engines ... I wonder if it might surprise us by how quickly it happens. So many times change comes slow, slow, slow, then all at once. When things reach a certain critical or pivotal point, they rush forward to a new state, a new equilibrium.


That would be a happy surprise indeed. Recently I met a woman who had just bought a new car. "Electric?" I asked. The woman said she didn't have 60K to buy a Tesla. As if she'd never heard that by now there are plenty of affordable models. I wonder if the makers of electric cars should get more aggressive in their advertising, reminding people that not too long from now gas-powered vehicles won't even be manufactured.

Still, some huge changes can happen quite suddenly. I remember when women didn't go to graduate school; then within two or three years, women were the majority of graduate students. So yes, it could happen quickly. I hope it does. (Or if the future is hydrogen or something else entirely, that's fine too.)


~ Confessions is the story of Saint Augustine’s journey to Christian conversion. There is an earnest anxiety about his project: to explain to God, who already knows and who has given him the very power of expression, how it was that he came to Him. This zero-point in the narrative, before even his birth, is full of tautology and paradox, as if Augustine is spinning in place, unable to launch the telling of his story. But there is also a tender vulnerability and sincerity that, for me, would become the key to a profound sense of recognition: “Who will grant me that you come to my heart and intoxicate it?” and “What am I to you that you command me to love you and that, if I fail to love you, you are angry with me?”

But on the first reading, Augustine began to lose me not long after his introductory excursus, with a description of what he calls the “sins of my infancy.” He uses the term “infancy” in its literal meaning: in-fant, that is, something lacking speech. He acknowledges he doesn’t remember that time of his life, but his observation of infants, he says, gives him the information he needs to confess the sins of his own infancy.

“I have personally watched and studied a jealous baby. He could not yet speak and, pale with jealousy and bitterness, glared at his brother sharing his mother’s milk . . . it can hardly be innocence, when the source of milk is flowing richly and abundantly, not to endure a share going to one’s blood-brother, who is in profound need, dependent for life exclusively on that one food.”

Augustine’s description of this baby was motivated, it seemed to me, by a blind commitment to the Christian doctrine that declares man inherently depraved and marred by sin from the moment of conception. It’s one thing, I thought, to quietly accept this doctrine, and another to go looking at a baby’s behavior and ascribe to it vicious and malevolent intention. It seemed to me a sloppy and bad-faith justification for a suspect idea that he had accepted as a matter of faith and which he now tried to advance on the basis of a dubious interpretation of a baby’s preconscious behavior.

This was an only slightly more sophisticated version of what, growing up in the Dominican Republic, I had seen Christians all around me do. It always irked me. Seeing it in this ancient and revered source, proffered to me by the Columbia faculty as one of the towering achievements of ancient thought, puzzled me. Augustine’s reasoning felt dishonest, forced. Was I meant to take this seriously? Or was I reading it as an example of how blind faith can turn even a “great” thinker into a simpleminded fanatic?

Yet despite this disconcerting opening riff on the sins of infancy, Confessions had an enormous impact on me, and for a few weeks, it even revived my sense of Christianity as a possible way of life. In Augustine, I found many echoes of my own experience.

What would happen to me in the US soon began to take its anomalous shape: my father, who had accompanied my older brother and me on the trip, would not stay and reunite with my mother to begin a new life in America; he would return to the Dominican Republic and live his life there, with the family he had made after he and Mom had divorced when I was five. My mother, ill-equipped to navigate the complexities of life in New York, where she had now lived for three years, would be fired from her minimum-wage factory job in Brooklyn and take up with a man my brother and I did not trust and with whom we refused to live. Instead, we would live in a room in the basement of Juan and Fefa Alcántara’s house, making do as best we could.

It happened just at this time, as my brother and I were added to a large roster of mouths to feed and children to mind in Fefa’s household, that the whole family was being convulsed by a religious awakening. And while my brother resisted, I was swept along: I let go the soft atheism that my father, my reason, and the fanaticism of the people I knew who called themselves Christians had engendered in me, and gave myself over to the new and wonderful sweetness of salvation.

My newfound faith brought many happy days to my life and accelerated my learning of English by daily, devoted, and absorbed reading of the King James Version of the Bible. In Fefa’s house, I was no longer a heathenish burden, but a miraculous blessing and testament to the power of the new message. My conversion brought light into the family, made me closer to Fefa’s children, and took away, temporarily, my feeling of being a stranger in an alien world. We prayed together, we sang together, we went to church together.

All of this was with me, fresh with me, as I encountered Saint Augustine’s Confessions in January 1992. Confessions is an intensely intimate book, and you always have the sense that you have just walked into a private, whispered conversation. The book invites you to witness a probing, urgent heart-to-heart between Augustine and his God. The subject is Augustine himself; the journey of becoming Augustine. The object of attention is the self. It is Augustine’s self-analysis.

We probably know more about the psychology and inner life of Saint Augustine than that of any other ancient person. Before conversion, he was a prominent teacher of rhetoric, so in his self-exploration, and in the telling of his life story, he had at his disposal an unsurpassed range of rhetorical tools. His expressive capacity—in particular, his skill at describing emotion and inner experience—is unlike anyone else before modernity.

The first few books of Confessions are slow, and one can get annoyed at what feels like Augustine’s gratuitous beatings around the bush, his reliance on Biblical quotations to say even the most commonplace things, his distracted curiosity that seems unable to stay on any subject. It can be exasperating. Especially if you are reading quickly, as I was in Lit Hum. But once I got past that difficult entry, Augustine had his hooks in me. His insights into human psychology were illuminating and profound, and came to me in a language I understood.

Teaching the book to Columbia first-year students many years later, I found that my transformative first encounter with the text is the exception rather than the rule for the typical eighteen-year-old. Many students find it hard to establish the sympathetic bond that must undergird any powerful encounter with a work of literature. This bond is hard to form for students with Augustine, I think, for reasons embedded in our post-Christian and postmodern condition. It’s hard, in our post-faith world, to inhabit the mind of someone who lives with a vivid sense of God’s presence. I even find some of my students reluctant to admit to a sincere longing for truth, and to the possibility of truth, because it is intellectually unfashionable.

As a college freshman, my own religious experiences gave me an advantage, an entry point, into Augustine that others did not have. The power of his mind, the beauty of his language, and the depth of insight that pervades his writing captivated me. In plumbing the depths of his own psyche, Augustine gave me a language with which to approach my own interiority; he gave me a model and a set of questions with which to explore the emotional wilderness, full of doubt and confusion, that was my own coming-to-adulthood, in America, in New York City, at Columbia.

Perhaps what most amazed me about the saint was his consciousness that his own heart was a mystery, that its inner recesses were dark, unknown, and often inaccessible. Yet he was relentlessly committed to burrowing deeper and deeper into his own self and to discovering there, in the end, the only form of truth he would accept. Far from a pedantic or doctrinaire holy man, I found an uncertain, childlike man trying desperately to make sense of his own being in the world.

“In late August of 386, at the age of 31, Augustine converted to Christianity. As Augustine later told it, his conversion was prompted by hearing a child's voice say "take up and read" (Latin: tolle, lege).” ~ Wiki (Oriana: Note the peacock on the ledge)

* * *

Resentment is like drinking poison and waiting for the other person to die. ~ St. Augustine


Now that's an example of Christian love!


~ Diets with higher inflammatory potential were tied to an increased risk of incident dementia, a prospective observational study showed.

Compared with participants with the lowest inflammatory diet scores, those with the highest scores were three times more likely to develop incident dementia (HR 3.01, 95% CI 1.24-7.26, P=0.014), the researchers wrote in Neurology.

"A diet with a more anti-inflammatory content seems to be related to lower risk for developing dementia within the following 3 years," study author Nikolaos Scarmeas, MD, told MedPage Today. Available dementia treatments are not very effective, he said -- "it's quite important that we find some measures to partially prevent it."

"Diet might play a role in combating inflammation, one of the biological pathways contributing to risk for dementia and cognitive impairment later in life," he added.

Evidence suggests certain foods, nutrients, and non-nutrient food components can modulate inflammatory status acutely and chronically. Earlier prospective research looked at dietary inflammatory potential and cognitive decline only in women, not in both sexes, the researchers noted.

Scarmeas and co-authors analyzed data from 1,059 older adults in the Hellenic Longitudinal Investigation of Aging and Diet (HELIAD), a population-based study that investigates associations between nutrition and age-related cognition in Greece. People with dementia at baseline were excluded from the analysis.

Participants had a mean baseline age of 73.1 and a mean 8.2 years of education; 40.3% were men. Dietary intake was evaluated through a semi-quantitative food frequency questionnaire validated for the Greek population and administered by a trained dietitian.

People in the first tertile consumed a diet that included about 20 servings of fruit, 19 of vegetables, four of legumes, and 11 of coffee or tea a week, on average. People in the third tertile ate a more pro-inflammatory diet, with a weekly average of nine servings of fruit, 10 of vegetables, two of legumes, and nine of coffee or tea.

Over an average follow-up of 3.05 years, 62 people were diagnosed with dementia. Higher dietary inflammatory index scores correlated with higher dementia risk. A gradual risk increase across higher tertiles suggested a dose-response relationship between the inflammatory potential of diet and incident dementia, Scarmeas and co-authors observed.

"Our results are getting us closer to characterizing and measuring the inflammatory potential of people's diets," Scarmeas said. "That, in turn, could help inform more tailored and precise dietary recommendations and other strategies to maintain cognitive health.” ~


First, it needs to be noted that there are startling differences in the rates of dementia among various countries. Greece has a relatively low rate of both Alzheimer's Disease and vascular dementia, and its diet is regarded as among the best in the world for brain and heart health. Japan has the lowest rate among developed nations. 

Compared with genetics, diet may play a relatively minor role. Still, since diet can be modified, it should be studied in more depth. Fish and seafood? Seaweed salad? Rice versus wheat?

Since the population in this study was Greek, I find it strange that wine wasn’t discussed.

~ The best way to prevent dementia is by consuming red or white wine in moderation daily. The Bordeaux study by Professor J.M. Orgogozo of the University of Bordeaux in 1997, showed that wine could reduce dementia by up to 80%, which is an incredible amount that has unfortunately been ignored by health policy makers. ~

“Light-to-moderate drinking (one to three drinks per day) was significantly associated with a lower risk of any dementia (hazard ratio 0.58 [95% CI 0.38-0.90]) and vascular dementia (hazard ratio 0.29 [0.09-0.93]). No evidence that the relation between alcohol and dementia varied by type of alcoholic beverage was found. Regular light to moderate drinking [also] seemed to be associated with a decreased risk for ischemic stroke.”

The role of olive oil likewise needs to be explored further.

“New research in mice suggests that adopting a diet rich in extra virgin olive oil can prevent the toxic accumulation of the protein tau, which is a hallmark of multiple types of dementia.”



~  A promising new approach to potentially treat Alzheimer’s disease – and also vaccinate against it – has been developed by a team of UK and German scientists.

Both the antibody-based treatment and the protein-based vaccine developed by the team reduced Alzheimer’s symptoms in mouse models of the disease. The research is published today (November 15, 2021) in Molecular Psychiatry.

The work is a collaboration between researchers at the University of Leicester, the University Medical Center Göttingen, and the medical research charity LifeArc.

Rather than focus on the amyloid beta protein in plaques in the brain, which are commonly associated with Alzheimer’s disease, the antibody and vaccine both target a different, soluble form of the protein that is thought to be highly toxic.

Amyloid beta protein naturally exists as highly flexible, string-like molecules in solution, which can join together to form fibers and plaques.  In Alzheimer’s disease, a high proportion of these string-like molecules become shortened or ‘truncated’, and some scientists now think that these forms are key to the development and progression of the disease.

Professor Thomas Bayer, from the University Medical Center Göttingen, said: “In clinical trials, none of the potential treatments which dissolve amyloid plaques in the brain have shown much success in terms of reducing Alzheimer’s symptoms. Some have even shown negative side effects. So, we decided on a different approach. We identified an antibody in mice that would neutralize the truncated forms of soluble amyloid beta, but would not bind either to normal forms of the protein or to the plaques.”

Dr. Preeti Bakrania and colleagues from LifeArc adapted this antibody so a human immune system wouldn’t recognize it as foreign and would accept it. When the Leicester research group looked at how and where this ‘humanized’ antibody, called TAP01_04, was binding to the truncated form of amyloid beta, the team had a surprise. They saw the amyloid beta protein was folded back on itself, in a hairpin-shaped structure.

Professor Mark Carr, from the Leicester Institute of Structural and Chemical Biology at the University of Leicester, explained: “This structure had never been seen before in amyloid beta. However, discovering such a definite structure allowed the team to engineer this region of the protein to stabilize the hairpin shape and bind to the antibody in the same way. Our idea was that this engineered form of amyloid beta could potentially be used as a vaccine, to trigger someone’s immune system to make TAP01_04 type antibodies.”

When the team tested the engineered amyloid beta protein in mice, they found that mice who received this ‘vaccine’ did produce TAP01 type antibodies.

The Göttingen group then tested both the ‘humanized’ antibody and the engineered amyloid beta vaccine, called TAPAS, in two different mouse models of Alzheimer’s disease. Using imaging techniques similar to those used to diagnose Alzheimer’s in humans, they found that both the antibody and the vaccine helped to restore neuron function, increase glucose metabolism in the brain, reverse memory loss and – even though they weren’t directly targeted – reduce amyloid beta plaque formation.

LifeArc’s Dr Bakrania said: “The TAP01_04 humanized antibody and the TAPAS vaccine are very different to previous antibodies or vaccines for Alzheimer’s disease that have been tested in clinical trials, because they target a different form of the protein. This makes them really promising as a potential treatment for the disease, either as a therapeutic antibody or a vaccine. The results so far are very exciting and a testament to the scientific expertise of the team. If the treatment does prove successful, it could transform the lives of many patients.”

Professor Mark Carr added: “While the science is currently still at an early stage, if these results were to be replicated in human clinical trials, then it could be transformative. It opens up the possibility to not only treat Alzheimer’s once symptoms are detected, but also to potentially vaccinate against the disease before symptoms appear.”

The researchers are now looking to find a commercial partner to take the therapeutic antibody and the vaccine through clinical trials. ~


Meanwhile a different vaccine is also being developed, one that aims to make the immune system clear away the amyloid plaque.

ending on beauty:

Some say we are living at the end of time,
But I believe a thousand pagan ministers
Will arrive tomorrow to baptize the wind.

~ Robert Bly, Living at the End of Time

Photo: David Whyte

Saturday, November 20, 2021


Volcanic lightning; Sergio Tapiro

Twin Lakes, near Mammoth, California

In Eastern Sierra, in the divine
ion-charged mountain air,
next to the white water

braiding and unbraiding,
a woman is smoking —
eyes closed as for a kiss,

a slow inhale, long ecstatic
exhale — giving herself
to the poison as to a lover.

“Life cannot offer
what drugs can offer,”
an expert declared. We don’t

have a prayer unless we are
artists, addicted to music,
the huge, merciless

music of the world.
O sparrow, sparrow, whose fall
is counted by God,

remind me always and now:
there is a price for bliss.
The waterfall’s fluent arms

embrace me, but the slap
of the water below
counts those who also tried

to make music and sank
like Icarus into brightness.  

~ Oriana


It always struck me as somewhat odd how Icarus is the subject of poems and visual arts, while his brilliant father gets no glory. I guess people love the romantic over-reacher, no matter how foolish, rather than a hard-working, practical engineer. 


Yes, the "merciless music of the world" exacts a steep "price for bliss," and there are few who refuse to pay. I imagine those lab rats pulling the lever wired to their pleasure center until they starve and die. And that's mere physical pleasure; the emotional and psychological pleasures, the pleasure we take in beauty as "artists, addicted to music," are even stronger, more compelling, more liable to be fatal. And yet without that music, without art, "we don't have a prayer.” The music, the ecstasy, is our dangerous salvation.

 Hendrik Goltzius: Icarus, 1588 (from the series The Four Disgracers)




In Hamlet there is a little-noticed moment when Horatio expresses worry that Hamlet does not have sufficient skill at fencing to stand a chance in a sword fight. Hamlet replies, “I have been in continual practice.” It’s not a famous line, but for some reason it touches me to the core. I too have been in continual practice, the practice of paying attention and being astonished. I can’t quite say for what purpose, or have faith that a grand occasion to exercise those skills will ever arise. But purpose may be beside the point. At least once a day, I reflect on the paradoxes of the world and instantly I am in “immense amazement.”

“What use are you? In your writings there is nothing except
immense amazement.” ~ Milosz, “Consciousness”

And thus I remain if not in continual practice, then in continual astonishment.

I think that I am here, on this earth,
To present a report on it, but to whom I don’t know.
As if I were sent so that whatever takes place
Has meaning because it changes into memory.

~ Milosz, “Consciousness”

My quick response to the first two lines: “to your readers, silly.” But then two lines of sheer wisdom, and the reason I keep reading Milosz. Changing events into memory is also Keats’s “soul-making.”

Vladimir Kush, Sunrise

“I prefer winter and fall, when you feel the bone structure of the landscape — the loneliness of it, the dead feeling of winter. Something waits beneath it; the whole story doesn't show.” ~ Andrew Wyeth


~ Early on in Kurt Vonnegut: Unstuck in Time, the beloved writer has returned to his Indianapolis high school. This is a familiar move in cinematic biography: Vonnegut is there at the behest of the filmmakers to reminisce about those few perfect years of youth before the Second World War, there to conjure the innocence that will soon encounter a cascade of tragedy.

It is an obvious device, a bit manipulative, even, but who cares: the images of teenage Vonnegut alongside his friends, virtuosic in their happiness, are woundingly poignant in light of what’s to come, and they should be. “The Second World War was fought by children,” he says, towards the visit’s end, as he approaches the school’s panic doors… Panic doors? Yes. Vonnegut, leaving melancholy behind, snaps into delight as he tells us his ancestor invented the easy-to-open safety bars now ubiquitous on institutional exits, and cackles with delight demonstrating their use.

Here then, are all the elements of a Kurt Vonnegut novel: direct and disarming tenderness, joy surrounded by shadow, absurd coincidence, narrative digression, and—most importantly—the omnipresent feeling you might slip through time at any moment, as if it were a door you just happened to lean on.

Captured just before Christmas in 1944, the 22-year-old Vonnegut was in Dresden for the notorious Allied fire-bombing that killed 25,000 civilians; as a POW, Vonnegut was made to toil in the ruins of the blasted city, pulling bodies from the rubble in what he described as a “terribly elaborate Easter egg hunt.” The war, as it would for most of his generation, changed something in Vonnegut, and it was an experience he’d reckon with again and again for most of his life. Says the film’s co-director, Robert Weide: “It’s not like he’s just trying to get a bead on [Slaughterhouse-Five], it’s like he’s trying to purge the whole Dresden experience from his soul.” Indeed, Vonnegut’s writerly struggle to commit the story to paper, in what would become Slaughterhouse-Five (1969), is, if not heroic, truly epic.

His sixth novel, Slaughterhouse-Five took Vonnegut years of untold rewrites, trying it out in first person, third person, as a play—sometimes he’d get halfway through a draft and scrap it to start all over again. In one version, a middle-aged Billy Pilgrim (the main character) gets a drunken crank call from Vonnegut the Author telling him he’s just a character in a book. Says Vonnegut, in mournful voiceover: “I would hate to tell you what this lousy little book cost in money and anxiety and time.”

Slaughterhouse-Five would establish Vonnegut as a major American writer of the 20th century, shifting him from a literary sci-fi cult figure to a household name—particularly if that household contained a teenager. “The current idol of the country’s sensitive and intelligent young people is 47 years old,” as one TV news anchor describes Vonnegut’s newfound fame.

One of those young fans was director Robert Weide, who discovered Vonnegut in 1975 as a 16-year-old in Fullerton, California. For Weide, the gateway drug was Breakfast of Champions, given to him by a high school English teacher (Valerie Stevenson, who makes a charming cameo, and muses aloud that she’s “horrified” she actually assigned the book). This is what distinguishes Unstuck in Time from conventional literary biography: it is as much the story of a writer’s life as it is the story of a fan’s life.

Weide would go on to immerse himself in Vonnegut, at one point teaching a small class to his fellow high school seniors. After producing a documentary for PBS on the Marx Brothers, Weide made his move: in the summer of 1982 he wrote a letter to his hero, asking if he would be the subject of a documentary; to Weide’s delight, Vonnegut replied a month later, with a generosity characteristic of most of his fan interactions. He wrote: “I am honored by your interest in my work, and I will talk to you some, if you like, about making some sort of film based on it.”

It’s important, here, to note that this was 39 years ago. From 1988 until Vonnegut’s death in 2007, Weide would amass hundreds of hours of interview footage with his teenage idol (who would become, over time, his dear friend). And though Weide says early on in the film he doesn’t like documentaries that feature the documentarians themselves, he concedes that, “When you take almost 40 years to make a documentary you owe some kind of an explanation.”

And what follows explains a lot.

Though Unstuck in Time follows a rough chronology based on its subject’s bibliography, its making-of metanarrative necessitates temporal jumps worthy of a Vonnegut story: footage of Weide’s wedding is paired with the renewal of vows decades later; Vonnegut, wry and cantankerous, appears onstage at 50, then 75, then 60; reels of Vonnegut’s children fly by, and they are teenagers, then fortysomethings, then older than their father was in the film’s first interviews. It can be a little dizzying (the film doesn’t provide a lot of temporal anchors) but its dual effect of compression and continuity is deeply moving, and is somehow illustrative of the hard-earned humanism Vonnegut never quite abandoned, even in the face of so much human cruelty. As Vonnegut does so often in his writing, so too does this documentary remind us: though our lives may be unbearably full of heartache and happiness, of contradiction, they are so very brief; as such, we are morally bound to pay attention to—and note—whichever small moments of grace and joy we are lucky enough to encounter.

Vonnegut’s daughters Edie and Nanette get the most screentime of his children, providing counterpoint, as a loving but skeptical chorus, to Weide’s fandom. We learn from them that the kindness that animates so much of Vonnegut’s writing, even at its darkest, wasn’t always readily available to those closest to him. When working—particularly in those lean years before the success of Slaughterhouse-Five—Vonnegut could be an absolute bear to be around, leaving all matters of the material world to his hugely supportive first wife, Jane, who comes across as a veritable saint. Even after Vonnegut left her, in the midst of Slaughterhouse’s astonishing success, eventually marrying the photographer Jill Krementz, Jane remained both a fan of his work, and a friend.

It’s at this point, though, that the advantages of Weide’s friendship become limitations. It seems clear that over the many years trying to document the life of their father, Weide developed a relationship with Vonnegut’s children, of warmth and respect, if not outright friendship. This perhaps suggests why Krementz is treated more as “the other woman” than as Vonnegut’s partner for the last thirty years of his life.

But if the purpose of a literary biography is to unpack the whys and whens and hows that animate a writer’s work, Unstuck in Time is a brilliant success. Like much of its subject’s writing, it is tender, smart, funny, candid, and dark. Crucially, it reminds us of what, for me at least, has been so important about Vonnegut’s writing: the idea that kindness is not the same as weakness, and that it is the institutions we create that do us the most harm, not one another.

Despite the unkindnesses wrought upon Vonnegut’s generation, and upon his own life—the Depression, the war, so much death—he gives us permission to believe there is goodness yet in this world, all around us, if only we choose to look. ~


The biopic on Vonnegut demonstrates a crucial force that shaped the experience of everyone living in the 20th century. The two world wars changed more than the maps — they put what it means to be human radically into question. The experience of the war was the primary force in the lives not just of Vonnegut but of many, many soldiers, and not just of soldiers but of the millions of victims of the Holocaust, and the millions who witnessed its genocides. It left many fearful about what limits, if any, there were on human depravity. And finally, it stunned with the deadly power of its weapons, whose primary effect was mass casualties among non-combatant civilians: Dresden, Guernica, Hiroshima, Nagasaki, the wholesale slaughter of civilian populations.

These horrors for many spelled the death of god, the failure of any orthodoxy in light of our own assumption of the godlike power to annihilate. Who could now rescue us from ourselves? That is the core of fear, not only in the Other, but in its reflection in ourselves.


~ Thomas Morton, an English businessman, arrived in Massachusetts in 1624 with the Puritans, but he wasn’t exactly on board with the strict, insular, and pious society they had hoped to build for themselves. “He was very much a dandy and a playboy,” says William Heath, a retired professor from Mount Saint Mary’s University who has published extensively on the Puritans. Looking back, Morton and his neighbors were bound to butt heads sooner or later.

Within just a few short years, Morton established his own unrecognized offshoot of the Plymouth Colony, in what is now the town of Quincy, Massachusetts (the birthplace of presidents John Adams and John Quincy Adams). He revived forbidden old-world customs, faced off with a Puritan militia determined to quash his pagan festivals, and wound up in exile. He eventually sued and, like any savvy rabble-rouser should, got a book deal out of the whole affair.

Published in 1637, his New English Canaan mounted a harsh and heretical critique of Puritan customs and power structures that went far beyond what most New English settlers could accept. So they banned it—making it likely the first book explicitly banned in what is now the United States. A first edition of Morton’s tell-all—which, among other things, compares the Puritan leadership to crustaceans—recently sold at auction at Christie’s for $60,000.

The Puritans’ move across the pond was motivated by both religion and commerce, but Morton was there only for the latter reason, as one of the owners of the Wollaston Company. He loved what he saw of his new surroundings, later writing that Massachusetts was the “masterpiece of nature.” His business partner—slave-owning Richard Wollaston—moved south to Virginia to expand the company’s business, but Morton was already deeply attached to the land, in a way his more religious neighbors likely couldn’t understand. “He was extremely responsive to the natural world and had very friendly relations with the Indians,” says Heath, while “the Puritans took the opposite stance: that the natural world was a howling wilderness, and the Indians were wild men that needed to be suppressed.”

After Wollaston left, Morton enlisted the help of some brave recruits—both English and Native—to establish the breakoff settlement of Ma-Re Mount, also known as Merrymount, preserved today in the Quincy neighborhood and park of the same name. Morton essentially asked his neighbors, “What if we just throw [Wollaston] out and start our own utopian colony based on Plato’s Republic, and also as a society of the Native Americans?” explains Rhiannon Knol, a specialist in the Books & Manuscripts department at Christie’s in New York. “And that sounded a lot better to them.” Some of them, at least.

The Puritan authorities didn’t see Merrymount as a free-wheeling annoyance; they saw an existential threat. The problem wasn’t only that Morton was taking goods and commerce away from Plymouth, but that he was giving that business to the Native Americans, including trading guns to the Algonquins. With Plymouth’s monopoly dissolved and its perceived enemies armed, Morton had perhaps done more than anyone else to undermine the Puritan project in Massachusetts. Worse yet, in the words of Plymouth’s governor William Bradford, Morton condoned “dancing and frisking together” with the Native Americans—activities that were banned even without Native American participation. It was basically an early colonial version of Footloose. Governor Bradford nicknamed Morton the “Lord of Misrule,” and it’s not hard to imagine him wearing that title like a crown.

There could be no greater symbol of such misrule than Morton’s maypole. Reaching 80 feet into the air, the structure conjured all the vile, virile vices of Merry England that the Puritans had hoped to leave behind. Throughout medieval Europe, maypoles had been a popular installation for May Day (or Pentecost or midsummer, in some regions)—encouraging human fertility as the land itself sprung up from winter. Now that was a tradition that Morton could get behind, and he gladly called upon the residents of Merrymount to drink, dance, and frolic around the pole. The establishment of Merrymount had been a provocation, but Morton’s May Day celebrations meant war. 

Maypole by Brueghel

During the 1628 festivities, a Puritan militia led by Myles Standish invaded Merrymount and chopped down the maypole. (The incident later inspired Nathaniel Hawthorne’s short story “The May-Pole of Merry Mount,” first published in 1832.) Morton was tried for supplying arms to the Natives, and expelled to an island off the coast of New Hampshire to be left for dead. Somehow, he managed to hitch passage on a ship back to England, where he sued the Massachusetts Bay Company. The trial provided him with the basis for his book, much of which was composed at London’s Mermaid Tavern with a little help from his friends, including famed poet and playwright Ben Jonson.

Heath is careful to stress that the book is not a literary masterwork, but he acknowledges that it has its moments. Knol says she was particularly struck by the nicknames Morton threw at his Puritan foes, whom he called “cruell Schismaticks.” It’s hard to know who got it worse between Standish and John Endecott, governor of the Massachusetts Bay Colony (Plymouth’s neighbor to the north): Endecott is known in the book as “Captaine Littleworth,” Standish as “Captaine Shrimp.”

Even more radical than his belittling appellations were Morton’s subversive policy ideas, which went so far as to recommend “demartializing” the colonies. Unsurprisingly, the Puritans were appalled. Bradford, Plymouth’s governor, called New English Canaan “an infamous and scurrilous book against many godly and chief men of the country, full of lies and slanders and fraught with profane calumnies against their names and persons and the ways of God.”

It’s likely that the book scandalized England as well. The title page names Amsterdam as the place of publication rather than London—but that’s hard to believe, since the Amsterdam publisher it names was in fact a well-known purveyor of Puritan books. Knol says Amsterdam was likely listed as a ruse to protect the actual publisher in London.

After publishing the book, Morton braved a venture back to his beloved Massachusetts, only to be turned right back around upon arrival. He tried to cross the Atlantic once again in 1643, and was this time exiled to Maine, where he died. His maypole may have been chopped down and his book banned, but Morton’s legacy lives on in Quincy, though sadly there’s no maypole in Merrymount Park. ~


Marvelous story about Morton and the pilgrims, the cutting of his Maypole, around which he had invited natives and settlers to "frisk" and "frolic," sounds like Woodstock being shut down by Southern Baptists. And then, this man who sold guns to the natives comes to the idea of "demartializing" the colonies. I can see him, this "Lord of Misrule," joining in the escapades of Kesey and his Merry Pranksters, the banning of his book another feather in his cap, a badge of honor. It's such an American story, representative of a long chain of such stories, and such heroes, in American life and history. Even now, centuries later, we recognize the characters and plot instantly, and know where we stand.


Is it possible that the conservative Christian reaction to the COVID-19 epidemic indicates the degree of Puritan influence on the United States? The Puritan settlers believed more in God’s punishment than in His love. Jonathan Edwards’s 1741 sermon, Sinners in the Hands of an Angry God, illustrates just how central divine punishment was to early American Christianity.

As cultural descendants of the Puritans, today’s Christians demonstrate their lack of belief in God’s love by refusing the vaccination. If they believed in His love, they would call the super-fast development of the vaccine a miracle. Then the refusal of the vaccination would be seen as a rejection of God’s blessing, not as a support of his vengeance.

Like the Puritans, the modern fundamentalist Christian believes in God’s punishment and sees those who contract COVID as sinners. The infected survive if He forgives them, and the unforgiven die. This inheritance of the Puritan belief system may help explain why about 70 percent of the unvaccinated are conservative Christians.

On the other hand, Asia is becoming the most COVID-vaccinated area in the world. The reason the Asian nations were behind was the unavailability of the vaccine. Recently, Japan vaccinated 80 percent of its population. Could the popularity of Buddhism have something to do with this? Buddhists practice honoring their ancestors.

They do this by respecting all life forms, including their human neighbors. Maybe that is why mask-wearing is so acceptable in Asia, and it may partially explain the high rate of vaccination in the East. It seems that treating your neighbor’s health as essential is easier if you honor your ancestors than if you believe your neighbors are sinners.


Buddhists also believe in science. They have no problem grasping how vaccines work, and how the protection you gain is amply worth what minor side effects you may experience (I experienced none, which startled me).

Thank you for the insight that believing in the god of punishment as opposed to the god of love may be an important factor in a person’s attitude toward vaccination. And yes, we know that vaccination rates are lowest where bible literalism prevails. These are the same people who, if they happen to be in a hospital for any reason and end up having successful surgery or any other treatment, give all the credit to god and the “power of prayer,” and none to the doctors and nurses.




We’re a paradoxically retro-progressive nation, on the pragmatic cutting edge but founded by uptight reactionary Puritans, nostalgic for less pragmatic religious dogmas (a recipe for lie buying). It's like if Silicon Valley had been founded by Druids. ~ Jeremy Sherman


~ Though anxiety disorders are now considered the most common type of psychiatric disorders in the United States – affecting up to 31 per cent of adults at some point in their lifetime – anxiety hasn’t always stood out as a well-recognized mental health problem. In the US and elsewhere, the concept of anxiety has evolved over time in ways that have better allowed it to be seen as a major clinical concern.

Historically, anxiety has often been mixed with other symptoms in a way that has masked its significance. For example, in American Nervousness (1881), the American neurologist George Miller Beard outlined the causes of what he regarded as an epidemic level of fear in US culture. His specific diagnosis was ‘neurasthenia’. A significant part of the diagnosis included anxiety, but it also featured a variety of other psychological and physical symptoms, catalogued over long lists, including insomnia, heart palpitations and back pain. In part through Beard’s promotional efforts and popular writings, neurasthenia achieved considerable cultural cachet.

The idea resonated as rapid social change was underway; Beard laid much of the blame at the feet of Thomas Edison and his inventions. As a medical diagnosis, though, neurasthenia quickly fell out of favor. Medical professionals began to doubt the seriousness of nervousness per se; they were inclined to regard other symptoms associated with it, such as cardiovascular complaints, as worthier of treatment. Beard himself contributed to this decline in arguing that this nervousness would subside as American culture grew more sophisticated.

In the following decades, Sigmund Freud did much to renew the profile of anxiety, starting with an attempt to cleave it from the remnants of neurasthenia. He saw promise in the study of fear and anxiety, casting fear (and, by extension, anxiety) as the problem whose solution would throw a floodlight on mental life writ large. His followers took up this mantle, too, describing, among other things, some of the social circumstances that increase anxiety.

In the middle of the 20th century, anxiety would again re-emerge as a significant concern and a cultural idiom of unease, the lens artists and authors used to talk about change. Perhaps the most famous statement in this regard was the book-length poem The Age of Anxiety (1947) by W H Auden – his locution persists to this day – though the poet was hardly alone in casting anxiety as the signature disorder of the era. In books such as The Meaning of Anxiety (1950) by Rollo May, psychologists and others saw much to worry about in the US, and the special value of talking about anxiety.

Anxiety also fit well within an emerging medical ecology. Miltown, an anxiolytic drug, was launched in the 1950s, ushering in a new era of seriously treating ‘nerve problems’, including the ‘nervous breakdown’ (of which anxiety was thought to be a key symptom). As a minor tranquilizer, Miltown was fast-acting and effective in calming nerves in a way that could seem miraculous. Advertisements focused on its ability to treat stress and anxiety, encouraging consumers to see their everyday unease in a new way – as a treatable condition.

But the mid-century age of anxiety would be short-lived. The rise and fall of Miltown was quick. Although much of the backlash focused on the drug itself, especially the potential for abuse, resistance ultimately circled back to the more elementary question of whether anxiety ought to be regarded as a problem to be treated with medication. Why treat something that was so common – and perhaps simply reflected the strains of an era in which anxiety really ought to be common? Evolutionary accounts, after all, begin with the idea that fear and anxiety enhance fitness by alerting people to potential threats.

The creation of psychiatric disorder categories in manuals like the DSM is not merely an academic matter. The concepts that psychiatrists create tend to assume a life of their own once they are enshrined in diagnostic instruments and articulated as scientific tools. Due in part to the criteria provided in the DSM, the late 20th century could rightly be regarded as the age of depression. With the ascent of selective serotonin reuptake inhibitors (SSRIs) such as Prozac starting in the late 1980s, major depressive disorder assumed a special significance. By the DSM’s criteria, many people met the threshold for a major depressive disorder. And SSRIs seemed especially well suited to treating it. Around the time Prozac came on the market, the total number of doctors’ office-based visits per year for depression increased significantly, going from 10.99 million in 1985 to an average of 20.43 million in 1993 and 1994. It’s not that instances of depression suddenly multiplied. Instead – in an echo of the advent of early anti-anxiety drugs – depression was suddenly regarded and talked about by more people as a treatable medical condition, rather than as an everyday trouble that could be ignored.

Of course, anxiety never went away. Depression might have seemed ubiquitous, but people in the late 20th century hardly had less to be anxious about or more to be depressed about. Indeed, anxiety disorders frequently co-occur with major depression. Therapists have certainly recognized the importance of anxiety as a dimension of suffering in their patients: alleviating a patient’s fear and anxiety is the better part of making them well, even if targeting depression with SSRIs is the focus of much treatment. Furthermore, reported anxiety, as a basic emotional experience, began rising across birth cohorts during the 20th century – an increase that, I argue in my book Unnerved (2021), is due in part to changes in the family, a rise in income inequality and economic uncertainty, and increasingly fraught social attachments. If we’re in the midst of a new age of anxiety, the designation might very well be accurate this time.

As a therapeutic target, anxiety has risen in prominence again both because it is common and because it lends itself well to the 21st-century treatment armamentarium. Anxiety medications tend to be fast-acting, and the use of benzodiazepines, a powerful class of medications first prescribed decades ago, has increased over time in outpatient settings.

Anxiety is also responsive to other kinds of treatment. It can be treated effectively with cognitive behavioral therapy (CBT), for example. And CBT can be administered in a variety of settings, without necessarily requiring extensive training. In school settings, for instance, anxiety interventions can be administered effectively by nurses and teachers. Patients presenting psychiatric symptoms to doctors have increasingly been presenting anxiety.

Some of the long-standing uncertainty about whether anxiety is worthy of treatment has been resolved as well. It is increasingly clear that even though a degree of anxiety might be natural and perhaps even essential to a well-adapted species, anxiety also has negative consequences with respect to role performance and well-being. Anxiety can undermine school performance in children and adolescents. Anxious workers are often less productive. Anxious athletes might not perform up to their own expectations. Over time, anxiety could lead to worse physical health, too.

Although there remains considerable stigma attached to most psychiatric disorders, the situation for anxiety is different and shifting. It is possible to regard anxiety as both treatable and not at all unusual. Much of the enduring stigma surrounding psychiatric disorders centers on a fear of violence. But in the mind of the public, anxiety is less associated with violence than, for instance, schizophrenia is.

The lingering stigma related to anxiety is partly due to the idea that it reflects weakness. An old theme that has fed into the ambivalence over treating anxiety as a clinical problem is that people can overcome it with the right mindset and might even learn from it. Yet there is growing acceptance that psychiatric disorders are largely genetic in origin, diminishing the stigma once attached to disorders that were previously regarded as a matter of weak character. It is likely easier now to admit to others that one is anxious.

The idea of an age of anxiety is rarely intended to be a specific psychiatric claim. But there has been an increase in the seriousness with which anxiety is taken as a clinical concern among both the public and treatment providers. If we’re more anxious now than we used to be, we’re also more inclined to treat our anxiety. In that sense, the age of anxiety has been slow in coming but it might be here. ~


So it’s not so much what’s really happening inside the patient’s brain as what medicine thinks it can treat. When the expensive new antidepressants arrived, we lived in an age of depression. Now, with benzodiazepines restored to grace, we again appear to suffer chiefly from anxiety.

I suspect that cognitive-behavioral therapy could also be useful for some cases of anxiety. So many of our fears and worries are irrational. Or, even when they are rational, if there is nothing we can do about the situation, then there is still no point in feeling anxious. It's usually pointless to ruminate. Let's do something useful instead. I've learned the hard way and over many years that the best antidote is deep breathing and action, action, action.


In regard to anxiety and depression, the insight that "treatability" matters more than what the individual is experiencing is telling. The trend has been to expand the category of “illness,” steadily enlarging the pool of patients who must be (or can be) treated. Things once regarded as mere personality traits, bad habits, or personal choices are increasingly labeled treatable illnesses, widening the scope and influence of the dispensers of treatment, from psychiatrists to social workers and therapists. And of course, the biggest winner in this game is the pharmaceutical industry.

Are there more reasons for anxiety and depression in the modern world? In terms of the speed and rate of social and technological change, which can be dizzying, I think yes. But even more significant may be the loss of strong family and community structure and support. Lives become more and more fragmented: people don’t stay in one place, keep one job, or even one career. They don’t maintain ties with the family group, now separated in both time and space.

As much as we feel threatened by violence now, past ages were at least as violent, if not more so, so that's not new. I think what’s new is the fragmentation, the loss of strong family ties and support.


“The men the American people admire most extravagantly are the most daring liars; the men they detest most violently are those who try to tell the truth.” ~ H.L. Mencken


I'm not a Mencken fan, but now and then he hits on something true or close to it. Well, lying and politics — this is universal, not specifically American. However, Americans may be somewhat more likely to admire “daring liars” because the country has the dimension of myth so strongly embedded in it. To Jewish immigrants America (and not Palestine) was the “Goldene medine” — the “golden country.”

But I agree with Jeremy: the number one factor in the propensity to buy lies is probably religiosity, and especially the religious extremists’ yearning for a religious utopia that would of course be a nightmare for the rest of us. (The Puritans were an example; now it's the Evangelicals.)

New York, Mulberry Street, 1900


The human body is a treasure trove of mysteries, one whose workings still confound doctors and scientists in their details. It's not an overstatement to say that every part of your body is a miracle. Here are fifty facts about your body, some of which will leave you stunned…

1. It’s possible for your body to survive without a surprisingly large fraction of its internal organs. Even if you lose your stomach, your spleen, 75% of your liver, 80% of your intestines, one kidney, one lung, and virtually every organ from your pelvic and groin area, you wouldn't be very healthy, but you would live.

2. During your lifetime, you will produce enough saliva to fill two swimming pools. Actually, saliva is more important than you realize. If your saliva cannot dissolve something, you cannot taste it.

3. The largest cell in the human body is the female egg and the smallest is the male sperm. The egg is actually the only cell in the body that is visible to the naked eye.

4. The strongest muscle in the human body is the tongue and the hardest bone is the jawbone.

5. Human feet have 52 bones, accounting for one quarter of all the human body's bones.

6. Feet have 500,000 sweat glands and can produce more than a pint of sweat a day.

7. The acid in your stomach is strong enough to dissolve razor blades. The reason it doesn't eat away at your stomach is that the cells of your stomach wall renew themselves so frequently that you get a new stomach lining every three to four days.

8. The human lungs contain approximately 2,400 kilometers (1,500 mi) of airways and 300 to 500 million hollow cavities, having a total surface area of about 70 square meters, roughly the same area as one side of a tennis court. Furthermore, if all of the capillaries that surround the lung cavities were unwound and laid end to end, they would extend for about 992 kilometers. Also, your left lung is smaller than your right lung to make room for your heart.

9. Sneezes regularly exceed 100 mph, while coughs clock in at about 60 mph.

10. Your body gives off enough heat in 30 minutes to bring half a gallon of water to a boil.

11. Your body has enough iron in it to make a nail 3 inches long.

12. Earwax production is necessary for good ear health. It protects the delicate inner ear from bacteria, fungus, dirt and even insects. It also cleans and lubricates the ear canal.

13. Everyone has a unique smell, except for identical twins, who smell the same.

14. Your teeth start growing 6 months before you are born. This is why one out of every 2,000 newborn infants has a tooth when they are born.

15. A baby's head is one-quarter of its total body length, but by age 25 it will be only one-eighth. This is because people's heads grow at a much slower rate than the rest of their bodies.

16. Babies are born with 300 bones, but by adulthood the number is reduced to 206. Some of the bones, like skull bones, get fused into each other, bringing down the total number.

17. It's not possible to tickle yourself. This is because when you attempt to tickle yourself you are totally aware of the exact time and manner in which the tickling will occur, unlike when someone else tickles you.

18. Less than one third of the human race has 20-20 vision. This means that two out of three people cannot see perfectly.

19. Your nose can remember 50,000 different scents. Women, however, are better smellers than men, and remain so throughout their lives.

20. The human body is estimated to have 60,000 miles of blood vessels.

21. The three things pregnant women dream most of during their first trimester are frogs, worms and potted plants. Scientists have no idea why this is so, but attribute it to the growing imbalance of hormones in the body during pregnancy.

22. The life span of a human hair is 3 to 7 years on average. Every day the average person loses 60-100 strands of hair. But don't worry, you must lose over 50% of your scalp hairs before it is apparent to anyone.

23. The human brain cell can hold 5 times as much information as an encyclopedia. Your brain uses 20% of the oxygen that enters your bloodstream, and is itself made up of 80% water. Though it interprets pain signals from the rest of the body, the brain itself cannot feel pain.

24. The tooth is the only part of the human body that can't repair itself. (apparently not true for small cavities)

25. Your eyes are always the same size from birth but your nose and ears never stop growing. [Oriana: this is also true for the prostate gland]

26. By 60 years of age, 60% of men and 40% of women will snore.

27. We are about 1 cm taller in the morning than in the evening, because during the day's normal activities the cartilage in our knees and other areas slowly compresses.

28. The brain operates on the same amount of power as a 10-watt light bulb, even while you are sleeping. In fact, the brain is much more active at night than during the day.

29. Nerve impulses to and from the brain travel as fast as 170 miles per hour. Neurons continue to grow throughout human life. Information travels at different speeds within different types of neurons.

30. People who dream more often and more vividly have, on average, a higher Intelligence Quotient.

31. The fastest growing nail is on the middle finger.

32. Facial hair grows faster than any other hair on the body. This is true for men as well as women.

33. There are as many hairs per square inch on your body as on a chimpanzee.

34. A human fetus acquires fingerprints at the age of three months.

35. By the age of 60, most people will have lost about half their taste buds.

36. About 32 million bacteria call every inch of your skin home. But don't worry, a majority of these are harmless or even helpful bacteria.

37. The colder the room you sleep in, the higher the chances are that you'll have a bad dream.

38. Human lips have a reddish color because of the great concentration of tiny capillaries just below the skin.

39. Three hundred million cells die in the human body every minute.

40. Like fingerprints, every individual has a unique tongue print that can be used for identification.

41. A human head remains conscious for about 15 to 20 seconds after it has been decapitated.

42. It takes 17 muscles to smile and 43 to frown.

43. Humans can go longer without food than without sleep. Provided there is water, the average human could survive a month to two months without food, depending on their body fat and other factors. Sleep-deprived people, however, start experiencing radical personality and psychological changes after only a few sleepless days. The longest recorded time anyone has ever gone without sleep is 11 days, at the end of which the experimenter was awake, but stumbled over words, hallucinated and frequently forgot what he was doing.

44. The most common blood type in the world is Type O. The rarest blood type, A-H or Bombay blood, named for the location of its discovery, has been found in fewer than a hundred people since it was discovered.

45. Every human spends about the first half hour after conception as a single cell. Shortly afterward, the cells begin rapidly dividing and forming the components of a tiny embryo.

47. Your ears secrete more earwax when you are afraid than when you aren't.

48. Koalas and primates are the only animals with unique fingerprints.

49. Humans are the only animals to produce emotional tears.

50. The human heart creates enough pressure to squirt blood 30 feet in the air.


The strongest muscle is the tongue? That works metaphorically as well . . . 

I removed #46, which stated: "Right-handed people live, on average, nine years longer than left-handed people do." I looked it up: this is apparently not true, but rather a statistical error.
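Fact 10 can be checked in the same spirit with a quick back-of-envelope calculation. The sketch below uses standard physics figures, not numbers from the original list, and assumes a typical resting heat output of about 100 watts:

```python
# Back-of-envelope check of fact 10 ("enough heat in 30 minutes to
# bring half a gallon of water to a boil"), assuming a resting
# metabolic heat output of roughly 100 watts.
SPECIFIC_HEAT_WATER = 4186   # joules per kg per degree Celsius
HALF_GALLON_KG = 1.9         # half a US gallon of water, in kg
DELTA_T = 80                 # heating from 20 C to 100 C

heat_needed = SPECIFIC_HEAT_WATER * HALF_GALLON_KG * DELTA_T  # joules
heat_produced = 100 * 30 * 60  # 100 watts for 30 minutes, in joules

print(f"needed: {heat_needed:.0f} J, produced at rest: {heat_produced:.0f} J")
```

By this estimate a resting body falls well short (about 180,000 J produced versus roughly 636,000 J needed), so the claim only comes close during vigorous exercise, when heat output can exceed 300 watts.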

Leonardo: Anatomical Sketches of the arm


~ Seven hundred years after Augustine’s conversion, Pope Urban stood before a packed hall at Clermont—in what was then the Duchy of Aquitaine—filled with dozens, probably hundreds, of the most powerful and influential people in Europe, including archbishops, abbots, knights, and noblemen from across the region. It was November 1095, and if he was going to make an impact, now was the time to do it. Augustine’s ideas about love and emotion still dominated Christianity at the time, and Urban, a skilled rhetorician, knew how to use them. He began his speech.

Most beloved brethren: Urged by necessity, I, Urban, by the permission of God chief bishop and prelate over the whole world, have come to these parts as an ambassador with a divine admonition to you, the servants of God.

The use of the term “beloved brethren” to get everyone on the same page is important. Urban and his chroniclers were tapping into the crowd’s brotherly, uti [in Augustine’s sense, love of something as a means to a higher end] love for fellow Christians who were up against a common enemy. The most explicit example of this is found in the account by Balderic of Dol. After listing the horrors inflicted by Islamic forces on his fellow Christians living at the edges of the Byzantine Empire—they were flogged, driven from their homes, enslaved, robbed of their churches, and so on—Urban is said to have addressed the crowd directly:

“You should shudder, brethren, you should shudder at raising a violent hand against Christians; it is less wicked to brandish your sword against Saracens. It is the only warfare that is righteous, for it is charity to risk your life for your brothers.”

The word used in most of the primary sources for charity was caritas — that Augustinian right sort of love. But Urban and his chroniclers were also tapping into the direct and powerful frui love [enjoyment; love of something for its own sake] you should feel for Christ himself. Robert the Monk had Urban use this notion to pry people away from those they loved on earth:

But if you are hindered by love of children, parents and wives, remember what the Lord says in the Gospel, “He that loveth father or mother more than me, is not worthy of me . . . Every one that hath forsaken houses, or brethren, or sisters, or father, or mother, or wife, or children, or lands for my name’s sake shall receive an hundredfold and shall inherit everlasting life.”

The chance of everlasting life in the presence of God was the key. Linking it to a frui love for Christ and fellow Christians was powerful.

Much of the crusader rhetoric tapped into uti love for the Holy Land itself. The ever-polemical Balderic of Dol echoed Psalm 79:1 by having Urban say:

We weep and wail, brethren, alas, like the Psalmist, in our inmost heart! We are wretched and unhappy, and in us is that prophecy fulfilled: “God, the nations are come into thine inheritance; thy holy temple have they defiled; they have laid Jerusalem in heaps; the dead bodies of thy servants have been given to be food for the birds of the heaven, the flesh of thy saints unto the beasts of the Earth. Their blood have they shed like water round about Jerusalem, and there was none to bury them.”

This uti love for the Holy Land wasn’t just a legend built up by Crusade writers. Islamic accounts of the Crusades put similar words into the mouths of crusaders. Writing about the Islamic reconquest of Jerusalem in 1187, the Persian scholar Imad ad-Din al-Isfahani described terrified crusaders preparing for a final battle with the words:

“We love this place, we are bound to it, our honor lies in honoring it, its salvation is ours, its safety is ours, its survival is ours. If we go far from it we shall surely be branded with shame and just censure, for here is the place of the crucifixion and our goal, the altar and the place of sacrifice.”

The impetus for crusading seems to have been a deep sense of uti love in the Augustinian sense. The problem is, Augustine didn’t mean “love only thy neighbors whom you agree with,” and that raises a question about how an 11th-century man might reconcile violence against others with neighborly love. Thankfully, at least from the crusader’s point of view, Augustine also had an answer for that in his concept of a just war.

Augustine saw war as an act of correction, a bit like disciplining a child who has misbehaved. He wrote:

“They who have waged war in obedience to the divine command, or in conformity with His laws, have represented in their persons the public justice or the wisdom of government, and in this capacity have put to death wicked men; such persons have by no means violated the commandment, ‘Thou shalt not kill.’”

As long as you are fighting for the right reasons—that is, for God—and not for personal gain or hatred, then the war is just. More than that, it can be an act of uti love. Killing a sinner is to remove sin from the face of the earth, and that, to Augustine, was a good thing. It was also a good thing for the crusaders. ~


During the Middle Ages, the struggle for the possession of Jerusalem was strictly between Islam and Christianity. Somehow no one suggested that the city be returned to the Jews — even though both sides knew that Jerusalem was the capital of ancient Israel. Somehow that didn't count, and the descendants of the ancient Israelites were not thought to have any right whatsoever to their former homeland. Such are the peculiar ironies of the place also known as the Holy Land.


~ Erez Ben-Yosef wasn’t interested in the Bible. His field was paleomagnetism, the investigation of changes in the earth’s magnetic field over time, and specifically the mysterious “spike” of the tenth century B.C., when magnetism leapt higher than at any time in history for reasons that are not entirely understood. With that in mind, Ben-Yosef and his colleagues from the University of California, San Diego unpacked their shovels and brushes at the foot of a sandstone cliff and started digging.

They began to extract pieces of organic material—charcoal, a few seeds, 11 items all told—and dispatched them to a lab at Oxford University for carbon-14 dating. They didn’t expect any surprises. The site had already been conclusively dated by an earlier expedition that had uncovered the ruins of a temple dedicated to an Egyptian goddess, linking the site to the empire of the pharaohs, the great power to the south. This conclusion was so firmly established that the local tourism board, in an attempt to draw visitors to this remote location, had put up kitschy statues in “walk like an Egyptian” poses. 

But when Ben-Yosef got the results back from Oxford they showed something else—and so began the latest revolution in the story of Timna. The ongoing excavation is now one of the most fascinating in a country renowned for its archaeology. Far from any city, ancient or modern, Timna is illuminating the time of the Hebrew Bible—and showing just how much can be found in a place that seems, at first glance, like nowhere.

If you were a rising young archaeologist in the 1970s, you were skeptical of stories about Jewish kings. The ascendant critical school in biblical scholarship, sometimes known by the general name “minimalism,” was making a strong case that there was no united Israelite monarchy around 1000 B.C.—this was a fiction composed by writers working under Judean kings perhaps three centuries later. The new generation of archaeologists argued that the Israelites of 1000 B.C. were little more than Bedouin tribes, and David and Solomon, if there were such people, weren’t more than local sheikhs. This was part of a more general movement in archaeology worldwide, away from romantic stories and toward a more technical approach that sought to look dispassionately at physical remains.

In biblical archaeology, the best-known expression of this school’s thinking for a general audience is probably The Bible Unearthed, a 2001 book by the Israeli archaeologist Israel Finkelstein, of Tel Aviv University, and the American scholar Neil Asher Silberman. Archaeology, the authors wrote, “has produced a stunning, almost encyclopedic knowledge of the material conditions, languages, societies, and historical developments of the centuries during which the traditions of ancient Israel gradually crystallized.” Armed with this interpretative power, archaeologists could now scientifically evaluate the truth of biblical stories. An organized kingdom such as David’s and Solomon’s would have left significant settlements and buildings—but in Judea at the relevant time, the authors wrote, there were no such buildings at all, or any evidence of writing. In fact, most of the saga contained in the Bible, including stories about the “glorious empire of David and Solomon,” was less a historical chronicle than “a brilliant product of the human imagination.”

At Timna, then, there would be no more talk of Solomon. The copper mines were reinterpreted as an Egyptian enterprise, perhaps the one mentioned in a papyrus describing the reign of Ramses III in the 12th century B.C.: “I sent forth my messengers to the country of Atika, to the great copper mines which are in this place,” the pharaoh says, describing a pile of ingots he had placed under a balcony to be viewed by the people, “like wonders.”

The new theory held that the mines were shut down after Egypt’s empire collapsed in the civilizational cataclysm that hit the ancient world in the 12th century B.C., perhaps because of a devastating drought. This was the same crisis that saw the end of the Hittite Empire, the famed fall of Troy, and the destruction of kingdoms in Cyprus and throughout modern-day Greece. Accordingly, the mines weren’t even active at the time Solomon was said to exist. Mining resumed only a millennium later, after the rise of Rome. “There is no factual and, as a matter of fact, no ancient written literary evidence of the existence of ‘King Solomon’s Mines,’” Rothenberg wrote.

That was the story of Timna when Erez Ben-Yosef showed up in 2009. He had spent the previous few years excavating at another copper mine, at Faynan, on the other side of the Jordanian border, at a dig run by the University of California, San Diego and Jordan’s Department of Antiquities.

The dig quickly took an unexpected turn. Having assumed they were working at an Egyptian site, Ben-Yosef and his team were taken aback by the carbon-dating results of their first samples: around 1000 B.C. The next batches came back with the same date. At that time the Egyptians were long gone and the mine was supposed to be defunct—and it was the time of David and Solomon, according to biblical chronology. “For a moment we thought there might be a mistake in the carbon dating,” Ben-Yosef recalled. “But then we began to see that there was a different story here than the one we knew.”
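Since so much of the Timna story turns on carbon-14 dating, it may help to sketch the arithmetic behind it: a sample's age follows from the fraction of carbon-14 still present and the isotope's 5,730-year half-life. This is a minimal illustration only; real labs, like the Oxford one used here, also calibrate raw dates against tree-ring records.

```python
import math

HALF_LIFE = 5730  # carbon-14 half-life, in years

def c14_age(remaining_fraction):
    """Years elapsed, given the fraction of the original C-14 still present."""
    return HALF_LIFE / math.log(2) * math.log(1 / remaining_fraction)

# A sample retaining about 69.5% of its carbon-14 works out to
# roughly 3,000 years old, i.e. around 1000 B.C.
print(round(c14_age(0.695)))
```

The exponential decay law is why the method gets less precise for very old samples: after many half-lives, very little carbon-14 remains to measure.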

Accommodating himself to the same considerations that would have guided the ancient mining schedule, Ben-Yosef comes to dig with his team in the winter, when the scorching heat subsides. The team includes scientists trying to understand the ancient metallurgical arts employed here and others analyzing what the workers ate and wore. They’re helped by the remarkable preservation of organic materials in the dry heat, such as dates, shriveled but intact, found 3,000 years after they were picked.

A few years ago the team produced one of those rare archaeology stories that migrates into pop culture: The bones of domesticated camels, they found, appear in the layers at Timna only after 930 B.C., suggesting that the animals were first introduced in the region at that time. The Bible, however, describes camels many centuries earlier, in the time of the Patriarchs—possibly an anachronism inserted by authors working much later. The story was picked up by Gawker (“The Whole Bible Thing Is B.S. Because of Camel Bones, Says Science”) and made it into the CBS sitcom “The Big Bang Theory” when Sheldon, a scientist, considers using the finding to challenge his mother’s Christian faith.

In the past decade, Ben-Yosef and his team have rewritten the site’s biography. They say a mining expedition from Egypt was indeed here first, which explained the hieroglyphics and the temple. But the mines actually became most active after the Egyptians left, during the power vacuum created by the collapse of the regional empires. A power vacuum is good for scrappy local players, and it’s precisely in this period that the Bible places Solomon’s united Israelite monarchy and, crucially, its neighbor to the south, Edom.

The elusive Edomites dominated the reddish mountains and plateaus around the mines. In Hebrew and other Semitic languages, their name literally means “red.” Not much is known about them. They first appear in a few ancient Egyptian records that characterize them, according to the scholar John Bartlett in his authoritative 1989 work Edom and the Edomites, “as bellicose by nature, but also as tent-dwellers, with cattle and other possessions, able to travel to Egypt when necessity arose.” They seem to have been herdsmen, farmers and raiders. Unfortunately for the Edomites, most of what we do know comes from the texts composed by their rivals, the Israelites, who saw them as symbols of treachery, if also as blood relations: the father of the Edomites, the Bible records, was no less than redheaded Esau, the twin brother of the Hebrew patriarch Jacob, later renamed Israel. With the Egyptian empire out of the picture by 1000 B.C., and no record of Israelite activity nearby, “The most logical candidate for the society that operated the mines is Edom,” says Ben-Yosef. 

But archaeologists had found so few ruins that many doubted the existence of any kingdom here at the time in question. There were no fortified cities, no palaces, not even anything that could be called a town. The Edom of Solomon’s time, many suspected, was another fiction dreamed up by later authors.

But the dig at the Faynan copper mines, which were also active around 1000 B.C., was already producing evidence for an organized Edomite kingdom, such as advanced metallurgical tools and debris. At Timna, too, the sophistication of the people was obvious, in the remains of intense industry that can still be seen strewn around Slaves’ Hill: the tons of slag, the sherds of ceramic smelting furnaces and the tuyères, discarded clay nozzles of the leather bellows, which the smelter, on his knees, would have pumped to fuel the flames. 

These relics are 3,000 years old, but today you can simply bend down and pick them up, as if the workers left last week. (In an animal pen off to one corner, you can also, if so inclined, run your fingers through 3,000-year-old donkey droppings.) The smelters honed their technology as decades passed, first using iron ore for flux, the material added to the furnace to assist in copper extraction, then moving to the more efficient manganese, which they also mined nearby.

The archaeologists found the bones of fish from, astonishingly, the Mediterranean, a trek of more than 100 miles across the desert. The skilled craftsmen at the furnaces got better food than the menial workers toiling in the mine shafts: delicacies such as pistachios, lentils, almonds and grapes, all of which were hauled in from afar. 

A key discovery emerged in a Jerusalem lab run by Naama Sukenik, an expert in organic materials with the Israel Antiquities Authority. When excavators sifting through the slag heaps at Timna sent her tiny red-and-blue textile fragments, Sukenik and her colleagues thought the quality of the weave and dye suggested Roman aristocracy. But carbon-14 dating placed these fragments, too, around 1000 B.C., when the mines were at their height and Rome was a mere village.

In 2019, Sukenik and her collaborators at Bar-Ilan University, working a hunch, dissolved samples from a tiny clump of pinkish wool found on Slaves’ Hill in a chemical solution and analyzed them using a high-performance liquid chromatography device, which separates a substance into its constituent parts. She was looking for two telltale molecules: monobromoindigotin and dibromoindigotin. Even when the machine confirmed their presence, she wasn’t sure she was seeing right. The color was none other than royal purple, the most expensive dye in the ancient world. Known as argaman in the Hebrew Bible, and associated with royalty and priesthood, the dye was manufactured on the Mediterranean coast in a complex process involving the glands of sea snails. People who wore royal purple were wealthy and plugged into the trade networks around the Mediterranean. If anyone was still picturing disorganized or unsophisticated nomads, they now stopped. “This was a heterogeneous society that included an elite,” Sukenik told me. And that elite may well have included the copper smelters, who transformed rock into precious metal using a technique that may have seemed like a kind of magic.

Purple wool, symbol of wealth, around 1000 B.C.

More pieces of the puzzle appeared in the form of copper artifacts from seemingly unrelated digs elsewhere. In the Temple of Zeus at Olympia, Greece, a 2016 analysis of three-legged cauldrons revealed that the metal came from the mines in the Arava Desert, 900 miles away. And an Israeli study published this year found that several statuettes from Egyptian palaces and temples from the same period, such as a small sculpture of Pharaoh Psusennes I unearthed in a burial complex at Tanis, were also made from Arava copper. The Edomites were shipping their product across the ancient world.

It stands to reason, then, that a neighboring kingdom would make use of the same source—that the mines could have supplied King Solomon, even if these weren’t exactly “King Solomon’s mines.” But did Solomon’s kingdom even exist, and can archaeology help us find out? Even at its height, Timna was never more than a remote and marginal outpost. But it’s on these central questions that Ben-Yosef’s expedition has made its most provocative contribution. 

Looking at the materials and data he was collecting, Ben-Yosef faced what we might call the Timna dilemma. What the archaeologists had found was striking. But perhaps more striking was what no one had found: a town, a palace, a cemetery or homes of any kind. And yet Ben-Yosef’s findings left no doubt that the people operating the mines were advanced, wealthy and organized. What was going on?

The mining operation, in Ben-Yosef’s interpretation, reveals the workings of an advanced society, despite the absence of permanent structures. That’s a significant conclusion in itself, but it becomes even more significant in biblical archaeology, because if that’s true of Edom, it can also be true of the united monarchy of Israel. Biblical skeptics point out that there are no significant structures corresponding to the time in question. But one plausible explanation could be that most Israelites simply lived in tents, because they were a nation of nomads. In fact, that is how the Bible describes them—as a tribal alliance moving out of the desert and into the land of Canaan, settling down only over time. (This is sometimes obscured in Bible translations. In the Book of Kings, for example, after the Israelites celebrated Solomon’s dedication of the Jerusalem Temple, some English versions record that they “went to their homes, joyful and glad.” What the Hebrew actually says is they went to their “tents.”) These Israelites could have been wealthy, organized and semi-nomadic, like the “invisible” Edomites. Finding nothing, in other words, didn’t mean there was nothing. Archaeology was simply not going to be able to find out.

The veteran Israeli archaeologist Aren Maeir, of Bar-Ilan University, who has spent the last 25 years leading the excavation at the Philistine city of Gath (the hometown, according to the Bible, of Goliath), and who isn’t identified with either school, told me that Ben-Yosef’s findings made a convincing case that a nomadic people could achieve a high level of social and political complexity. He also agreed with Ben-Yosef’s identification of this society as Edom. Still, he cautioned against applying Ben-Yosef’s conclusions too broadly in order to make a case for the accuracy of the biblical narrative. “Because scholars have supposedly not paid enough attention to nomads and have over-emphasized architecture, that doesn’t mean the united kingdom of David and Solomon was a large kingdom—there’s simply no evidence of that on any level, not just the level of architecture.”

A visitor walking through the eerie formations of the Timna Valley, past the dark tunnel mouths and the enigmatic etchings, is forced to accept the limits of what we can see even when we are looking carefully. We like to think that any mystery will yield in the end: We just have to dig deeper, or build a bigger magnifying glass. But there is much that will always remain invisible.

What Ben-Yosef has produced isn’t an argument for or against the historical accuracy of the Bible but a critique of his own profession. Archaeology, he argues, has overstated its authority. Entire kingdoms could exist under our noses, and archaeologists would never find a trace. Timna is an anomaly that throws into relief the limits of what we can know. The treasure of the ancient mines, it turns out, is humility. ~

Timna arches. Deuteronomy describes Israel as a “land out of whose hills you can dig copper.”


~ According to the Davidson Institute, “profoundly gifted” people exhibit the following tendencies: rapid comprehension, intuitive understanding of the basics, a tendency toward complexity, the need for precision, high expectations, divergent interests—and a quirky sense of humor. They usually show “asynchronous development,” being remarkably ahead in some areas while average or behind in others. It’s hard to know where they fit in, and educational settings typically are not designed to accommodate their differences. Especially for younger children, a youthful appearance clashes with advanced ability, making it harder for some teachers to be responsive.

While many things contribute to giftedness, including various types of intelligence, genetic factors, and upbringing, one key area of interest is personality. Do gifted people differ in personality from "non-gifted" individuals? In the journal High Ability Studies, researchers Ogurlu and Özbey (2021) conducted a meta-analysis of the literature on personality and giftedness to see where the Big 5 personality traits of Extraversion, Conscientiousness, Openness to Experience, Neuroticism, and Agreeableness fit in.

They reviewed multiple databases to find research articles meeting stringent criteria to include in their pooled analysis, whittling 103 citations down to a final group of 13 high-quality studies for review. They identified 83 factors related to giftedness, age, gender, and personality in the final pooled sample of almost 8,000 people, including 3,244 gifted individuals.

Using sophisticated statistical methods, they compared personality measures between gifted and non-gifted groups to see which personality traits significantly correlated with giftedness. There were no significant differences between the gifted and non-gifted groups for Agreeableness, Extraversion, Conscientiousness, or Neuroticism. However, Openness to Experience was more strongly correlated with giftedness, with a moderately strong effect size. In addition, they found that other factors, including age, gender, individual study sample, and geographical location, did not account for giftedness or the relationship between Openness and giftedness.
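As a back-of-envelope illustration, the kind of group comparison pooled in a meta-analysis like this is usually expressed as a standardized mean difference (Cohen's d). A minimal sketch in Python (the scores below are hypothetical, not taken from the study):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups,
    using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical Openness scores (0-100 scale): gifted vs. non-gifted
d = cohens_d(mean1=72, sd1=10, n1=3244, mean2=66, sd2=10, n2=4700)
print(round(d, 2))  # 0.6 -- a "moderate" effect by the usual convention
```

A d around 0.5 is conventionally read as a moderate effect, which is the ballpark the authors report for Openness.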

Openness to Experience is a key component of intelligence, contributing to creativity and the capacity to consider multiple options and perspectives in approaching life, solving problems, and understanding complex situations. Openness fits with the observed proclivity gifted people have for complexity and divergent thinking, and the remarkable and sometimes astonishing knack gifted people have for seeing things others would never notice or even imagine. Not to mention the quirky sense of humor, which can be a double-edged sword.

Another important implication of this study is that while gifted people are at times stereotyped as having awkward or maladaptive personalities, less-social traits such as lower Extraversion, lower Agreeableness, and higher Neuroticism were not correlated with giftedness.

Conscientiousness, interestingly, was not associated with giftedness, although it is independently associated with performance in work and academic settings. Being gifted does not guarantee success, but it contributes when properly wielded.

Though correlation is not causation, it is tempting to wonder whether Openness could be increased. Research suggests that it is possible to change personality in desired directions. Many enriched educational approaches include pedagogy designed to cultivate imagination, creativity, and lateral thinking. Can adults choose to broaden their horizons, or is that choice itself a function of Openness? External motivations to increase Openness, such as dating someone more open-minded or wanting to advance professionally, might lead individuals to try new things more than they would if left to their own devices.

Future research can look at interventions to understand whether open-mindedness, if desired, can be acquired. Research on giftedness is important in order to dispel the myths and stigma that keep gifted individuals from thriving throughout the lifespan, to help develop and provide the resources society needs to benefit from these individuals, to inform educational policy and practice, and to further our understanding of the causes of, and remedies for, underachievement. ~



This is an example of how culture can completely twist the original messages of a religion. It’s as if Christian nationalists were completely unaware that Jesus was a pacifist and a champion of the poor. The reversal appears to be complete.


Some years ago I was startled when I realized that there is no commandment to believe in god. The commandments concern conduct, not belief. "No other gods before me" meant the actual gods of the region, Asherah (Yahweh's wife -- "But the Mother went with them" [i.e. with Adam and Eve]), Isis, Baal, the Hellenistic gods, etc. Yahweh had to be #1. You could worship Baal AFTER having sacrificed to Yahweh, but not before. A Jewish professor pointed that out to me: "no other gods BEFORE me" does not exclude gods AFTER me. But there is no commandment that says "Thou shalt believe in me." Words for "belief" and "religion" didn't even exist in archaic Hebrew, nor did "mind," "thought," or "imagine." Again, what counted was conduct.



I agree that if religion disappeared we would still have all kinds of problems. But with religion out of the way, especially the part that justifies violence and the subjugation of women, we would have less insanity to deal with, less passionate viciousness, less killing in the hope of being rewarded in paradise -- and more mental space for common sense . . . maybe even a shimmer of understanding that we are all human, and can help one another to make this world closer to paradise.


Dürer: Saint Jerome in his study, 1514. What wonderful detail! Note the lion, St. Jerome's pet ever since he pulled a thorn from the lion's paw.

St. Jerome's other pet, the skull; painting by Caravaggio, 1605

“We all have two lives. The second begins when you realize you only have one.” ~ Confucius


~ Approximately 10% of new coronary heart disease cases occurring within a decade in middle-aged people could be avoided by preventing iron deficiency, suggests a study published in ESC Heart Failure, a journal of the European Society of Cardiology (ESC).

“This was an observational study and we cannot conclude that iron deficiency causes heart disease,” said study author Dr. Benedikt Schrage of the University Heart and Vasculature Center in Hamburg, Germany. “However, evidence is growing that there is a link and these findings provide the basis for further research to confirm the results.”

Previous studies have shown that in patients with cardiovascular diseases such as heart failure, iron deficiency was linked to worse outcomes including hospitalizations and death. Treatment with intravenous iron improved symptoms, functional capacity, and quality of life in patients with heart failure and iron deficiency enrolled in the FAIR-HF trial. Based on these results, the FAIR-HF 2 trial is investigating the impact of intravenous iron supplementation on the risk of death in patients with heart failure.

The current study aimed to examine whether the association between iron deficiency and outcomes was also observed in the general population.

The study included 12,164 individuals from three European population-based cohorts. The median age was 59 years and 55% were women. During the baseline study visit, cardiovascular risk factors and comorbidities such as smoking, obesity, diabetes and cholesterol were assessed via a thorough clinical assessment including blood samples.

Participants were classified as iron deficient or not according to two definitions: 1) absolute iron deficiency, which only includes stored iron (ferritin); and 2) functional iron deficiency, which includes iron in storage (ferritin) and iron in circulation for use by the body (transferrin).

Dr. Schrage explained: “Absolute iron deficiency is the traditional way of assessing iron status but it misses circulating iron. The functional definition is more accurate as it includes both measures and picks up those with sufficient stores but not enough in circulation for the body to work properly.”

Participants were followed up for incident coronary heart disease and stroke, death due to cardiovascular disease, and all-cause death. The researchers analyzed the association between iron deficiency and incident coronary heart disease, stroke, cardiovascular mortality, and all-cause mortality after adjustments for age, sex, smoking, cholesterol, blood pressure, diabetes, body mass index, and inflammation. Participants with a history of coronary heart disease or stroke at baseline were excluded from the incident disease analyses.

At baseline, 60% of participants had absolute iron deficiency and 64% had functional iron deficiency. During a median follow-up of 13.3 years there were 2,212 (18.2%) deaths. Of these, a total of 573 individuals (4.7%) died from a cardiovascular cause. Coronary heart disease and stroke were diagnosed in 1,033 (8.5%) and 766 (6.3%) participants, respectively.

Functional iron deficiency was associated with a 24% higher risk of coronary heart disease, 26% raised risk of cardiovascular mortality, and 12% increased risk of all-cause mortality compared with no functional iron deficiency. Absolute iron deficiency was associated with a 20% raised risk of coronary heart disease compared with no absolute iron deficiency, but was not linked with mortality. There were no associations between iron status and stroke.

The researchers calculated the population attributable fraction, which estimates the proportion of events in 10 years that would have been avoided if all individuals had the risk of those without iron deficiency at baseline. The models were adjusted for age, sex, smoking, cholesterol, blood pressure, diabetes, body mass index, and inflammation. Within a 10-year period, 5.4% of all deaths, 11.7% of cardiovascular deaths, and 10.7% of new coronary heart disease diagnoses were attributable to functional iron deficiency.
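The population attributable fraction in its simplest, unadjusted form is Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the prevalence of the exposure and RR the relative risk. A minimal Python sketch using the study's crude figures; the study's published numbers come from adjusted models, so they differ somewhat from this back-of-envelope version:

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction: the share of cases that
    would be avoided if the exposed had the risk of the unexposed."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# ~64% had functional iron deficiency; ~24% higher coronary heart disease risk.
# The crude formula gives roughly 13%; the study's adjusted estimate is 10.7%.
paf = attributable_fraction(prevalence=0.64, relative_risk=1.24)
print(round(paf * 100, 1))  # 13.3
```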

“This analysis suggests that if iron deficiency had been absent at baseline, about 5% of deaths, 12% of cardiovascular deaths, and 11% of new coronary heart disease diagnoses would not have occurred in the following decade,” said Dr. Schrage.

“The study showed that iron deficiency was highly prevalent in this middle-aged population, with nearly two-thirds having functional iron deficiency,” said Dr. Schrage. “These individuals were more likely to develop heart disease and were also more likely to die during the next 13 years.” ~


Eggs, red meat, liver and other giblets are the best sources of well-absorbed heme iron. Seafood and poultry are also good sources.

Plants such as spinach and lentils provide non-heme iron. Heme iron is absorbed and utilized 3-4 times more efficiently than non-heme iron.

Calcium inhibits the absorption of both heme and non-heme iron. Vitamin C increases the absorption of iron, as does apple cider vinegar. Cinnamon and ginger also increase iron absorption.

~ Runners tend to struggle with anemia and iron deficiency, often at a slightly higher prevalence than other athletes, but why is that? Anemia, or iron deficiency anemia, is fairly common in runners and even more so in female runners. This is partly due to blood loss during menstruation, but it can also be caused by what is called “foot strike hemolysis”: each time your foot strikes the ground on a run, tiny blood vessels are broken and iron is lost. Iron is also lost through sweat.

But there is another, and less well known reason that iron deficiency and anemia can occur. And it can hit anyone, athlete or not.

A new area of focus and research is the anti-nutrient PHYTATE, or IP6. Phytate binds strongly to iron, zinc, calcium, and magnesium, pulling them out of your body before you have a chance to absorb them.

For those of us who consume large amounts of plant foods (grains, beans, nuts and seeds, and all products made from these foods), we may be unknowingly consuming large amounts of phytate, which then impairs our body’s ability to absorb iron from foods. Vegetarians, vegans, or anyone who consumes large amounts of pasta, whole grains, crackers, breads, beans, nuts and nut butters, or other plant-based foods is at risk. The World Health Organization estimates that about 2 billion people worldwide are anemic, and that one third of all women of reproductive age are anemic. Much of this can be attributed to diets high in phytate (grain, corn, or soy based). ~ (oops, I seem to have lost the link)



~ The three teenagers—two boys and a girl—could not have known what clues their lungs would one day yield. All they could have known, or felt, before they died in Germany in 1918 was their flu-ravaged lungs failing them, each breath getting harder and harder. Tens of millions of people like them died in the flu pandemic of 1918; they happened to be the three whose lungs were preserved by a farsighted pathologist.

A century later, scientists have now managed to sequence flu viruses from pea-size samples of the three preserved lungs. Together, these sequences suggest an answer to one of the pandemic’s most enduring mysteries: Why was the second wave, in late 1918, so much deadlier than the first wave, in the spring? These rediscovered lung samples hint at the possibility that the virus itself changed to better infect humans.

This might sound familiar. The no-longer-so-novel coronavirus is also adapting to its human host. With modern tools, scientists are tracking the virus’s evolution in real time and finding mutations that have made the virus better at infecting us. More than 1.4 million coronavirus genomes have now been sequenced. But the database for the 1918 flu is much smaller—so much so that the comparison feels unfair. This new study brings the number of complete 1918 flu genomes to a grand total of three, plus some partial genomes.

Hundred-year-old lung tissue is incredibly hard to find. Sébastien Calvignac-Spencer, a virologist at the Robert Koch Institute, in Berlin, came across the samples in this newest study in a stroke of luck. A couple of years ago, he decided to investigate the collections of the Berlin Museum of Medical History of the Charité. He wasn’t looking for anything in particular, but he soon stumbled upon several lung specimens from 1918, a year he of course recognized as a notable one for respiratory disease. Despite the flu pandemic’s notoriety, the virus that caused it is still poorly understood. “I thought, Well, okay, so it’s right here in front of you. Why don’t you give it a try?” he told me. Why not try to sequence influenza from these lungs? (This work is not dangerous: The chemically preserved lung specimens do not contain intact or infectious virus; sequencing picks up just fragments of the virus’s genetic material.)

Calvignac-Spencer and his colleagues ultimately tested 13 lung specimens and found evidence of flu in three. One was from a 17-year-old girl who died in Munich sometime in 1918. The two others were from teenage soldiers who both died in Berlin on June 27, 1918. 

The team was able to recover a complete flu-virus genome from the 17-year-old girl’s lung tissue—only the third ever found. The two other full 1918 flu genomes both came from the United States, from the lungs of a woman buried in Alaska and from a paraffin-wax-embedded lung sample of a soldier who died in New York. With another genome in hand, the researchers moved to investigate how the three viruses differed. Several changes showed up in the flu’s genome-replication machinery, a potential evolutionary hot spot because better replication means a more successful virus. The team then copied just the replication machinery of the 17-year-old’s virus—not the entire virus—into cells and found it was only half as active as that of the flu virus found in Alaska.

The obvious caveats should apply here: tiny sample size, the limits of extrapolating from test tube to human body. The exact date of the girl’s death in 1918 is also unknown, but this finding hints at the possibility that the virus’s behavior did change during the pandemic. Scientists have long speculated about why the 1918 pandemic’s second wave was deadlier than the first. Patterns of human behavior and seasonality could explain some of the difference—but the virus itself might have changed too.

The lungs of the two young soldiers in Berlin provide another clue. The teenagers’ June 1918 deaths were squarely in the pandemic’s first wave. These two samples yielded only partial genomes, but the team was able to reconstruct enough to home in on changes in nucleoprotein, one of the proteins that make up the virus’s replication machinery. Nucleoproteins act like scaffolds for the virus’s gene segments, which wind around the protein like a spiral staircase. They are also extremely distinctive, which can be a weakness: the human immune system is very good at recognizing and sabotaging them.

Indeed, the 1918 flu virus’s nucleoprotein seems to have mutated between the first and second waves to better evade the human immune system. The first-wave viruses’ nucleoproteins looked a bit like those in flu viruses that infect birds—which makes sense because scientists suspect that the 1918 flu originated in birds. But bird viruses are attuned to bird bodies. “When it jumps to humans, the virus is not evolved to be optimally resistant” to the human immune system, Jesse Bloom, a virologist at Fred Hutchinson Cancer Research Center, in Seattle, told me. Bloom and others have identified specific mutations that make the nucleoprotein better at resisting the human immune system. The first-wave flu viruses did not have them, but the second-wave ones did, possibly because they had had the time to adapt to infecting humans.

Unfortunately, many historical samples have been lost as pathology collections have fallen out of fashion over the past century. “If we had started these kinds of studies in the ’60s, we would have had no problems finding thousands and thousands of specimens,” Calvignac-Spencer said. “And now we’re really fighting to assemble a collection of 20.” He’s been in touch with more than 50 museum collections around the world in the hunt for more pandemic-flu samples. He recently found one from Australia, but the work is slow. Calvignac-Spencer has also looked for other viruses, including measles, which he and his colleagues previously found in a 100-year-old lung from the same medical collection in Berlin.

The further back in time researchers must go, the harder the samples are to find—but Bloom told me he’s especially intrigued by the possibility of finding pre-1918 flu genomes in the archives. When the 1918 pandemic swept through the world, it apparently completely replaced whatever flu existed before. Its modern-day descendants continue to infect us today as seasonal flu. In this way, the 1918 flu is familiar to us and our immune systems. What came before is still a mystery.

And of course there's a "noser" here, her nostrils exposed. The whole point is to breathe through the mask, blocking the virus from entry through the nose -- or if not entirely blocking, then at least significantly reducing the viral load.


ending on beauty:

And God said to me: Write.
Leave the cruelty to kings.
Without that angel barring the way to love,
there would be no bridge for me
into time.

~ Rainer Maria Rilke

Creation and Expulsion from Paradise; Giovanni di Paolo, 1445