Saturday, April 24, 2021


“If nothing saves us from death, at least love should save us from life.” ~ Pablo Neruda


The Frost performs its secret ministry,
Unhelped by any wind. The owlet's cry
Came loud—and hark, again! loud as before.
The inmates of my cottage, all at rest,
Have left me to that solitude, which suits
Abstruser musings: save that at my side
My cradled infant slumbers peacefully.
'Tis calm indeed! so calm, that it disturbs
And vexes meditation with its strange
And extreme silentness. Sea, hill, and wood,
This populous village! Sea, and hill, and wood,
With all the numberless goings-on of life,
Inaudible as dreams! the thin blue flame
Lies on my low-burnt fire, and quivers not;
Only that film, which fluttered on the grate,

Still flutters there, the sole unquiet thing.
Methinks, its motion in this hush of nature
Gives it dim sympathies with me who live,
Making it a companionable form,
Whose puny flaps and freaks the idling Spirit
By its own moods interprets, every where
Echo or mirror seeking of itself,
And makes a toy of Thought.

                      But O! how oft,
How oft, at school, with most believing mind,
Presageful, have I gazed upon the bars,
To watch that fluttering stranger! and as oft
With unclosed lids, already had I dreamt
Of my sweet birth-place, and the old church-tower,
Whose bells, the poor man's only music, rang
From morn to evening, all the hot Fair-day,
So sweetly, that they stirred and haunted me
With a wild pleasure, falling on mine ear
Most like articulate sounds of things to come!
. . .
Dear Babe, that sleepest cradled by my side,
Whose gentle breathings, heard in this deep calm,
Fill up the intersperséd vacancies
And momentary pauses of the thought!
My babe so beautiful! it thrills my heart
With tender gladness, thus to look at thee,
And think that thou shalt learn far other lore,
And in far other scenes! For I was reared
In the great city, pent 'mid cloisters dim,
And saw nought lovely but the sky and stars.
But thou, my babe! shalt wander like a breeze
By lakes and sandy shores, beneath the crags
Of ancient mountain, and beneath the clouds,
Which image in their bulk both lakes and shores
And mountain crags: so shalt thou see and hear
The lovely shapes and sounds intelligible
Of that eternal language, which thy God
Utters, who from eternity doth teach
Himself in all, and all things in himself.
Great universal Teacher! he shall mold
Thy spirit, and by giving make it ask.

Therefore all seasons shall be sweet to thee,
Whether the summer clothe the general earth
With greenness, or the redbreast sit and sing
Betwixt the tufts of snow on the bare branch
Of mossy apple-tree, while the nigh thatch
Smokes in the sun-thaw; whether the eave-drops fall
Heard only in the trances of the blast,
Or if the secret ministry of frost
Shall hang them up in silent icicles,
Quietly shining to the quiet Moon.

~ Samuel Taylor Coleridge

The poetry of this poem lies in the beauty of its imagery, and of course in the idea that frost performs a "secret ministry." Its philosophy could be summarized as the divinity of Nature (spelled with a capital N). To the Romantics, nature was suffused with the universal divine spirit, more accessible to someone who grew up in the countryside.

For I was reared
In the great city, pent 'mid cloisters dim,
And saw nought lovely but the sky and stars.
But thou, my babe! shalt wander like a breeze
By lakes and sandy shores, beneath the crags
Of ancient mountain, and beneath the clouds,
Which image in their bulk both lakes and shores
And mountain crags: so shalt thou see and hear
The lovely shapes and sounds intelligible
Of that eternal language, which thy God
Utters, who from eternity doth teach
Himself in all, and all things in himself.

Coleridge sees Nature as the expression of the divine (universal spirit or mind); it is the eternal language of God.

Personally I don't need to interpose a divine being between myself and nature; but given their era, the Romantics needed some "universal spirit" or universal mind to account for the beauty of nature. They didn't yet have the concept of evolution to provide a mechanism for a self-creating, self-evolving universe.


~ Coleridge’s poetry resonated with the psychedelia of the 1960s and a general cultural shift that emphasized the value of the imagination and a more holistic view of the human place within nature. Today, Coleridge is far more often remembered as a poet than a philosopher. But his philosophy was spectacular in its originality and syntheses.

Although Coleridge wrote poetry throughout his life, he increasingly channeled his energies towards philosophy. Drawing from neo-Platonism, the ingenious but difficult transcendental idealism of Immanuel Kant, and the even more obscure intricacies of post-Kantians such as J G Fichte and F W J Schelling, his philosophy was undoubtedly of the difficult metaphysical kind, very much at odds with practically minded British empiricism. Lord Byron spoke for many when he described Coleridge:

Explaining Metaphysics to the nation —
I wish he would explain his Explanation.

Yet the British empiricism of John Locke, David Hume and David Hartley was itself at odds, Coleridge pointed out, with a deeper heritage of British thought. ‘Let England be,’ he pronounced, ‘Sidney, Shakespeare, Spenser, Milton, Bacon, Harrington, Swift, Wordsworth’, who represent the idealizing and proto-romantic tradition that he identified as ‘the spiritual platonic old England’. Coleridge rallied that ‘spiritual platonic’ tradition to oppose the philosophies of empiricists and hard-headed expounders of ‘common-sense’ such as Samuel Johnson, Erasmus Darwin, Hume, Joseph Priestley, William Paley and William Pitt, ‘with Locke at the head of the Philosophers and [Alexander] Pope of the Poets’.

Identifying ‘civilization’ with the forces of economic and technological progression, and ‘cultivation’ with the deeper roots of spiritual connection, tradition and permanence, he warned of producing a society that was ‘varnished rather than polished; perilously over-civilized, and most pitiably uncultivated!’ This concern with cultivation was an important tenet in what Mill called the ‘Germano-Coleridgian’ school, which examined what the empiricists, utilitarians and materialist mechanists tended to overlook: the historical development and the socially and psychologically significant meanings embedded in religion, tradition and cultural symbolism.

This ‘Germano-Coleridgian’ approach was in stark contrast to British utilitarianism, which reduced ethics to Bentham’s principle of utility. In the culture wars of his day, Coleridge championed cultural and spiritual concerns, and opposed the ethical elevation of sensual pleasure, and the reduction of that and everything else to base matter.

Coleridge argued for the transcendence of God rather than holding, with Baruch Spinoza, that God is a wholly immanent power identified with the natural world. Characteristically of Coleridge, however, he didn’t dismiss Spinozistic arguments, but adopted parts of them to fit within what he saw as a wider whole. ‘Spinoza’s is … true philosophy,’ he wrote, ‘but it is the Skeleton of the Truth.’ It needed to be fleshed out in order to let ‘the dry Bones live’.

’In Xanadu did Kubla Khan / A stately pleasure-dome decree …’ So begins Coleridge’s poem on the power of words and imagination in physical and poetic creation. Echoing how the mighty potentate creates with an imperious fiat, the inspired poet, we’re told, could ‘build that dome in air’ in an explosively constructive fusion of opposites – ‘[t]hat sunny dome! those caves of ice!’ It’s a creation with more astonishing magic than even the worldly power of the Khan could muster:

And all should cry, Beware! Beware!
His flashing eyes, his floating hair!
Weave a circle round him thrice,
And close your eyes with holy dread,
For he on honey-dew hath fed,
And drank the milk of Paradise.

Though unpublished till 1816, Kubla Khan was written between 1797 and 1799, around Coleridge’s annus mirabilis of 1797-98, when he also wrote his supernatural poem The Rime of the Ancient Mariner, the daemonic Christabel, and some of the greatest of what he called his ‘Meditative Poems in Blank Verse’. One of those poems, the sublime ‘Frost at Midnight’, describes the beauties of nature – ‘lakes and shores / And mountain crags’ – as incarnations of the divine word, being ‘The lovely shapes and sounds intelligible / Of that eternal language, which thy God / Utters’. That poem ends on the achingly beautiful, mysterious note of ‘the secret ministry of frost’ that will, if the night gets colder, hang up the thaw-drops from the eaves ‘in silent icicles, / Quietly shining to the quiet Moon.’


Coleridge became fascinated with the notion of universal truth as a realm of ‘eternal verities’ that originate and endure in some kind of cosmic reason. This ‘reason’ he saw as underlying the fabric of the Universe and corresponding to both the universal Logos of Heraclitus and the divine Logos of St John. While Heraclitus is known for his view of a world in such constant flux that we can’t step into the same river twice, he’s also the philosopher who conceived of a universal Logos, the all-encompassing order that allows a coherent and rational reality to exist from what would otherwise be a swirling chaos. The Logos of St John is the Word that was with God in the beginning, which was and is God. It’s the spiritual heart of reality that entered into its own creation by becoming flesh, becoming the light of the world, if only the darkness could comprehend it.

Broader and deeper than any idealism that would do away with matter as an illusion or an abstraction, Coleridge retained matter within his system, much as he’d done with associationism in his theory of mind. Thus, as he wrote in 1817, he saw it as essential to "consider matter as a Product – coagulum spiritûs [the coagulation of spirit], the pause, by interpenetration, of opposite energies – … while I hold no matter as real otherwise than as the copula [or synthesis] of these energies, consequently no matter without Spirit, I teach on the other hand a real existence of a spiritual World without a material."

Coleridge developed a philosophy of ideas as powers that saw matter arise from opposed forces, forces arise from powers, powers and laws as the objective side of ideas, and ideas as residing eternally in cosmic reason, or Logos, the mind of God. His philosophy gained a comprehensiveness beyond psychology and philosophy of mind as his enquiries progressed into cosmology and the metaphysics of matter. Throughout his life, Coleridge searched for a unified view of reality that was at once bodily and spiritual.

He persuaded many of his empiricist and utilitarian British contemporaries of the dangers of understanding everything mechanistically, including mind and humanity itself. In doing so, Coleridge not only achieved an astonishingly broad and holistic philosophy of great intellectual richness and scope, but also forged a brilliant synthesis within the culture wars of his time, one we could well heed today. ~


~ Pirates were young men – the vast majority under the age of 30 – who chose to sail for themselves, rather than for merchant marines or the state’s navy. Their goal was to rob ships on the high seas to get wealthy fast. If they succeeded, these penurious and ill-educated men had an option rare in their times: to retire early. For men born into poverty with few possibilities, piracy was always a tempting option. But the risk, beyond sailing tempestuous seas in small wooden boats, was simple and brutal: they faced certain death by hanging if they were captured by maritime authorities. Yet this did not stop them. In the Golden Age of Piracy (1650s to 1730s), young men joined the ranks of the pirates in droves.

In many ways, a pirate’s life was better than a legitimate career on a merchant ship or in the Royal Navy. Merchant and naval ships were crowded, and stocked with inadequate food and dirty water. Diseases such as scurvy, which turned sailors a sickly yellow before killing them, ran rampant. Pay was minimal and often withheld in order to prevent desertion. Punishments were harsh and capricious: whippings, imprisonment below deck, and even hangings were standard fare in the Royal Navy. Injuries often led to infection, which could result in gangrene and amputation. Unlike merchant ships, naval ships were manned by sailors who were forced into their line of work as punishment for crimes. And landlubbers constantly ran the risk of impressment, so young men had to try hard not to inadvertently end up on one of His Majesty’s dreadful wooden ships. Piracy was one way to do so.

A pirate ship offered a much healthier environment. These were egalitarian communities in which the crew could nominate and vote in their captains – or indeed vote them out if they demonstrated poor leadership. Food was much more plentiful and nutritious because plundering ships provided them with more supplies, along with valuable goods. And one of the best things about being a pirate was the fair pay. Loot was divided up evenly between the crew. Payments were not withheld, which gave pirates more freedom and security to spend money on land for themselves or their families. Injuries were frequent, but pirates were compensated. The loss of an eye could yield payment of £150, while losing a limb could be worth between £400 and £600.

Pirates also formed tight-knit communities that welcomed sailors from all walks of life. They weren’t loyal to any particular country. Instead, they thought of their ship as their nation. The crews were diverse with about half of them being British or American males while the rest were usually French, Spanish, Dutch, African or even Asian. Africans, either freed or escaped enslaved people, could find work or safe haven on a pirate ship. Religious minorities could also join without the fear of persecution, seeing as pirate ships were not generally bastions of Christian purity. There were very few women on pirate ships, though some, such as Anne Bonny and Mary Read, developed a significant role within the crew. Compared with the Royal Navy, piracy seemed like a good option. But the risks prospective pirates had to weigh were drastic.

Pirates knew their lives on the high seas were brief. On average, pirates lasted in their chosen career for only two years before they died by illness, injury or execution. Illness and injury were to be expected on the high seas. It was the formidable risk of capture and execution, however, that most distinguished the life of a pirate from their legitimate counterparts.

When a pirate was captured at sea, he was sent back to London to await trial within the dismal walls of Newgate Prison. Cells were overcrowded and freezing cold with no drainage system, leaving inmates in the company of their own waste. Pirates became ill due to the damp and unsanitary conditions. Only those who had sympathetic family or friends willing to buy them food and blankets might have a slightly less miserable experience. Once taken from their cell, the trial was short and beside the point: they were inevitably found guilty and sentenced to death. They were then hanged at a place called Execution Dock on the banks of the Thames, the river symbolizing the place of their crimes.

The public executions of pirates were intended to be slow and agonizing torture. The noose was shorter than normal, which caused them to die by strangulation rather than a broken neck. Their death could take as long as an hour in some cases, their limbs jerking uncontrollably all the while in a motion known as the Marshal’s Dance. Perhaps worst of all was the knowledge that they would leave their families destitute and socially ostracized. Few wanted to be associated with the families of dead pirates.

It is from within the grey walls of Newgate Prison that we’re afforded a glimpse of what the pirates thought about their lives. In part, we have the Ordinary of Newgate to thank for this. The Ordinary was not a priest, but a chaplain of the prison who offered spiritual counsel. It was an unusual job. To be appointed as an Ordinary, a man had to have already obtained the rights to publish an account of prisoners’ last dying speeches and behavior on the scaffold, along with the condemned man’s life story. These would then sell at an affordable price of three or six pence. Initial printings could run into the thousands, which could earn the Ordinary up to £200 a year (nearly £22,000 in today’s currency).

Beyond profit, the purpose of these publications was to warn about the consequences of becoming a pirate. Piracy was extremely lucrative for the Ordinary of Newgate given the enormous popular interest in the lives of those who took to the seas.

One such typical pirate in the care of the Ordinary was a young man named Walter Kennedy. The day before his scheduled execution, Kennedy appeared to be ready to die. He proclaimed that he’d made his peace with God. He said he was glad he had only a wife to leave behind rather than both a wife and children, so he wouldn’t be leaving her in further debt and grief. But the next day, as death stood before him, Kennedy descended into a panic, knowing that he’d soon endure the Marshal’s Dance. The Ordinary described him as ‘extremely terrified and concerned at the near approach of death’; at the scaffold, he begged for water. Terror followed the apparent acceptance of death as its reality became certain.

According to accounts from the Ordinary of Newgate, this terror often abated upon the scaffold. The condemned became reconciled with their end. The Ordinary offered counsel and prayer to pirates, and found them to be penitent, repentant and ready to accept their fate. In the end, Kennedy managed to compose himself and give his last dying speech. Like many others, his concern was for his family. He urged people not to judge his wife for his actions, for she’d always admonished him for his ‘vices’. His crimes were his, and his alone. 

We also have a few letters written to loved ones. These strike similar notes as the accounts of the Ordinary: some mixture of fear and acceptance, followed by a kind of reconciliation with death that left only concern and sadness for those they were forced to leave. One such widowed pirate captain was William Lawrence. He had placed his children in his mother’s care before setting off to sea. In the moments before his death, all he cared about was the welfare of his children and his mother. In his last letter, he implored her to ‘be father and mother, as well as grandmother, over my dear and tender infants, in whom I hope God may grant such honest principles and morals in their hearts’.

Another young pirate, Captain Joseph Halsey, facing his execution, chose not to give the customary speech. Instead, he wrote a letter to his mother in an attempt to ease her soul and her grief over her son’s impending death. He insisted that he was innocent and therefore hoped that God would grant him rest. Halsey wrote that he hoped his siblings would live virtuous lives in the fear of God but, most strikingly, he ended his letter with these words of comfort and sorrow:

I am very sorry mother, to think that I should be called so soon out of this world by an untimely end, for I had always hope of helping you, and should have done very well, had it not been for these [other pirates]. It has cost me all my wages, venture, and life. Don’t make yourself uneasy, for it cannot be helped. I’ll send you home my shirts, buckles, and hat. Remember me… ~


I was surprised when I came upon the title of this article — facing death as calmly as a pirate? Not that I have ever wondered just how pirates (or any other convicts) faced their impending execution. After I finished reading, the final calm made perfect sense to me. Once something is inevitable, with no chance of a last-minute reprieve, it’s not that hard to imagine that we concentrate on our last task, such as making the last speech. We are not going to scream, which would be useless and demeaning — we wish to preserve dignity. It’s also touching to reflect how often the pirates expressed concern for their family. To the end, a human being is a social animal. 

Note in particular that the pirates' last thoughts did not turn to god and the afterlife (at least in this account). The men expressed caring for their wives and children, brothers and sisters, and, if she was still alive, their mother. These "hardened criminals" were apparently quite capable of tenderness.

A side note here was the revelation of the brutality of the hanging of pirates — the short rope that didn’t break the neck, but caused a slow strangulation instead. Thus, we observe something close to nobility on the part of the condemned, and to utmost cruelty on the part of the executioners — again, human, all too human.


During the “Golden Age of Piracy,” the urban poor were starving and homeless. The government placed the unemployed into workhouses and used them as slave labor. By feeding the forced laborers a starvation diet, city officials and their business associates increased their profits. Outside these facilities, the poor begged and stole in order to eat. Unemployed and homeless young men were the ideal recruits for piracy.

I agree with the article that the young were more likely to be sentenced to the poorhouse, and the differences between the poorhouse and the workhouse were negligible. The discipline in these houses was as harsh as in the British Navy. The democracy of the pirate ships made them more desirable than the regimented choices open to young men living in poverty.

On the streets or under government care, the poor lived in crowded, unsanitary conditions. They lived with measles, smallpox, malaria, scarlet fever, and chickenpox. These conditions were exacerbated by malnutrition. Except for the daily rations of hardtack and rum, the same conditions were found on the ships. When we take the unromantic view of the pirates’ life, we note that on land or at sea, the plight of the pirates was homelessness and starvation.

Poverty is the same situation that we face today — a condition developed and maintained by the capitalist economy. Cheap labor requires a substantial portion of the population to exist in poverty; otherwise, wages would rise. The fact that the pirate’s life was short didn’t matter when starvation and slavery were the only other choices. Pirates were not the swashbuckling and fun-loving adventurers portrayed in the movies. Pirates came out of the harsh reality of poverty. The harshness of their lives was a result of industrialization, which benefited a few rich and powerful families.


This is a shocking article, starting with its title. If the title strikes us as manipulative clickbait, it’s no accident. Our usual view of pirates is so stereotypical — eyepatch, a missing limb, a bottle of rum — that many readers would simply skip the article unless alerted that something unusual would follow.

What follows is more than simply unusual: it’s shocking. We learn of the brutal abuse in the British navy and on merchant ships. By contrast, pirate ships offered egalitarianism. The men voted for their captain, and could vote him out. The loot was distributed equally. There was even compensation for injuries! And the food was better.

The price was the risk of capture and sadistic execution — the too-short rope that didn’t break the neck but slowly strangled instead, causing the “Marshal’s dance,” no doubt mocked by the coarse crowd that attended public executions for entertainment. The part about “dying like a pirate” was interesting, but for me the central message of this article was the desperation it took for a poor young man to choose piracy in the face of such a risk.



~ Exculpation without exoneration. No one fully designs themselves. We are not self-authors starting from some blank slate. We do not pick our traits from some à la carte menu. That's exculpation: freedom from the guilt of self-authorship.

But we nonetheless pay the price and reap the rewards for who we ended up being.

I think the mind/body and free will/determinism debates are fueled by that paradox. I think Catholicism was an early attempt to deal with it, one that does a pretty poor job of it, no better or worse than Judaism. Just old, outdated, confused. ~ Jeremy Sherman


~ Why the human brain evolved as it did has never been plausibly explained. Apparently, not since the first life-form billions of years ago has a single species gained dominance over all others – until we came along. Now, in a groundbreaking paper, two Israeli researchers propose that our anomalous evolution was propelled by the very mass extinctions we helped cause. Or: As we sawed off the culinary branches from which we swung, we had to get ever more inventive in order to survive.

As ambling, slow-to-reproduce large animals diminished and gradually went extinct, we were forced to resort to smaller, nimbler animals that flee as a strategy to escape predation. To catch them, we had to get smarter, nimbler and faster, according to the universal theory of human evolution proposed by researchers Miki Ben-Dor and Prof. Ran Barkai of Tel Aviv University, in a paper published in the journal Quaternary.

In fact, the great African megafauna began to decline about 4.6 million years ago. But our story begins with Homo habilis, which lived about 2.6 million years ago and apparently used crude stone tools to help it eat flesh, and with Homo erectus, which thronged Africa and expanded to Eurasia about 2 million years ago. The thing is, erectus wasn’t an omnivore: it was a carnivore, Ben-Dor explains to Haaretz.

“Eighty percent of mammals are omnivores but still specialize in a narrow food range. If anything, it seems Homo erectus was a hyper-carnivore,” he observes.

And in the last couple of million years, our brains grew threefold, to a maximum of about 1,500 cubic centimeters (cc), a size achieved about 300,000 years ago. We also gradually but consistently ramped up in technology and culture – until the Neolithic revolution and the advent of the sedentary lifestyle, when our brains shrank to about 1,300 to 1,400cc, but more on that anomaly later.

The hypothesis suggested by Ben-Dor and Barkai – that we ate our way to our present physical, cultural and ecological state – is an original unifying explanation for the behavioral, physiological and cultural evolution of the human species.


Evolution is chaotic. Charles Darwin came up with the theory of the survival of the fittest, and nobody has a better suggestion yet, but mutations aren’t “planned.” Bodies aren’t “designed,” if we leave genetic engineering out of it. The point is, evolution isn’t linear but chaotic, and that should theoretically apply to humans too.

Hence, it is strange that certain changes in the course of millions of years of human history, including the expansion of our brain, tool manufacture techniques and use of fire, for example, were uncharacteristically progressive, say Ben-Dor and Barkai.

“Uncharacteristically progressive” means that certain traits such as brain size, or cultural developments such as fire usage, evolved in one direction over a long time, in the direction of escalation. That isn’t what chaos is expected to produce over vast spans of time, Barkai explains to Haaretz: it is bizarre. Very few parameters behave like that.

So their finding of a correlation between the shrinking average weight of African animals, the extinction of megafauna and the growth of the human brain is intriguing.

From mammoth marrow to joint of rat

To be clear, just this month a new paper posited that the late Quaternary extinction of megafauna, in the last few tens of thousands of years, wasn’t entirely the fault of humanity. In North America specifically, it was due primarily to climate change, with the late-arriving humans apparently providing the coup de grâce to some species.

In the Old World, however, a human role is clearer. African megafauna apparently began to decline 4.6 million years ago, but during the Pleistocene (2.6 million to 11,600 years ago) the size of African animals trended sharply down, in what the authors term an abrupt reversal from a continuous growth trend of 65 million years (i.e., since the dinosaurs almost died out).

When Homo erectus the carnivore began to roam Africa around 2 million years ago, land mammals averaged nearly 500 kilograms. Barkai’s team and others have demonstrated that hominins ate elephants and large animals when they could. In fact, originally Africa had six elephant species (today there are two: the bush elephant and forest elephant). By the end of the Pleistocene, by which time all hominins other than modern humans were extinct too, that average weight of the African animal had shrunk by more than 90 percent.

And during the Pleistocene, as the African animals shrank, the Homo genus grew taller and more gracile, and our stone tool technology improved (which in no way diminished our affection for archaic implements like the hand ax or chopper, both of which remained in use for more than a million years, even as more sophisticated technologies were developed).

If we started some 3.3 million years ago with large, crude stone hammers that may have been used to bang big animals on the head or break bones to get at the marrow, over the epochs we invented the spear for remote slaughter. By about 80,000 years ago, the bow and arrow was making its appearance, which was more suitable for bringing down small fry like small deer and birds. Over a million years ago, we began to use fire, and later achieved better control of it, meaning the ability to ignite it at will. Later we domesticated the dog from the wolf, and it would help us hunt smaller, fleet animals.

Why did the earliest humans hunt large animals anyway? Wouldn’t a peeved elephant be more dangerous than a rat? Arguably, but catching one elephant is easier than catching a large number of rats. And megafauna had more fat.

A modern human can only derive up to about 50 percent of calories from lean meat (protein): past a certain point, our livers can’t digest more protein. We need energy from carbs or fat, but before developing agriculture about 10,000 years ago, a key source of calories had to be animal fat.

Big animals have a lot of fat. Small animals don’t. In Africa and Europe, and in Israel too, the researchers found a significant decline in the prevalence of animals weighing over 200 kilograms correlated to an increase in the volume of the human brain. Thus, Ben-Dor and Barkai deduce that the declining availability of large prey seems to have been a key element in the natural selection from Homo erectus onward. Catching one elephant is more efficient than catching 1,000 rabbits, but if we must catch 1,000 rabbits, improved cunning, planning and tools are in order.


Our changing hunting habits would have had cultural impacts too, Ben-Dor and Barkai posit. “Cultural evolution in archaeology usually refers to objects, such as stone tools,” Ben-Dor tells Haaretz. But cultural evolution also refers to learned behavior, such as our choice of which animals to hunt, and how.

Thus, they posit, our hunting conundrum may also have been a key element in that enigmatic human characteristic: complex language. When language began – and with which ancestor of Homo sapiens, if any before us – is hotly debated.

Ben-Dor, an economist by training before obtaining a Ph.D. in archaeology, believes it began early. “We just need to follow the money. When speaking of evolution, one must follow the energy. Language is energetically costly. Speaking requires devotion of part of the brain, which is costly. Our brain consumes huge amounts of energy. It’s an investment, and language has to produce enough benefit to make it worthwhile. What did language bring us? It had to be more energetically efficient hunting.”

Domestication of the dog also requires resources and, therefore, also had to bring sufficient compensation in the form of more efficient hunting of smaller animals, he points out. That may help explain the fact that Neolithic humans not only embraced the dog but ate it too, going by archaeological evidence of butchered dogs.

At the end of the day, wherever we went, humans devastated the local ecologies, given enough time.

There is a lot of thinking about the Neolithic agricultural revolution. Some think grain farming was driven by the desire to make beer. Given residue analysis indicating that beer has been around for over 10,000 years, that theory isn’t as far-fetched as one might think. Ben-Dor and Barkai suggest that once we could grow our own food and husband herbivores, with the megafauna almost entirely gone, hunting them became too energy-costly. So we had to use our large brains to develop agriculture.

And as the hunter-gathering lifestyle gave way to permanent settlement, our brain size decreased.

Note, Ben-Dor adds, that the brains of wolves, which have to hunt to survive, are larger than the brains of domesticated wolves, i.e., dogs. Also: the chimpanzee brain has remained stable for 7 million years, since the split with the Homo line, Barkai points out.

“Why does any of this matter?” Ben-Dor asks. “People think humans reached this condition because it was ‘meant to be.’ But in the Earth’s 4.5 billion years, there have been billions of species. They rose and fell. What’s the probability that we would take over the world? It’s an accident of nature. It never happened before that one species achieved dominance over all, and now it’s all over. 

“How did that happen? This is the answer: A non-carnivore entered the niche of carnivore, and ate out its niche. We can’t eat that much protein: we need fat too. Because we needed the fat, we began with the big animals. We hunted the prime adult animals which have more fat than the kiddies and the old. We wiped out the prime adults who were crucial to survival of species. Because of our need for fat, we wiped out the animals we depended on. And this required us to keep getting smarter and smarter, and thus we took over the world.”


One interesting thing here is the statement that wherever humans settled, even thousands of years ago, they caused an ecological disaster.


The idea that we created the conditions that demand that we evolve to survive is very interesting. Of course the disappearance of megafauna is not wholly due to our predation — climate change also played a part — but the result for us was very much in terms of “adapt or die.” And it appears each of our adaptations led not only to our survival, but to enormous changes in the biosphere.

At no time in all our long history did we live so lightly on the land that we did not change it. The neolithic development of agriculture and the industrial and technological revolutions have made the most extensive and drastic changes, each creating problems in the course of applying their solutions — challenges we again must meet or perish. It is as though we spur our own evolution by creating conditions that not only directly cause change but also will eventually create new circumstances that demand further adaptations.

Agriculture brought with it a host of changes both social and physical — changes in how we form societies, how we eat and digest food, how we work and play, our mobility, the shapes of our jaws and teeth, the volumes of our brains, the diseases we suffer...and all of these changes are also challenges necessitating new changes.

Evolution, we are learning, can be slow, but it can also be fast. Not all change requires a long string of generations; it can become significant over a much shorter time. This has been demonstrated, for instance, in the experiments with domesticating foxes, which developed significant traits of domestication after only a few generations. Our evolution is both physical and behavioral, and not as chaotic as evolution dependent on random mutations alone. It does not stem purely from our unconscious: we can investigate and invent and deliberately respond. We can choose our attempts at solution, discard what doesn’t work and learn how to do better.

I believe the current crisis of climate change is one of those challenges that can spur us to new and powerful inventions, new and better ways to survive in a world we have been instrumental in creating. Our evolution spurred by old solutions creating their own conditions of failure, so that new solutions inevitably had to be found, too urgently to depend on the chaos of random mutations, seems a good way to understand the progression of human development.

It might illustrate this process to think of diseases like HIV and Ebola, jumping from their animal reservoirs in remote wild areas to infect human populations. One factor allowing this to happen was the development of roads into these areas, followed by more human presence and contact. This parallels the historical experience of plague, spread by overland and marine trade routes. And of course our current pandemic was spread by international travel by air and sea. Our activity creates the circumstances we need to understand and survive. Thus, in many ways, evolution is a crisis-driven response.

In California, diet is the true religion. But let's be archaic for a while, and turn to what is traditionally regarded as religion.


~ Published at a time when the number of Catholic priests continues to dwindle and the power of bishops over the faithful continues to weaken, Garry Wills’s book, “Why Priests?,” may accelerate both processes. A former Jesuit seminarian, Wills draws on his expertise in classical languages and his wide reading in ecclesiastical history to argue that the Catholic/Orthodox priesthood has been one long mistake.

Wills bears down hard on the New Testament Letter to Hebrews (author unknown), the canonical source for the belief that Jesus considered himself a priest. The text fails to support that proposition, Wills argues, and the corollary that Jesus left behind a priesthood to wield spiritual authority over lesser mortals has no scriptural leg to stand on.

Wills also attacks the belief that Jesus’s death was a sacrifice. The main difficulty here was pointed out by Abelard in the 12th century, in a passage quoted by Wills: “It seems extremely cruel and evil to demand the death of a person without guilt as a form of ransom . . . and even more for God to accept his own Son’s death as the means of returning all the world to his esteem.” Wills aligns himself with a “new body of Christian thinkers . . . [who are] escaping the imported cult of human sacrifice initiated by the Letter to Hebrews.”

While biblical scholars debate the complexities of Wills’s reasoning, the ordinary reader can venture at least this far. If Wills is right, he puts to rest two of the biggest anomalies in Judeo-Christian thought. The first is the tension between the notion of God as love and the notion of God as a needy tyrant whose ego must be fed by worship and sacrifice (animals in the Old Testament, Jesus in the New). The second controversy has to do with Jesus’s seemingly contradictory messages to the faithful. He is supposed to have assured all human beings that they are equal in the sight of God, saying, “Whoever exalts himself will be humbled, and whoever humbles himself will be exalted,” only to turn around and establish a quasi-aristocratic caste to tell us how to live our lives, yea, extending into our very bedrooms. For Wills, the second message is wholly manmade.

At the end, Wills addresses a question he says he gets all the time: Why stay a Catholic when you cast doubt on the church’s basic composition? His crafty answer turns the question inside-out: “No believing Christians should be read out of the Mystical Body of Christ, not even papists. It will hardly advance the desirable union of all believers if I begin by excluding those closest to me.”


At this point this is not a new book, but it's worth mentioning again, not just because it points out the lack of scriptural evidence for the exalted role of the priesthood (the Protestant Reformation took care of that in sufficient detail), but because it addresses “the tension between the notion of God as love and the notion of God as a needy tyrant whose ego must be fed by worship and sacrifice.” I don't think there is any way this tension can be resolved. If we want a god of love and not a needy tyrant, we have to let go of Christianity as it is now understood (Unitarians may be an exception here).



~ William James, the father of Western psychology, in 1902 defined spiritual experiences as states of higher consciousness, which are induced by efforts to understand the general principles or structure of the world through one’s inner experience. At the core of his view of spirituality is what we might call ‘connectedness’, which refers to the fact that individual goals can be truly realized only in the context of the whole – one’s relationship to the world and to others.

Recent neuroscientific research shows that the same [“higher”] states of consciousness can be achieved by secular practices. Scientific and creative epiphanies with their accompanying ecstatic states characterized by a sense of unity and bliss are similar to religious experiences, with both involving a higher state of presence and observation. Many geniuses, such as Albert Einstein and the mathematician Srinivasa Ramanujan, reported spiritual-like states during their revelations or breakthroughs. But these don’t have to be the rare experiences of a chosen few. They can be reached in daily life. As the Nobel laureate and poet Czesław Miłosz put it: ‘Description demands intense observation, so intense that the veil of everyday habit falls away and what we paid no attention to, because it struck us as so ordinary, is revealed as miraculous.’

I’m a neuroscientist and, among other things, I study the way that spiritual states are reflected in the brain and other parts of the body. Spiritual practices have been shown to be closely linked to self-awareness, empathy and a sense of connectedness, all of which can be correlated with the frequency of brainwaves as measured by electroencephalogram (EEG). Studies using EEG have demonstrated how ‘fragmented’ or out of step our whole brain activity can be much of the time, suggestive of conflicts between our behavior, thinking, feeling and communication. On the other hand, expert meditators demonstrate more ‘harmonious’ brain waves, which could be indicative of greater synchrony or connectivity within and across different neural areas. In short, spirituality, similar to love, has physiological effects in the brain and body, and EEG provides a window on these changes.

What’s more, research suggests that we can do more than just measure this kind of activity. We can also train our brains to behave in a more ‘aware’ way by engaging in activities that facilitate greater connection or neural synchronization. Higher synchronization – imagine a large group of brain cells singing together – has been found following the practice of different contemplative paradigms, such as meditation and prayer (creating, as it were, slower ocean waves, now growing calmer and calmer). One way of interpreting this is that neuronal synchronization enhances our brain ‘harmony’ or ‘integrity’ – achieving a state in which the brain works in a more congruent way, adopting a more global perspective. Other findings point to the psychological consequences of this state – greater neuronal synchronization tends to enable a greater ability to make moral judgments and problem-solve creatively.

Neuronal synchronization also correlates with feeling more self-connected, which can, in turn, further increase empathy, creativity and social effectiveness. In two words, it’s associated with greater self-awareness, which has many practical benefits. For instance, the psychologist and author of Insight (2017) Tasha Eurich wrote in the Harvard Business Review in 2018 that people with greater self-awareness are more confident, make sounder decisions, build stronger relationships and communicate more effectively. The self-aware also receive more promotions, have more satisfied employees, and achieve more profitable companies.

The scientific exploration of such experiences could reveal the mechanisms enabling us all to achieve these states even in the most mundane moments, such as waiting in traffic.

Most of my research over the past two decades is linked to a movement meditation called Quadrato Motor Training (QMT; Oriana: perhaps the closest analog is Tai-chi) that demands both coordination and mindfulness. Practitioners alternate between dynamic movements and static postures, while dividing their attention between their body in the present moment and its location in space. QMT requires a connection between the ‘external’ world and the inner realm by requiring the participant to be intentionally aware of both inner and outer ‘worlds’ simultaneously.

In our research, we found that QMT improved cognitive flexibility. For example, in thinking about a simple glass, most people will associate it with the act of drinking. But following QMT training, additional ‘worlds of content’ can open up – the glass can be viewed as ‘the holy grail’ or a hat. In fact, our study showed that a seven-minute QMT session increased cognitive flexibility by 25 per cent, compared with other simpler kinds of movement or verbal training. 

What’s more, EEG measures showed that the enhanced cognitive flexibility associated with QMT training was also accompanied by increased brain synchronization of the kind previously related to relaxation, attention and a flow state. Some might say QMT also fosters spirituality.

What else helps to produce neural synchronization? Well, perhaps surprisingly, being in a space similar to a sensory-deprivation chamber, with little external stimulation, also impacts on neuronal synchronization. Such was the idea behind the OVO chamber (uovo is Italian for ‘egg’, describing the space’s shape), created by Patrizio Paoletti, one of the leading teachers of meditation, based on his ‘Sphere Model of Consciousness.’

My colleagues and I have found that OVO-immersion leads to increased neuronal synchronization in the insula, a brain area related to empathy as well as bodily self-awareness. In turn, this was found to be accompanied by an increased sense of ‘absorption’ (akin to that feeling you get when overwhelmed by the beauty of a sunset, as you fully and voluntarily engage your attention in the experience). Absorption is also closely related to spirituality, meditation and empathy, likely because all involve openness to self-altering experiences and voluntarily increasing awareness.

As spirituality is closely related to one’s state of consciousness, self-awareness and neuronal synchronization, the more one’s consciousness is elevated, the more one feels the connectedness of things. Imagine you’re out driving and notice the sun setting. Is your next thought about the traffic, or are you awed by the glorious sunset and the daily planetary dance we all share? Now imagine the same drive. Someone recklessly cuts you off and zooms away. Is your first reaction to get upset and start chasing them – risking yourself and others? Or do you remain calm, with your heart rate the same as before the car overtook you?

In both examples, the latter option involves engaging a more mature, present part of ourselves in the current once-in-a-lifetime moment – being fully connected to the experience of the sights, the sounds and the scents. This is the kind of experience some call spirituality, namely the interconnectedness of being. In contrast, each time we react involuntarily, we aren’t anchored in our centre, but controlled by a more automatic state not chosen by us, and therefore we’re less connected both within ourselves and to the greater good.

For me, a big part of spirituality is overcoming daily challenging situations with calm and care. When we lose it, for instance, what exactly are we losing? Nothing less than our selves. We all lose it sometimes, but we can lose it less often by continually reconnecting to our best selves and to each other.

Recently, while preparing for an online conference, I shared some of my data showing increasingly greater levels of relaxation among novice, intermediate and expert meditators. My curious 17-year-old daughter said: ‘Mom, I know how to meditate. I’m an intermediate practitioner, right?’ I replied: ‘Well, as you see by these graphs, there is a big difference between knowing how to meditate and actually practicing it regularly. Imagine some stairways you can climb to reach the sky. You have the means to get there, but now you need to keep climbing.’

As a neuroscientist, knowing that brain change is possible (even among grown-ups) keeps me optimistic and motivated in my research. Through simple non-invasive techniques, we can all train and monitor our spirituality and, in turn, live more vibrantly. ~ Tal Dotan Ben-Soussan


Can anyone have a spiritual experience? I suppose the answer is yes — as long as you define spirituality as "connectedness." The author assumes you must be willing to practice a certain meditation technique (which can involve movement, e.g. yoga), and eventually you are likely to achieve the state of “connectedness” that this author posits is the core of a spiritual experience.

But isn't such connectedness and pure positive emotion more easily achieved by spending quality time with your dog? I ask this seriously. I've seen people express affection for their pets, and their faces seemed radiant like those of the saints. 

“Spiritual” is a vague term, and I wonder if we might be better off discarding it. Instead, could we perhaps speak about calm contentment, or a flow of positive emotions when enjoying the beauty of art or nature? “Spiritual” seems to point to a quasi-religious yearning to speak to a dead person, for instance — not just to have a dream about them, but to see them walk into the room and say something, anything.

But, since typically (given existing accounts) no one else would see the dead person, would that not be a hallucination? And so we are back to the inability to verify. Still, going by subjective accounts, post-bereavement hallucinations are relatively common. Apparently that’s one way the brain has to cope with profound loss.

Likewise, no one denies the subjective reality of dreams or near-death experiences. We recognize these are interesting experiences, sometimes extremely so — NDEs are said to have changed people’s lives.

Nevertheless, when we feel peaceful and contented, perhaps even blissful, wouldn’t it be sufficient and ultimately more useful to use terms such as “peaceful” and “contented”? By that standard, there has never been any doubt that anyone can achieve those pleasant states of mind. The practical next step is to ask if we can voluntarily increase their frequency. It appears so — but that’s hardly a new discovery.

Tal Dotan Ben-Soussan is a neuroscientist and bio-psychologist who lives in Assisi, Italy.

St. Francis by Ben Ortega


~ In some times and places, life is seen as a one-way expedition from birth to death. We progress linearly and don’t look back. In other times and places, life is circular, a never-ending round trip. We live, die, and live again.

The idea that existence is cyclical can bring us a certain composure in the face of life’s tribulations and vulnerabilities. It can help us appreciate the words of the ancient Greek playwright Aristophanes that ‘old men are children twice over’, as well as William Shakespeare’s dictum that old age is a ‘second childishness’. As the Taoist thinker Zhuang Zi said: ‘Fang sheng fang si’ – at the moment of life is death; at the moment of death is life. The beginning is the end; the end is the beginning.

Such ancient wisdom and literary musing find a close translation in modern neurological research into dementia and Alzheimer’s disease: retrogenesis, meaning backward (retro-) beginning (-genesis). It was proposed by Barry Reisberg, a psychiatrist at the New York University school of medicine, as a model to make sense of the progressive decline in Alzheimer’s. In 1999, Reisberg defined retrogenesis as ‘the process by which degenerative mechanisms reverse the order of acquisition in normal development’. In other words, the deterioration of a patient with Alzheimer’s follows, in reverse order, a child’s normal development. What a child learns first in this world, a patient loses last; what a child learns last, the patient loses first. The beginning is the end; the end is the beginning.

This is not just philosophical contemplation. A body of scientific evidence supports the retrogenesis model. In Alzheimer’s, we generally witness the gradual decline of two kinds of abilities: cognitive and functional. The former refers to capacities such as attention, memory, language and orientation. The latter refers to abilities to perform daily living tasks from dressing and eating to shopping and handling finances. Multiple tools have been developed to assess the extent of these skills in patients with Alzheimer’s, such as the mini-mental state exam (MMSE) for cognitive abilities and the interview for deterioration in daily living activities in dementia (IDDD) as a measure of functional decline.

When researchers in 2012 used these tools to compare the cognitive and functional abilities of 148 elderly participants and 181 children, they found much to support the retrogenesis model. Patients with Alzheimer’s who were at the very early stage of the disease obtained cognitive MMSE scores comparable with those achieved by children around the ages of six and seven. As the disease progresses, patients’ MMSE scores gradually dropped to the level of a five-year-old and, then, below that of a four-year-old. As for their functional abilities, patients with mild to moderate Alzheimer’s had IDDD scores comparable with those obtained by children aged seven to four, respectively. Patients with severe Alzheimer’s obtained IDDD scores below the level of a four-year-old.

Reisberg had also mapped the sequence of development in childhood and in Alzheimer’s, and the correlation is striking. At the age of 12 and above, children have the ability to hold a job, as working people at the very early stage of Alzheimer’s would insist on doing. At the age of eight to 12, children can be trusted to handle simple finances, as people with mild Alzheimer’s also do. At the age of five to seven, children can be counted on to select proper clothing for themselves, and people with moderate Alzheimer’s likewise determine their own fashion preferences. Around the age of four, children start to brave the toilet independently; people with moderately severe Alzheimer’s often retain that ability, too. At the age of 15 months, children can speak five to six words, and people with severe Alzheimer’s are equally fluent. Younger than 15 months, infants exhibit reflexes important for survival, for example, turning their faces toward a gentle stroking hand or sucking whatever touches the roofs of their mouths. These reflexes are lost in adulthood but regained in severe Alzheimer’s.

Even in emotional development, which can’t be charted with the same precision, there tends to be a reverse relationship between childhood and Alzheimer’s. For example, children aged between two and five tend to use temper tantrums and verbal outbursts to make their frustration known. Alzheimer’s patients at corresponding stages of their second childhood also have that approach.

Why would this reverse correlation happen? What, if any, mechanisms in the brain account for it? In childhood and early adulthood, brain development is not a homogeneous process. It starts in the so-called ‘primitive’ regions – those that allow basic functions such as processing visual information (the visual cortex) or sensing touch and warmth (the somatosensory cortex). The more evolved or higher-order brain regions mature later, such as those that enable reasoning and problem solving (the frontal lobe). In Alzheimer’s, brain damage happens to follow a reverse order. Patients first lose their higher-order abilities such as memory, language and reasoning, but hold on to their visual, sensory and motor skills for a longer time.

Researchers speculate that what lies behind this first-in, last-out pattern are axons – the long, slender fibers extended from neurons, like the brain’s electrical wires. Axons conduct electrical impulses away from a neuron to be received by other neurons so that information can be relayed and processed in the brain. Just like wires need to be insulated for their protection and proper function, axons are coated by a sheath of protein and fatty substances called myelin. This myelin sheath doesn’t develop simultaneously all over the brain’s axons. It grows first around axons that are critical for a child’s early survival: for example, axons that enable sensory and motor functions. Over years, then, these axons accumulate thicker myelin, which, like thickly coated wires, are less susceptible to damage.

By contrast, because babies don’t need language and self-control to survive, axons that are involved in higher-order abilities don’t develop myelin until later in the process, and possess only a thin coat of protection. These axons are the ones that are more liable to damage in old age. When that happens, signal-exchange between neurons slows down or becomes blocked, abnormal protein deposits called tangles spread and choke neurons to death, corresponding brain functions are lost, and childhood is reversed.

As with all contemporary theories that try to explain Alzheimer’s, the retrogenesis model has its problems: for, when we really scrutinise them, our first and second childhoods aren’t exactly mirror images of one another. In children, the ability to perform basic activities such as dressing themselves and complex activities such as handling finances develop simultaneously. That is, with increasing age, children tend to have progressively higher overall functional abilities. By contrast, in Alzheimer’s, functional decline clearly discriminates. Patients lose the ability to perform complex tasks first, while basic abilities hang on for a while longer, sometimes persisting into the late stage of the disease when patients should have been, according to retrogenesis, no more capable than infants.

A further example concerns language, which isn’t always a predictable, straightforward story in reverse. According to the retrogenesis model, patients with late-stage Alzheimer’s, like infants, can utter only a few words. But in reality, a great deal of individual differences exist: some patients are able to produce one word; others, up to 252 words. In some cases, patients with severe Alzheimer’s dementia retain the ability to communicate humor, irony and sarcasm, which are metalinguistic abilities developed late in childhood, and thus should have been lost sooner.

Moreover, pathologically speaking, multiple hypotheses exist on the ultimate cause of degeneration in the Alzheimer’s brain: the accumulation of the protein fragment beta-amyloid, the spread of the abnormal protein tau, the dysfunction of brain metabolism. In other words, various factors other than myelin play a role in the Alzheimer’s brain.

So retrogenesis is just that, one of several competing theories. While evidence supports its broad pattern, its details could do with correction. Yet you don’t need to follow retrogenesis religiously in order to glean some sense of peace and grace toward Alzheimer’s – just like you don’t need to be a Buddhist or Taoist to take comfort in the cyclicality of life. And the professionals, psychologists, therapists and social workers don’t need the retrogenesis model to be strictly true in all respects before they envision activities and care programs that accommodate patients’ changing abilities, while also protecting their sense of pride and wellbeing.

In the absence of an Alzheimer’s cure, we can at least change the way we think about and approach the disease. If we’re willing to give up our stubborn belief that life must be progressively linear, then when our parents or grandparents – or, indeed, when we – start to fade away, we might accept it as the beginning of a new cycle. 

In this second childhood, we don’t lose the ability to be content; we just lose the obsession to seek contentment. When our conscious mind drifts, we’re not so much losing something we’re entitled to as returning to the way we first came into this world. When the ones we care about deeply enter their second childhood, we’re not so much losing them as gaining another opportunity to love, praise and accept as they approach life’s end and its beginning. Fang sheng fang si. The beginning is the end; the end is the beginning. ~


~ In the 16 months since the SARS-CoV-2 virus burst into the global consciousness, we’ve learned much about this new health threat. People who contract the virus are infectious before they develop symptoms and are most infectious early in their illness. Getting the public to wear masks, even homemade ones, can reduce transmission. Vaccines can be developed, tested, and put into use within months. As they say, where there’s a will, there’s a way.

But many key questions about SARS-2 and the disease it causes, Covid-19, continue to bedevil scientists.

What accounts for the wide variety of human responses to this virus?

Some people who contract SARS-2 never know they’re infected. Others have flu-like symptoms — some mild, some more debilitating. Some recover completely, others go on to suffer from the puzzling condition that’s come to be known as long Covid. Some die.

What predisposes individuals to those various and varied outcomes? That’s the question that perplexes Angela Rasmussen, a virologist affiliated with the Georgetown Center for Global Health Science and Security.

An obvious answer might be how much virus individuals are exposed to when they get infected. In other words, lots of virus equals more severe disease. But Rasmussen said animal studies don’t show dose as being a factor here. Some preexisting health problems, like diabetes, seem to put people at higher risk of getting more severely ill, but even they don’t explain all the variability. Some people without comorbidities, as they’re called, become profoundly ill.

“To me the data (and all the virus research I’ve ever done) suggests the host response is a major determinant, if not THE major determinant, of disease severity,” Rasmussen wrote. She wants to know why some immune systems handle the virus with ease while others get swamped.

How much immunity is enough immunity?

Florian Krammer, a professor of vaccinology at the Icahn School of Medicine at Mount Sinai Hospital in New York, wants to know the exact measurements of antibodies needed to fend off asymptomatic Covid and symptomatic disease. “I guess you could say that I want to know which type of immune response indicates protection,” said Krammer. “It is likely indicated by a single antibody titer for each of the types of protection.”

Nahid Bhadelia, medical director of the special pathogens unit at Boston Medical Center, is also eager to quantify how much immunity is enough, so we can determine who is protected and who needs to have their immunity boosted. “We do this for measles now, for example — if there is an exposure, we check antibodies,” said Bhadelia.

Sarah Cobey, associate professor of viral ecology and evolution at the University of Chicago, thinks the issue might be more complicated. In her view, beyond specific antibody levels, physiological factors that vary from individual to individual are probably also part of the equation. “It would be nice to know exactly what we should measure and how to interpret it,” she said.

This could be among the factors that help explain why there’s so much variability in people’s susceptibility to the virus, and the severity of disease they experience if they contract it. 

“Knowing how well a partially immune population could transmit the virus at any time could dramatically improve forecasting and the potential for effective policy responses,” Cobey added.

So far, the vast majority of people who contracted Covid haven’t caught it again. If this coronavirus is like its cousins — four human coronaviruses that cause colds — reinfections will occur. How often will they happen? Will they be milder? What’s the impact of the variants — viruses that have acquired significant mutations — on reinfections? asked Kristian Andersen, an immunologist at the Scripps Research Institute.

Paul Bieniasz, head of the laboratory of retrovirology at Rockefeller University, has similar questions. “Are we headed for a situation akin to what occurs with the seasonal coronaviruses where the virus and reinfection is common but associated with only mild disease, with periodic reinfection providing boosts to immunity?” he asked. “Alternatively, will infection in those with waning immunity be associated with an unacceptable disease burden, necessitating a constant ongoing battle, with updated vaccines to keep viral prevalence and disease low?”

Put another way, how long will immunity last?

Soumya Swaminathan, the World Health Organization’s chief scientist, would like to know how long immunity lasts — immunity after infection and after vaccination. Knowing this would allow for better use of scarce vaccines, she suggested.

Natalie Dean, a biostatistician at the University of Florida, also listed this as her question, noting that the answers will tell us how achievable herd immunity is, and whether and when vaccine booster shots will be needed. “It could be that protection against infection is comparatively short-lived, but protection against severe disease is longer lasting,” Dean said. 

“It could be that vaccine-induced protection has a different durability than infection-induced protection.”

How are viral variants going to impact the battle against Covid-19?

Variants have changed the virus in ways that work against us. Some, like B.1.1.7, have made it substantially more transmissible. Another, B.1.351, appears to be able to at least partially evade the immune protections generated by previous infection or immunization. The variants are top of mind for a number of experts.

“My question is: What impact will these variants have on vaccine-related protection, effective treatment, and what the ultimate impact of this virus will be on our world for years to come,” said Michael Osterholm, director of the University of Minnesota’s Center for Infectious Disease Research and Policy.

John Moore, a professor of microbiology and immunology at Weill Cornell Medical College, is of the same mindset.

“I wish I knew the outcome of the ongoing battle between the vaccines and the variants, both here in the USA and globally,” he said. “Will the more troubling … antibody-resistant variants reduce vaccine efficacy to an extent that compromises national and international efforts to control the pandemic via the current generation of vaccines?”

What is long Covid, who is at risk of developing it, and can it be prevented?

“My top ‘I wish we knew’ about Covid is by far what drives long Covid,” said Akiko Iwasaki, a virologist and immunologist at Yale University. The condition has been given a formal name, post-acute sequelae of SARS-CoV-2 infection, or PASC.

Significant numbers of people who contract the disease report debilitating and varied symptoms weeks and months after recovering. Brain fog. Deep fatigue. Shortness of breath. Why this happens is a mystery.

Iwasaki noted that other chronic syndromes are triggered by viral infections. “I think we have a unique opportunity to understand once and for all how acute viral infection can lead to long-term symptoms so we can design better therapy against this debilitating disease and potentially other viral-induced chronic fatigue syndrome,” she said.

Krutika Kuppalli, an infectious diseases physician at the Medical University of South Carolina, wonders if factors that put people at risk of developing long Covid can be identified, so that the risk can be lessened. And Andersen, of Scripps Research Institute, would like to know the frequency at which long Covid occurs and how cases break down by age and severity of symptoms during the initial infection.

What’s the deal with Covid and kids?

Children are largely — but not entirely — spared Covid’s wrath. Younger children especially seem to have few or mild symptoms in most cases. A few develop a Kawasaki disease-like syndrome a few weeks after infection.

Caitlin Rivers, an infectious disease epidemiologist at the Johns Hopkins Center for Health Security, wants to know more about the disease in children — for instance, are kids who have asymptomatic infection likely to transmit the virus, and how frequently? “I think the disease dynamics in children are still not well understood,” she said, noting that while many studies have looked at symptomatic illness in children, few have used study designs that would find asymptomatic infections in this age group.

How big a role do asymptomatically infected people actually play in SARS-2 transmission?

The fact that some portion of infected people never develop symptoms but do transmit the virus really threw a monkey wrench into efforts to contain and control the virus. A further complication: Infected people can transmit a day or two before they know they are sick, when they are pre-symptomatic.

Saskia Popescu, an infectious disease specialist and assistant professor in George Mason University’s biodefense program, wishes we had a clearer picture of how infectious asymptomatic and pre-symptomatic people actually are. “We have few studies truly doing continued testing to identify asymptomatic infection right when it happens and then doing follow-up analysis into how infectious that might be,” she said. Popescu wonders how often the virus picked up from these people on swabs taken for polymerase chain reaction testing (you know it by now as PCR) is actually infectious virus, or whether there’s a period of shedding of non-infectious viral junk. “Is this person truly infectious and needs isolation and contact tracing or am I just getting viral fragments?” she wondered.

Can we figure out who might become a superspreader?

SARS-2 shares a bizarre feature with its older cousins, SARS-1 and MERS, a camel virus that occasionally triggers small outbreaks on the Arabian Peninsula. The majority of people who catch this bug don’t infect anyone else. Most of the transmission is done by a small number of people, potentially fewer than 20% of those who become infected. A lot of experts don’t like the term superspreader; some prefer to talk about superspreading events. Any way you slice it, though, a minority of people are responsible for a majority of cases.

Last summer Ben Cowling, a professor of infectious diseases epidemiology at the University of Hong Kong, co-wrote an opinion piece in the New York Times on the phenomenon, arguing that if authorities focused on preventing the types of activities that allow superspreading to occur — crowded events, sharing close spaces with others — more onerous measures wouldn’t be needed.

Now Cowling wonders if there is a way to figure out the types of people who are more likely to be superspreaders.

It’s the question that weighs on Vineet Menachery’s mind, too. “If we can decipher what makes a person a superspreader, I think it could change the dynamics of outbreaks and how we deal with them, now and in the future,” said Menachery, a coronavirus expert at the University of Texas Medical Branch.

There aren’t obvious clues to pursue. “We know the virus that comes from superspreaders is not different in terms of its genetic sequence. We know there is no link with disease severity. There is no evidence for age, sex, or co-morbidities in driving this phenomena,” Menachery said.

The differences between SARS-2 and its older cousin, SARS-1

The 2002-2003 SARS outbreak showed the world the disruptive power of coronaviruses. Ever since, scientists have worried about this large family of viruses, found in bats and other animals. The camel coronavirus, MERS, which was first spotted in 2012, underscored the threat: Coronaviruses are species jumpers.

But the virus that caused the original SARS outbreak, now called SARS-1, did not know some of the tricks SARS-2 has in its repertoire. Some coronavirus experts marvel at the differences between the two.

Stanley Perlman, a microbiologist at the University of Iowa, would like to know: Why is it that SARS-2 can infect and make copies of itself — a process called replication — in the cells of the upper airways, something that SARS-1 did not do? SARS-1 replicated in cells deep in the lungs, which is why people who contracted that virus were only infectious when they were really sick — limiting how many people they could infect. SARS-2 has a huge advantage, because it replicates in the upper airways. People infected with SARS-2 — even those whose symptoms are so mild they don’t know they are infected — have opportunities to transmit the virus every time they sneeze, cough, even speak.

Adding to the puzzle: Both viruses infect by attaching to ACE-2 receptors on human cells, yet they choose cells in different parts of the body.

Finding out why SARS-2 can replicate in the upper airways could help drug developers figure out how to prevent it from happening, Perlman said. It would also help scientists assess the pandemic risks posed by other coronaviruses that might jump from an animal species.

Susan Weiss, who like Perlman is a longtime coronavirus researcher, is also interested in learning why people infected with SARS-2 can transmit it to others while asymptomatic or pre-symptomatic. That didn’t happen with SARS-1 or MERS, she noted.

Last but not least: Where did SARS-2 come from?

Analysis of the genetic sequences of SARS-2 viruses retrieved from some of the earliest people known to have been infected suggests the virus started transmitting among people sometime in the autumn of 2019. The original source of the virus is almost certainly a bat, but how did a bat virus find its way into humans? Were pangolins or mink or other wild animals sold as exotic foods in China’s wet markets the spark for the worst pandemic in a century?

Inquiring minds want to know — and not just for curiosity’s sake. Knowing the virus’ route will help the world prepare for future outbreaks. Research shows there are lots of bat coronaviruses we haven’t yet met. ~



~ Although inequities in COVID-19 outcomes in the US have been documented for sex and race — men are dying at higher rates vs women, and Black individuals are dying at higher rates vs White individuals — the current study is the first to quantify how COVID-19 mortality rates vary by both race and sex.

“In our analysis of COVID-19 mortality patterns…we find that contrary to blanket claims of men’s higher mortality, Black women have over three times the COVID-19 mortality rate of both White and Asian men. Black women in the United States are dying from COVID-19 at a higher rate than every other group, male or female, except Black men,” wrote co-authors Tamara Rushovich, MPH, and Sarah S. Richardson, PhD, in an op-ed for The Boston Globe.

Results showed Black women had COVID-19 mortality rates that were nearly 4 times higher than that of White men and 3 times higher than that of Asian men, as well as higher rates than White and Asian women.

Researchers also found that Black men had significantly higher mortality rates than any other sex and racial group, including over 6 times higher than the rate among White men.

The disparity in mortality rates between Black women and White women was over 3 times the disparity between White men and White women.

The disparity between Black men and Black women was larger than the disparity between White men and White women.

from another source:

~ We know the most powerful risk factor for being hospitalized and dying is age. In fact, 80% of deaths occur among persons 65 years and older, whereas in some of the cohorts in the African American population the mean age is about 60, about 5 years younger than would be expected, and perhaps the strongest driver in that population is obesity.

Obesity causes mechanical obstruction of ventilation and difficulty being placed in the prone position, which we found was optimal for respiration. Also, obesity is related to an inflammatory state. Cytokines, which are released with coronavirus infection, are already higher at baseline in persons with obesity; we know there are high levels of C-reactive protein and interleukin-6.

So now you have a condition where the inflammatory state is already heightened. You add a virus that causes an increase in cytokines, and that upregulates the inflammatory state and makes it even more of a problem for persons who become ill, with increased hospitalization and, unfortunately, increased death.


~ There was a significant effect among the 4,159 of 4,488 patients who had their diagnosis of COVID-19 confirmed by a positive PCR test:
25% fewer hospitalizations
50% less need for mechanical ventilation
44% fewer deaths

"Our research shows the efficacy of colchicine treatment in preventing the 'cytokine storm' phenomenon and reducing the complications associated with COVID-19," principal investigator Jean-Claude Tardif, MD, of the Montreal Heart Institute, said in the press release. He predicted its use "could have a significant impact on public health and potentially prevent COVID-19 complications for millions of patients.” ~


What is obvious by now is the need to lower inflammation, preventing deaths caused by the “cytokine storm.” Even aspirin appears to be of use (aspirin is also an anti-clotting drug).

Colchicine, derived from the autumn crocus, was used against arthritis as early as 1500 B.C.


90% of cells in our body are bacteria - organically, our bodies are only 10% human.
We are less than 1% human in terms of the overall gene activity in our bodies.
Babies are born sterile, but 50% become carriers of C. diff in their first year of life - other bacteria hold it in check.

One strain of bacteria, marketed as the probiotic Mutaflor, is reportedly from a culture taken from the stomach of a German soldier in WW1 - he was apparently the only one in his barracks not to succumb to food poisoning.

Jeremy Nicholson at Imperial College London says that developed nations have probably changed their microbial communities more in the last 50 years than in the previous 10,000, thanks to antibiotics and other lifestyle changes.


ending on beauty:

Proust says memory is of two kinds.
There is the daily struggle to recall
where we put our reading glasses

and there is a deeper gust of longing
that comes up from the bottom
of the heart

At sudden times.
For surprising reasons.

~ Anne Carson, "Wildly Constant"