Saturday, April 20, 2024


Neptune Grotto, Sardinia

December, a shopping mall,
an empty trolley on the overpass,
its windows lit with moonlike glow —

Hecate’s last laugh,
reminding me how much I loved
taking a train in Warsaw at night —

how I would enter
the train’s rhythm,
the knocking of the wheels against

the shifting and dividing tracks;
blind backs of buildings,
unknown streets —

an underworld
passing across my face
reflected in the dark, drizzled glass.

If I had known what station
would be next —
how the doors of life

close quickly, and we watch the past
through time’s prison bars —
in the cramped Warsaw apartment,

at fifteen, when I made up my mind
to live in the West,
would I have danced as if

we never lose
anything we love —
just keep adding beauty to beauty.

The trolley flying overhead
like a luminous ghost
brought back an unreal city,

in the same instant of stone and breath
arriving and departing,
falling and rising from its ruins.

The same moon moved between
darkness and light-wounded clouds,
winter’s hungry Wolf Moon,

adding phantom beauty to beauty.
“That is all,” the master said.
“That is all but it is splendid.”

~ Oriana



~ From a certain perspective, Virginia Woolf did not write criticism at all. Her literary essays and journalism are truer specimens of belles lettres than of the kind of writing that surrounds Woolf’s Common Reader series on my university library’s shelves, books with titles like Virginia Woolf and Postmodernism, or Virginia Woolf and the Politics of Language, or Virginia Woolf and the Problem of the Subject, or Hellenism and Loss in the Works of Virginia Woolf. These are books written by and for specialists; their stock-in-trade is the relentless analysis of particulars, the meticulous interrelation of text and context–all self-consciously framed with theoretical abstractions. Associative leaps, bold assertions, insights born of intuition and experience rather than justified by detailed exegesis and authoritative citation: for today’s professional critics, these are as inadmissible as stolen evidence in a courtroom.

Against their painstakingly researched conclusions, Woolf’s commentaries seem—indeed, are—impressionistic, idiosyncratic, unsubstantiated. On what basis, with what justification, can she claim that Donne “excels most poets” in his “power of suddenly surprising and subjugating the reader”? What exactly does it mean to “subjugate the reader” anyway? Where are the quotations—where is the specific analysis of prosody and form, metaphor and imagery—to support that claim, or the claim that in “Extasie” “lines of pure poetry suddenly flow as if liquefied by a great heat”? Is there “something morbid, as if shrinking from human contact, in the nature worship of Wordsworth, still more in the microscopic devotion which Tennyson lavished upon the petals of roses and the buds of lime trees”? Is it “the soul that is the chief character in Russian fiction”? In all Russian fiction?

Or, more accurately, why is it so difficult to distinguish between the writers and their writing in her essays? “Gissing is one of the extremely rare novelists who believes in the power of the mind, who makes his people think”; “Nor was Hardy any exception to the rule that the mind which is most capable of receiving impressions is very often the least capable of drawing conclusions”; “At the end we are steeped through and through with the genius, the vehemence, the indignation of Charlotte Brontë.”

So much for the death of the author or the autonomy of the text—and so much for any idea that critical conclusions must be both earned and rendered convincing by references to comprehensive evidence. Woolf never argues or proves: she probes, questions, illuminates, and asserts.

Yet next to The Common Reader, those rows of academic volumes are as lifeless as the novels of hapless Arnold Bennett, who, Woolf declares in “Modern Fiction,” can make a book “so well constructed and solid in its craftsmanship that it is difficult for the most exacting of critics to see through what chink or crevice decay can creep in. There is not so much as a draught between the frames of the windows, or a crack in the boards. And yet — if life should refuse to live there?”

Woolf believes that some “essential thing” has moved out of Bennett’s house of fiction, and for all the perfectly sincere arguments I could make about the different purposes, standards, and values of academic criticism, when I read it I am often overcome with the same unhappy feeling that there too something essential has “moved off, or on, and refuses to be contained within such ill-fitting vestments” as those shelves of severe erudition.

Woolf’s criticism, on the other hand (and let’s, after all, concede her the term), is full of life and vitality. It is not criticism meant for cataloguing according to Library of Congress rules; it is not criticism as scholarship. It offers us no nuggets of pure truth to wrap up between the pages of our notebooks. Though definite, it is never definitive: its pronouncements are really provocations, at least to me—reading it, I simmer with questions and challenges and counter-examples, along with admiration for the lambent play of Woolf’s mind across her subjects.

From the Oresteia to Ulysses, from the Paston letters to Gissing’s New Grub Street: Woolf seems able to talk with ease and wit about anything. Her criticism stimulates us to participate in the conversation with her, though not quite as equals—for there’s nothing common at all about the cultivation or polish of her writing.

In her essay on “The Pastons and Chaucer,” for instance, we get this:

'Such is the power of conviction, a rare gift, a gift shared in our day by Joseph Conrad in his earlier novels, and a gift of supreme importance, for upon it the whole weight of the building depends. Once believe in Chaucer’s young men and women and we have no need of preaching or protest.'

Conrad is an unexpected visitor in an essay on the fifteenth century, and also a distracting one. Is she right about that? I find myself asking. Why does she specify the “earlier novels” in particular? She doesn’t say, or give an example—and why should she? Conrad is just a passing thought here, after all. But in her essay explicitly on Conrad, I find not much more that is solid, not much that is supported with what in my daily work I call “textual evidence”—her brush moves too fast, the strokes are too thick with color to limn in the details. “There are no masts in drawing rooms” in Conrad’s later novels, she observes:

'The typhoon does not test the worth of politicians and businessmen. Seeking and not finding such supports, the world of Conrad’s later period has about it an involuntary obscurity, an inconclusiveness, almost a disillusionment which baffles and fatigues. We lay hold in the dusk only of the old nobilities and sonorities: fidelity, compassion, honor, service — beautiful always, but now a little wearily reiterated, as if times had changed.'

There’s nothing concrete here, no spelling out of the abstractions she has summoned. The essay includes hardly any quotations from Conrad himself. I do not point this out as a criticism of her criticism, but to acknowledge how it works: her comments are convincing, not as conclusions about Conrad, but as evocations of Woolf’s experience of Conrad. In speaking so freely about it, she prompts us to think about Conrad for ourselves, and to test our experience against hers.

Here’s another surprise: a cameo appearance by Jane Austen in a discussion of Greek tragedy, which is a literature, Woolf says, of extremes, “cries of despair, joy, hate”:

'But these cries give angle and outline to the play. It is thus, with a thousand degrees of difference, that in English literature Jane Austen shapes a novel. There comes a moment—“I will dance with you,” says Emma—which rises higher than the rest, which, though not eloquent in itself, or violent, or made striking by beauty of language, has the whole weight of the book behind it. In Jane Austen, too, we have the same sense, though the ligatures are much less tight, that her figures are bound, and restricted to a few definite movements. She, too, in her modest, everyday prose, chose the dangerous art where one slip means death.'

What a surprise that comparison is, and what conviction it carries. And as you read it, aren’t you, too, suddenly aware of its fitness? Though in Austen—as Woolf knows perfectly well—the movements are comic, rather than tragic. “He is a gentleman; I am a gentleman’s daughter; so far we are equal,” says Elizabeth, quietly, without violence, a moment that indeed has the whole weight of the book behind it, and against which Lady Catherine’s cry of despair—“Are the shades of Pemberley to be thus polluted?”—echoes with shrill futility.

When my own experience of a writer is close to Woolf’s, I read her criticism with special appreciation. I love her essay on George Eliot, for instance, written for the TLS on the occasion of George Eliot’s centenary. Her comments on the novels are, characteristically, at once impressionistic and profound; her epigrammatic utterances, as always, tempt and provoke.

What does she mean, after all, by her endlessly quoted line about Middlemarch, “the magnificent book which with all its imperfections is one of the few English novels written for grown-up people”? What imperfections? Why for grown-up people? (A. S. Byatt speculates in an interview that she meant Middlemarch “deals with adult sexuality,” not, I am certain, the first application of that remark likely to be made by my undergraduate students.) Many critics since 1919 have written at length about George Eliot’s problematic heroines, but no one has surpassed Woolf’s rigorous tenderness:

Yet, dismiss the heroines without sympathy, confine George Eliot to the agricultural world of her ‘remotest past’, and you not only diminish her greatness but lose her true flavor. That greatness is here we can have no doubt. The width of the prospect, the large strong outlines of the principal features, the ruddy light of her early books, the searching power and reflective richness of the later tempt us to linger and expatiate beyond our limits. But it is upon the heroines that we would cast a final glance. . . . In learning they seek their goal; in the ordinary tasks of womanhood; in the wider service of their kind. They do not find what they seek, and we cannot wonder.

The ancient consciousness of woman, charged with suffering and sensibility, and for so many ages dumb, seems in them to have brimmed and overflowed and uttered a demand for something — they scarcely know what — for something that is perhaps incompatible with the facts of human existence. George Eliot had far too strong an intelligence to tamper with those facts, and too broad a humor to mitigate the truth because it was a stern one. Save for the supreme courage of their endeavor, the struggle ends, for her heroines, in tragedy, or in a compromise that is even more melancholy.

What moves me the most is her generosity to a writer (indeed, a woman) so unlike herself: “as we recollect all that she dared and achieved . . . we must lay upon her grave whatever we have it in our power to bestow of laurel and rose.”

On the other hand, I’d like to challenge Woolf about Charlotte Brontë. Though she often acknowledges the emotional power of Jane Eyre, she doesn’t give Brontë credit for much more than its raw expression. “Thomas Hardy is more akin to Charlotte Brontë in the power of his personality and the narrowness of his vision,” she remarks, but Brontë has “no trace” of the “speculative curiosity” she finds in Hardy.

Maybe (though don’t you wish we could pursue this comparison further?), but is it true that all of Brontë’s “force … goes into the assertion ‘I love’, ‘I hate’, ‘I suffer’”? Doesn’t Woolf underestimate, here, the political as well as the religious dimension of Jane Eyre? Isn’t she casting as intensely personal, as exclusively emotional, an experience which is also crucially intellectual?

For Woolf, Jane’s feelings—which she reads as Brontë’s feelings—limit the novel’s artistry. “That is an awkward break,” she writes in A Room of One’s Own, about the “low, slow ha! ha!” of Grace Poole’s laugh that Jane hears as she paces the third story of Thornfield Hall:

It is upsetting to come upon Grace Poole all of a sudden. . . . anger was tampering with the integrity of Charlotte Brontë the novelist. . . . Her imagination swerved from indignation and we feel it swerve.

Are there really no madwomen in Woolf’s attic? A Room of One’s Own is, of course, itself a fabulously angry book, though elegantly, urbanely, so. Is there no room in Woolf’s aesthetic theory for the “goblin ha! ha!” of “demoniac laughter”?

And for all the rapid associative movement of these essays, there can be no doubt that there is an aesthetic theory building. In her mind, it is clear, are always questions about the novel especially: what can it, what does it, what should it do? What is its special province or gift? Briefly, often tangentially, but incessantly, she explores how the novel works, what its formal means and ends are. Inquiring into the role of the chorus in Greek drama, she can’t help but refract her answer through fiction:

Always in imaginative literature, where characters speak for themselves and the author has no part, the need of that voice is making itself felt. For though Shakespeare (unless we consider that his fools and madmen supply the part) dispensed with the chorus, novelists are always devising some substitute—Thackeray, speaking in his own person, Fielding coming out and addressing the world before his curtain rises.

Thinking about the morality of Chaucer, she finds it in “the way men and women behave to each other. . . . It is the morality of ordinary intercourse, the morality of the novel.” (It is particularly the morality of Anthony Trollope, I want to say to her, just to enjoy her reply.) The extremes of “emotion concentrated, heightened” in Elizabethan drama lead her to Anna Karenina and War and Peace: “in the play, we recognize the general; here, in the novel, the particular.”

Her sharpest attention is on the development of psychological fiction. In “Modern Fiction,” turning over for inspection the failures of Wells and Bennett and Galsworthy, she concludes that “for the moderns ‘that,’ the point of interest, lies very likely in the dark places of psychology.” But she is not arrogant enough—or she is just too uncommonly well read—to assert that point of interest as a distinctly modern innovation. She has long seen it coming, through the experimentation of her predecessors. In Austen’s Persuasion, for instance, published over a century earlier, Woolf finds evidence of a new interest on Austen’s part, in interiority. Had Austen lived longer, written more, Woolf speculates,

She would have devised a method, clear and composed as ever, but deeper and more suggestive, for conveying not only what people say, but what they leave unsaid; not only what they are, but what life is. She would have stood farther away from her characters, and seen them more as a group, less as individuals. . . . She would have been the forerunner of Henry James and of Proust.

What—I can imagine her asking herself, as she writes about other novelists—am I doing, what else can I do, with the novel? Surely figuring this out was always, for her, the underlying project of her criticism: everything she read helped her bring the novel as she imagined it into being. Not just Woolf’s essays but also her letters and diaries show her sorting and filtering and shaping her reading into a pattern that leads inexorably to her. “My feeling, as a novelist,” she writes to a friend in 1934,

is that when you make a character speak directly you’re in a different state of mind from that in which you describe him indirectly: more ‘possessed,’ less self-conscious, more random, and rather excited by the sense of his character and your audience. I think the great Victorians, Scott (no–he wasn’t a Vn.) but Dickens, Trollope, to some extent Hardy all had this sense of an audience and created their characters mainly through dialogue. Then I think the novelist became aware of something that can’t be said by the character himself … Middlemarch I should say is the transition novel: Mr Brooke done directly by dialogue: Dorothea indirectly. Hence its great interest–the first modern novel. Henry James of course receded further and further from the spoken word, and finally I think only used dialogue when he wanted a very high light.

This is all rather incoherent, and also, as is the case with all theories, too definite. At the same time I do feel in the great Victorian characters, Gamp, Micawber, Becky Sharp, Edie Ochiltree, an abandonment, richness, surprise, as well as a redundancy, tediousness, and superficiality which makes them different from the post Middlemarch characters. Perhaps we must now put our toes to the ground again and get back to the spoken word, only from a different angle; to gain richness, and surprise.

“As is the case with all theories, too definite”: there’s the clue, I think, to the emphasis in her criticism on richness and surprise rather than method or rigor. She conceives of literature, she writes in “How It Strikes a Contemporary,” as a “vast building” to which all writers contribute. She appeals there to contemporary critics to “scan the horizon; see the past in relation to the future; and so prepare the way for masterpieces to come.” Her own criticism is cumulative and generous in just that way, an ongoing process of exploration complementary to her own literary experimentation.

That’s why, for all the formidable range and confidence of the essays, they are not, in the end, intimidating but inviting, in ways that those academic volumes of criticism about Woolf can almost never be. It’s not, of course, altogether their fault: they have different aims and audiences, and must follow the forms that satisfy these. A primary requirement of criticism in this mode is that its authors suppress their own personalities and present their arguments well-wadded with the apparatus of objectivity, from self-consciously meta-critical introductions to extensive footnotes. Though I know there is always a passionate reader somewhere in that text, it is rare indeed for her to show her face, or for us to hear her voice.

Woolf, by contrast, confronts both her reading and her readers with total immediacy. Free—and fearless enough—to say just what she thinks, she reminds us that reading is, after all (above all) no more than the encounter of one mind with another. She knew that critics “are only able to help us if we come to them laden with questions and suggestions won honestly in the course of our own reading.” But such is the quality of her mind that she achieves what most readers cannot: “those profound general statements which are caught up by the mind when hot with the friction of reading as if they were of the soul of the book itself.”

Monk's House: Virginia Woolf's writing retreat

The goal of politics is to make us children. The more heinous the system the more this is true. The Soviet system worked best when its adults—its men, in particular—were welcomed to stay at the emotional level of not-particularly-advanced teenagers. Often at a dinner table, a male Homo sovieticus will say something uncouth, hurtful, disgusting, because this is his teenager’s right and prerogative, this is what the system has raised him to be, and his wife will say, Da tishe!—Be quiet!—and then look around the table, embarrassed. And the man will laugh bitterly to himself and say, Nu ladno, it’s nothing, and wave away the venom he has left on the table.

~ Gary Shteyngart, Little Failure

Oriana: Note that keeping the faithful in a childlike state of mind is also the goal of religion. You are supposed to believe with blind obedience. You are not supposed to talk back.

“1979. Coming to America after a childhood spent in the Soviet Union is equivalent to stumbling off a monochromatic cliff and landing in a pool of pure Technicolor.” ~ Gary Shteyngart, Little Failure

“I am scared of the photo studio. I am scared of the telephone. Scared of anything outside our apartment. Scared of the people in their big fur hats. Scared of the snow. Scared of the cold. Scared of the heat. Scared of the ceiling fan at which I would point one tragic finger and start weeping. Scared of any height higher than my sickbed. Scared of Uncle Electric Current. "Why was I so scared of everything?" I ask my mother nearly forty years later.

"Because you were born a Jewish person," she says.” ~ Gary Shteyngart, Little Failure

“Power is not a means, Vinston; it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish the dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power.” ~ Gary Shteyngart, Little Failure


April 14, 1943 marks the death of Stalin’s elder son, Yakov Dzhugashvili.

Shortly after the defeat of Nazi Germany in May 1945, Allied military intelligence agents unearthed a container of top secret files in the garden of a senior German diplomat. One set of microfilmed documents contained a report into the shooting of a Soviet prisoner of war by an SS guard at Sachsenhausen concentration camp on 14 April 1943. The victim was a 36-year-old Red Army artillery lieutenant named Yakov Dzhugashvili.

There was nothing remarkable in the death of a Soviet soldier in Nazi captivity. In the first 12 months of Adolf Hitler’s invasion of the USSR, an estimated 2.8 million Red Army prisoners had succumbed to starvation, exposure and disease while in German hands. However, as the SS report made clear, Yakov Dzhugashvili was no ordinary prisoner. He was the eldest son of Joseph Vissarionovich Dzhugashvili, better known to the world as Joseph Stalin.

Yakov, Svetlana, Stalin

Aware of the significance of the death of such a high-status prisoner in his custody, the SS chief Heinrich Himmler forwarded the report to the foreign minister, Joachim von Ribbentrop, on 22 April 1943, claiming that Dzhugashvili had died while trying to escape. Ribbentrop prudently decided not to release the news to the Nazi press and the report was duly filed away. Had it not been removed from Berlin in the final month of war in Europe along with other German state documents, Himmler’s report might never have survived. Once in Allied hands, it was despatched to London for proper examination.

Not much was known in the British Foreign Office about Stalin’s eldest child. Dzhugashvili was born in March 1907 to Stalin’s first wife, Ekaterina née Svanidze. Ekaterina succumbed to tuberculosis when he was only a few months old, leaving him in the care of an aunt in Georgia. Father and son would not meet again until almost four years after the October Revolution in 1917, by which time Stalin was a rising star in the Bolshevik regime.

When he joined his father in Moscow, the 14-year-old Yakov could barely speak a word of Russian. Stalin came to regard his son as a weakling unworthy of his affection or respect. On being told that Yakov had tried to kill himself with a pistol over an unhappy love affair, Stalin scornfully exclaimed, ‘He can’t even shoot straight!’

The one bright spot in Yakov’s life was his father’s second wife, Nadezhda Alliluyeva. Yakov regarded Nadezhda, who was only six years older than him, as more like a devoted sister than a stepmother. In November 1932, she killed herself. Some whispered that the cause was her despair over the effect of Stalin’s ruthless policies on the Soviet people; others secretly opined that it was the revulsion she felt for her husband.

With Nadezhda’s death, Yakov lost an important ally in his father’s inner circle as well as a dear friend. He managed to escape Stalin’s bullying several years later by joining the Red Army. The move had fateful consequences. Shortly after the start of Nazi Germany’s invasion of the Soviet Union on 22 June 1941 his artillery unit was annihilated and he was taken prisoner. Having established Yakov’s identity, his captors gleefully announced the news to the world. However, he adamantly resisted all their attempts to get him to denounce his father’s regime.

In early 1943, the Germans offered to swap Yakov for Field Marshal Friedrich Paulus, whom the Red Army had captured at Stalingrad. Stalin allegedly turned the offer down, saying: ‘I will not trade a Marshal for a Lieutenant.’ By this time, Dzhugashvili was being held at Sachsenhausen concentration camp outside Berlin in a special compound for ‘prominent persons’ and persistent Allied escapees. Here he shared a hut with another Soviet officer, Vassily Kokodin, a nephew of Stalin’s foreign minister, Vyacheslav Molotov, and with four British prisoners of war.

Compared with the conditions endured by the rest of Sachsenhausen’s inmates, their quarters were comfortable. However, the SS report into Yakov’s death mentioned frequent disagreements between the two Soviet prisoners and their reluctant British companions that made him seriously depressed. The final straw seemed to have been the accusation from the Britons that Dzhugashvili and Kokodin had deliberately fouled the communal latrine. 

Just before nightfall on 14 April 1943 Yakov strode out of his hut towards the surrounding electrified barbed wire fence. According to the SS report, a sentry ordered him to halt. When he refused, the sentry, obeying standing orders, opened fire, killing him instantly. Yakov’s actions were those of a man bent on suicide.

As officials in Whitehall waited for a full translation of the SS report, they discussed with US diplomats the idea of informing Stalin how his son had died. The conference between Stalin, Winston Churchill and Harry Truman at Potsdam in July 1945 seemed to be ideal for this purpose. However, once the complete document had been studied, the Foreign Office quickly abandoned the plan, with one diplomat concluding that he and his colleagues ‘do not think the evidence would give [Stalin] any comfort and it would be naturally distasteful to us to draw attention to the Anglo-Russian quarrels which preceded the death of his son’.

The file on Dzhugashvili was closed and its contents would remain a secret until it was declassified three decades later. Did Stalin ever discover how his son died? In January 1951 Russian agents in Germany began to investigate Yakov’s disappearance, with a reward of one million roubles for a definite lead.

Joseph Stalin died in March 1953, aged 74, leaving behind little evidence to suggest that he lamented the death of his eldest child any more than he had mourned those who had perished while he ruled over the Soviet Union.

Daniel Baker:
How bitterly ironic. After all, this kind of absolute equality was what the Soviet Union was supposed to be all about. The son of the Party General Secretary enjoyed no special privileges; he wore the same Red Army uniform as the son of a farmer or a factory worker, was paid the same, faced the same dangers, and if captured was no more likely to be traded back to his own side than any other lieutenant. But of course, it was all a lie. Stalin enjoyed all kinds of special privileges: he had a private dacha at Kuntsevo, was paid about 15 times the minimum wage, received medical care at the Kremlin Hospital which was closed to the public, etc. Yakov Iosifovich Dzhugashvili died in a concentration camp not because Stalin was treated equally, but because he didn’t give a shit about his son.

Mehdi Epsilon:
If the Great Purge had not occurred, the Red Army would have been stronger than the Nazi army; it had half the Earth's resources and a huge human reservoir behind it. If only Trotsky had been in power.

Stalin also fomented the dispute between the Communists and the Social Democrats that allowed the rise of the Nazis, and his agreement with the Nazis spared Germany a war on two fronts from 1939 to 1941.

Stalin was a greater friend to Hitler than Mussolini was, and a reason for Hitler's early successes from 1939 to 1942.

David Sigalov:
Stalin had announced “There are no Soviet prisoners of war — only traitors.” Accordingly, if a Soviet soldier was taken prisoner, Stalin demonstrated his empathy by having that man's family arrested.


And those Soviet POWs who made it back were sent to the Gulag. Stalin was afraid that seeing the West even in a very limited way, as a POW, was already dangerous, a threat to the Communist regime that insisted the Soviet Union was paradise.



Russia is a very rich country in terms of natural resources yet its people have a standard of living far below the West.

Russia is the largest country in the world in terms of land mass, but has little influence.

Russia projects itself to be powerful from a military standpoint, yet cannot defeat small countries like Ukraine.

Russia is backward in its outlook but has no desire to change that. ~ Brent Cooper, Quora

Huib Minderhut:
Poverty in Russia is so deep that even redistributing the wealth of the oligarchs would hardly make a dent. Unfortunately. 100 years of theft, greed and mismanagement has turned a rich country into an economic wasteland.

Alex Sadovsky:
Russia is a culturally medieval Asian country with the self-image of a Victorian European empire. A split identity disorder. Perhaps this lies at the root of its inability to industrialize and effectively use its vast resources.


The economic model was inefficient and the decisions made in the Soviet five year plans were extremely poor and contributed to people's suffering. Many of their famines were caused by the Soviet leaders' own poor decisions.

How did they survive for so long? Sheer authoritarian rule. Fear. Oppression. Intimidation. Torture. Imprisonment. Exile. And execution.

Combine all of those against a people and it is difficult for them to resist. ~ Brent Cooper, Quora

man dead of famine, Kharkiv

Baruch Cohen:
Add some luck — in the 1930s Depression in the West, capitalists were more than happy to build plants in the USSR for half of what they would otherwise have charged. In the 40s, Hitler made Britain and the USA befriend the Soviet Union. In the 60s, huge deposits of oil (Samotlor!) and later gas (Urengoy!), plus Western Europe's near-sighted greed, helped revive the economy for some time.

Mat Geeser:
In the earlier, simpler stages of industrialization a command economy can lead to impressive growth; later more complex industry progressively suffers.

Dimitri Zolochev:
The USSR survived for 70 years because the people were content with low standards of living, scarcity, no freedoms, and lies by the failing regime. Ultimately, the communist system collapsed when everything stagnated and the rest of the world was prospering through free markets and private entrepreneurs and enterprises. The Kremlin could no longer hide the truth and the train wreck finally was revealed.

Haelvor Saever:
It worked well enough as long as they kept the fear element up to par. Stalin had great economic results to show for it, unlike Gorbachev, who tried to soften things up. I guess the same would apply to any other system of slavery.


~ Mass heating failures have left hundreds of thousands of Russians to experience first-hand what they had wished for Europeans and Ukrainians last year.

As you may have heard, Russian men do not rush in droves to sign military contracts and die for Putin’s fantasies in Ukraine, despite the best efforts of television propagandists trained in the art of manipulation and dark psychology.

Yevgeny Prigozhin, Sous Chef of the Wagner mercenary army, followed by Russia’s Defense Ministry, has been recruiting convicts from penitentiaries (penal colonies stocked with criminals serving long sentences for serious crimes), offering a pardon at the end of a six-month contract.

This created a peculiar double standard. It’s only the inmates in the Russian military who get to leave the battlefront (often to commit new crimes, as has been well documented), while regular troops do not get rotated out, in what the wives of the mobilized call the “indefinite draft.”

As pundits note, Putin’s crooked regime admires criminals and holds them in higher regard than innocent civilians who never committed any crimes, the latter occupying the lowest rungs of the social hierarchy.

Over 20% of the population of the penal colonies, around 100,000 prisoners, have received early release. It is becoming progressively harder for the scouts of the ministry of war to find a fresh crop of cannon fodder; they have combed the same penal colonies multiple times with ever diminishing returns.

And now I bring you to a real life example of what Russians mean by “There would be no happiness, but misfortune helped.”

Taking a cue from the collapse of the central heating infrastructure, directors of penal colonies began turning off the heating to force prisoners to go to war.

Apparently, as a prisoner you no longer have the right to simply serve out the time allotted by the judicial system. The only available way to pay your debt to society is to sign the papers and go to war.

“They simply turned off their heating to sub-zero temperatures. Thus, the conditions in prisons become unbearable and men have no choice but to go to war in Ukraine,” a human rights activist told Bild. ~ Misha Firer, Quora

Michael Huff:
Trenches are cold, too. Doesn’t seem like much of a bargain.

Charles Lyell:
Insane. Release and train convicted criminals to use destructive weapons, then turn them loose on the general public after they prove they’re dangerous survivors.


In response to the question “Did communism fail because of Stalin?” Dima replied: No, it’s the opposite. Stalin was its brightest star. After his death, Mao tried to keep the lights on for two more decades but failed in the end.

The history of many countries shows that Stalinism, with its national derivatives, is the only viable form of Communism.

Disciplined and sustained implementation of Stalinist principles could have preserved it until our days and maybe even made it the dominant force in the Eastern hemisphere. The main principles of Stalinism were gradually abandoned in the USSR after Stalin’s death. In hindsight, the Communist cause was doomed the day he died.

The following things turned out to be unavoidable requirements for the survival of Communism:

A semi-permanent state of emergency in preparation for revolutionary wars. In our books, revolutionary wars were practiced in the form of “repelling Imperialist aggression/provocations” (WW2, the Winter War) or “reaching a helping hand to the supporters of peace and progress” (the Polish and Afghan wars).

Forced rotation of the bureaucracy: regular purges at all levels, based on the principles of meritocracy.

A relentless concentration of economic resources in “heavy industry,” the Soviet designation of the military-industrial complex.

Shielding the population from all types of bourgeois influence. Upholding an impenetrable perimeter of national borders. Constant guarding of the media space.

Eradication of all manifestations of consumerism, nationalism, and religiosity.

Forced labor in the form of “labor armies” for modernization of infrastructure, colonization of provinces, and expedited implementation of technological projects. In the USSR, the space program and atomic bomb were administered by the secret police in sharáshkas (“prison-style laboratories and research centers”).

Below, a photo showing one of the last sparks of the Stalinist fighting spirit, circulated by irreverent Russian Internet users with the caption, “We should live our lives as if Christ is coming this afternoon.”

You see a group of Soviet functionaries during a war training session in the early 1970s. These were the years when the USSR probably had its last chance to win the Cold War. However, Stalin’s wonder boys, who ruled over us after his death, turned out not to be up to the task.

I don’t believe Stalinism could have survived for that long. Studying the USSR and living under a dictatorship have taught me that running a large country as if it were an open-air prison will never last, no matter how good its intentions are.

Barry Howard Westfall:
Bless his heart! Nikita Khrushchev was a man with a kindly bone or two, who was not constantly pounding his shoe and telling Americans that the Soviets would bury them, or sending people to the Lubyanka Prison to be tortured to death, as Beria (Stalin’s J. Edgar Hoover) did. Moreover, K brought his wife along on his first or second visit to the West, where he argued with Nixon about which economic system was going to be the best in the future (the famous “Kitchen Debate”), before Nixon ran against Kennedy for the first time in 1960.

I remember my mother saying that she thought Mrs. K. looked like a kindly mother whom she would love to take to church with her. (Even at thirteen, I knew Mrs. K would never go to church in America. But it was my mom’s thought that counted, I guess). When she liked someone’s behavior, the nicest thing she could think of saying was that she would love to take the lady with her to the Methodist Church.

Jerry Childs:
Kill off anybody who might be a threat and you can keep things going fine.

The point this guy is making is that the only way communism works is by iron fist. Stalin did that and when he died, it’s like Khrushchev didn’t have the balls to keep up the killings and the Gulags. So communism died because the Soviets weren’t as fearful of revolution anymore.

Alan Drake:
Demographic decline would have doomed the USSR after Stalin if the purges and mass deaths (the Holodomor, etc.) had continued: not enough surviving adults to keep society going. Stalin exterminated about 9 million Soviet citizens, not counting the wars he started.


It’s hard to say when the Soviet system began to fall apart — Stalin’s death was not enough. I think Khrushchev’s denunciation of Stalin and his “personality cult” was the first earthquake. Even next door, in Poland, we were startled to hear the term “de-Stalinization.” In spite of it, the dictatorial stranglehold seemed strong enough that we never thought the Soviet Union could collapse and be dissolved. At my parents’ New Year’s Eve party in the late fifties, a Hungarian scientist said, “Nothing is going to change for a thousand years.”

And in a certain way he was correct: Russia continues to be a corrupt predatory empire where no lives matter.

Nevertheless, Khrushchev needs to be given credit for his courage to denounce Stalin, be it only after Stalin’s death. 

“The speech was leaked to the West by the Israeli intelligence agency Shin Bet, which received it from the Polish-Jewish journalist Wiktor Grajewski.

“Wiktor Grajewski pulled off one of the greatest intelligence coups in postwar history. He passed to the CIA the main sections of the Soviet leader Nikita Khrushchev’s historic speech, delivered on February 25, 1956, in which he denounced Stalin’s crimes for the first time. The speech marked the first significant turning point in the Cold War — and, for a time, destabilized the Soviet Union’s East European empire.

Grajewski got hold of the speech in classic spy thriller style, through a girlfriend, Łucja Baranowska. He was ostensibly a communist journalist in Warsaw, and she was secretary to Edward Ochab, the First Secretary of the Polish United Workers’ Party. She found a copy of the speech on Ochab’s desk and smuggled it to Grajewski for a few hours.

As it happened, Grajewski had made a recent trip to Israel to visit his sick father, and resolved to emigrate there. After he read the speech, he decided to take it to the Israeli embassy, and gave it to Yaakov Barmor, who had helped Grajewski undertake his trip. Barmor, a Shin Bet representative, took photographs of the document and sent them to Israel.

In the late fifties, Grajewski settled in Israel. He is buried in Jerusalem.

“The speech was shocking in its day. There are reports that some of those present suffered heart attacks and others later took their own lives due to shock at the revelations of Stalin's use of terror. The ensuing confusion among many Soviet citizens, raised on panegyrics and permanent praise of the "genius" of Stalin, was especially apparent in Georgia, Stalin's homeland, where days of protests and rioting ended with a Soviet army crackdown on 9 March 1956. In the West, the speech politically devastated organized communists; the Communist Party USA alone lost more than 30,000 members within weeks of its publication.

The Khrushchev report's "Secret Speech" name came because it was delivered at an unpublicized closed session of party delegates, with guests and members of the press excluded. The text of the Khrushchev report was widely discussed in party cells in early March, often with the participation of non-party members. The official Russian text was not openly published until 1989, during the glasnost campaign of the Soviet leader Mikhail Gorbachev." (Wikipedia)

“Not one person in a hundred knows how to be silent and listen, no, nor even to conceive what such a thing means. Yet only then you can detect, beyond the fatuous clamor, the silence of which the universe is made.” ~ Samuel Beckett

Have you noticed how often writers speak about silence? They can’t seem to shut up about it.


Vittorio Arrigoni died on April 15th 2011.

He was the perfect example of a Western anti-Israel activist who went to "help the Palestinians" but ended up killed by Jihadists.

Arrigoni moved to the Gaza Strip to act against what he defined as "ethnic cleansing" done by Israel.

Francisco Muňoz:
He defended the creation of a single democratic and secular state in the “historical Palestine” region, where Palestinians and Jews could live in peace. It was precisely for his fight against Islamic Fundamentalism that he paid with his life. A dissident Salafist movement kidnapped him and demanded that Hamas liberate its leader, Hesham al-Sa'eedni, and other militants imprisoned in Palestine.

The Hamas police looked for him, and he was finally located in an empty apartment in the north of Gaza, where he was found dead by hanging or strangulation. His murder was condemned by both Fatah and Hamas. But he had criticized Hamas too, for its “very limited human rights,” and Fatah for being a collaborator of the State of Israel. He condemned Salafists for being Islamic extremists and Israel for being an “apartheid state.” He acted as if he were still living in Europe, with free speech and freedom, but he was in a hornet’s nest, defending “Free Palestine from the river to the sea,” where he finally died.

Ephraim Aelony:
The perps were Al Qaeda. The people who arrested the perps were Hamas.


“The Nazis occupied our village when I was only 12 years old. At first they didn’t realize we were Jewish. We looked like our neighbors. But then someone reported us. So our home was registered as a Jewish residence. We knew our days were numbered. Time was a ticking bomb. Then late one night a non-Jewish friend of ours showed up at our house.

“Listen to me,” he told us with great urgency. “Dig a hole under your fence and crawl out two at a time. Someone will meet you at the other end and lead you to safety. Tomorrow all the Jews in town will be executed”.

His name was Kazi Bitdayev. And he secretly took all 6 of us to the neighboring village of Zheguta. He hid us there for 8 months, each week coming back for us and relocating us to a new basement or attic. He protected us. He fed us. He was our angel. After the war we searched for him for decades, desperate to thank him for saving our lives. But it was no use. We couldn’t find him.

Last year we finally located Kazi's grandchildren. And got to thank them. A moment I waited for my whole life. Hashem has truly blessed me.

- In loving memory of Zinayida Segal (pictured holding a photo of her rescuer Kazi)


Happy Khanuka is written in Russian, so I assume this took place in Russia or Ukraine.


He was an unforgiving person. Phenomenally unforgiving! All those who displeased him were doomed to die, sooner or later. Stalin was also more frightening to be around than Adolf Hitler. He was a bully, and a terrible husband. One evening in November 1932, at a dinner party, an intoxicated Stalin began to argue with his second wife, Nadezhda Alliluyeva, and to flirt with the younger wife of a colleague in front of her. As the company toasted the extermination of the enemies of the State, Stalin noticed that his wife had failed to toast with them. Annoyed, he tossed a piece of bread or a cigarette butt at her. She abruptly left the dinner table and returned to her quarters. That night she put a bullet into her head. Stalin refused to attend her funeral.

Stalin was also a terrible father. During the Second World War, one of Stalin's sons, Yakov, was captured by the German army. When the news reached Stalin, he simply replied, “I have no son.” Later he stated how much he admired his son's decision to kill himself. Stalin cared only about his one daughter, Svetlana. In time, however, she grew to fear and despise him, especially when she learned that her mother had committed suicide because of him. She later defected to the United States.

Stalin was far more paranoid than Adolf Hitler; the more power he gained, the more paranoid he became. Stalin never gave a thought to ending the lives of millions, but he in turn was terrified of death. A few years before he died, his paranoia reached new heights of absurdity: he fabricated the Doctors' Plot, which claimed that Jewish doctors were trying to poison the Soviet leadership. A new round of purges was about to begin. Fortunately, Stalin died of a cerebral hemorrhage in 1953, before the next big purge could start.

The details of Stalin's death are far more horrifying than Hitler's. He was asleep in his dacha, and the guards had been given strict orders not to wake him or they would face a firing squad. Stalin didn't wake up, and the guards, as well as his colleagues, were too afraid to check on him. When they finally opened the door, they found the great Stalin paralyzed on the floor, lying in a pool of his own urine. Members of Stalin's inner circle came to see him. Beria, who had never liked Stalin to begin with, began to mock him and shout obscenities at him. Then Stalin's eyes opened, staring directly at him. Beria immediately collapsed to the floor and kissed his hand, begging for forgiveness. When Stalin went limp again, Beria resumed insulting his master.

Perhaps the most chilling details we have of Stalin's last moments come from his daughter Svetlana. As he lay dying in front of them, she noted the following in her diary: he suddenly opened his eyes and stared at everyone around the room with an angry, horrified glance. He then lifted his arm, as if to point at something in the room; it looked as if he were trying to curse them all. Finally his life force wrenched itself free of his body, and Stalin was gone. ~ McFish, Quora


Columbine marked the beginning of a new era of high-profile mass shootings in the US. Was the attack the inevitable outcome of lax controls and a culture of gun glorification?

Perhaps the most important thing to know about the most infamous school shooting in US history is that it was never meant to be a school shooting. That it is remembered primarily in this way – that the word ‘Columbine’ has become a shorthand for angry young men brandishing deadly firearms – demonstrates how the myths of that day shape historical memory while obscuring historical reality.

What only became clear in the months and years after the Columbine High School massacre was that Eric Harris, 18, and Dylan Klebold, 17, had planned to execute the deadliest terrorist attack in US history in their suburban Denver high school on 19 April 1999. The plan’s centerpiece was a series of homemade bombs set to explode at lunchtime, when the school’s cafeteria would be at its most crowded. Smaller explosives scattered elsewhere throughout the building would ignite an apocalyptic conflagration. Their goal was to exceed the death toll from the 19 April 1995 Oklahoma City bombing, and originally Harris and Klebold planned to carry out what they referred to as ‘Judgment Day’ on its anniversary. But a last-minute delay in getting supplies meant the bombing would have to wait until the following day.

Instead, each of the 13 people they murdered died in a much more quotidian way, by gunfire – 13 of the nearly 11,000 firearms homicides reported in the US in 1999. The guns were part of a planned second act, in which Harris and Klebold would take up positions outside the school, shooting fleeing students. But the bombs failed to detonate so instead the killers stormed the school, searching for victims before killing themselves. The attack would be remembered as a school shooting, marking the beginning of a new era of high-profile mass killings in American schools and elsewhere, all of them enabled by the easy availability of deadly firearms. Columbine – a failed school bombing turned mass shooting – begat Virginia Tech, Sandy Hook, Parkland and Uvalde.

Easy access

School shootings in 1990s America were rare but not unheard of. In 1998, a year before Columbine, two boys aged 13 and 11 brought several firearms with them to school outside of Jonesboro, Arkansas, killing four students and a teacher. Two months later a 15-year-old boy walked into his school cafeteria in Springfield, Oregon, with a semi-automatic rifle and murdered two students; he had killed his parents at home the previous day. President Bill Clinton tasked Attorney General Janet Reno with investigating these shootings and, more generally, what appeared to be an uptick in violence in schools.

The commonality between these incidents and Columbine was teenage boys and guns. Since the end of the Second World War, American parents and politicians alike had fretted about the easy access children had to firearms in a country that venerated the gun and offered access to it like no other. Popular culture, from the television westerns of the 1950s such as The Rifleman and Have Gun – Will Travel to the gangster rap of the 1990s, glorified guns and connected them to aspirational manhood across social and economic divides.

Amid a spike in crime rates in the late 1980s and early 1990s, politicians and commentators speculated endlessly about what access to guns and pop culture did to young men; most infamous was the ‘superpredator’ concept, a since-debunked theory that a generation of remorseless young killers, mostly Black and Latino, were coming of age in America’s cities. Despite a patchwork of state and federal laws meant to keep guns out of the wrong hands in a country of more than 200 million firearms, by the 1990s it had become almost impossible to keep them away from a determined teenager.

Making a murderer

Teenage boys with guns felt characteristically American but everything else about Columbine seemed novel. Harris and Klebold’s plans dwarfed anything that came before; the targets appeared to be not so much their fellow students as the viewing public, forced to suffer through the horror of seeing this seemingly safe suburban space desecrated. They left behind extensive evidence of their intentions – journals and a series of home video recordings that became known as the ‘Basement Tapes’, filmed in the weeks leading up to the attack.

Much of the story of that day, the killers’ lives leading up to it, and the survivors’ struggle thereafter is recounted in Columbine (2009) by journalist Dave Cullen. He covered the story in 1999 and spent years painstakingly reconstructing events and biographies, and attempting to correct so many of the rumors and myths that had become increasingly embedded in public memory.

In particular, Cullen wrestled with Columbine’s most challenging question: why? There were many ready explanations in the immediate aftermath of the attack: everything from guns, goth culture, violent video games and bullying to the influence of shock rocker Marilyn Manson, Nazism, religious hatred and negligent parenting. Mass media recycled scripts dating back to the 1950s about teenage boys and guns. Americans catalogued their own social pathologies and attempted to apply each of them in turn.

But Cullen eventually gathered historical evidence that almost nobody on 20 April 1999 knew existed: the killers’ own explanations for why they did what they did. Their journals and recordings laid it all out explicitly, but police officials stonewalled efforts to release these and other documents to the public. Much of this evidence was kept secret for months and even years after April 1999, allowing rumor to become intractable myth.

Relying on newly released evidence, Cullen wrote a 2004 article for Slate, ‘The Depressive and the Psychopath’, describing Klebold and Harris respectively. The journals and tapes revealed Klebold to be suicidally depressive and easily manipulated by a trusted friend to commit a horrific crime. Harris, on the other hand, appeared to so many of the investigators who looked at the case to be a textbook psychopath, someone incapable of empathy for other human beings and harboring a god-like superiority complex. Neither was bullied; if anything, they dabbled in bullying themselves. Harris imagined himself an Übermensch – and Klebold followed along.

Cullen, in other words, asked Americans to look internally, to the mental state of the two killers, rather than externally, to the society that produced them. Without a more convincing contextual explanation – depositions of the killers’ parents are sealed until 2027 – it’s likely to remain the consensus.

‘Gun show loophole’

But what of the guns? They were the common denominator in mass murders at schools before Columbine, and the killers who have followed, many of whom would cite Harris and Klebold as inspiration, have also turned to them as effective tools for the task.

‘Bombs are hard’, Cullen noted, but guns are easy – easy to get, easy to load, easy to fire. ‘I want to torch and level everything’, Harris wrote in one journal entry, ‘but bombs of that size are hard to make.’ He and Klebold used four guns in the killings: two shotguns with barrels they sawed down to well below the legal limit, a carbine rifle and a TEC-9 semi-automatic pistol.

The TEC-9 was one of the guns banned by name in the 1994 Federal Assault Weapons Ban, but one of the law’s loopholes allowed for the possession and resale of firearms that had been manufactured before the ban. When the killers’ acquaintance Mark Manes sold them the gun for $500 in the weeks before the shooting, he broke the law not because he sold a banned gun but because he sold it to minors. Manes had purchased it legally for $491 at the same gun show where Harris and Klebold would acquire their other weapons, with the assistance of an 18-year-old friend named Robyn Anderson.

After it was discovered how the killers acquired their firearms, the ‘gun show loophole’ entered the mainstream language of gun politics where it still exists today, one of a handful of perennial issues raised by a range of ‘gun safety’ organizations. The 1993 Brady Handgun Violence Prevention Act mandated background checks on all retail gun transactions through licensed dealers. But the law did not require a background check on sales between private individuals.

Common estimates for such sales range as high as 40 per cent of all US gun sales annually, though some scholars have suggested this is an overestimation. Many of these sales occur outside of gun shows, but they became associated with shows because the 1986 Reagan-era Firearm Owners Protection Act loosened laws about where retailers could sell guns. The Act also allowed licensed retailers to make private sales at gun shows without background checks under certain conditions.

Quotidian terror

Would universal background checks have stopped the killings at Columbine? Or those that followed at Virginia Tech, Sandy Hook, Parkland or Uvalde? Would red flag laws – laws that allow courts to temporarily seize someone’s firearms if they present a threat to others or themselves – have prevented the attacks from taking place?

It is hard to imagine in the 21st century, in a nation of more than 400 million guns, denying committed killers the most convenient tools for the task. The Columbine killers’ bombs did not explode – in that they proved, fortunately, failures. But in making a more commonplace connection between America’s near limitless gun stockpile and its most vulnerable social spaces, they successfully ushered in an era of quotidian terror. ~


A new study has revealed a troubling development in the state of Maryland: while murder rates fluctuated between 2005 and 2017—first trending downward, then increasing for a few years—the homicides recorded during that period have grown steadily more violent the entire time.

According to “Increasing Injury Intensity among 6,500 Violent Deaths in the State of Maryland,” which is forthcoming in the Journal of the American College of Surgeons, researchers examined the intensity of deadly incidents over a 13-year period. Intensity was measured by the number of gunshots, stab wounds, and fractures exhibited by victims. Across all three causes of death, while the rate of homicides varied during the period, the percentage of high-violence crimes consistently increased.

Conducting the study was easier in Maryland than it might have been in other states. Maryland is unusual in that its Chief Medical Examiner is required to report on all murders, suicides, and unusual deaths, which means that researchers had access to a broad-based data set. State policy meant that researchers had access to information about victims who died under medical care and those who died at the scene of the crime. The state’s data set was geographically comprehensive, too, including cases from Baltimore’s urban center, the suburban areas around Baltimore and Washington, DC, and the rural areas in the eastern and western parts of the state.

Other similar studies, by contrast, have only focused on data from individual medical centers. Those limited data sets only included cases from nearby areas and excluded cases in which victims died before receiving medical care.

What the study team found in their examination of the data set is that the number of shootings in which victims received at least ten gunshot wounds nearly doubled between 2005 and 2017, increasing from 5.7% to 10%. Individual acts of gun violence, in other words, are becoming more violent.

The increase in violence is particularly stunning when considered alongside the difficulty of hitting a person with a single bullet. According to statistics collected by the Dallas Police Department, only 54% of people who had been shot at were actually struck by a bullet. Two-thirds of all shots fired, in fact, missed their targets altogether… and these particular shots were fired by law enforcement officers who get significant handgun training.

Indeed, half of the officers who used their weapons, according to the study, missed with all of the shots they fired. In other words, hitting a victim with just one bullet is hard, even for highly-trained officers, which means that inflicting ten or more gunshot wounds is statistically very unlikely. Somehow, however, extreme gun violence is still becoming more prevalent. (Possible explanations include the increasing availability of weapons that fire multiple shots, the presence of multiple shooters, and shots being fired at a much closer range.)

It doesn’t actually take ten or more bullets to murder a victim, however. Ten gunshots is overkill.

“The median number of shots you need to kill someone is one,” said Dr. David Efron, co-author of the study and the Medical Director of the R. Adams Cowley Shock Trauma Center and the Thomas M. Scalea, MD Distinguished Professor of Trauma at the University of Maryland Medical School. “It’s highly lethal.”

Incidents in which a victim is shot multiple times, despite the fact that only one shot is typically necessary to cause death, seem to suggest other factors at work. “If you pull a trigger once,” said Dr. Efron, “that’s a pretty intensive event, at least to most of us. But ten or more, you’re over a hump somewhere. And it turns out that the proportion of people who died with ten or more shots went up ridiculously.” In 2017 alone, Dr. Efron noted, 60 people state-wide were shot with more than ten bullets.

Dr. Efron is quick to note that the phenomenon he and his fellow researchers studied isn’t restricted to urban areas.

“It’s very important to make sure that studies like this aren’t dismissed as a city problem,” he said. “Trauma is a disease of proximity, and cities happen to just have a lot more proximity.”

Undoubtedly, part of the explanation for the increase in incidents of extreme gun violence involves the nature of the guns being used. Automatic weapons with sizable magazines allow for more shots to be taken more quickly. However, the study in question noted an increasing intensity (though not frequency) of violence of all types. Between 2005 and 2017, the percentage of stabbing victims with at least five wounds rose from 48% to 60%. Among victims who died from blunt trauma, those with at least five bone fractures increased from 24% to 38%.

Taken as a whole, the number of victims who suffered extreme violence of all types increased steadily during the study period, which suggests that addressing the issue will require more than just sensible new gun laws.

It’s important to note again that the study revealed an increase in the intensity of violence, not frequency. During the study period, the frequency of homicides in Maryland decreased for a long period, then increased for a few years (without quite reaching earlier levels) after the murder of Freddie Gray, which resulted in new policing strategies being adopted by the Baltimore Police Department. While the number of homicides decreased and increased, in other words, the intensity of violence only grew. The state saw fewer murders, but those murders were far more violent.

What accounts for that transformation? The answer is surely multifaceted, with sociological, psychological, and economic factors all playing roles.

“The field of violence prevention,” the researchers noted, “has long studied factors associated with homicide including access to weapons, social isolation, structural socio-economic barriers, and the narcotic trade.”

Dr. Efron was more blunt: “Is it an erosion of the social fabric, less community, less religion, less stable families?” he asked. “I don’t know.”

The research team does note that other studies suggest those who perpetrate violent crimes “were more likely to have experienced interpersonal trauma” themselves, including merely witnessing violence, especially when young. Both the victims and those present at the scene acquire an increased likelihood of later acting violently. Acts of violence thus set off a chain reaction, multiplying potential triggers for further violence and raising concerns about escalation.

“It doesn’t matter what kind of violence you witness when you’re young,” said Dr. Efron. “It could be a terrorist explosion. It could be domestic violence. It could be something you see on the street. You are at much greater risk of perpetrating a violent event as an adult.”

Unfortunately, outreach programs can locate and help victims of violence—and even perpetrators—but providing aid for witnesses is much more challenging.

Solutions for the rise in overkill, however, fall outside the purview of the study, which also doesn’t attempt to identify any clear causes for the phenomenon. The researchers’ paper, however, does reveal, for the very first time, the extent of the problem, at least in one state, which might be the first step in addressing a troubling problem. ~


Supply-side economics is a widely held belief that increasing the supply of goods and services powers economic growth. A key tenet of this theory is creating a better climate for businesses—the suppliers. Supply-siders reckon that when companies and the rich are wealthier, everybody prospers, so their policies typically center on tax cuts, deregulation, and lower interest rates.

Evidence suggests that this theory may not always work. In this article, we'll take a look at the background of supply-side economics along with some of its faults.

Supply-side economics, which is based on the belief that everyone prospers when companies and the rich have more money at their disposal, is an economic model used by many countries.

Data shows that tax cuts and other policies to fatten corporate profits don’t always result in more jobs, investment, productivity, or economic growth.

There is also no concrete evidence supporting the opinion that tax cuts pay for themselves.

The basic idea behind supply-side economics is that companies reinvest their profits, leading to more jobs, greater productivity, higher tax revenue, and so forth. That is largely how U.S. President Ronald Reagan, and countless politicians since, sold supply-side economics to the public and paved the way for its acceptance.

At the core of supply-side economics is the belief that reducing tax rates on individuals and businesses, particularly on high-income earners and corporations, will incentivize them to work harder, invest more, and innovate, leading to increased economic output. Lower taxes are seen as a way to encourage entrepreneurs and businesses to expand, create jobs, and ultimately benefit all members of society through a rising tide that lifts all boats.

Advocates argue that reducing government regulations and intervention in the economy can also spur growth by allowing the market to operate more freely. Critics of supply-side economics argue that it disproportionately benefits the wealthy and exacerbates income inequality. Those opposed contend that it can lead to budget deficits and reduced government revenue.

Supply-side economics was first presented as an economic theory by Arthur Laffer in the 1970s. Laffer argued that tax cuts stimulate demand, resulting in more job opportunities and wealth circulating in the economy.

It didn’t take long for Laffer’s theory to enter the mainstream. In the 1980s, President Reagan and British Prime Minister Margaret Thatcher ran with the idea that the money high-earners saved by paying less tax would be pumped back into the economy to everyone’s benefit, and adopted supply-side economics in their respective countries.

Cutting taxes for the rich is a policy that’s been championed by several leading politicians since, including former U.S. Presidents George W. Bush and Donald Trump. More recently, it was also a feature of Liz Truss’ disastrously short stint as U.K. prime minister.
Truss lasted just six weeks in office after her bold call to tax Britain’s wealthiest less during an unprecedented cost-of-living crisis backfired. The move spooked investors, destroyed the value of the local currency, and was abandoned, much to Truss’ humiliation, within less than a month.


Few topics divide economists quite like the supply-side one. For every expert who swears that this economic approach works, another vehemently disputes it. Like other theories, supply-side economics isn’t flawless and does have some holes. Here are four key reasons why the theory has been called into question.

Tax Cuts Don’t Create More Jobs

If companies are taxed less, they’ll use their excess savings to employ more staff, supply-siders argue. The problem is that there isn’t much evidence to back that up. From 1982 to 1989, when the United States was governed by Reagan and taxes were cut substantially, the labor force didn’t grow any more than it had previously. A similar thing happened on George W. Bush’s watch. In 2001 and 2003, Congress passed two generous tax cuts for the wealthy, and the slowest job growth in half a century followed.

There are a number of reasons why this may happen. When individuals or businesses receive tax cuts, they may not necessarily use all the extra money to create jobs. The effects of tax cuts on job creation may not be immediate, and the impact can vary by industry. In any case, supply-side economics may have flaws in terms of tangible, short-term job creation.

Supply-Side Policies Weakened Investment

Data supporting the popular opinion that lower taxes on the rich spur more investment is also hard to come by. In fact, the Center for American Progress, citing figures from the U.S. Bureau of Economic Analysis, said that average annual growth in nonresidential fixed investment was significantly higher in the non-supply-side 1990s than in the Reagan and Bush decades. Ironically, in the 1990s, the tax rate for higher earners was raised.

Tax Cuts Don’t Spur Stronger Economic Growth

All of the above serves as a reminder that supply-side economics doesn’t always achieve what its advocates say it does and is by no means a guarantee of economic growth. Supply-siders often point to the 1980s as evidence that these policies engineer economic turnarounds. However, as economist Nouriel Roubini points out, the pickup in growth from 1983 to 1989 came after a severe recession and was nothing out of the ordinary.

More evidence that traditional supply-side policies don’t lift economies was discovered in Kansas. In 2012 and 2013, lawmakers there cut the top rate of the state’s income tax by almost 30% and the tax rate on certain business profits to zero in a desperate bid to energize the local economy. That experiment lasted about five years and didn’t go well, with Kansas’ economy underperforming most neighboring states and the rest of the nation during that period.

Important: The economic benefits of deregulation also aren’t as clear-cut as supply-side advocates let on. While it is true that some regulations can be unnecessary and onerous, the majority are essential standards that underpin the economy and protect consumers.

Tax Cuts Don’t Pay for Themselves

A key selling point of supply-side economics is that tax cuts actually increase overall tax revenue by boosting employment and the incomes of the population and, therefore, don’t leave the country in more debt. This view has gained political currency but isn’t backed by much concrete evidence.

In fact, data shows that budget deficits exploded during Reagan’s era of tax cuts. According to the New York University Stern School of Business, the public debt-to-gross domestic product (GDP) ratio rose to 50.6% in 1992 from 26.1% in 1979.

The National Bureau of Economic Research (NBER) similarly shot down talk of tax cuts paying for themselves. Based on its estimates, for each dollar of income tax cuts, only 17 cents will be recovered from greater spending.
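The NBER figure lends itself to simple arithmetic. The sketch below uses a hypothetical tax-cut size (the $100 billion is made up for illustration; only the 17-cent recovery rate comes from the estimate above) to show what that rate implies for a cut’s net cost:

```python
# Back-of-the-envelope sketch of the NBER estimate: for each dollar of
# income tax cuts, only about 17 cents is recovered through the extra
# activity the cut stimulates. The tax-cut size here is hypothetical.
tax_cut = 100_000_000_000     # a hypothetical $100B income tax cut
recovery_rate = 0.17          # NBER estimate: 17 cents per dollar

recovered = tax_cut * recovery_rate
net_revenue_loss = tax_cut - recovered

print(f"Recovered:        ${recovered:,.0f}")         # $17,000,000,000
print(f"Net revenue loss: ${net_revenue_loss:,.0f}")  # $83,000,000,000
```

In other words, under this estimate, 83% of the cut’s headline cost shows up as lost revenue rather than being offset by growth.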

What Do Economists Think of Supply-Side Economics?

Opinions are mixed. Some economists strongly believe that putting more money into the pockets of businesses is the best way to ensure economic growth. Others strongly dispute this theory, arguing that wealth doesn’t trickle down and that the only outcome is the rich getting richer.

What Are the Disadvantages of Supply-Side Policies?

The most obvious disadvantages are the time it can take for these policies to work, the fact that they can be very costly to implement, and the backlash that they receive from left-wing thinkers. Telling the population that helping the rich will benefit everyone is a hard sell, particularly as there is no concrete evidence to support this.

Are There Any Examples of Supply-Side Policies Working?

While there are plenty of holes in supply-side economics, it isn’t completely flawed, although its success can be hard to measure. It takes a long time to reap the benefits of these policies, and any good that comes from them may be attributed to something else. A lot also depends on where you stand politically. Some people credit the likes of Ronald Reagan and Margaret Thatcher with salvaging the economy in the 1980s. Others believe their supply-side policies ruined everything and spurred inequality.

The Bottom Line

Supply-side economics, which posits that everyone prospers when companies have more money at their disposal, has reshaped how most of the world’s major economies operate. The thing is, not all economists agree with the trickle-down theory. Abundant evidence has been presented to support the view that supply-side economics doesn’t deliver as advertised. According to those findings, this economic model does not create more jobs and lift the economy or result in similar overall tax revenues. ~


The demand-side theory or Keynesian theory was developed in the 1930s by John Maynard Keynes. Built on the idea that economic growth is stimulated through demand, the theory seeks to empower buyers by:

Increasing government spending through public programs, like higher unemployment benefits or subsidies, or establishing infrastructure projects to create jobs and promote consumer demand and boost consumer spending.

Increasing the money supply within an economy, with central banks buying government securities to put more money into circulation. More money in circulation leads to lower interest rates, an incentive for consumers and businesses to buy goods or invest in their businesses.

Overall, multiple studies support both supply and demand-side fiscal policies. However, studies have shown that due to multiple economic variables, environments, and factors, it can be hard to pinpoint effects with a high level of confidence and to determine the exact outcome of any one theory or set of policies.

Criticism of Supply-Side Theory

Critics often argue that supply-side tax cuts do not lead to increased economic growth, neglect the demand side of the economy, and may lead to higher deficits and currency weakness. Some also argue that supply-side economics is merely trickle-down economics, benefiting the rich and doing little for the poor and middle class.

Evidence suggests that the 2001 and 2003 tax cuts for the wealthy did not improve economic growth or pay for themselves; instead, they ballooned deficits and debt and contributed to a rise in income inequality.

Market commentators have also argued that supply-side policies are responsible for the growing trend among corporations to engage in stock buybacks. Buybacks occur when companies place the cash they gain from lower taxes back into the pockets of their shareholders rather than investing in new plants, equipment, innovative ventures, or employees. In 2018, US corporations spent more than $1.1 trillion repurchasing their own stock.

How Does Supply-Side Economics Compare with Keynesian Economics?

Some economists argue that supply-side theory has more in common with Keynesian economics than with classical economics because both theories focus on how aggregate demand affects economic outcomes. However, while Keynesian economics relies heavily on government intervention (such as fiscal policy or monetary policy), the supply-side theory emphasizes market forces instead. In addition, while classical economics focuses primarily on what individuals can produce (and therefore what they can sell), the supply-side theory also takes into account what businesses can produce (and therefore what they can sell).

Ultimately, then, the supply-side theory may be said to have a greater emphasis on market dynamics than Keynesian theory does.


There is yet another economic term we’ve heard over the years: “voodoo economics”:
“Voodoo economics is a derogatory phrase first used by George H.W. Bush to describe Ronald Reagan's economic policies.

The expectation that decreased taxes on the wealthy and businesses would result in increased spending on goods, services, and salaries failed to materialize. Moreover, President Reagan’s relaxed regulation contributed to the savings and loan crisis.

By the early 1990s, the U.S. economy had fallen back into recession.”

Demand-side economics could be seen as the opposite of supply-side economics.

Demand-side economics maintains that increasing the demand for goods and services is the key to economic growth.

A demand-side economic policy might call for large-scale government spending on infrastructure projects in order to increase related production, purchases, and hiring.

Demand-side economics may also be called Keynesian economics for John Maynard Keynes, who developed the theory and advocated its implementation as a way out of the Great Depression of the 1930s.

Was George H.W. Bush a Voodoo Economist?

As vice president, George H.W. Bush wisely made no reference to voodoo economics. He supported President Ronald Reagan’s program of tax cuts for corporations and wealthy individuals.

As president, Bush proved a more moderate proponent of voodoo economics himself. In 1990, he raised the maximum individual income tax rate from 28% to 31%, two years after promising to do no such thing. That contributed to his failure to be reelected to a second term.


Here’s how demand-side economics differs from supply-side:

Demand-side economists argue that instead of focusing on producers, as supply-side economists want to, the focus should be on the people who buy goods and services, who are far more numerous.

Demand-side economists like Keynes argue that when demand weakens—as it does during a recession—the government has to step in to stimulate growth.

Governments can do this by spending money to create jobs, which will give people more money to spend.

This will create deficits in the short term, Keynesians acknowledge, but as the economy grows and tax revenues increase, the deficits will shrink and government spending can be reduced accordingly.

Broadly speaking, there are two prongs to demand-side economic policy: expansionary monetary policy and liberal fiscal policy.

In terms of monetary policy, demand-side economics holds that the interest rate largely determines the liquidity preference, i.e., how incentivized people are to spend or save money. During times of economic slowness, demand-side theory favors expanding the money supply, which drives down interest rates. This is thought to encourage borrowing and investment, the idea being that lower rates make it more appealing for consumers and businesses to buy goods or invest in their businesses—valuable activities that increase demand or create jobs.

When it comes to fiscal policy, demand-side economics favors liberal fiscal policies, especially during economic downturns. These might take the form of tax cuts for consumers, like the Earned Income Tax Credit, or EITC, which was an important part of the Obama administration’s efforts to fight the Great Recession.

Another typical demand-side fiscal policy is to promote government spending on public works or infrastructure projects. The key idea here is that during a recession it’s more important for the government to stimulate economic growth than it is for the government to take in revenue. Infrastructure projects are popular options because they tend to pay for themselves in the long term.

Before Keynes, the field of economics was dominated by classical economics, based on the works of Adam Smith. Classical economics emphasizes free markets and discourages government intervention, believing that the “invisible hand” of the market is the best way to efficiently allocate goods and resources in a society.

The dominance of classical economic theory was severely challenged during the Great Depression when a collapse in demand failed to result in increased savings or lower interest rates that might stimulate investment spending and stabilize demand.

During this time, the U.S. under the Hoover Administration pursued a policy of balanced budgets, leading to massive tax increases and the Smoot-Hawley tariffs of the 1930s. These policies, especially the latter, failed to stimulate demand for domestic industries and provoked retaliatory tariffs from other nations, which led to a further decrease in international trade and likely worsened the crisis.

Writing in his General Theory of 1936, Keynes argued persuasively that, contrary to classical economics, markets have no self-stabilizing mechanism. According to his account, producers make investment decisions based on expected future demand. If demand appears weak (as it does during a recession), businesses are less likely to produce more goods and services, which in turn results in fewer people with jobs or income that might stimulate economic activity. In cases like this, Keynes argued, governments could stimulate demand by increasing spending.

Keynes’ policies found advocates in Franklin Roosevelt’s administration, which pursued many of the monetary and fiscal policies advocated by Keynes in the form of the New Deal. This included government spending through programs like the Works Progress Administration (WPA), the Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), and the Civil Works Administration (CWA).

Though the exact relationship between Roosevelt’s New Deal policies and the end of the Great Depression is a hotly debated topic among economists, Keynes’ views became economic orthodoxy in the United States and much of the western world until the “stagflation” of the 1970s, when they largely fell out of fashion in favor of supply-side theories.

Although most often associated with FDR and the New Deal, Keynesian economics and its descendants have experienced something of a revival since the 2008 financial crisis.

During the Great Recession, the Obama administration pursued a number of demand-side policies to stimulate the economy. These included aggressively lowering interest rates, cutting taxes for the middle class, and pushing a $787 billion stimulus package. The administration also intervened in the financial sector, passing the largest overhaul of that sector since the 1930s, in stark contrast to the more laissez-faire attitudes of the 1990s and early 2000s.

As during the 1930s, these demand-side policies were fiercely contested at the time, and remain controversial even today. The slowness of the recovery prompted criticism from many economists, especially those on the left, who argue that even more aggressive stimulus was needed, while economists on the right criticized the Obama administration for increasing the deficit.

The Importance of Both Supply and Demand

I’m not advocating throwing supply out the window; both perspectives are important. 

On the supply side, we must understand our business strategies and how to maximize efficiencies. But we must do that through the lens of demand. They are interdependent systems that can conflict with one another. The key is understanding the conflicts and making explicit trade-offs so that supply and demand are both happy—the result is acceptable to both.

At its core, supply asks, “How do I make it better, cheaper, faster?” and then looks at things like customer insights.

Demand asks, “How do we make our lives better?”

When we understand the two systems and how they come together, we’re able to build products and services with known trade-offs that are acceptable on both sides and that allow us to make better decisions. Neither side can convince the other to make trade-offs that are not effective for them.

If we understand that supply and demand feed each other, we can start to see the whole instead of seeing just pieces at a time. That’s where we make mistakes—missed opportunities—from both sides. You build products to sell, customers buy a product to make their lives better.

 John Maynard Keynes


In the early 20th century, as the world industrialized, economists struggled with how to curtail depressions, recessions, and unemployment.

British economist John Maynard Keynes came up with new ideas that would shape economics forever.

Keynes viewed savings as a negative aspect of the economy because if people save, they do not spend, and if they do not spend, the economy does not grow, since consumption is a large part of gross domestic product (GDP).

Furthermore, Keynes believed that if people spend less, businesses produce less, so they hire fewer workers, increasing unemployment.

Keynes believed that governments should take central roles in an economy, intervening during times of economic slowdowns or rapid growth.

Keynes had many critics, primarily monetarist economists led by Milton Friedman, who argued, among other things, that government spending crowds out private spending.

Basics of Keynesian Economics

John Maynard Keynes (1883-1946) was a British economist educated at the University of Cambridge. He was fascinated by mathematics and history, but eventually took an interest in economics at the prompting of one of his professors, the famed economist Alfred Marshall (1842-1924).

After leaving Cambridge, he held a variety of government positions, focusing on the application of economics to real-world problems. Keynes rose in importance during World War I and served as an advisor at conferences leading to the Treaty of Versailles, but it would be his 1936 book, "The General Theory of Employment, Interest, and Money," that would lay the foundations for his legacy: Keynesian economics.

Keynes' coursework at Cambridge focused on classical economics, whose founders included Adam Smith. Classical economics rested on a laissez-faire approach to market corrections—in some ways, a relatively primitive approach to the field. Immediately prior to classical economics, much of the world was still emerging from a feudal economic system, and industrialization had yet to fully take hold. Keynes's book essentially created the field of modern macroeconomics by looking at the role played by aggregate demand.

The Keynesian theory attributes the emergence of an economic depression to several factors, chief among them the circular relationship between spending and earning (aggregate demand).

Aggregate demand is the total demand for goods and services in an economy and is often measured as the gross domestic product (GDP) of an economy at a given point in time.

Keynes on Savings

Keynes viewed savings as having an adverse effect on the economy, especially if the savings rate is high or excessive. Consumption is a major component of aggregate demand, so if individuals put money in the bank rather than buying goods or services, GDP will fall.
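Keynes’s point about savings can be restated with the standard expenditure identity, GDP = C + I + G + NX. The snippet below is an illustrative sketch with made-up numbers, not real data:

```python
# Minimal sketch of the expenditure identity GDP = C + I + G + NX,
# illustrating Keynes's point: if households save instead of spend,
# consumption (C) falls and, all else held equal, so does measured GDP.
# All figures below are hypothetical.
def gdp(consumption, investment, government, net_exports):
    return consumption + investment + government + net_exports

baseline = gdp(consumption=700, investment=150, government=120, net_exports=30)
# Households divert 50 units from spending into bank savings:
thriftier = gdp(consumption=650, investment=150, government=120, net_exports=30)

print(baseline, thriftier)  # 1000 950
```

The sketch deliberately holds investment fixed; classical economists would object that higher savings should eventually finance more investment, which is exactly the mechanism Keynes argued breaks down in a slump.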

In addition, a decline in consumption leads businesses to produce less and to require fewer workers, which increases unemployment. Businesses are also less willing to invest in new factories.

Keynes on Unemployment

One of the groundbreaking aspects of the Keynesian theory was its treatment of employment. Classical economics rested on the premise that flexible wages and prices allow markets to settle at full employment. Keynes, by contrast, theorized that wages and prices are not so flexible and that full employment is not necessarily attainable or optimal.

This means that the economy seeks to find a balance between the wages workers demand and the wages businesses can supply. If the unemployment rate falls, fewer workers are available to businesses looking to expand, which means that workers can demand higher wages. A point exists at which a business will stop hiring.

Wages can be expressed in both real and nominal terms. Real wages take into account the effect of inflation, while nominal wages do not. To Keynes, businesses would have a hard time forcing workers to cut their nominal wage rates, and it was only after other wages fell across the economy, or the price of goods fell (deflation) that workers would be willing to accept lower wages.

In order to increase employment levels, the real, inflation-adjusted wage rate would have to fall. This, however, could result in a deepening depression, worsening consumer sentiment, and a decrease in aggregate demand. Additionally, Keynes theorized that wages and prices responded slowly (i.e. were 'sticky' or inelastic) to changes in supply and demand. One possible solution was direct government intervention.

The Role of Government

One of the primary players in the economy is the central government.

It can influence the direction of the economy through its control of the money supply, whether by altering interest rates or by buying and selling government-issued bonds. In Keynesian economics, the government takes an interventionist approach; it does not wait for market forces to improve GDP and employment.

This results in the use of deficit spending. As one of the components of the aggregate demand function mentioned earlier, government spending can create demand for goods and services if individuals are less willing to consume and businesses are less willing to build more factories.

Government spending can use up extra production capacity. Keynes also theorized that the overall effect of government spending would be magnified if businesses employed more people and if the employees spent money through consumption.
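The “magnified” effect Keynes described is the spending multiplier: if each recipient re-spends a fraction MPC (the marginal propensity to consume) of what they receive, an initial outlay of G raises total spending by G / (1 - MPC). A small sketch with hypothetical numbers:

```python
# Hedged sketch of the Keynesian spending multiplier. An initial outlay G
# ripples through the economy as G + G*MPC + G*MPC**2 + ...,
# which sums to G / (1 - MPC). Numbers below are illustrative.
def total_impact(initial_spending, mpc, rounds=1000):
    total, injection = 0.0, initial_spending
    for _ in range(rounds):      # sum the successive rounds of re-spending
        total += injection
        injection *= mpc
    return total

g, mpc = 100.0, 0.8
print(total_impact(g, mpc))          # approaches g / (1 - mpc)
print(round(g / (1 - mpc), 6))       # 500.0  (the closed-form limit)
```

With an MPC of 0.8, each dollar of government spending ends up supporting roughly five dollars of total spending, which is why deficit spending in a downturn looked so attractive to Keynesians.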

It is important to understand that the role of the government in the economy is not solely to dampen the effects of recessions or pull a country out of depression; it also must keep the economy from heating up too quickly.

Keynesian economics suggests that the interaction between the government and the overall economy moves in the direction opposite to that of the business cycle: more spending in a downturn and less spending in an upturn. If an economic boom creates high rates of inflation, the government could cut back its spending or increase taxes. This is referred to as fiscal policy.

Uses of Keynesian Theory

The Great Depression served as the catalyst that shot John Maynard Keynes into the spotlight, though it should be noted that he wrote his book several years after the Depression began.

During the early years of the Depression, many key figures, including then-President Franklin D. Roosevelt, felt that the notion of the government "spending the economy toward health" seemed too simple a solution. It was visualizing the economy in terms of the demand for goods and services that made the theory stick.

In his New Deal, Roosevelt employed workers in public projects, both providing jobs and creating demand for goods and services offered by businesses. Government spending also increased rapidly during World War II, as the government poured billions of dollars into companies manufacturing military equipment.

Criticism of Keynesian Theory

One of the more outspoken critics of Keynes and his approach was economist Milton Friedman. Friedman helped develop the monetarist school of thought (monetarism), which shifted the focus toward the role the money supply has on inflation rather than the role of aggregate demand.

Government spending can crowd out spending by private businesses because less money is available in the market for private borrowing, and monetarists suggested this be alleviated through monetary policy: the government can increase interest rates (making borrowing more expensive) or sell Treasury securities (decreasing the funds available for lending) in order to beat inflation.

Another criticism of Keynesian theory is that it leans toward a centrally planned economy. If the government is expected to spend funds to thwart depressions, it is implied that the government knows what is best for the economy as a whole.

This eliminates the effects of market forces on decision-making. This critique was popularized by economist Friedrich Hayek in his 1944 work, "The Road to Serfdom." In the foreword to a German edition of his book, Keynes himself indicated that his approach might work best in a totalitarian state.

What Are the Key Principles of Keynesian Economics?

Some of the key principles of Keynesian economics are that aggregate demand, which is shaped by both public and private decisions, is more likely than aggregate supply to drive short-term economic events; that wages and prices are sticky, responding slowly to changes in demand and supply; and that changes in demand have the greatest effect on output and employment.

Monetarism is thought to be the opposite of Keynesian economics. While Keynesians believe government intervention is key to managing the economy, monetarists believe that changes in the money supply are what matter most.

Keynesian economics was shown not to work in all scenarios. In the 1970s, when the U.S. economy suffered stagflation, a combination of inflation and slow growth, Keynesian economics had no answer on how to tackle the problem, leading to a decline in its popularity.

The Bottom Line

While Keynesian theory in its original form is rarely used today, its radical approach to business cycles and its solutions to depressions have had a profound impact on the field of economics. These days, many governments use portions of the theory to smooth out the boom-and-bust cycles of their economies. Economists combine Keynesian principles with macroeconomics and monetary policy to determine what course of action to take.


The position noted in 1912 was inaccurate, and the search was taking place in the wrong areas. Then too, remember that Titanic fell to a depth of more than two miles below the surface. It was difficult to probe her properly until more modern technology, allowing more accurate readings and greater depth penetration, was developed and employed.

Titanic did not stay in one piece. She broke in two, and the parts came to rest on the ocean floor with a mile of debris between them. This would have been confusing for anyone looking for an entire wreck.

As to who found the Titanic, that was Bob Ballard, who located the famous ship in 1985. But there is an interesting story behind that story. Ballard and the folks at the Woods Hole Oceanographic Institution were developing a very high-tech submersible that could withstand the pressures of deep-sea exploring and that had great mobility.

But he needed funding and could not get it, so he turned to the US Navy. The Navy agreed after some time but required that Ballard and his team also locate two missing nuclear subs, the Thresher and the Scorpion. The fun part was that Ballard had to keep that secret; no one could know. So, with the backing of funding from the US Navy, and in partnership with the French National Institute of Oceanography, Ballard went out to find the Titanic, and find her he and his associates most certainly did.

As to the condition of the ship, yes she is deteriorating. She has been in salt water since 1912, sitting on the bottom of the ocean around 400 miles off the coast of Newfoundland, Canada. Rust and pressure damage have been taking their toll on the great ship over these many years. The salt water and sea life have permeated every inch from stem to stern. She has been gutted by time and there is no stopping the inevitable: one day the supporting frame will give way.

I am adding this as several individuals have asked where Titanic is actually located. Please see the map below which also contains the coordinates:
And just as an add on, Titanic is not thought of as a wreck. It is looked upon as a memorial. It is a testament to the over 1500 people that died when she sank on April 15, 1912. Many bodies were recovered and many were not. While numerous items have been brought up from the depths, they are handled carefully and the ship is treated as a gravesite. ~ Bram Bottfeld, Quora


If you believe decades of headlines, olive oil could be the closest thing to a life-fixing panacea we have—and now it’s even helping physicists in their experiments.

Researchers at the FOM Institute for Atomic and Molecular Physics (AMOLF) used a single drop of olive oil to create a mirror effect within a system of interacting photons, and the resulting system exhibits a response that mimics memory.

Have you ever used a computer that’s bogged down by too many open programs, so that as you type or move the mouse, the screen responds a fraction of a second late? Your action has been registered, but its effect hasn’t yet appeared on screen. This behavior is analogous to what the AMOLF scientists studied: a physical phenomenon called hysteresis, the way the interacting items within a system are reliant on what has happened before – their memory.

To study hysteresis in photons, the researchers positioned two mirrors so that photons bounced between them, and then added a drop of oil so they could measure how the photons behaved inside it. The oil-filled space between the mirrors forms an optical cavity driven by a laser.

“Scanning the laser-cavity frequency detuning at different speeds across an optical bistability, we find a hysteresis area that is a nonmonotonic function of the speed,” the researchers write in their paper. Photons entering the cavity get caught up in the system’s memory.

Photons in oil aren’t the only hysteretic systems. Boiling water is a closely studied example of hysteresis, and scientists have explored many ways to magnify the phenomenon, because it varies so much with thermal and other factors.

“Experimental boiling curves with hysteresis have different trends, depending on thermal and geometrical parameters of the enhancement structure and boiling liquid physical properties,” a 2015 paper explains.

Hysteresis is often linked with nucleation; the two phenomena have related definitions and frequently appear together. In a seeded raincloud, nucleation is what turns the suspended cloud vapor into drops big enough to fall as rain, and this process, too, depends on what has already happened in the system. Nucleation acts differently and takes different amounts of time depending on temperature and other factors. The variation is comparable to what scientists see when tinkering with boiling water to fine-tune hysteretic responses.

There’s some heated (so to speak) debate about what really causes hysteresis. Even though parts of it have been observed for a long time, explaining what’s happening is a different question that hasn’t been fully answered. For that reason, the olive oil scientists are excited about their findings and keeping their future research within a narrow scope.

“The equations that describe how light behaves in our oil-filled cavity are similar to those describing collections of atoms, superconductors and even high energy physics,” researcher Said Rodriguez explained. And by continuing to study only the hysteresis of the oil-filled cavity, the team can focus on those potential applications rather than the broader entire idea of hysteresis.

HYSTERESIS — think of “history”

Hysteresis is the dependence of the state of a system on its history. For example, a magnet may have more than one possible magnetic moment in a given magnetic field, depending on how the field changed in the past.
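The definition above can be made concrete with a toy model. A household thermostat with a deadband is a classic hysteretic system: inside the deadband, the heater’s state depends on its history, not on the current temperature alone. A minimal Python sketch (the thresholds 18 and 22 are arbitrary illustrative values, not from the article) shows the same input mapping to two different outputs:

```python
# Minimal sketch of hysteresis: a thermostat with a deadband.
# Between 18 and 22 degrees, the heater's next state depends on its
# history, not on temperature alone.
def thermostat(temp, heater_on):
    if temp < 18:
        return True    # too cold: switch the heater on
    if temp > 22:
        return False   # too warm: switch it off
    return heater_on   # inside the deadband: keep the previous state

# Same temperature (20 degrees), different histories, different states:
state = thermostat(17, False)   # came up from cold -> heater on
print(thermostat(20, state))    # True
state = thermostat(23, True)    # came down from warm -> heater off
print(thermostat(20, state))    # False
```

The plot of heater state against temperature traces a loop rather than a single curve — which is exactly what “a hysteresis area” means in the AMOLF paper.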


Methane remains a major climate blind spot. Agriculture is the biggest anthropogenic source of methane worldwide, closely followed by leaks from oil and gas fields. And while 60% of global methane emissions come from human activities, the remaining 40% comes from natural sources, including permafrost and wetlands, which are thawing rapidly and becoming increasingly waterlogged due to rising temperatures.

Methane fuels 20-30% of the heating the planet has experienced to date. Although shorter-lived in the atmosphere than carbon dioxide (CO2), methane has a global warming impact more than 80 times higher than CO2 over a 20-year period.
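That multiplier makes back-of-the-envelope comparisons easy. The sketch below converts methane emissions into CO2-equivalent over a 20-year horizon, using a global warming potential of 80 — a conservative reading of the article’s “more than 80 times”; the exact coefficient varies between IPCC assessments:

```python
# 20-year global warming potential (GWP20) of methane relative to CO2.
# The value 80 is an assumption based on the article's "more than 80 times".
GWP20_CH4 = 80

def ch4_to_co2e(tonnes_ch4, gwp=GWP20_CH4):
    """Convert tonnes of methane to tonnes of CO2-equivalent."""
    return tonnes_ch4 * gwp

# 1,000 tonnes of methane heat the planet as much as ~80,000 tonnes of CO2
# over the next 20 years:
print(ch4_to_co2e(1_000))  # 80000
```

This is why cutting a tonne of methane buys far more near-term cooling than cutting a tonne of CO2.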

Here are some of the world's biggest hidden sources of this highly potent greenhouse gas.
Mysterious giant craters have been appearing in the north Siberian permafrost, which is thawing rapidly due to climate change


In Russia's Yamal and Gyda peninsulas, mysterious giant craters have been appearing in the north Siberian permafrost. Elevated levels of methane have been detected in the water at the base of the craters. One theory suggests that methane may be bubbling up from deep pockets of gas where permafrost is thawing, from methane-producing bacteria, or from the ice itself. As the gas builds up under ice cover, it eventually ruptures explosively, hurling ice and earth for hundreds of meters and leaving massive scars behind. And while the precise reasons these craters are appearing in the Russian Arctic are not fully understood, what is clear is that, worldwide, the permafrost thaw resulting from climate change could become a huge source of methane.

Melting glaciers are revealing methane stocks that have remained hidden for thousands of years

Glacial meltwater

Glaciers around the world are melting rapidly due to rising temperatures, and this is revealing unknown environments – and methane stocks that have remained hidden for thousands of years. A 2023 study by the University of Copenhagen found that methane concentrations in the meltwater of three glaciers in the Yukon territory of north-west Canada were up to 250 times higher than those in the atmosphere.

This surprised scientists as it was previously thought that glacial methane emissions require oxygen-free environments, such as vast ice sheets. "The release of methane under ice is more comprehensive and much more widespread than we thought," write the authors. It is unclear what the global effect of this melting will be, they warn.

When water tumbles through turbines to generate electricity, large amounts of methane are released into the atmosphere (Credit: Getty Images)


Hydroelectric dams and their reservoirs are one of the biggest sources of methane escaping from water, releasing the equivalent of almost one billion tons of CO2 each year. The methane comes from the decomposing matter at the bottom of reservoirs which is released when the water cascades through turbines that generate electricity. UK start-up Bluemethane is working to capture these methane bubbles to use as biogas for electricity generation and heating, and as fuel for vehicles.

Polluted rivers

Freshwater ecosystems such as rivers and lakes account for almost half of global methane emissions. A 2020 analysis of the rivers snaking through the New Territories, one of Hong Kong's lushest areas, revealed that the waters were supersaturated with high concentrations of methane, carbon dioxide and nitrous oxide. The more polluted the river was, the higher its emissions, the scientists found. Large amounts of carbon and nitrogen end up in rivers worldwide via pesticide runoff and these are broken down via anaerobic or aerobic respiration, releasing carbon dioxide, methane and nitrous oxide.


The methane emitted in cow burps often makes headlines, but despite being a well-known source of methane it has so far proved tricky to reduce. Agriculture – from rice paddies to livestock – makes up the largest human source of methane emissions on the planet, according to the International Energy Agency. And within farming, cattle are arguably the biggest offenders, with one Californian feedlot producing more methane than the biggest oil and gas fields in the state.

It's a difficult source of methane to address. Not only are governments reluctant to tell people what to eat, or farmers how to farm, but public datasets of livestock facilities are hard to come by, so knowing where to direct the gaze of remote, methane-tracking sensors or satellites is fraught. Many livestock operations are also dispersed over large areas, or packed into places where farming might not be the main emissions source, analysts point out.

Waterlogged wetlands are releasing methane into the atmosphere more rapidly

Wetlands are the world's largest natural source of methane. As climate change leads to rising temperatures and erratic rainfall, these waterlogged soils are releasing methane into the atmosphere more rapidly. A 2024 analysis by the Department of Energy's Lawrence Berkeley National Laboratory found that wetland methane emissions across the Boreal-Arctic region have increased 9% over the past two decades. The scientists found that between 2002 and 2021 wetlands in these regions released an average of 20 teragrams – or 20 trillion grams – of methane per year, equivalent to the weight of about 55 Empire State Buildings.
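The units here are worth unpacking: a teragram is a trillion grams, i.e. a billion kilograms. The Empire State Building comparison can be sanity-checked in a few lines, assuming the commonly cited figure of roughly 365,000 tonnes for the building’s weight (an assumption on my part, not from the article):

```python
# Sanity check: 20 teragrams of methane vs. Empire State Buildings.
TG_IN_GRAMS = 1e12                       # one teragram = a trillion grams
methane_kg = 20 * TG_IN_GRAMS / 1000     # 20 Tg/year -> 2.0e10 kg

# Assumed mass of the Empire State Building: ~365,000 tonnes
esb_kg = 365_000 * 1000

print(round(methane_kg / esb_kg))        # → 55
```

So the “about 55 buildings per year” figure checks out under that assumed building mass.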

Organic matter rotting in landfill sites emits large amounts of methane

Landfill sites

Waste is the third-largest source of methane globally, after agriculture and energy, according to the International Energy Agency. Organic matter rotting in landfill sites emits large amounts of methane. A 2022 study revealed that a landfill site in Mumbai released about 9.8 tons of methane per hour, or 85,000 tons per year. Composting leftover scraps instead of sending them to landfill can help reduce the amount of methane released into the atmosphere.

Wildfires are a major source of methane pollutants

As wildfires grow in frequency and intensity around the globe, tracking their greenhouse gas emissions – including methane – is becoming ever more important. These fires are a major source of methane pollutants. The methane emitted by the US' 20 biggest fires in 2020 alone was seven times more than the average emitted by wildfires over the preceding 19 years.

And, methane continues to be emitted long after the flames die out, Nasa researchers in Alaska found. Methane hotspots were 29% more likely to occur in tundra that had been scorched by wildfire than nearby unburned areas.

Methane has long lived in the shadow of CO2 – but its impact on global temperatures is such that it is vital we understand it better.



Inspired by observations on an indigenous diet in St. Lucia, Dr. John Rollo in 1797 successfully treated 2 patients with diabetes with a diet consisting only of meat and fat. Rollo recommended near-complete elimination of plant foods (29), a prescription that was widely adopted and empirically optimized to prolong the life of people with diabetes in the 19th century. Recognizing the link between carbohydrate intake and glucosuria, some physicians allowed intake of low-carbohydrate vegetables, whereas others promoted a strictly meat- and fat-based approach for diabetes management.


The current outcry about the possible dangers of the carnivore diet reminds me of similar dire warnings when the Atkins diet gained popularity. Weren't all Atkins dieters supposed to drop dead of heart attacks? And the way the sales of bread and breakfast cereals were down, well, obviously the economy would be ruined. And to be telling overweight people not to eat fruit -- what blasphemy! And what about diabetics no longer needing insulin, or needing only a fraction of their former dose? We can't have that! And so on, and so on . . .

Another interesting parallel is that the keto diet was first developed for children with epilepsy, and the carnivore diet was developed for diabetics. The rice diet and the Pritikin diet evolved as treatments for cardiovascular disease.

But whether it's unlimited raw broccoli or unlimited cream cheese, your body has a way of keeping you away from such extremes.


Darwin’s letter is a reply to a young barrister named Francis McDermott. McDermott wrote to Darwin on November 23, 1880, with an unusual request.

“If I am to have pleasure in reading your books I must feel that at the end I shall not have lost my faith in the New Testament,” he wrote.

“My reason in writing to you therefore is to ask you to give me a Yes or No to the question Do you believe in the New Testament…” McDermott wrote.

McDermott continues by promising not to publicize Darwin’s reply in the “theological papers”. Darwin responded the very next day.

“Dear Sir, I am sorry to have to inform you that I do not believe in the Bible as a divine revelation and therefore not in Jesus Christ as the son of God. Yours faithfully Ch Darwin”


Right after I left the church, if you’d asked me if I believed in the divinity of Jesus, I would have said No. However, I didn’t realize that I was still imbued with a reverent and completely uncritical attitude toward Jesus and pretty much everything he allegedly said — at least the “essence,” as we thought of it, setting aside the cursing of the fig tree and the remarks about Jesus’ enemies soon to be moaning and gnashing their teeth — though I was beginning to wonder what he REALLY said.

Only in the U.S., when I started talking to Jewish friends, who of course had no reverence for Jesus, to put it mildly, and were highly critical of his moral extremism as being against human nature (the kind of “hangman’s metaphysics” that Nietzsche talks about: creating impossible, guilt-producing commandments), did I see another area in which I had been brainwashed and needed to re-think.


In a book titled Albert Einstein, the Human Side, the authors quote another letter Einstein wrote in 1954:

“It was, of course, a lie what you read about my religious convictions, a lie which is being systematically repeated. I do not believe in a personal God and I have never denied this but have expressed it clearly. If something is in me which can be called religious then it is the unbounded admiration for the structure of the world so far as our science can reveal it.”


“I have repeatedly said that in my opinion the idea of a personal God is a childlike one.”


Einstein believed in physics, not in an invisible parent in the sky who demands worship and answers prayers — sometimes.

If Einstein were alive today, he’d say he’s not a “New Atheist.” He wouldn’t be trying to convince you to shed your faith. Instead, he’d follow the Neil deGrasse Tyson approach to religion, which is to say he’d stay away from labels… but even so, he’d admit the idea of a Christian God who listens to your prayers and watches over your life is just flat-out ridiculous.



People become vegetarians for a variety of reasons. Some do it to alleviate animal suffering, others because they want to pursue a healthier lifestyle. Still others are fans of sustainability or wish to reduce greenhouse gas emissions.

No matter how much their carnivorous friends might deny it, vegetarians have a point: cutting out meat delivers multiple benefits. And the more who make the switch, the more those perks would manifest on a global scale.

But if everyone became a committed vegetarian, there would be serious drawbacks for millions, if not billions, of people.

“It’s a tale of two worlds, really,” says Andrew Jarvis of Colombia’s International Center for Tropical Agriculture. “In developed countries, vegetarianism would bring all sorts of environmental and health benefits. But in developing countries there would be negative effects in terms of poverty.”

Jarvis and other experts at the center hypothesized what might happen if meat dropped off the planet’s menu overnight.

First, they examined climate change. Food production accounts for one-quarter to one-third of all anthropogenic greenhouse gas emissions worldwide, and the brunt of responsibility for those numbers falls to the livestock industry. Despite this, how our dietary choices affect climate change is often underestimated. In the US, for example, an average family of four emits more greenhouse gases because of the meat they eat than from driving two cars – but it is cars, not steaks, that regularly come up in discussions about global warming.

“Most people don’t think of the consequences of food on climate change,” says Tim Benton, a food security expert at the University of Leeds. “But just eating a little less meat right now might make things a whole lot better for our children and grandchildren.”

Marco Springmann, a research fellow at the Oxford Martin School’s Future of Food program, tried to quantify just how much better: he and his colleagues built computer models that predicted what would happen if everyone became vegetarian by 2050. The results indicate that – largely thanks to the elimination of red meat – food-related emissions would drop by about 60 percent. If the world went vegan instead, emissions declines would be around 70 percent.

“When looking at what would be in line with avoiding dangerous levels of climate change, we found that you could only stabilize the ratio of food-related emissions to all emissions if everyone adopted a plant-based diet,” Springmann says. “That scenario is not very realistic – but it highlights the importance that food-related emissions will play in the future.”

Food, especially livestock, also takes up a lot of room – a source of both greenhouse gas emissions due to land conversion and of biodiversity loss. Of the world’s approximately five billion hectares (12 billion acres) of agricultural land, 68 percent is used for livestock.

Should we all go vegetarian, ideally we would dedicate at least 80 percent of that pastureland to the restoration of grasslands and forests, which would capture carbon and further alleviate climate change. Converting former pastures to native habitats would likely also be a boon to biodiversity, including for large herbivores such as buffalo that were pushed out for cattle, as well as for predators like wolves that are often killed in retaliation for attacking livestock.

The remaining 10 to 20 percent of former pastureland could be used for growing more crops to fill gaps in the food supply. Though a relatively small increase in agricultural land, this would more than make up for the loss of meat because one-third of the land currently used for crops is dedicated to producing food for livestock – not for humans.

Both environmental restoration and conversion to plant-based agriculture would require planning and investment, however, given that pasturelands tend to be highly degraded. “You couldn’t just take cows off the land and expect it to become a primary forest again on its own,” Jarvis says.

Carnivorous Careers

People formerly engaged in the livestock industry would also need assistance transitioning to a new career, whether in agriculture, helping with reforestation or producing bioenergy from crop byproducts currently used as livestock feed.

Some farmers could also be paid to keep livestock for environmental purposes. “I’m sitting here in Scotland where the Highlands environment is very manmade and based largely on grazing by sheep,” says Peter Alexander, a researcher in socio-ecological systems modeling at the University of Edinburgh. “If we took all the sheep away, the environment would look different and there would be a potential negative impact on biodiversity.”

Should we fail to provide clear career alternatives and subsidies for former livestock-related employees, meanwhile, we would probably face significant unemployment and social upheaval – especially in rural communities with close ties to the industry.

“There are over 3.5 billion domestic ruminants on earth, and tens of billions of chickens produced and killed each year for food,” says Ben Phalan, who researches the balance between food demand and biodiversity at the University of Cambridge. “We’d be talking about a huge amount of economic disruption.”

But even the best-laid plans probably wouldn’t be able to offer alternative livelihoods for everyone. Around one-third of the world’s land is composed of arid and semi-arid rangeland that can only support animal agriculture. In the past, when people have attempted to convert parts of the Sahel – a massive east-to-west strip of Africa located south of the Sahara and north of the equator – from livestock pasture to croplands, desertification and loss of productivity have ensued. “Without livestock, life in certain environments would likely become impossible for some people,” Phalan says. That especially includes nomadic groups such as the Mongols and Berbers who, stripped of their livestock, would have to settle permanently in cities or towns – likely losing their cultural identity in the process.

Plus, even those whose entire livelihoods do not depend on livestock would stand to suffer. Meat is an important part of history, tradition and cultural identity. Numerous groups around the world give livestock gifts at weddings, celebratory dinners such as Christmas center around turkey or roast beef, and meat-based dishes are emblematic of certain regions and people. “The cultural impact of completely giving up meat would be very big, which is why efforts to reduce meat consumption have often faltered,” Phalan says.

The effect on health is mixed, too. Springmann’s computer model study showed that, should everyone go vegetarian by 2050, we would see a global mortality reduction of 6-10 percent, thanks to a lessening of coronary heart disease, diabetes, stroke and some cancers.

Eliminating red meat accounts for half of that decline, while the remaining benefits are thanks to scaling back the number of calories people consume and increasing the amount of fruit and vegetables they eat. A worldwide vegan diet would further amplify these benefits: global vegetarianism would stave off about 7 million deaths per year, while total veganism would knock that estimate up to 8 million. Fewer people suffering from food-related chronic illnesses would also mean a reduction in medical bills, saving about 2-3 percent of global gross domestic product.

But realizing these projected benefits would require replacing meat with nutritionally appropriate substitutes. Animal products contain more nutrients per calorie than vegetarian staples like grains and rice, so choosing the right replacement would be important, especially for the world’s estimated two billion-plus undernourished people. “Going vegetarian globally could create a health crisis in the developing world, because where would the micronutrients come from?” Benton says.

All in Moderation

But fortunately, the entire world doesn’t need to convert to vegetarianism or veganism to reap many of the benefits while limiting the repercussions.

Instead, moderation in meat-eating’s frequency and portion size is key. One study found that simply conforming to the World Health Organization’s dietary recommendations would bring the UK’s greenhouse gas emissions down by 17 percent – a figure that would drop by an additional 40 percent should citizens further avoid animal products and processed snacks. “These are dietary changes that consumers would barely notice, like having a just-slightly-smaller piece of meat,” Jarvis says. “It’s not this either-or, vegetarian-or-carnivore scenario.”

Certain changes to the food system also would encourage us all to make healthier and more environmentally-friendly dietary decisions, says Springmann – like putting a higher price tag on meat and making fresh fruits and vegetables cheaper and more widely available. Addressing inefficiency would also help: thanks to food loss, waste and overeating, fewer than 50 percent of the calories currently produced are actually used effectively.

“There is a way to have low productivity systems that are high in animal and environmental welfare – as well as profitable – because they’re producing meat as a treat rather than a daily staple,” Benton says. “In this situation, farmers get the exact same income. They’re just growing animals in a completely different way.”

In fact, clear solutions already exist for reducing greenhouse gas emissions from the livestock industry. What is lacking is the will to implement those changes. ~


This reminds me of a former neighbor of mine who was a dedicated vegetarian. He died of a massive heart attack in his early seventies. Nor is it unusual to hear of a vegetarian getting cancer. One of my vegetarian friends suffers from raging rheumatoid arthritis. So a vegetarian diet doesn’t guarantee better health — as I was to find out the hard way.

I tried being a vegetarian for three and a half years — and what miserable years they were. I was always hungry and became an eating machine; I had to eat every two hours, so I carried peanut butter sandwiches with me when I was away from home. I also started to wake up in the middle of the night, feeling ravenously hungry. I gained weight like crazy. I had white streaks on my fingernails, which suggested a zinc deficiency — but a zinc supplement didn’t help. I was tired and joyless. Life tasted more or less like brown rice, which I consumed every day, often even twice a day.

Ultimately it was the weight gain that made me stop torturing myself. I couldn’t stand looking at myself in the mirror. I discovered Atkins, and you can’t really do Atkins without eating meat — or at least fish. I ate my first meat-containing meal right there in the kitchen, not wanting even the minor delay of carrying the plate to the dining table. Every cell in my body seemed to be dancing with joy.

And the taste! It was a cheap cut of beef, but for me it was a royal feast. I felt so happy after the endless bricks of tofu I used to force myself to swallow. (No tofu has passed my lips since.)

I lost weight amazingly fast, which was exhilarating. The white spots on my fingernails were gone within a month or so. Soon my one complaint was that my nails grew so fast. My monthly cycle came back, so no need to worry about bone loss (or neuron loss — though at that point I didn’t yet know how much a woman’s brain relies on estrogen and sex steroids in general). I loved looking at myself in the mirror and getting attention from men once more. One of my worries became: Is it normal never to feel tired?

In addition to zinc deficiency, a vitamin B12 deficiency together with an omega-3 fatty acid deficiency can perhaps explain why I felt so bad while on a vegetarian diet. According to all my sources, the vast majority of people do fine on a vegetarian diet — but there are exceptions who become sickly and malnourished. I found out the hard way.

[You may be asking why I tortured myself for so long. Aside from my Catholic background, which plants the pernicious idea that suffering is good for you, it was simply ignorance. I had barely a fraction of the knowledge about diet and health that I have now.]

The opposite of a vegetarian diet is the carnivore diet, which aims at a total elimination of plant foods. Its proponents point out that plants contain a multitude of toxic compounds. Some purists eat only meat and butter from grass-fed cows, since corn-fed cattle aren’t as healthy as cattle eating their natural food, which is grass.

It may be entertaining to watch the vegan-carnivore wars, but basically it’s a waste of time. Between the two extremes — veganism versus the carnivore diet — lies the good old omnivore diet that has sustained our ancestors. There is only one way to find out what is right for you: listen to your body. It will let you know — sometimes as soon as the first meal!



Although the Carnivore Diet has only recently become popular, scientists have been interested in this very low-carb way of eating for hundreds of years.

There are several accounts of researchers mimicking the traditional meat-based dietary intake of Arctic or nomadic societies as far back as the 1700s. For example, in 1797 Dr. John Rollo successfully treated patients with type 2 diabetes using a diet that consisted primarily of meat and fat after studying the very low-carbohydrate diet of indigenous people in St. Lucia. After discovering that a very low-carb diet benefited those with diabetes, it became a widely adopted treatment for managing this condition until the discovery of insulin in 1921.

The Carnivore Diet we know today was popularized by Shawn Baker, M.D., who authored a book titled The Carnivore Diet in 2018 after finding that a meat-based diet benefited his health. This version of the Carnivore Diet advocated for the complete elimination of plant foods and total reliance on meat and other animal products like eggs, seafood, and full-fat dairy products.

Some Carnivore advocates follow a strict Carnivore Diet that only includes animal-based foods, while others follow less restrictive versions that allow for small amounts of plant-based foods, like low-carb vegetables. 

However, most people following Carnivore-type diets get most of their calories from meat and other animal foods.

Although there are different versions of the Carnivore Diet, most people following this eating pattern primarily consume animal foods, such as:

Meat: Steak, pork, ground beef, bison, lamb, and venison
Organ meats: Liver, heart, and kidneys
Poultry: Chicken, duck, and turkey
Seafood: Salmon, sardines, clams, mussels, and shrimp
Full-fat dairy: Full-fat yogurt, cheese, and butter
Eggs: Whole eggs and egg yolks

In addition to animal-based foods, people on Carnivore Diets allow for seasonings like salt, pepper, herbs, and spices. 

A 2021 study that included data on the dietary intake of 2,029 people following Carnivore-style diets found that red meat products, like beef, lamb, and venison were the most commonly consumed foods, followed by eggs and nonmilk dairy products. The study also found that over 50% of the participants drank coffee at least once per day.


Foods typically avoided on the Carnivore Diet include:

Fruit: Berries, apples, grapes, bananas, avocados, and peaches
Vegetables: Potatoes, zucchini, broccoli, asparagus, and greens
Grains and grain-based products: Bread, quinoa, rice, pasta, and noodles
Nuts and seeds: Almonds, cashews, peanut butter, pumpkin seeds
Beans: Black beans, chickpeas, kidney beans, and lentils
Snack foods and sweets: Cookies, chips, ice cream, cakes, and candy
Sugary beverages: Juice, soda, sweetened coffee drinks, and energy drinks

Water is the preferred beverage when following a Carnivore Diet, though many people who follow this diet include tea and coffee in their daily routine.

Additionally, some people allow for a small amount of low-carb vegetables, like leafy greens and zucchini.


Currently, there’s limited research investigating the health benefits of following a Carnivore Diet. 

However, there’s plenty of evidence that very low-carb diets can benefit the health of some people, such as those with type 2 diabetes.

Although there are no strict rules regarding the macronutrient ratio of the Carnivore Diet, it can generally be considered a type of high-protein, very low-carb diet. Studies show that certain very low-carb diets, like the keto diet, could be helpful for certain health conditions. 

But, keep in mind that very low-carb diets aren’t the same thing as the Carnivore Diet, and there’s currently limited evidence that the Carnivore Diet improves health in any way, specifically. 

That said, the Carnivore Diet may offer a few benefits.

May Improve Blood Sugar Regulation

Low-carb diets are effective for improving health outcomes in people with diabetes. This is because these diets are low in carbohydrate-rich foods, which have the largest impact on blood sugar and insulin levels. 

If a person follows a Carnivore Diet, their carbohydrate intake would be minimal, and their blood sugar levels and reliance on diabetes medications would likely decrease. 

In the 2021 study that included data on the dietary intake of 2,029 people following Carnivore-style diets for nine to 20 months, researchers found that the participants with type 2 diabetes experienced reductions in their levels of hemoglobin A1c (HbA1c), a long-term marker of blood sugar control, and significant reductions in their diabetes medication use. In fact, among the 262 participants with type 1 or type 2 diabetes (T2DM), 84% discontinued oral diabetes medications and 92% of participants with T2DM discontinued their use of insulin.

Although these results are promising, more research investigating the effectiveness and safety of the Carnivore Diet is needed. Also, it’s important to note that diabetes can be effectively managed using less restrictive diets, such as plant-based diets and more inclusive low-carb diets, which are far better for overall health and easier to stick to long-term. 

May Promote Weight Loss 

The Carnivore Diet eliminates many foods and beverages implicated in weight gain, including ultra-processed foods and added sugar. Since this dietary pattern is low in carb-rich foods and so high in protein, which is the most filling macronutrient, it’s likely that the Carnivore Diet will promote weight loss, at least in the short term. 

In the 2021 study mentioned above, the participants reported substantial reductions in their body mass index (BMI), a measure of body fat based on height and weight, after transitioning to a Carnivore Diet.

While this is encouraging, the Carnivore Diet is highly restrictive and likely unsustainable for most people. Similar diets, like the keto diet, have also been shown to be effective for short-term weight loss. However, diets like keto that cut out a number of healthy foods are notoriously difficult to stick to, and most evidence suggests that, in the long term, their efficacy is comparable to that of other, less restrictive weight loss diets.

This means that even though the Carnivore Diet may promote quick weight loss, more inclusive diets that are easier to follow are likely just as effective for long-term weight loss and healthy weight maintenance.

Other Possible Benefits 

Participants included in the 2021 study reported that following a Carnivore Diet led to improvements in their overall health, physical and mental well-being, and some chronic medical conditions.

This may be because the Carnivore Diet cuts out foods and drinks associated with poor physical and mental health, including ultra-processed foods and added sugars. 

But keep in mind that the participants included in this study had only been following the Carnivore Diet for nine to 20 months. It’s unknown how the Carnivore Diet impacts long-term health, including disease risk.

Overall, more research is needed to fully understand how the Carnivore Diet impacts overall health. 

Risks and Side Effects

Though proponents of the Carnivore Diet suggest that this way of eating can help boost weight loss and improve chronic diseases, there are several significant downsides to this way of eating. 

First, this diet is extremely restrictive and cuts out foods that are known to improve health and deliver essential nutrients, like fruits and vegetables. Diets low in produce have been consistently linked with an increased risk of several diseases, including cancer and heart disease, as well as overall mortality risk.

A high intake of red and processed meat has also been associated with an increased risk of several health conditions, including colorectal cancer, breast cancer, and heart disease.

Another concern with the Carnivore Diet is the environmental impact of a dietary pattern high in red meat and other animal products. Research shows that red meat production significantly contributes to greenhouse gas emissions and has a considerable impact on global warming and climate change.

What’s more, a diet low in plant foods can lead to unpleasant side effects like constipation, fatigue, low mood, nutrient deficiencies, and more. [Oriana: though I never did pure Carnivore, a very low-carbohydrate diet quickly showed its benefits: high energy, good mood, elimination of zinc and iron deficiencies, “and more.”]

Keep in mind that some people following a less restrictive Carnivore Diet may include some produce in their diet, like low-carb vegetables. [and, need we say, green salad twice a day, as Atkins suggested]

If you’re looking for a safer, more evidence-based way to better your health, consider trying a less restrictive diet high in foods known to improve overall health, such as the Mediterranean diet or a less restrictive low-carb diet. ~


I find meat very energizing. My brain starts working better, and I experience the "can do" attitude even when undertaking difficult projects. But here is my warning to everyone, on any diet: NEVER CONSUME PROCESSED MEAT. It's carcinogenic.

Ideally: grass-fed beef, free-range chickens, wild-caught fish.

I see no need for Keto Bars and similar keto junk food. A low-carb diet keeps you feeling full, without the urge to be constantly snacking (eliminating the craving for snacks may be critical in how this diet works to combat obesity and various ailments).

“One of the most common ‘side effects’ of a carnivore diet is the near-complete absence of gas.”  ~ Shawn Baker

“The top three issues I’ve observed being improved by the carnivore diet are joint pain, digestive health, and mental health.” ~ Shawn Baker

from another source:

Elimination of Inflammatory Foods: By cutting out plant-based foods, especially those high in anti-nutrients or irritants, the diet reduces the common triggers of inflammation.

Rich in Essential Nutrients: Animal products are packed with nutrients like omega-3 fatty acids, zinc, and selenium, known for their anti-inflammatory properties.

Focus on Gut Health: A simplified diet can aid in gut healing, reducing the gut-related inflammation often caused by more complex diets.

In summary, those on a carnivore diet typically report these benefits:
more energy, less joint pain, a decrease in headaches, better digestion, better focus, increased strength and endurance, better sleep, and quicker recovery between workouts.

“Many of the triggers for autoimmune conditions, it turns out, are plant products.” ~  

Ending on beauty:

In my beginning is my end. Now the light falls
Across the open field, leaving the deep lane
Shuttered with branches, dark in the afternoon,
Where you lean against a bank while a van passes,
And the deep lane insists on the direction
Into the village, in the electric heat
Hypnotized. In a warm haze the sultry light
Is absorbed, not refracted, by grey stone.
The dahlias sleep in the empty silence.
Wait for the early owl.

~ TS Eliot, East Coker, second stanza of Section I.