Sunday, June 18, 2023

DOSTOYEVSKY’S “RUSSIAN GOD”; THE DECLINE IN MARRIAGE MIGHT BE A GOOD THING; MOUNTAINS INSIDE THE EARTH; THE FIFTIES REALLY WERE DIFFERENT; ANTI-DOPAMINE PARENTING; CARL SCHMITT, THE FATHER OF THE NEW AUTHORITARIANISM; OBESITY PERMANENTLY CHANGES THE BRAIN

Daylily Maestro Puccini; photo: Mim Eisenberg

*
MY EIFFEL TOWER READING

I am reading my poems aloud
at the foot of the Eiffel Tower.
I stand in metallic moonlight.
My hands bloom matching pallor.

The tourists at the top of the Tower
do not hear me, but I don’t shout.
I read the dark stanzas alone.
The sky is stapled with stars.

From the Louvre, Aphrodite
stretches to me missing arms.
The spider-like cathedral
climbs the web of ancient night.

The winged horses from the Opera gallop by.
Marble lions roar without a sound.
The streets hear me, the city —
stony galaxies of light.

I am reading aloud to the trees,
to the statue with a starry sword;
to the ghosts of beheaders and queens,
to the echoes stored up in stone.  

It is late but I read on. I need not shout.
The night knows my poems by heart.
The night greets me with a river of silence
and lays bridges from line to line.

~ Oriana


Marc Chagall: Paris Through the Window, 1913.

Jennifer Blessing writes (on Guggenheim.org):

~ After Marc Chagall moved to Paris from Russia in 1910, his paintings quickly came to reflect the latest avant-garde styles. In Paris Through the Window, Chagall’s debt to the Orphic Cubism of his colleague Robert Delaunay is clear in the semitransparent overlapping planes of vivid color in the sky above the city.

The Eiffel Tower, which appears in the cityscape, was also a frequent subject in Delaunay’s work. For both artists it served as a metaphor for Paris and perhaps modernity itself. Chagall’s parachutist might also refer to contemporary experience, since the first successful jump occurred in 1912.

Other motifs suggest the artist’s native Vitebsk. This painting is an enlarged version of a window view in a self-portrait painted one year earlier, in which the artist contrasted his birthplace with Paris. The Janus figure in Paris Through the Window has been read as the artist looking at once westward to his new home in France and eastward to Russia.

Chagall, however, refused literal interpretations of his paintings, and it is perhaps best to think of them as lyrical evocations, similar to the allusive poetry of the artist’s friends Blaise Cendrars (who named this canvas) and Guillaume Apollinaire. ~

Oriana:

Chagall brings Vitebsk to Paris. He suffuses the city that some call the capital of Western civilization with a far-away folklore. You can’t get away from your central themes. Poets are amazed at how the feeling of loss manages to infiltrate a poem that promised to be a cheerful little piece. Or, if they’ve just fallen in love, how that love informs a poem about a tree; indeed they are in love with the whole world, holding it in their poems as in loving arms (I'm stealing from Wislawa Szymborska).

*
DOSTOYEVSKY’S “RUSSIAN GOD”

~ Winston Churchill famously stated that Russia “is a riddle wrapped in a mystery inside an enigma.” While reading Fyodor Dostoevsky’s The Brothers Karamazov and Demons, I arrived at the same conclusion, thoroughly perplexed by Dostoevsky’s use of the phrase “Russian God.”

What does Dostoevsky’s peculiar notion of Russian God mean? In Christendom, God is personal, loves all men, and sent his son to die not for the salvation of a singular nation but all mankind. Analysis of Dostoevsky’s Demons and The Brothers Karamazov indicates that Dostoevsky’s concept of Russian God is a uniquely Russian entity that diverges from biblically orthodox conceptions of God and faith. Underpinned by Russian nationalism, anti-Westernism, and Russian imperial doctrine, Dostoevsky’s Russian God notion emphasizes tribalism over Christian theological orthodoxy and is a footnote in the long history of state manipulation of religion in Russia.

Dostoevsky helps readers understand the complex relationship between Russian nationalism and Russian Orthodoxy because his Russian God concept is underpinned by Imperial Russia’s “Official Nationality” doctrine and sobornost’. Official Nationality was the Russian Empire’s main ideological doctrine during the reigns of Tsars Nicholas I and Alexander III. It combined three elements—Russian Orthodoxy, Russian Nationality, and autocracy—into a single conservative Russian imperial ideology that called for Russification, pan-Slavism, and anti-Westernism.

Sobornost’ is an abstract Russian religious concept that was first articulated by Slavophilic thinker Alexei Khomiakov. It is a living community of believers that exists within the church and binds all together in Russian Orthodox faith. Sobornost’s community is explicitly exclusionary of Catholics, Protestants, and other Western denominations perceived as corrupt.

According to Russian philosophy scholar Elena Besschetnova, “Khomiakov played a huge role in the formation of the Russian national identity, in the formation of the ‘Russian spirit’ and ‘Russian thought,’ giving to it religious connotation and its own special direction.” Sobornost’ thus has a relationship with Official Nationality and Russian chauvinism through its linkages to Russia’s exclusionary religious identity.

In Demons, Russian God is mentioned from the perspective of morally flawed characters. Demons’ first mention of Russian God occurs when Stepan Trofimovich is introduced to the reader as a man who lives in “a hotbed of freethinking, depravity, and godlessness,” who enjoys drinking champagne and bloviating liberal thoughts about the “‘Russian spirit,’ about God in general and the ‘Russian God’ in particular; for the hundredth time repeating scandalous Russian anecdotes known to everyone and repeated to everyone.”

Trofimovich is depicted as a good-for-nothing liberal, whose alcohol-induced banal ramblings on the supposed true nature of Russia, the Russian spirit, and Russian God ultimately cause more harm than anything else, considering that he is partly to blame for the revolutionary ideas that fuel the destruction that occurs in the novel. Significantly, freethinking and Westernization are equated with atheism and degeneracy, reasoning characteristic of Official Nationality.

Demons’ next mention of Russian God occurs when Petr Stepanovich speaks with Semyon Karmazinov, a nihilist whose character serves as Dostoevsky’s literary caricature of Ivan Turgenev. Karmazinov’s nihilism is evident when he states:

Holy Russia is least capable in all the world of resisting anything. Simple people still hang on somehow by the Russian God; but the Russian God, according to the latest reports, is rather unreliable and even barely managed to withstand the peasant reform … No, I don’t believe in the Russian God at all.

Seeking clarification, Petr Verkhovensky immediately asks Karmazinov whether he believes in “European [God]”, to which Karmazinov unambiguously replies: “I don’t believe in any”. This dialogue is telling with regards to Russian religious identity, as it implies that Russian God and “European God” are distinctly different conceptual entities.

While Demons provides no explicit definitions or qualifying characteristics of either the Russian or European God, the very fact of their being separate is both significant and utterly unbiblical. The Russian Orthodox monks in The Brothers Karamazov provide some insight into Russian God: it should not be simply associated with Eastern Orthodoxy or the Byzantine Rite. The monks chastise Constantinople’s leadership, saying:

We stick to the old ways, who cares what innovations they come up with; should we copy them all? … We’ve had as many holy fathers as they have. They sit there under the Turks and have forgotten everything. Their Orthodoxy has long been clouded.

Thus, Russian God should not be interpreted as an Eastern Orthodox concept in general, but specifically as a Russian Orthodox entity. This indicates Russian nationalism consistent with Official Nationality.

For Dostoevsky, the debate about faith in The Brothers Karamazov is not as simplistic as atheism versus theism, but rather a belief in the God of Russian Orthodoxy, complete with its themes of sobornost’ and political linkages to Russian nationalism and autocracy, versus anything else. Dostoevsky’s Russian God is incompatible with ecumenical fellowship through Christ, and proper religious observance is only preserved through identification and division of ingroups and outgroups on extra-theological grounds.

In The Brothers Karamazov, this is evident in how Catholicism is lampooned in “The Grand Inquisitor.” Rome is described as “the third temptation of the devil”; Pavel Smerdyakov, who is not entirely an atheist, has a corrupted faith which is described as “not Russian at all”; Ivan Karamazov, who also is not a complete atheist, is similarly criticized for being a “stinking Jesuit”, and the Ecumenical Patriarchate’s leadership in Constantinople is branded as illegitimate by Russian monks who describe it as “clouded” by Turkish influence.

The negative effects of undermining sobornost’ are central in Dostoevsky’s novels. This is especially prevalent in Demons, where liberal characters’ ridicule of Holy Russia, Russian culture, and Russian God, combined with praise of institutional reforms such as the new criminal courts with trial by jury (as opposed to the previous more autocratic system), demonstrates how deviation from Official Nationality’s three tenets of Orthodoxy, nationality, and autocracy leads to death and destruction. Thus, failure to adhere to sobornost’ destabilizes, and governmental reform and liberal attitudes destroy.

In both novels Dostoevsky uses Russian God to refer not to European, or Western God, but to unique aspects of Russian Orthodoxy with its unique emphasis on sobornost’. The Brothers Karamazov’s single reference to Russian God demonstrates the sincerity of Dmitry Karamazov’s character transformation when he exclaims his love for Russia and Russian God in the epilogue. Unfortunately for Dostoevsky, sobornost’s nationalistic and exclusionary premises contradict biblical teaching. Thus, Dostoevsky’s Russian God notion is an amalgamation of Russian nationalism, autocracy apologia, and sobornost’.

Dostoevsky’s Russian God concept is instrumental for a holistic understanding of Russian religious identity, which is best understood through the historical nexus of Russian Orthodoxy, nationalism, and autocracy that was characteristic of Russia for most of the nineteenth century and increasingly characteristic of Russia today. While Dostoevsky may not have been a good theologian or a doctrinally sound Christian, his works speak volumes of truth about Russia’s idiosyncratic attitudes toward faith, God, community, and the dangers of foreign influences on “Holy Russia.” ~

https://providencemag.com/2019/08/fyodor-dostoevsky-russian-god-faith-christianity-brothers-karamazov-demons/


*
CARL SCHMITT AND THE NEW AUTHORITARIANISM

~ Donald Trump, Rodrigo Duterte, Viktor Orban, Vladimir Putin – from Manila to Moscow, Washington to Budapest, populist authoritarians are the new normal.

In Hungary, Orban, the prime minister, aims to build an “illiberal democracy” while in Russia, Putin long ago crushed independent journalism and political opposition. Turkey’s Recep Tayyip Erdoğan presides over a brutal crackdown on media and civil society. In the Philippines, Rodrigo “the Punisher” Duterte promised to drop the corpses of 100,000 suspected gangsters in Manila Bay, threatening to close down congress if it opposes him.

And in the US, Trump’s run for the presidency prompted conservative commentator Andrew Sullivan to warn of the threat of tyranny.

There are many differences among these leaders. But instinctively we recognize some similarities: the bluster and the bravado, the ability to articulate a popular anger at existing elites, the sense of being an outsider and the ever-alluring promise to “get things done” and make their countries “great again”.

THE “SOVEREIGN LEADER”

Schmitt argues that effective states need a truly sovereign leader who is not shackled by constitutions, laws, and treaties: a truly sovereign president who will cut through red tape and take whatever action is necessary.

This is the über-sovereignty that allowed Putin to annex Crimea in 2014 without paying attention to international law. It’s the mode of decision-making implied by Trump’s announcement that he will “build a great, great wall” along the US-Mexican border, or his claim that you can’t beat Islamic State by “playing by the rules”. And it’s exactly this approach that Duterte invokes in his clampdown on crime, bypassing the courts and “getting criminals off the streets”.

The rule of law is an obstacle to be overcome – not a principle to be embraced. And many voters agree: they want political leaders who are getting results, not talking to lawyers.

But the price for this Schmittian sovereignty is high: it needs the executive to control the legislature, the courts and often the media. In Russia, parliament has become a rubber-stamp, the courts are dutiful allies of the Kremlin and the media is largely under state control. In Turkey, Erdoğan has subdued the country’s courts and locked up scores of journalists. In February 2016 he said he would not respect a constitutional court ruling that resulted in the release of two journalists – the pair were subsequently jailed after a further trial.

The US democratic system may be remarkably resilient, but it’s anybody’s guess what a President Trump might do if courts or congress blocked his most radical ideas.

US AND THEM

Schmitt’s second big idea is that politics is fundamentally about the distinction between friends and enemies. Liberal democracies are hypocritical, says Schmitt. They have constitutions and laws that pretend to treat everybody equally, but this is a sham. All states are based on a distinction between “them” and “us”, between “friend” and “enemy”. A nation needs to constantly remind itself of its enemies to ensure its own survival.

The new authoritarians embrace Schmitt’s friend/enemy distinction with gusto. Trump has a litany of opponents – Mexicans, Muslims, the Chinese – that he says seek to undermine America. In Russia, it’s the US that serves as Public Enemy Number One. In Hungary, migrants from the Middle East fill the role.

But – as Schmitt’s experience of Nazi Germany proved only too well – a nation defined in terms of external enemies quickly finds internal foes too. In Russia, Putin warned against a “fifth column” of “national traitors”. In Turkey more than 2,000 people have been prosecuted since April 2014 on charges of “insulting” Erdoğan – and academics, journalists and political opponents are attacked as enemies of the Turkish state. For Trump, too, there are plenty of internal enemies, not least “disgusting reporters” in the much hated “liberal media”.

RISE OF AUTHORITARIANISM

Schmitt’s third radical idea is to redefine democracy. In Schmitt’s view, democracy is not a contest between different political parties, but the creation of an almost mystical connection between the leader and the masses. The leader articulates the internal emotions of the crowd. That’s why Putin still enjoys approval ratings in the 70-80% range, despite Russia’s economic woes. And it’s why Trump will flourish with his supporters regardless of policy flip-flops.

When Trump claims he can shoot somebody on Fifth Avenue and not lose any votes, he’s channelling Schmitt.

Schmitt’s brilliance lay in his unflinching, unsentimental analysis of the baser notions of politics. He knew only too well the power of xenophobia and hatred to mobilize mass support. He saw at first hand the attraction of a leader who could cut through political or constitutional quagmires to “save” the nation. Even as a jurist, he felt the rush of emotion in a crowd when a leader articulates their deepest fears and desires.

Liberals will rail against Duterte, campaign to “stop Trump”, and call for more sanctions against Putin’s Russia. But the rise of Schmittian politics is a sure sign of a deep malaise in global democracy. The spread of liberal ideas around the world has failed to address the social dislocation and economic marginalization of huge groups in society. Instead it has produced a turbocharged global elite, apparently unaccountable to the societies from which they extract their wealth.

Quick-fix authoritarian solutions will ultimately fail, but they can also be highly destructive. The second half of the 20th century can be defined as a struggle between Schmittian politics – the authoritarianisms of the left and of the right – and a workable liberal alternative.

After 1945, Germans refused to accept the assumptions of a Schmittian world, of a society divided into friends and enemies. Instead they forged a constitution that embedded the rule of law and liberal freedoms. That embrace of liberal democracy was a hard-fought lesson. The rise of the new authoritarians around the world is forcing us to learn it all over again. ~

https://theconversation.com/carl-schmitt-nazi-era-philosopher-who-wrote-blueprint-for-new-authoritarianism-59835

*
A DEBATE OVER MARX’S ERRORS

Ryan Osterman:

Some of Marx’s biggest mistakes stemmed from three things:

Marx himself was quite polemical towards his critics — particularly on the “dictatorship of the proletariat” idea, which could easily slip into something worse than Russian Tsardom. To be fair, he was not an authoritarian.

He didn’t believe in writing a cookbook, and the evidence seems to suggest that he imagined something like one giant worker cooperative, which was itself vague. However, he died in 1883, so blaming him for Stalinism is a bit unfair.

He was limited to the ideas of his time. He couldn’t really have known that domination would lead to something like what actually played out.

Dima Vorobiev:

Marx had quite a fascination with the French Revolution and the revolutionary terror. Also, it was Marx who trademarked the “dictatorship of the proletariat.”


The Communist Manifesto is a cookbook per se. The Stalinists and our comrades didn’t do anything that contradicted it.

Can’t we say the same about Jesus’ teachings? There’s no indication of His prescience about the Crusades, the Holy Inquisition, and the eradication of heretics that came after Him, all in His name.

Al Brown:
The genius of Applied Marxists — as first implemented by Lenin but, as you say, entirely in line with Marx’s thinking — was in honestly admitting the fundamental contradiction in the term “Democratic Socialism”. So they solved it by redefining “Democracy”. Anyone who wouldn’t go along stopped being a citizen and became a Class Enemy. I disagree with their philosophy, but I admire the courage and consistency of their convictions.

They recognized that Socialism has to be a one-way ratchet. It can’t go backward, so it must not be challenged, and some basic democratic choices have to be precluded. Since some of those choices, like private property, are very popular, they have to be suppressed with violence; and since they’re pretty basic to human nature, they keep trying to come back, and the violence has to continue indefinitely.

Oriana:
What struck me most when I first started learning about Marx and Marxism is that Marx was against the trade unions, which proved effective in improving working conditions and the workers’ well-being. Marx actually wished for a worsening of workers’ conditions — so they’d be more inclined toward revolution. A decently paid, satisfied worker was a threat, a living contradiction of Marx’s theory — practically a class enemy. “We must abolish prosperity,” Marx wrote. Long before the example of Russia and China in contrast to England and Germany, Marx understood that workers must be sufficiently impoverished to feel angry enough to take revolutionary action.

Relatively prosperous workers preferred peaceful reforms, something that enraged Marx because it contradicted his ideas about the historical inevitability of communism.

Theoretically, the growing misery of the working man was to happen in the most industrialized countries, like England and Germany. But it was precisely the most industrialized countries that had the most trade unions and whose workers had the highest standard of living — and were also the least “revolutionary.” Marx never came to terms with this reality, and never tried to revise his dogmas. He longed to see his prophecy of revolution come true — in the right places, of course, i.e. the most industrialized countries. But history showed a strong sense of irony.

It's too bad that Marx existed at all. Without him and his Communist Manifesto, there'd probably be no Soviet Union (and likely no Nazi Germany either, wielding its inflammatory threat of "Judeo-Bolshevism") and the world would be spared a great deal of suffering. 

What a stunning thought: what the world could be if Marx and Lenin had never been born.

*
RUSSIANS ARE GETTING SCARED AND ARE DENYING REALITY (Misha Firer)

SPIEF, Russia’s most important business event, is turning into a full-blown clown show. The director-general of the state-controlled auto giant Avtovaz demonstrated the new Lada Aura to journalists and guests, and it didn’t start! The director-general and his deputy at the largest auto factory in Russia opened the hood and began to rummage inside like two regular dudes spending a Sunday afternoon in a garage, trying to figure out what was wrong and fix it. Can you imagine the Toyota CEO doubling as a mechanic at a car show?

My friend Anna took her son to a children’s psychiatrist in Moscow on the recommendation of a family friend.

Her son Mark had been diagnosed with Asperger’s and suffers from fainting spells. Anna wanted to know what she could do about it.

Anna was very surprised when the psychiatrist told her that autism doesn’t exist, that it’s a Western hoax to make money, and recommended scaring Mark with the possibility of being sent to a psychiatric clinic. And if he continued to faint, he was to spend a month in the psychiatric clinic for purely prophylactic purposes.

The psychiatrist watched the short movies Mark shoots with his phone, and said that he’s going to be the next Karen Shakhnazarov, a movie director who just happens to be a die-hard supporter of the invasion of Ukraine.

Shakhnazarov also believes that Ukraine doesn’t exist and that the special operation is being conducted to return Russian lands.

I think there’s definitely a connection there -- in the doctor’s mind autism doesn’t exist and neither does Ukraine.

It’s as if a whole bunch of vatniks [poorly educated Russian nationalists] in Russia mentally live in a parallel world with Putin. It’s like they have their own cult of denial.

Giving a speech at SPIEF, Vladimir Putin said that Russia is “gradually kicking dependency on oil and gas.”

How? By shifting back to coal? Putin was addressing the very demographic that remains loyal to him precisely by not sharing a common reality with billions of men and women. And his bootlickers are right beside him, expressing insane thoughts.

“Russia is a savior island. It’s like a Russian Ark for migrants. They’re very terrified to live abroad,” said Svetlana Anokhina, coordinator of the center for the relocation of immigrants from NATO countries.

About three million Russian citizens have relocated, mainly to NATO countries, since the beginning of the war in Ukraine.

Oligarchs and the families of Russia’s ruling elites have never left NATO countries, war or no war, and yet there are allegedly terrified Russians who can’t wait to jump off the sinking ship and board the Russian ark.

The head of the State Doom’a committee responsible for financial markets (as if financial markets needed this man in his office to operate them manually) announced that in 10 years America won’t be a financial superpower.

Apparently, the Russian ruble, which just keeps getting debased day after day, is going to rule the currency waves!

The award for the best bootlicker goes to Andrey Gurulev. In response to reports about biolabs deployed by America in the Arctic, a few hours by water from Murmansk, he said that Russia needs to develop biological weapons capable of destroying the Anglo-Saxons.

“They specifically study the gene pool of each nation and make genetic weapons capable of destroying this particular nation. We need to deal with the Anglo-Saxons: this is our main enemy. And they should know that we are doing the same against them,” said the State Doom’a deputy.

To thwart the spread of Western reality, Russian lawmakers plan to launch an alternative “protected internet” — protected from nettlesome reality that just keeps messing with their communist utopian fantasies. A user can access it only using a personal identification number.

“Only services that have been vetted by the government will be allowed,” said Andrew Svintsov, deputy head of information policy in the State Doom’a. It’s as if information can’t flow along information highways without this man’s special permit.

As the Russian expression goes, “Stop the Earth from spinning, I need to get off.”

Putin’s cult will continue to make efforts to stop the world from spinning but in the end laws of physics will win as they always do. ~ Misha Firer, Quora


Tim Orum:
The [false] narrative will always be more attractive than the truth because the narrative leads to rewards, both social and financial. The truth merely leads to reality and often gets you crucified for your efforts.

Paul Lee Bryan:
Excellent post, Misha. Here’s my morning haiku from your words:

Cult of denial
Treating illness with prison
Why won't my car start

Scott West:
There isn’t a single country in today’s world that could survive and prosper without technology. The western world provided the impetus for today’s industrial and computer age, which China and Russia have emulated with varying degrees of success. A country that doesn’t develop its people’s potential will ultimately lose when its military might fails against more advanced technology. China is following the same path, but with a lot more people.

*
WHY PUTIN IS EAGER TO ANNEX NEW TERRITORIES (Misha Firer)

~ I have an answer to the perplexing question of why Putin is annexing new territories even as the whole civilized world has risen to stop him.

Mentally Putin still lives in the first half of the 20th century, when the pros of adding new territories outweighed the cons.

That’s not the case anymore, and Russians know it too, but nobody in Russia dares to oppose Putin’s stubborn insistence on dragging the country into the past, because he’s not a president in the modern sense of the word.

He’s a monarch.

Ever since Ivan the Third, Grand Prince of Moscow, ended Tatar-Mongol rule and united the territories centered on Moscow, Russians have believed that without a king in the Kremlin, they’d be occupied and divided.

This collective irrational fear of the populace has fed into Putin’s ascendancy from an elected president to an absolute monarch who does whatever the hell he wants.

Russians know that what he’s doing is suicidal madness but are absolutely helpless to put an end to it, because deep down they feel that if the king is gone the Tatar-Mongols will be back at the gates, and that’s exactly the line Medvedev and Putin never stop repeating: take away our protection and you’re all gonna be slaves like 600 years ago. And they choose what they think is the lesser evil. ~ Quora


A gas field in Siberia

*
AUSTRIA-HUNGARY: A PROTO-EUROPEAN UNION

~ Austria-Hungary was basically what the European Union would have been 150 years ago. It was a multinational empire, but it was a confederacy rather than an imperium, as the various states and nations had been amalgamated by marriage and personal unions rather than by subjugation and conquest.

Austria-Hungary was not a democracy, but it was well on its way to becoming one. It was a state under the rule of law, the regime was largely transparent and uncorrupt, and it was well integrated economically. It was a pluralistic and tolerant state, with dozens of languages and religions.

There were basically two opposing forces -- nationalism (which strove to disintegrate the union) and the economy (which strove to integrate it). The most serious cases were Pan-Italianism in Trieste and Tyrol, which sought to join those territories to Italy, and Pan-Slavism, which strove for a Slavic union and the separation of the Slavic lands from Austria-Hungary, Greece, and Ottoman Turkey alike.

(Let us be polite and say the Pan-Italianists succeeded, but Pan-Slavism failed.)

But Austria-Hungary was economically extremely well integrated, and the various territories found synergy in unity. When Austria-Hungary was split into parts after WW1, the newly founded nation states found themselves much poorer than they had been before, as borders, tariffs, and customs were now set up between the states.

The European Union is basically Austria-Hungary 2.0 — a confederacy of nation states integrated together with ties of economy and culture.

Belli gerant alii; tu, felix Austria, nube! (Let the others wage wars; you, happy Austria, marry!)

~ Susanna Viljanen, Quora

Emperor Franz-Joseph, everyone's favorite monarch

*
WHY THE RUSSIAN LOSSES IN WW2 WERE SO HUGE

 
~ Stalin and Russians today talk about the tremendous sacrifice in lives they made in WWII compared to the US and Britain.

But was it a sacrifice? Or did their leaders waste the lives of the men as Putin is doing today?

I think history supports the latter. The reason the Soviets had such great casualties was Stalin and their leaders, not the Germans. Stalin did not hesitate to sacrifice the lives of millions of his men and citizens, even when it was not necessary.

WWII is replete with instances where thousands of Soviet lives were sacrificed when alternatives were available. Stalin either didn’t realize there were other alternatives or didn’t care.

So WWII was not so much a sacrifice by the Soviet Union as a display of Stalin’s total disregard for the lives of his own people. ~ Brent Cooper, Quora

Kit Baker:
By Zhukov’s own commentary, the way through a German minefield was to march troops through it. A stunning example of how little life meant to the Soviet leadership…apparently both then and now.

Rick Schulz:
Most of the experienced officers in the Soviet army were murdered by Stalin in his purges, prior to Germany’s invasion of Russia. As a result, many of the casualties were the result of inexperienced Russian commanders.

Guttorm Gundelag:
Stalin sacrificed non-Russian Soviets so that Latvia, Ukraine, Kazakhstan, Estonia, Lithuania, etc. would become more Russian. By design and intent.

Henry Yeh:
Stalin was Georgian, yet his policies almost wiped out the Georgians on behalf of the Russian empire.

Mitch Cohen:

— with one exception: Putin’s life matters (to Putin, of course)

*
WW3 IS HIGHLY UNLIKELY — OR IS IT?


The Russians are spent in Ukraine. They’ve got no steam left to take on anyone else anytime soon. North Korea is North Korea, enough said. Iran is going nowhere because they know what happened to Iraq and don’t want to be Iraq 3.0. And then we’ve got China. The Chinese are a threat, but they will be contained in a proxy war similar to Ukraine. I see no other plausible candidates for WW3 combatants outside these nations. ~ Mike Chang

Kevin:
One big factor: rationality. There are no rational scenarios. Any rational person or country wouldn't do something dumb enough to start World War 3. However, there are a lot of countries run by the irrational, or countries with many different types of people and tribes and huge power struggles. It's not difficult to see how what starts out as an “innocent” little gang war could spill over into a full-on civil war, and from there into a regional war. Then you get alliances between the countries involved. Those allies start throwing insults and verbal shit at each other. Under-the-table deals and backstabbing or misunderstandings. It doesn't take much for any of that to turn into a big disaster.

Ben Jensen:
Kevin, I agree. What we will probably start seeing are conflicts over influence between the US and China within the next 10 or so years. It has already started in places like Angola, Peru, and Uzbekistan, along with the cyber wars and the more traditional HUMINT operations that have gone on for millennia. The issue will be MAD, as it was with the Soviets, but the Chinese are much more traditional than that, even if they have them in their back pocket.

Jayson Lee:
China doesn’t want WW3. They only want your money and they are getting it.

Stan Miller:
WW3 will probably be started by an adversary we never saw coming.

Michael Corbin:
There WILL be wars going forward but I reckon they’ll be small(ish) and contained as the autocracies haven’t got the wherewithal between them to engage in protracted conflicts.

John Dallas:
World superpowers with nukes won’t be fighting each other; maybe a proxy war, but not head-on. The threat of going nuclear is too great if one side is backed into a corner.

I feel sorry for those countries with no nukes, because they will continuously be used and abused; just look at Ukraine.

Mister Smiley:
History repeats itself. WW1 and WW2 and Napoleon's wars did not start as one big party; it took little transgressions and aggressions, with more people getting invited to the party, until it was a full-fledged war. The big players have not been at each other's throats directly but rather via proxy wars: Korea, Vietnam, Afghanistan, Iraq and Iran, the Middle East, Kosovo, Syria, Afghanistan 2, Iraq, and now Ukraine are all proxy wars. One day someone will feel overconfident and push the world to the brink of war. The world economy is at the edge of disaster: overpopulation, disease, fresh water, food, pollution, fuel, an aging world population, overfishing, global warming, etc. I do not see any hope for humanity, none.

Mike Hoyt:
China is not a threat as long as their economy needs western markets. They are not capable of self-sufficiency: China is poor in resources, and its population is too massive to feed with its own agricultural output. Without international trade, especially with North America and Europe, they would starve.

People forget that there was a massive famine in China not long ago, in the 60s. It was one of the worst famines in human history. It doesn’t take much to tip the country back into grinding poverty and starvation.

Oriana:

I don't foresee WW3 -- unless Putin really goes insane. He already seems desperate, is partly delusional in his thinking, but this is not yet clinical paranoia. Just a crazy old dictator who'll end up "in the dustbin of history," as one Communist slogan proclaimed about "enemies of the people."


Joe:

Putin is just a dictator, and he follows the Mussolini & Hitler playbook. 

Oriana:

Thank goodness he doesn’t have Hitler’s gift for giving rousing speeches. 

But he is most like Hitler, with Lukashenko as Mussolini.

Trump is a travesty of Hitler; I hope he never recovers his former popularity.
 
Joe:
Exactly. It is scary how such an incompetent person can be so popular. It makes you fear the will of the people.

Oriana:
That's the risk one takes with democracy: it's possible that someone elected will decide to destroy democracy. There are also examples of benevolent dictators (or, in the past, absolute monarchs) who were good for their countries. But ours is an era of multiple voices and accountable heads of state -- that's why the dictators look particularly blatant and, in Putin's case, looney.

*
WHAT DID STONEHENGE SOUND LIKE?

~ "We know that the acoustics of places influence how you use them, so understanding the sound of a prehistoric site is an important part of the archaeology," says Trevor Cox, professor and acoustics researcher at the University of Salford in Manchester. 

Despite Stonehenge being the world's best-known and most architecturally sophisticated ancient stone circle, archaeologists still don't know who built it or what it was used for. Some theories suggest it was a burial site, a place of healing, or even a celestial calendar, given that the gaps in the outer stone ring align perfectly with the summer and winter solstices. Yet as the decades pass, this massive monument built on a grassy hill in the Wiltshire countryside remains a mystery.

"We're gradually finding out more and more about it, but some things we just don't think we'll ever be able to find out. We have no way of understanding why people started to build it, and the reason that they continued to work on it may well have changed over the hundreds of years it took to complete," said Susan Martindale, volunteer manager for English Heritage, the charitable trust that manages Stonehenge.

Thanks to Cox's recent studies, however, we now know a fascinating detail about one of the world's most enigmatic sites: it once acted as a giant echo chamber, amplifying sounds made inside the circle to those standing within, but shielding noise from those standing outside the circle. This finding has led some to ponder whether the monument was actually constructed as a ritual site for a small and elite group.

This breakthrough is a decade in the making. While researching "the sonic wonders of the world" 10 years ago, Cox began to ponder whether studying the acoustical properties of Stonehenge might help uncover some of its secrets. "I realized there was a technique in acoustics that had never been applied to prehistoric sites before, and that was acoustic scale modeling," he said. "I'm the first to make a scale model of Stonehenge or any prehistoric stone site.”

Cox set out to create a 1:12 scale replica that he could test inside the university's semi-anechoic chamber, a room that absorbs virtually all sound, thanks to the geometric foam covering every surface except the floor. To create the replica, Cox first received a computer model from English Heritage, allowing him to better understand what Stonehenge looked like at its fullest configuration, around 4,000 years ago.

"If you go down to modern Stonehenge, it's a magnificent site, but a lot of the stones are missing or lying on the floor," he said. "This [configuration] is one particular arrangement. Actually, from about 2000 BCE onwards, it changed a lot for about a millennium."

In total, the process of creating 157 stones through 3D printing and molding techniques took about six months to complete. During that time, Cox said his dining room floor was covered with bits and pieces of the project in a laborious effort to achieve the qualities of real stones at scale.

Once the stones were painted grey and arranged in the correct distribution according to the computer model, the challenges of the testing process began. "Everything's a twelfth of the size in real life, and that means we have to test at 12 times the frequency," he said. "You have to get all the loudspeakers and microphones that work at those frequency ranges and they're not commonly available."
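
(A note on the arithmetic here, which is my own back-of-the-envelope sketch rather than a figure from the article: in acoustic scale modeling, the test frequency scales up by the same factor the geometry scales down, so that the sound wavelengths keep the same proportion to the model stones as real-world sound does to the real ones. For Cox's 1:12 model,

\[ f_{\text{model}} = 12 \times f_{\text{full scale}}, \qquad \text{e.g.}\ 500~\text{Hz} \;\to\; 12 \times 500 = 6000~\text{Hz}, \]

so even an ordinary mid-range tone must be reproduced well into the range where everyday loudspeakers and microphones stop being usable, which is why the equipment was hard to source.)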

Cox has recreated a 1:12 scale replica of how Stonehenge once appeared

To complete each test, Cox and his team placed the loudspeakers around the stones and played the various frequencies they were interested in measuring. The microphones in the room collected data on how the stones affected the sound. Through mathematical processing, Cox was able to create a computer model that simulates the acoustic properties of Stonehenge and can distort voices or music to give a sense of what they would sound like within the circle. The results surprised him: although Stonehenge has no roof or floor, sound bounces between the gaps in the stones and lingers within the space. In acoustics, lingering sound is known as reverberation.

"We know that music is improved by reverberation, so we would imagine if music was played, it would just sound a little bit more powerful and impactful within the circle," he said.
One of the most notable findings from Cox's research is the effect of the stones on the directionality of the voice. 

In an open, natural environment, like the grassy hill Stonehenge is built on, a speaker facing away from a listener would only be understood about one-third of the time. The reflections from the stones at Stonehenge would have amplified the voice by four decibels, bringing the number of sentences understood to 100%.
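
(A hedged aside on what four decibels means, again my arithmetic rather than the article's: the decibel scale is logarithmic, so a gain of 4 dB multiplies the sound power by

\[ 10^{4/10} \approx 2.5, \]

roughly two and a half times the acoustic power of the direct voice alone. Per the study, that modest-sounding gain was enough to lift intelligibility from about one sentence in three to essentially all of them.)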

These results showed that Stonehenge would have allowed people inside the circle to hear each other quite well, while those outside would have been excluded from any ceremonies taking place. Cox's research adds to a growing body of evidence that Stonehenge may have been used for rituals reserved for a select few, with one study even pointing to the possibility of a hedge grown to shield the view from those not participating.

"The research definitely gives more information about how Stonehenge might be used. Even if you turn away, there's always stone reflections to reinforce your voice, so it doesn't really matter if you can't see the person talking. It would be quite good for speech communication," he said. 

Cox likens Stonehenge's acoustical properties to the difference between standing in an empty cinema as opposed to a cathedral. Although those of us used to going in and out of buildings might not find the difference very discernible, Cox notes that the late Neolithic people who built Stonehenge and weren't used to the acoustics of large walls and enclosed spaces would have likely found the effect mesmerizing.

After Cox published his initial findings in 2020, he and his colleagues began to tackle new questions, such as how people inside the circle might change the acoustics. The team recently finished a new set of measurements by placing up to 100 small, wooden figurines around the model. 

"We know that people being inside would have changed the acoustics because we absorb sound," he said. "We want to quantify how it might have changed as more people went inside the circle, because presumably there were people inside the circle during the ceremonies.”

This latest research also takes a closer look at how listeners hear sounds coming from different angles, since whether sound reaches people from the side or front changes how we perceive it. For example, sound reflections from the side improve the quality of music in a concert hall. Once Cox analyses his new set of data, he hopes to publish the findings later this year.

Cox acknowledges that unanswered questions about the real Stonehenge make it difficult for him to draw definitive conclusions from his work with the scale model. Instead, he sees the acoustics research as another tool to find more clues and build a clearer picture of the site's qualities.

"If we think about human ceremonies, they usually involve some form of sound, whether that's music or speaking or chanting. And we know that if they really wanted to be heard, people should have been inside the circle," he said. "Now, the problem with acoustic archaeology is that sound disappears, so we can't ever be certain about what was done there.”

Although Cox's day-to-day work focuses on improving sound for those with hearing loss, he now regularly fields requests to discuss his Stonehenge research.

"One of the things about working on it is you realize how powerful it is to people, how people really connect with it and how people are fascinated by anything to do with Stonehenge," he said. "I think that creates a mystique for the amazing ability of our ancestors to create the most astonishing monuments.” ~

https://www.bbc.com/travel/article/20230601-what-did-stonehenge-sound-like

Mary:

That the ancient builders of Stonehenge understood and used structure to produce certain auditory effects enhances our ideas of how such structures may have been used. Amplifying sound and resonance would be very effective tools for social and religious theater...voices would be especially powerful when shaped in this way. We know the classical Greeks had and used such knowledge in the construction of their theaters, and perhaps those techniques were mastered in much earlier times. It certainly would build a strong sense of mystery and magic in those experiencing drama and rituals in such surroundings.

Oriana:

I suspect that Stonehenge and similar stone rings were to the worshipers what Gothic cathedrals were to the people during the Middle Ages -- experiencing the magnificence was part of worship.

In addition, yes, it's possible that the way the stones were arranged served as a calendar for establishing the exact day of the equinox and/or solstice. 

In any case, it's wonderful that these megalith circles survived, and we can still experience the awe of changing seasons. Long live the axial tilt!

*
ARE YOU AN ORCHID OR A DANDELION PARENT?

~ Ask any parent of young children whether they've ever felt overwhelmed, and the answer will probably be: yes. Even in the most relaxed households there can be days when the noise, mess and chaos seem to spiral out of control, leaving parents exhausted and irritated. Toddlers don't have an off button or a quiet voice.

As normal and common as this feeling is, there's a personality trait that can make everyday family life more overwhelming for some parents than others. Roughly 20-30% of the population are classed as highly sensitive persons (HSPs), according to a 2018 research paper – a trait receiving greater recognition by scientists as well as the general public. This sensitivity can relate to smells, sights or sounds. People who have it may, for example, find it hard to cope with bright lights and loud noise, and can find chaotic situations very stressful. It can also involve a heightened awareness of other people's moods or feelings, and come with a particularly strong sense of empathy.

Add the demands of parenting into the mix, and it surely sounds like a recipe for disaster. On top of the daily sensory and emotional overload, highly sensitive parents may face the additional challenge of caring for children who are also highly sensitive (being highly sensitive is thought to be 47% heritable).

Fortunately, though, the trait also comes with certain advantages, research suggests. For those affected, learning to understand these nuances could help turn parenting into a more joyful and enriching experience, rather than an overwhelming one.

The first step is probably to find out if you are highly sensitive. A team of psychologists from different universities who study sensitivity have developed a free online test for this.  Crucially, being highly sensitive is not a disorder but a personality trait – a certain way of responding to one's environment. In particular, highly sensitive people tend to react especially strongly to sensory stimulation, a characteristic known as sensory processing sensitivity (SPS).

"Generally, sensitive people have heightened perception, they perceive more details," explains Michael Pluess, a developmental psychologist at Queen Mary University of London who specialises in the study of highly sensitive people and co-developed the test. "They will pick up on the moods of other people and have higher empathy. They also process things more deeply so they will pick up more about the environment." That is, they have a tendency to ruminate on what they experience and can be deeply affected by what they see and feel (which explains why I can't watch horror films).

Being highly sensitive involves a brain response to certain events or experiences that is measurably different from that of less sensitive people.

In one study, researchers asked a randomly recruited group of people to take a high-sensitivity test – a set of questionnaires, similar to the online test – then showed them photos of happy and sad people, and monitored their brain activity through fMRI scans. The highly sensitive people in the group, who had scored high in the test, displayed stronger activation of regions of the brain involved in awareness and empathy compared to the less sensitive participants.

Other studies showed similar patterns of people with sensory processing sensitivity displaying especially strong brain activation in regions involved in empathy and reflective thinking.

This tendency to process information deeply can lead to highly sensitive people being easily overstimulated, Pluess adds – and I can somewhat relate to that. I flinch at hearing about the plot of a gruesome movie. Watching it is out of the question. It can feel physically painful to be in a noisy environment with bad acoustics. On London's screechy underground I have to cover my ears – and often wonder why nobody else does it.

This sensitivity to noise – a typical feature of being highly sensitive – can make parenting especially challenging. When my children scream, it can feel as though my brain is imploding. To respond to their needs and comfort them, I have to learn to switch off that sensation. Of course, this is easier when I feel well-rested. Unfortunately, parenting tends to come with disrupted sleep, at least in the early years.

The challenges highly sensitive parents face – including stress and overstimulation in a chaotic environment – can interfere with "high quality parenting", explains Pluess.

Research has shown that in the early stages of parenthood, highly sensitive parents report greater stress and tend to find parenting more difficult than other parents do. However, they also report more attunement with their child – good news which chimes with other findings on highly sensitive people showing especially strong empathy.

Emerging evidence also suggests the added stress highly sensitive parents feel can be short-lived. A pilot study due to be presented at the European Conference on Developmental Psychology in August 2023 found that while highly sensitive parents initially experienced high levels of stress, by the time their babies were nine months old they showed improved parenting styles compared to those who had low sensitivity.

Francesca Lionetti, a researcher at G d'Annunzio University of Chieti-Pescara in Italy, conducted the study and found that there was another factor involved. Negative childhood experiences impacted how a highly sensitive person responded to parenthood.

"If they experienced rejection [from their parents as a child], then they reported more stress and were more intrusive in their parent-child interactions," she explains. 

Lionetti points out that "being a highly sensitive parent does not need to be negative". Being attuned to details can, for example, be a positive factor in parenting. In the study, she found that for sensitive parents, being better attuned to their own respiratory signals was linked to more positive parenting. "That's related to the fact that [highly sensitive people] process more deeply what's going on inside their body," explains Lionetti.

This also tallies with research currently in press, which found that when new teachers were sent to teach in challenging environments, those who were more sensitive experienced a greater drop in wellbeing and felt greater stress than those who scored low on sensitivity. But once they got used to their environment, they fully recovered.

"It seems that sensitive people in the short term are more easily overwhelmed with change," explains Pluess. But when it comes to parenting, he says that highly sensitive parents have the potential to be exceptional. "Their sensitivity helps them to understand their child and respond more quickly and more appropriately to the needs of the child.”

Since parental overwhelm can of course affect anyone, whether highly sensitive or not, some of the coping strategies for highly sensitive people could in fact benefit all parents.

One is being aware of your own reactions, and knowing what makes you feel stressed or relaxed. Self-awareness then allows us to accept the positives as well as the challenges of parenting, Pluess says, and look for ways to feel calm or find spaces of quiet when we feel overwhelmed.

"Sensitive people seem to benefit from social support too," he adds. Research shows that highly sensitive people respond better to mental health programs that promote resilience, while highly sensitive children benefit even more than others from anti-bullying interventions.

Highly sensitive people have been described as "orchids" who find it hard to thrive if the conditions are not right, unlike less sensitive "dandelion-type" people, who can grow in any environment. Of course, everyone needs light and warmth – and an apparent dandelion may just be an orchid-like person who was forced to deny their needs. But the metaphor might help convey that it's ok to try and modify our environment a little, to help us flourish.

Sometimes, parenting can actually help people make their lives more orchid-friendly. At school, my daughter gets regular "brain breaks", where her class sing songs to give themselves a rest. I haven't yet tried getting my entire household to join me for an all-singing brain break when things feel like they're getting out of hand – but maybe I should give it a go.

https://www.bbc.com/future/article/20230525-the-rise-of-highly-sensitive-parents

Oriana:
I always knew that I was easily overwhelmed, and I could hardly imagine a more challenging situation than trying to calm a screaming child. From the outside, parenthood looked like constant stress and battle.

This article said something obvious that no one told me and no other article mentioned: being a highly sensitive parent, you are indeed doomed to be overwhelmed at first — but then you adjust and do fine. The highly sensitive tend to be above-average in intelligence, and they can devise strategies and ways of coping to preserve their sanity.

*
THE FIFTIES REALLY WERE DIFFERENT

~ Most of what we considered “normal life” back then would be considered very odd today.
Contrary to the Beaver Cleaver myth, most Americans were in working-class families. Often large families, some of them multi-generational. Children usually did not have their own bedrooms. The parents had the biggest room, then one for the daughters and one for the sons. There was very little closet space, but that didn’t matter because clothes and shoes were pricey so we didn’t have much. And siblings also “borrowed” clothes from each other. But we each had a “Sunday best” outfit — with hats and gloves — for going to church, going downtown shopping, and other special occasions.

Generally, there was a single bathroom for the entire household, and a strict schedule for the morning regime. We did not shower daily, even if the bathroom was equipped with a shower (most weren’t).

Many of our mothers were in the workforce, especially “pink collar” jobs. Routine household chores were parceled out to the children. Daughters cooked, sewed, did laundry, etc. Sons carried heavy things, took care of the yard (if there was one), etc. Most families did not have access to all the appliances we take for granted today, e.g. self-lighting stoves, free-standing freezers, dishwashers, clothes dryers, microwave ovens, air-conditioners, etc. Few families had more than one car. Many families had no car.

Yes, we had a lot of unsupervised play time. We played mainly on a fairly level part of the street, and had to worry about missing a ball that would then roll downhill and into a sewer.

Playgrounds were few, playground safety non-existent. Few children had bicycles. If they had access to one, it was likely an old hand-me-down and heavy as hell. (That’s what I learned to bike with.)

Birthday parties with outside guests and fancy store-bought cakes were rare. Most birthdays were celebrated (if at all) within the family circle, with a homemade cake. Maybe candles.

Halloween was all-out greed time, with no adult supervision. We planned our trick-or-treating routes in advance, taking note of houses that offered special treats (e.g. candied apples) in limited quantity. We used pillowcases for bags because they would hold more candy so we could go on longer forays before returning to home base and unloading. Then we’d go out again, on a different route. Costumes were less important than the candy grab, and always homemade.

Afterward we would compare our “takes.” My elder brother always scored the most. We usually scored enough apples for a few pies or applesauce. The grown-ups would inspect the loot and gladly take the old-fashioned candies that we disdained. The rest would be stashed in order of preference, from full-size candy bars, to candy bars that tended to appear only at Halloween, to non-chocolate candy and homemade popcorn balls, with “candy corn” at the bottom. We were not allowed to pig out and we had to share. But I had a piece of candy for lunch for the rest of the year.

Parents neither waited on their children nor chauffeured them around. If we wanted to go somewhere (e.g. the library) or to school play rehearsal, we had to figure out how to get there and back. We did a lot of walking (e.g. 3+ miles to the library).

Most of us walked to school, lugging a bag of books to and from school every day. Physical punishment by teachers and principals was normal.

Smoking was common, uncomfortably so. Long before the dangers of “secondhand smoke” were identified, we kids suffered from it. (I remember throwing up in the car from my father’s cigar.) Smoking was allowed almost everywhere, so “inside” tended to smell as bad as “outside” given air quality at the time. Oh, and we had to be careful with hanging the laundry outside: if the wind shifted and the particulates from the mill headed our way, we had to rush to rescue the clothes before they got dirty again.

All in all, it’s definitely better here in the future. ~ Michelle Pilecki, Quora

MAC:
It’s better except for the free-range kid part. I wonder sometimes if it’s more dangerous for children today, or if people are just more paranoid. I never had anyone try anything with me when I was a kid, and I went all over the place by myself.

Michelle Pilecki:
Danger? Consider the total lack of seatbelts in cars, not to mention child-safety car seats. Top-heavy carriages that easily fell over. Flimsy baby strollers. The multitude of contagious diseases for which there was, as yet, no vaccine prevention.

And let’s not forget the specter of total annihilation in a likely-enough atomic war. We who lived in the shadow of Pittsburgh’s (then) mighty steel mills were confident that we were among the primary targets.

Paul Knierim:
Childhood mortality in the developed world was about 5 times higher in 1950 than now (for many reasons, lack of vaccines being an obvious one). And the parents and grandparents of 1950 had grown up in even greater danger than that; many had lost siblings, since families had more kids in the past, too. It probably didn’t seem worth the effort to worry about the fraction-of-a-percent dangers back then, when there was nothing you could do about the big dangers.

But I do think there’s an element of irrational paranoia as well, mainly caused by television. If we watch something happen a thousand times on TV, it becomes hard to feel that it’s not a real threat. I don’t think reading about it in the newspaper or hearing a radio report had the same impact.

Diana Dubrawski:
I vote more dangerous today. In the 50s and 60s, we boomers moved in big packs of kids; more families (at least in the suburbs) had someone home during the day—neighborhoods weren’t deserted from 9 - 5 as they are today; in the 50s and 60s, there was no internet with vast networks of predators encouraging one another and sharing tips on how to molest children.

Children today are fewer and engage in structured activities after school [read: supervised]—there is much less ‘stranger danger’ because kids are supervised, not because there are fewer predators.

Donald Daly:
We didn't own a car; we walked, took the bus, train, or a cab. We would take a bus to another town, have dinner, and walk about 6 miles home after dark.

Taking ice from the back of the milk truck. Saturday afternoon movies that lasted 4 hours: 2 movies, cartoons, newsreels, ushers in uniforms to take you to your seats.

Busy downtowns, on a Saturday, everyone was downtown, shopping.

No chain restaurants. You might have an owner who had 2-3 restaurants, but each would be different.

Black-and-white TV: small screen, large TV body. A remote? What's a remote?

Circuses coming to town with a circus parade.

Carnivals in parks.

No helicopter parents.

Rode your bike, walked to school.

Good penny candy. Candy bars that had not shrunk in size.

Punch-out books: you pushed out the perforated pieces for ships, castles, etc.

Using your imagination, large boxes and crates could be cars, planes, ships, rockets.

Cap guns, pop guns, BB guns. Buying a Remington by cutting out the coupon, filling in the info, and sending in your money.

Beat cops. A cop walked a neighborhood; he knew everyone and everyone knew him. If he caught you doing something he'd kick your ass, then drag you home to dad, who'd kick it again.

Learning cursive writing, going to a library.

Gingerbread and squibs.

Steve Henry:
Polio was terrifying. All the swimming pools were closed, and my parents were scared I would get it. I've had the vaccine administered in 3 different ways: first the regular-style injection, second the gun injector, third the sugar cube. I have no idea how many total I've had. And there was a girl in my classes who had had polio; it did a number on her.

Sandra McDonald:
It has long been my feeling that people born after, say, 1995 take a smoke-free world largely for granted, and don’t fully realize the huge difference in quality of life that existed for kids growing up after that date, vs. in the years and especially decades before. Smoking used to be (as you say) just about everywhere, it was obnoxious and offensive, and adults inflicted it on children and other adults without so much as a moment’s thought.

Lonnie Smith:
I grew up in the 60’s, and my memory of my house — and every other house or apartment had the same element — is of a Blue Haze that existed everywhere. It started around 4 ft and went all the way to the ceiling. I noticed it because as a kid I was UNDER it.

It was a reverse sea of smoke that UNDULATED above you when you were little.

And today, young people freak at the mere MENTION of second hand smoke.

Back then, cigarette smoke WAS the Atmosphere. EVERYWHERE!

Djofraleigh Anderson:
Born in ’46, I remember we did not get a TV until 1953, and getting one was a high priority, even before getting a car. We stood on the sidewalks and watched TV in store windows.

Music was AM only; the car had no radio, nothing stereo. Cars and houses had no A/C, just windows, and no seat belts or air bags, but we rarely got over 35 mph.

I never had a birthday party, never thought of one, but had cake and candles and made a wish.

Christmas was big, crazy big: carolers, parades, decorations that went four blocks through town, and we always had a real tree, always.

Blue laws had everything closed on Sunday except a couple of truck stops and the picture show. Nothing was open till after church hours — Noon.

Doctors made house calls for people thought too sick to travel. We had a $3 doc and a $5 one: one old, one young. The $3 doc gave me penicillin right through my coat and shirt on the porch, and big needles were simply given an alcohol wipe before the next shot. The hospital was a dump, tight little wards. Polio was epidemic when I was starting school; my cousin got it, and she was in an iron lung 50 miles away. Everyone got the mumps and measles, and I had the whooping cough and my tonsils out before starting school. No one had braces on their teeth. I went to a dentist the first time when I was a teen.

Shoes were expensive and resoled often, and mine had heel taps to keep from wearing them out front and rear. My coat had elbow patches, sometimes I had knee patches, and my blue jeans were rolled up for a cuff and let down as I grew. Every boy in class had a barlow knife in his pocket, and most men carried cigarette lighters too, even if they did not smoke — handy tools to carry about. I had a teacher who smoked in class. Smoking was done everywhere, mostly by men. Cigars abounded. Cars had ashtrays, and all coffee tables did, too.

In town, the stores mostly closed at noon on Wednesdays & Saturdays. The banks were not open all day, either, but took an hour or two mid-day to do the books.

We did not have a one-way street in a town of 5,000. We wanted what we saw others with on TV. If Lucy had a kitchen stool, we got one like hers. Unmarried people did not live together and were not rented to, except in the few slummy places, and even there they pretended to be married.

There were more fights then, but less deadly ones, and spankings at school happened at least weekly. I loved the way I was raised. Things got better every year until my dad died when I was 14. I still miss him, 70 years later.

Peggy Ruizisasi:
Yep, that was my 1950’s childhood to a “T”. I might add the fun experience of needing to run outside and snatch the clothes off the clothesline if it began to rain, or if a flock of migrating birds settled in the nearby trees. All pets were indoor-outdoor, and I never heard of a litter box until I was grown. I got one pair of sneakers a year, and wore them until they fell apart. Went barefoot all summer, except in church and in town (hot asphalt!). Had one “church dress.”

Also had the joy of suffering through pretty much every childhood illness, including both kinds of measles and chickenpox, because vaccines were not yet available. My parents (who had lost siblings to epidemics) would have walked barefoot over broken glass to get me vaccinated, and made sure I got all the vaccines that were available, which was pretty much just smallpox and polio.

Our t.v. was black and white; there were very few kids’ shows (Mickey Mouse Club, Wonderful World of Disney, Merry Melodies cartoons, and some old Flash Gordon and Superman reruns), and when the channel needed changing (to one of the 4 or 5 available), I was the remote control — as in, “get up and change the channel!” When we wanted to access more remote stations, we had to go outside, shinny up the t.v. antenna to the roof, and manually twist the antenna until dad shouted up that the new station was coming in with no static.

But we did get to roam freely, come home only when it got dark in the summer, and our main hazards were falling in the creek or stepping on a rattlesnake. I carried a pocket knife for most of my childhood, and got a lot of use out of it. But we were bored a lot — no computer, no electronic gadgets, no VCR’s, going to an indoor movie or a drive-in theater once in a blue moon, and not allowed to use the family phone except in an emergency — anyway, it was a “party line” shared by about 50 neighbors, so anybody could listen in on your calls!

Most of my entertainment was reading books — though I had to beg to be driven to the town library to check them out once a week. Oh, and comic books, though I had to buy those with my own money — which I earned by collecting used soda bottles from ditches and roadsides and returning them to the store 3 miles away (on foot dragging a red wagon) for a penny a bottle. I got no allowance. Took 500 bottles to earn 5 bucks, but it taught me hustle. On balance, things are much more convenient and entertaining today.

Ryan Mattes:
Hell, I went to high school in the late 80’s and we still had a smoking section for lunch (outside, for the students). The teachers lounge was dense with smoke that would billow out when the door opened, and teachers bumming smokes from the students wasn't unusual.

Cynthia Bowers:

Middle class meant a one-car family

Two or three bedroom house, no AC. In my youth few moms worked. You dressed up to go shopping and to church. Vacation was visiting relatives. Every adult smoked and most had a couple highballs every evening

Children behaved in public or “the hand” snatched them outside for some correction

No one apparently had peanut allergies or gluten or lactose intolerance

No one would ever discuss politics socially

Just serving in the military didn’t make you a hero

Women wore hats as did men… real hats. Ladies leave their hats on indoors, men ALWAYS remove theirs

John C. Anderson:
People would have a front room, with real nice furniture covered in plastic, reserved for visits by whoever, in what I suppose you would call the living room.

But they had less expensive, comfortable furniture in the room where they watched t.v. and did other family activities, where they really “lived”.

I found it very strange, almost pagan.

As if the living room was “preserved”, like something out of an Egyptian pyramid.

Kinda funny and phony.

Oriana:
Funny how the expression the “good old days” is still alive. Yes, there were some positives, but mostly, it seems, there was the constant smoking; the lack of many safety features we now take for granted; more disease (including the dreaded polio); less sensitivity to children’s needs and less respect for them (there were so many of us!); girdles and stockings for girls past a certain age; more rigid gender rules; chivalry (e.g. opening the door for a girl) developing at some point in mid-teens; adults who were polite to each other but felt free to yell at a child; being force-fed religion and threatened with eternal damnation (you'll fry in hell). On the whole, I’d call those the “bad old days.”

One good feature, it seems to me, was that people were more available to one another. They visited in person, sometimes unannounced (not everyone had a phone), and the joy of greeting a visitor seemed genuine.

For all my antipathy toward religion, I loved the sound of church bells, especially in the countryside. I especially loved the Angelus at noon and the Vespers bell in the evening.

Overall: It was the best of times, it was the worst of times. That’s always true, isn’t it, aside from extreme situations (e.g. the Black Plague; the world wars; the occasional natural disasters).

Those were the good times for extraverts, with crowds of people, a swarm of kids, and noise everywhere. If you wanted privacy and quiet, you had to try to find a refuge. In spite of it all, somehow I managed to read a lot of novels, and I was by no means an exception: at least half of the class, or maybe even two-thirds, consisted of avid fiction readers. Many years later, when I heard the expression, “Books are my television,” I saw how totally this applied to us. Yet even with TV making inroads, “What are you reading?” was a reliable conversation starter. And this didn’t include the assigned reading, which was already considerable.

Nevertheless, on the whole I am very glad not to be living at any point in the past. Seeing the progress in technology and medicine, I mostly envy the young — they are going to witness even more amazing developments. 

Mary:

The fifties were indeed a different world. Childhood was very different, with much more independence and much less concern for safety. We were not always "supervised" the way children are today. Play was not all scheduled and structured and organized; to a great extent we invented our own play with whatever came to hand, inventing and retelling stories we knew or heard from many sources (books, films, tv), and playing the traditional games: races, hopscotch, tug of war, climbing whatever was available (trees, fences, or walls). We were outside all day, getting dirty, walking everywhere we wanted to go, usually in groups of others not arranged or chosen by our parents, but discovered in school or the neighborhood in an open and unorganized way.

There was both more freedom and more personal responsibility. For instance, I can't remember a single instance of anyone having their homework overseen or assisted by the grown-ups. It was your job to do yourself. We may all have had different styles, getting it all done quickly or, like my one sister, stretching it out over hours from dinner to bedtime. But it was not mom or dad's responsibility; beyond a question about whether you were done with it, they weren't involved.

We didn't have bikes, so we walked everywhere, or took the streetcar. There wasn't such a sense of danger everywhere, of threats you had to be warned about and protected from. No seatbelts or carseats...our favorite was to ride in the back of the station wagon, loose, not held in place by anything at all.

I don't feel any desire to go back to those times, however. Corporal punishment was real and commonplace, from mild to truly abusive. You saw it, knew it and experienced it, but didn't think it was unusual. Everybody, pretty much, got a beating, occasional or regular. That is no longer acceptable...We have moved toward a more humane and less abusive standard for the usual historical victims — women and children.

Sometimes when I think about then and now it seems we had less stuff but more freedom and more human connection. Now there's crazy amounts of stuff, but a much lonelier, divided existence.


*
COMPARING AUSTRALIA AND THE US

I’ve lived 25 years off and on in Australia and 30 years in the USA. I love both countries and their people, and both societies are increasingly diverse.

Now for the question of where life is better.

Both countries have been transformed over the past 30–40 years: in Australia, the total demise of industrial production; in the USA, vastly reduced industrial output. Both were once relatively self-sufficient, protected (especially Australia) by tariffs, all of which disappeared in the 90’s, leading to just about zero domestic production.

Why is that important? Because both countries, especially Australia, had to reinvent themselves, and that affected every single person in each country.

What happened?

In 2 words: RESOURCE BOOM. Without it, Australia would have collapsed. Without resource extraction Australia could not pay for anything. There are agricultural exports including wheat and wine and meat, but these are not enough to sustain the country, let alone make it grow.

It is absolutely critical to understand why Australians live well (and we live very, very well): it is because of all of the iron ore and other resources we sell, mostly to China.


Every person benefits from the new schools, new roads, new skylines — this is Perth, a nothing place on the end of the earth 30 years ago.

And this was Perth in 1980


All mining money, folks, where an electrician aged 23 can get a job for $200k a year working every other week up in the mines, with air-conditioned quarters, free food, and transport.

Now the USA: the 1967 race riots, the beginning of the end.

Of course it is not all like that in Detroit. I am just giving an example to show how the ‘average’ person is faring in either country.

Now for the details, which clearly favor Australia for an average middle-class person who has been here for a long time (housing prices), versus the upside of making big money, which favors the USA.

AUSTRALIA

Minimum wage is over $25 per hour. I don’t know anyone over the age of 18 who makes less than $30 per hour. Jobs that pay $25 per hour in the USA or Canada, like drilling for cabling, pay $40+ per hour in Australia.

Australia is made for the middle class, not upper, not lower. If you are middle class you get all of the benefits. Good public schools, also school choice. Strong sports programs, though you have to pay something for after-school. You do not get free lunches; you pay. Schools are solid all over the country, as funding does not work on the ‘millage’ system of the USA. So even a poor district gets the same per-student contribution, plus all kinds of infrastructure money. Much better for the middle class.

Taxes — the USA wins. You have to pay taxes in Australia: income above AU$45k (about US$30k) is taxed at a marginal rate of 32.5% (so someone on AU$60k pays 32.5 cents on each dollar above AU$45k, not on their whole income). There is no way out; you pay, and if you go ‘cash’ economy and they catch you, you are in big trouble.

Similarly, there are no consumption taxes other than the 10% GST. So your ‘user’ taxes are much lower than in the USA, and no state can cheat other states. A big plus if you are middle class.

Health care — again, the middle and lower classes win. You pay for Medicare in both countries, but in Australia you get a Medicare card and you can use it all the time. Health insurance in Australia is meant to augment, not substitute. There is no such thing as becoming eligible for Medicare — everyone is eligible. However, the USA at the top end is way, way better, with new equipment. If you are rich, you want to be in the USA for health care. If you are middle class, Australia: prescription costs are regulated and nationally negotiated, so they are much cheaper than in the USA, and no dramas.

You want to make money? Both countries are good, but if you are middle class, Australia is the place to be. Right now, with the mining boom, it is nothing unusual for 23-year-old electricians to make AU$200k (US$140k).

Both countries have great university systems; Australia just has fewer, but very large, universities. You get help from the government, but you still have student loans to pay back.

You can talk about food and food prices, but you can find cheap, good food in both countries, so that is not a difference. The USA has more variety of cars. Both countries have lousy public transit, especially older trains (Melbourne being the big exception).

Regardless of how you want to cut it, what Australia wins on is coastline: while the USA has 2 massive coastlines on the east and west, Australia is an island (or islands), meaning we’ve got nothing but coastline.

Sure you can’t live in the outback, but who cares when you have this:

Melbourne, AU


Yeh, I would say the quality of life is a tad better for the average person in Australia.

She’ll be right mate. ~ Henry Greenfield, Quora

*
WHY GERMANY GOT AHEAD ECONOMICALLY AND MILITARILY IN SPITE OF PAYING WWI REPARATIONS

1. German industry never suffered from combat on German soil.

2. The Germans hauled off most of Belgian industry and 50% of heavy French industry.

3. A direct result of 1 & 2 was that the German industry of 1918 was substantially stronger than that of 1914.

4. Hyperinflation. Most German debt was domestic, while most British and French debt was foreign (mainly owed to the US). Britain and France had to pay off their debt to the US in hard currency. The hyperinflation in Germany allowed the government to pay off its domestic debt almost immediately.

5. A direct result of hyperinflation was that German exports soared, while British and French exports stagnated because of the high taxes needed to pay off the foreign debt.

6. By 1924, the German economy was stronger than the British and French economies combined, because of points 1 through 5.

7. The war reparations didn’t cover the cost of repairing the damage the Germans had done. And since they got to keep the industry they stole, in the long run Germany made a profit.

8. By 1933 (negotiated in 1932), the Germans no longer had to pay off their war reparations; there was a moratorium on payments. So Hitler started out with a budget surplus.

9. The Nazis made Germany self-sufficient when it came to food. The famine of World War One left the German people willing to make great sacrifices just to make sure they would never go hungry again. So the food situation in 1939–1945 was a lot better than it had been in 1914–1918 (although the quality was lower).

10. Germany got to reconstitute its armed forces starting in 1935/1936. This meant new equipment, new uniforms, new (and better) training, new (and better) tactics, etc. The other countries had to make do with what they had: older equipment, older uniforms, poorer training, older ideas, etc.

11. When it comes to pay, Germany was always more left-wing than the rest of Europe. The Nazis amplified this, and good pay and a better standard of living were a cornerstone of Nazi doctrine: hence better pay for the troops (and farmers, factory workers, etc.).

It should also be noted that by 1938 Germany was utterly broke … a direct result of all that new equipment and better pay. ~ Tierry Etienne Joseph Ratty

Mike Tolson:
Mostly all accurate, except for food. Germany wasn't exactly self-sufficient. It had enough for a short war. After that it depended heavily on foodstuffs plundered from conquered countries. Hitler was well aware of the great hunger experienced by Germans during WWI. He made sure that would not happen again, mostly by taking whatever was needed from France, the low countries, Poland, and the USSR. (And others.) Even if that meant great hunger and, ultimately, starvation in those nations. So much of the German war effort was supported by the theft of Jewish property and the plunder of Europe. Many people don't really understand this.

Steven Costa:
The one thing that enraged Soviet Red Army soldiers invading Nazi Germany was the high standard of living that Germans enjoyed. Well-kept houses with running water and electricity were unknown in the Soviet Union. A lot of Soviet peasants lived in abject poverty. They were aghast and enraged that such a prosperous nation would attack and invade them.

Triangle Whip:
Germany was poor until Hitler came to power. He got rid of the banking currency and made his own currency based on production. So people were able to produce everything — good food, good machines, and cool uniforms. Now we are in the Weimar Republic, and everything has gone up…

Bruno Spickermann:
They were supported by big industry, which liked the idea of being anti-communist, anti-union, law-and-order, and maybe anti-Jewish too.

If you lived in Germany at that time you would not have felt so rich. After war reparations, inflation, unemployment, and a strong communist party, the Nazis gave people hope. It looked pretty good at first but soon turned out to be a very misleading dictatorship and a preparation for war.

Joe M:

By July 1934, the Nazi party was the only party in Germany, and within four months, Germany was no longer a democracy. The previous year, 1933, the Nazis proclaimed May first as National Labor Day, but within three weeks, Hitler ended collective bargaining. The government started to regulate contracts. The Nazis stated that only employers would decide wages, hours, and whether a worker could leave for another job. The Nazis destroyed any gains that labor had made in their favor since the end of WW1. Not only did the Nazis take away collective bargaining, they removed all the personal freedom of the working class.

In exchange for their loss of freedom, workers received a Spartan diet (guns before butter) and increased hours for a low wage. As for the farmers, William Shirer writes that the Farm Law of 1933 pushed farmers and farm workers back into feudal days. All farms up to 308 acres which provided a living were declared hereditary estates subject to the ancient laws of entailment. The Nazis prohibited farms from being sold, divided, mortgaged, or foreclosed for debts. The farm worker was bound to the soil as irrevocably as the serfs of feudal times. The Reich Food Estate regulated every aspect of his life and work.

Nevertheless, most foreign dignitaries and reporters repeated the Nazi propaganda that German workers were prosperous and happy. Only a few journalists, like William Shirer, the author of The Rise and Fall of the Third Reich, reported accurately on conditions in Germany. After the war, Nazi records and the Nuremberg trials vindicated the reporters and politicians who had given a different description of Germany before WWII. Yet even today the Nazi propaganda that labor prospered under Hitler is believed by many, despite historians and journalists documenting the circumstances in which the German people lived under Hitler.

*
DECLINE IN MARRIAGE MIGHT BE A GOOD THING

One of the curious things about marriage is the role it’s played in embedding commonly held views about normality. Married people are generally considered normal people. As such, they have possessed inordinate power to dictate the terms of normality in a way that single people rarely can.

And yet marriage, clearly, isn’t for everyone. Plenty of people have no desire to do it. Plenty of others have done it and haven’t liked it. The stats only corroborate this. Fewer people over the years have been getting married, while the stresses and strains of lockdown in 2020 (along with the temporary closure of venues) saw divorces in England and Wales overtake weddings for the first time.

Not everyone, however, is taking marriage’s declining popularity lying down. At the recent National Conservatism conference, delegates were promised a national revival founded on “faith, family and flag”. Likewise, China has just proposed a list of measures to actively encourage its young women to marry and have children (and not just one child any more: three, ideally). This is a national policy, but it’s one with global benefits: to stem the threat of economic stagnation, growing the population is supposed to ensure the continuity of a huge, and therefore cheap, labor force.

In other words, unless more Chinese women have more children, we’ll all have to pay more for our merch – with matrimony here (never mind that not everyone who marries has children and not everyone who has children gets married) still framed by national governments as the gateway to maternity first of all. Other countries may well follow China’s lead. In Japan, where they’ve just recorded a seventh consecutive year of declining birthrates, and fewer couplings, the government is accused of failing to act quickly enough to mitigate the effects of a rapidly aging population.

Meanwhile, the US has its own history of turning marriage into a patriotic act, sometimes on economic grounds, at other times on racial ones. Paul Popenoe, for instance, founder of the American Institute of Family Relations and a big fan of Hitler and “applied eugenics”, opened his marriage counseling clinic in 1930 with the stated aim of saving the marriages of the “biologically superior” so as to save the race.

Sheesh. None of it sounds very romantic. Little wonder Chinese women, even when lured by financial incentives, don’t especially fancy saving global capitalism by means of marriage. Looking at the various ends to which marriage has been recruited, it’s tempting to conclude that marriage itself is nothing but a front for powerful interests that largely contradict those of marrying people themselves. Should we take it, then, that that’s all marriage was ever really about?

Marriage may not be for everyone, but, as a currently married person, I’ve been trying to make it suit me. That doesn’t mean I’ve found it easy (I haven’t), although I have found it gets easier over time. Still, I do occasionally wonder if it is marriage’s very success as an institution that has proven injurious to the lived experience of so many marriages. For if the norms marriage has helped to reproduce have been particularly pernicious for single people, they have not been too kind to couples either. As any psychoanalyst could tell you, when it comes to relationships, the invocation of the ideal tends to summon its own shadow. This is no less true of the spousal relation than it is of the parental one, where the ideal that none of us can live up to has the effect, very often, of inspiring cruel and abusive behavior under that idealized cover.

So, could marriage’s fall from favor turn out to be a good thing for people who marry? When marriage ceases to be the cultural norm, new marriages – or new ways of being married – might be possible. Shorn of patriarchal expectations, marrying people could find they’re better able to talk about what it is they really want out of marriage, for example. And non-marrying people should be better able to unshackle themselves from the sense that everyone from their mother to HMRC disapproves of their relationship status. 

The marriage contract, when no longer functioning as a fig leaf for the wider social contract, could become the testing ground for other possibilities – such as different ideas about how to inhabit a shared planet.

After all, if marriage is currently being made essential to the international market of cheap goods across the world, that isn’t very good for the world. The economic system being “saved” here is one that exploits marriage’s lifelong promise and commitment to create a culture that’s short-termist in every other way. Taken at its own word, marriage’s time signature is incompatible with many of the systems to have hitched their rides to it. Whereas, if you de-normalize marriage, something more experimental emerges.

Arguably, this has been going on since at least the late 18th century. The rise of the love match, if you think about it even for a moment, is nothing if not radical. You meet someone – whether through family or friends, or on holiday, or on a bus, or online – and hey, before you know it, you’re promising to spend the rest of your lives together. How on earth is that normal? It’s hardly surprising so many marriages don’t work. What’s remarkable is that so many do – some of them even happily. How?

Tolstoy famously found happy families alike and only the miserable ones interesting. And it does seem that he and Sofia Tolstoy had perfected the art of being deeply unhappy together in their own way. Yet unhappy marriages, as I see it, are more likely to give their game away than the happy ones, which always retain an aura of mystery.  

What I usually suspect of the happily married couple is that they’ve donned a cover of marital normality as a license to withdraw from the very world that urged them into it. And I also suspect of the happy couple that, when they do step into their marriage, it isn’t merely to reproduce that world – it’s to re-imagine it. However unfashionable it may be, therefore, the long term of marriage is likely to remain forever topical because marriage is one possible model (one – not the only one) for the sort of creativity and tenacious solidarity that’s surely required if we’re to face our unknown future together – for richer or for poorer, for better or for worse. ~

https://www.theguardian.com/commentisfree/2023/jun/05/decline-marriage-married-wedded-patriotic

*
FEWER PEOPLE ARE GETTING MARRIED: IT SHOULD BE A CAUSE FOR CELEBRATION

Nanny knows worst, English literature tells us, when she meddles in matters of the heart. If Juliet’s nurse and Wuthering Heights’ Nelly hadn’t freelanced quite so enthusiastically as relationship therapists, lives might have been saved and indeed lived happily ever after. Had Mrs Danvers thought to take a few deep breaths and detach herself from her late charge, Rebecca, some prime Cornish real estate might still be standing too.

Marriage is certainly in long decline in England and Wales. Marriage rates are falling fast among young people in particular: 1.2 million more 25- to 35-year-olds were unmarried in 2021 than in 2011. Expect laments from those who consider the institution a “building block of society”. What is wrong with young people that they are not getting married?

But people’s romantic choices are their own business. These trends are not evidence of a crisis, but of revealed preferences. We no longer cattle-prod people into the institution and bar the doors. Social and financial pressures on singles have lessened. Perhaps this means fewer marriages. But perhaps that isn’t a problem.

Here are some measures that tend to succeed in boosting marriage. First, making divorce very hard to get. In the halcyon days of marriage, it was only men who could get divorced and only rich ones who could afford it. Later, women had to rigorously “prove” adultery if they wanted to end marriage on that basis. Until the late 1990s, the contributions of “homemakers” were not recognized by the divorce courts and stay-at-home spouses did not get much of a payout. This mostly disadvantaged women, incentivizing them to put up with bad marriages. And it was only last year that “no fault divorce” was written into law, meaning couples could end their marriages without undue conflict. This, importantly, meant domestic abuse victims could walk away faster, having previously faced a two-year wait or the prospect of riling their already dangerous spouse with accusations.

Another effective way to bolster marriage is to heavily stigmatize single people and their children. The prospect of social disdain is highly motivating – vicious anti-spinster rhetoric and the social exclusion of “bastards” once propelled many a couple up the aisle.

Tax breaks and cash incentives can work too. In Hungary, married couples who promise to have three children can now get a £23,200 allowance towards a house – a scheme that has reversed the country’s downwards marriage trend. But is the prospect of cash the best foundation for a relationship? One would hope that these incentivized couples were really marrying for other reasons. But if so why give them money? Single people, especially parents, already suffer considerable financial penalties. For those who are single for good or unsolvable reasons, more discrimination does not help.

Marriage boosters tend to make the assumption that the institution is an unalloyed social good. But is it? One body of evidence suggests that married people live longer, healthier lives than the rest, another that this only applies to happy partnerships. Bad marriages can be seriously detrimental: frequent conflict, studies suggest, harms your health in all sorts of ways. And many still don’t walk away soon enough from bad relationships. One woman in four experiences domestic abuse in her lifetime. Money worries and the costs of divorce are still trapping people in loveless marriages.

True, there is some evidence that children benefit from marriage, but this becomes far muddier when the marriage is a bad one. It is certainly clear that children benefit from more parenting and more money, but not that trapping two people in a relationship is the only way for them to get it. If supporting children is the aim, we should perhaps think about increasing cohabitation rights or lessening parental stress by increasing access to childcare. Marriage is not the only answer.

And is marriage really a building block of society? Well, perhaps no longer. A few generations ago, marriage did have a close relationship with the community around it: a couple were defined by their wider social connections and extended family. But these days, married couples are viewed as primarily self-sufficient and autonomous. It is single people who tend to be far more connected to their communities: on average, they are more politically engaged, have more friends and provide more care to their siblings, parents and neighbors.

Marriage is declining – but it is not a crisis and certainly not one that calls for bustling intervention. Soaring levels of cohabitation among the young suggest instead that people are taking lifelong commitment seriously enough to give it a trial run first. There may be fewer marriages. But fewer bad ones too. ~

https://www.theguardian.com/commentisfree/2023/feb/25/fewer-people-marrying-cause-for-celebration-not-state-intervention

Oriana:

One thing I rarely see discussed is the fact that one partner tends to be the dominant one. This creates some degree of stress for the non-dominant partner. The unhappy partner usually has the option of leaving the marriage, albeit at a heavy price. When this happens, it's not unusual for the dominant partner to be genuinely puzzled: But I thought we had a happy relationship! I didn't drink, I didn't sleep around . . .

No, but your very presence was oppressive. This is probably better left unsaid . . .

Another issue, more easily resolved, is the division of labor in a shared household. She cooks; he mows the lawn. She makes sure the clothes are clean; he makes sure the family vehicles are well maintained. Or the other way: some men are great cooks; some women can do simple car repairs. The point is having clarity about who does what, and doing it without resentment. If resentment creeps in, start discussing the problem. Don't let it fester. 

Marriage is difficult; it takes learning, often the hard way. It requires compromise, tolerance, and much forgiveness. It's much easier to live alone and be your own boss. But some people can't afford the ever-rising rent or a steep mortgage. And even those who like living alone experience occasional loneliness and/or wish someone would help them with household chores. And some people do want children, and know that children do better in two-parent households. There are simply no easy answers, and every couple must work out their own arrangement. 

Sometimes this means two separate households. Sometimes it's the grandparents who raise the kids. Whatever works. 

The good thing is that we as a society have stopped idealizing marriage at any price. There seems to be more honest discussion these days, and more egalitarianism. And definitely more options, even if couples do complain about being pressured to have children because their parents can hardly wait for the grandbabies. (I remember an older woman saying that the only good thing about marriage was that ultimately you got to have grandchildren.)

Nothing is perfect, and everything changes. The institution of marriage has lasted millennia. It's a very flawed institution, but nothing better has been found, especially when it comes to raising children.


*
ANTI-DOPAMINE PARENTING

~ Back when my daughter was a toddler, I would make a joke about my phone: "It's a drug for her," I'd say to my husband. "You can't even show it to her without causing a tantrum."

She had the same reaction to cupcakes and ice cream at birthday parties. And as she grew older, another craving set in: cartoons on my computer.

Every night, when it was time to turn off the screen and get ready for bed, I would hear an endless stream of "But Mamas." "But Mama, just five more minutes. But Mama, after this one show ... but Mama ... but Mama ... but Mama.”

Given these intense reactions to screens and sweets, I assumed that my daughter loves them. Like, really loves them. I assumed that they brought her immense joy and pleasure. And thus, I felt really guilty about taking these pleasures away from her. (To be honest, I feel the same way about my own "addictions," like checking social media and email more than a hundred times a day. I do that because they give me pleasure, right?)

But what if those assumptions are wrong? What if my daughter's reactions aren't a sign of loving the activity or the food? And what if, in fact, over time she may even come to dislike these activities despite her pleas to continue?

In the past few years, neuroscientists have started to better understand what's going on in kids' brains (and adult brains, too) while they're streaming cartoons, playing video games, scrolling through social media, and eating rich, sugar-laden foods. And that understanding offers powerful insights into how parents can better manage and limit these activities. Personally, I call the strategy "anti-dopamine parenting" because the ideas come from learning how to counter a tiny, powerful molecule that's essential to nearly everything we do.

Turns out, smartphones and sugary foods do have something in common with drugs: They trigger surges of a neurotransmitter deep inside your brain called dopamine. Although drugs cause much bigger spikes of dopamine than, say, social media or an ice cream cone, these smaller spikes still influence our behavior, especially in the long run. They shape our habits, our diets, our mental health and how we spend our free time. They can also cause much conflict between parents and children.

Dopamine is a part of an ancient neural pathway that's critical for keeping us alive. "These mechanisms evolved in our brain to draw us to things that are essential to our survival. So water, safety, social interactions, sex, food," says neuroscientist Anne-Noël Samaha at the University of Montreal.

For decades, scientists thought dopamine drew us to these vital needs by providing us with something that's not as critical: pleasure.

"There's this idea, especially in the popular media, that dopamine increases pleasure. That, when dopamine levels increase, you feel the sensation of 'liking' whatever you're doing and savoring this pleasure," Samaha says. Pop psychology has dubbed dopamine the "molecule of happiness.”

But over the past decade, research indicates dopamine does not make you feel happy. "In fact, there's a lot of data to refute the idea that dopamine is mediating pleasure," says Samaha.

Instead, studies now show that dopamine primarily generates another feeling: desire. 

"Dopamine makes you want things," Samaha says. A surge of dopamine in your brain makes you seek out something, she explains. Or continue doing what you're doing. It's all about motivation.

And it goes even further: Dopamine tells your brain to pay particular attention to whatever triggers the surge.

It's alerting you to something important, Samaha says. "So you should stay here, close to this thing, because there's something here for you to learn. That's what dopamine does.”

And here's the surprising part: You might not even like the activity that triggers the dopamine surge. It might not be pleasurable. "That's relatively irrelevant to dopamine," Samaha says.

In fact, studies show that over time, people can end up not liking the activities that trigger big surges in dopamine. "If you talk to people who spend a lot of time shopping online or, going through social media, they don't necessarily feel good after doing it," Samaha says. "In fact, there's a lot of evidence that it's quite the opposite, that you end up feeling worse after than before.”

A HIJACKED HIGHWAY

What does this all mean for your kids? Say my daughter, who's now 7 years old, is watching cartoons after dinner. While she's staring into the technicolor images, her brain experiences spikes in dopamine, over and over again. Those spikes keep her watching (even if she's actually really tired and wants to go to bed).

Then I come into the room and say, "Time's up, Rosy. Close the app and get ready for bed." And although I'm ready for Rosy to quit watching, her brain isn't. It's telling her the opposite.

"The dopamine levels are still high," Samaha explains. "And what does dopamine do? It tells you something important is happening, and there's a need somewhere that you have to answer.”

And what am I doing? I'm preventing her from fulfilling this need, which her brain may elevate as being critical to her survival. In other words, a neural pathway made to ensure humans go seek out water when they're thirsty is now being used to keep my 7-year-old watching yet another episode of a cartoon.

Not finishing this "critical" task can be incredibly frustrating for a kid, Samaha says, and "an agitation arises." The child may feel irritated, restless, possibly enraged.

Because the spike in dopamine holds a child's attention so strongly, parents are setting themselves up for a fight when they try to get them to do any other activity that triggers smaller spikes, such as helping parents clean up after dinner, finishing homework or playing outside.

"So I tell parents, 'It's not you versus your child, but rather it's you versus a hijacked neural pathway. It's the dopamine you're fighting. And that's not a fair fight,'" says Emily Cherkin, who spent more than a decade teaching middle school and now coaches parents about screens.
This response can happen to children at any age, even toddlers, says Dr. Anna Lembke, who's a psychiatrist at Stanford University and author of the book Dopamine Nation. "Absolutely. This happens at the earliest ages. So screens and sweets are, in and of themselves, alluring and potentially intoxicating.”

Armed with this knowledge, parents have more power to reduce the stress and negative consequences of these dopamine-surging activities. Here are some strategies to do that.

Tip 1: Wait 5 minutes

Dopamine surges are potent, says neuroscientist Kent Berridge at the University of Michigan, but they are brief. "They have a short half-life," he says.

"If you take away the cue [triggering the dopamine] and you can wait two to five minutes, a lot of the urge usually goes away," says Berridge, who's been instrumental in deciphering dopamine's role in the brain.

In other words, when you stop the cartoons at 30 minutes or cut off the cake at one slice, you may hear a bunch of whining, protest and tears, but that reaction will likely be brief.

But here's the key. You have to put the dopamine trigger out of sight, says Lembke at Stanford. Because seeing the laptop or extra leftover cake can start the cycle of wanting over again.

Tip 2: Look for the "Goldilocks" activities

Of course, not all of these activities and foods will be as enticing or intoxicating to every child, Lembke explains. "Our brains are all wired a little bit differently from one individual to the next.”

And remember, dopamine motivates children to act and stay focused. The key, she says, is to figure out which activities give your child the right amount of dopamine. Not too little and not too much — the Goldilocks amount. And to do that, she says, pay attention to how your kid feels after the activity stops.

"If the child feels even better after the activity, that means we're getting a healthy source of dopamine," Lembke says. Not too little. But also not too much. And there's low risk the activity will become problematic for the child.

For example, my daughter doesn't have (much of) a problem turning off audiobooks or putting away art projects. Same goes for video-calling with friends, coloring, reading and, of course, playing outside with friends. These activities make her behavior better afterward, not worse.

What about the opposite — when a child feels worse after an activity or snack, and their behavior declines? Then, Lembke says, there's a high risk that the activity could hook the child into a compulsive loop. "Once they start engaging often and for long periods of time, they may really lose control," she explains.

"People have this idea that, 'Oh, well, if I let my kid play as many video games as they want or be on social media as much as they want, they'll get tired of it.' And in fact, the opposite happens," Lembke says.

Research indicates that over time, some people's brains can actually become more sensitive to the dopamine triggered by a particular activity. And therefore, the more time a person spends engaged with this activity, the more they may crave it — even if the activity becomes unpleasurable.

So, Lembke says, parents really need to be careful and thoughtful with these activities. They need to limit the frequency and duration.

Which brings us to …

Tip 3: Make microenvironments

Create places in your home where the child can't access or see problematic devices, Lembke recommends. For example, have only one room in the house where children can use the phone or tablet. Keep these devices out of bedrooms, the kitchen, the dining room and the car.

At the same time, create times in your schedule where the child cannot see or access this device. Narrow down usage to only a small time each day, if possible. Or take a weekly "tech Sabbath," where everyone in the family takes a 24-hour break from their phones and tablets.

And for problematic foods, keep them out of the house. For example, the family eats ice cream only on special trips to the ice cream parlor.

Lembke calls these "microenvironments" — both physical and chronological. And they can have profound power over our brains, she says. "It's amazing how when we know we can't go on a device, the craving goes away.”

Because here's the tricky aspect of dopamine: Our brains can start to predict when dopamine spikes are imminent, Lembke explains. We identify signals in the environment that point to it. These environmental cues can actually trigger a surge of dopamine in the brain before the child even begins eating or using a screen. These spikes can be larger than the ones experienced during the activity.

For a child, a signal could be a tablet sitting on a shelf, walking into the living room where they usually use a device, or even simply the time of day.


These environmental signals can make it tough, even painful, for kids to start breaking their habits, Lembke says. But that pain usually dissipates in a few days or weeks. Give children time to adjust.

Tip 4: Try a habit makeover

Instead of cutting out an activity altogether, look for a version that's more purposeful, says neuroscientist Yevgenia Kozorovitskiy at Northwestern University.

Kozorovitskiy, who has two tween boys, ages 11 and 12, says prohibiting video games altogether isn't realistic for her family. But she does think carefully about which games they're playing. "They will sometimes want to play this adventure game that's really complex and cognitively wonderful," she explains. "It requires exploration, discovery and strategy. And they play it together, physically. They're speaking about strategy, exchanging plans and using advanced social and language skills.”

I tried this strategy with my daughter. One night we switched the cartoons for a language learning app. I told her that having an activity that's more purposeful will actually be more pleasurable.

And yes, she expressed great disappointment in this swap out, with tears and "But Mamas." But I stayed strong and calm, and I waited. After a few minutes, just as Kent Berridge said, the craving seemed to pass even more quickly than I expected. She easily switched gears to learning a bit of Spanish each night — with very little fuss.

I also started to put in place a piece of advice I heard from all the experts: Enrich your child's life off the screens. We had a neighbor teach her how to crochet. As a family, we started going for more walks after dinner. We bought a new pet (or actually 15 new pets) for her to take care of. And we started having more friends over on the weekends.

And guess what happened? After using the language app for a few weeks, she lost interest in the screens altogether. She hasn't watched a cartoon since.

But I'll tell you this: I will think very carefully before introducing a new app, device or even a new dessert into our lives. The battle against dopamine is just too hard for me to fight. ~

https://www.npr.org/sections/health-shots/2023/06/12/1180867083/tips-to-outsmart-dopamine-unhook-kids-from-screens-sweets?utm_source=pocket-newtab

*

MOUNTAINS INSIDE THE EARTH

~ It was a glaring summer's day in Antarctica. Through frozen eyelashes, Samantha Hansen blinked out at the featureless landscape: a wall of white, where up was the same as down, and ground blended seamlessly into sky. Amid these disorientating conditions, with temperatures of around -62C (-80F), she identified a suitable spot in the snow, and took out a spade.

Hansen was in the continent's bleak interior – not the comparatively balmy, picturesque Antarctica of cruise ship tours, but an unforgiving environment rarely even braved by the local wildlife. As part of a team from the University of Alabama and Arizona State University, she was looking for hidden 'mountain' ranges – peaks that no explorer has ever set foot on, no sunlight has ever illuminated. These mountains occur deep within the Earth.

It was 2015 and the researchers were in Antarctica to set up a seismology station – equipment, half-buried in the snow, that would allow them to study the interior of our planet. In total, the team installed 15 across Antarctica.


The mountain-like structures these stations revealed are utterly mysterious. But Hansen's team discovered that these ultra-low velocity zones, or ULVZs, as they are known, are also likely to be almost ubiquitous – wherever you are in the world, they may be lurking far beneath your feet. "We found evidence for ULVZs kind of everywhere [we looked]," says Hansen. The question is – what are they? And what are they doing inside our planet?

A mystery history

The Earth's strange interior mountains occur at a critical threshold: the one between the planet's metallic core and the surrounding rocky mantle. This abrupt transition is, as Hansen's team point out, even more drastic than the change in physical properties between solid rock and air. It has been tantalizing experts for decades – as enigmatic as it is influential to the geology of the planet.

Though the 'core-mantle boundary' is thousands of kilometers from the Earth's surface, there is a surprising amount of interchange between its unfathomable depths and our own world. It's thought to be a kind of graveyard for ancient pieces of the ocean floor – and it may even be behind the existence of volcanoes in unexpected locations, such as Hawaii, by creating super-heated highways to the crust.

Lava meets the ocean in Hawaii
 
The story of the deep-Earth mountains began in 1996, when scientists explored the core-mantle boundary far beneath the central Pacific Ocean. They did this by studying seismic waves created by massive ground-shuddering events: usually earthquakes, though nuclear bombs can achieve the same effect. These waves pass right through the Earth, and can be picked up by seismic stations at other locations on its surface, sometimes more than 12,742 km (7,918 miles) away from where they started. By examining the paths the waves take as they travel through – such as the way they're refracted by different materials – scientists can piece together an X-ray-like picture of the interior of the planet.

When researchers looked at waves generated by 25 earthquakes, they found they inexplicably slowed down when they reached a jagged patch on the core-mantle boundary. This vast, otherworldly mountain range was highly variable – some peaks stretched 40km (24.8 miles) up into the mantle, equivalent to 4.5 times the height of Everest. Meanwhile, others were just 3km (1.9 miles) high.
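
How large is the slowdown a seismologist actually measures? A rough back-of-the-envelope sketch in Python may help; the input values are assumptions chosen for illustration (deep-mantle P-waves travel at roughly 13.7 km/s, and ULVZs have been reported to slow waves by around 10–30%):

# Rough sketch: the extra travel time a seismic wave picks up while
# crossing a slow patch at the core-mantle boundary.
# All numbers below are illustrative assumptions.
normal_velocity_km_s = 13.7   # approximate P-wave speed near the boundary
slowdown_fraction = 0.25      # assumed ULVZ slowdown (reports: ~10-30%)
zone_thickness_km = 40.0      # one of the taller deep-Earth 'mountains'

slow_velocity_km_s = normal_velocity_km_s * (1 - slowdown_fraction)
delay_s = (zone_thickness_km / slow_velocity_km_s
           - zone_thickness_km / normal_velocity_km_s)
print(f"extra travel time: {delay_s:.2f} seconds")   # about 1 second

A delay of about a second is tiny on a journey through the whole planet, but it is well within the timing resolution of modern seismometers, which is how a patch thousands of kilometers down betrays its presence.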

Since then, similar mountains have been found lurking at scattered locations all around the core. Some are particularly large: one monster specimen occupies a patch 910km (565 miles) across under Hawaii.

Yet to this day, no one knows how they got there, or what they're made of.

Most of the Earth's crust is made of basalt – and this might also be the material behind the mysterious deep-Earth mountains

One idea is that the mountains are parts of the lower mantle that have been superheated due to their proximity to Earth's incandescent core. While the mantle can reach 3,700C (6,692F), this is relatively mild – the core can achieve atom-bending highs of 5,500C (9,932F) – not far off the temperature at the surface of the Sun. The hottest parts of the core-mantle boundary, it is suggested, may become partially molten – and this is what geologists see as ULVZs.

Alternatively, the deep-Earth mountains could be made from a subtly different material to the surrounding mantle. Incredibly, it's thought that they could be the remains of ancient oceanic crust which disappeared into its depths, eventually sinking down over hundreds of millions of years to settle just above the core.

In the past, geologists have looked to a second puzzle for clues. The deep-Earth mountains tend to be found near other mystery structures: enormous blobs, or large low-shear velocity provinces (LLSVPs). There are just two: an amorphous lump called "Tuzo" beneath Africa, and another known as "Jason" beneath the Pacific. They are thought to be truly primeval, possibly billions of years old. Again, no one knows what they are, or how they got there. But their close proximity to the mountains has led to the belief that they're somehow linked.

One way to explain this association is that it did indeed all begin with tectonic plates slipping down into the Earth's mantle, and sinking to the core-mantle boundary. These then slowly spread out to form an assortment of structures, leaving a trail of both mountains and blobs. This would mean both are made from ancient oceanic crust: a combination of basalt rock and sediments from the ocean floor, albeit transformed by the intense heat and pressure.

But the existence of deep-Earth mountains below Antarctica could contradict this, Hansen suggests. "Most of our study region, the southern hemisphere, is pretty far away from those larger structures.”

A frigid quest

To install their Antarctic seismology stations, Hansen and her team flew out to suitable locations in helicopters and small planes, placing the equipment in waist-deep snow – some near the coast, under the curious gaze of resident penguins, others inland.

It only took a matter of days to get the first results. The instruments can detect earthquakes almost anywhere on the planet – "If it's big enough, we can see it," Hansen says – and there are plenty of opportunities. The US National Earthquake Information Center records around 55 across the globe every day.

While identifying deep-Earth mountain ranges had been done before, no one had ever checked for them below Antarctica. It's not near either of the mystery blobs, or close to where any tectonic plates have recently fallen. Yet to the team's surprise they found them at every site they sampled. ~

https://www.bbc.com/future/article/20230605-the-hidden-mountains-lurking-deep-within-our-planet

*

JACOB TAUBES AND THE ATTEMPT TO UNITE JUDAISM AND CHRISTIANITY

*
JACOB TAUBES, EMPEROR OF CHAOS

Jacob Taubes (1923–1987), German sociologist, political philosopher, and scholar of Judaism. “Regarded by some as a genius, by others as a charlatan, Taubes moved among yeshivas, monasteries, and leading academic institutions on three continents. He wandered between Judaism and Christianity, left and right, piety and transgression. Along the way, he interacted with many of the leading minds of the age, from Leo Strauss and Gershom Scholem to Herbert Marcuse, Susan Sontag, and Carl Schmitt.” ~ https://press.princeton.edu/books/hardcover/9780691170596/professor-of-apocalypse

~ The Middle Ages featured numerous violent millenarian movements. Millenarianism informed “the heresy of the free spirit,” the thinking of “the spiritual Franciscans,” and the highly influential doctrines of the twelfth-century thinker Joachim of Fiore.
Joachim divided history into three epochs, that of the father, the son, and the spirit. Christ ushered in the epoch of the son, which is far superior to that of the father; we are on the verge of the millenarian reign of the spirit when, as the Book of Revelation promises, “all is made new.” Epochs had subdivisions, each superior to its predecessor.

As the medieval historian Robert Lerner has observed, Joachim was the first thinker to arrive at the progressive view of history according to which later is necessarily better. Any ordinary “spiritual Franciscan” was holier than the greatest saint a thousand years before. Joachim initiated the disastrous but all too common way of thinking that makes it possible to say that “history is on our side.” It was an idea that, between the world wars, prompted many intellectuals to reject old-fashioned democratic liberalism and become fascists or communists, presenting their beliefs as ideologies of the future. Those who think this way do not examine political doctrines for their truthfulness or likely consequences. Instead, they simply place them on a time line. Joachim played a major role in Jacob Taubes’s thought.

Taubes planned to write a history of medieval apocalyptic movements. Perhaps he didn’t follow through, Muller guesses, because in 1957 Norman Cohn published his classic study, The Pursuit of the Millennium: Revolutionary Millenarians and Mystical Anarchists of the Middle Ages. Though it covered the same movements that interested Taubes, Cohn’s book evaluated them quite differently. Instead of finding inspiration and political direction in them, Cohn detected danger and the potential for still greater danger. Stressing how often such movements tried to rid the world of Jews and the rich, Cohn viewed them as forerunners of both Nazism and Bolshevism. The Marxist doctrine of “the leap from the kingdom of necessity to the kingdom of freedom” merely secularizes Joachim’s third age of the spirit. And Nazi self-descriptions—“Third Reich,” “The Reich to Last a Thousand Years”—are clearly millenarian.

Taubes wondered how such a brilliant thinker as the legal theorist Carl Schmitt could have favored the Nazis, but one wonders why Taubes was perplexed. After all, he welcomed Schmitt’s hostility to liberalism and his disdain for bourgeois values. He also enthusiastically agreed with Schmitt’s belief that “all significant concepts of the modern theory of the state are secularized theological ones.” And, as we have seen, he was able to sympathize with totalitarianism and political violence. It would seem that only the campaign against the Jews, rather than against the groups targeted by the Bolsheviks, repelled him.

This Jewish thinker displayed a curiously contradictory attitude to his Jewish heritage. He loved breaking Orthodox commandments, but of course one must acknowledge them to do so. No one who does not respect the laws of kashrut gets a thrill from eating pork. Taubes straddled the secular and the Orthodox. He seemed to embrace both Judaism and Christianity—he often dressed like a Protestant pastor—and so his fascination with Saint Paul, the Jew who founded Christianity, makes sense. 

Taubes interpreted Paul as an antinomian thinker like himself. He maintained that Paul rejected Roman as well as Jewish law and, indeed, all established institutions. Of course, Taubes had to read Paul rather selectively to arrive at such conclusions. But ignoring contrary evidence was his stock-in-trade.

Taubes was passionately devoted to the mystique of revolution, destruction, and chaos. Having once taught Dostoevsky’s novel The Possessed, which depicts Russian radicals who thought just this way, Taubes seems to have arrived at a position resembling the famous dictum of the Russian revolutionary anarchist Mikhail Bakunin: “the will to destroy is also a creative will.” Accordingly, Taubes became the foremost defender of the Free University’s radical student movement of the late 1960s and 1970s, which later developed into the Baader–Meinhof terrorist gang. “His self-appointed role,” Muller explains, “was that of a gnostic, apocalypticist, or revolutionist—a man who fed on chaos.”

From gnosticism Taubes borrowed the idea that the world we live in, along with all its rules and laws, is irredeemably evil and must be utterly destroyed. Those who understand this truth recognize their own alienation, which is a sign of their superiority. They form a privileged aristocracy not only allowed to violate all norms of existing society but also positively enjoined to do so. It was as if Taubes had chosen to imitate Raskolnikov, the hero of Dostoevsky’s Crime and Punishment, who maintained that “extraordinary people” had the right, indeed the duty, to commit crimes.

Apocalypticism reinforced this conclusion. If the world is about to end, all norms are canceled. Here Taubes liked to cite Scholem’s essay “Redemption through Sin” and his work on Sabbatai Zevi, a seventeenth-century Jew who claimed to be the Messiah but then converted to Islam. His followers explained this descent into apostasy as an attempt to free the world from evil by immersing oneself in it. Applying this lesson to themselves, they inverted moral norms and made a point of doing everything that Jewish law forbade. Taubes liked to teach Isaac Bashevis Singer’s novel Satan in Goray, which describes a small Jewish community gripped by this madness. Like gnosticism, apocalypticism justifies antinomianism or “cosmic nihilism,” doctrines easily assimilable to Nietzsche’s idea of the transvaluation of all values and to almost any theory promoting revolutionary violence.

Coming to maturity in the 1960s, when millenarian and antinomian views were in fashion and when Taubes enjoyed his greatest influence, I arrived at exactly contrary beliefs, largely, I suppose, because as a student of Russian culture I saw what political messianism meant in practice. Norman Cohn’s study deeply impressed me. If utopian thinking secularizes religious messianism, I concluded, it was to be avoided. My first book, examining utopia as a literary genre, took an anti-utopian stance. Later I developed a theory of life and literature I called “prosaics,” which finds the greatest value not in grand theories or dramatic events, but in the ordinary processes of daily life. No ideology can substitute for basic decency.
The twentieth century bears witness: nothing causes more evil than the attempt to abolish it forever. And as my favorite Russian philosopher, Mikhail Bakhtin, explained, no ideology gives one an “alibi” for individual responsibility.

Several modern critical schools emerged. They favored what came to be called “the hermeneutics of suspicion,” which denied any rational basis for knowledge. Like Taubes, some scholars cultivated nihilistic or messianic views, especially in their ready division of the world into the saved and the damned. For reasons Taubes understood, such thinking spreads rapidly because it has natural appeal. So does the quasi-gnostic claim that one belongs to an elite group discerning what unenlightened folk do not. What could be better than illogic calling itself “rigor” or lawbreaking recast as moralism? Those who sanctify sin never fail to find followers.

To understand Taubes and his appeal, then, is to grasp an important trend in recent social thought. Taubes, we recall, had a taste for chaos, and created it wherever he went. He aspired to be a new Saint Paul, and perhaps he really was prophetic of the world—God help us—now in the making. I cannot help recalling the closing lines of Alexander Pope’s mock epic The Dunciad:

Lo! thy dread Empire, Chaos! is restored;
Light dies before thy uncreating word;
Thy hand, great Anarch! lets the curtain fall;
And universal Darkness buries All.

https://newcriterion.com/issues/2023/6/emperor-of-chaos?fbclid=IwAR258D_svAGod-ensMjnfcirYoUo41nE4EvEH-1yiwwlCFW5RJuvgVwzQb8

Oriana:
Jacob Taubes is known for his books: Occidental Eschatology (1991); The Political Theology of Paul; To Carl Schmitt; and From Cult to Culture. 


*
TAUBES’ LIFELONG ATTENTION TO ST. PAUL

Antinomianism (Greek anti, “against”; nomos, “law”): doctrine according to which Christians are freed by grace from the necessity of obeying the Mosaic Law. The antinomians rejected the very notion of obedience as legalistic; to them the good life flowed from the inner working of the Holy Spirit. ~ Encyclopedia Britannica

~ Scion of a long line of rabbis, Jacob was a brilliant scholar. And yet, after publishing his doctoral dissertation, Abendländische Eschatologie [Western Eschatology], in 1947 at age twenty-four, he never again produced a sustained work. His decades as a philosopher, tutor of philosophy, and professor of philosophy left a trail of scattered articles and transcripts of classes that were later assembled and published, enabling his name to achieve a resonance his production would otherwise have failed to assure.  

Taubes is best known to us for his writings and talks on Saint Paul. This life-long interest in Paul is eloquent in many ways, but it is a non-political phrase of Paul’s that explains the major flaw among the many possessed by Taubes: his failure to produce a single major work after his doctoral dissertation. Paul lamented in Romans 7:19 that “The good that I will, I do not,” describes Taubes perfectly. He lacked self-discipline in every regard, intellectually, romantically, and even hygienically. He was a man overflowing with ideas, ideas that could have filled volumes. But he lacked the will to produce them. One feels justified in fearing that had he actually written them, though, they would have descended into the abstruse and abstract depths of many of the works he did produce.

We will never have to experience Taubes’ awful traits: we will, for example, be forever spared the stench in his room at the Jewish Theological Seminary caused by his irregular washing and the clothing he left unwashed, piled in his closet…

Muller’s focus on Taubes’ foibles and failings is essential to understanding the man who produced the slim body of work he did. But the core of Taubes remains his lifelong attention to Paul.

This, though, is stained by a trait of Taubes’ that appeared and reappeared throughout his life: his appropriation of the ideas of others. Taubes’s first real exposure to Paul was in a talk given by his father, the chief rabbi of Zurich. The ideas expressed by Taubes père laid the groundwork for Taubes’ Pauline political theology, and for his life.

As Muller recounts, Taubes’ father explained that “Paul saw himself as an apostle to the gentiles, not to the Jews. Indeed, Paulinism was nothing more than the attempt to give new form to the main religious motifs of the Jewish Ten Days of Atonement, and to spread them among the heathens.” Jacob, then seventeen years of age, must have heard this lecture, and probably read it as well. He certainly took it to heart. The themes of eschatology (the end of days) and of movements of religiously inspired renewal would be the subject of his doctoral dissertation.

And his interests were not purely historical, for Jacob aspired to become the philosopher of a spiritual revival that would draw upon Judaism but go beyond it. He came to identify himself with the Apostle Paul, who would take elements of Judaism and reformulate them for a larger audience. Jacob Taubes would reformulate and expand on these ideas all his life, and they would provide the basis for perhaps his most important posthumous work, The Political Theology of Paul.

Taubes’ first wife, Susan, a Hungarian Jew who wanted nothing to do with the Jewish religion, which she scorned and dismissed out of hand, had little use for her husband’s Paulinism. In a striking formulation, she characterized Paul’s apostolate as “a plan for ‘international religion’—by making the whole world a little Jewish and the Jews a little less Jewish.” She found Paul’s obsession with “the law” (and Jacob’s) to be infantile and a distraction from the real world: “Paul (I mean all Pauls) was too engrossed in masturbation to see the wonder of the phallus. The world of the ‘law’ is also an infantile world. It is for the child that every object is associated with a ‘may and may not.’ The adult comes directly in contact with things.” In short, she found Paul “a successful charlatan.”

Taubes, in his tacking and twisting, in his search to expand the boundaries of beliefs and to unite the disparate through his ingenious analyses, in the end, had one object of worship: Jacob Taubes.

Taubes’ egotism, his betrayals, his irreligious life carried out in the name of religion and disguised under religious display, and his personalized form of antinomianism, help explain his affection for Paul. Muller gives us a man aware of his own genius, but less so of his failings. He wanted, Muller says, “to become one of those Jewish thinkers who would contribute to the creation of a new universalist message – precisely the terms in which he thought of Paul.” His self-assigned mission was no less than to “combine the rational and the irrational to create a myth appropriate to the modern age.” The contradictions in Taubes’ thought were infinite, for “he rejected Jewish particularism while hoping to renew the religious core of Judaism.”

These messianic tasks were not placed on his shoulders by anyone but himself. That he took them on himself and thought them possible explains his impossible personality. But is there yet another, more disturbing possibility? Was the madness that struck Jacob Taubes late in life [he was diagnosed as manic-depressive and treated with electric shock] the generator of the “hypomania” that Muller describes Taubes to have been prey to his entire life?

The writer and editor Leon Wieseltier, who knew Taubes and who is quoted in Professor of Apocalypse, summed up Jacob Taubes the man accurately and succinctly. Taubes, he told me, was “an incompletely socialized human being.” Everything about him likely flows from this.

https://publicseminar.org/2022/06/the-many-lives-of-jacob-taubes/

Oriana:
“Antinomian” literally means “against the law.” In theology it refers to rejection of the Mosaic Law as irrelevant after the coming of Christ. Instead of the law, a Christian was supposed to open himself to divine grace.

Though Paul is sometimes mistaken for an antinomian, he actually opposed antinomianism: he said that the message of Jesus did not abolish the law, but rather affirmed it.

Antinomian extremists wanted to dispense with moral law; salvation depended only on faith. It’s an extreme version of “sola fide” — by faith alone. This is a dangerous position that may lead to moral irresponsibility and lawlessness. Some point to the hippie movement as an example of modern antinomianism.

A person such as Taubes, however, need not be seen in religious terms. Taubes strikes me as an outrageous narcissist. Religion was only a disguise for the worship of Jacob Taubes. It could easily have been ideology — note the professor’s dabbling in leftist doctrines at the time when those were “cool,” as we’d now say. Ultimately he had no depth and only followed trends.

Narcissists can be notoriously charismatic — they make excellent actors on the stage of life, with no scruples to stop them from deceiving their admiring audience. They have no empathy — causing pain to others is easy when you literally don’t feel (or can’t imagine) the other person’s pain.

A friend of mine who is a therapist told me that people who are manic also tend to be emotionally shallow. Taubes was “overflowing with ideas,” but had no self-discipline to write them down (after his dissertation, his books were basically composed by his students on the basis of articles and lecture notes). This inability to focus and do hard intellectual work indicates an overexcited nervous system with deficient inhibition. Think how long it takes a normal child to acquire sufficient impulse inhibition to be able to function socially and pay attention in school or another challenging milieu. Inhibition is the last trait acquired during maturation, and the first one to go in many cases of dementia (this is sometimes seen when old people unexpectedly express very bigoted or otherwise offensive views).

Domenicino: The Ecstasy of St. Paul

*
THE UNIQUE MERGER THAT MADE COMPLEX LIFE POSSIBLE

At first glance, a tree could not be more different from the caterpillars that eat its leaves, the mushrooms sprouting from its bark, the grass growing by its trunk, or the humans canoodling under its shade. Appearances, however, can be deceiving. Zoom in closely, and you will see that these organisms are all surprisingly similar at a microscopic level. Specifically, they all consist of cells that share the same basic architecture.

These cells contain a central nucleus—a command center that is stuffed with DNA and walled off by a membrane. Surrounding it are many smaller compartments that act like tiny organs, carrying out specialized tasks like storing molecules or making proteins. Among these are the mitochondria—bean-shaped power plants that provide the cells with energy.

This combination of features is shared by almost every cell in every animal, plant, fungus, and alga, a group of organisms known as “eukaryotes.”

Bacteria showcase a second, simpler way of building a cell—one that preceded the complex eukaryotes by at least a billion years. These “prokaryotes” always consist of a single cell, which is smaller than a typical eukaryotic one and bereft of internal compartments like mitochondria and a nucleus. Even though limited to a relatively simple cell, bacteria are impressive survival machines. They colonize every possible habitat, from miles-high clouds to the deep ocean. They have a dazzling array of biological tricks that allow them to cause diseases, eat crude oil, conduct electric currents, draw power from the Sun, and communicate with each other.

Still, without the eukaryotic architecture, bacteria are forever constrained in size and complexity. Sure, they have their amazing skill sets, but it’s the eukaryotes that cover the Earth in forest and grassland, that navigate the planet looking for food and mates, that build rockets to Mars.

The transition from the classic prokaryotic model to the deluxe eukaryotic one is arguably the most important event in the history of life on Earth. And in more than 3 billion years of existence, it happened exactly once.

Life is full of complex structures that evolve time and again. Individual cells have united to form many-celled creatures like animals and plants on dozens of separate occasions. The same is true for eyes, which have independently evolved time and again. But the eukaryotic cell is a one-off innovation.

Bacteria have repeatedly nudged along the path towards complexity. Some are very big (for microbes); others move in colonies that behave like single many-celled creatures. But none of them have acquired the full suite of crucial features that define eukaryotes: large size, the nucleus, internal compartments, mitochondria, and more. As Nick Lane from University College London writes, “Bacteria have made a start up every avenue of eukaryotic complexity, but then stopped short.” Why?

It is not for lack of opportunity. The world is swarming with countless prokaryotes that evolve at breathtaking rates. Even so, they were not quick about inventing eukaryotic cells. Fossils tell us that the oldest bacteria arose between 3 and 3.5 billion years ago, but there are no eukaryotes from before 2.1 billion years ago. Why did the prokaryotes remain as simple cells for so damn long?

There are many possible explanations, but one of these has recently gained a lot of ground. It tells of a prokaryote that somehow found its way inside another, and formed a lasting partnership with its host. This inner cell—a bacterium—abandoned its free-living existence and eventually transformed into the mitochondria. These internal power plants provided the host cell with a bonanza of energy, allowing it to evolve in new directions that other prokaryotes could never reach.

If this story is true, and there are still those who doubt it, then all eukaryotes—every flower and fungus, spider and sparrow, man and woman—descended from a sudden and breathtakingly improbable merger between two microbes. They were our great-great-great-great-...-great-grandparents, and by becoming one, they laid the groundwork for the life forms that seem to make our planet so special. The world as we see it (and the fact that we see it at all; eyes are a eukaryotic invention) was irrevocably changed by that fateful union—a union so unlikely that it very well might not have happened at all, leaving our world forever dominated by microbes, never to welcome sophisticated and amazing life like trees, mushrooms, caterpillars, and us.

In 1905, the Russian biologist Konstantin Mereschkowski first suggested that some parts of eukaryotic cells were once endosymbionts—free-living microbes that took up permanent residence within other cells. He thought the nucleus originated in this way, as did the chloroplasts that allow plant cells to harness sunlight. He missed the mitochondria, but the American anatomist Ivan Wallin pegged them for endosymbionts in 1923.

These ideas were ignored for decades until an American biologist—the late Lynn Margulis—revived them in 1967. In a radical paper, she made the case that mitochondria and chloroplasts were once free-living bacteria that had been sequentially ingested by another ancient microbe. That is why they still have their own tiny genomes and why they still superficially look like bacteria. Margulis argued that endosymbiosis was not a crazy, oddball concept—it was one of the most important leitmotivs in the eukaryotic opera.

The paper was a tour de force of cell biology, biochemistry, geology, genetics, and paleontology. Its conclusion was also grossly unorthodox. At the time, most people believed that mitochondria had simply come from other parts of the cell. “[Endosymbiosis] was taboo,” says Bill Martin from Heinrich Heine University Düsseldorf, in Germany. “You had to sneak into a closet to whisper to yourself about it before coming out again.”

Margulis’ views drew fierce criticism, but she defended them with equal vigor. Soon she had the weight of evidence behind her. Genetic studies, for example, showed that mitochondrial DNA is similar to that of free-living bacteria. Now, very few scientists doubt that mergers infused the cells of every animal and plant with the descendants of ancient bacteria.

But the timing of that merger, the nature of its participants, and its relevance to the rise of eukaryotes are all still hotly debated. In recent decades, origin stories for the eukaryotes have sprouted up faster than old ones could be tested, but most fall into two broad camps.

The first—let’s call it the “gradual-origin” group—claimed that prokaryotes evolved into eukaryotes by incrementally growing in size and picking up traits like a nucleus and the ability to swallow other cells. Along the way, these proto-eukaryotes gained mitochondria, because they would regularly engulf bacteria. This story is slow, steady, and classically Darwinian in nature. The acquisition of mitochondria was just another step in a long, gradual transition. This is what the late Margulis believed right till the end.

The alternative—let’s call it the “sudden-origin” camp—is very different. It dispenses with slow, Darwinian progress and says that eukaryotes were born through the abrupt and dramatic union of two prokaryotes. One was a bacterium. The other was part of the other great lineage of prokaryotes: the archaea. (More about them later.) These two microbes look superficially alike, but they are as different in their biochemistry as PCs and Macs are in their operating systems. By merging, they created, in effect, the starting point for the first eukaryotes.

Bill Martin and Miklós Müller put forward one of the earliest versions of this idea in 1998. They called it the hydrogen hypothesis. It involved an ancient archaeon that, like many modern members, drew energy by bonding hydrogen and carbon dioxide to make methane. It partnered with a bacterium that produced hydrogen and carbon dioxide, which the archaeon could then use. Over time, they became inseparable, and the bacterium became a mitochondrion.
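
The metabolic exchange at the heart of the hydrogen hypothesis is ordinary methanogenesis, which many archaea still perform today. The balanced reaction (standard chemistry, not a detail taken from the article) is:

CO₂ + 4H₂ → CH₄ + 2H₂O

One partner’s waste, hydrogen and carbon dioxide, is the other’s food, which is what would have made the arrangement self-reinforcing.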

There are many variants of this hypothesis, which differ in the reasons for the merger and the exact identities of the archaeon and the bacterium that were involved. But they are all united by one critical feature setting them apart from the gradual-origin ideas: They all say that the host cell was still a bona fide prokaryote. It was an archaeon, through and through. It had not started to grow in size. It did not have a nucleus. It was not on the path to becoming a eukaryote; it set off down that path because it merged with a bacterium. As Martin puts it, “The inventions came later.”

This distinction could not be more important. According to the sudden-origin ideas, mitochondria were not just one of many innovations for the early eukaryotes. “The acquisition of mitochondria was the origin of eukaryotes,” says Lane. “They were one and the same event.” If that is right, the rise of the eukaryotes was a fundamentally different sort of evolutionary transition than the gradual changes that led to the eye, or photosynthesis, or the move from sea to land. It was a fluke event of incredible improbability—one that, as far as we know, only happened after a billion years of life on Earth and has not been repeated in the 2 billion years since. “It’s a fun and thrilling possibility,” says Lane. “It may not be true, but it’s beautiful.”

*
In 1977, microbiologist Carl Woese had the bright idea of comparing different organisms by sequencing their genes. This is an everyday part of modern biology, but at the time, scientists relied on physical traits to deduce the evolutionary relationships between different species. Comparing genes was bold and new, and it would play a critical role in showing how complicated life like us—the eukaryotes—came to be.

Woese focused on 16S rRNA, a gene that is involved in the essential task of making proteins and is found in all living things. Woese reasoned that as organisms diverge into new species, their versions of rRNA should become increasingly dissimilar. By comparing the gene across a range of prokaryotes and eukaryotes, the branches of the tree of life should reveal themselves.
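
The core of Woese’s reasoning can be sketched in a few lines of Python. The sequences below are toy inventions (real 16S rRNA genes run to roughly 1,500 bases), but the logic is the same: count mismatches in a gene that every organism shares, and read more mismatches as deeper divergence.

# Toy version of Woese's comparison. The sequences are invented for
# illustration; real analyses align ~1,500-base 16S rRNA genes.
def dissimilarity(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions that differ (a crude distance)."""
    assert len(seq_a) == len(seq_b), "toy sequences must be pre-aligned"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return mismatches / len(seq_a)

human  = "AGGCUUAACGGGUA"
yeast  = "AGGCUUAACGGCUA"   # one mismatch: close relatives
e_coli = "AGGAUUCACGCGUU"   # four mismatches: a far deeper split

print(dissimilarity(human, yeast))    # ~0.07
print(dissimilarity(human, e_coli))   # ~0.29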

The branches did reveal themselves, but no one expected the results. Woese’s tree had three main branches. Bacteria and eukaryotes sat on two of them. But the third consisted of an obscure bunch of prokaryotes that had been found in hot, inhospitable environments. Woese called them archaea, from the Greek word for ancient. Everyone had taken them for obscure types of bacteria, but Woese’s tree announced them as a third domain of life. It was as if everyone was staring at a world map, and Woese had politely shown that a full third of it had been folded underneath.

In Woese’s classic three-domain tree, the eukaryotes and archaea are sister groups. They both evolved from a shared ancestor that split off from the bacteria very early in the history of life on Earth. But this tidy picture started to unravel in the 1990s, as the era of modern genetics kicked into high gear and scientists started sequencing more eukaryotic genes. Some were indeed closely related to archaeal genes, but others turned out to be more closely related to bacterial ones. The eukaryotes turned out to be a confusing hodgepodge, and their evolutionary affinities kept on shifting with every new sequenced gene.

In 2004, James Lake changed the rules of engagement. Rather than looking at any single gene, he and his colleague Maria Rivera compared the entire genomes of two eukaryotes, three bacteria, and three archaea. Their analysis supported the merger-first ideas: They concluded that the common ancestor of all life diverged into bacteria and archaea, which evolved independently until two of their members suddenly merged. This created the first eukaryotes and closed what now appeared to be a “ring of life.” Before that fateful encounter, life had just two major domains. Afterward, it had three.

Rivera and Lake were later criticized for only looking at seven species, but no one could possibly accuse Irish evolutionary biologist James McInerney of the same fault. In 2007, he crafted a super-tree using more than 5,700 genes from across the genomes of 168 prokaryotes and 17 eukaryotes. His conclusion was the same: Eukaryotes are merger organisms, formed through an ancient symbiosis between a bacterium and an archaeon.

The genes from these partners have not integrated seamlessly. They behave like immigrants in New York’s Asian and Latino communities, who share the same city but dominate different areas. For example, they mostly interact with their own kind: archaeal genes with other archaeal genes, and bacterial genes with bacterial genes.

“You’ve got two groups in the playground and they’re playing with each other differently, because they’ve spent different amounts of time with each other,” says McInerney.

They also do different jobs. The archaeal genes are more likely to be involved in copying and making use of DNA. The bacterial genes are more involved in breaking down food, making nutrients, and the other day-to-day aspects of being a microbe. And although the archaeal genes are outnumbered by their bacterial neighbors by 4 to 1, they seem to be more important. They are nearly twice as active. They produce proteins that play more central roles in their respective cells. They are more likely to kill their host if they are mistakenly deleted. Over the last four years, McInerney has found this same pattern again and again, in yeast, in humans, in dozens of other eukaryotes.

This all makes sense if you believe the sudden-origin idea. When those ancient partners merged, the immigrant bacterial genes had to be integrated around a native archaeal network, which had already been evolving together for countless generations. They did integrate, and while many of the archaeal genes were displaced, an elite set could not be ousted. Despite 2 billion years of evolution, this core network remains, and retains a pivotal role out of all proportion to its small size.

*
The sudden-origin hypothesis makes one critical prediction: All eukaryotes must have mitochondria. Any exceptions would be fatal, and in the 1980s, it started to look like there were exceptions aplenty.

If you drink the wrong glass of water in the wrong part of the world, your intestines might become home to a gut parasite called Giardia. In the weeks that follow, you can look forward to intense stomach cramps and violent diarrhea. Agony aside, Giardia has a bizarre and interesting anatomy. It consists of a single cell that looks like a malevolent teardrop with four tail-like filaments. Inside, it has not one nucleus but two. It is clearly a eukaryote.

But it has no mitochondria.

There are at least a thousand other single-celled eukaryotes, mostly parasites, which also lack mitochondria. They were once called archezoans, and their missing power plants made them focal points for the debate around eukaryotic origins. They seemed to be living remnants of a time when prokaryotes had already turned into primitive eukaryotes, but before they picked up their mitochondria. Their very existence testified that mitochondria were a late acquisition in the rise of eukaryotes, and threatened to deal a knockout blow to the sudden-origin tales.

That blow was deflected in the 1990s, when scientists slowly realized that Giardia and its ilk have genes that are only ever found in the mitochondria of other eukaryotes.

These archezoans must have once had mitochondria, which were later lost or transformed into other cellular compartments. They aren’t primitive eukaryotes from a time before the mitochondrial merger—they are advanced eukaryotes that have degenerated, just as tapeworms and other parasites often lose complex organs they no longer need after they adopt a parasitic way of life. “We’ve yet to find a single primitive, mitochondria-free eukaryote,” says McInerney, “and we’ve done a lot of looking.”

With the archezoan club dismantled, the sudden-origin ideas returned to the fore with renewed vigor. “We predicted that all eukaryotes had a mitochondrion,” says Martin. “Everyone was laughing at the time, but it’s now textbook knowledge. I claim victory. Nobody’s giving it to me—except the textbooks.”

*
If mitochondria were so important, why have they only evolved once? And for that matter, why have eukaryotes only evolved once?

Nick Lane and Bill Martin answered both questions in 2010, in a bravura paper called “The energetics of genome complexity,” published in Nature. In a string of simple calculations and elegant logic, they reasoned that prokaryotes have stayed simple because they cannot afford the gas-guzzling lifestyle that all eukaryotes lead. In the paraphrased words of Scotty: They cannae do it, captain, they just don’t have the power.

Lane and Martin argued that for a cell to become more complex, it needs a bigger genome. Today, for example, the average eukaryotic genome is around 100–10,000 times bigger than the average prokaryotic one. But big genomes don’t come for free. A cell needs energy to copy its DNA and to use the information encoded by its genes to make proteins. The latter, in particular, is the most expensive task that a cell performs, soaking up three-quarters of its total energy supply. If a bacterium or archaeon was to expand its genome by 10 times, it would need roughly 10 times more energy to fund the construction of its extra proteins.

One solution might be to get bigger. The energy-producing reactions that drive prokaryotes take place across their membranes, so a bigger cell with a larger membrane would have a bigger energy supply. But bigger cells also need to make more proteins, so they would burn more energy than they gained. If a prokaryote scaled up to the same size and genome as a eukaryotic cell, it would end up with 230,000 times less energy to spend on each gene! Even if this woefully inefficient wretch could survive in isolation, it would be easily outcompeted by other prokaryotes.
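
At bottom, Lane and Martin’s argument is a surface-to-volume calculation, and the scaling step can be sketched in a few lines. The numbers below are round assumptions for illustration, not figures from their paper:

# Sketch of the Lane/Martin energetics argument (illustrative only).
# A prokaryote makes ATP across its outer membrane, so its energy supply
# scales with surface area (~ r^2), while the proteins that the energy
# must pay for fill the cell's volume (~ r^3).
scale = 25                  # assumed linear scale-up to eukaryote size

supply_gain = scale ** 2    # membrane area: 625x more energy
demand_gain = scale ** 3    # volume, hence protein costs: 15,625x more

per_gene_ratio = supply_gain / demand_gain   # = 1/scale
print(f"energy per gene shrinks {scale}-fold just from getting bigger")

# Fold in a genome that is also ~100x larger and the per-gene budget
# drops another 100-fold. Plugging measured metabolic rates and genome
# sizes into this same logic is what yields the article's figure of
# 230,000 times less energy per gene.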

Prokaryotes are stuck in an energetic canyon that keeps them simple and small. They have no way of climbing out. If anything, evolution drives them in the opposite direction, mercilessly pruning their genomes into a ring of densely packed and overlapping genes. Only once did a prokaryote escape from the canyon, through a singular and improbable trick—it acquired mitochondria.

Mitochondria have an inner membrane that folds in on itself like heavily ruched fabric. They offer their host cells a huge surface area for energy-producing chemical reactions. But these reactions are volatile, fickle things. They involve a chain of proteins in the mitochondrial membranes that release energy by stripping electrons from food molecules, passing them along to one another, and dumping them onto oxygen. This produces high electric voltages and unstable molecules. If anything goes wrong, the cell can easily die.
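
To put “high electric voltages” in perspective, here is the textbook arithmetic; the two input values are standard approximations rather than figures from the article:

# Field strength across the inner mitochondrial membrane:
# roughly 150 mV across a membrane about 5 nanometers thick.
voltage_v = 0.15
thickness_m = 5e-9
print(f"{voltage_v / thickness_m:.0e} V/m")   # 3e+07 V/m

Thirty million volts per meter is a field strength often compared to a bolt of lightning, which is why an electron-transfer chain that loses control is so dangerous to the cell.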

But mitochondria also have a tiny stock of DNA that encodes about a dozen of the proteins that take part in these electron-transfer chains. They can quickly make more or less of any of the participating proteins, to keep the voltages across their membranes in check. They supply both power and the ability to control that power. And they do that without having to bother the nucleus. They are specialized to harness energy.

Mitochondria are truly the powerhouse of the eukaryotic cell. “The command center is too bureaucratic and far away to do anything,” says Lane. “You need to have these small teams, which have limited powers but can use them at their discretion to respond to local situations. If they’re not there, everything dies.”

Prokaryotes do not have powerhouses; they are powerhouses. They can fold their membranes inwards to gain extra space for producing energy, and many do. But they lack the secondary DNA outposts that manage energy production locally and so free a central government (the nucleus) to spend its time and energy on evolutionary experiments.

The only way to do that is to merge with another cell. When one archaeon did so, it instantly leapt out of its energetic canyon, powered by its new bacterial partner. It could afford to expand its genome, to experiment with new types of genes and proteins, to get bigger, and to evolve down new and innovative routes. It could form a nucleus to contain its genetic material, and absorb other microbes to use as new tiny organs, like the chloroplasts that perform photosynthesis in plants. “You need a mitochondrial level of power to finance those evolutionary adventures,” says Martin. “They don’t come for free.”

The kind of merger that creates mitochondria seems to be a ludicrously unlikely event. Prokaryotes have only managed it once in more than 3 billion years, despite coming into contact with each other all the time. “There must have been thousands or millions of these cases over evolutionary time, but they’ve got to find a way of getting along, of reconciling and co-adapting to each other,” says Lane. “That seems to be genuinely difficult.”

This improbability has implications for the search for alien life. On other worlds with the right chemical conditions, Lane believes that life would be sure to emerge. But without a fateful merger, it would be forever microbial. Perhaps this is the answer to the Fermi paradox—the puzzling contradiction between the high apparent odds that intelligent life would exist elsewhere among the billions of planets in the Milky Way, and our inability to find any signs of such intelligence. As Lane wrote in 2010, “The unavoidable conclusion is that the universe should be full of bacteria, but more complex life will be rare.”  And if intelligent aliens did exist, they would probably have something like mitochondria, too.

Writing in 2007, Anthony Poole and David Penny accused the sudden-origin camp of pushing “mechanisms founded in unfettered imagination.” They pointed out that archaea and bacteria do not engulf one another—that’s a hallmark of eukaryotes. It is easy to see how a primitive eukaryote might have gained mitochondria by engulfing a bacterium, but very hard to picture how a relatively simple archaeon did so.

This powerful retort has lost some of its sting thanks to a white insect called the citrus mealybug. Its cells contain a bacterium called Tremblaya, and Tremblaya contains another bacterium called Moranella. Here is a prokaryote that somehow has another prokaryote living inside it, despite its apparent inability to engulf anything.

Still, the details of how the initial archaeon-bacterium merger happened remain a mystery. How did one get inside the other? What sealed their partnership—was it hydrogen, as Martin and Müller suggested, or something else? How did they manage to stay conjoined? “I think we have the roadmap right, but we don’t have all the white lines and the signposts in place,” says Martin. “We have the big picture but not all the details.”

Perhaps we will never know for sure. The origin of eukaryotes happened so far back in time that it’s a wonder we have even an inkling of what happened. Dissent is inevitable; uncertainty, guaranteed.

“You can’t convince everyone about anything in early evolution, because they hold to their own beliefs,” says Martin. “But I’m not worried about trying to convince anyone. I’ve solved these problems to my own satisfaction and it all looks pretty consistent. I’m happy.”

https://getpocket.com/explore/item/the-unique-merger-that-made-you-and-ewe-and-yew?utm_source=pocket-newtab


*
OBESITY CHANGES THE BRAIN, WITH ‘NO SIGN OF REVERSIBILITY,’ EXPERT SAYS

Obesity may damage the brain’s ability to recognize the sensation of fullness and be satisfied after eating fats and sugars, a new study found.

Further, those brain changes may last even after people considered medically obese lose a significant amount of weight — possibly explaining why many people often regain the pounds they lose.

“There was no sign of reversibility — the brains of people with obesity continued to lack the chemical responses that tell the body, ‘OK, you ate enough,’” said Dr. Caroline Apovian, a professor of medicine at Harvard Medical School and codirector of the Center for Weight Management and Wellness at Brigham and Women’s Hospital in Boston.

As defined medically, people with obesity have a body mass index, or BMI, of over 30, while normal weight is a BMI between 18.5 and 24.9.
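
For reference, BMI is simply weight divided by height squared, in metric units. A quick worked example (the weights and height here are invented for illustration):

# BMI = weight (kg) / height (m) squared. Example values are invented.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(95, 1.75), 1))   # 31.0 -> above the obesity threshold of 30
print(round(bmi(70, 1.75), 1))   # 22.9 -> within the normal range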

“This study captures why obesity is a disease — there are actual changes to the brain,” said Apovian, who was not involved in the study.

“The study is very rigorous and quite comprehensive,” said Dr. I. Sadaf Farooqi, a professor of metabolism and medicine at the University of Cambridge in the UK, who was not involved in the new research.

“The way they’ve designed their study gives more confidence in the findings, adding to prior research that also found obesity causes some changes in the brain,” she said.

The study, published Monday in Nature Metabolism, was a controlled clinical trial in which 30 people considered to be medically obese and 30 people of normal weight were fed sugar carbohydrates (glucose), fats (lipids) or water (as a control). Each group of nutrients was fed directly into the stomach via a feeding tube on separate days.

“We wanted to bypass the mouth and focus on the gut-brain connection, to see how nutrients affect the brain independently from seeing, smelling or tasting food,” said lead study author Dr. Mireille Serlie, professor of endocrinology at Yale School of Medicine in New Haven, Connecticut.

The night before the testing, all 60 study participants had the same meal for dinner at home and did not eat again until the feeding tube was in place the next morning. As either sugars or fats entered the stomach via the tube, researchers used functional magnetic resonance imaging (fMRI) and single-photon emission computed tomography (SPECT) to capture the brain’s response over 30 minutes.

“The MRI shows where neurons in the brain are using oxygen in reaction to the nutrient — that part of the brain lights up,” Farooqi said. “The other scan measures dopamine, a hormone that is part of the reward system, which is a signal for finding something pleasurable, rewarding and motivating and then wanting that thing.”

Researchers were interested in how fats and glucose would individually trigger various areas of the brain connected to the rewarding aspects of food. They wanted to know if that would be different in people with obesity compared to those of normal weight.

“We were especially interested in the striatum, the part of the brain involved in the motivation to actually go and look for food and eat it,” Serlie said. Buried deep in the brain, the striatum also plays a role in emotion and habit formation.

In people with normal weight, the study found brain signals in the striatum slowed when either sugars or fats were put into the digestive system — evidence that the brain recognized the body had been fed.

“This overall reduction in brain activity makes sense because once food is in your stomach, you don’t need to go and get more food,” Serlie explained.

At the same time, levels of dopamine rose in those at normal weight, signaling that the reward centers of the brain were also activated.

However, when the same nutrients were given via feeding tube to people considered medically obese, brain activity did not slow, and dopamine levels did not rise.

This was especially true when the food was lipids or fats. That finding was interesting, Farooqi said, because the higher the fat content, the more rewarding the food: “That’s why you will genuinely want a burger instead of broccoli, the fat in the burger will biologically give a better response in the brain.”

Next, the study asked people with obesity to lose 10% of their body weight within three months — an amount of weight known to improve blood sugars, reset metabolism and boost overall health, Serlie said.

Tests were repeated as before — with surprising results. Losing weight did not reset the brain in people with obesity, Serlie said.

“Nothing changed — the brain still did not recognize fullness or feel satisfied,” she said. “Now, you might say three months is not long enough, or they didn’t lose enough weight.

“But this finding might also explain why people lose weight successfully and then regain all the weight a few years later — the impact on the brain may not be as reversible as we would like it to be.”

A 2018 meta-analysis of long-term weight loss clinical trials found 50% of a person’s original weight loss was regained after two years — by the fifth year, 80% of the weight was regained.
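
Note that those percentages refer to the weight that was lost, not to total body weight. A quick worked example, with the initial loss assumed for illustration:

# The meta-analysis percentages apply to the weight lost, not body weight.
lost_kg = 20                     # assumed initial weight loss
regained_2yr = lost_kg * 0.50    # 10 kg back after two years
regained_5yr = lost_kg * 0.80    # 16 kg back by year five
print(f"still kept off at year five: {lost_kg - regained_5yr} kg")   # 4.0 kg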

Caution is needed in interpreting the findings, Serlie said, as much is unknown: “We don’t know when these profound changes in the brain happen during the course of weight gain. When does the brain start to slip and lose the sensing capacity?”

Obesity has a genetic component, and although the study attempted to control for that by excluding people with childhood onset obesity, it’s still possible that “genes are influencing our response in the brain to certain nutrients,” said Farooqi, who has studied the role of genes on weight for years.

Much more research is needed to fully understand what obesity does to the brain, and if that is triggered by the fat tissue itself, the types of food eaten, or other environmental and genetic factors.

“Are there changes that occurred in people as they gained weight? Or are there things that they were eating as they were gaining weight, such as ultra-processed foods, that caused a change in the brain? All of these are possible, and we don’t really know which it is,” Farooqi said.

Until science answers these questions, the study emphasizes, once again, that weight stigma has no place in the fight against obesity, Serlie said.

“The belief that weight gain can be solved simply by ‘just eating less, exercising more, and if you don’t do that, it’s a lack of willpower’ is so simplistic and so untrue,” she said.

“I think it’s important for people who are struggling with obesity to know that a malfunctioning brain may be the reason they wrestle with food intake,” Serlie said. “And hopefully this information will increase empathy for that struggle.”

https://www.cnn.com/2023/06/12/health/obesity-changes-brain-wellness/index.html

*
LONGEVITY BENEFITS OF VITAMIN K

~ Dr. Bruce Ames is one of the world’s leading authorities on aging and nutrition. Four years ago, Dr. Ames published research indicating that optimum intake of vitamin K plays an important role in longevity.

A new 2014 study on vitamin K confirms that ample vitamin K intake can indeed help you live longer. In a group of more than 7,000 people at high risk for cardiovascular disease, people with the highest intake of vitamin K were 36% less likely to die from any cause at all, compared with those having the lowest intake.

Once thought to be exclusively concerned with blood coagulation, Vitamin K is now known to affect at least 16 Gla-proteins in the body.

These include proteins involved in protecting arteries from calcification, those protecting bones from losing calcium, and ones that help protect against diabetes and cancer.
A new study demonstrated that people with higher vitamin K intakes are less likely to die from all causes, lending new urgency to the issue of supplementation.

A multitude of studies now points to the fact that adequate vitamin K intake, including supplementation, can offer protection against atherosclerosis, osteoporosis, diabetes, and cancer.

Ensure that your vitamin K intake is adequate by adopting a daily vitamin K supplement that provides both K1 and K2 for optimum coverage.

This protection even extended to those with initially low vitamin K intake who boosted their consumption during the course of the study—demonstrating that it’s never too late to start gaining the benefits of vitamin K supplementation. Increasing intake conferred protection against cardiovascular death as well.

Vitamin K is capable of opposing many of the leading causes of death in modern-day Americans—including atherosclerosis, osteoporosis, diabetes, and cancer—because it has the unique ability to activate proteins involved in these conditions.

In this article, we will review a host of new studies that detail the impact of vitamin K supplementation on preventing these and other major age-related diseases.

THE MANY BENEFITS OF VITAMIN K

Vitamin K was first discovered in 1935, when it was found to be an essential nutrient to prevent abnormal bleeding in chickens. For decades thereafter, vitamin K was identified as the “coagulation vitamin” (in fact, the initial “K” comes from the German spelling, koagulation). During that time, it was established that vitamin K worked by activating certain proteins made in the liver that are required for normal blood clotting. Without sufficient vitamin K, blood would not clot, and severe bleeding would ensue.

Vitamin K activates those blood-clotting proteins by making a small but vital chemical change in the proteins’ structure, specifically on the protein building block called glutamic acid.
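
That “small but vital chemical change” has a name: gamma-carboxylation. In outline (this is the standard vitamin K cycle as described in biochemistry texts, not a detail taken from the article), the enzyme gamma-glutamyl carboxylase uses reduced vitamin K as a cofactor to convert glutamate (Glu) residues into gamma-carboxyglutamate (Gla), the residue that gives the Gla-proteins their name:

Glu residue + CO₂ + O₂ + vitamin K hydroquinone → Gla residue + vitamin K epoxide + H₂O

The added carboxyl group is what lets these proteins bind calcium, which is why the same chemistry underlies both blood clotting and the calcium-handling roles described below.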

By the turn of the 21st century, scientists had learned that vitamin K produces similar changes to glutamic acid molecules to activate a handful of other vital proteins in the body, with the collective name of Gla-proteins. According to 2014 research, 16 different vitamin K-dependent Gla-proteins have been identified. This means that they depend on vitamin K to activate them in order to carry out their intended role.

With the discovery of the Gla-proteins, scientists learned that vitamin K is vital for much more than the healthy clotting of blood. For example, the Gla-protein in bone, called osteocalcin, is responsible for making sure calcium is deposited in bones, while the Gla-protein in arterial walls, called matrix Gla protein, prevents calcium from being deposited in arteries.

Insufficient blood clotting was thought to be the main sign of vitamin K deficiency. However, scientists have since learned that you can have enough vitamin K to promote healthy blood clotting, yet still not have enough vitamin K for it to activate the Gla-proteins necessary to help prevent cardiovascular disease, osteoporosis, diabetes, and cancer, all conditions in which vitamin K-dependent proteins are known to be factors. Fortunately, studies show that vitamin K supplementation can significantly increase the amount of activated Gla-proteins in tissues—without over-activating the clotting proteins.

Vitamin K and atherosclerosis

As we age, calcium that belongs in our bones begins to make its appearance in other unwanted areas, including inside the linings of major arteries. Over time, normal smooth muscle cells in artery walls transform into bone-like cells through the deposition of calcium, essentially turning sections of artery into bony tissue that is not resilient and flexible, and does not have the ability to effectively regulate blood flow. This process lends literal reality to the term “hardening of the arteries,” which we now know as late-stage atherosclerosis.

Nature has provided a powerful inhibitor of arterial calcification in the form of matrix Gla protein, one of the 16 Gla-proteins activated by vitamin K. This specific Gla-protein is produced in arterial walls, but is only activated when sufficient vitamin K is present. In the absence of sufficient vitamin K, arterial calcification is able to continue unopposed, leading to advanced atherosclerosis and its deadly consequences, heart attacks and strokes. Indeed, in older men and women who had the highest levels of inactive matrix Gla protein (indicating low vitamin K levels), there was a nearly 3-fold increased risk of cardiovascular disease compared to those with the lowest levels.

Researchers have known for nearly 20 years that insufficient vitamin K intake in the diet is related to atherosclerosis in the aorta, the body’s largest blood vessel. Since that time, a host of basic science and laboratory studies have indicated that higher vitamin K intake is essential for preventing atherosclerosis in major vessels of all kinds. Animal studies even show that vitamin K can “rescue” calcified arteries that occur as a result of the overuse of drugs that inhibit vitamin K, such as certain blood thinners (Warfarin, aka Coumadin).

Another way matrix Gla proteins help protect against atherosclerosis is by inhibiting the production of inflammatory signaling molecules (cytokines), which contribute to plaque formation and calcification. People with the highest dietary intake of vitamin K have significantly lower levels of those inflammatory markers, and also of substances involved in appetite generation and insulin resistance, both of which are important in preventing atherosclerosis. (Some of these effects may be related to increased levels of another vitamin K-dependent Gla-protein that suppresses inflammation and promotes glucose tolerance.)

SUMMARY:

A recent large study confirms that people with the highest vitamin K intakes are significantly less likely to die from any cause, compared with those having the lowest intakes.

Because of its unique ability to activate proteins involved in atherosclerosis, osteoporosis, diabetes, and cancer, vitamin K is capable of opposing many of the leading causes of death in modern-day Americans. A host of new studies details the impact of vitamin K supplementation on preventing these, and possibly other, major age-related diseases.

Once considered just a blood coagulation vitamin, vitamin K2 has now achieved the status of a multi-function vitamin. If you are interested in a longer and healthier life, consider supplementing with this often-overlooked nutrient. ~

Note: Newer blood-thinning drugs such as Pradaxa (dabigatran) and Eliquis (apixaban) are not affected by vitamin K intake, meaning you can take full-dose vitamin K and not compromise the desired anticoagulant effects.

https://www.lifeextension.com/magazine/2014/9/the-surprising-longevity-benefits-of-vitamin-k

Oriana:

It seems that Vitamin K is the most important among the least publicized nutrients. In youth, our microbiome can produce enough K2 for our needs. Later, supplementation may be the only thing that can spare you from having calcified arteries. (Sure, you can try relying on natto, as the long-lived Japanese do; but few non-Japanese like those gooey little beans.)

Don't even think of taking a calcium and vitamin D supplement without supplementing with K2 as well. You don't want that extra calcium to accumulate in your blood vessels, kidneys, and other organs.

*

Ending on beauty:

Let me sleep no more beside the harp.
Look at my hand, still a boy’s hand.
Do you think it could not span
the octaves of a lover’s body?

~ Rilke, David Sings Before Saul (2)
 


