*
THE THIRD LANGUAGE
In high school I kept a diary
in English, so if the teacher caught me,
and she did, she wouldn’t understand.
I had a small vocabulary and even less
to say. “The weather’s getting warm,”
I confessed in a secret language.
My first class in Los Angeles,
on June evenings,
in the palm-plumed dusk,
was a typing course.
For rhythm, the instructor played
“The Yellow Rose of Texas”
above the cross-fire
of night students pounding
on the jamming keys.
I machined a sinister idiom:
Dear Sir: Due to circumstances
beyond our control —
College was a subordinate clause.
I was a mouse in the auditorium,
scribbling neat, useless notes.
One time I graded three hundred
freshman papers on the death penalty.
I didn’t want to graduate.
Life was penalty enough.
To survive I had to learn
a third language,
a code it takes nightmares to crack:
words husked from the grain of things,
Adamic names that fit
animals like their own pelts —
fluent as flowers, rare as rubies,
occult atoms in lattices of sleep.
To be silent and let it speak.
~ Oriana
*
WE’VE LOST THE GREAT MILAN KUNDERA
Milan Kundera, one of the biggest names in European literature in recent decades, has died in Paris aged 94.
His best-known work was his 1984 novel The Unbearable Lightness of Being.
Anna Mrazova, a spokeswoman for the Milan Kundera Library in his home city of Brno in the Czech Republic, said he had died after a long illness.
He was a towering figure in Czech literature, but his scathing criticism of Czechoslovakia's communist regime saw him flee to France in 1975.
A poetic and satirical author, Kundera won praise for his novels’ observation of both politics and everyday life.
Czech Prime Minister Petr Fiala said his work reached "whole generations of readers across all continents and achieved global fame".
“He leaves behind not only notable fiction, but also significant essay work.”
Kundera was born in 1929 into an elite Czech family; his father, a piano teacher and a student of the composer Janacek, ensured that Kundera received musical training at an advanced level.
Kundera studied in Prague, becoming a lecturer in world literature. He joined the ruling Communist Party and was initially an ardent member.
But his writing soon got him into political trouble. His first novel The Joke — a black comedy published in 1967 — led to a ban on his writing in Czechoslovakia.
In 1970 he was asked to leave the party after expressing support for the Prague Spring movement, the period of political liberalization crushed by the 1968 Soviet invasion.
Kundera's activism led to his dismissal from his teaching post and his novels were removed from public libraries, while the sale of his work was banned until the fall of the Communist government in 1989.
For a short time he performed as a jazz trumpeter, before emigrating to France in 1975 with his wife Vera, settling first in Rennes and then Paris. He became a naturalized French citizen in 1981, two years after he was stripped of his Czech nationality, and eventually wrote in French.
He soon secured a reputation as a ground-breaking author with The Unbearable Lightness of Being, which told the story of four Czech artists and intellectuals and a dog caught up in the brief period of reform that ended when Soviet tanks rolled into Prague.
The book was adapted for the screen in 1987, starring Juliette Binoche and Daniel Day-Lewis. But Kundera expressed dissatisfaction with the film and with what he perceived as a lack of acceptance of the novel in the modern world.
"It seems to me that all over the world people nowadays prefer to judge rather than to understand, to answer rather than to ask," he told his friend and writer Philip Roth in the New York Times.
“So that the voice of the novel can hardly be heard over the noisy foolishness of human certainties.”
It was eventually translated into Czech and became a bestseller in his homeland in 2006.
His 1979 work The Book of Laughter and Forgetting spanned seven narratives and contained elements of the magic realism genre, while in 1988 he wrote one of his best novels, Immortality.
In 1985 he received the Jerusalem Prize — an award given to writers whose works deal with themes of human freedom in society.
And while he was a frequent contender for the Nobel Prize for literature, the award remained elusive.
In 2008 he was beset with more political trouble when he was accused of betraying a Czech airman working for US intelligence.
He issued an unprecedented and heated denial to Czech news agency CTK, prompting an open letter of support from fellow writers including JM Coetzee and Salman Rushdie.
It was not until 2019 that he and his wife had their Czech citizenship restored by Prime Minister Andrej Babis, 40 years after they had it revoked.
Milan Kundera was lauded for having a distinctive voice, although he was sometimes criticized for his portrayal of women and preoccupation with the male gaze.
His final novel in 2014, The Festival of Insignificance, originally published in Italian, received mixed reviews, with some describing it as a “battle between hope and boredom.”
https://www.bbc.com/news/world-europe-66173059
Juliette Binoche and Daniel Day-Lewis in the movie based on The Unbearable Lightness of Being.
from Wiki:
Kundera's early novels explore the dual tragic and comic aspects of totalitarianism. He did not view his works, however, as political commentary. "The condemnation of totalitarianism doesn't deserve a novel", he said. According to the Mexican novelist Carlos Fuentes, "What he finds interesting is the similarity between totalitarianism and the immemorial and fascinating dream of a harmonious society where private life and public life form but one unity and all are united around one will and one faith". In exploring the dark humor of this topic, Kundera seems deeply influenced by Franz Kafka.
Kundera considered himself a writer without a message. In Sixty-three Words, a chapter in The Art of the Novel, Kundera tells of a Scandinavian publisher who hesitated to publish The Farewell Party because of its apparent anti-abortion message. Not only was the publisher wrong about the existence of such a message, Kundera explained, but, "I was delighted with the misunderstanding. I had succeeded as a novelist. I succeeded in maintaining the moral ambiguity of the situation. I had kept faith with the essence of the novel as an art: irony. And irony doesn't give a damn about messages!”
https://en.wikipedia.org/wiki/Milan_Kundera
*
THE ORIGIN OF CLICHÉS
Clichés are viewed as a sign of lazy writing, but they didn’t get to be that way overnight; many modern clichés read as fresh and evocative when they first appeared in print, and were memorable enough that people continue to copy them to this day (against their English teachers’ wishes). From Shakespeare to Dickens, here are the origins of common literary clichés.
Add Insult to Injury
The concept of adding insult to injury is at the heart of the fable “The Bald Man and the Fly.” In this story—which is alternately credited to the Greek fabulist Aesop or the Roman fabulist Phaedrus, though Phaedrus likely invented the relevant phrasing—a fly bites a man’s head. He tries swatting the insect away and ends up smacking himself in the process. The insect responds by saying, “You wanted to avenge the prick of a tiny little insect with death. What will you do to yourself, who have added insult to injury?” Today, the cliché is used in a less literal sense to describe any action that makes a bad situation worse.
Albatross Around Your Neck
If you studied the Samuel Taylor Coleridge poem “The Rime of the Ancient Mariner” in English class, you may already be familiar with the phrase albatross around your neck. In the late-eighteenth-century literary work, a sailor recalls shooting a harmless albatross. The seabirds are considered lucky in maritime folklore, so the act triggers misfortune for the whole crew. As punishment, the sailor is forced to wear the animal’s carcass around his neck.
Today, the image of an albatross around the neck is used to characterize an unpleasant duty or circumstance that’s impossible to avoid. It can refer to something moderately annoying, like an old piece of furniture you can’t get rid of, or something as consequential as bad luck at sea. Next time you call something an albatross around your neck, you can feel a little smarter knowing you’re quoting classic literature.
Forever and a Day
This exaggerated way of saying “a really long time” would have been considered poetic in the sixteenth century. William Shakespeare popularized the saying in his play The Taming of the Shrew (probably written in the early 1590s and first printed in 1623).
Though Shakespeare is often credited with coining the phrase, he wasn’t the first writer to use it. According to the Oxford English Dictionary, Thomas Paynell’s translation of Ulrich von Hutten’s De Morbo Gallico put the words in a much less romantic context. The treatise on the French disease, or syphilis, includes the sentence: “Let them bid farewell forever and a day to these, that go about to restore us from diseases with their disputations.” And it’s very possible it’s a folk alteration of a much earlier phrase: Forever and aye (or ay—usually rhymes with day) is attested as early as the 1400s, with the OED defining aye as “ever, always, continually”—meaning forever and aye can be taken to mean “for all future as well as present time.”
He may not have invented it, but Shakespeare did help make the saying a cliché; the phrase has been used so much that it now elicits groans instead of swoons. Even he couldn’t resist reusing it: Forever and a day also appears in his comedy As You Like It, written around 1600.
Happily Ever After
This cliché ending line to countless fairy tales originated with The Decameron, penned by Italian writer Giovanni Boccaccio in the fourteenth century. A translation of the work from the 1700s gave us the line, “so they lived very lovingly, and happily, ever after” in regard to marriage. In its earlier usage, the phrase wasn’t referring to the remainder of a couple’s time on Earth. The ever after once referred to heaven, and living happily ever after meant “enjoying eternal bliss in the afterlife.”
It Was a Dark and Stormy Night
Edward Bulwer-Lytton’s 1830 novel Paul Clifford opens with “It was a dark and stormy night.” Those seven words made up only part of his first sentence, which continued, “the rain fell in torrents—except at occasional intervals, when it was checked by a violent gust of wind which swept up the streets (for it is in London that our scene lies), rattling along the house-tops, and fiercely agitating the scanty flame of the lamps that struggled against the darkness.”
Regardless of what came after it, that initial phrase is what Bulwer-Lytton is best remembered for today: an infamous opener that has become shorthand for bad writing. No artist wants to be known for a cliché, but Bulwer-Lytton’s legacy as the writer of the worst sentence in English literature may only be partially deserved. Though he popularized “It was a dark and stormy night,” the phrase had been appearing in print—with that exact wording—decades before Bulwer-Lytton opened his novel with it.
Little Did They Know
The cliché little did they know, which still finds its way into suspenseful works of fiction today, can be found in works published in the nineteenth century, according to writer George Dobbs in a piece for The Airship, but it was truly popularized by adventure-minded magazines in the 1930s, ’40s, and ’50s. The phrase was effective enough to infect the minds of generations of suspense writers.
Not to Put Too Fine a Point on It
Charles Dickens is credited with coining and popularizing many words and idioms, including flummox, abuzz, odd-job, and—rather appropriately—Christmassy. The Dickensian cliché not to put too fine a point upon it can be traced to his mid-nineteenth-century novel Bleak House. His character Mr. Snagsby was fond of using this phrase meaning “to speak plainly.”
Pot Calling the Kettle Black
The earliest recorded instance of this idiom appears in Thomas Shelton’s 1620 translation of the Spanish novel Don Quixote by Miguel de Cervantes. The line reads: “You are like what is said that the frying-pan said to the kettle, ‘Avant, black-browes.’” Readers at the time would have been familiar with this imagery. Their kitchenware was made from cast iron, which became stained with black soot over time. Even as cooking materials evolved, this metaphor for hypocrisy stuck around.
https://lithub.com/before-they-were-cliches-on-the-origins-of-8-worn-out-idioms/
*
LENIN WAS USED TO THE EUROPEAN STANDARD OF LIVING
~ Lenin was a graphomaniac of his time (a person with an irresistible impulse to write), penning two long articles a day that sounded like this:
“There can be no doubt that we have before us a certain international ideological current, which is not dependent upon any one philosophical system, but which is the result of certain general causes lying outside the sphere of philosophy.”
How exactly did this learned scribe relate to peasants and working-class people of the Russian Empire who couldn’t read or write?
The same way a child who doesn’t understand much of the “adult talk” looks up to his parent: as an unconditional voice of authority.
The lower classes that constituted more than 90% of the populace in Russia were accustomed to being completely subjugated to the will of the well-educated tsar, and they didn’t expect their new supreme leader to be any different.
As long as he spoke on their behalf and took their interests to heart, they bowed down to his authority. Lenin delivered on both counts.
After spending several years in Germany and Switzerland, Lenin became accustomed to the comforts of Western Civilization. Upon his return to Russia, he found a country lagging many decades, if not a century, behind Europe in development.
To his dismay, there was not a single house in Moscow that had electricity, running water, and a telephone installed.
Zinaida Morozova, the wife of a prominent millionaire (an oligarch, in modern lingo), had been a progressive woman for her day and age. She’d taken note of new developments and inventions in Europe and brought a countryside mansion in Gorky up to modern standards.
Lenin's mansion in Gorky
Although it was located thirty miles from the Kremlin, Lenin decided to move into Morozova’s mansion to enjoy electricity (Zinaida had a German electric generator installed), running water, and a telephone connection, as well as multiple bedrooms, a summer veranda, a music hall with a grand piano, and European fixtures.
Lenin purchased a Rolls-Royce but then realized there was no proper road between Moscow and Gorky. Two sleds were mounted on the front wheels and caterpillar tracks on the rear wheels to allow brutalsky transportation for the leader of the proletariat.
Lenin's Rolls-Royce converted to a more practical vehicle due to bad roads
Ironically, a man who dedicated his whole life to fighting aristocracy chose to live like a nobleman.
Another curious thing: after a stroke left him incapacitated, an electric invalid carriage was ordered for him from London. However, Lenin didn’t enjoy it for long before passing away in 1924.
Dmitry, Lenin’s favorite brother and fellow revolutionary, moved into Morozova’s mansion after the war. Dmitry, who had lost both of his legs, enjoyed riding around the courtyard of the mansion in his brother’s electric carriage. ~ Misha Firer, Quora
Boris Gorlin:
Not only Lenin but all the Soviet leaders, or rather the power grabbers, the so-called “nomenclatura”, lived in luxury, even at times of famine and massive hardship that rank-and-file Soviet citizens suffered.
My boss at an engineering company (“proyektniy institut”) in the provincial Soviet city of Odessa tried to boost his career, and at the end of the 1970s started working as a volunteer at a local, low-level (city district) Communist Party committee (“raikom”). One day I found him in his office, very much distressed, smoking one cigarette after another. When I asked what had happened, he told me about his “discovery.”
Any Party Secretary, even at the district level, starting from the 2nd one (not to mention the 1st, or those at the city, regional, republican, and national levels), lived “under communism”. They didn’t pay for anything. Every morning a big box with food items and other goodies was delivered to the doors of their apartments. A new TV model was immediately delivered to them, for free, sent by a nearby department store manager. Etc.
Naturally, they highly valued those benefits and would do anything in order to keep their positions and climb higher up the party “ladder”. They were not elected but appointed, reporting only to their bosses and demonstrating extreme loyalty, no matter what… This system of party discipline and loyalty was called “democratic centralism”.
That was the system Lenin established and Stalin fully developed and strengthened. Stalin, for instance, had at least 20 mansions around the country, with full-time personnel at each of those mansions, thousands of military guards, etc., awaiting the “leader’s” visit at any time. Formally, he didn’t “own” them, and didn’t need formal ownership; they were at his disposal for life. He appeared in public wearing “modest” semi-military clothes (like Hitler, Mao, and other 20th-century dictators) and regular-looking boots (made to individual order by the best shoemakers in the country), and smoked a pipe (the tobacco was delivered from a specially kept and neatly maintained field in Georgia, as was individually made wine, etc.). A lot of “communist common sense”…
Now, when I read nostalgic memoirs about “great times” and good life in the Soviet Union, I think their authors were children of those nomenclatura apparatchiks. Or their servants who were grabbing leftovers…
Misha Firer:
Moreover, Putin re-established the same communist system — “apparatchiks” have so much money they don’t have any inkling of its value, and the only thing required of them is loyalty to the “vertical”.
Boris Gorlin:
In March of 1921, the Kronstadt rebellion was cruelly suppressed by Lenin’s order. More than 10,000 people were ruthlessly killed.
The life of the “nomenclatura” was in stark contrast with the life of the vast majority of the Soviet people. It’s a well-known fact. And it was started by Lenin. Not only he, but his allies also got, and fully used, a chance to live in the former “bourgeois” mansions, enjoying plenty of delicious food, luxurious amenities, multiple servants, etc. — conditions that were unthinkable for the “working masses”. It all led to total disappointment with the very concept of socialism (not to mention a mystical Utopia of Communism). That’s why, when four former party secretaries (freshly elected presidents of the Soviet republics) decided in 1991 to dismantle the USSR, NOBODY stood up to protect and keep the “homeland of victorious real socialism”.
Chaim Magal:
Bella Akhmadulina was a famous Soviet/Russian poet. She gave an interview (it’s on YouTube) about the experiences of her grandmother as a helper in Lenin’s household during the civil war. Lenin often became short-tempered, especially when his habits were disrupted. He cursed and shouted at her when she could not make a proper cup of coffee with cream (not just plain milk)…
Misha Firer:
The mansion Lenin moved into belonged to one of the richest families in Tsarist Russia. In today’s terminology, he was an oligarch. Lenin deliberately chose to live like a major nobleman. His choice of vehicle also reflected that he viewed himself as a top-class aristocrat, definitely not a minor one.
His taste in food could be a result of living in German speaking countries for sixteen years. Ditto tidiness.
After I spent four months in the German part of Switzerland, I lost all interest in food. In my experience of working at a British school, Brits ate the kind of low-quality food even the cleaning ladies were averse to.
I do agree that Lenin possessed the power of persuasion, but his success hinged on his cold-bloodedness. He was a gang leader of violent criminals.
More than a hundred years later, their descendants are still in charge of this country. Russia has been ruled by a mafia since 1917. We have to thank Lenin for that.
Oriana:
It’s quite possible that Lenin suffered from syphilis, and it may have been the primary cause of his early death (aside from the strokes and possibly being poisoned by Stalin). But let’s not open any more cans of worms. This post is complex enough as is.
*
DEMOGRAPHIC TRAGEDY IS UNFOLDING IN RUSSIA
Over the past three years the country has lost around 2 million more people than it would ordinarily have done, as a result of war, disease and exodus. The life expectancy of Russian males aged 15 fell by almost five years, to the same level as in Haiti. The number of Russians born in April 2022 was no higher than it had been in the months of Hitler’s occupation. And because so many men of fighting age are dead or in exile, women now outnumber men by at least 10 million.
Post office in the Tambov region
*
PUTIN’S STATE OF MIND
~ A Christian cross and the letter Z, a symbol of the invasion of Ukraine, are fused into one on the Orthodox pilgrimage route in Kirov Oblast (“Russia, I’m not ashamed,” reads a shoulder patch on a pilgrim’s military fatigue jacket).
Russia is fast becoming Skrepostan. Skrepa, literally a clamp or fastener, is a “spiritual bond” promoted by the Kremlin in cooperation with the Russian Orthodox Church.
The Velikoretsky Cross Procession is a 150-kilometer route from Kirov to the village of Velikoretskoye and back to Kirov.
Weather conditions are normally bad, which is a good sign to Russian Orthodox believers: it’s a test that God sends them, and they accept it as a grace.
Of those who leave the way of the cross due to a breakdown, illness, or legs worn down to bloody calluses, believers say, “God didn’t let them into kingdom come.”
*
RUSSIANS’ HOPES FOR A BRIGHT FUTURE
~ Russians’ hopes for a bright future for their country have collapsed to a historic low.
As the war with Ukraine drags on and drone raids and shelling of Russian territory become the norm, citizens are losing faith in a brighter future.
As of June 2023, 58% of Russians said the country was facing "hard times" ahead, a public sentiment poll conducted by Levada revealed.
Over the past year, the share of people with negative views of Russia’s future jumped by 10 percentage points, reaching the highest level since the poll began in 2008.
The previous record of 56%, set in 2009 during the global financial crisis, which brought Russia its strongest economic decline since the 1990s, has now been broken.
Further to that, a quarter of those polled by Levada (24%) said that the country is "going through hard times now."
Only 11% of Russians believe that “hard times are behind us,” the lowest share in the past seven years.
So, once again:
11% of Russians think that the hard times have passed.
24% think that the hard times are now.
58% think that the hard times are ahead.
Altogether, 82% of Russians feel gloomy (and 7% refused to answer). Besides, only 3–4% of randomly chosen respondents agree to answer survey questions at all, and even among these 4% of open-minded citizens, we see such moods.
That’s a far cry from Putin’s “80% support for the special military operation”.
However, while Russians don’t hope for a better future, most of them still say that things are going “in the right direction” in the country: 61% of respondents picked that option.
I don’t know what to say about that. “Life is bad and going to become worse, but we are moving in the right direction?” That’s just insanity.
Rather, it’s the standard unspoken requirement of unconditional approval of the government’s policy that kicks in automatically when dealing with anyone who asks questions on behalf of the state.
Still, even with this unspoken burden, 39% didn’t bow in agreement.
Among those who believe that the country is "moving to a dead end", half (47%) explained their opinion by the ongoing war, and a fifth (20%) complained about inflation, declining incomes and rising poverty.
What can I say? It was hard to destroy Russia, which is so rich in resources, but Putin managed.
~ Elena Gold, Quora
This is key to understanding the Russian mindset that baffles Westerners: Russians regard suffering as good and admirable, and in its absence they wail in frustration. The war in Ukraine against “devil worshippers” is a godsend.
Even in the Soviet Union, which was nominally atheistic, self-sacrifice to spread communism around the world was actively promoted by the government, and living poorly to support the war in Afghanistan was the highest social value.
The flag of the Russian [Tsarist] Empire with Saint Tsar Nicholas II next to the crucified Jesus Christ. There are no portraits of Putin, who is believed to be a Western agent and a materialist.
The history of the Velikoretsky Cross Procession begins in the 14th century.
According to legend, the image of Nicholas the Wonderworker appeared to the peasant Agalakov on the banks of the Velikaya River.
A chapel was built in the village, which became a place of pilgrimage, and the icon itself was transferred to Khlynov (the former name of the city of Kirov).
Every year, pilgrims led by clergy accompany the icon of St. Nicholas the Miracle Worker to the village of Velikoretskoye and back to Kirov.
Representatives of the Vyatka Cossack army play a special role in the procession. In the column they go ahead of everyone, next to the icon and the clergy; the rest of the pilgrims follow them.
The Cossacks traditionally accompany the Velikoretsky Cross Procession every year, but now they have patches with the letter Z on their uniforms.
Why does the flag of the Russian Empire accompany the Orthodox procession?
“White symbolizes the church, yellow symbolizes the king, and black symbolizes the Fatherland.
“The fact is that all this suffering and misfortune (World War II, collectivization, the legalization of abortion, and what is happening with Ukraine now) is a bitter medicine. Not a punishment! God does not punish; it is a remedy for the sin that lies in the people.
“The sin of apostasy and betrayal of the king in the 20th century!”
~ Misha Firer, Quora
Elena Gold:
The last photo is symbolic: “Contract service in army: From 204,000 rub. • Social perks for the family” — at the start of the war, they were paying 195,000 rub. and it was US $3,500. Now they pay 9,000 rub. more — and it’s only $2,200. Life is getting cheaper in Russia.
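To make the arithmetic behind that comparison explicit (a rough calculation using only the figures quoted above): 195,000 rubles ÷ $3,500 ≈ 56 rubles per dollar at the start of the war, versus 204,000 rubles ÷ $2,200 ≈ 93 rubles per dollar now. In other words, the nominal pay rose by less than 5% while the ruble lost roughly 40% of its dollar value.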
Jake Holman:
Putin’s mindset is simple and very Russian: “There is no happiness.”
Tim Orum:
Conditioning the masses to believe it is righteous to suffer at the hands of the Powers That Be (Elohim) for “the greater good”, whether it be a God or a State, is the most obvious indication of Statism.
Drew Phillips:
If you need to wear a patch that says you're not ashamed, you're probably ashamed somewhere deep down. It wouldn't occur to someone who's not ashamed to proclaim that fact to the world.
Casper Hall:
These values and attitudes are comparable one-to-one to religious submission in the Middle Ages. Only when we cut off that mental oppression did resources begin to spread widely through the population, and development, through education, almost exploded. Something has probably been lost in the subsequent frenzy of consumption, but going back to feudal society and submitting to a religious narrative is not the solution; it is cowardice.
Du Toit Van Schalkwyk:
For decades, for generations, Russians were fed propaganda about how vile and bad capitalism and consumerism are, and how much better and more civilized communism is. So when the USSR ended and the Russian Federation embraced capitalism, they began acting the way they had always been told capitalists act, and turned their society into the twisted version of capitalism which they had been led to believe is the reality of it.
Robert Brock:
Nice, sincere people. And totally brainwashed. Maybe time to come out of the 14th century.
Simon Yemčenko:
It seems that Russians don’t like suffering per se. What they really don’t like is to be held responsible for their life decisions; that’s why they tend to proclaim their negative experiences to be some good thing, when in fact they could have prevented much of the suffering by simply doing the right thing instead of blindly following the authority.
Oriana:
This reminds me of the Catholic attitude toward suffering: "God sends suffering to those he loves." The more you suffer here on earth, the less time you’ll have to spend in the fires of Purgatory. So, what could be better than suffering? In fact, some saints allegedly prayed to be sent more suffering.
Markus Brinkmanis:
Misha, you misread the question. This is the mindset of the smelly slaves in Russia (холопы смердящие [stinking peasants]), a historical expression designating common people. But it is not the mindset of Putin, a small, vengeful, stupid, and cowardly person. He will not sacrifice himself for anything.
*
IS COMMUNAL CHILDCARE BEST?
What child care model did you grow up with?
I grew up in a very interesting situation, well outside the "norm". I was raised in community — my parents purchased land with multiple other families in the 1970s, and created a kind of commune. Each family has their own house on the 10-acre property, and there is a shared main house, where there is a communal (restaurant style) kitchen. The group, which now consists of 3 core families, has dinner together every night; each adult member is responsible for one night of cooking a week. This arrangement meant that there was pretty much always available "childcare" — one or more of the adults could watch all of the kids at any given time, or pick us up from school. That freed up the rest of the parents to work or tend to other matters. The kids always had playmates.
In terms of childcare, it remains the ideal in my mind. No one paid for childcare, the parents were not completely burned out, and the kids grew up with multiple adult influences and caretakers. My husband and I talk all the time about how we can recreate something like this, but haven't gotten there yet. Maybe someday we will move back to the community where I grew up in California, but for now we're going at it ourselves in upstate NY.
Have you looked to other mothers when making these decisions? Who?
Almost every family I know is constantly trying to puzzle together the various factors of childcare, career, etc. It is a systemic, societal issue that can rarely be "solved" unless there is a lot of money and/or compromise involved. The mothers I know who are thriving often have family help — a grandmother who lives nearby, for example. This model seems to be the most sustainable, as well as mutually beneficial for all parties: cost effective for the families, and fulfilling for the grandparent/caregiver.
I look to my own mother for inspiration on how to maintain a creative career (she is a radio producer and storyteller), while also dedicating herself to her family and children. Granted she had help from her community in a way I do not, but she paved the way for a type of decision making that has stayed with me: I have learned from her that I must follow my creative passions to their ends in order to be fulfilled, and that that fulfillment will make me a better mother and an inspiration to my own daughters down the line.
What would your ideal work & childcare arrangement look like?
My ideal arrangement would be a hybrid model that included both free childcare options and subsidies for parents who are caring for their children. For example, here is an ideal day: Drop my children off at a COMPLETELY FREE, vetted daycare from 9am-2pm, during which time I could work and/or take care of household tasks. Once my husband or I picked them up from school, we'd punch a time card that meant we were on the clock, and we'd get paid for our childcare work by the government for the rest of the "work day", from 2pm-6pm.
This way, whoever was doing the childcare would not be "losing" work hours (when in fact caring for children is a massive amount of real work), and the anxiety about money would be unraveled from the availability and accessibility of childcare. This would also ensure that small children would have a healthy and happy balance between a social school day and time with their parents. If and when we needed to work more hours than this allowed, we could hire part time caregivers (whose salaries would also be subsidized to ensure they were making a great living wage) to help on a part time basis.
Another ideal arrangement would be a co-op model, where multiple families join forces to create a childcare/work model that works for everyone. This might entail having an office for freelancers to work, while one of the parents watched a group of children, and then they'd trade off at a certain point. Of course, a model like this — essentially the pooling of resources — is only necessary when those resources aren't readily available in the first place...so let's go ahead and subsidize the shit out of early childcare, shall we?
What's one thing you'd definitely get done if you had just one more hour of care?
I would move my body every day, and maybe even hang out with my husband sometimes! Because I necessarily prioritize work so much when we have childcare, these things are the first to fall by the wayside.
https://whocaresconvos.substack.com/p/convo-1-molly-prentiss
Mary: THE VALUE OF GRANDMOTHERS
I think a kind of communal childcare used to be more the norm before the "nuclear family," consisting of only parents and children, became the usual family unit. When a family was multi-generational, including grandparents, childcare was not the burden of the mother alone, but shared by the grandmother. This is still commonly true in many communities, especially for single mothers.
In my own situation we lived with my grandmother, whose attention and authority supplemented and eased the work of my mother, and was particularly important because there were not one or two, but seven children to care for. Modern families in the US are more fragmented: the generations are often separated by great distances, both because people move to chase job opportunities and because it has become so "normal" for the nuclear family to live apart from the older generation. I think this is a relatively new situation, closely related to industrialization.
What we lose is the value of those grandmothers — that evolutionary advantage gained by allowing us, almost uniquely, to survive long past our reproductive lives. We are the only creatures who go through menopause and continue living for decades. Why? Well there is a lot of speculation, but most of it boils down to the advantage of experienced help in childcare, as well as access to all the knowledge grandmother has to share about survival, knowledge gained from her own experience and knowledge gained from her own grandmothers. A grandmother is a treasury of learned experience reaching back through the generations. She is apparently essential to the development of human culture, remembering for us what we need to know, without having to start from scratch again and again.
The crone is thus both wise and essential to human development.
Another evolutionary development unique to humanity is our long, long period of immaturity. Babies are born helpless, and will take not only years, but decades to become independent. The development of our idea of childhood is interesting. Some say that we invented the idea of childhood centuries ago, and have been elaborating on it ever since. Children were once deemed mature much earlier, as in societies where they are considered able to survive on their own at ages 7 to 9. The time until adulthood has increased to at least two decades, and recently, in many ways, to three.
Physically, the body doesn't finish developing until the second decade, but the increasing length of childhood, or immaturity, is now more related to social issues, including work and technology. These of course also shape the question of independence — how long does it take to achieve full independent adulthood, with the ability to survive, to support yourself, to have a viable career, to be able to create a new family and sustain it? Much longer than it used to, and I think that time of immaturity and preparation is likely to keep increasing as society becomes more technologically dependent and the tasks of education and work more complex.
Oriana:
And there are some people who continue living with their parents — or at least their mother — even into their forties. I've met two such women. They weren't the least bit embarrassed. They saw themselves as helping their parents in their old age, while saving a lot on rent — which in turn made it possible for them to travel, take workshops, save up for a new car, and so forth.
It seems like an ideal situation until we ponder the question of dating and possibly getting married. I particularly wondered about the more attractive and sociable of those two women. But then in the past it was not so unusual for the new spouse to move in. Three generations living in the same household is still not all that unusual in Europe. As a child-care arrangement, it's hard to beat. When it comes to the need for privacy, it's a different story.
So here we have a situation where what is best for the children is not necessarily what is best for the adults. Tough: life generally requires compromises and accepting less-than-ideal living situations. Somehow we manage to adjust.
*
TAKE A BREAK FROM DOPAMINE
~ Are you bored by being alone with your thoughts? Does the thought of cooking a meal, brushing your teeth, or taking a walk without a podcast, TV show, or music playing send you into a cold sweat? If so—according to a trend circulating on social media—you’re a great candidate for something called a “dopamine detox.”
It involves identifying behaviors that you turn to too frequently for a quick boost—mainly things like social media, gaming, and watching TV—then taking a break from them for a few days to a week. The goal is to recalibrate your brain’s reward pathways.
Though some evidence suggests that taking a break from certain unhealthy behaviors can prove transformative, most research focuses on clinical addictions, not the daily temptations we all face. That hasn’t stopped content creators from overstating the science to promise unmatched happiness, productivity, academic success, and lots of money from a digital detox—all unrealistic claims. It’s just a temporary break, and while that can be nice, it won’t change your life. Real change takes more active work.
But if you keep your expectations in check, you may find that a digital detox is a useful tool for self-reflection.
DOPAMINE’S ROLE IN THE BRAIN
A “dopamine detox” focuses on that particular brain chemical because it’s sensitive to stimuli like social media. Temporarily depriving yourself of such triggers should theoretically recalibrate your brain’s stores of dopamine and therefore make your pleasure centers more balanced, the claims go.
Of course, brain chemistry is more complicated than that. Dopamine is just one neurochemical that contributes to happiness, and unplugging for a few days won’t rewire your mind. But it might help you recognize the triggers you’re leaning on, says Dr. Anna Lembke, a psychiatry professor at the Stanford University School of Medicine and author of the book Dopamine Nation: Finding Balance in the Age of Indulgence. “When we’re consuming digital media,” she says—like TV shows, TikTok, podcasts, and music—“it releases a lot of dopamine in a specific part of the brain called the reward pathway.” When dopamine is sent hurtling down this pathway, it sets off a good feeling in the brain. Any rewarding stimulus—a piece of candy, a “like” on a post, or the start to your favorite song—can give you this little hit.
This pathway works best when it gets to hum at a natural level and spike at different points throughout the day, like at mealtimes. But most of the content on our phones, says Lembke, is designed to activate the reward pathway as strongly as possible, meaning that frequent use theoretically releases a “firehose of dopamine stimulation.”
Our understanding of how the brain responds to ceaseless stimulation from our gadgets comes primarily from research on drug addiction, which commandeers the same reward pathways. “In order to compensate,” says Lembke, “our brain starts to downregulate our own dopamine production and transmission, to bring it back to baseline.” A dopamine deficit, which can result from the extremes of all forms of addiction, can lead to feelings of depression and anxiety. “Now we need to keep engaging in these behaviors—ingesting digital media—not to feel good and happy, but just to feel normal,” explains Lembke. That’s where a detox can be helpful.
“Detox” is a misleading term in this context. The word describes the removal of something harmful and unnatural, but dopamine, made in the brain, is neither of those things—nor is it being removed. The practice is also sometimes called a dopamine “fast,” and while the goal is to starve that dopamine-specific reward pathway of constant activation, the chemical is still present and active throughout the brain.
What’s actually being cut out during this practice is whatever stimulus a person is hoping to feel less dependent on. A more apt (but less catchy) name for the routine might be “dopamine recalibration.” Really, it’s a commitment to breaking bad habits.
Attempting this recalibration isn’t just for people who feel like compulsive media use is taking over their lives, says Lembke. “I love that the younger generation is exploring digital detox and trying to experiment with how they feel when they’re not constantly engaged with our digital devices,” she says. “It’s only by stopping for a period of time that we can really see how this technology is impacting our mental health.”
The most effective “dopamine detox” will be a personalized one, says Lembke. Cutting down on the tech you use most often is an obvious place to start, but dopamine hits can come from lots of places. Lembke, for instance, says that the most powerful break she’s ever taken was from reading romance novels. Even though they weren’t on a screen, the compulsive way she’d churn through their predictable plot points indicated to her that the hobby had hijacked her reward system.
Even after four weeks—which is generally long enough to change a habit—she still craved the books. After taking inventory of her habits, she says, she “was finally able to trace it to listening to pop music, because almost all pop music is love songs. So I stopped listening to pop music, and that really helped me stop craving romance novels, which helped heal my brain to the point where now I can listen to all kinds of music and not crave reading.”
If there’s a habit or device that you feel has too strong a hold over you (maybe, for instance, going to the bathroom without your phone makes you feel antsy), it might be a good target for this approach.
WHAT TO EXPECT DURING A DOPAMINE FAST
Aside from scientific studies about drug addiction, there’s no clear research on what happens when you quit your brain’s favorite reward cold turkey. When it comes to how the brain interacts with social media, “all we really have is our clinical experience,” says Lembke.
“When we’re working with patients who have actually become pathologically addicted to digital media, they usually feel pretty bad for 10 to 14 days” when they first cut it out, she says. After that, she says, patients begin to be able to focus again, to slow down and enjoy activities that may have seemed boring before, like taking a quiet walk or cooking a meal. Gradually, because it’s not being used, the association between the problem behavior and the dopamine reward becomes weaker, making it easier for people to resume using their devices in a less problematic way.
A lot of the self-help content circulating about dopamine detoxes leans into what we know from clinical treatment of true behavioral addiction, but we know less about how more minor behavioral tweaks—like cutting down on social media for a week—affect the dopamine reward pathway. For people without an addiction, a stimulus fast doesn’t need to be methodical; there’s no real right or wrong length of time to try it. What’s more important is paying close attention to how you feel while doing it, which may help you notice automatic behaviors that may not have registered before, like Lembke’s pop-song habit.
Even a temporary step back can teach us a lot. “We’re constantly reacting to external stimuli, which means that we’re not really giving our brains a chance to form a continuous thought or staying quiet long enough to have spontaneous thoughts,” says Lembke. ~
https://time.com/6284428/does-dopamine-detox-work/
*
ARISTOTLE’S RULES FOR THE GOOD LIFE: BE WISE
On the subreddit AITA (“Am I the Asshole?”), users ask strangers to judge their behavior. A few recent ones: Am I the asshole for “telling my brother that he is undateable?” For “asking my girlfriend to dress better on a date night?” For “refusing to resell my Taylor Swift Tickets?” Some posts have become famous, or Internet famous, like the one from a guy who asked an overweight seatmate on a five-hour flight to pay him a hundred and fifty dollars for encroaching on his space. The subreddit promises, in its tagline, “a catharsis for the frustrated moral philosopher in all of us.”
What’s striking about AITA is the language in which it states its central question: you’re asked not whether I did the right thing but, rather, what sort of person I’m being. And, of course, an asshole represents a very specific kind of character defect. (To be an asshole, according to Geoffrey Nunberg, in his 2012 history of the concept, is to “behave thoughtlessly or arrogantly on the job, in personal relationships, or just circulating in public.”) We would have a different morality, and an impoverished one, if we judged actions only with those terms of pure evaluation, “right” or “wrong,” and judged people only “good” or “bad.” Our vocabulary of commendation and condemnation is perpetually changing, but it has always relied on “thick” ethical terms, which combine description and evaluation.
This way of thinking about ethical life—in which the basic question is who we are, not what we do—has foundations in a work of Aristotle’s from the fourth century B.C., known as the Nicomachean Ethics. A new translation and abridgment, by the University of Pennsylvania philosopher and classicist Susan Sauvé Meyer, comes with a new title: “How to Flourish: An Ancient Guide to Living Well” (Princeton). The original text, Meyer explains, has been whittled down to “Aristotle’s main claims and positive arguments, omitting digressions, repetitions, methodological remarks, and skirmishes with opponents.”
The volume is part of a series of new translations of ancient texts. Aristotle’s Poetics, for instance, is now “How to Tell a Story: An Ancient Guide to the Art of Storytelling for Writers and Readers,” and Thucydides’ “History of the Peloponnesian War” is now “How to Think About War: An Ancient Guide to Foreign Policy.” You can debate whether these name changes are kitschy or canny, but the title “How to Flourish” isn’t that much of a stretch, because the Nicomachean Ethics is one of the handful of texts chosen that might plausibly be considered a guide in a sense we recognize today. Still, if Aristotle’s ethics is to be sold as a work of what we call self-help, we have to ask: How helpful is it?
We know only a few things about the man who claimed to know how to flourish. He was born in 384 B.C., in a Macedonian city in what’s now northern Greece. His mother came from a wealthy family on the island of Euboea; his father was a court physician to a Macedonian king. Aristotle was seventeen when he left his native land for Athens, where he evidently encountered Plato and his Academy—the legendary circle of scientists, mathematicians, and philosophers. When Plato died, in 347, Aristotle left Athens. About his reasons we can only speculate: one theory is that Aristotle was moved by the perennial anxieties of the immigrant without citizenship in a time of political strife. Outside on the streets, the orator Demosthenes was decrying the wickedness of Macedonians.
A few years later, Aristotle was engaged to tutor a young Macedonian prince, who would later be known as Alexander the Great. One of the more vivid depictions of the philosopher appears in Mary Renault’s “Fire from Heaven,” the opening volume in her splendid trilogy of novels about Alexander’s life. First laying eyes on his tutor, the prince sees “a lean smallish man, not ill-proportioned, who yet gave at first sight the effect of being all head.” A second look “revealed him to be dressed with some care and with the elegance of Ionia, wearing one or two good rings. Athenians thought him rather foppish. . . . But he did look like a man who would answer questions.”
He was certainly that. During an extraordinarily fertile career, he raised, often for the first time, questions in science and philosophy that he treated so thoroughly it was many centuries before anyone could improve on his answers. In ethics, at least, there’s a decent case that no one has improved much on them.
“How to flourish” was one such topic, “flourishing” being a workable rendering of Aristotle’s term eudaimonia. We might also translate the term in the usual way, as “happiness,” as long as we suspend some of that word’s modern associations; eudaimonia wasn’t something that waxed and waned with our moods. For Aristotle, ethics was centrally concerned with how to live a good life: a flourishing existence was also a virtuous one.
For first-time readers of the Nicomachean Ethics, though, the treatise is full of disappointments. It is not, strictly, a book by Aristotle; a later editor evidently stitched it together from a series of lecture notes. (Aristotle’s father and son were named Nicomachus; the title may have honored one of them.) There are repetitions and sections that seem to belong in a different book, and Aristotle’s writings are, as Meyer observes, “famously terse, often crabbed in their style.”
Crabbed, fragmented, gappy: it can be a headache trying to match his pronouns to the nouns they refer to. Some of his arguments are missing crucial premises; others fail to spell out their conclusions.
Aristotle is obscure in other ways, too. His highbrow potshots at unnamed contemporaries, his pop-cultural references, must have tickled his aristocratic Athenian audience. But the people and the plays he referred to are now lost or forgotten. Some readers have found his writings “affectless,” stripped of any trace of a human voice, or of a beating human heart.
It gets worse. The book, though it purports to be about the question of how to flourish, is desperately short on practical advice. More of it is about what it means to be good than about how one becomes it. And then much of what it says can sound rather obvious, or inert.
Flourishing is the ultimate goal of human life; a flourishing life is one that is lived in accord with the various “virtues” of the character and intellect (courage, moderation, wisdom, and so forth); a flourishing life also calls for friendships with good people and a certain measure of good fortune in the way of a decent income, health, and looks.
Virtue is not just about acting rightly but about feeling rightly. What’s best, Aristotle says, is “to have such feelings at the right time, at the right objects and people, with the right goal, and in the right manner.” Good luck figuring out what the “right time” or object or manner is.
And virtue, his central category, gets defined—in a line that Meyer’s abridgment culls—in terms that look suspiciously circular. Virtue is a state “consisting in a mean,” Aristotle maintains, and this mean “is defined by reference to reason, that is to say, to the reason by reference to which the prudent person would define it.” (For Aristotle, the “mean” represented a point between opposite excesses—for instance, between cowardice and recklessness lay courage.) The phrase “prudent person” here renders the Greek phronimos, a person possessed of that special quality of mind which Aristotle called “phronesis.” But is Aristotle then saying that virtue consists in being disposed to act as the virtuous person does? That sounds true, but trivially so.
To grasp why it may not be, it helps to reckon with the role that habits of mind play in Aristotle’s account. Meyer’s translation of “phronesis” is “good judgment,” and the phrase nicely captures the combination of intelligence and experience which goes into acquiring it, along with the difficulty of reducing it to a set of explicit principles that anyone could apply mechanically, like an algorithm. In that respect, “good judgment” is an improvement on the old-fashioned and now misleading “prudence”; it’s also less clunky than another standby, “practical wisdom.”
The enormous role of judgment in Aristotle’s picture of how to live can sound, to modern readers thirsty for ethical guidance, like a cop-out. Especially when they might instead pick up a treatise by John Stuart Mill and find an elegantly simple principle for distinguishing right from wrong, or one by Kant, in which they will find at least three. They might, for that matter, look to Jordan Peterson, who conjures up as many as twelve.
Treated as a serious request for advice, the question of how to flourish could receive a gloomy answer from Aristotle: it may be too late to start trying. Why is that? Flourishing involves, among other things, performing actions that manifest virtues, which are qualities of character that enable us to perform what Aristotle calls our “characteristic activity” (as Meyer renders the Greek ergon, a word more commonly, but riskily, translated as “function”). But how do we come to acquire these qualities of character, or what Meyer translates as “dispositions”? Aristotle answers, “From our regular practice.”
In a passage missing from Meyer’s ruthless abridgment, Aristotle warns, “We need to have been brought up in noble habits if we are to be adequate students of noble and just things. . . . For we begin from the that; if this is apparent enough to us, we can begin without also knowing why. Someone who is well brought up has the beginnings, or can easily acquire them.” “The that,” a characteristically laconic formulation of Aristotle’s, is generally taken to refer to the commonsense maxims that a passably well-parented child hears about not lying, fighting, or talking with food in one’s mouth.
A search for what we might call “actionable” guidance will yield precious little. The text yields just enough in the way of glancing remarks to suggest that Aristotle may have been the sort of man who gave good advice. He says, for instance, that people in politics who identify flourishing with honor can’t be right, for honor “seems to depend more on those who honor than on the one honored.” This has been dubbed the “Coriolanus paradox”: seekers of honor “tend to defeat themselves by making themselves dependent on those to whom they aim to be superior,” as Bernard Williams notes. Replace “honor” with, say, “likes on Instagram” and you have a piece of advice that works as well now as it did in the fifth century B.C.
Aristotle suggests, more generally, that you should identify the vices you’re susceptible to and then “pull yourself away in the opposite direction, since by pulling hard against one fault, you get to the mean (as when straightening out warped planks).” Only the vivid image of the warped planks keeps this remark from being the type of sententious counsel that Polonius might have given his son.
The question then must be faced: Is there anyone who both needs to hear what Aristotle has to offer and would be able to apply it? Sold as a self-help manual in a culture accustomed to gurus promulgating “rules for living,” Aristotle’s ethics may come as a disappointment. But our disappointment may tell us more about ourselves than it does about Aristotle.
I started to study Aristotle at the same time I was learning to cook. At college five thousand miles away from home, I found that it didn’t take long to tire of baked beans and something called “jacket potatoes.” There was Indian takeout, of course, but it was not cheap, and the chefs came from parts of the subcontinent about as far away from my ancestral village as Oxford was from Jerusalem. It was in the days before obliging Indian grandmothers had YouTube channels, and no one seemed to have thought to write a book of recipes from our little corner of Kerala.
Not that a mere recipe would have helped. The philosopher Michael Oakeshott wrote that “nobody supposes that the knowledge that belongs to the good cook is confined to what is or may be written down in the cookery book.” Proficiency in cooking is, of course, a matter of technique. Sometimes we acquire our skills by repeatedly applying a rule—following a recipe—but when we succeed what we become are not good followers of recipes but good cooks. Through practice, as Aristotle would have said, we acquire judgment.
The existence of recipe books, I came to think, was itself a melancholy fact about a world of emigration and the growing distance between generations. The most widely available book of Indian recipes in Britain, by the actress and grande dame Madhur Jaffrey, was, in fact, assembled out of letters from her mother when she was a homesick drama student in nineteen-fifties London. Recipes were second best, the sign of a fall from a condition of organic wholeness.
As I blundered in the evenings at the single stovetop in the student kitchen I shared with half a rugby team, I was also working line by line through the text of the Nicomachean Ethics. I started with an English translation, and then turned to the original Greek, my familiarity with the language acquired over a frantic month at the geekiest summer camp in the world. Something about that juxtaposition—Aristotle in the mornings, clumsy pots of dal in the evenings—has inured me to all visions of moral philosophy as a simple variety of self-help.
At Oxford, the text had been taught the same way since at least the nineteenth century, in a series of weekly tutorials. Mine were solo, and my tutor was a man of enormous charisma and intensity. I would read out my essay on the set passages of text—the local word for such extracts was the charmingly English “gobbet”—while in the background a kettle came climactically to the boil. My tutor would pour out strong black Indian tea along with some weakly complimentary judgment—usually, in my case, the damning “Thorough.” And then we’d start. He passed on to me a scholarly maxim that he had heard from his own tutor, a man combining great erudition and eccentricity who wore gold-rimmed spectacles and dressed like a fugitive from the eighteenth century: “How to read Aristotle? Slowly.”
I tried to comply, but I was never slow enough. There was always another nuance, another textual knot to unravel. My tutor’s fundamental pedagogical principle was that to teach a text meant being, at least for the duration of the tutorial, its most passionate champion. Every smug undergraduate exposé of a fallacy would be immediately countered with a robust defense of Aristotle’s reasoning. When I stayed on as a graduate student and edged into the strange and wonderful world of Oxford’s scholars of ancient philosophy, I attended seminars where hours were spent parsing a single Aristotelian sentence. At one session, a nervous participant asked, “Would you think it precipitate if we moved on to the next sentence?”
What we were doing with this historical text wasn’t history but philosophy. We were reading it not for what it might reveal about an exotic culture but for the timelessly important truths it might contain—an attitude at odds with the relativism endemic in the rest of the humanities. The deliberate pace of the Aristotelians who taught me was not only an intellectual strategy but also an enactment of the lesson of the text I was reading. There is no shortcut to understanding Aristotle, no recipe. You get good at reading him by reading him, with others, slowly and often. Regular practice: for Aristotle, it’s how you get good generally.
A few days into my Ph.D. program, I met a fellow-student, a logician, who announced that he didn’t share my philosophical interests. “My parents taught me the difference between right and wrong,” he said, “and I can’t think what more there is to say about it.” The appropriate response, and the Aristotelian one, would be to agree with the spirit of the remark. There is such a thing as the difference between right and wrong. But reliably telling them apart takes experience, the company of wise friends, and the good luck of having been well brought up.
Even the philosophers who think that we would ideally act in accordance with statable principles must ask themselves how someone without experience could identify such principles in the first place.
I’m convinced that we are all Aristotelians, most of the time, even when forces in our culture briefly persuade us that we are something else. Ethics remains what it was to the Greeks: a matter of being a person of a certain sort of sensibility, not of acting on “principles,” which one reserves for unusual situations of the kind that life sporadically throws up. That remains a truth about ethics even when we’ve adopted different terms for describing what type of person not to be: we don’t speak much these days of being “small-souled” or “intemperate,” but we do say a great deal about “douchebags,” “creeps,” and, yes, “assholes.”
In one sense, it tells us nothing that the right thing to do is to act and feel as the person of good judgment does. In another sense, it tells us virtually everything that can be said at this level of generality. It points us in the right direction: toward the picture of a person with a certain character, certain habits of thinking and feeling, a certain level of self-knowledge and knowledge of other people. In Aristotle’s view, I might, in a couple of years, be just about ready to start studying ethics.
Aristotle’s world, like that of his teacher Plato, was one in which philosophy had to distinguish itself from rivals for the prestige and the authority it claimed. Those rivals, whom Plato regarded as hucksters and grifters, have been tarred forever by the disobliging epithet he gave them: “Sophist.” The Sophists of the ancient world were liberal with the “rules for living” that they gave the teen-age boys who were their most ardent (and paying) customers. Aristotle faced the challenge of courting the same constituency armed with a more modest product.
Later in his life, in 322, as anti-Macedonian sentiment surged among the Athenians after Alexander’s death, Aristotle left Athens to spend his final days in Chalcis, on his mother’s island of Euboea. An ancient source tells us that he did so to avoid the fate of Socrates, and to stop the Athenians committing “a second crime against philosophy.” He may not have been a modest man, but he hadn’t led a sheltered life.
Notoriously, the Nicomachean Ethics ends with a sort of plot twist. Until this point, Aristotle has spent most of his time on a patient explanation of the virtues of character, with only a brief digression to tell us about the virtues of the intellect. But the last few chapters contain a genuine surprise—if you have not been reading closely. The highest of the virtues, he announces, is not (as most of his original audience would have taken him so far to be saying) “good judgment” but, rather, one he labels with that beautiful Greek word sophia.
“Wisdom” is the usual translation, but Aristotle’s discussion of it makes it clear that he is using the word in what Meyer calls “a restricted technical sense.” Her rendering is “scientific learning.” Being sophos, Aristotle says, is “not only knowing what follows from the principles of a science but also apprehending the truth of the principles themselves.” Yet, if sophia is indeed a higher virtue than phronesis, mustn’t a life devoted to the exercise of “scientific learning” be a higher, a more flourishing, existence than one devoted to the exercise of “good judgment” in the practical spheres of living (running a household, ruling a city)?
It surely would have surprised the aristocrats in Aristotle’s original audience to be told that their ambitions to be rich, well regarded, and powerful fell short of the highest flourishing of which human beings are capable.
Aristotle had little hope that a philosopher’s treatise could teach someone without much experience of life how to make the crucial ethical distinctions. We learn to spot an “asshole” from living; how else? And, when our own perceptions falter, we continue to do today exactly what Aristotle thought we should do. He asserts, in another significant remark that doesn’t make Meyer’s cut, that we should attend to the words of the old and experienced at least as much as we do to philosophical proofs: “these people see correctly because experience has given them their eyes.”
Is it any surprise that the Internet is full of those who need help seeing rightly? Finding no friendly neighborhood phronimos ["prudent"] to provide authoritative advice, you defer instead to the wisdom of an online community. Its members help you to see the situation, and yourself, in a different light. “The self-made man,” Oakeshott wrote, “is never literally self-made, but depends upon a certain kind of society and upon a large unrecognized inheritance.” If self-help means denying the role that the perceptions of others play in making us who we are, if it means a set of rules for living that remove the need for judgment, then we are better off without it.
We have long lived in a world desperate for formulas, simple answers to the simple question “What should I do?” Some of my contemporaries in graduate school, pioneers in what was then a radical new movement called “effective altruism,” devised an online career-planning tool to guide undergraduates in their choice of careers. (It saw a future for me in computer science.) I’ve had bemusing conversations with teen-age boys in thrall to Andrew Tate, a muscled influencer who has as many as forty-one “tenets.” My in-box is seldom without yet another invitation to complete an online course on the fine-grained etiquette of “diversity, equity, and inclusion.” (Certificate awarded upon completion of multiple-choice test.)
But the algorithms, the tenets, the certificates are all attempts to solve the problem—which is everybody’s problem—of how not to be an asshole. Life would be a lot easier if there were rules, algorithms, and life hacks solving that problem once and for all. There aren’t. At the heart of the Nicomachean Ethics is a claim that remains both edifying and chastening: phronesis [practical wisdom, good judgment] doesn’t come that easy.
Aristotle devised a theory that was vague in just the right places, one that left, intentionally, space to be filled in by life. ~
https://www.newyorker.com/magazine/2023/07/03/how-to-flourish-an-ancient-guide-to-living-well-aristotle-susan-sauve-meyer-book-review?utm_source=pocket-newtab
*
THE MYTH OF THE MISSING HALF
~ “We are not hemispheres. We are polyhedrons.
Aligning along even a few facets of the polyhedron is something to be grateful for.” ~ James Hollis, The Myth of the Missing Half
Yosemite: Half Dome. Visitors are shocked when they are told there never was the "missing half."
*
HUMANS HAVE WEIRD CHILDHOODS
~ The average human spends at least one quarter of their life growing up. In the careful calculus of the animal kingdom, this is patently ridiculous. Even most whales, the longest of the long-lived mammals, spend a mere 10 per cent or so of their time growing into leviathans. In no other primate has the mathematics gone this wrong but, then again, no other primate has been as successful as we are in dominating the planet. Could the secret to our species’ success be our slowness in growing up? And if so, what possible evolutionary benefit could there be to delaying adulthood – and what does it mean for where our species is going?
The search for the secret to our success is at the heart of anthropology – the study of humans and their place in the world. This most narcissistic of disciplines piggybacked on the fascination for cataloguing and collecting the entirety of the world that rose up during the colonial expansions of 18th-century Europe and the growing popularity of ‘natural laws’ that explained the workings of the world in terms of immutable truths, discoverable to any man (and it was largely only men) with the wit and patience to observe them in nature.
Early anthropology collected cultures and set them end on end in a line of progress that stretched from fossils to frock coats, determining that the most critical parts of Man – the secrets to his success – were his big brain and his ability to walk upright.
Everything we are as a species was taken to be a result of our canny forebears playing a zero-sum game against extinction, with some monkey-men outbreeding some other monkey-men. In this grand tradition, we have Man the Hunter, Man the Firestarter, Man the Tool Maker, and the other evolutionary archetypes that tell us the reason we are the way we are is because of a series of technological advances.
However, about 50 years ago, anthropologists made a shocking discovery: women. Not so much that females existed (though that might have taken some of the old guard by surprise), but rather that they could do quite interesting research, and that the topic of their research was not, inevitably, the evolution of Man. It was the evolution of humans, women and children included. New research reframed old questions and asked entirely new ones – ones that did not assume what was good for the gander was good for the goose, and that there might be more drivers to our evolutionary history than the simplistic models that had come before.
Among these new ideas was one that had been consistently overlooked: the entire business of reproducing our species is absolutely off-the-charts weird. From our mating systems to maternal mortality to menopause, everything we do with our lives is a slap in the face to the received wisdom of the animal kingdom. After all, the linchpin of evolution in any species comes at reproduction. Making more of your species is how you stay in the game and, judging by the numbers, we are far and away the most successful primate ever to have walked the earth.
Pioneering researchers such as Sarah Hrdy, Kristen Hawkes, and many others of this new generation finally thought to ask: is it something about the way we make more humans that has made us the species that we are?
Our unlikely childhoods begin well before gametes meet. As part of our social organization, humans have a specific type of mating system, a form of reproduction that scaffolds the relationships between animals in our society in a specific way, with specific aims. Despite a tendency by a certain insidious strand of pseudo-intellectual internet bile to use pseudo-scientific terms such as ‘alpha males’ and ‘beta males’ for human interactions, our species is in fact rather charmingly non-competitive when it comes to mating.
While it may be difficult to believe that humans are largely tedious monogamists, our pair-bonded nature is a story written in our physical beings. Not for us the costly evolutionary displays of the male hamadryas baboon, who grows his fangs to some 400 per cent the size of his female relatives’ in order to show off and fight for mates. (Male human fangs are, in fact, slightly bigger than females’ – but only by about 7 per cent, which is nothing in animal terms.)
Furthermore, in animals with more competitive strategies for mating – ones where there is any extra advantage in remaining coupled, depositing sperm, or preventing other couplings from happening – evolution has provided an array of genital morphologies ranging from penis bones and spikes to outsized testes. Humans lack distinction in any measure of genitalia so far studied, though it is worth noting that most anthropologists have chosen to focus on male genitalia, so surprises may remain in store for future research.
This physical lack of difference between sexes sets up a social system that is, in animal terms, weird: pair bonding. Virtually no other animals reproduce in pair bonds – only about 5 per cent, if you discount birds, who do go for pairing in a big way. But an outsize proportion of primates opt for this monogamous arrangement, about 15 per cent of species, including, of course, our own. There are a variety of evolutionary theories for why pair bonding should appeal so much to primates, including maintaining access to females that roam, supporting offspring, or increasing certainty about paternity. One prominent theory is that pair-bonded males have less motivation for infanticide, though as the anthropologist Holly Dunsworth pointed out in her Aeon essay ‘Sex Makes Babies’ (2017), this does suggest a type of understanding in primates that we don’t always even ascribe to other humans.
Other theories point to female roaming requiring a pairing system so mating opportunities aren’t lost whenever she moves on. Pair bonding has emerged perhaps as many as four separate times in the primate family, suggesting that the motivation for the invention of the mate may not be the same in all monkeys. What does seem clear is that humans have opted for a mating system that doesn’t go in as much for competition as it does for care. The evolution of ‘dads’ – our casual word for the pair of helping hands that, in humans, fits a very broad range of people – may in fact be the only solution to the crisis that is the most important feature of human babies: they are off-the-scale demanding.
Our babies require an intense amount of investment, and as a species we have gone to staggering lengths to give it to them. As placental mammals, we solved the limitations placed on babies who are gestated in eggs with a fixed amount of resources by capturing the code of an RNA virus in our DNA to create the placenta: a temporary organ that allows our embryos and fetuses to draw sustenance directly from our bodies. As humans, however, we have gone a step further and altered the signaling mechanisms that maintain the delicate balance between our voracious young and the mothers they feed off.
Our species’ pregnancies – and only our species’ pregnancies – have become life-threatening ordeals specifically to deal with the outrageous demands of our babies. Gestational diabetes and preeclampsia are conditions virtually unknown in the animal kingdom, but common killers of pregnant humans thanks to this subtle alteration. Babies grow to an enormous size and plumpness, and they’re so demanding that the resources in one body aren’t enough to sustain them. They emerge into the world with large brains and a hefty 15 per cent body fat, but still unripe and unready.
The question of why we have such large but useless babies – unable to cling like other primate babies can, eyes and ears open but with heads too heavy for their necks – is one that evolutionary theory has long treated as a classic moving sofa problem. As posed by the author Douglas Adams, or the popular TV series Friends, the moving sofa problem asks the question: how do you get something big and awkward through a small and awkward space? Our babies have very large heads, and our mothers quite narrow pelvises, and what seems a trivial question about furniture logistics is in fact a huge impediment to the successful reproduction of our species: this makes human birth dangerous, and mothers die giving birth at a far higher rate than in any other species.
Classically, this was viewed as an acceptable trade-off between competing evolutionary demands. This is what the anthropologist Sherwood Washburn in 1960 called the ‘obstetrical dilemma’: the dangerous trip down the birth canal is necessitated by our upright posture and the tight fit required by our big brains. This widely accepted theory provided functional explanations as to why male and female hips were different sizes and why our births are so risky. Until recently, it was thought that humans had in fact developed a mitigation of this size mismatch in a unique twist performed by the baby as it travels through the birth canal, forcing the baby to emerge with head to the side rather than facing towards the mother’s front.
There is one problem with this particular explanation: we are not the only species to sneak in a twist at the end of our grand pelvic-canal dive – in fact, we’re not even the only primates. Research by Satoshi Hirata and colleagues has shown that even chimpanzees, who have ‘easy’ births, do the twist. Even the pelvis size and shape differences we identified as critical in human evolution turn out to be less-than-unique. Many animals have differences between male and female pelvises that surpass those of humans, without having difficult births. Shape difference might be something that is far more ancient in the mammal line. For human hips, variation tracks many factors, such as geography, rather than just male/female divides. But human babies really do have a terrible time coming into the world, above and beyond other species, due to that tight fit. So what gives?
The answer may be in that glorious pinchable baby fat. Having precision-engineered our offspring to siphon resources from their mothers in order to build calorifically expensive structures like our big brains and our chubby cheeks, we have, perhaps, become victims of our own success. Our babies can build themselves up to an impressive size in the womb, one that comes near to being unsurvivable. But the truly fantastic thing is that, having poured so much into our pregnancies, after we hit the limit of what our babies can catabolize from their mothers’ bodies, they are forced to emerge into the world still fantastically needy. For any mammal, survival after birth calls for the magic of milk, and our babies are no different, but here we find another very unusual feature of humans: our long childhood starts with cutting off infancy early.
Even accounting for differences in size, human babies are infants on the breast for a far shorter time than our closest relatives. Breastfeeding can go on for four to five years in chimpanzees and gorillas, and eight years or more in orangutans. Meanwhile, babies in most known human societies are fully weaned by the age of four, with many agricultural societies past and present opting to stop around age two, and many modern states with capital economies struggling to get breastfeeding to happen at all, let alone go on for the WHO-recommended two years or more.
After the first few months, we start complementary feeding, supplementing our babies with solid foods, including the rather unappealing pre-chewed food that seems to nonetheless support not just human but all great-ape infants as they grow. Our fat, big-brained offspring require a huge investment to support the brain growth of their first year, but they don’t – and can’t – get what they need to build the adult 1,200 g brain from milk alone. This is where those pair bonds come in handy. Suddenly there are two food-foragers (or chewers) to hand, which is convenient because we kick our babies off the breast quickly – but, once they’ve moved from infancy into childhood, there is yet another surprise: we let them stay there longer than any other species on the planet.
*
Childhood in humans is extended, by any measure you care to use. We can look at the 25-odd years it takes to get to physical maturity (in fact, the tiny end plate of your clavicle where it meets the sternum doesn’t fully finish forming until your early 30s) and compare it with our nearest relatives, to see that we have slowed down by a decade or more the time it takes to build something great-ape sized. To find a mammal with a similarly slow growth trajectory we have to look to the sea, at something like a bowhead whale. A bowhead whale, however, which will top out at about 18 meters and around 90 tons, is on a trajectory of growth well beyond a piddling human. We can look at our markers of social maturity and find they are even more varied. Our individual cultures tell us very specifically when adulthood is – ages of legal responsibility, for instance, or the timing of major rituals – and these might hover near our physical maturity or they might depart from it entirely. Perhaps the most clear-cut definition describes childhood in terms of investment: it is the period when you are a net resource sink, when other people are still investing heavily in you.
One of the most fascinating things in the study of humans is our ability to extend our lens back, beyond the borders of our species, and look at the adaptive choices our ancestors have made to bring us to this state. We look at the shape of fossil hips and knees and toes to learn how we came to walk upright; we measure skulls and jaws from millions of years ago to see how we fed our growing brains. Paleoanthropology allows us to reconstruct the steps that brought us here, and it is where we can find microscopic tell-tale signs of the journey that carried us into our extended childhood.
There are a handful of juvenile fossils in the hominin record, a very small proportion of the already vanishingly small number of remains from the species living over the past 3 to 4 million years that form the family tree that led to humans. Two of these, the Taung child and the Nariokotome boy, provide some of the best evidence for how our species evolved. The Taung child is an australopithecine dating back about 2.5 million years, and the Nariokotome boy belongs to Homo erectus, about 1.5 million years in our past. Looking at the teeth and skeletons of these fossils, we see that the teeth are still forming in the jaws, and the bony skeleton has not yet taken its final form.
If our ancestors grew like modern humans – that is, slowly – then the absolute chronological age they would have been at that stage of development would be about six and 12 years old, respectively, though it would be younger if they grew more rapidly, like apes.
Luckily for science, there is a timer built into our bodies: a 24-hour rhythm recognizable in the minute tracks left by the cells that form dental enamel that can be seen, perfectly fossilized, in our teeth, and a longer near-weekly rhythm that can be seen on the outside of teeth. When we count the growth tracks of enamel in the Taung child’s teeth, we can see they were closer to three than to six, and the Nariokotome boy only about eight. Our long childhood is a uniquely evolved human trait.
There is one more adaptation at play in the support of our needy offspring that should be accounted for: the utter unlikeliness that is a grandmother. Specifically, it is the almost unheard-of biological process of menopause, and the creation of a stage of life for half of our species where reproduction just stops. This is outrageous in evolutionary terms and it occurs only in humans (and a handful of whales). If the goal is to keep the species going, then calling time on reproduction sounds catastrophically counterintuitive, and, yet, here we are, awash in post-reproductive females.
Why? Because, despite the denigration many older women face, women do not ‘outlive’ their sole evolutionary function of birthing babies. If that were the only purpose of females, there wouldn’t be grandmas. But here they are, and ethnographic and sociological studies show us very clearly that grandparents are evolutionarily important: they are additional adults capable of investing in our needy kids. If you remove the need to invest in their own direct offspring, you create a fund of resources – whether it is foraged food, wisdom or just a pair of hands – that can be poured into their children’s children.
All the unique qualities of human childhood are marked by this kind of intense investment. But that raises the big question. If ‘winning’ evolution looks like successful reproduction, then why would we keep our offspring in an expensive holding pattern for longer than necessary?
It is only when we start to consider what this extension is for that we get close to understanding the evolutionary pressures that brought us to this state. And we actually have quite a good idea of what childhood is for, because we can see the use that other animals put it to. Primates have long childhoods because you need a long time to learn how to be a better monkey.
The same principle applies to social species like crows, who need to learn a complicated series of social rules and hierarchies. We, like monkeys and crows, spend childhood learning. Growing up human is such an immensely complicated prospect it requires not only the intense physical investment in our big brains and high-fat bodies but an extended period of care and investment while our slow-growing offspring learn everything we need them to learn to become successful adults. The cost of this investment, 20 to 30 years’ worth, is staggering in evolutionary terms.
A long childhood is our greatest evolutionary adaptation. It means that we have created needy offspring, and this has surprising knock-on effects in every single aspect of our lives, from our pair bonds to our dads to our boring genitals to our dangerous pregnancies and births and our fat-cheeked babies and even that unlikely creature, the grandmother. The amount of time and energy required to grow a human child, and to let it learn the things it needs to learn, is so great that we have stopped the clock: we have given ourselves longer to do it, and critically, made sure there are more and more investors ready to contribute to each of our fantastically expensive children.
What’s more, as humans, our cultures not only scaffold our evolution, but act as bore-drills to open up new paths for biology to follow, and we find ourselves in a position where the long childhood our ancestors took millions of years to develop is being stretched yet further. In many societies, the markers of adulthood are increasingly stretched out – for the most privileged among us, formal education and financial dependence are making 40 the new 20.
Meanwhile, we are taking time away from the most desperate among us, placing that same education out of reach for those foolish enough to be born poor or the wrong color or gender or in the wrong part of the world. A human child is a rather miraculous thing, representing a huge amount of targeted investment, from mating to matriculation. But given the gulfs in opportunity we are opening up between those that have and those that do not, it would benefit us all to consider more closely the childhoods we are investing in, and who we are allowing to stay forever young.
https://aeon.co/essays/why-have-humans-evolved-to-have-a-long-journey-to-adulthood?utm_source=Aeon+Newsletter&utm_campaign=9fb15f7039-EMAIL_CAMPAIGN_2023_07_14&utm_medium=email&utm_term=0_-b43a9ed933-%5BLIST_EMAIL_ID%5D
Anthony Weir:
To enlarge upon the important last paragraph of this article, Colin Turnbull (in his famous books ‘The Forest People’ and ‘Wayward Servants’) stated that an Mbuti ‘Pygmy’ in the late 1950s could fend for his/herself in the African rain-forest by the age of seven — and would be expected to. I’m pretty sure he was right. By the age of 14 BaMbuti were ‘successful adults’ without the need for mechanical support, gadgets, certificates, ridiculous Rites of Passage — or of religion.
Sage Coward:
I would argue that childhood concludes around the ages of 7 or 8, after which most of our experiences can be classified as reinforcement of “socialization” and “acculturation.” The predicament with these two constructs is the potential loss of our authentic human goals and essence. We become pawns, influenced and constrained by our culture, parents, and society. We become trapped, and only a fortunate few of us catch glimpses of enlightenment during moments of introspection or periods of emotional turmoil later in life, realizing the concealed reality that has evaded us since the end of our magical childhood years.
I mention the age of 7 or 8 because in almost every other culture, particularly those considered more primitive, children attain consciousness at that age, form genuine memories, and construct their own fantasy worlds. The impact of grandparents (particularly the influence of grandfathers) who share stories and experiences not witnessed by the grandmother or overshadowed by the father’s preoccupation, should not be underestimated. I could delve deeper into this topic, but my main point is that in today’s political climate, the children themselves may be fine, but it is the approach to parenting that necessitates a comprehensive overhaul.
*
WHY BRACING FOR THE WORST MAY CAUSE MORE HARM THAN GOOD
~ Folk wisdom suggests that if you expect the worst, then you won’t be disappointed. This advice is pervasive; it can drive meteorologists to over-promise rain and companies to overestimate delivery times. Expecting rain might prompt people to carry an umbrella, while receiving a package ‘early’ might lead to a five-star review. Is it wise, however, to ‘expect the worst’ when it comes to anticipating events in our personal lives?
Expecting the worst to avoid feeling bad later is known as bracing. People report bracing to help them prepare for emotionally challenging situations, particularly in the moments before these situations occur. People brace for the worst while waiting for a range of potentially negative outcomes, such as exam grades, medical test results, or financial outcomes. Someone might also brace for the worst in anticipation of stressful events, such as giving a presentation at work or having a difficult conversation with a loved one.
The idea that bracing can be helpful is built into the meaning of the word: ‘to make something stronger or firmer’, such as a structure or a person. But research from psychology offers a more complex answer to the question of whether bracing is a useful thing to do.
Some psychological theories suggest that bracing should help. For example, ‘decision affect theory’ proposes that how we feel about a situation is determined partly by comparing what actually happened with what could have happened. Based on this idea, people should be happy when an event exceeds their expectations, and disappointed when an event falls short of their expectations. Therefore, expecting the worst should protect someone from feeling bad in the future, because any outcome will likely surpass expectations.
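A toy numerical sketch may make the comparison concrete. The Python snippet below is my own illustration, not anything from the article or from the theory’s authors; the scoring function and the numbers are invented. It simply shows why, on this account, setting expectations at the floor turns almost any outcome into a pleasant surprise.

def felt_reaction(actual: float, expected: float) -> float:
    # Positive values stand for elation, negative for disappointment
    # (arbitrary units; a deliberately crude stand-in for the theory's
    # comparison of what happened against what was expected).
    return actual - expected

exam_score = 65
print(felt_reaction(exam_score, expected=40))   # braced for the worst: +25, relief
print(felt_reaction(exam_score, expected=80))   # optimistic: -15, disappointment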
But other psychological theories undermine the idea that bracing will be helpful. It has been theorized – and empirically established – that expectations can powerfully influence reality. (This premise underlies many bestselling books and helps explain the placebo effect.) There are two key ways that expectations can shape reality. First, people may behave in ways that fit with their expectations. If you expect to fail an upcoming test or interview, then you might not bother to spend much time preparing for it, which in turn reduces your chances of doing well.
Second, people may interpret a situation in line with their expectations. Imagine you believe you are insufficiently qualified for a job you’ve applied for. During the job interview, you’re likely to interpret blank expressions from the panel in line with this belief, which could negatively affect your interview performance. In actual fact, the panel likely did not want to give anything away – a conclusion you might have drawn with more positive expectations of the situation. In sum, the influence of expectations on reality suggests that it might be better to hold positive expectations than to brace for the worst.
If psychological theories suggest conflicting answers about whether bracing is helpful, what have research studies found? Before we dive into this research, it’s important to note that bracing can affect how people feel at two distinct points in time: before an outcome is known, and afterward.
Let’s look first at the effects of bracing on how someone feels before an outcome is known. Researchers have investigated how people’s expectations about important exam results relate to how positive or negative they feel while they wait. These studies suggest that holding negative expectations (eg, I’m going to bomb this exam) makes people feel worse than holding positive expectations (I studied hard, so I know I will do well on this exam). In other words, bracing seems to have emotional costs while someone is awaiting an important outcome. These costs likely occur for several reasons: negative expectations might elicit worry by reminding you of what’s at stake if you do ‘bomb’, or could result in you feeling bad about yourself because you see yourself as someone capable of performing badly.
The effects of bracing on how people feel after learning an outcome are more complicated. Some research suggests bracing may have short-lived emotional benefits. In one study, university students who expected and received a negative outcome on their exam grades felt better immediately after receiving them than those who expected a positive outcome and received a negative one. The authors of the study therefore conclude that it’s risky to be optimistic just before receiving an outcome, at which point the outcome is out of your control.
Despite mixed evidence on the immediate benefits of bracing, research consistently suggests that bracing does not have more lasting benefits. When students were asked to rate how they felt about their exam grades 24 hours after receiving them, there was no difference between students based on whether they had held positive or negative expectations about their grades. Similarly, the extent to which students ‘braced’ – as measured by the decline in their expectations from two weeks before the exam to one day before – did not predict how they felt two days after receiving their exam grades.
In recent studies conducted in my lab, led by Elise Kalokerinos, we gained further insight into bracing by investigating the effects of feeling negative in anticipation of either learning an outcome or going through a stressful event. We used an in-depth method called experience sampling, which involves surveying people several times a day over a set length of time. This method is powerful because it allows us to determine what predicts changes in how a person feels, relative to how they normally feel. In doing so, we can control for all the reasons why one person might feel different from another, such as personality traits, mental health differences, or how generally optimistic or pessimistic someone is.
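For readers curious about the mechanics, here is a minimal sketch of the within-person logic of experience sampling, written in Python with pandas; the participants and ratings are invented for illustration. Each rating is compared with that person’s own average, so stable differences between people, such as personality or baseline optimism, drop out of the comparison.

import pandas as pd

# Hypothetical momentary ratings from two participants.
data = pd.DataFrame({
    "participant": ["A", "A", "A", "B", "B", "B"],
    "negative_affect": [2, 3, 5, 6, 6, 7],
})

# Person-mean centering: how far each rating sits from that person's own average.
person_mean = data.groupby("participant")["negative_affect"].transform("mean")
data["deviation"] = data["negative_affect"] - person_mean
print(data)

Participant B reports more negative affect than A overall, but the deviations put both on the same footing: only moments that are unusual for that particular person count.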
In one study, we followed university students for two and a half days as they awaited exam results that would determine whether they could continue with their degrees. We then followed these same students for another six and a half days after they received their grades. We found that when students felt more negative than usual as they waited for their exam grades (consistent with bracing), they also felt more negative than usual during the week after receiving their grades. This result occurred regardless of how well students actually did on the exam – contrary to the idea that bracing should be particularly helpful for coping with negative outcomes.
In another study, we followed people as they navigated learning about and completing a stressful public speaking task. We found that feeling negative in the lead-up to this task seemed to make people feel worse immediately after the task, but not once they left the lab where the task took place. In other words, the harmful effects of negative anticipation were short-lived. Although some of the specific findings differed between our studies, we never found that feeling negative in advance of an outcome was helpful after the outcome was known, counter to the proposed benefits of bracing.
Drawing together the scientific studies, it seems like expecting the worst is an unwise way to prepare for upcoming news or results. Bracing appears to make people feel worse as they wait, with little to no benefit – and, potentially, emotional costs – after they receive what they are waiting for.
But awaiting important if uncertain news and preparing for stressful events are inescapable parts of our lives. If it’s time to move on from the folk wisdom that tells us to brace for the worst, what strategies could you use instead? Here are a few evidence-based ideas:
Favor optimism, particularly when you have control over an outcome. Returning to the link between expectations and reality, researchers recently found evidence that having positive expectations about an outcome means one is more likely to put effort into achieving that outcome. Of course, you don’t necessarily want to assume you’ll get a positive outcome without putting in the work, but it’s better to be optimistic than to expect the worst.
Try not to spend your energy figuring out what the outcome might be, especially if you lack control over the outcome or you’re currently waiting to find out what the outcome is. Instead, you could try distracting yourself from the situation, for example by watching a TV show, doing exercise, or catching up with a friend. You could also try to acknowledge and accept how you feel about the situation, without trying to change it. These are both helpful ways of managing emotions.
Open up to those around you. If you let people know that you’re waiting for an important outcome that could turn out badly, they can be ready to offer support when you receive the outcome.
If you do find yourself bracing for the worst, try to limit it – ideally, waiting until right before the moment when you learn an outcome – in order to reduce the emotional costs of this strategy. In light of what the psychological research tells us, it seems that the Stoic philosopher Lucius Annaeus Seneca had it right when he issued this relevant warning: ‘It is tragic for the soul to be apprehensive of the future and wretched in anticipation of wretchedness. … What madness it is to anticipate one’s troubles!’ ~
https://psyche.co/ideas/why-it-might-not-help-and-could-hurt-to-brace-for-the-worst?utm_source=Psyche+Magazine&utm_campaign=133b3a814e-EMAIL_CAMPAIGN_2023_07_14&utm_medium=email&utm_term=0_-a9a3bdf830-%5BLIST_EMAIL_ID%5D
*
ANTI-INTELLECTUALISM IN AMERICA
There are basically three kinds of anti-intellectualism:
Seeing knowledge and intellectualism as a threat. Knowledge itself is dangerous, and intellectuals must be up to something, like overturning the existing society or plotting a revolution. Dumb is good, smart is bad. Four legs good, two legs bad.
Seeing intellectuals as impractical and useless. This is the classic Philistine attitude: knowledge is good only if it yields some practical benefit; otherwise it is just useless filling of the head. “Thank God Washington wasn’t a philosopher,” Thomas Jefferson is said to have remarked.
Populist anti-elitism. Let’s face it: being smart is just not cool, and popular culture rarely portrays intellect with reverence. This attitude is shared by powerholders and the hoi polloi alike: powerholders consider the intellectual a necessary nuisance and a threat, while the hoi polloi see intellectuals as bloodsucking elites. The two find each other in anti-intellectualism.
Anti-intellectualism produces a low level of civic literacy and civic engagement in an increasingly apathetic citizenry. These things feed on each other, resulting in either tyranny or demagogic political leadership on the way to populist misrule. According to the late author Isaac Asimov, there is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’
The Trumpist regime in Washington was a great example of the era of “alternative facts,” “alternative reality,” “facts are not facts,” and the neglect of facts, logic, reason, and rationality. This disease of anti-reason, anti-science, anti-elite, anti-expertise, anti-historical knowledge and anti-professionalism is the plague of the United States.
The first reason is the sorry state of American schooling, likely the worst in the First World; even the USSR had better basic education. Public schools in the USA are generally bad, especially in poorer counties and districts. Americans do not learn enough, and they do not learn to think independently, to be critical, or to seek out information.
The second reason is that the USA is the promised land of religious kooks. Absolute freedom of religion, and the fact that all kinds of religious screwballs migrated from Europe to the USA, contributed to the “Jeebusland” phenomenon: American religious communities tend to be literalist in their biblical interpretation, fundamentalist in their doctrine, and anti-science.
The third reason is that the USA lacks the class structure prevalent in many other societies. While egalitarianism is generally a good idea, it inevitably leads to the notion that “my ignorance is as good as your knowledge.” The USA lacks the class of intelligentsia, civil servants and highly educated professionals that many other countries have; Americans instead esteem plutocrats, businessmen, entrepreneurs and managers.
The fourth reason is that many Americans see knowledge only as a tool, not as a goal in itself. This is evident in attitudes toward, for example, learning foreign languages: “How do I benefit from it?” The result is a general ignorance of things outside one’s small sphere of immediate experience.
The fifth reason is the belief in American exceptionalism and superiority: the conviction that the USA is the best country in the world, exceptional and superior in every respect, and that what applies to the rest of the world does not apply to the US. This can be seen in American resistance to the metric system, in a completely broken health care system, and in an insane culture of weapons and violence, in which Americans see no flaws.
The sixth reason is that Americans love simple solutions. Dima Vorobiev has examined this aspect: for every problem there are solutions that are effective, simple, and easy to implement — pick any two; you cannot have the third. ~ Susan Viljanen, Quora
*
~ There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”
The Trump regime in Washington was a great example of this era of “alternative facts,” “alternative reality,” “facts are not facts,” and the neglect of facts, logic, reason, and rationality. This disease of anti-reason, anti-science, anti-elite, anti-expertise, anti-historical knowledge and anti-professionalism is spreading like a plague.
The era of Donald J. Trump is the era of anti-Enlightenment in the full sense of the word. There is an assault on the cornerstones of civilization that made the present world order function so well, and gave us for the past 70 years or so the best time in human history.
The Enlightenment principles produced the research, the science, the technology of today. By ignoring that, we are burying our heads in the sand or betraying one of the best achievements in human history: the Enlightenment.
Complacency breeds forgetfulness. The lack of civics courses in the educational system, and of respect for the liberal arts, is much to blame for the advance of the political illiteracy that produces unfit, unqualified and dangerous leadership. “The Oklahoma Council of Public Affairs commissioned a civic education poll among public school students. A surprising 77 percent didn’t know that George Washington was the first president; couldn’t name Thomas Jefferson as the author of the Declaration of Independence; and only 2.8% of the students actually passed the citizenship test. Along similar lines, the Goldwater Institute of Phoenix did the same survey and only 3.5 percent of students passed the civics test.”
Now, how did we get here? Anti-intellectualism doesn’t grow and develop in a vacuum. In my view, supported by historical evidence, many religious quarters played a significant role in fostering and protecting the notion that only through faith, revelation and religious texts can the world be known and discovered. This notion annihilates or diminishes the development of science, reading, research, and inquiry.
“Even the Catholic Church sees this not as a theological issue but as a natural law, more a subject for philosophers, psychiatrists, and scientists than for preachers as such. But science is evil in the rightists’ ideology: Scientists invent nonexistent things like evolution and global warming. The ideology also holds that guns are the very essence of government. Regulate guns in any way and we lose justice, liberty and comity, crushed by an instant tyranny. The ideology also insists that women should be subordinate to men, blacks to whites, and experts to ‘common man.’ ”
This ideology of worshiping the common man’s wisdom against professional expertise in specialized fields, and scholarly research and learning, provided the background for the rise of Trumpism as an ideology replacing the Enlightenment principles that saved and served humanity so well.
In a report that analyzed the key factors for the rise of Trumpism, researchers found these five elements:
First, Authoritarian Personality Syndrome: This factor emphasizes total and complete obedience to authority. It advocates rigid hierarchy and power structure. It instills fear and threatens punishment. The leader is portrayed as a messiah, fearless and infallible.
Second, Social Dominance Orientation. This factor is based on the belief that some groups are more dominant and important and powerful than others. Example: Whites are superior to Blacks, immigrants, and ethnic groups. There is the “we are better than them” syndrome. Divisions are drawn and power structures are advocated.
Third, Prejudice: Here racism and bigotry play a leading role. Calling immigrants from Mexico and South America murderers and undesirables and rapists is one example. Banning Muslims from entering the United States is another.
Fourth, Intergroup Contact. This refers to contact with other groups beyond one’s group. For example, a 2016 study found that the racial and ethnic isolation of Whites at the zip code level is one of the strongest predictors of Trump support.
Fifth, Relative Deprivation: Here poverty, anger at the rate of change, immigration, social change, and loss of jobs to technology and China and to Mexico are all factors at play.
The common denominator that runs deep into all these factors is anti-intellectualism. Why? Because any citizen who is aware and well read and informed at least at a minimum level in civic education should know that authoritarian tendencies in rulers are a recipe for disaster.
They know that social dominance breeds conflict and division, and is, in a democracy, unhealthy, even destructive, to the social contract that is at the heart of a liberal democracy. They know that prejudice and fear and discrimination are also unbecoming, unwarranted, and for real losers. They know that interaction among groups in society in this century is a precondition for social peace and prosperity. It is an imperative, as diversity is a modern reality. Immigration and cultural interaction are here to stay and will expand and prosper with or without the negative leadership attitudes we witness these days. They know that relative poverty and alienation are never an excuse for electing false prophets and falling victim to fairy-tale story-sellers of any kind.
The common man is truly besieged these days by rapidly accelerating upheaval and constant change in technology, climate, the financial sector, the workplace, and immigration, with its threat of cultural uprootedness and alienation. “As the saying goes,” wrote David Brooks, “everybody is now everywhere. We’re entering into states of interdependence with all sorts of people unlike ourselves. In the course of this, millions of people perceive that they are losing their country, losing their place, losing their culture.”
In such an environment, it would seem easy for demagogues to gain traction and win votes. But if the common man is well prepared and well informed, then the task of deceivers is not easy or doable. Thus, we build the walls of literacy and curiosity to counter the advance of political cheaters and con men and women. Anti-intellectualism is a fertile ground for people who are hungry to gain the power to take societies to places that are unknown to them and that they may regret entering into down the road.
“Conservatives tend to emphasize the value of being rooted in place. Progressives tend to celebrate living across differences. Life is miserable, and a nation is broken, unless you do both.”
Here we go again: back to education and civic courses in high schools, colleges and universities to try to minimize the cancer of anti-intellectualism. ~
https://www.libertymagazine.org/article/is-america-the-anti-intellectual-empire
Yet one more article:
ANTI-INTELLECTUALISM IS KILLING AMERICA
Ignorance, or an aversion to reason, has allowed things like gun violence and racism to define American culture.
Anti-intellectual societies fall prey to tribalism and simplistic explanations, are emotionally immature, and often seek violent solutions.
Corporate interests encourage anti-intellectualism, conditioning Americans into conformity and passive acceptance of institutional dominance.
Decrying racism and gun violence is fine, but for too long America’s social dysfunction has continued to intensify as the nation has ignored a key underlying pathology: anti-intellectualism.
The exaltation of ignorance in America
America is killing itself through its embrace and exaltation of ignorance, and the evidence is all around us. Dylann Roof, the Charleston shooter who used race as a basis for hate and mass murder, is just one horrific example. Many will correctly blame Roof's actions on America's culture of racism and gun violence, but it's time to realize that such phenomena are directly tied to the nation's culture of ignorance.
In a country where a sitting congressman told a crowd that evolution and the Big Bang are “lies straight from the pit of hell,” where the chairman of a Senate environmental panel brought a snowball into the chamber as evidence that climate change is a hoax, where almost one in three citizens can’t name the vice president, it is beyond dispute that critical thinking has been abandoned as a cultural value. Our failure as a society to connect the dots, to see that such anti-intellectualism comes with a huge price, could eventually be our downfall.
Anti-intellectualism and racism
In considering the senseless loss of nine lives in Charleston, of course racism jumps out as the main issue. But isn’t ignorance at the root of racism? And it’s true that the bloodshed is a reflection of America's violent, gun-crazed culture, but it is only our aversion to reason as a society that has allowed violence to define the culture. Rational public policy, including policies that allow reasonable restraints on gun access, simply isn't possible without an informed, engaged, and rationally thinking public.
Some will point out, correctly, that even educated people can still be racists, but this shouldn’t remove the spotlight from anti-intellectualism. Yes, even intelligent and educated individuals, often due to cultural and institutional influences, can sometimes carry racist biases. But critically thinking individuals recognize racism as wrong and undesirable, even if they aren’t yet able to eliminate every morsel of bias from their own psyches or from social institutions. An anti-intellectual society, however, will have large swaths of people who are motivated by fear, susceptible to tribalism and simplistic explanations, incapable of emotional maturity, and prone to violent solutions. Sound familiar?
And even though it may seem counterintuitive, anti-intellectualism has little to do with intelligence. We know little about the raw intellectual abilities of Dylann Roof, but we do know that he is an ignorant racist who willfully allowed irrational hatred of an entire demographic to dictate his actions. Whatever his IQ, to some extent he is a product of a culture driven by fear and emotion, not rational thinking, and his actions reflect the paranoid mentality of one who fails to grasp basic notions of what it means to be human.
Hyper-patriotism and fundamentalist religion
What Americans rarely acknowledge is that many of their social problems are rooted in the rejection of critical thinking or, conversely, the glorification of the emotional and irrational. What else could explain the hyper-patriotism that has many accepting an outlandish notion that America is far superior to the rest of the world? Love of one’s country is fine, but many Americans seem to honestly believe that their country both invented and perfected the idea of freedom, that the quality of life here far surpasses everywhere else in the world.
But it doesn’t. International quality-of-life rankings place America far from the top, at sixteenth. America’s rates of murder and other violent crime dwarf those of most of the developed world, as does its incarceration rate, while its levels of educational attainment and scientific literacy are embarrassingly low. American schools, claiming to uphold “traditional values,” avoid fact-based sex education, and thus we have the highest rate of teen pregnancy in the industrialized world. And those rates are notably highest where so-called “biblical values” are prominent. Go outside the Bible Belt, and the rates generally trend downward.
As this suggests, the impact of fundamentalist religion in driving American anti-intellectualism has been, and continues to be, immense. Old-fashioned notions of sex education may seem like a relatively minor issue to many, but taking old-time religion too seriously can be extremely dangerous in the modern era. High-ranking individuals, even in the military, see a confrontation between good and evil as biblically predicted and therefore inevitable. They relish the thought of being a righteous part of the final days.
Fundamentalist religion is also a major force in denying human-caused climate change, a phenomenon that the scientific community has accepted for years. Interestingly, anti-intellectual fundamentalists are joined in their climate change denial by unusual bedfellows: corporate interests that stand to gain from the rejection of sound science on climate.
The role of corporate interests in anti-intellectualism
Corporate influence on climate and environmental policy, meanwhile, is simply more evidence of anti-intellectualism in action, for corporate domination of American society is another result of a public that is not thinking critically. Americans have allowed their democracy to slip away, their culture overtaken by enormous corporations that effectively control both the governmental apparatus and the media, thus shaping life around materialism and consumption.
Indeed, these corporate interests encourage anti-intellectualism, conditioning Americans into conformity and passive acceptance of institutional dominance. They are the ones who stand to gain from the excessive fear and nationalism that result in militaristic foreign policy and absurdly high levels of military spending. They are the ones who stand to gain from consumers who spend money they don’t have on goods and services they don’t need. They are the ones who want a public that is largely uninformed and distracted, thus allowing government policy to be crafted by corporate lawyers and lobbyists. They are the ones who stand to gain from unregulated securities markets. And they are the ones who stand to gain from a prison-industrial complex that generates the highest rates of incarceration in the developed world.
Americans can and should denounce the racist and gun-crazed culture, but they also need to dig deeper. At the core of all of this dysfunction is an abandonment of reason.
https://www.psychologytoday.com/us/blog/our-humanity-naturally/201506/anti-intellectualism-is-killing-america
***
HEAT WAVES CAN KILL
~ There’s a temperature threshold beyond which the human body simply can’t survive — one that some parts of the world are increasingly starting to cross. It’s a “wet bulb temperature” of 95 degrees Fahrenheit (35 degrees C).
To understand what that means, it helps to start with how the human body regulates its temperature. Our bodies need to stay right around 98.6 degrees F. If that number gets too high or too low, bad things can happen. And since bodies are always producing heat from normal functions, like digesting, thinking, and pumping blood, we need a place for that heat to go. That’s why our bodies have a built-in cooling system: sweat.
Sweat works by using a physics hack called evaporative cooling. It takes quite a bit of heat to turn water from a liquid to a gas. As droplets of sweat leave our skin, they pull a lot of heat away from our bodies. When the air is really dry, a little bit of sweat can cool us down a lot. Humid air, on the other hand, already contains a lot of water vapor, which makes it harder for sweat to evaporate. As a result, we can’t cool down as well.
This is where the term wet bulb temperature comes in: It’s a measure of heat and humidity, essentially the temperature we experience after sweat cools us off. We can measure the wet bulb temperature by sticking a damp little sleeve on the end of a thermometer and spinning it around. Water evaporates from the sleeve, cooling down the thermometer. If it’s humid, it hardly cools down at all, and if the air is dry, it cools down a lot. That final reading after the thermometer has cooled down is the wet bulb temperature.
In Death Valley, California, one of the hottest places on Earth, temperatures often get up to 120 degrees F — but the air is so dry that it actually only registers a wet bulb temperature of 77 degrees F. A humid state like Florida could reach that same wet bulb temperature on a muggy 86 degree day.
When the wet bulb temperature gets above 95 degrees F, our bodies lose their ability to cool down, and the consequences can be deadly. Until recently, scientists didn’t think we’d cross that threshold outside of doomsday climate change scenarios. But a 2020 study looking at detailed weather records around the world found we’ve already crossed the threshold at least 14 times in the last 40 years. So far, these hot, humid events have all been clustered in two regions: Pakistan and the Arabian Peninsula.
The warm water in the Red Sea and the Persian Gulf makes the air above extremely humid. Inland, on the Arabian Peninsula, the arid continental heat causes temperatures to skyrocket. And when these two systems meet, they can tip the wet bulb temperature above that deadly 95 degrees F threshold.
In Pakistan, it’s a little less clear what’s driving these hot, humid extremes. But scientists think it’s caused by warm, humid air flowing inland during the monsoon season. As it passes over the Indus River, the air only gets more humid until it hits cities like Jacobabad, often referred to as one of the hottest cities on earth. To date, Jacobabad has crossed that deadly wet bulb threshold a whopping six times — the most of any single city on record.
If we plot all these events over time, it’s clear these hot, humid extremes are increasing as the planet warms. Scientists expect these events to occur even more frequently in these regions going forward. Other places like coastal Mexico and a large portion of South Asia might soon be at risk of crossing these thresholds for the first time.
Extreme heat is deadly at temperatures well below the 95-degree threshold. Healthy young adults can experience serious health effects at a wet bulb temperature of 86 degrees F. And even dry heat can be dangerous when people’s bodies simply can’t pump out sweat fast enough to cool themselves.
Worldwide, extreme heat likely kills at least 300,000 people each year. But it can be notoriously difficult to track the death counts associated with individual heat waves. Heat often kills indirectly — triggering heart attacks, strokes, or organ failures — making it hard to determine whether those deaths were caused by the heat or an unrelated medical condition.
Even relatively mild heat waves can be deadly when they occur in places where people are not prepared for those temperature extremes. For example, a 2010 heat wave in Russia, where summer temperatures rarely rise above 74 degrees F, killed an estimated 55,000 people despite only hitting about 100 degrees F.
Heat-related death counts are even harder to calculate in regions without accurate or timely death records. In Pakistan — home to many of the world’s humid heat records — the government doesn’t officially track heat deaths, said Nausheen Anwar, director of the Karachi Urban Lab, a research program that studies the impacts of extreme heat in Pakistan. Instead, her lab often relies on interviews with doctors, ambulance drivers, or graveyard owners to calculate the impacts of heat waves.
With every degree of global warming, these dangerous heat events are becoming even more likely. Stopping climate change may be our best chance to keep them as rare as possible. ~
https://grist.org/climate/the-temperature-threshold-the-human-body-cant-survive/
Oriana:
Heat waves kill more people across the US than any other form of extreme weather.
*
THE EARTH’S UPPER ATMOSPHERE IS COOLING
There is a paradox at the heart of our changing climate. While the blanket of air close to the Earth’s surface is warming, most of the atmosphere above is becoming dramatically colder. The same gases that are warming the bottom few miles of air are cooling the much greater expanses above that stretch to the edge of space.
This paradox has long been predicted by climate modelers, but only recently quantified in detail by satellite sensors. The new findings are providing definitive confirmation of one important issue, but at the same time raising other questions.
The good news for climate scientists is that the data on cooling aloft do more than confirm the accuracy of the models that identify surface warming as human-made. A new study published this month in the journal PNAS by veteran climate modeler Ben Santer of the Woods Hole Oceanographic Institution found that the cooling data increased the strength of the “signal” of the human fingerprint of climate change fivefold, by reducing the interference “noise” from background natural variability. Santer says the finding is “incontrovertible.”
But the new discoveries about the scale of cooling aloft are leaving atmospheric physicists with new worries — about the safety of orbiting satellites, about the fate of the ozone layer, and about the potential of these rapid changes aloft to visit sudden and unanticipated turmoil on our weather below.
Chicago, July 12, 2023
Until recently, scientists called the remote zones of the upper atmosphere the “ignorosphere” because they knew so little about them. So now that they know more, what are we learning, and should it reassure or alarm us?
The Earth’s atmosphere has a number of layers. The region we know best, because it is where our weather happens, is the troposphere. This dense blanket of air five to nine miles thick contains 80 percent of the mass of the atmosphere but only a small fraction of its volume. Above it are wide open spaces of progressively less dense air. The stratosphere, which ends around 30 miles up, is followed by the mesosphere, which extends to 50 miles, and then the thermosphere, which reaches more than 400 miles up.
From below, these distant zones appear as placid and pristine blue sky. But in fact, they are buffeted by high winds and huge tides of rising and descending air that occasionally invade our troposphere. And the concern is that this already dynamic environment could change again as it is infiltrated by CO2 and other human-made chemicals that mess with the temperature, density, and chemistry of the air aloft.
Climate change is almost always thought about in terms of the lowest regions of the atmosphere. But physicists now warn that we need to rethink this assumption. Increases in the amount of CO2 are now “manifest throughout the entire perceptible atmosphere,” says Martin Mlynczak, an atmospheric physicist at the NASA Langley Research Center in Hampton, Virginia. They are “driving dramatic changes [that] scientists are just now beginning to grasp.” Those changes in the wild blue yonder far above our heads could feed back to change our world below.
The story of changing temperatures in the atmosphere at all levels is largely the story of CO2. We know all too well that our emissions of more than 40 billion tons of the gas annually are warming the troposphere. This happens because the gas absorbs and re-emits infrared radiation rising from the Earth’s surface, heating other molecules in the dense air and raising temperatures overall.
But the gas does not all stay in the troposphere. It also spreads upward through the entire atmosphere. We now know that the rate of increase in its concentration at the top of the atmosphere is as great as at the bottom. But its effect on temperature aloft is very different. In the thinner air aloft, most of the radiation re-emitted by the CO2 never strikes another molecule; it escapes to space. Combined with the greater trapping of heat at lower levels, the result is a rapid cooling of the surrounding atmosphere.
Satellite data have recently revealed that between 2002 and 2019, the mesosphere and lower thermosphere cooled by 3.1 degrees F (1.7 degrees C). Mlynczak estimates that the doubling of CO2 levels thought likely by later this century will cause a cooling in these zones of around 13.5 degrees F (7.5 degrees C), which is between two and three times faster than the average warming expected at ground level.
Early climate modelers predicted back in the 1960s that this combination of tropospheric warming and strong cooling higher up was the likely effect of increasing CO2 in the air. But its recent detailed confirmation by satellite measurements greatly enhances our confidence in the influence of CO2 on atmospheric temperatures, says Santer, who has been modeling climate change for 30 years.
This month, he used new data on cooling in the middle and upper stratosphere to recalculate the strength of the statistical “signal” of the human fingerprint in climate change.
He found that it was greatly strengthened, in particular because of the additional benefit provided by the lower level of background “noise” in the upper atmosphere from natural temperature variability. Santer found that the signal-to-noise ratio for human influence grew fivefold, providing “incontrovertible evidence of human effects on the thermal structure of the Earth’s atmosphere.” We are “fundamentally changing” that thermal structure, he says. “These results make me very worried.”
Much of the research analyzing changes aloft has been done by scientists employed by NASA. The space agency has the satellites to measure what is happening, but it also has a particular interest in the implications for the safety of the satellites themselves.
This interest arises because the cooling of the upper air also causes it to contract. The sky is falling — literally.
The depth of the stratosphere has diminished by about 1 percent, or 1,300 feet, since 1980, according to an analysis of NASA data by Petr Pisoft, an atmospheric physicist at Charles University in Prague. Above the stratosphere, Mlynczak found that the mesosphere and lower thermosphere contracted by almost 4,400 feet between 2002 and 2019. Part of this shrinking was due to a short-term decline in solar activity that has since ended, but 1,120 feet of it was due to cooling caused by the extra CO2, he calculates.
This contraction means the upper atmosphere is becoming less dense, which in turn reduces drag on satellites and other objects in low orbit — by around a third by 2070, calculates Ingrid Cnossen, a research fellow at the British Antarctic Survey.
On the face of it, this is good news for satellite operators. Their payloads should stay operational for longer before falling back to Earth. But the problem is the other objects that share these altitudes. The growing amount of space junk — bits of equipment of various sorts left behind in orbit — is also sticking around longer, increasing the risk of collisions with currently operational satellites.
More than 5,000 active and defunct satellites, including the International Space Station, are in orbit at these altitudes, accompanied by more than 30,000 known items of debris over four inches in diameter. The risks of collision, says Cnossen, will grow ever greater as the cooling and contraction gather pace.
This may be bad for business at space agencies, but how will the changes aloft affect our world below?
One big concern is the already fragile state of the ozone layer in the lower stratosphere, which protects us from harmful solar radiation that causes skin cancers. For much of the 20th century, the ozone layer thinned under assault from industrial emissions of ozone-eating chemicals such as chlorofluorocarbons (CFCs). Outright ozone holes formed each spring over Antarctica.
The 1987 Montreal Protocol aimed to heal the annual holes by eliminating those emissions. But it is now clear that another factor is undermining this effort: stratospheric cooling.
Ozone destruction operates in overdrive in polar stratospheric clouds, which only form at very low temperatures, particularly over polar regions in winter. But the cooler stratosphere has meant more occasions when such clouds can form. While the ozone layer over the Antarctic is slowly reforming as CFCs disappear, the Arctic is proving different, says Peter von der Gathen of the Alfred Wegener Institute for Polar and Marine Research in Potsdam, Germany. In the Arctic, the cooling is worsening ozone loss.
Von der Gathen says the reason for this difference is not clear.
In the spring of 2020, the Arctic had its first full-blown ozone hole with more than half the ozone layer lost in places, which von der Gathen blames on rising CO2 concentrations. It could be the first of many. In a recent paper in Nature Communications, he warned that the continued cooling means current expectations that the ozone layer should be fully healed by mid-century are almost certainly overly optimistic. On current trends, he said, “conditions favorable for large seasonal loss of Arctic column ozone could persist or even worsen until the end of this century … much longer than is commonly appreciated.”
This is made more concerning because, while the regions beneath previous Antarctic holes have been largely devoid of people, the regions beneath future Arctic ozone holes are potentially some of the more densely populated on the planet, including Central and Western Europe. If we thought the thinning ozone layer was a 20th century worry, we may have to think again.
Chemistry is not the only issue. Atmospheric physicists are also growing concerned that cooling could change air movements aloft in ways that impinge on weather and climate at ground level. One of the most turbulent of these phenomena is known as sudden stratospheric warming. Westerly winds in the stratosphere periodically reverse, resulting in big temperature swings during which parts of the stratosphere can warm by as much as 90 degrees F (50 degrees C) in a couple of days.
This is typically accompanied by a rapid sinking of air that pushes onto the Atlantic jet stream at the top of the troposphere. The jet stream, which drives weather systems widely across the Northern Hemisphere, begins to snake. This disturbance can cause a variety of extreme weather, from persistent intense rains to summer droughts and “blocking highs” that can cause weeks of intense cold winter weather from eastern North America to Europe and parts of Asia.
This much is already known. In the past 20 years, weather forecasters have included such stratospheric influences in their models. This has significantly improved the accuracy of their long-range forecasts, according to the Met Office, a U.K. government forecasting agency.
The question now being asked is how the extra CO2 and overall stratospheric cooling will influence the frequency and intensity of these sudden warming events. Mark Baldwin, a climate scientist at the University of Exeter in England, who has studied the phenomenon, says most models agree that sudden stratospheric warming is indeed sensitive to more CO2. But while some models predict many more sudden warming events, others suggest fewer. If we knew more, Baldwin says, it would “lead to improved confidence in both long-term weather forecasts and climate change projections.”
It is becoming ever clearer that, as Gary Thomas, an atmospheric physicist at the University of Colorado Boulder, puts it, “If we don’t get our models right about what is happening up there, we could get things wrong down below.” But improving models of how the upper atmosphere works — and verifying their accuracy — requires good up-to-date data on real conditions aloft. And the supply of that data is set to dry up, Mlynczak warns.
Most of the satellites that have supplied information from the upper atmosphere over the past three decades — delivering his and others’ forecasts of cooling and contraction — are reaching the ends of their lives. Of six NASA satellites on the case, one failed in December, another was decommissioned in March, and three more are set to shut down soon. “There is as yet no new mission planned or in development,” he says.
Mlynczak is hoping to reboot interest in monitoring with a special session that he is organizing at the American Geophysical Union this fall to discuss the upper atmosphere as “the next frontier in climate change.” Without continued monitoring, the fear is we could soon be returning to the days of the ignorosphere. ~
https://grist.org/climate/the-upper-atmosphere-is-cooling-prompting-new-climate-concerns/
*
INSIST ON MORE FASTING: what churches and the communist party have in common
“Charisma is typically associated with a saint or with a knight, some personal attribution, and what Lenin did was remarkable. He did exactly what he claimed to do: he created a party of a new type. Lenin made the party charismatic. People died for the party. It’s as if people would die for the DMV. Most people don’t get too excited about the Department of Motor Vehicles because it's a bureaucracy. What Lenin did was combine the attributes of personal heroism and the efficiency of impersonal organization, and created a charismatic organization. That's been done before. It's been done by Benedictines, it’s been done by Jesuits, but it’s never been done by a political party before.” ~ Ken Jowitt
Thinking about the Jesuits and their high, elitist demands made me think that the Catholic church — all churches — are an attempt to create a charismatic organization. To be charismatic, an organization needs to make high demands of its members. Only the most severe fundamentalist churches have shown growth during this period of religious decline.
The more sacrifice people have to make, the higher the standards (“impossible to meet” in fact works very well), the more they will justify and glorify the organization, and the more ardently they will believe.
How foolish the Catholic church has been in abolishing meatless Fridays. What it needs is the complete opposite: insist on more fasting.
Still pondering the church/communist party similarities and opposites: both required blind obedience. But the party (in its heroic, persecuted stage) made the members feel they were heroic and wonderful. The church chose to make the members feel worthless, rotten. They were miserable sinners, the spawn of Satan.
Maybe the Jesuit priests, the intellectual elite of the Catholic church, felt they were heroic and wonderful, but not the average devout Catholic. The most important thing was to see yourself as a sinner deserving of eternal damnation, if not for the "fact" that Jesus died for our sins.
I understand the need of salvationist religions to make people feel guilty and ashamed of themselves, but I wonder if making people feel wonderful about themselves might not work even better? I guess the doctrine of sin and salvation stands in the way. It's hard growing up when you get the message of being worthless. Not just "unworthy," but completely bad, a reject, hellfire fodder.
*
IS RELIGION STILL NECESSARY IN THE MODERN ERA?
~ I can't really imagine a scenario where religion will completely disappear, primarily because for many it fills a basic need: the need to be comforted about death. Many people have a deep fear that death is the end, that there's just nothingness.
In saying that, you argue that religion is needed for relative peace. Peace of mind for the believers, maybe, but historically religion has done little for world peace. So many wars have been actively fought on the basis of religion, from the Crusades to the current conflicts in the Middle East. Now, I'm not saying that the wars weren't also a question of power, but religion was so often the backdrop and message. ~
Icescream Kone:
I went through Catholic Bible school for 6 years, and eventually left because I was finally old enough to have the guts to leave. I only thought about leaving, and eventually did, because I started to realize that all the stuff taught at Bible school was…stoopid.
Unfortunately, religion will always be necessary, but not in the way your friends have described. Your friends should give humanity more credit. We're not THAT bad.
People won't just “lose their shit” if the whole of humanity slowly phased out religion. Sounds too far-fetched? Well, not really. The whole world is steadily becoming more secular, and even religious people are leaning towards secularism. It's just trending in the direction of pure understanding and away from “miracles”. The exceptions, of course, are ISIS and other religious nuts, like Evangelicals.
But, people love a “big brother". People innately need a purpose, need drive that only spirituality will bring. Which is why religion will always be here, always relevant, and for some/many people, necessary. And this has nothing to do with reputable proof.
Chances are, science will never disprove god. And to religious people, the inability to conclude that god isn't real is good enough to mean he is real. Eventually, religious people will accept the fact that their religious texts are nothing more than words on paper. However, they would never give up their God/s. Because the experience of god is deeply personal.
David Sylvester:
I think that organized religions are not good for society. They often use fear tactics, indoctrination, and inspire guilt and shame. They’ve also been the source of many of history’s wars, and have a knack for inserting their beliefs into political systems through their perceived ‘objective morality’. Any good thing that a church has done (say humanitarian aid, or sense of community) can also be done through entirely secular means.
Spirituality on the other hand is something that people seem to have a need for in some sense, to give some greater sense of self or to believe that there’s more to life than birth and death. People should be welcome and encouraged to create their own belief systems, whatever works for them, but as soon as those beliefs start infringing on the rights of others in the name of unprovable faith-based claims, that’s when religion needs to be shut down.
Peter:
Religion can be a crutch. Some people can't handle life without religion. OK. Now, some atheists say that we should remove those crutches. But if you remove a person's crutches, you are obligated to provide them a soft place to land. And don't many of us need some sort of help? Heck, without my glasses, I'd get hit by a car very soon. If you take my glasses I will be very angry with you and you will not be doing me a service.
Religion can be a motivator. Some people do good things because they think God wants them to. On the negative side, religion can be an excuse, an excuse for hatred and bigotry. The trouble is that "God told me" is impossible to argue against.
Mark:
Religion has actually been an outdated idea for quite some time. We can never go backwards into something like that again; that’s for sure. All religions were created thousands of years ago — ALL of them, when people were really ignorant. Science and technology have now taken over for good, and this will just increase as time goes on. God is dead.
*
HOW VOICE CHANGES WITH AGE
The vocal cords are what produce the sound of your voice. They are located in the larynx, a part of the respiratory system that allows air to pass from your throat to your lungs. When air passes out of the lungs and through the larynx, it causes the vocal cords to vibrate – producing sound.
The vocal cords are composed of three main parts: the vocalis muscle, vocal ligament, and a mucous membrane (containing glands) to cover them. This keeps the surface moist and protects them from damage.
There are also approximately 17 other muscles in the larynx that can alter vocal cord position and tension – thus changing the sound produced.
Pre-puberty, there's very little difference in the sound the vocal cords produce between females and males. But during puberty, hormones begin exerting their effects. This changes the structure of the larynx – making the "Adam's apple" more prominent in men – and the length of the vocal cords. After puberty, they're around 16mm (0.63in) in length in men, and 10mm (0.39in) in women.
Women's vocal cords are also 20-30% thinner after puberty. These shorter, thinner vocal cords are the reason why women typically have higher voices than men.
Even after puberty, hormones can affect the voice. For instance, a woman's voice may sound different depending on the stage of her menstrual cycle – with the best voice quality being in the ovulatory phase. This is because the glands produce the most mucus during this phase, giving the vocal cords their best functional ability.
Research also shows that women taking the contraceptive pill show less variation in voice quality because the pill halts ovulation.
On the other hand, hormonal changes during the premenstrual phase impede the vocal cords, making them stiffer. This may explain why opera singers would be offered "grace days" in the 1960s to ensure they didn't damage their vocal cords. And, because women's vocal cords are thinner, they may also be more likely to suffer damage from overuse.
As with almost every other part of the body, vocal cords age. But these changes might not be as noticeable for everyone.
As we get older, the larynx begins increasing its mineral content, making it stiffer and more like bone than cartilage. This change can begin happening as early as your thirties – especially in men. This makes the vocal cords less flexible.
The muscles that allow the vocal cords to move also begin wasting (as do our other muscles) as we age. The ligaments and tissues that support the vocal cords also lose elasticity, becoming less flexible.
There's also a decrease in pulmonary muscle function, reducing the power of the air expelled from the lungs to create the sound. The number of glands that produce the protective mucus also decreases, alongside a reduction in the ability to control the larynx.
While vocal cords age at largely the same rate in most people, many lifestyle factors can increase the risk of damage to them – and so can change the way your voice sounds.
Smoking, for example, causes localized inflammation and increased mucus production, but can also dry out the mucosal surfaces. Alcohol has a similar effect. Over time, these factors can damage the vocal cords and alter the voice's sound.
Some over-the-counter and prescription drugs can also alter the voice – such as steroid inhalers used for laryngitis. Blood thinners may also damage the vocal cords and can cause polyps to form, making the voice sound raspy or hoarse. Muscle relaxants, too, can lead to irritation and vocal cord damage, as they allow stomach acid to wash back into the larynx. The irritation and changes caused by these medications typically disappear after stopping use.
One other lifestyle factor is overuse, typically seen in singers and other people who use their voice a lot during work, such as teachers and fitness instructors. This can lead to an uncommon condition called Reinke's edema, which can also be caused by smoking. Reinke's edema causes fluid to build up in the vocal cords, swelling them and changing the pitch of the voice – often making it deeper.
In extreme cases of Reinke's edema, surgery is needed to drain the fluid. But in most cases, rest and avoiding irritants (smoking and alcohol) is beneficial, while speech and language therapy can also address the change in sound.
While we can't help some of the age-related changes that happen to our vocal cords, we can maintain some of our vocal quality and ability through continued use. This may explain why, in many cases, singers show significantly less vocal change with age than their non-singing counterparts.
Singing or reading out loud daily can give the vocal cords sufficient exercise to slow their decline.
Looking after your vocal cords is also important. Staying hydrated and limiting intake of alcohol and tobacco can help prevent high rates of decline and damage.
https://www.bbc.com/future/article/20230703-why-our-voices-change-with-age
Beverly Sills sings La Traviata: https://www.youtube.com/watch?v=I-AcsT9LRII
*
MARIJUANA CAN REMOVE ALZHEIMER’S PLAQUE
Salk Institute scientists have found preliminary evidence that tetrahydrocannabinol (THC) and other compounds found in marijuana can promote the cellular removal of amyloid beta, a toxic protein associated with Alzheimer's disease.
While these exploratory studies were conducted in neurons grown in the laboratory, they may offer insight into the role of inflammation in Alzheimer's disease and could provide clues to developing novel therapeutics for the disorder.
"Although other studies have offered evidence that cannabinoids might be neuroprotective against the symptoms of Alzheimer's, we believe our study is the first to demonstrate that cannabinoids affect both inflammation and amyloid beta accumulation in nerve cells," says Salk Professor David Schubert, the senior author of the paper.
Alzheimer's disease is a progressive brain disorder that leads to memory loss and can seriously impair a person's ability to carry out daily tasks. It affects more than five million Americans according to the National Institutes of Health, and is a leading cause of death. It is also the most common cause of dementia and its incidence is expected to triple during the next 50 years.
It has long been known that amyloid beta accumulates within the nerve cells of the aging brain well before the appearance of Alzheimer's disease symptoms and plaques. Amyloid beta is a major component of the plaque deposits that are a hallmark of the disease. But the precise role of amyloid beta and the plaques it forms in the disease process remains unclear.
The researchers found that high levels of amyloid beta were associated with cellular inflammation and higher rates of neuron death. They demonstrated that exposing the cells to THC reduced amyloid beta protein levels and eliminated the inflammatory response from the nerve cells caused by the protein, thereby allowing the nerve cells to survive.
"Inflammation within the brain is a major component of the damage associated with Alzheimer's disease, but it has always been assumed that this response was coming from immune-like cells in the brain, not the nerve cells themselves," says Antonio Currais, a postdoctoral researcher in Schubert's laboratory and first author of the paper. "When we were able to identify the molecular basis of the inflammatory response to amyloid beta, it became clear that THC-like compounds that the nerve cells make themselves may be involved in protecting the cells from dying.”
Brain cells have switches known as receptors that can be activated by endocannabinoids, a class of lipid molecules made by the body that are used for intercellular signaling in the brain. The psychoactive effects of marijuana are caused by THC, a molecule similar in activity to endocannabinoids that can activate the same receptors. Physical activity results in the production of endocannabinoids and some studies have shown that exercise may slow the progression of Alzheimer's disease.
Schubert emphasized that his team's findings were conducted in exploratory laboratory models, and that the use of THC-like compounds as a therapy would need to be tested in clinical trials.
In separate but related research, his lab found an Alzheimer's drug candidate called J147 that also removes amyloid beta from nerve cells and reduces the inflammatory response in both nerve cells and the brain. It was the study of J147 that led the scientists to discover that endocannabinoids are involved in the removal of amyloid beta and the reduction of inflammation. ~
https://www.sciencedaily.com/releases/2016/06/160629095609.htm
*
CAFFEINE COMBINED WITH TAURINE IMPROVES MENTAL AND PHYSICAL PERFORMANCE
~ Researchers from Turkey have found evidence that combining caffeine and taurine has synergistic effects on physical and cognitive performance. Their findings have been published in the journal Nutrients.
The motivation behind this study was to investigate the potential ergogenic effects of caffeine, taurine, and their combination on the performance of elite boxers. Elite athletes are always looking for ways to gain a competitive advantage, and supplements have become an integral part of their training regimen.
Taurine is a naturally occurring amino acid that plays several important roles in the body. Taurine is naturally present in some foods, particularly animal-based protein sources such as meat, fish, and dairy products. It is also available as a dietary supplement and is often included in energy drinks.
To conduct the study, twenty elite male boxers were recruited. They participated in five testing sessions, which included a familiarization session and four experimental trials with different supplements: caffeine (6 mg/kg), taurine (3 g single dose), a combination of caffeine and taurine, and a placebo (300 mg maltodextrin). The participants were randomly assigned to different supplement groups in a double-blind, crossover design.
The researchers measured various parameters related to anaerobic power, balance, agility, and cognitive performance. The Wingate Anaerobic Test, which measures maximal anaerobic power, was performed on a specialized cycle ergometer. Blood lactate levels, rating of perceived exertion, balance, and agility were also assessed.
Cognitive performance was evaluated using the Stroop test, which is a well-known neuropsychological test. The Stroop test measures the interference effect between reading words and naming the ink color in which the words are printed. Participants are presented with color words (e.g., “red,” “green,” “blue”) written in different ink colors, and they are instructed to ignore the word and focus on naming the ink color as quickly and accurately as possible.
The results of the study showed that caffeine, taurine, and their combination had varying effects on different performance parameters. Taking caffeine and taurine together improved the boxers’ power output, which means they were able to generate more force during the Wingate test. It also made them feel less exhausted during the exercise.
When comparing the combined intake of caffeine and taurine to taking them separately, there were differences in power output and other performance measures. This suggests that combining caffeine and taurine may have a synergistic effect and be more effective than taking them individually. The combined intake of caffeine and taurine also improved the boxers’ balance and agility.
In tests that evaluated cognitive performance, the combined intake of caffeine and taurine improved the boxers’ reaction time compared to when they took taurine, caffeine, or a placebo alone. This means their ability to react quickly to stimuli was enhanced.
But neither caffeine nor taurine, when taken separately or together, had a significant effect on lactate levels. Lactate is a compound that builds up in the muscles during intense exercise.
The researchers concluded that the combined intake of caffeine and taurine was the most effective supplementation strategy for improving anaerobic strength, balance, agility, and cognitive performance in elite male boxers. However, it’s important to note that individual responses to these supplements may vary, and further research is needed to fully understand their effects on athletic performance. Additionally, further research is needed to fully understand the mechanisms behind the effects of caffeine and taurine on performance. ~
https://www.psypost.org/2023/06/combining-caffeine-and-taurine-boosts-physical-and-mental-performance-study-finds-165803
Oriana:
Food sources of taurine include scallops and other seafood, egg yolks, and the dark meat of chicken and turkey. In addition, taurine is an inexpensive supplement. With age, we tend to become deficient in taurine, so taking taurine (try the 3 g dose) with your coffee seems a perfect answer.
*
PARKINSON’S AND THE MICROBIOME
~ Parkinson’s disease affects millions of people around the world, but it remains unclear exactly what causes it, and there is currently no cure for this condition. In an effort to better understand the mechanisms involved, some researchers are now looking to the gut.
Millions of people around the world live with Parkinson’s disease, a neurological condition that primarily affects mobility, balance, and muscle control, though its symptoms can include many other issues, from mood changes to gastrointestinal issues and a deterioration of memory and other cognitive functions.
According to data from the World Health Organization (WHO), the global prevalence of Parkinson’s has doubled in the past 25 years, and as per the most recent estimates, the disease has resulted in “5.8 million disability-adjusted life years” globally.
While much of this increase is driven by growing numbers of older adults, there is also some evidence that age-adjusted incidence is on the rise.
Dopaminergic medication, deep brain stimulation, and speech and occupational therapy are some of the treatments currently available to people with Parkinson’s disease, but researchers are constantly on the lookout for more and better treatments.
Several studies from the past 12 months have focused on one particular aspect of Parkinson’s disease, namely gut health. But why is gut health important in Parkinson’s, and what could it reveal about the disease?
Over the past few years, an increasing amount of evidence has come to light indicating that there is a two-way communication route between the brain and the gut. Researchers have termed this the gut-brain axis.
The gut-brain axis has been implicated in many health conditions affecting the brain, from dementia to depression. And while the gut-brain connection may be less obvious in other conditions, it is, in fact, clearer in Parkinson’s disease, which, in some people, is also characterized by gastrointestinal symptoms, such as constipation.
One perspective on Parkinson’s disease, known as the Braak Hypothesis, suggests that, in many cases, an unknown pathogen can reach the brain via two routes, one of which implicates the gut.
According to this hypothesis, one way for pathogens to reach the brain could be by being swallowed, reaching the gut, and then advancing to the brain via the vagus nerve — the longest cranial nerve that connects the brain with, among others, the intestines. This may then trigger the onset of Parkinson’s disease.
Dr. Ayse Demirkan acknowledged that, at first, the notion of looking to the gut to understand more about Parkinson’s disease might seem surprising, but that the Braak hypothesis provides an intriguing lens through which to assess potential mechanisms at play.
“[Through the Braak hypothesis,] there comes the idea that the disease actually starts in the intestines, and then through the vagus nerve, it spreads to the other tissues and toward the brain,” she explained.
According to her, Parkinson’s disease is the neurological condition most interesting to study in relation to gut health for one simple reason: Parkinson’s gut microbiome stands out the most.
Gut microbiome is different in Parkinson’s
Through the recent study they conducted, Dr. Demirkan and her colleagues saw that individuals with Parkinson’s disease had distinct gut microbiomes characterized by dysbiosis — the phenomenon of imbalance between so-called good versus bad bacteria.
Their study suggested that around 30% of the gut bacteria found in people with Parkinson’s disease differ from those found in people without the disease.
Dr. Demirkan and her colleagues found that bacteria such as Bifidobacterium dentium — which can cause infections such as brain abscesses — were at significantly elevated levels in the gut of people with Parkinson’s disease.
Other infection-causing bacteria more abundant in people with Parkinson’s were E. coli, Klebsiella pneumoniae, which can cause pneumonia, and Klebsiella quasipneumoniae, which can cause similar infections.
The study conducted by Dr. Demirkan was not the only recent research to zoom in on the differences in gut bacteria.
Research from the University of Helsinki in animal models of Parkinson’s disease — published in May 2023 in Frontiers — suggests that Desulfovibrio bacteria may be implicated in this condition. These bacteria produce hydrogen sulfide, which may lead to forms of inflammation.
Desulfovibrio also came up in a study from The Chinese University of Hong Kong, which appeared in May 2023 in Nature Communications. This study, whose aim was to find a method of diagnosing Parkinson’s earlier, identified an “overabundance” of these bacteria in people with REM sleep behavior disorder and early markers of Parkinson’s.
REM sleep behavior disorder is a deep sleep disturbance tied to a higher risk of Parkinson’s disease. In people with this disorder, the usual brain mechanisms that prevent them from “acting out” the content of their dreams no longer work, which means that they perform uncontrolled movements in their sleep.
If gut bacteria do play a role in Parkinson’s disease, the question that arises is: What mechanisms might mediate their impact on neurological health?
One hypothesis hinted at in the studies on the link between the gut and the brain in Parkinson’s is that systemic inflammation may be one of the mechanisms involved, since some of the bacteria that are overabundant in this condition are pro-inflammatory, meaning that they can trigger inflammation.
There is research indicating the effectiveness of immunosuppressant medication in Parkinson’s disease, which suggests that medications targeting the immune response may also help manage the condition.
Indeed, chronic brain inflammation is an important part of Parkinson’s disease, and some studies seem to indicate that systemic inflammation may worsen brain inflammation and thus contribute to disease progression.
Some inflammatory conditions have actually been linked with a higher risk of Parkinson’s. For example, one Danish study from 2018 suggested that people with inflammatory bowel disease (IBD) have a 22% higher risk of Parkinson’s disease than peers without this inflammatory condition.
Could diet fight dysbiosis in Parkinson’s?
If gut bacteria may play a role in Parkinson’s disease, it may seem reasonable to infer that diet could help fight gut dysbiosis and perhaps provide an easy option for symptom management.
While there are some dietary recommendations and nutritional supplements that may help provide some symptom relief for some people, it remains unclear just how much diet can actually do to alter the course of this disease.
One study from 2022 suggests that diets high in flavonoids — natural pigments found in many fruits — are linked to a lower risk of mortality in Parkinson’s disease.
And an older study, from 2018, argued that a protein found in many types of fish, called “parvalbumin,” may help prevent Parkinson’s disease by stopping alpha-synuclein from collecting into clumps in the brain — which is what happens in the brains of people with Parkinson’s, disrupting signals between brain cells.
However, when asked about the potential of diet and supplements to regulate gut bacteria in people with Parkinson’s, Dr. Demirkan expressed some reservations.
She emphasized that since people have different risk factors for Parkinson’s, as well as different iterations of the disease, it is difficult to make general recommendations that would actually prove helpful.
Can exercise help with Parkinson’s?
There is, nevertheless, some research suggesting that exercise can be an effective means of managing the symptoms of Parkinson’s disease.
One study from 2022, published in Neurology, suggested that participating in regular, moderate-to-vigorous exercise could help slow down the progression of Parkinson’s disease for those in the early stages.
Research from 2017 advised that at least 2 and a half hours of exercise per week could help people with Parkinson’s improve their mobility while slowing down disease progression.
Dr. Demirkan agreed that exercise can be a helpful strategy for managing Parkinson’s disease. “[E]xercise itself is an amazing way of shaping our brain and body,” she said.
“[I]n terms of reversing [Parkinson’s] pathology, there are some large physiological effects that we can think about. If you’re running a marathon, for example, it’s a big thing that your body has to go through. […] [F]or instance, one thing is that your heat increase[s] for a long time in, like, a […] feverish way, right? There is a long-term increase in the core heat, that’s one thing, and that should definitely have an important effect [on the gut],” she explained.
Indeed, some research suggests that the heat stress taking place during exercise could reduce intestinal blood flow, which eventually may impact the gut microbiome by potentially suppressing some bacteria and making room for others to expand.
As to which form of exercise is best for people with Parkinson’s disease, a Cochrane review published in January 2023 concluded that pretty much all forms of exercise can help improve life quality for those living with this condition.
According to the review authors, existing evidence suggests that aqua-based training “probably has a large beneficial effect” on quality of life. Endurance training is also helpful, both in improving life quality, in general, and in managing motor symptoms, in particular.
When it comes to managing motor symptoms, the authors write that dance, aqua-based exercise, gait/balance/functional exercise, and multi-domain training could all be equally helpful.
And some past research — in women with overweight but without Parkinson’s — has suggested that endurance training results in an increase in beneficial bacteria called Akkermansia, which contribute to improved metabolic function.
Shaughnessy, who regularly takes part in demanding and arduous marathons and other sports challenges to raise funds for Parkinson’s research, told us that exercise has helped him more than anything in maintaining his well-being.
“[E]xercise has become a big part — was already a part of my life before [the diagnosis], but it’s become […] a big way of helping me to manage and control the condition,” he said.
“I gradually went from, you know, a bit of running to marathons. And then the latest thing I’ve done was a 14-day cycle from Liverpool to Ukraine — 1,400 miles, which was probably a little bit beyond my capability, to be honest,” he mused.
But challenging himself in this way, he said, truly helped him on a mental level. “[W]hile I’m exercising, I don’t feel like I have Parkinson’s, quite often,” Shaughnessy said.
For him, it is all about focusing on what you are actually able to achieve at any given point in time, and aiming for that.
https://www.labiotech.eu/in-depth/gut-microbiome-parkinsons-disease/
Oriana:
One of the side benefits of eating the kind of diet that nourishes a healthy microbiome (think soluble fiber and fermented foods, e.g. sauerkraut) might be a lower risk of Parkinson's. Drinking coffee is also related to lower risk. Basically, however, Parkinson's remains a mystery: we don't know what causes it (is it basically autoimmune?) and we certainly don't know how to cure it. It remains the second most common neurological disorder after Alzheimer's.
It's also interesting that one of the symptoms of Parkinson's is constipation. In fact, the gastrointestinal symptoms may appear before the typical Parkinson's symptoms such as tremors. Perhaps the famous Greek physician Hippocrates was right when he said, "All disease begins in the gut."
*
KETOGENIC DIET AND AUTOIMMUNE DISORDERS
The ketogenic diet has been the center of some controversy over the years. It is characterized by a very low consumption of carbohydrates—less than 50 g a day—offset by a higher proportion of fat. Its opponents often demonize it for cutting out whole food groups while its advocates maintain that its benefits outweigh the risks.
However, apart from its well-studied benefits in managing epilepsy in children, evidence for its other potential advantages—such as reducing inflammation—has remained scarce, at least in humans.
What’s certain for now, however, is that we are still uncovering the exact mechanisms behind why and how this diet works and impacts health.
How keto came to be
In 2021, the ketogenic diet officially marked its 100th anniversary. In the 1920s, the keto diet was introduced as an alternative therapy to help children with epilepsy after doctors saw that mimicking the metabolism of fasting not only improved symptoms but also helped control seizures.
“[The diet] was used to try and treat epilepsy because it had been observed when people who had seizures didn’t eat [carbohydrates] the seizures would stop. But, of course, that wasn’t sustainable. So, it was sort of developed to try and further explore this therapeutic potential of fasting in epilepsy. It was effective in adults and children,” said Dr. Masino.
Mimicking ‘starvation’
Keto’s main mechanism of action is to prompt the body to switch to a different energy-producing process—using fat, rather than simple carbohydrates (such as glucose and fructose) and complex carbohydrates (such as starch and dietary fibers), as its primary source of fuel.
When the liver starts breaking down fats, it produces chemicals called ketones. When ketones in the blood reach a sufficient level and the body relies on fat, or specifically ketone bodies, for energy, it enters a metabolic state called ketosis.
“[W]hen you have restricted carbohydrates, or just insufficient calories, you will start generating ketone bodies instead of glucose. And so your body will be using these ketones for fuel,” explained Dr. Masino.
The keto diet, in a sense, stresses the body initially, which sparks a protective response much like exercise does in muscles. As a result, it reduces inflammation, oxidative stress, and sensitivity within the nervous system—all of which can help with managing chronic pain.
Dr. Masino underscored that it isn’t always necessarily ‘a stress-inducing state’ for the body when it produces ketones, and evolutionarily, humans have experienced this state quite frequently when there was less available food.
She added that the body can start generating ketones even in “a relatively short duration of insufficient calories or restricted carbohydrates.”
Multiple mechanisms of action
Using food—the fuel for many metabolic processes in the body—as a potential treatment for metabolic-related disorders or chronic conditions is nothing new.
Dr. Masino elaborated on the multiplicity of mechanisms of the keto diet:
“[It may] address a number of different conditions because it can increase energy production and reduce inflammation. [A] lot of my work has been centered on adenosine, which is a really interesting molecule that is involved in communication between nerve cells—it’s involved in energy cycles, it can impact DNA methylation.”
She added that the ketogenic diet increases production of adenosine.
Although many people report rapid weight loss while on a keto diet, the reverse—in less extreme form—has also been true.
“[W]hat’s been amazing to me is that this kind of ketone-based metabolism seems to help people who are overweight to lose weight. But it [also] helps people who are underweight or animal models that suffer from underweight to maintain and stabilize their weight. So it’s not always a weight loss diet,” said Dr. Masino.
Dr. Masino believes the keto diet may help restore a state of physiological balance.
“It’s almost something that I think through this multiplicity of mechanisms [that] is helping your body to get to its sort of ideal physiological state, where it is then more resilient to other stresses that may come in on it,” she said.
“[If] you have a more resilient physiology that’s less inflammatory, has great mitochondrial energy production—which the ketogenic diet absolutely does— then you can fend off assaults from all of the things that our bodies are assaulted with,” Dr. Susan Masino said.
Previous research has shown that diet can influence inflammatory pain, finding particular links between a Standard American Diet (SAD), which is typically high in processed foods, fats, and carbohydrates, and chronic inflammation.
“[T]he fastest way to change your microbiome is through diet. So it’s not surprising that this would have a rapid effect, and that changes signaling in your brain and in your body,” said Dr. Masino.
Her research, for example, was based on the hypothesis that ketone-based metabolism could increase adenosine, a molecule that could be instrumental in the body’s inflammatory response.
“[A]denosine is also released during any kind of injury or wound during that inflammatory process. And that’s something that can help with the healing. It’s [a] very powerful anti-convulsant molecule,” she elaborated.
She hypothesized that the keto diet could be a way to promote the neuroprotective and anti-seizure benefits of this molecule, which could help regulate the nervous system.
“There’s a lot of interest now in trying to use these metabolic approaches, particularly ketogenic approaches, in mental illness, [as] all of our neurological disorders are associated with a metabolic and inflammatory component,” she added.
Disrupting pain signals in the body
As for keto’s inflammation-reducing benefits, Dr. Masino elaborated on the mechanism:
“Inflammation itself is something that can cause pain. So reducing the inflammation in general, so that it’s not chronic or inappropriate, is itself a critical kind of pain disorder benefit of the ketogenic diet.”
“More specifically, if we increase adenosine in the central nervous system, that means the brain and the spinal cord—if the ketogenic diet is able to do that—that helps calm down the nerve cells directly so that they’re not firing and sending that pain signal,” Dr. Masino said.
“If you have better metabolism, if your mitochondria are in good shape, that helps to clean up all the broken things and keep your cells functioning and able to recover at a cellular level. If we can reduce the pro-inflammatory cytokines, that is another important mechanism,” she added.
Another way a ketogenic diet may help calm chronic pain arising from an overexcited, overstimulated nervous system is via the ketones themselves.
One ketone, beta-hydroxybutyrate, can block immune system receptors linked to inflammation and help decrease nervous system activity.
How essential are ketones?
However, it remains unclear whether the ketone bodies produced on a ketogenic diet are essential for mediating pain and inflammatory responses, or whether following a low-glucose diet could produce similar results.
“[I]n some specific pain conditions, or epilepsy conditions or inflammatory conditions, maybe the ketone bodies are critical in that case, and in other cases, maybe just reducing and stabilizing glucose is enough to relieve those symptoms,” said Dr. Masino.
Addressing controversies on keto
The keto diet’s low fiber and high fat content, its effects on weight, and its long-term sustainability have been the main areas of concern.
Is a high fat diet always bad?
One particular area of controversy for keto is its high fat content, both saturated and unsaturated.
“[This] has been related to changes in dietary recommendations, where fat was basically vilified and became another focus of changing our food system so that we eat less fat and less saturated fat. So [the ketogenic diet] became not only perceived as less scientific, but actually dangerous to eat this much fat,” explained Dr. Masino.
High fat diets aren’t necessarily bad, stressed Dr. Masino.
“It’s really high fat diets in combination with carbohydrates that have those toxic effects,” she said.
Dr. Hilary Guite pointed out the main problem many people have when following a diet higher in fat.
“[If] you have a high fat diet in the face of low carbs, your body reacts very differently to the fat. It starts using the fat. Whereas if you have a standard American diet, then you’re having high fat and high carbs. And that’s where the danger from fats [is] coming [from],” she said.
Dr. Masino agreed.
“[T]hat’s exactly the issue. [It is a] completely different environment with high fat and restricted carbohydrate versus high fat and high carbohydrate, which is the environment that most of us are [in] physiologically most of the time, which is much more pro-inflammatory [and] will cause you to gain weight. [W]hereas [with] high fat, low carb, you’re not putting on weight, you’re maintaining your weight,” she said.
Will keto lead to high cholesterol?
Keto’s impact on blood lipids, due to its high saturated fat content, has been another controversial aspect. Consuming saturated fats, which are found in butter, cheese, and fatty meats, has been linked to higher total and low-density lipoprotein cholesterol (LDL-C) levels, which can increase the risk of heart attack and stroke.
However, it appears that high saturated fat diets increase the larger, buoyant subfraction of LDL, which has not been associated with increased cardiovascular disease. Furthermore, a keto diet decreases the small, dense subfraction of LDL-C, high levels of which have been associated with increased heart attack and stroke risk.
That said, the evidence in the literature about keto’s impact on cholesterol and cardiovascular disease is conflicting, and in some case studies the diet has led to alarming rises in LDL-C, though such case studies do not distinguish the type of LDL-C.
“[T]here has been a concern that if the diet increases your cholesterol levels, [then] that would be a hallmark for cardiometabolic problem[s]. I think we need to reevaluate the mantra that cholesterol is the villain in terms of cardiometabolic issues,” said Dr. Masino.
Does keto deplete healthy gut bacteria?
Dr. Guite, meanwhile, expressed concern about the health of the gut microbiome while on the keto diet. She said recent research has indicated that children on long-term ketogenic diets had lower levels of gut bacteria that protect the gut lining.
“[I]f you take away those legumes and the high fiber foods, then the bacteria can start using the mucus around the lining and actually damage the gut [in the] long term. What do you think the impact of a long term ketogenic diet is on the gut microbiome and the integrity of the gut lining?” asked Dr. Guite.
Dr. Masino acknowledged that, especially in the absence of a trained dietitian, such negative effects may be seen.
“[I] do want to mention that dieticians actually recommend eating prebiotic foods like fermented foods [such as] pickles, sauerkraut. [P]rebiotic foods like that might be really protective against that possible negative consequence,” she said.
There is also research suggesting that the ketogenic diet can increase certain types of gut bacteria, such as Akkermansia muciniphila, whose increased abundance is one of several markers of good metabolic health observed on the keto diet.
Lived experience: Lupus and keto
To see whether the keto diet may actually help improve the management or severity of pain and inflammation-related symptoms, Medical News Today asked Shea, who has lupus, about his experience on and off the diet over the years.
Shea first described how he felt after his body had switched into a state of ketosis.
“[I initially] had a lot of gastrointestinal problems, [a] bit of stomach gurgling, bloating, [feeling] really, really lethargic. And I didn’t really want to do much, [it] felt like a headache as well. And that lasted for about four or five days,” he said.
These symptoms are more commonly referred to as the “keto flu.”
What makes keto hard to follow
On a more personal level, Shea also shared his struggles with strictly following the diet, especially in social settings.
“[It] was very hard to do in the beginning. Because in the beginning, you just want a cake, cookies, bread, rice, anything. And then you have to realize that those are all things that you can’t really eat,” he said.
Shea said one of the hardest switches he had to make was with snacks, because most of them are very high in carbohydrates.
“[So] you have to make your own kind of snack or just have a slice of cheese to hold you down,” he said.
Around the 2-month mark, Shea grew accustomed to the keto diet and knew what he could eat.
Less inflammation, less allergy
Although Shea didn’t adopt this diet specifically to see whether it helped with his lupus, he did see positive changes in the management and frequency of his symptoms.
“I didn’t start the diet to help with my autoimmune disease. It was mainly for health. And this was just a side effect of what happened with it,” he explained.
“Before I started the keto diet, I would take about two to three allergy tablets a day. And then when I was on the keto diet, probably at the 2-month mark, I wasn’t taking that many tablets anymore. It was usually about one a day. And it got to the point where I was taking half a tablet a day or I can miss a day and I would still be fine. And the longest I’ve gone is about three days without taking medication,” Shea said.
Reduced pain
In terms of chronic pain, Shea also saw improvements.
“[W]hen I was on the diet, my pain, I could almost feel that it would reduce the longer I was on it. And it wasn’t as [wide]spread over as it was before. Because before I would feel it in almost all my joints, and it would feel quite stiff,” he said.
“[T]he longer I was on the diet—it wasn’t magically going away, or like I could feel [a] huge change—but there was something there that I could kind of notice where I wasn’t as stiff as before. And my body wasn’t burning as much and in a lot of agony over it,” Shea said about his experience with lupus and the keto diet.
Is keto sustainable?
“There are people that have definitely been following a ketogenic diet for decades and have not had any ill effects. So, I think we need to dispel some of the myths that this is really dangerous, or not sustainable, or not recommended by the medical profession,” Dr. Masino said.
“I would encourage [people] to find a keto-literate provider that could help give them advice or a dietitian,” she advised.
Shea believes that, for him, it’s not a diet he can follow in the long term.
“[I]t’s not very sustainable [because] you’re just constantly eating fatty foods. And you will get to the point where you want to go out for a meal with some friends or you want to go on a date or anything like that. And you can’t really get the options for a keto diet out there,” he said.
“[I]f you break the diet, it can have consequences with it almost immediately. When I do come off the diet, I have a huge flare-up,” he said. However, he also said that the keto diet brings results really fast for him.
For now, it seems, keto’s benefits for chronic pain and inflammatory autoimmune conditions remain rather speculative. However, as more evidence surfaces, this approach may prove valuable in treating such conditions or complementing existing therapies. ~
https://www.medicalnewstoday.com/articles/in-conversation-is-the-ketogenic-diet-right-for-autoimmune-conditions
Oriana:
Keto appears to help patients with multiple sclerosis. I think at this point we have enough evidence to say that keto appears helpful in all neurological diseases.
*
KETO DIET SEEMS TO BENEFIT PATIENTS WITH RHEUMATOID ARTHRITIS
A systematic review of 70 dietary studies revealed that fasting, omega-3 fatty acids, and vitamin D3 significantly reduced RA symptoms. Fasting and calorie restriction, in particular, were associated with improvement in RA activity, with stronger effects on subjective symptoms.
The keto diet may affect RA in several different ways. First, beta-hydroxybutyrate (BHB) suppresses macrophage and neutrophil synthesis of IL-1 by inhibiting NIC and thus reducing TNF. Second, BHB has been shown to suppress proinflammatory interleukins, including IL-1, IL-12, and IL-6, by activating HCAr. Furthermore, BHB may inhibit the NLRP3-mediated release of IL-1β and IL-18, contributing to the anti-inflammatory role of the ketogenic diet (KD).
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8712653/
Oriana:
Since the keto diet has an anti-inflammatory effect, it’s helpful for all kinds of arthritis.
*
Ending on beauty:
I have tried to write Paradise
Do not move
Let the wind speak
that is paradise
~ Ezra Pound