Saturday, May 17, 2025

WHEN FLUORIDATION IS STOPPED; JOAN DIDION WITHOUT HER STYLE; CAB AND AMBULANCE DRIVERS HAVE A LOWER RISK OF ALZHEIMER'S; WHY NO HOMELESS PEOPLE IN MOSCOW; HOW DRUG COMPANIES INFLATE THE COST OF DRUGS; THE MYTH OF HAVING IT ALL

Egret at dawn; photo: Rob Travis

THE THIRD LANGUAGE

In high school I kept a diary
in English, so if the teacher
caught me, and she did,
she wouldn’t understand.

I had a small vocabulary
and even less to say.
“Weather’s getting warm,”
I confessed in a secret language.

My first class in Los Angeles,
on June evenings,
in the palm-plumed dusk,
was a typing course.

For rhythm, the instructor played
“The Yellow Rose of Texas”
above the cross-fire
of night students pounding

on the jamming keys.
I machined a sinister idiom:
Dear Sir: Due to circumstances
beyond our control —

College was a subordinate clause.
I was a mouse in the auditorium,
scribbling neat, useless notes.
One time I graded three hundred

freshman papers
on the death penalty.
I didn’t want to graduate.
Life was penalty enough.

To survive I had to learn
a third language,
a code in the brain
it takes nightmares to crack:

words husked from
the grain of things,
Adamic names that fit
animals like their own pelts —

fluent as flowers,
rare as rubies, occult
atoms in the lattices
of sleep. To be silent
and let it speak.

~ Oriana


Gerrit Dou: A Scholar Sharpening a Quill


*
JOAN DIDION WITHOUT HER STYLE


Joan Didion, Thomas Powers observed after she died aged 87 in 2021, is “almost brutally direct, but it’s never entirely clear what she means to say” – including to her, one might add. Bluntness and a certain opacity, exactitude and elusiveness, even avoidance: this paradoxical blend, as Didion’s iconic status attests, proved to be culturally intoxicating. She set the forthright, cagey tone as early as her first, reputation-making essay collection, Slouching Towards Bethlehem (1968). In her reluctant preface, she complains that after the title essay was published, “I saw that, however directly and flatly I thought I had said it, I had failed to get through.”

Didion learned to write by typing out Hemingway’s stories as a teenager, and later by writing copy, especially captions, for Vogue, where she worked in her twenties. 

Hemingway’s “perfect sentences”, she said, “taught me how sentences worked”: “Very direct sentences, smooth rivers, clear water over granite, no sinkholes.” It’s a seductive, telling image: a cool, transparent medium through which you glide; no snags or hidden depths; nothing to drag you beneath the surface; nothing, as she put it of Henry James’s long, complicated sentences (“sentences with sinkholes”), in which you could “drown.” 

Her exacting editor at Vogue (“Run it through again, sweetie”) would “get very angry about extra words, about verbs not working”: “everything had to work, every word, every comma.”
“Work”: not make every word “tell” (let alone “mean”), as Strunk and White’s classic style guide has it, but work together, as in a well-oiled machine, or work on its own terms – as though the ideal were a kind of internal harmony, a functional coherence. It’s an emphasis in keeping with Didion’s keen sense of writing as a technical craft. 

A writer was to her “a person whose most absorbed and passionate hours are spent arranging words on pieces of paper”, as she put it in her beguiling, somewhat baffling 1976 essay “Why I Write” – not an inaccurate definition but a partial one, leaving open the possibility that writers could be collagists of nonsense. 

Didion explains that she has little idea what she’s going to say until she says it. She claims to be unable to “think” and to be without “even limited access to my own mind.” An enigma to herself, writing is a route to eloquent self-discovery: “I write entirely to find out what I’m thinking.”

She worked backwards from the sounds of the sentences. Writing for Didion was akin to writing music (“grammar is a piano I play by ear”). “The arrangement of the words tells you, or tells me, what’s going on,” she explained, dogmatic about her intuitive methods: “Nota bene: It tells you. You don’t tell it.”

The beauty and potency of Didion’s prose is indeed partly down to its strong, dramatic cadences. But the drift of her sentences was not always as clear as they sounded, and the relationship between the music and the meaning would later seem more complicated. If in 1976 Didion could enthuse that “all I know about grammar is its infinite power,” by the time of her 2005 memoir The Year of Magical Thinking, she was sounding a more equivocal note: “I developed a sense that meaning itself was resident in the rhythms of words and sentences and paragraphs, a technique for withholding whatever it was I thought or believed behind an increasingly impenetrable polish.”

It’s an appropriately ambiguous, indeed “increasingly impenetrable” sentence. Note the unsatisfying seam – a little “sinkhole”? – between the clauses: to what exactly does “technique” refer? Writing is no longer a way of finding out what you think. Now, Didion already knows “whatever it was I thought,” and regards rhythm as a way of “withholding” it: a means not of discovering your meanings but of hiding them.

By the time of her last book, Blue Nights (2011), Didion found that writing by ear – letting the “rhythm tell me what it was I was saying” – “no longer comes easily to me.” She at first puts it down to a “certain weariness with my own style,” “a wish to be more direct,” even for an “absence of style,” but she fears it is simply “frailty”: “What if I can never again locate the words that work?” 

Blue Nights is indeed a frailer, less musical, altogether less accomplished book. “I need to talk to you directly”; “Let me again try to talk to you directly,” she says repeatedly – sentences that don’t sound as perfect, as self-possessed, as direct as her earlier “smooth rivers.” Didion eventually acknowledges that the book’s subject – the death of her adopted daughter, at 39 – is exacerbating her problem: “Quintana is one of the areas about which I have difficulty being direct.”

Blue Nights was in a sense Didion’s second effort to be direct about this “area” (itself a strikingly nebulous, evasive word). The Year of Magical Thinking was written in the aftermath of the sudden death of her husband, the writer John Gregory Dunne, in 2003, but it had also been in part about their daughter, who had become seriously ill and was admitted to hospital a few days before Dunne suffered a fatal heart attack. It was the beginning of many months in and out of ICUs in what Didion, in Blue Nights, calls Quintana’s “cascade of medical crises.” She would die of acute pancreatitis in 2005, while Didion was promoting The Year of Magical Thinking.

That famous memoir is a riveting study of a reeling mind, an exercise in clinical self-scrutiny, a rational record of Didion’s irrational ruminations: obsessively rehearsing the details of Dunne’s death, in search of evidence that would relieve her of her feelings of responsibility and allow her to relinquish the secret hope that he might come back. It is full of penetrating reflections and wrenching details: plugging Dunne’s phone into charge when she gets back from the hospital, keeping his shoes because “he would need shoes if he was to return.”

Despite its searing intensity – this is writing to find out what you are thinking at its most captivating – Magical Thinking is not especially confessional; it reads as though it were an inquiry into a grieving mind that only happens to be hers. Quintana is a particularly elusive presence, and the reader is left in the dark about the extent of her recovery. There is “a sense of things missing,” as Martin Amis observed of The White Album in 1980.

Blue Nights is only somewhat more forthcoming. We learn of Quintana’s “startling depths and shallows” as a child, “the quicksilver changes of mood”. Didion refers, grudgingly, to her daughter’s various “diagnoses” (eventually “borderline personality disorder” – Didion won’t waive the quote marks). 

She refers only once to her daughter’s alcohol addiction, with a characteristic mix of clarity and circumlocution: “She was depressed. She was anxious. Because she was depressed and because she was anxious she drank too much. This was called medicating herself.” 

As in Magical Thinking, the introspection is abstracted. One of the refrains of the book runs: “When we talk about mortality we are talking about our children.” Talking about Quintana is in part a way of talking about other things, and those things, perhaps, a way of not talking about Quintana.

Notes to John, the first book to emerge from Didion’s archive, plugs the gaps, to say the least. It is the most direct book Didion wrote – or rather, pointedly didn’t write – on the “area” about which she found it so difficult to be direct. It is a journal, apparently kept for her husband, recording regular sessions with a psychiatrist, Roger MacKinnon, beginning in late 1999. 

She started seeing MacKinnon at the suggestion of Quintana’s psychiatrist (“Dr Kass”), who thought it might help Quintana, then in her mid-thirties, severely depressed and drinking heavily (“really a hardcore alcoholic”, as Kass tells MacKinnon at one point – the two are in communication). Didion is herself “very depressed,” racked by the suffering of her daughter, before which she feels helpless.

The release of material that Didion carefully preserved (apparently typed up and chronologically organized in a filing cabinet next to her desk) but did not choose to make public is controversial. That her memoirs are selective with detail appears to have been, in Magical Thinking especially, a conscious formal decision, and there a highly effective one: isolating grief as a psychological phenomenon with thrilling precision, rather than dilating on her family circumstances. Regarding Quintana, reticence was presumably also a matter of compassionate discretion.

Whether or not it was right to publish it, Notes to John is an undeniably interesting book, if not an obviously stylish one. It reads as a functional chronicle of Didion’s exchanges with MacKinnon, delivered as “directly and flatly” as she perhaps ever mustered, though its seeming completeness, its level of detail, does imply virtuosity: namely, astonishing powers of recall, as Didion charts seemingly every turn in their conversation, including reproducing whole paragraphs of MacKinnon’s speech. 

Their sessions range widely, from Didion’s early childhood – her sadness and fear when her father enlisted in the war (she stopped growing), his depression when he returned – to her feelings about adopting Quintana (always afraid she would lose her and so invested in her dependency) and her attitude to work (the best salve for anxiety yet invented, according to MacKinnon). 

But it is also a grueling account of the cyclical patterns of Quintana’s affliction (“Every four to six weeks a crisis, an opening, then the withdrawal, the distancing”). The best way Didion can help Quintana, MacKinnon suggests, is by accepting that she can’t. To “break” the pattern of overprotection and overdependency Didion must accept their separateness. “You can’t protect her anymore… What she needs is your trust.” Children grow up, MacKinnon says, by coming “to trust that their parents trust them.”

Since the journal is almost entirely reported speech, with no commentary, Didion herself is an oddly absent presence, the subject and the narrator but somehow not quite either. One close friend, objecting to the publication of the book, said they could not “think of anything more private than notes kept about one’s psychiatry sessions.” There are sensitive revelations: Didion reveals she was secretly treated for cancer. [breast cancer, treated with radiation]

At one point she admits to having the “extremely upsetting” thought that she “didn’t like” her daughter. There are also intimate details of a more banal variety (Didion and Dunne give Quintana $100,000 for Christmas; by the end of the journal the money is running out). 

But do even unspeakable thoughts really count as revelations? We may not make a habit of publicizing our hurtful ambivalences, but that they exist is not exactly shocking. What’s more, the therapeutic context is generalizing, a setting in which innermost patterns are interpreted according to an impersonal framework, an idea of how minds and families work.

The quantity of arresting and widely applicable insights makes Notes to John a profound, rich document. Any sense of prying is counter-balanced by the definite feeling that you are learning about more than the particular unhappiness of Didion’s family. Didion herself has rarely seemed so sympathetic in her own writing. Perhaps we are all sympathetic on the couch – there, we are the ones telling the story (in Didion’s case, twice over: telling and then telling the telling). As her famous line has it, “we tell ourselves stories in order to live”, to make our lives livable.

The literary merits of Notes to John are harder to appraise – it’s so neutral and unvarnished that there is little sense of writing as “a performance” (as Didion once described it). Yet that it is such a readable narrative suggests there may be more craft involved than meets the eye. Perhaps Didion here perfected the “absence of style” she would seek at the end of her career.

One can only speculate about her wishes, conscious or unconscious: what to make of her leaving the entries neatly by her desk, or neglecting to destroy them when she can’t have been unaware of the coming onslaught of posthumous interest in every scrap she left behind. Maybe there were things part of her wanted the world to know but that she was unable to say to us – directly. Or perhaps she simply didn’t know what she thought.

https://www.newstatesman.com/culture/books/book-of-the-day/2025/04/joan-didion-without-style

Oriana:
I read only parts of The Year of Magical Thinking and Blue Nights. The former is justly admired; it’s perhaps the best book on the subjective experience of grief ever written. Blue Nights, on the other hand, didn’t work for me. It reveals a lack of insight into alcoholism; thus, in my view, it has little to offer the reader. But then it’s perhaps unfair of me to expect the author, shaped by literature and not science, to understand the biology of addiction, an understanding that even experts don’t fully possess.

On the other hand, I admire Didion for grasping that an adopted child may find the knowledge of being adopted (i.e. “given up,” or even “abandoned,” by her own mother) difficult to process, even traumatic. It shatters any rosy view of adoption that the reader may have. Again, it would be unfair to blame Didion for her limited understanding of the trauma of adoption, a relatively new concept.

Yes, Didion can come across as evasive, but I wonder to what extent that evasiveness simply stems from her ignorance. Perhaps it was in part willful ignorance, but that's a different proverbial can of worms.

*
WHY THERE ARE NO HOMELESS PEOPLE IN MOSCOW

A Victory Day parade float with a Moscow Kremlin tower snagged on flag bunting and collapsed from the bed of a moving truck.

Many American bloggers who come to Moscow note that there are no homeless people in the streets. They believe that this makes Russia a more advanced country than the United States.

There is no homeless population in the streets of Russian cities because they get eliminated, bussed away, liquidated. This is an easy feat to accomplish for two reasons.

Firstly, public places in Russia are not “public,” nor “commons.” They are state property. Your presence on state property is conditional on your proper behavior. That’s why no Occupy Wall Street or color revolution is possible in Moscow. Protesters would be arrested and, if they resisted, exterminated, with machine guns if necessary.

Secondly, no person in Russia has any human rights. There are no rights, only privileges. Your privileges can be snatched away on a whim, without any explanation. As such, a homeless person has no right to occupy a place on the street or in the library, as is the case in the U.S.

The state can do whatever it likes to such a person. He forfeited his privileges when he let himself go. I remember once I was wearing scruffy clothes and my hair was disheveled. I had a plastic bag with stuff next to me on a park bench. At some point I realized that the park ranger was making a move toward me because he took me for a homeless person.

“I’m not homeless!” I said quickly. A few seconds more and he would have been calling the cops to pack me into a police bus and Soylent Green me in the woods.

That’s the Mongol Empire legacy. There’s the khan, the ultimate ruler. He receives dukes who wish to rule over territories dispersed across his huge realm. They bring the khan gifts and prostrate themselves, and are granted a “label” to rule over their territories and collect tribute for the khan from the local populace.

The tribute collected is spent lavishly in the capital (Moscow, in Russia’s case), whose privileged populace, for as long as it behaves properly, parasitizes the provincial populaces, whose dukes are too busy collecting tribute for the khan, and enjoying their own perks and privileges, to take any care of them.

That is the system Putin has rebuilt on the model of the Mongol Empire: one in which nobody has any rights, and anyone’s freedom and property can be snatched away at any moment. ~ Misha Firer (Brutalsky)

Claire Jordan:
We did a lot on the history of 19th and early 20th century Russia in history class at my school in the 1970s, and one thing that stuck with me was that up till about 1880 Russian peasants were nearly slaves, and could be wagered as stakes in card games and gambled away from their homes. That wasn’t true in Britain even in the medieval period. We always had the idea that the lords had duties of care to their peasants as well as vice versa, but that doesn’t seem to have been the case in Russia.

But, if Putin *really* wants to recreate history, Ukraine used to rule Russia….

Stanislaw Zalewski:
I think they also had the general idea that serfs are people; it’s just that the administration failed to enforce it. The eastern parts never had serfdom, because they were never properly colonized, and the western parts only got it shortly before it was abolished entirely. In Russian literature there is this concept of the Russian Soul: the poor peasant who only longs for more misery, almost a masochist. From the POV of the political elites, this is a group of people that basically begs to be abused.

John Dawson:
Excellent post, Misha. Europe was split when Christendom went crusading to rid the Holy Land of Muslims instead of helping stem the horde, to our eternal disgrace, resulting ever since in a lack of empathy across our continent.

James Clark:
“History is prologue” — UNTIL we learn from it!

*
THE NINETEEN NINETIES IN RUSSIA: CHAOS AND LAWLESSNESS

With the collapse of the Soviet Union, its planned economy and supply chains also collapsed. Millions of people became unemployed literally overnight, and even in factories that continued to operate by inertia, wages went unpaid for many months, as nobody had any clear idea who was supposed to pay them now, or from what funds.

The vast, now legally ownerless state enterprises became targets for a frenzy of privatization. In the absence of anything resembling regulation, the grab for the most lucrative establishments was literally cutthroat. Many of the emergent “businessmen” were former military and security service officials with the knowledge to organize armed groups and ready access to weapons.

They had a pool of hundreds of thousands of men to recruit enforcers from: unemployed and disgruntled veterans of the recent Afghan war, and athletes from bankrupt sports clubs with no marketable skills. Common criminals also wanted a share of the pie.

Soon, the streets of Russia ran red with blood as armed gangs fought brutal turf wars over the legacy of the Soviet Union. There are still entire “bandit sections” in cemeteries across Russia, gaudy and over-the-top monuments marking the graves of the fallen of those turf wars. But these represent only the minority of successful and popular gangsters — far more rest in unmarked graves in the back woods.

The gang violence of the early 90’s reached proportions of an undeclared civil war. As many as one million Russians are believed to have died untimely deaths related to gang warfare. Even more perished from the massive spike of alcoholism, drug addiction (now that the Soviet draconian anti-drug laws were no longer in effect) and suicide that followed the economic collapse. Diseases once eradicated by the robust Soviet healthcare program again began to reap their deadly toll. Demographics took a nosedive.

The violence of organized crime reached almost cartoonish proportions comparable to Mexico at the height of cartel wars, to the point that even criminals began to complain about lawlessness. One source described this time as “a sort of competition between criminals of who would commit the most callously-cynical, sadistic and pointless atrocity.” 

Meanwhile, ordinary people often had to rely on foreign aid, like the infamous American chicken, to survive. This no doubt added to the national humiliation of having to subsist on handouts from a recent enemy.


Eventually the survivors of the gang wars smartened up and realized it was more profitable to cooperate than to fight. An unholy pact was struck between legitimate business, the uniformed services, and organized crime that would turn Russia into a mafia state. Businessmen had the legitimacy, the security services the weapons and know-how, and organized crime the money. In the emergent class of oligarchs, the same people often represented all three groups.

For the ordinary Russian, the 90’s represented a hardscrabble and precarious existence, the violence, lawlessness and poverty being compounded by the Chechen Wars and the terrorism related to them. Even as the crime wave abated, oligarchs continued to despoil the country, and corruption grew even more rife. An entire generation grew up idolizing criminals, and talented young people much preferred to move abroad, to the point that it became hard to find skilled tradesmen.

It was in this context that President Putin managed to win the Chechen Wars. Organized crime was offered a deal: it would be allowed to continue business as usual on the condition that it abandoned any and all political pretensions. Any who refused were helped to a high window or a serving of radioactive tea. Putin’s regime was still corrupt to the core, and most Russians knew that, but at least it was a controlled corruption with clear-cut rules instead of the chaotic lawlessness of the 90’s. ~ Janis Šnepsts, Quora

Susanna Viljanen:
Compared to how the Central European and Baltic states debolshevicized themselves, this total failure of Russia feels even more incomprehensible — and tragic.

Baruch Cohen:
Well, compare the Czech Republic and Russia —
Czech: serfdom abolished 1785; socialism 1945–1991
Russia: serfdom ended 1861; socialism 1917–1991

This explains a lot.

*
BRUTALSKY ON RUSSIA’S NOSTALGIA FOR THE USSR

Brutalsky (Misha Firer) near the Sokolniki Metro Station

Russia has geriatric leaders holding the levers of power, with delusions of returning to a former glory that never really was. They are feeding us nausea-inducing nostalgia at every turn, paying for it with our taxes. I’m at a municipality-organized retro street festival celebrating 90 years of the Moscow Metro. This is Sokolniki, the location of the first underground station.

There are 1950s-style activities for adults and children. Revelers are writing letters to Putin, dipping pens in an ink pot. The president doesn’t use the internet, so there is a greater chance that he will actually read the pensioners’ handwritten love notes.

At Pravda (“Truth”) newspaper stand, passersby read news approved and curated by the Communist Party of the Soviet Union. Is it “disinformation” or “misinformation”? Or both?

A portrait of Soviet dictator Joseph Stalin in a trench coat and a military peaked cap is on the front page. Like his predecessor Tsar Nicholas II, Stalin styled himself first and foremost as a chief military commander bent on expanding the empire. Another world war became a self-fulfilling prophecy.

Subway staircase

The steep staircases still have no facilities for wheelchairs, nor for the old pensioners writing letters with ink pens. State-hired cosplayers pretend to be Bolshevik commissars, who used to combine administrative roles with political indoctrination.

Our eyes are bravely fixated on the past that provides us with meaning, greatness, glory.

The oldest metro line in Moscow, the Sokolniki line, is color-coded red. It has stations like Red Gates, Red Rural Settlement, Member of the Communist Youth Organization, and Library Named in Honor of Lenin, while former names included Lenin Hills, Dzerzhinsky (after the founder of the political police), Marx Avenue, and Palace of the Soviets.

The delusion of converting the world to the religion of communism ultimately failed, and the current nostalgia for the glorious past is short-lived, soon to be swept away by the merciless grind of the daily routine.

Shedpeasant:
To add authenticity, shouldn’t random members of the public be taken away to the ‘Gulag’ for ‘re-education,’ or put behind a wall and shot in the head? (With a water pistol or some such, obviously…) Is Russia no longer as paranoid about revolutionaries as it was then?

Walter Libl:
I really wonder if this is hysterical history, or historical hysteria. Not many nations are doing it so well…

Tom Binns:
People who like to attend things in crowds (so as not to be alone with their thoughts) always seem to like mass hysteria inducements. The popularity of gladiatorial circuses, rock concerts and charismatic hyper-churches is testament to this, I think.

*
UKRAINE’S CHANCES OF WINNING ARE IMPROVING

~ Ukraine has a very good chance, and it’s getting better all the time. Let’s compare and contrast each nation’s recent accomplishments.

Ukraine has developed a slew of long-range drones. Recently Ukraine struck a Russian airbase 1,800 km away. That’s not all. Ukraine just completed successful tests of a drone with a 3,000 km range. More Russian oil depots and refineries are now vulnerable. Since these drones are domestically produced, there are no limitations on the targets.

Ukraine developed a sea drone that can launch other drones to attack.
Ukraine developed a sea drone that can shoot down Russian helicopters.
Ukraine is now manufacturing over 1 million drones per year!
Ukraine just deployed a full new army in Eastern Ukraine.
Ukraine has now added Mirage 2000s, and they are already conducting ground strikes against Russian targets alongside Ukraine’s growing fleet of F-16s.

On the other side, we’ve got Russia.

Russia is so desperate for troops, they are deploying wounded troops, literally on crutches, to the battlefield.

Russia has lost so many APCs and IFVs, there aren’t enough to go around. Russian assaults are now carried out on foot, on motorcycles, or electric scooters. Gone are the days of Russia’s armored infantry. They don’t have enough left.

Russia has lost so many logistics transport trucks that their logistics trains are collapsing, and they have begun moving ammo and supplies with… donkeys. ~ Eric Wicklund, Quora

Anthony Kingshott:
Poor donkeys. Visions of the third Reich on horses …

Oriana:
During WW2, many countries still relied on horses, at least for logistics. Only the American forces were fully mechanized.

Vojta Rod:
The fundamental question is what we mean by winning. And I say this as a person who unequivocally supports Ukraine, because it is simply a victim in this conflict and Russia continues the "best traditions" of aggression, violence and the spread of its backwardness of the Soviet Union/Czarist Russia to other countries.

The big question mark is what will happen to Russia after the end of the war, politically, socially and economically. The reality is that the country has not been this close to collapse, or to a coup d'état, in the last three decades.

Elena Gold:
Ukraine “won” this war in the first week, when it was clear that Putin’s plan of “3-day special operation” had failed.

R.W. Carmichael:
The Ukrainians have discovered what George Washington discovered in the Revolutionary War: that he did not have to win on the battlefield to win. All he had to do was stay in the field and not be defeated, and eventually the realities of the situation would cause the British to withdraw.

Oriana:
The failure of the attempt to seize Kyiv was certainly important. But perhaps even more important was Zelensky’s courage — his famous, “I don’t need a ride, I need ammunition.”

And since I can’t help but look at history through the lens of symbols, I saw the sinking of the Moskva (i.e. “Moscow”), the flagship of the Russian Black Sea fleet, as an omen of the sinking of Russia, at least Russia as the current bloated and corrupt dictatorship. The two smoking sailors who were blamed for the disaster, rather than the Ukrainian Neptune missiles, became in my mind a symptom of Russia’s schizophrenic propaganda.

*
A LONG WAR FAVORS NEITHER RUSSIA NOR UKRAINE — BOTH COUNTRIES ARE ON THE LOSING END

A war in which the U.S. supported Ukraine militarily favored the USA, because it allowed America to battle-test and tweak its newest weapons, as well as to get rid of old equipment and munitions that were close to their expiration date, thus saving on their demilitarization.

If you remember, it was during WW2 that the USA became the most powerful country and economy on Earth — because Europe was lying in ruins.

The war in Ukraine also boosted the U.S. weapons sales to Europe and their willingness to increase defense spending.

But Trump got infected by the Kremlin mind virus, so in his delusional parallel reality, sending weapons to Ukraine was bad for America. So Trump decided to abruptly halt Ukraine’s ability to use weapons on the battlefield by cutting off intel sharing, and turned back the U.S. planes delivering more weapons. This absolutely destroyed NATO allies’ trust in the U.S. — and, obviously, the sales of U.S. weapons to Europe.

Instead, Europe was forced to take on the role that America had been playing. Until then, U.S. policy had been not to allow Europe to grow strong; that’s why America was promising security to Europe — to restrict the Europeans and make them reliant on the U.S. It was a deliberate policy.



As for Russia and Ukraine, the long war was destroying the Russian economy, turning it into a wartime economy, while Ukraine was being destroyed physically, with the Russian army leveling Ukrainian cities in the east of the country and killing tens of thousands of Ukrainians.

Russia also suffered massive demographic losses, with millions of educated, skilled Russians leaving the country — in addition to hundreds of thousands of Russian soldiers killed on the battlefield. Russia has lost 10 times more young men and women to emigration than the military casualties suffered by its army.

The war also destroyed the myth of Russia’s military prowess, while highlighting Ukraine’s ability to withstand the mighty Russian war machine — with Europe realizing that if they have Ukraine, they have a shield of trained troops against Russia (a shield which the U.S. under Trump is threatening to withdraw).

Russia lost massively in political capital, while Ukraine’s political significance skyrocketed.

For Russia, with Europe fully backing Ukraine to win (while the U.S. goal was just not to let Ukraine lose), its current possession of territory is likely the most it can hope for — but if the U.S. is out, Europe will want to quickly arm Ukraine to win.

Putin is terrified of Europe fully joining Ukraine’s war efforts — Russia’s borders with NATO are basically unprotected, all troops are in Ukraine — so Putin is trying to keep the U.S. involved, knowing that the American foreign policy has always been to keep the status quo. (Putin used to love quoting Brzezinski.)

Putin thought Trump would help him maul Ukraine into surrender — but instead, Ukraine got full backing from Europe; Putin didn’t count on that.

He hoped his attempts to install Russia-friendly governments in European countries would be more successful — but Musk and Vance screwed it up, trying to help him.

Putin has no chance to take Kyiv and gobble up Ukraine.

Ukraine, on the other hand, can potentially de-occupy its territories — including Crimea. It has already de-occupied 50% of the territory Russia captured in the first weeks of the 2022 invasion — and Ukrainian advances, when they come, are usually massive.

Ukraine has not attempted large counter-offensive operations on its own territory for a long time (not since 2023). Instead, it has chosen to advance into the territory of Russia itself — the Kursk and Belgorod regions.

This could be another vector of Ukraine’s advances — and we know that Russia struggles to defend its own territory.

Many in the Ukrainian military oppose halting the war, saying the Russian occupiers must be destroyed and expelled — while Ukrainian civilians mostly support halting the war at the current line of separation. At the same time, both the military and civilians strongly oppose recognizing occupied territory as Russian.

The long war does not favor Russia — but it favors Putin.

For Putin, the war gives him legitimacy and extra powers to squash dissent. And it still maintains the illusion of Russia’s might in the minds of Russian citizens — which will be hard to maintain once the war is over and “veterans of the SMO” return home.

Putin and his clan are the main beneficiaries of the war. It allows them to retain power.

Fred Daniels:
Putin is stuck between a rock and a hard place. He can’t defeat Ukraine, particularly now that they have Europe’s full and vocal support, and he can’t afford to have disgruntled “veterans of the SMO” returning to Russia. He’s in a no-win situation while the Russian economy is gasping for air. He’s f*cked.

*
WE ARE LIVING IN TWO DIFFERENT ECONOMIES

For many millennials and Gen Zers, financial security remains out of reach — even as their net worths grow on paper.

“We’re living in two separate economies,” said Freddie Smith, an economics content creator who talks about the different financial realities between generations. “The middle class, unfortunately, is dead for millennials and Gen Zers. Or, best-case scenario, the goalpost has just moved and it’s still obtainable, but you have to make over six figures to have that middle-class life.”

Rachel Schneider, CEO of emergency payment fintech company Canary and co-author of “The Financial Diaries,” describes a large portion of Americans as living “at break even.”

“Over the course of the year, they might make enough money to pay for basic living expenses and cover their bills, but if one major thing happens then they can get behind,” Schneider told CNBC.

Meanwhile, costs keep rising. Housing, health care, and insurance have all become more expensive. Additionally, unlike decades ago, Americans now bear more responsibility for funding their own retirement.

Despite older Americans’ criticism of younger generations for lifestyle inflation, many experts argue the problem is structural, not behavioral.

“It’s a lot harder for young people today to save up for markers of the American Dream than it was for previous generations,” said Joanne Hsu, director of the University of Michigan’s Surveys of Consumers and a research associate professor.

“People often feel a lot of shame and distress when their financial lives are not going smoothly,” Schneider said. “And yet, a lot of what they’re experiencing is not the result of anything that they have done or could have done differently.”

https://www.cnbc.com/2025/05/12/millennials-struggle-financially-despite-higher-earnings.html

*
THE MYTH OF HAVING IT ALL

There is a secret out there—a painful, well-kept secret: At midlife, between a third and a half of all successful career women in the United States do not have children. In fact, 33% of such women (business executives, doctors, lawyers, academics, and the like) in the 41-to-55 age bracket are childless—and that figure rises to 42% in corporate America. 

These women have not chosen to remain childless. The vast majority, in fact, yearn for children. Indeed, some have gone to extraordinary lengths to bring a baby into their lives. They subject themselves to complex medical procedures, shell out tens of thousands of dollars, and derail their careers—mostly to no avail, because these efforts come too late. In the words of one senior manager, the typical high-achieving woman childless at midlife has not made a choice but a “creeping nonchoice.”

Why has the age-old business of having babies become so difficult for today’s high-achieving women? In January 2001, in partnership with the market research company Harris Interactive and the National Parenting Association, I conducted a nationwide survey designed to explore the professional and private lives of highly educated, high-earning women. The survey results are featured in my new book, Creating a Life: Professional Women and the Quest for Children.

In this survey, I target the top 10% of women—measured in terms of earning power—and focus on two age groups: an older generation, ages 41 to 55, and their younger peers, ages 28 to 40, as defined for survey purposes. I distinguish between high achievers (those who are earning more than $55,000 in the younger group, $65,000 in the older one) and ultra-achievers (those who are earning more than $100,000). I include a sample of high-potential women—highly qualified women who have left their careers, mainly for family reasons. In addition, I include a small sample of men.

The findings are startling—and troubling. They make it clear that, for many women, the brutal demands of ambitious careers, the asymmetries of male-female relationships, and the difficulties of bearing children late in life conspire to crowd out the possibility of having children. 

In this article, I lay out the issues underlying this state of affairs, identify the heavy costs involved, and suggest some remedies, however preliminary and modest. The facts and figures I relate are bleak. But I think that they can also be liberating, if they spur action. My hope is that this information will generate workplace policies that recognize the huge costs to businesses of losing highly educated women when they start their families. I also hope that it will galvanize young women to make newly urgent demands of their partners, employers, and policy makers and thus create more generous life choices for themselves.

The Continuing Inequity

When it comes to career and fatherhood, high-achieving men don’t have to deal with difficult trade-offs: 79% of the men I surveyed report wanting children—and 75% have them. The research shows that, generally speaking, the more successful the man, the more likely he will find a spouse and become a father. 

The opposite holds true for women, and the disparity is particularly striking among corporate ultra-achievers. In fact, 49% of these women are childless, but a mere 19% of their male colleagues are. These figures underscore the depth and scope of the persisting, painful inequities between the sexes. Women face all the challenges that men do in working long hours and withstanding the up-or-out pressures of high-altitude careers. But they also face challenges all their own.

Slim Pickings in Partners

Let’s start with the fact that professional women find it challenging even to be married—for most, a necessary precondition for childbearing. Only 60% of high-achieving women in the older age group are married, and this figure falls to 57% in corporate America. By contrast, 76% of older men are married, and this figure rises to 83% among ultra-achievers.

Consider Tamara Adler, 43, a former managing director of Deutsche Bank in London. She gave her take on these disturbing realities when I interviewed her for the study. Adler was the bank’s most senior woman, and her highly successful career had left no room for family. She mentioned the obvious reasons—long hours and travel—but she also spoke eloquently about how ambitious careers discriminate against women: “In the rarified upper reaches of high-altitude careers where the air is thin…men have a much easier time finding oxygen. They find oxygen in the form of younger, less driven women who will coddle their egos.” 

She went on to conclude, “The hard fact is that most successful men are not interested in acquiring an ambitious peer as a partner.”

It’s a conclusion backed up by my data: Only 39% of high-achieving men are married to women who are employed full time, and 40% of these spouses earn less than $35,000 a year. Meanwhile, nine out of ten married women in the high-achieving category have husbands who are employed full time or self-employed, and a quarter are married to men who earn more than $100,000 a year. 

Clearly, successful women professionals have slim pickings in the marriage department—particularly as they age. Professional men seeking to marry typically reach into a large pool of younger women, while professional women are limited to a shrinking pool of eligible peers. According to U.S. Census Bureau data, at age 28 there are four college-educated, single men for every three college-educated, single women. A decade later, the situation is radically changed. At age 38, there is one man for every three women.

The Time Crunch

Women pay an even greater price for those long hours because the early years of career building overlap—almost perfectly—the prime years of childbearing. It’s very hard to throttle back during that stage of a career and expect to catch up later. As policy analyst Nancy Rankin points out, the career highway has all kinds of off-ramps but few on-ramps.

In fact, the persistent wage gap between men and women is due mainly to the penalties women incur when they interrupt their careers to have children. In a recent study, economists Susan Harkness and Jane Waldfogel compared that wage gap across seven industrialized countries and found it was particularly wide in the United States. For example, in France, women earn 81% of the male wage, in Sweden 84%, and in Australia 88%, while in the United States, women continue to earn a mere 78% of the male wage. 

These days, only a small portion of this wage gap can be attributed to discrimination (getting paid less for doing the same job or being denied access to jobs, education, or capital based on sex). According to recent studies, an increasingly large part of the wage gap can now be explained by childbearing and child rearing, which interrupt women’s—but not men’s—careers, permanently depressing their earning power. 

If the gap between what men and women earn in this country is wider than elsewhere, it isn’t because this country has done an inferior job combating discrimination. It is because it has failed to develop policies—in the workplace and in society as a whole—that support working mothers.

Ironically, this policy failure is to some extent the fault of the women’s movement in the United States. Going back to the mid-nineteenth century, feminists in this country have channeled much of their energy into the struggle to win formal equality with men. More recently, the National Organization for Women has spent 35 years fighting for a wide array of equal rights, ranging from educational and job opportunities to equal pay and access to credit. The idea is that once all the legislation that discriminates against women is dismantled, the playing field becomes level and women can assume a free and equal place in society by simply cloning the male competitive model.

In Europe, various groups of social feminists have viewed the problem for women quite differently. For them, it is not woman’s lack of legal rights that constitutes her main handicap, or even her lack of reproductive freedom. Rather, it is her dual burden—taking care of a home and family as well as holding down a job—that leads to her second-class status.

The Second Shift

The problem with the notion that American women should be able to successfully clone the male competitive model is that husbands have not picked up a significant share of women’s traditional responsibilities on the home front. Even high-achieving women who are married continue to carry the lion’s share of domestic responsibilities. (See the exhibit “Primary Child Care and Household Responsibilities.”) Only 9% of their husbands assume primary responsibility for meal preparation, 10% for the laundry, and 5% for cleaning the house. When it comes to children, husbands don’t do much better. Only 9% of them take time off from work when a child is sick, 9% take the lead in helping children with homework, and 3% organize activities such as play dates and summer camp.

Yes, these percentages have grown over the years—but not much. At the end of the day, the division of labor at home boils down to one startling fact: 43% of the older, high-achieving women and 37% of the younger, high-achieving women feel that their husbands actually create more household work for them than they contribute. (Thirty-nine percent of ultra-achieving women also feel this way, despite the fact that half of them are married to men who earn less than they do.)

Stubborn Biology

So this is the difficult position in which women find themselves. According to Lisa Benenson, former editor of Working Woman and Working Mother magazines, “The signals are very clear. Young women are told that a serious person needs to commit to her career in her 20s and devote all her energies to her job for at least ten years if she is to be successful.” But the fact is, if you take this advice you might well be on the wrong side of 35 before you have time to draw breath and contemplate having a child—exactly the point in life when infertility can—and overwhelmingly does—become an issue.

Media hype about advances in reproductive science only exacerbates the problem, giving women the illusion that they can delay childbearing until their careers are well established. My survey tells us that 89% of young, high-achieving women believe that they will be able to get pregnant deep into their 40s. But sadly, new reproductive technologies have not solved fertility problems for older women. The research shows that only 3% to 5% of women who attempt in vitro fertilization in their 40s actually succeed in bearing a child. [The chances are higher using a donor egg.]

This kind of information is hard to come by because the infertility industry in this country likes to tout the good news—with dire consequences. Too many career women put their private lives on the back burner, assuming that children will eventually happen for them courtesy of high-tech reproduction—only to discover disappointment and failure.

A Costly Imbalance

I can’t tell you how many times over the course of this research the women I interviewed apologized for “wanting it all.” But it wasn’t as though these women were looking for special treatment. They were quite prepared to shoulder more than their fair share of the work involved in having both career and family. So why on earth shouldn’t they feel entitled to rich, multidimensional lives? At the end of the day, women simply want the choices in love and work that men take for granted.

Instead, they operate in a society where motherhood carries enormous economic penalties. Two recent studies lay out these penalties in very specific terms. In her study, economist Waldfogel finds that mothers earn less than other women do even when you control for marital status, experience, and education. In fact, according to her research, one child produces a “penalty” of 6% of earnings, while two children produce a wage penalty of 13%. In a more recent study, economists Michelle Budig and Paula England find that motherhood results in a penalty of 7% per child.

Given such a huge disincentive, why do women persist in trying to “have it all”? Because, as a large body of research demonstrates, women are happier when they have both career and family. In a series of books and articles that span more than a decade, University of Michigan sociologist Lois Hoffmann has examined the value of children to parents and finds that, across cultures, parents see children as enormously important in providing love and companionship and in warding off loneliness. Children also help parents deal with the questions of human existence: How do I find purpose beyond the self? How do I cope with mortality?

Thus, the fact that so many professional women are forced to sacrifice motherhood is patently unfair, and it also has immense implications for American business, since it causes women intent on motherhood to cut short their careers. This is, of course, the flip side of the same coin. For if a large proportion of women who stay on track in their careers are forced to give up family, an equally large proportion who opt for family are forced to give up their careers. According to my survey, 66% of high-potential women would like to return to full-time jobs.

The cost to corporations and to our economy becomes monumental in the aggregate. Our nation needs professional women to stay in the labor force; we can ill afford to have a quarter of the female talent pool forced out of their jobs when they have children. But in 2000, at the height of the labor crunch, Census Bureau data showed that fully 22% of all women with professional degrees (MBAs, MDs, PhDs, and so on) were not in the labor market at all. What an extraordinary waste of expensively educated talent!

At the same time, we need adults at all income levels to become committed, effective parents. When a parent devotes time, attention, and financial resources to help a child become a well-adjusted person—one who succeeds in school and graduates from college—not only do parents feel deeply fulfilled, but society, of course, is graced with productive workers who boost the GDP, obey the law, and pay their taxes. Thus, we are all stakeholders in parents’ ability to come through for their children.

And when women come to understand the value of parenthood to the wider community, they can quit apologizing for wanting both a career and a family. A woman can hold her head high when she goes into her boss and asks for a schedule that fits her needs.

The Challenge to Business

The statistics I’ve laid out here would be bearable if they were purely historical—the painful but isolated experience of a pioneering generation—but they are not. My survey shows that younger women are facing even more difficult trade-offs. (The sidebar “The Delusions of a Younger Generation” suggests that younger women may be more dangerously complacent than their elders.) Can we reverse these pernicious trends and finally create the possibility of true work-life balance? I believe we can.

The first challenge is to employers, to craft more meaningful work-life policies. Professional women who want both family and career know that conventional benefit packages are insufficient. These women need reduced-hour jobs and careers that can be interrupted, neither of which is readily available yet. And more than anything, they need to be able to partake of such benefits without suffering long-term damage to their careers.

High-achieving women make it abundantly clear that what they want most are work-life policies that confer on them what one woman calls “the gift of time.” Take Joanna, for example. At 39, Joanna had worked for five years as an account executive for a Chicago head-hunter. She believed her company had great work-life policies—until she adopted a child. “My main problem,” Joanna said, “is the number of hours I am expected to put in. I work 60 hours a week 50 weeks of the year, which leaves precious little time for anything else.” Joanna asked for a reduced schedule, but it was a “no go. The firm didn’t want to establish a precedent,” she said. Joanna began looking for another job.

According to my survey, some employers take family needs into account: 12% offer paid parenting leave and 31% job sharing. Many more, however, provide only time flexibility: 69% allow staggered hours, and 48% have work-at-home options. These less ambitious policies seem to be of limited use to time-pressed, high-achieving women.

So, what do professionals want? The high-achieving career women who participated in my survey were asked to consider a list of policy options that would help them achieve balance in their lives over the long haul. They endorsed the following cluster of work-life policies that would make it much easier to get off conventional career ladders and eventually get back on:

A Time Bank of Paid Parenting Leave. This would allow for three months of paid leave, which could be taken as needed, until the child turned 18.

Restructured Retirement Plans. In particular, survey respondents want to see the elimination of penalties for career interruptions.

Career Breaks. Such a leave of absence might span three years—unpaid, of course, but with the assurance of a job when the time came to return to work.

Reduced-Hour Careers. High-level jobs should be created that permit reduced hours and workloads on an ongoing basis but still offer the possibility of promotion.

Alumni Status for Former Employees. Analogous to active retirement, alumni standing would help women who have left or are not active in their careers stay in the loop. They might be tapped for advice and guidance, and the company would continue to pay their dues and certification fees so they could maintain professional standing.

Policies like these are vital—though in themselves not enough to solve the problem. In particular, companies must guard against the perception that by taking advantage of such policies, a woman will tarnish her professional image. Outside the fiction of human resource policies, a widespread belief in business is that a woman who allows herself to be accommodated on the family front is no longer choosing to be a serious contender. Top management must work to banish this belief from the corporate culture.

The good news is that, where top management supports them, work-life policies like the ones I’ve listed do pay off. My survey data show that companies offering a rich array of work-life policies are much more likely to hang on to their professional women than companies that don’t. High-achieving mothers who have been able to stay in their careers tend to work for companies that allow them access to generous benefits: flextime, telecommuting, paid parenting leave, and compressed workweeks. In contrast, high-achieving mothers who have been forced out of their careers tended to work for companies with inadequate work-life benefits.

I heard a wonderful example of the loyalty these kinds of policies engender when I spoke with Amy, 41, a marketing executive for IBM. Her son had just turned three, and Amy was newly back at work. “People don’t believe me when I tell them that my company offers a three-year personal leave of absence,” she said. As she described the policy, it applies not only to mothers; others have used it to care for elderly parents or to return to school. The leave is unpaid but provides continuation of benefits and a job-back guarantee. “IBM gave me this gift,” she said, “and I will always be grateful.” Clearly, in the aggregate, business leaders hold the power to make important and constructive change.

Because companies can’t be expected to craft all the policies that will make a difference in women’s lives, government should also take action. I have urged policy makers at the national level, for example, to extend the Family and Medical Leave Act to workers in small companies and turn it into paid leave. State and federal governments could also accomplish much by providing tax incentives to companies that offer employees flextime and various reduced-hour options. And we should promote legislation that eliminates perverse incentives for companies to subject their employees to long-hour weeks.

The Challenge to Women

My book focuses on what women themselves can do to expand their life choices. In a nutshell, if you’re a young woman who wants both career and family, you should consider doing the following:

Figure out what you want your life to look like at 45. If you want children (and between 86% and 89% of high-achieving women do), you need to become highly intentional—and take action now.

Give urgent priority to finding a partner. My survey data suggest that high-achieving women have an easier time finding partners in their 20s and early 30s.

Have your first child before 35. The occasional miracle notwithstanding, late-in-life childbearing is fraught with risk and failure. Even if you manage to get one child “under the wire,” you may fail to have a second. This, too, can trigger enormous regret.

Choose a career that will give you the gift of time. Certain careers provide more flexibility and are more forgiving of interruptions. Female entrepreneurs, for example, do better than female lawyers in combining career and family—and both do better than corporate women. The key is to avoid professions with rigid career trajectories.

Choose a company that will help you achieve work-life balance. Look for such policies as reduced-hour schedules and job-protected leave.

That’s an easy list to compile, but I have no illusions that it will change the world, because identifying what each woman can do is only half the battle. The other half is convincing women that they are entitled to both a career and children. Somehow the perception persists that a woman isn’t a woman unless her life is riddled with sacrifice.

An End to Self-Sacrifice

In February 2001, I conducted an informal focus group with young professionals at three consulting firms in Cambridge, Massachusetts. During that session, a young woman named Natalie commented, “This is the third consulting firm I’ve worked for, and I’ve yet to see an older, more senior woman whose life I would actually want.”

Natalie’s colleague Rachel was shocked and asked her to explain. She responded, “I know a few hard-driving women who are climbing the ladder at consulting firms, but they are single or divorced and seem pretty isolated. And I know a handful of working mothers who are trying to do the half-time thing or the two-thirds-time thing. They work reduced hours so they can see their kids, but they don’t get the good projects, they don’t get the bonuses, and they also get whispered about behind their backs. You know, comments like, ‘If she’s not prepared to work the client’s hours, she has no business being in the profession.’”

This is the harsh reality behind the myth of having it all. Even in organizations whose policies support women, prevailing attitudes and unrelenting job pressures undermine them. Women’s lives have expanded. But the grudging attitudes of most corporate cultures weigh down and constrain what individual women feel is possible.

https://hbr.org/2002/04/executive-women-and-the-myth-of-having-it-all

*
NON-TRADITIONAL LIFESTYLES

Growing up in rural Michigan, Nina Job got familiar with peers and people in her community following a “traditional trajectory.” That meant “you go to college, you get married, you have 2.5 kids, you know — happy home, white picket fence type-thing,” she tells CNBC Make It.

But moving to New York gave her a new perspective: “I remember being 20 and so surprised to meet single people much older than what I was used to ever seeing back home who were happy.”

The varied domestic situations she encountered “opened my eyes to the possibility of so many different lifestyles, and just non-traditional family setups,” the now-36-year-old says.

Now, Job is among the growing number of Americans who are choosing to live a child-free life. The U.S. fertility rate fell to a record low of around 1.6 births per woman in 2023, according to the National Center for Health Statistics.

Societies need to maintain a fertility rate of roughly 2.1 births per woman in order to sustain the population — in other words, to make sure there are enough people to keep the workforce up and running. Fewer babies can mean fewer workers, fewer taxpayers and, as a result, shrinking economies.

These demographic shifts have raised some alarms for economists, as well as certain politicians and public figures who frame the decline as indicative of moral decay. Not wanting kids is “selfish,” the pope declared in 2022.

The reasons that more Americans say “no” to parenthood are more complex than they often appear, though. Becoming a parent is expensive, but money is not the No. 1 reason given for remaining child-free. In many cases, Americans simply have more options — and realize that they can pursue happiness in other ways.

“I think I was just raised on, ‘This is what success looks like,’ and it’s this traditional family setup,” Job says. “To be able to come out here and see it work 1,000 different ways made me realize I could have that.”

Parenthood in the U.S. is expensive ...

Many Americans want children. Just over half of adults ages 18 to 34 without children say they are interested in having them, a 2023 Pew Research survey found. However, the responses don’t break down evenly by gender: 57% of men say they want kids, but only 45% of women do.

For those who do want children but end up putting off or even forgoing parenthood, a prevailing narrative is that it’s simply too expensive. Perhaps babies have become a “luxury item,” a 2023 Vogue article pondered.

Raising a child in the U.S. is particularly pricey, and families can’t count on much help from the government. “America’s welfare system is pretty generous for the elderly, but relatively stingy for kids. Comparing the United States to almost 40 other countries in the OECD, only Turkey spends less per child as a percentage of their GDP,” the NPR podcast Planet Money recently reported. 

The U.S. is, famously, the only wealthy nation that does not mandate any paid parental leave. Here, “only about a quarter of American workers — regardless of gender — have access” to it, according to Planet Money.

And U.S. parenthood has gotten even more expensive in the last couple of decades. Day care and preschool prices spiked by about 263% between 1991 and 2024, according to a KPMG analysis of Bureau of Labor Statistics data. The total estimated cost to raise a child in 2023 from birth to age 18 is over $330,000, according to a Northwestern Mutual analysis.

Still, just 36% of childless adults under age 50 say they couldn’t afford to raise a child, Pew finds. An even smaller 12% of adults over 50 without kids say affordability was a deciding factor.

Of those under 50 who say they’re unlikely to ever have children, 57% say they just don’t want to, Pew finds. Other major reasons for being unlikely to have kids include wanting to focus on other things (44%) and having concerns about the state of the world (38%).

That’s a stark difference from older adults. Among those over age 50 without children, 31% say they never wanted to, per Pew.

A large contributor to the declining U.S. birth rate is the drop in unintended pregnancies, which fell by 15% between 2010 and 2019, according to the Centers for Disease Control and Prevention. In other words, more people who don’t want to become parents can avoid it, thanks to advances in contraception and reproductive technologies.

And an increasing share of adults under age 50 say they never intend to have children, a separate Pew study found. That group grew from 37% of adults in 2018 to 47% of adults in 2023. 

If costs aren’t the deciding issue, why don’t younger Americans want kids as badly as their own parents seemed to? For many, it’s because the demands and requirements of parenthood itself have changed.

Callie Freitag, 33, lives in Madison, Wisconsin, where she works as a public policy researcher, demographer and an assistant professor at the University of Wisconsin. She and her partner “arrived at the decision not to have kids because neither of us feel interested in being responsible for the care and feeding of small children,” she tells CNBC Make It.

“We’d rather spend our time, energy and resources in other ways. We love being aunt and uncle, but prefer not to be on the clock watching children 24/7,” she says.

Her goals include continuing to build her career, traveling and engaging with her community. Those are possible to prioritize with kids, she acknowledges. But “having children adds layers of complications.”

“Having children is expensive, time-consuming and exhausting, especially in a country that does not adequately prioritize affordable child care or paid family leave,” she adds.

Parenting culture has shifted over the last couple of decades, while millennials and Gen Zers were growing up and forming their opinions on what parenthood should look like. The mindsets of many people in those generations have changed accordingly.

The stakes feel extremely high, and the concern about possibly messing up is real, Paula Fass, a cultural historian and professor at University of California, Berkeley, tells CNBC Make It.

“I do think that, right now, there is fear about child rearing and parenting, a kind of general anxiety that penetrates the younger generation. So they’re conflicted about whether it’s even worth having children, when so much is expected of you as a parent,” Fass says.

Today’s parents spend more time with their children than parents of yesteryear. Mothers in 2012 spent roughly twice as much time — an average of 104 minutes a day — with their kids as moms in 1965 (54 minutes per day), a 2016 study found.

Meanwhile, fathers spent four times as much time on child-care duties in 2012 — an average of 59 minutes per day, up from 16 minutes in 1965.

The expectations to be “always on” when you are a parent can be discouraging or daunting to adults who want to have kids but also want to continue pursuing their careers, hobbies or other passions. That could contribute to a sense that would-be parents have to reprioritize their own lives, and even reshape their own personalities, if they want to have kids.

Parenting doesn’t just require more money and more time than it used to. Safe, trusted advice can be harder to come by.

Earlier generations of Americans had a singular national expert, Dr. Spock, who was widely regarded as a trusted source for parenting advice, Fass says. These days, “there’s anxiety without an answer,” she says.

“You go online and there are 10 or 15 different perspectives on what should be done about a particular [parenting] thing, and not only are there different perspectives, but there’s a lot of slamming of the way people do things.”

Parents may feel pressured to do everything in their power to give their kids a childhood they think will put them on a good path. The list of “musts” can include “gentle parenting,” specialized schooling, elite sports training, state-of-the-art technology and more.

Brianna, a 29-year-old living in Connecticut, knew for most of her life she didn’t want to be a parent and chose last year to get surgically sterilized. Her name has been changed for privacy concerns.

“It’s something that I’ve wanted for as long as I’ve known that it was a thing,” she says.

Nonetheless, it took years of documenting with her doctor that she was set on her choice before she got the green light to proceed with the procedure, Brianna says. When the Supreme Court decision overturning Roe vs. Wade came down and abortion restrictions went into effect throughout the country, Brianna’s doctor became more willing.

In her case, adopting a dog four years ago helped solidify her choice to not become a mom. “The amount of stress I feel in making sure that she’s living her best life and is healthy is not an amount of stress that I would want a child to have to deal with,” Brianna says.

“I’m very neurotic with her,” she says. “I know I would be even more neurotic with a human child.”

Policy can only go so far to encourage reproduction

Declining birth rates aren’t unique to the U.S. Countries around the globe have seen their birth rates begin to or continue to fall, as in the case of South Korea, which has the lowest fertility rate in the world.

Plenty of governments have taken steps to try to encourage their citizens to expand their families. South Korea increased a monthly allowance for families with newborns through their first year. Taiwan has introduced a cash benefit and tax break for parents and has expanded its paid family leave compensation.

Few of these policy solutions have made a significant difference. Even countries like Norway, which are known for having robust family support policies, have started to see birth rates decline.

To a certain degree, that’s expected, Jessica Grose reports for the New York Times in an essay titled “Stop Panicking About the Birthrate.” “There’s a pattern that occurs when both incomes and quality of life go up; societies move ‘from lots of births and lots of deaths to fewer births and longer life expectancies,’” she writes, citing demographer Jennifer Sciubba.

In addition, “the more educated a population is, the more both men and women tend to delay becoming parents and have fewer children overall,” Grose writes. “It’s tough to argue that more education and longer life expectancies are bad things for humanity.”

Whether something needs to be done about this situation at all on a policy level is debatable. “There are several reasons not to worry about falling birthrates,” writes demographer Leslie Root for the Washington Post. After all, “the U.S. population has continued to grow over nearly four decades of sub-replacement fertility rates,” she writes.

For individuals, of course, the decision to have children remains a highly personal one. And the Americans who increasingly prefer to opt out of parenthood may still be pro-family or pro-child in general.

“I love kids,” Job says. “I want to be able to help other people with their kids in moments where they’re really struggling.”

“Seeing how much work went into [having kids] from such a young age, I was like, ‘This is a lot,’ and you have to decide at some point,” she adds. “You can have anything you want. You can’t have everything you want.”

https://www.cnbc.com/2024/08/16/why-more-americans-dont-want-kids.html

*
OVERPRICED DRUGS — THALIDOMIDE-DERIVATIVE FOR MULTIPLE MYELOMA

The pain jolted me awake. It was barely dawn, a misty February morning in 2023. My side felt as if I’d been stabbed.

I had been dealing with pain for weeks — a bothersome ache that felt like a bad runner’s cramp. But now it was so intense I had to brace myself against the wall to stand up.

A few hours after arriving at the emergency room, I heard my name. A doctor asked me to follow him to a private area, where he told me a scan had uncovered something “concerning.”

There were lesions, areas of bone destruction, on top of both of my hip bones and on my sternum. These were hallmarks of multiple myeloma. “Cancer,” he said.

Multiple myeloma is a blood cancer that ravages bone, leaving distinctive holes in its wake. Subsequent scans showed “innumerable lesions” from my neck to my feet as well as two broken ribs and a compression fracture in my spine. There is no cure.

I walked out of the ER in search of fresh air. I sat on a metal bench and did what many patients do. I turned to Google. The first link was a medical review stating that the average lifespan of a newly diagnosed patient was three to five years. My stomach churned.

I soon learned that information was outdated. Most patients today live much longer, in large part due to a drug with a horrific past. It was a doctor at the hospital who first told me I would likely take a thalidomide drug as part of my treatment.

That couldn’t be possible, I told him.

I knew the story of thalidomide, or at least I thought I did. It represented one of the darkest chapters in the history of modern medicine, having caused thousands of severe birth defects after it was given to pregnant women in the 1950s and 1960s. The drug was banned in most of the world, and the scandal gave rise to the modern-day U.S. Food and Drug Administration.

It turns out the drug once relegated to a pharmaceutical graveyard had new life as a cancer fighter.

That drug I take is called Revlimid. It is a derivative of thalidomide, a slightly tweaked version of the parent compound.

Revlimid is now one of the bestselling pharmaceutical products of all time, with total sales of more than $100 billion. It has extended tens of thousands of lives — including my own.

But Revlimid is also, I soon learned, extraordinarily expensive, costing nearly $1,000 for each daily pill. (Although, I later discovered, a capsule costs just 25 cents to make.)

That steep tab has put the drug’s lifesaving potential out of reach for some cancer patients, who have been forced into debt or simply stopped taking the drug. The price also helps fuel our ballooning insurance premiums.

For decades, I’ve reported on outrageous health care costs in the U.S. and the burden they place on patients. I’ve revealed the tactics used by drug companies to drive sales and keep the price of their products high.

Even with my experience, the cost of Revlimid stood out. When I started taking the drug, I’d look at the smooth, cylindrical capsule in my hand and consider the fact I was about to swallow something that costs about the same as a new iPhone. A month’s supply, which arrives in an ordinary, orange-tinged plastic bottle, is the same price as a new Nissan Versa.

I wanted to know how this drug came to cost so much — and why the price keeps going up. The price of Revlimid has been hiked 26 times since it launched. Some of what happened was reported at the time. But no one has pieced together the full account of what the drugmaker Celgene did, how federal regulators failed to rein it in and what the story reveals about unrestrained drug pricing in America.

What I discovered astonished even me.

The rise of Revlimid

Celgene had kept the price of Thalomid low when it was initially intended for AIDS patients, CEO John Jackson told investors in 2004, as the company “didn’t want huge numbers of people demonstrating in front” of its office.

That wasn’t a problem with cancer patients. There was “plenty of room for very substantial increases” in the price of the drug now, Jackson told investors.

Just two days earlier, Celgene had hiked the price of Thalomid to $47 a pill.

“There was a common internal theme at Celgene that cancer patients were willing to pay almost any amount Celgene charged,” wrote David Schmidt, a former national account manager at the company, in a whistleblower lawsuit he filed after his employment was terminated in 2008. The lawsuit was voluntarily dismissed by Schmidt. (Jackson didn’t respond to requests for comment; Schmidt declined to talk to me.)

When Celgene launched Revlimid in December of 2005, it set the initial price at $55,000 a year, or $218 a pill, which was about double what analysts expected.

Seven months later, when the FDA approved the drug for multiple myeloma, the price jumped to $70,560 a year, or $280 a pill.

The cost to manufacture each Revlimid pill, meanwhile, was 25 cents. I found a deposition marked “highly confidential” in which a top Celgene executive testified that the cost started at a quarter and never changed.

Even on Wall Street, which cheered higher pricing, the initial cost of Revlimid prompted concern among analysts who tracked the company that such aggressive maneuvering would cause insurers to push back. In the U.S., that is one of the only real checks on the price of prescription drugs.

The generic threat

After the FDA approved Revlimid in late 2005, it also granted Celgene something else: seven years of market exclusivity because the drug treats a rare disease. In those seven years, Celgene raised the price of the drug nine times, increasing the price per pill by 82% to $397 in 2012.

The company also fended off challengers by claiming its patents protected the drug from competition until 2027.

But by 2010 generic makers were already working on copies of the drug, preparing to challenge those patents and enter the market earlier. A government analysis has found that generics generally lower the price of brand name drugs by an average of 85% after just one year.

Celgene was well aware of the danger generics posed and warned in a 2012 financial filing that their entry into the market could have a “material adverse effect” on its finances. At that point, Revlimid sales made up 70% of the company’s revenue.

‘Ridiculous,’ ‘Ugly’ and ‘Killer’

Revlimid turned out to be a unicorn for Celgene, a drug whose financial success proved impossible to replicate.

In October of 2017, Celgene announced it was abandoning a once-promising effort to develop a drug for Crohn’s disease. Shares of Celgene declined by 11%.

As it had done so many times in the past, Celgene tapped Revlimid to try to mitigate the damage. The day it announced the failure of the Crohn’s drug, it quietly raised the price of Revlimid by 9%.

By the end of the year, Celgene had cumulatively raised the cost 20% to $662 a pill, the largest one-year increase in the drug’s history.

That made Revlimid the most expensive Medicare drug that year, with the government insurance program spending $3.3 billion to provide it to 37,459 patients.

At Celgene, the brash increases triggered rare internal dissent. Betty Swartz, the company’s vice president of U.S. market access, objected to the measures in a pricing meeting with the CEO, who at the time was Mark Alles, and other top executives. She said her concerns were swiftly dismissed, according to a whistleblower lawsuit she filed and later withdrew.

“Why would you be afraid to take an increase on our products?” she said the CEO told her. “What could be the worst thing that happens … a tweet here or there and bad press for a bit.” 

Swartz declined to comment.

The price increases added to the burden faced by many patients. In online groups, patients use words like “ridiculous,” “ugly” and “killer” when talking about the financial pain they have experienced related to the high costs associated with Revlimid. Some have taken out mortgages, raided retirement funds or cut back on everyday expenses like groceries to pay for Revlimid. Others have found overseas suppliers who ship the drug for pennies on the dollar, although doctors caution there’s no way to guarantee quality. Some just decide not to take the drug.

By increasing the price of Revlimid, Celgene executives in several instances boosted their pay. That’s because bonuses were tied to meeting revenue and earnings targets. In some years, executives would not have hit those targets without the Revlimid price increases, a congressional investigation later found.

In total, Celgene paid a handful of top executives about a half-billion dollars in the 12 years after Revlimid was approved.

High prices have consequences beyond individual patients. While there have been tremendous advancements in the treatment of my disease, there is still no cure. The specter of relapse hovers over every blood test, every new ache or pain.

The day I learned I was in remission, in November 2023, was bittersweet. I wrote at the time that I didn’t get to ring a bell — the traditional sign that a cancer patient has finished treatment. Instead, my doctor explained the next step: “maintenance” treatment.

This includes not only continuing Revlimid, but making monthly visits to my cancer center to get a shot of a bone-strengthening drug, have another drug injected into my stomach and blood drawn for lab tests.

“The visit,” I wrote that day, “only reinforced the fact that I’m a patient, and I always will be.”

For most of us, cancer will return at some point after treatment. And for most patients, the drugs eventually stop working.

Revlimid can also be difficult to live with. Some patients quit the drug after developing severe gastrointestinal issues, infections or liver problems. The drug also poses an increased risk of stroke, heart attack and secondary cancers.

Those are the trade-offs for keeping multiple myeloma in check.

Meanwhile, the drumbeat of price increases continues under Bristol Myers Squibb, helping the company bring in $48 billion in revenue from Revlimid since it purchased Celgene. Bristol said its pricing “reflects the continued clinical benefit Revlimid brings to patients, along with other economic factors.” The company said it is “committed to achieving unfettered patient access to our medicines” and provides some financial support for eligible patients. “While BMS develops prices for its medicines, we do not determine what patients will pay out of pocket.”

Last July, the cost of my monthly Revlimid prescription increased by 7% to $19,660.

At the beginning of this year, my insurer switched me to generic Revlimid. I didn’t fight it, thinking it would result in a dramatic decrease in what ProPublica’s health plan pays for the drug.

It turns out it is not much of a savings: The generic costs $17,349 a month.

https://www.cnn.com/2025/05/10/health/revlimid-price-of-remission-propublica  (severely condensed)

*
MYCOREMEDIATION: USING MUSHROOMS TO CLEAN UP POLLUTED LANDSCAPES

Danielle Stevenson’s work focuses on mycoremediation, a technique that uses fungi to rehabilitate polluted land. Mycoremediation harnesses fungi’s natural abilities to collect contaminants scattered in soil and either concentrate them so that they can be removed or break them down into materials that aren’t harmful.

The fungal kingdom is host to an estimated 2.2 million to 3.8 million species, ranging from single-celled yeasts to the largest organism in the world, a sprawling member of the honey mushroom genus that occupies more than 2,000 acres of soil in eastern Oregon. Once considered more similar to plants than animals, fungi can’t create their own food through photosynthesis; instead, most of them obtain nutrients from dead or decaying organic matter. 

To do so, fungal cells secrete enzymes that break insoluble carbohydrates down into simpler sugars that those cells can absorb and store. That ability to decompose and metabolize nutrients makes them invaluable contributors to ecosystems around the world. Fungi transform materials that plants and animals otherwise wouldn’t be able to use into forms that benefit them.

Fungi can convert complex toxins such as petrochemicals and pesticides into simpler molecules that they and other organisms can repurpose. They also absorb and concentrate heavy metals like lead and cadmium, which remain intact in their biomass and can be relocated safely when the fungi are harvested. Workers have an easier time picking mushrooms that accumulate lead and disposing of them at a landfill, for example, than excavating and relocating tons of lead-riddled soil.

“The Last of Us”—and the video game franchise it’s based on—imagine scenarios where fungi take over the world. The reality, as Stevenson is eager to discuss as a consultant and educator, is that they can play a significant role in our environments—and they are powerful potential partners in human-led efforts to restore polluted spaces. “We can transform contaminated sites into parks, green spaces and affordable housing,” she says. “There’s just so much potential for this type of approach to work on a lot of different problems at the same time.”

Persistent human-made chemicals called chlorophenols are used to manufacture products like pharmaceuticals, agricultural chemicals and dyes that eventually become hazardous pollutants. Those byproducts—and heavy metals like lead, mercury, cadmium and chrome—can remain in the environment after serving their industrial purposes, contaminating soil and posing a threat to both environmental and human health.

Scientists’ first inkling that fungi could help solve the problems caused by these pollutants came in 1963. In the Journal of Phytopathology, plant-disease expert Horst Lyr reported that the enzymes white rot fungi use to degrade lignin—a complex polymer found in and between plants’ cell walls—could also be used to break down chlorophenols.

Around the same time, U.S. government researchers Catherine G. Duncan and Flora J. Deverall demonstrated that wood-inhabiting fungi could also be used to achieve this aim.

Researchers following in these pioneers’ footsteps zeroed in on Phanerochaete chrysosporium, a crust fungus that grows on the dying or dead parts of woody plants and forms flat, almost feathery fruiting bodies—as opposed to mushroom shapes.

They treated it as the model white rot fungus for study—much as the fast-breeding, easily alterable fruit fly has formed the basis of many geneticists’ experiments. In the crust fungus’s case, the ease and speed with which it can be grown and handled in a lab and its efficacy in breaking down lignin made it an ideal investigative focus.

By the mid-1980s, biochemists at Michigan State University were publishing evidence that, in experimental conditions, this crust fungus could degrade pollutants like the insecticide DDT, polychlorinated biphenyls and polycyclic aromatic hydrocarbons. With these findings, the field of what we now call mycoremediation research beckoned investigators around the world. In the three decades since the team in Michigan published their research, that influential work has been cited more than 1,400 times.

Maya Elson and her collaborators in central California inoculated filter socks meant to halt toxic wildfire runoff with cultivated strains of a local wild oyster mushroom.

Mycologist Paul Stamets coined the term mycoremediation in Mycelium Running, an influential primer on growing mushrooms “for the purpose of reaping both personal and planetary rewards.” 

Stevenson, meanwhile, is among the grassroots activists putting mycoremediation techniques to work in her community. She serves on a council within a California Environmental Protection Agency department that protects communities from toxic substances, researches and develops methods to decontaminate land, and compels manufacturers to make safer products. She earned her PhD in environmental toxicology from the University of California, Riverside, studying three Los Angeles-area brownfields, areas where redevelopment or reuse is potentially problematic due to the presence of hazardous substances. Using previously untested combinations of fungi and plants, she and her team inoculated and seeded wood chips and soil at the sites with species intended to convert and concentrate toxic contaminants.

The initial results from that pilot study were encouraging. After just three months, she saw on average about a 50 percent reduction in all organic contaminants in the soil—such as diesel, gasoline and solvents. After a year, the contaminants were almost undetectable in treated plots. Stevenson also found that arbuscular mycorrhizal fungi—fungi that live in plants’ root tissue and collect nutrients for those plants in exchange for sugar—enhanced the plants’ abilities to take up heavy metals such as lead. 

That result ran contrary to previous studies that had suggested plants simply weren’t effective at removing toxins like lead from soil; in fact, all they needed was a little teamwork. Those previous studies didn’t include fungi, says Stevenson. “We saw significant reductions compared to not treating at all; depending on the metal, [it averaged between] 15 and 50 percent removal in just a year,” she says. “That’s really something.”

The City of Los Angeles funded Stevenson’s research—for good reason. The conventional method of addressing contaminated soil, known as “dig and dump,” involves hauling it up with a bulldozer, then transporting it to another disposal site. It’s disruptive, costly and time-consuming, and it doesn’t solve the problem of toxic materials; it just relocates them elsewhere. Soil treated with fungi, however, can be removed and incinerated or repurposed much more cheaply.

Stevenson adds the metals we’ve scattered across the country aren’t going to disappear; our responsibility isn’t to wish them out of existence but to minimize their potential for harm.

Her pilot study awaits peer review and publication. Unfortunately, there’s no shortage of opportunities for her colleagues to replicate her work; the United States is home to anywhere from 450,000 to 1 million brownfields.

“A 50 percent reduction in soil contaminants after three months of bioremediation does seem possible,” says Mia Maltz, a soil microbial ecologist at the University of Connecticut. Maltz served as the lead author on a meta-analysis of how soil inoculation with different species of mycorrhizal fungi can affect degraded ecosystems. Launched in 2023, her lab investigates how fungi and other microbes can foster ecological resilience. Maltz notes that other studies have shown reduced levels of toxins and heavy metals akin to Stevenson’s reported results within a similar time frame.

What Maltz finds interesting about Stevenson’s work is the evidence she’s collecting of how different microbes and soil conditions might enhance or hinder the uptake of heavy metals. “Having field trials like [Stevenson’s] helps to build the knowledge base of which plants are able to accumulate metals in which conditions promote their uptake,” she says.

Stevenson’s methods echo locally through an initiative in the Los Angeles Unified School District, where high school students are trained in hands-on bioremediation techniques. The project arose because some of her test sites were right next to a high school. “It’s a way of trying to get resources and opportunities into the communities most burdened by these polluted sites,” she says.

Mycoremediation in the wake of wildfires: biofiltration

Community members install biofiltration socks to mitigate pollution in runoff next to a burn scar on the Hawaiian island of Maui.

In the summer of 2020, a catastrophic fire in central California raged for 38 days, destroyed nearly 1,500 buildings and burned more than 86,000 acres of land. Following that initial devastation, Maya Elson—a mycologist and educator—sprang into action. She and her colleagues at CoRenewal, an environmental nonprofit group that promotes biodiversity and community-led responses to natural disasters, knew that the rainy season would send toxic runoff from devastated structures coursing down eroded hillsides to pollute local waterways and endanger aquatic life. So they installed filter socks—long, mesh tubes stuffed with absorbent straw that create physical barriers to prevent ash, sediment and chemicals from eroding and contaminating more land and water. They would also be the perfect materials for a mycoremediation project.

At about 20 different burn sites in Bonny Doon, a mountain community inland from Santa Cruz, she inoculated the filter socks with local strains of oyster mushrooms, knowing that the straw would be a fantastic food source for those mushrooms. As the fungi feasted on the straw and developed a mycelial network between its fibers, they created both a physical and a biological impediment to the toxic runoff. As in the brownfields, Elson explains, “the fungi can biodegrade petroleum-based, polycyclic aromatic hydrocarbons, and it can hold on to the heavy metals from the debris.”

That preliminary round of experimentation expanded into a larger investigation of how fungi can help heal soil after wildfire, with sites from southern Oregon all the way down to Southern California. (Many experiments have tested these measures in the lab, but research from disaster sites is far rarer.) Elson and her colleagues are now looking to tackle toxins and to investigate how their inoculated filter socks might support other efforts. How could their tweaks to that existing technology, for example, assist in efforts to regenerate damaged ecosystems? They’re also looking at more types of inoculants for the world beyond their regional oyster mushrooms.

The inoculated silt-sock technique has made its way across the Pacific to Hawaii, where the Maui Bioremediation Group has put local strains of fungi to work in service of recovery from the catastrophic Lahaina fire in August 2023. That group is developing a Hawaii-specific approach to bioremediation that includes treating carbon-rich charcoal created from plant matter with a compost “tea” composed of fungi and other microorganisms collected from the same areas where they’ll be installed; they then stuff that charcoal and wood chips inoculated with Pleurotus cystidiosus (the abalone oyster mushroom) in their biofiltration socks.

Like Stevenson, Elson is anxious both to have results to present to the scientific community and to share effective remediation methodologies far and wide. “We get people reaching out to us from all over that want solutions, and we have a robust enough body of scientific research to demonstrate that these methods are worth trying,” she says. “However, it is a newer science, and we are looking to better understand the full extent of the possibilities and to refine our methodologies.”

Maltz can speak to those efforts with authority: She is the primary investigator for the multistate effort to which Elson’s Post-Fire Biofiltration Initiative project contributed. In partnership with the Glassman Lab at the University of California, Riverside, and researchers in its College of Natural and Agricultural Sciences and Department of Microbiology and Plant Pathology, the Fire Ecology in X-Site study explores how inoculation with fungi can restore fire-affected soil. As long as communities use proper protective equipment, Maltz explains, using biofiltration techniques like Elson’s has potential to support restoration after fires. “I have some concerns about using non-native species for this work, especially in burned forests or in the wildland urban interface,” she says. Bringing over local species from unburned areas near burn sites and pairing them with biodegradable onsite materials, she says, likely has the best potential for success.

As for what she believes should come next, more peer-reviewed articles highlighting those and similar approaches could expand scientists’ understanding of what’s possible. “I’d like to see more studies using native fungi, either from burned or neighboring unburned systems,” Maltz says. When fire damages land, a natural succession of plants and fungi—known as fire-following species—begins the process of regeneration. “Using some of those fungal taxa, collecting and cultivating them, and using their mycelium would be important both in lab-based studies and for field ecological experiments,” Maltz says.

Experts emphasize the importance of following best practices and honoring scientific principles while exploring solutions to planetary emergencies. As the authors of a recent research review in Applied Science put it, an “approach must be tested on a laboratory scale before being replicated in real field situations.”

Healing beyond ceasefires

Leila Darwish, a bioremediation specialist and the author of Earth Repair, researches conflict zones and has found that in the aftermath of devastation caused by war, natural strategies have potential to assist in healing the land. A mycofiltration technique analogous to Elson’s, for example, could intercept contaminated runoff from a bombed building. Pollutants from explosives, in turn, could be broken down with fungi much as they are in industrial brownfields.

“Mycoremediation used in combination with phytoremediation, microbial remediation, biochar [toxin-absorbing charcoal created with wood and plants], and other innovative strategies can offer important tools for healing the complex, toxic and deeply damaged landscapes left behind by war,” she says.

A recent research review in the Journal of Fungi evaluating attempts at remediating conflict-affected soil between 1980 and 2023 echoes her observations. Its authors conclude that there is “vast proof of the effectiveness of fungi” for breaking down or accumulating and sequestering five classes of warfare pollutants (metals, metalloids, explosives, radioactive elements and herbicides)—and that mycoremediation doesn’t destroy soil the way decontaminating it by incinerating it does. Considering that benefit is important, they note, “given the growing demand for food and arable land.”

Mycoremediation practices won’t completely rescue us, Stevenson says, but they can connect us. “I’ve shown up to give talks that people have titled ‘Mushrooms Will Save the World!’—and it’s not that way,” she adds. “I actually think it’s better.” Looking for so-called heroes is disempowering, she says, adding that small actions taken together, locally, will always outpace individual agents of change.


photo: T.R. Hummer

https://www.smithsonianmag.com/science-nature/can-scientists-harness-the-magic-of-mushrooms-to-clean-up-polluted-landscapes-180986561/

*
HOW THE CHESTNUT TREES TRACED THE RISE AND FALL OF THE ROMAN EMPIRE

The chestnut trees of Europe tell a hidden story charting the fortunes of ancient Rome and the legacy it left in the continent's forests.

The ancient Romans left an indelible imprint on the world they absorbed into their empire. The straight long-distance roads they built can still be followed beneath the asphalt of some modern highways. They spread aqueducts, sewers, public baths and the Latin language across much of Europe, North Africa and the Middle East. But what's perhaps less well known is the surprising way they transformed Europe's forests.

According to researchers in Switzerland, the Romans had something of a penchant for sweet chestnut trees, spreading them across Europe. But it wasn't so much the delicate, earthy chestnuts they craved – instead, it was the fast-regrowing timber they prized most, as raw material for their empire's expansion. This also led them to export tree-cultivation techniques such as coppicing, which have helped the chestnut flourish across the continent.

"The Romans' imprint on Europe was making it into a connected economical space," says Patrik Krebs, a geographer at the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). "They built a single system of governance all over Europe, they improved the road system, the trade system, the military system, the connection between all the different people all over Europe."

As a result of that connection, "specific skills in arboriculture [the cultivation of trees] were shared by all the different civilizations", he says.

The arboreal legacy of the Romans can still be found today in many parts of Europe – more than 2.5 million hectares (6 million acres) of land are covered by sweet chestnut trees, an area equivalent in size to the island of Sardinia. The trees have become an important part of the landscape in many parts of the continent and remain part of the traditional cuisine of many countries including France and Portugal.

Krebs works at a branch of the WSL in Switzerland's Ticino canton on the southern slope of the Alps, an area that is home to giant chestnut trees, many with girths greater than seven meters (23ft). By the Middle Ages, sweet chestnuts were a staple food in the area. But it was the Romans who brought the trees there – before their arrival in Ticino, sweet chestnuts did not exist there, having been locally wiped out in the last ice age, which ended more than 10,000 years ago.

Using a wide range of evidence, including paleoecological pollen records and ancient Roman texts, Krebs' research team analyzed the distribution of both sweet chestnut (Castanea sativa) and walnut (Juglans regia) trees in Europe before, during and after the Roman empire. Sweet chestnut and walnut trees are considered useful indicators of the human impact on a landscape, as they generally benefit from human management – such as pruning and suppressing competing trees. Their fruits and timber are also highly desirable. 

In countries such as Switzerland, France and parts of Germany, sweet chestnut pollen was near-absent from the wider pollen record – such as, for example, fossil pollen found in sediment and soil samples – before the Romans arrived, according to the study and previous research. But as the Roman Empire expanded, the presence of sweet chestnut pollen grew. Specifically, the percentage of sweet chestnut pollen relative to other pollen across Europe "shows a pattern of a sudden increase around year zero [0AD], when the power of the Roman empire was at its maximum" in Europe, Krebs says.


The ancient chestnut trees in Ticino, Switzerland, have grown to be true giants over the centuries

After the Barbarian sacks of Rome around 400-500 AD, which signaled the beginning of the end of the Roman Empire amid widespread upheaval, the chestnut pollen percentage then drops temporarily. This decrease suggests that many of the Roman-era orchards were abandoned, Krebs says, probably not only due to the fall of the Roman Empire, but also because of a wider population decline in many areas at the time.

"Juglans [walnut] has a different pattern," says Krebs. The spread of pollen from these trees is less clearly associated with the rise and fall of the Roman empire, he and his colleagues found. Its distribution around Europe had already increased before the arrival of the Romans, perhaps pointing to the ancient Greeks and other pre-Roman communities as playing a role.

But while the Romans can perhaps take credit for spreading the sweet chestnut around mainland Europe, some separate research suggests they were not behind the arrival of these trees in Britain. Although the Romans have previously been credited with bringing sweet chestnuts to the British Isles – where they are still a key part of modern woodlands – research by scientists at the University of Gloucestershire in the UK found the trees were probably introduced to the island later.

Sweet chestnut trees can be striking features of the landscape. They can grow up to 35m (115ft) tall and can live for up to 1,000 years in some locations. Most of those alive today will not have been planted by the Romans, but many will be descendants of – or even cuttings taken from – those that ancient Roman legionaries and foresters brought with them to the far-flung corners of the empire. The oldest known sweet chestnut tree in the world is found in Sicily, Italy, and is thought to be up to 4,000 years old.

Wood for fortresses

Why did the Romans so favor the sweet chestnut tree? According to Krebs, they did not tend to value the fruit much – in Roman culture, it was portrayed as a rustic food of poor, rural people, such as shepherds. But the Roman elites did appreciate sweet chestnut's ability to quickly sprout new poles when cut back, a practice known as coppicing. This speedy regrowth came in handy given the Romans' constant need for raw materials for their military expansion.

"Ancient texts show that the Romans were very interested in Castanea, especially for its resprouting capacity," he says. "When you cut it, it resprouts very fast and produces a lot of poles that are naturally very high in tannins, which makes the wood resistant and long-lasting. You can cut this wood and use it for building fortresses, for any kind of construction, and it quickly sprouts again."

Coppicing can also have a rejuvenating effect on the chestnut tree, even after decades of neglect.


As the Roman Empire rapidly expanded, it needed fast-growing timber to build fortifications

In Ticino, chestnut trees became more and more dominant under the Romans, according to the pollen record. They remained popular even after the Roman Empire fell, Krebs says.

One explanation for this is that locals had learned to plant and care for the tree from the Romans, and then came to appreciate chestnuts as a nourishing, easy-to-grow food – by the Middle Ages, they had become a staple food in many parts of Europe. The chestnuts, for example, could be dried and ground into flour. Mountain communities would also have welcomed the fact that the trees thrived even on rocky slopes, where many other fruit trees and crops struggled, Krebs adds.

"The Romans' achievement was to bring these skills from far away, to enable communication between people and spread knowledge," he says. "But the real work of planting the chestnut tree orchards was probably done by local populations."

Chestnut Pickers by Georges Lacombe

When they are cultivated in an orchard for their fruit, sweet chestnut trees benefit from management such as pruning dead or diseased wood, as well as the lack of competition, all of which prolong their life, Krebs says: "In an orchard, there's just the chestnut tree and the meadow below, it's like a luxury residence for the tree. Whereas when the orchard is abandoned, competitor trees arrive and take over."

Research on abandoned chestnut orchards has shown that when left alone, chestnut trees are crowded out by other species. In wild forests, "Castanea reaches a maximum age of about 200 years, then it dies," Krebs says. "But here in Ticino, where chestnuts have been cultivated, they can reach up to almost 1,000 years, because of their symbiosis with humans.”

Europe's landscape was altered by the Romans' forestry approach, reintroducing the sweet Italian chestnut to areas where it hadn't existed since the last ice age

By the end of the Roman era, the sweet chestnut had become the dominant tree species in Ticino, displacing a previous forest-scape of alders and other trees, the pollen record shows: "This was done by humans. It was a complete reorganization of the vegetal landscape," Krebs explains.

In fact, pollen evidence from a site in Ticino at some 800m (2,625ft) above sea level shows that during the Roman period there was a huge increase in Castanea pollen, as well as cereal and walnut-tree pollen, suggesting an orchard was kept there, Krebs says. 

By the Middle Ages, long after the Romans were gone, many historical texts document the dominance of sweet chestnut production and the importance of foods such as chestnut flour in Ticino, says Krebs. "In our valleys, chestnuts were the most important pillar of subsistence during the Middle Ages."

People in Ticino continued to look after the trees, planting them, coppicing them, pruning them and keeping out the competition, over centuries, Krebs says: "That's the nature of this symbiosis: humans get the fruit [and wood] of the chestnut tree – and the chestnut gets longevity", as well as the opportunity to hugely extend its natural area of distribution, he explains. 

A similar transfer of chestnut-related knowledge to locals may have happened elsewhere in the Roman Empire, he suggests – and possibly left linguistic traces. As a separate study shows, across Europe, the word for "chestnut" is similar to the Latin "castanea" in many languages. 

Today, Europe's sweet chestnut trees are facing threats including disease, climate change and the abandonment of traditional orchards as part of the decline in rural life. But chestnut trails and chestnut festivals in Ticino and other parts of the southern Alps still celebrate the history of sweet chestnuts as a past staple food – reminding us of the long legacy of both Roman and local ideas and skills in tree-care.

https://www.bbc.com/future/article/20250513-what-chestnuts-reveal-about-the-roman-empire

*
STRANDED IN PART-TIME JOBS

Several years ago, to research the novel I was writing, I spent six months working in the warehouse of a big-box store. As a supporter of the Fight for $15, I expected my co-workers to be frustrated that starting pay at the store was just $12.25 an hour. In fact, I found them to be less concerned about the wage than about the irregular hours. The store, like much of the American retail sector, used just-in-time scheduling to track customer flow on an hourly basis and anticipate staffing needs at any given moment. My co-workers and I had no way to know how many hours of work we’d get—and thus how much money we’d earn—from week to week. We’d be scheduled for four hours one week and 30 the next.

For my co-workers, these fluctuating paychecks made it nearly impossible to get an auto loan or to be approved for a lease on an apartment, let alone to save money. Many didn’t have cars. They walked to work—in the middle of the night (our shift started at 4 a.m.), in the snow, in the rain. 

Even more maddening, in many states, social-safety-net programs such as Medicaid and food stamps require beneficiaries to document their work hours, meaning that if workers are, through no fault of their own, scheduled for too few hours in a given period, they could lose the very benefits that their lack of hours makes them need even more. Human-resources departments usually tell workers that the way to get more hours is to increase their availability—that is, if you want more hours at one job, you’re advised to promise to be available whenever you may be wanted. This makes it very hard to hold down a second job.

Work as Americans understand it began in 1940, with a piece of New Deal legislation. Before then, Americans commonly worked 60 or 80 hours a week for little more than subsistence-level pay. Even the much-mythologized jobs in industry and manufacturing—the “good jobs” that Americans regret losing to globalization—consisted of dangerous, poverty-level work.

Then came the Fair Labor Standards Act. The FLSA established the federal minimum wage and limited child labor. And it stipulated that employers pay most nonmanagerial workers overtime, or time and a half, for all hours worked beyond 40 in a week. Combined with the rise of unionization, the FLSA changed work in fundamental ways.

Americans began to believe something novel in human history: that if a person was willing to work, he or she should be able to make a decent living—maybe not a lavish one, but more than the kind of bare subsistence that had always been the lot of most human beings. When popular songs and movies use “9 to 5” as a shorthand for work, they are referring not to some natural phenomenon but to a way of life formalized by the FLSA.

Over the past 20 years, however, employers have figured out a clever way to circumvent the FLSA, taking advantage of the fact that the law sets a ceiling on work, but not a floor.

In 2005, The New York Times obtained a revealing memo written by a senior Walmart human-resources executive. The memo, drafted with advice from McKinsey consultants, recommended various ways of cutting costs. One of those suggestions would become particularly consequential: hiring more part-time workers. 

A year later, the Times revealed that Walmart planned to double the percentage of its workers who were part-time, from 20 percent of its workforce to 40 percent. Walmart is hardly unique in that regard. At Target, for example, where pay starts at $15 an hour, the median employee makes not $31,200, the annualized full-time equivalent, but $27,090, meaning that at least half of its employees are part-time. Kohl’s and TJX (the owner of such stores as T.J. Maxx, Marshalls, and HomeGoods) also rely on predominantly part-time workforces.

The most obvious reason employers favor part-time labor is to avoid paying benefits. 

Starbucks, for example, talks up its generous benefits. But the median Starbucks worker made just $14,674 last year. For baristas, who earn a $15 minimum wage, this amounts to about 19 hours a week, just shy of the 20 hours a week that the company requires to be eligible for those benefits.
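The pay figures in the last two paragraphs can be sanity-checked with a few lines of arithmetic. A minimal sketch (the dollar amounts are from the article; the function name is my own):

```python
# Back-of-the-envelope check of the pay figures quoted above.

def implied_weekly_hours(median_annual_pay, hourly_wage, weeks_per_year=52):
    """Weekly hours implied by an annual paycheck at a given hourly wage."""
    return median_annual_pay / hourly_wage / weeks_per_year

# Target: $15/hr worked full-time (40 h/week, 52 weeks) annualizes to
# $31,200, yet the median employee makes $27,090 -- about 34.7 h/week.
full_time_annual = 15 * 40 * 52
target_hours = implied_weekly_hours(27_090, 15)

# Starbucks: a $14,674 median at a $15 wage implies about 18.8 h/week,
# just under the 20-hour threshold for benefits eligibility.
starbucks_hours = implied_weekly_hours(14_674, 15)

print(f"{full_time_annual=} {target_hours=:.1f} {starbucks_hours=:.1f}")
```

The same back-solving shows why a median annual figure below the full-time annualized wage implies that at least half the workforce is on part-time hours.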

But an even bigger and less well-understood driver of the shift to part-time work is the rise of just-in-time scheduling. With a part-time workforce, made up of workers not guaranteed a set number of hours, employers can schedule the bare-minimum number of worker hours they expect to need on a given day. If business turns out to be brisker than expected, as it often does, they have a reserve of part-time workers to call on at the last minute.

The ability to schedule low and add more worker hours as needed saves employers money by freeing them from the necessity of offering, and paying for, 40 hours of work (or whatever number of hours they define as full-time), week in and week out, even when business is slow. For the system to operate effectively, workers must be not merely part-time but also underscheduled—so desperate for more hours that they will reliably come in at the last minute.

Employers, and many economists, argue that this approach is efficient because it allows businesses to use only the number of worker hours they actually need. That is true, in the same sense that child labor and 80-hour workweeks were efficient during the original Gilded Age. The fact that what is most efficient for an employer might prevent workers from living stable, prosperous, healthy lives is why labor laws exist.

Employers and industry lobbyists also claim that they are merely responding to employee preference. The National Retail Federation, the country’s largest retail trade organization, argues that “flexibility and part-time options are essential” for many employees, such as “students pursuing a degree, working parents and teenagers.”

In fact, the available evidence suggests that most part-time workers would prefer to have stable full-time work. A survey of more than 6,000 Walmart employees conducted by the Center for Popular Democracy, a progressive advocacy group, found that 69 percent of part-time workers would like to be full-time. If Walmart wished to contest this claim, it could conduct its own survey. But the nation’s largest employers have not only chosen not to disclose precisely what percentage of their workforces are part-time; they also haven’t released any data to support their claim that many workers prefer these sorts of schedules.

Meanwhile, issues concerning hours are often among the first demands made by employees who form unions today. The platform of Target Workers Unite, for example, lists as its first demand not increased hourly pay or better benefits, but “more hours.” The second demand is “stable schedules.” The platform goes on to say, “Target workers can’t live decent lives when we have no fixed schedules or no guaranteed hours while we are encouraged to have open availability and be on call for any open last-minute shifts.”

The FLSA worked as well as it did because it dealt with the two components of income—wages and hours—whereas efforts to raise the minimum wage alone, however well-intended, deal with only half of the equation. But for all of its virtues, the FLSA never contemplated the problem of underwork.

Congress has the power to correct that oversight. It could require large employers to set schedules in advance, as some municipalities have done in recent years. It could remove some of the incentives for employing people part-time by either rewarding businesses for hiring full-time workers or penalizing them—such as by charging them the equivalent of benefits—for hiring part-time workers.

Another modification of the law would be to let hourly workers at big firms choose whether they want to work part- or full-time, the same way they choose to sign up for health insurance or change insurers during an annual window. The advantage of this approach is that it doesn’t involve the federal government putting its finger on the scale in favor of a particular type of employment. Instead, it allows employees to determine what type of schedule works best for them—something that should appeal to employers, who have spent years insisting that they’ve moved to part-time schedules because that’s what their employees want.

But none of these or any other reform ideas will gain any traction unless the issue of part-time work becomes a political issue, much as the minimum wage has. So far, the dismantling of “9 to 5”—of the kind of steady, predictable work it assumed—has gone largely unnoticed, except among low-wage workers themselves, who unfortunately tend to lack access to the levers of power.

One reason for this is the very success of the FLSA. It so effectively instilled the 40-hour workweek as the norm that even many economists habitually assume that workers choose whether they work full-time or part-time. The Bureau of Labor Statistics calculates annual earnings in various sectors by multiplying the average reported hourly wage by 2,080, the number of hours you’d work if you worked 40 hours a week, 52 weeks a year. As a result, many earning statistics that are widely relied on for policy prescriptions have become more aspirational than reality-based.

A second reason is that professional-class workers tend to imagine part-time work as a mutually beneficial arrangement agreed on by employee and employer—say, for a mother who’d like to spend more time at home with her young children. For professional workers, this is what part-time work generally has been.

But the appeal of just-in-time scheduling for employers is not inherently limited to low-wage professions. Even those of us who don’t work in a retail or food-service environment could find that our jobs nevertheless can be made more “flexible”—organized around projects or workload rather than customer flow. 

This has happened already to some white-collar jobs. Consider higher education, where well-paid, full-time positions for professors have been replaced with tenuous, part-time adjunct gigs. Without a concerted policy response, more industries could be affected. You might think your boss needs you for 40 hours every week of the year. But are you sure?

https://www.theatlantic.com/economy/archive/2025/05/part-time-jobs-underwork/682768/

*
WHY JOB OPENINGS GO UNFILLED


President Trump has been upending the global economy in the name of bringing manufacturing back. President Joe Biden signed into law massive investments aimed at doing something similar. The American manufacturing sector is reviving after decades of decay.

But there's something a bit weird undercutting this movement to reshore factory jobs: American manufacturers say they are struggling to fill the jobs they already have.

According to data from the Bureau of Labor Statistics, there are nearly half a million open manufacturing jobs right now.

Last year, the Manufacturing Institute, a non-profit aimed at developing America's manufacturing workforce, and Deloitte, a consultancy firm, surveyed more than 200 manufacturing companies. More than 65 percent of the firms said recruiting and retaining workers was their number one business challenge.

Part of the story has been a tight labor market. There have been similar worker recruitment and retention issues in other sectors, like construction and transportation. But the shortfall of manufacturing workers is about more than just that — and, with both parties pushing to reshore manufacturing, analysts expect the industry's workforce issues to get even more challenging.

The Biden administration invested over $2 trillion in initiatives aimed at reinvigorating American industry, through legislation like the Infrastructure Investment and Jobs Act (IIJA), the CHIPS and Science Act, and the Inflation Reduction Act (IRA). There's now an explosion of spending to construct new factories in America, and analysts expect the demand for manufacturing workers to pop.

The average manufacturing worker is also relatively old, and the industry expects a tidal wave of retirements in the coming decade.

The Manufacturing Institute and Deloitte projected that the industry will need 3.8 million additional workers by 2033, and that as many as "1.9 million of these jobs could go unfilled if workforce challenges are not addressed."

These estimates, mind you, were calculated before President Trump's recent tariffs, which, at least theoretically, are supposed to compel even more manufacturers to build factories in America. If things go to plan, we may need even more Americans to start working in manufacturing in coming years.

Today in the Planet Money newsletter: If manufacturing jobs are so great, why aren't more Americans doing the ones we already have? And what can the industry and the government potentially do to address this issue? 

Industry whiplash

Gordon Hanson is an economist at Harvard Kennedy School who has published influential research on American manufacturing, including on what happened to it in the face of competition with China.

Hanson sees what you might call whiplash in an industry undergoing a reversal of fortune. "A period in which there's been a substantial increase in manufacturing — it's a very recent phenomenon," Hanson says. "I think we can fill those jobs, but we're just not gonna fill 'em overnight."

One big reason manufacturers can't fill these jobs overnight is because they require workers to have particular skills. And it's not just skills needed to work on assembly lines. Only around two in five manufacturing jobs are directly involved in making stuff. Manufacturers also employ people to do research and development, engineering, design, finance, sales, marketing, and so on.

Part of the political appeal of bringing manufacturing back is that, historically, manufacturing jobs have provided good pay and career ladders for people without a college education. However, many manufacturing jobs these days actually require college degrees.

Carolyn Lee, the president and executive director of the Manufacturing Institute, says that roughly half of the open positions in manufacturing require at least a bachelor's degree.

That said, the other half of open manufacturing jobs don't require a bachelor's degree. And manufacturers say they are also struggling to fill those.

Lee says some of the most in-demand positions in manufacturing right now are maintenance technicians, machine operators, material handlers, and forklift operators.

Is manufacturing pay high enough?

A classic solution to so-called worker shortages: offer higher pay. That would probably convince workers to invest in acquiring coveted skills and enter the manufacturing workforce.

That is one reason Oren Cass, the chief economist and founder of American Compass, a conservative think tank, says he's skeptical whenever employers complain about worker shortages.

"I have less than zero sympathy for employers who go around complaining about labor shortages and skills gaps," Cass says. He joked that he has a side hustle, running an "incredibly innovative" biotech firm. "It employs leading scientists at $10 an hour to develop extraordinary cures. I have 500,000 job openings as well, and I have not been able to fill one of them."

Cass has a point. We've covered a similar phenomenon in trucking: trucking companies have complained of a "worker shortage" for decades, yet relatively low wages and challenging work conditions are clearly a huge factor behind that industry's workforce woes. Addressing those issues would probably go a long way to dealing with their "shortage."

Manufacturers have, however, hiked pay in recent years. That has helped slash the number of open manufacturing positions from its peak of over one million in April 2022. (Like many other industries during the COVID-19 pandemic, manufacturing saw a wave of retirements, deaths, and quits.)

Nearly half a million U.S. manufacturing jobs are vacant

Offering higher paychecks would likely compel more Americans to flock to the remaining open positions.

But the higher pay that Americans demand to work in manufacturing is one of the big reasons why many manufacturers left America in the first place. And so the wage issue raises the question of whether many manufacturers, particularly labor-intensive ones, can be profitable and globally competitive in the United States.

Cass believes that tariffs can help level the playing field with foreign competitors. And he stresses that one of the keys to reshoring manufacturing — while maintaining good-paying manufacturing jobs — is higher productivity.

"If somebody in the United States is 20 times as productive as somebody in China and you have to pay them 20 times as much, you are equally competitive," Cass says.
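Cass's 20-times example is a statement about unit labor cost: the wage paid per unit of output. A toy illustration (the wage and productivity numbers are invented for the example, not real data):

```python
# Unit labor cost: what an employer pays in wages for each unit produced.
# If pay scales in proportion to productivity, the cost per unit is unchanged.

def unit_labor_cost(hourly_wage, units_per_hour):
    """Wage cost per unit of output."""
    return hourly_wage / units_per_hour

# A worker paid 20x as much who also produces 20x as many units
# costs exactly the same per unit as the lower-paid comparison worker.
us_cost = unit_labor_cost(hourly_wage=40.0, units_per_hour=20.0)
other_cost = unit_labor_cost(hourly_wage=2.0, units_per_hour=1.0)

print(us_cost == other_cost)  # True: $2.00 per unit in both cases
```

This is also why the productivity slowdown mentioned below matters: if output per hour stalls while wages rise, the cost per unit climbs and the competitiveness argument weakens.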

So, if American manufacturers can prove to be much more productive than foreign competitors — meaning American workers can make more in less time — they can pay the higher wages needed to attract and retain American workers while still remaining globally competitive.

That, however, is a big if. American manufacturing has been seeing an alarming slowdown in productivity growth in recent years.

The manufacturing PR and skills problems

Lee agrees that manufacturers will be able to entice more people into the industry by offering higher pay. However, she says, the industry's issues with recruitment and retention go beyond dollars and cents.

For one, she suggests, manufacturing has a PR problem. Many Americans have outdated notions of what manufacturing jobs actually entail. She suggests that many imagine they're the dirty, monotonous, and dangerous factory jobs depicted in Charles Dickens novels. But, Lee says, they're actually "clean and bright and full of technology." 

She sees changing American perceptions about manufacturing as one crucial part of convincing more young people to work in the sector. More generally, if the industry is seen as vibrant and growing, as opposed to dying, that will probably help with recruitment.

But even if the word gets out and more young Americans want to do these jobs, they'll still need the skills to be able to do them.

"These jobs are in factories that are completely different from the factories of 25 years ago," Hanson says. "They require people to know how to use pretty sophisticated machinery."

"The hardest skills to find are the ones that maintain and fix equipment," Lee says. "Every company we speak with is trying to hire technicians. Every single one. The challenge is that there is no one walking around on the street with these skills, and it takes 1-2 years to teach those skills and another 1-2 years to contextualize those skills to the specific plant environment.”

Harry Moser, the founder and president of the Reshoring Initiative, argues that America should invest much more heavily in apprenticeships to build the manufacturing workforce of the future. Apprenticeships provide young people with pathways to learn vocational skills without having to obtain an expensive, four-year college degree. Moser says American leaders have overemphasized college to the detriment of vocational training, and that our system of apprenticeships pales in comparison to the ones in countries like Germany and Switzerland.

According to Third Way, a centrist think tank, in 2022 only 0.3% of the American working-age population was in apprenticeship programs. For comparison, in Switzerland, that number was 3.6%, or 12 times higher.

Apprenticeships and other means to help Americans obtain vocational skills may be especially important for the high-tech manufacturing jobs we have today. Lee says today's manufacturing jobs often require some combination of "knowledge of electrical systems, mechanical systems, logic controllers, hydraulic power, and robotics."

The Manufacturing Institute has been working to develop better apprenticeship programs to help Americans build the skills the manufacturing sector needs. "The very best models of workforce development that we see and that we engage in at the Manufacturing Institute are locally and regionally led public-private partnerships, where manufacturers come to the table — and with the support of the community college system and the local business community — they build the talent pipelines that they need," Lee says.

Lee told us about one particular apprenticeship program that she's very excited about. It's called The Federation for Advanced Manufacturing Education, or FAME.

FAME was founded by Toyota back in 2010. The car company had trouble finding machine technicians at their manufacturing plant in Georgetown, Kentucky. So they did something proactive: they partnered with other local companies and a community college and developed an apprenticeship program to create the workforce they needed.

Lee says FAME is a 21-month program in which students juggle work and school. Each week, they spend three days at the factory and two days at a community college, learning how to operate and repair high-tech equipment, and other crucial skills.

"And at the end of the 21 months, students come out with, in most cases, no college debt," Lee says. These students also see dramatically higher earning potential.

One study, by the Brookings Institution and Opportunity America, found much higher graduation rates for students who entered the FAME program. "Earnings and employment gaps were if anything more pronounced," the authors write. "Five years after completion, FAME graduates were earning nearly $98,000, compared to roughly $52,783 for non-FAME participants — a difference of more than $45,000 a year." It pays to have scarce, in-demand skills.

Lee says that FAME has become a model of workforce development for other manufacturing companies. In 2019, she says, Toyota "recognized we're a car company, not an apprenticeship operating company," so they handed the program over to the Manufacturing Institute to expand it around the country.

All the sources we spoke to agreed that the government and businesses need to do more to invest in programs like these to provide opportunities to Americans and develop the workforce that the manufacturing industry needs.

"As we go from this period of declining jobs to expanding jobs, we shouldn't expect that we're just gonna automatically reincorporate all that labor overnight," Hanson says. "It takes a workforce system to make it happen."

Late last month, President Trump issued an executive order aimed at "preparing Americans for the high-paying skilled trade jobs of the future." President Trump ordered various administration officials to create "a plan to reach and surpass 1 million new active apprentices."

We'll be watching to see what the details of this plan are, and whether that plan becomes action.

But what is clear is that bringing American manufacturing roaring back will likely require more than just slapping up tariffs or investing lots of money to build new factories. Leaders may need to re-gear our education system to help more Americans acquire the skills that manufacturers need for a productive and capable workforce.

Which brings us to an even bigger question: What warrants all these interventions to boost one particular sector of the economy? Is manufacturing actually special? And, if so, what makes it so special?


https://www.npr.org/sections/planet-money/2025/05/13/g-s1-66112/why-arent-americans-filling-the-manufacturing-jobs-we-already-have

*
HOW CRUDE OIL WAS FORMED (NO, NOT FROM DINOSAURS)

Oil, primarily crude oil, is formed from the remains of ancient marine organisms and plants that accumulated on the seafloor millions of years ago. Over time, these remains were covered by layers of sediment and subjected to intense heat and pressure, transforming them into kerogen, a waxy substance. Further heating and pressure transformed the kerogen into hydrocarbons, which then migrated through porous rock formations to accumulate in underground reservoirs.

Here's a more detailed look at the process:

1. Accumulation of organic matter:

Dead marine organisms, such as phytoplankton and algae, sink to the seafloor and accumulate in a mixture of mud and sand.

2. Formation of kerogen:
Over millions of years, layers of sediment cover the organic matter, increasing pressure and temperature. This heat and pressure transform the organic matter into kerogen.

3. Hydrocarbon generation and migration:
As the kerogen undergoes further heating and pressure, it transforms into hydrocarbons like oil and natural gas. These hydrocarbons then migrate through porous and permeable rock formations until they reach traps, where they accumulate.

4. Reservoir formation:
The trapped oil and gas form underground reservoirs, which are then extracted by drilling wells.

Abiogenic Theories:
While most scientists agree on the organic origin of oil, some propose abiogenic theories, suggesting that oil can form deep within the earth's mantle from inorganic processes. These theories are less widely accepted than the organic origin theory.

The abiogenic petroleum origin hypothesis proposes that petroleum (oil and natural gas) was formed inorganically, not from the remains of ancient life, but from deep carbon deposits within the Earth. It suggests that hydrocarbons can be generated in the mantle and migrate to the crust, forming reservoirs.
(~ AI-generated answer)

Most geologists today believe that oil was formed millions of years ago from a combination of hydrocarbons synthesized by living organisms and hydrocarbons formed by thermal alteration of organic matter in sedimentary rocks.

Crude oil was formed in ancient shallow seas and lakes where there was limited oxygen below a certain depth. Small, ancient aquatic life including algae, plankton, and bacteria flourished in shallow seas.

The abiogenic theory has been used to explain the occurrence of oil and gas in geological settings that don't fit the traditional biogenic models.

*
WHY THE ATTACKS ON THE BIBLE

I don’t know anyone who “attacks” books of ancient mythology like Homer’s Odyssey and Iliad, or Virgil’s Aeneid, or the ancient Norse and Celtic myths.

Why are books like the bible and quran so different?

Because billions of human beings continue to claim that such primitive books are “the word of god” despite their myriad obvious failures, including hideously evil commandments and grotesque “morals.”

I’m not an expert on the quran, so I will leave it to others, but these are legitimate reasons to oppose the bible being taught to innocent, trusting children who deserve honesty, not evil nonsense being forced down their throats as “truth.”

The bible is a book that commands hideous evils:
Slavery
Sex Slavery
Infanticide
Genocide, including the mass-murders of children, toddlers, infants, babies, unborn babies and their mothers, with only the virgin girls being kept alive as sex slaves. (Numbers 31 and many other genocidal bible passages)

Ethnic Cleansing

The unfathomably evil stoning to death of children, child brides and rape victims. (Deuteronomy chapters 21-22)

Incredible cruelty to animals.

While christians claim Jesus is a loving savior, according to their bizarre theology, Jesus will condemn billions of human beings to an infinitely cruel, purposeless hell for guessing wrong about which religion to believe. That makes Jesus infinitely more evil than the Devil, who has no say in who goes where.

The book of Acts accuses the Holy Ghost of murdering two christians, Ananias and Sapphira.

Again, why believe in evil gods?

*

Why do I bother with the bible?

If the bible were treated like other books of mythology, I wouldn’t bother.

But I consider the brainwashing of billions of children with the bible to be a terrible crime.

That is why I point out the bible’s hideous evils and its myriad other flaws. The bible is a poisonous book if believed.

~ Michael R. Burch, Quora

Oriana:
The brainwashing of children with any religion should — at the very least — become a matter of public debate. I am particularly worried about radical Islam.

At this point in its history, Christianity doesn’t represent even a tiny fraction of the threat that Islam does. Still, convincing a young child that he or she is a miserable sinner and threatening them with hell does not enhance the child's mental health, to put it mildly.

I realize that a ban would be impossible, so at least public debate over the evils of religious propaganda and childhood indoctrination should be taking place. If we saw it as a form of child abuse, much suffering could be prevented.

The one positive development is the decline in the number of believers across all denominations. This decline in religious observance seems to be a global phenomenon. Catholics and Protestants seem to have grown lukewarm over their centuries-old grudges. It would be very hard, and hopefully impossible, to find a Catholic ready to die for the belief that the communion wafer literally changes into the body of Christ. In fact it’s now hard to believe that centuries ago people were ready to kill — or die — for believing or rejecting such fictions. To those who see only increasing evil in the world, I point out this one fabulous development: the decline in religion.

Yes, I know, there is still a long way to go — but it may happen quite suddenly, within one generation. Fewer and fewer young people show any interest in religion.

Of course I have nothing against celebrating Christmas or other traditions. That’s where religion is at its best: it creates community, without the need to perpetuate hatred for the Other.

*
FREQUENTLY CONSUMING POULTRY MAY RAISE CANCER RISK

A recent study conducted in southern Italy presented some surprising findings that linked the regular consumption of poultry to potential increases in gastrointestinal cancers and all-cause mortality. This has caused one question to arise — is eating chicken really as healthy as we think it is?

The study’s findings indicated that exceeding the weekly recommended amounts — that is, eating more than 300 grams (g) of poultry, such as chicken and turkey, per week — was associated with a 27% higher risk of all-cause mortality compared to eating moderate amounts.

Moreover, the research suggested that higher poultry intake was linked to a 2.3% increase in the risk of gastrointestinal cancers, with a higher observed risk among men at 2.6%. The findings were published in the journal Nutrients.

What has concerned consumers is that these findings contrast with current established dietary guidelines, such as the Mediterranean diet, of which poultry is an important component.

However, should such results make people reconsider their diets? Could the results be overestimated? What should consumers watch out for when interpreting the results of similar studies on nutrition?

Medical News Today spoke to two experts — Wael Harb, MD, board-certified hematologist and medical oncologist at MemorialCare Cancer Institute at Orange Coast and Saddleback Medical Centers in Orange County, CA, and Kristin Kirkpatrick, MS, RD, dietitian at the Cleveland Clinic Department of Wellness & Preventive Medicine in Cleveland, OH, and senior fellow at the Meadows Behavioral Healthcare in Wickenburg, AZ, to find out more.

CAUSATION VS CORRELATION

Both experts reiterated that an association from an observational study is not enough to draw definitive conclusions about a dietary item and its links to cancer.

“The findings are interesting, but as this is an observational study, it doesn’t prove causation. The broader body of evidence still supports moderate poultry consumption as part of a balanced diet,” Harb told MNT.

Harb underscored that poultry played an important role in healthy diets and advised caution when interpreting the results.

Another important point to consider is that cancer, as a disease, is very complex and multifactorial, meaning it is hard to pinpoint its causes to one factor.

“Studies show that the development of cancer from one person to another is complex and encompasses multiple factors, including but not limited to genetics, environment, diet, physical activity, exposure to toxins, and even age and inflammation. Therefore, we need to look at any study and try to assess how it can be translated to our lifestyle,” said Kirkpatrick.

“If you smoke, for example, the first step before cutting chicken out may be quitting smoking. This is just one example of how we can assess data,” she added.

IS IT THE POULTRY ITSELF, OR THE ADDITIVES CAUSING CANCER?

How food items are cooked or whether other additives such as oils and spices are added can change the ‘healthiness’ of a food source. The two experts said the potential cancer risks associated with eating poultry could be more closely connected to those aspects, rather than the poultry itself.

“When poultry is grilled, fried, or cooked at high temperatures, it can form compounds like heterocyclic amines (HCAs) and polycyclic aromatic hydrocarbons (PAHs), which have been linked to cancer risk. However, these compounds also occur in red meat and processed meats, so the issue may lie more in cooking methods than the type of meat itself,” Harb explained, highlighting that how a food is cooked could result in the release of cancer-causing chemicals.

Kirkpatrick further explained that how a food item is processed and cooked can impact the potential benefits or risks of consuming it.

“For example, a frozen chicken nugget may be considered ultra-processed, and breaded and fried chicken may pose risks from the process of high-heat frying as well. Both may differ in their impact on health when compared to a plain chicken breast that is baked,” she said.

WHITE VS. RED MEAT: WHICH IS HEALTHIER?

The study has also fed into a long-standing debate about whether white meat is healthier than red meat.

Although white meat, such as chicken and turkey, has lower fat content and a higher protein-to-fat ratio than red meat, this does not necessarily translate into lower cholesterol levels. In fact, a 2019 study found that white and red meat may both have similar effects on blood cholesterol levels, specifically LDL or “bad” cholesterol and apolipoprotein B (apoB).

The current study also fails to identify the specific type of poultry consumed and its links to heightened cancer risk.

“The study was not able to identify the specific type of poultry (for example, was the consumed protein processed deli meat or was it a grilled chicken breast). The processing of meats in general may change their health risks. We would need more studies assessing various types of poultry and various types of red meat to truly assess significant differences between the two,” Kirkpatrick said.

“Based on what we know, the current guideline of 300 grams of poultry per week is reasonable — especially if the poultry is skinless, minimally processed, and not cooked at high temperatures,” according to Wael Harb, MD.

However, for those with certain health conditions or a family history of cancer, a lower intake may be more appropriate.

“For those who are particularly health-conscious or have a family history of cancer, staying closer to 200 grams per week and incorporating more fish, legumes, and plant proteins may be a prudent option,” Harb said.

https://www.medicalnewstoday.com/articles/could-eating-chicken-heighten-cancer-risk-experts-weigh-in-on-latest-claims

*
WHEN WATER FLUORIDATION IS STOPPED

The U.S. Centers for Disease Control and Prevention recommends that communities across the country add 0.7 milligrams of fluoride for every liter of water. It’s up to state and local governments to decide if they want to follow that recommendation. In 2022, the CDC reported that 63 percent of Americans received fluoridated water.

Adding fluoride to water has been contested in the United States since the practice became widespread in the mid-20th century. Opponents have historically voiced health concerns, including about tooth staining and disproven worries that fluoridated water could cause bone cancer, as well as claims that fluoridation amounts to mass medication and violates individual freedoms. More recently, people have pointed to research showing an association between fluoride and lowered IQ in children. But those findings, which have been heavily criticized, looked at fluoride concentrations much higher than those found in most Americans’ drinking water.

What happened in Calgary, as well as in Juneau, Alaska, which stopped water fluoridation in 2007, may be a cautionary tale for other municipalities. Science News spoke with researchers and other experts in both cities to understand what can happen when local governments opt to stop adding fluoride to drinking water.

LOOKING INTO THE MOUTHS OF SECOND-GRADERS IN CALGARY

Lindsay McLaren says she never anticipated becoming a self-described fluoridation researcher. As a quantitative social scientist at the University of Calgary, she studies how public policies can affect the health of a population. She hadn’t given much thought to fluoridation until 2011, when the Calgary City Council decided to remove fluoride from the city’s water.

The move prompted McLaren to design a study looking at how the dental health of the city’s children fared once fluoride was removed. She recruited dental hygienists to go to schools and inspect the mouths of second-grade students. Some went to schools in Calgary and others went to schools in Edmonton, a similar city in the same province that still fluoridated its water.

In Calgary, the team surveyed 2,649 second-graders around seven years after fluoridation ended, meaning they had likely never been exposed to fluoride in their drinking water. Of those, 65 percent had tooth decay. In Edmonton, 55 percent of surveyed children had tooth decay. While those percentages may seem close, they mark a statistically significant difference that McLaren calls “quite large” on the population level.

“Compared to Edmonton kids, Calgary kids were now considerably worse as far as dental health goes,” McLaren says. Other factors, including diet and socioeconomic status, did not explain the differences between children in Edmonton and Calgary, she says.

In 2024, another study found a higher rate of tooth decay-related treatments for which a child was placed under general anesthesia in Calgary than in Edmonton. From 2018 to 2019, 32 out of every 10,000 children in Calgary were put under general anesthesia to treat tooth decay, compared with 17 for every 10,000 children in Edmonton.

The findings didn’t surprise local dentists, says Bruce Yaholnitsky, a periodontist in Calgary. “This is just obvious to us. But you need to have proper science to prove, in some cases, the obvious.”

MEDICAID CLAIMS IN JUNEAU

Years before Calgary’s city council opted to remove fluoride from its water, members of the local government in Juneau made a similar decision.

Jennifer Meyer says she first became interested in studying the effects of lack of fluoridation in Juneau after moving there in 2015. At the time, she had two young children; a third was born in Juneau. She was surprised at how much dental work, including fillings, she noticed among many other preschool and elementary school children.

“I thought ‘Wow, what’s going on here?’ Because I could see a lot of the decay and the repairs,” Meyer says.

Juneau had stopped adding fluoride to its drinking water in 2007 after asking a six-member commission to review the evidence around fluoridation. A copy of the commission’s report obtained from Meyer, a public health researcher at the University of Alaska Anchorage, shows that two commission members opposed to fluoridation made claims about the health effects that Meyer says are “false” and “not grounded in quality investigations.”

The commission’s chair criticized anti-fluoride positions, at one point writing that part of the literature was based on “junk science.” But he ultimately recommended that the city stop fluoridation, claiming that the evidence about its safety at low concentrations was inconclusive. With the commission’s members split at 3–3, the Juneau Assembly voted to end fluoridation.

Meyer and her colleagues analyzed Medicaid dental claims records made before and after the city stopped fluoridation. They found that the average number of procedures to treat tooth decay rose in children under age 6, from 1.5 treatments per child in 2003 to 2.5 treatments per child in 2012.

The cost of these treatments in children under 6 years old, when adjusted for inflation, jumped by an average of $303 per child from 2003 to 2012.

Meyer says that increased Medicaid costs for dental treatments ultimately end up being paid by taxpayers.

“When politicians decide to withhold a safe and effective public health intervention like fluoridation, they are imposing a hidden health care tax on everyone in their state or community,” Meyer says.

CONTINUED CALLS TO END FLUORIDATION

Today, many opponents to fluoride in water cite a controversial systematic review released last year by the National Toxicology Program, which is nestled in HHS and evaluates the health effects of substances. That August 2024 review concluded with “moderate confidence” that water with more than 1.5 mg of fluoride per liter was associated with lowered IQ in children.

But that dose is more than double the CDC’s recommended amount. And the review authors couldn’t determine if low fluoride concentrations like those found in treated drinking water in the United States had a negative effect on children’s IQ. In addition, merely finding an association does not prove that higher levels of fluoride caused lowered IQ, the NTP notes on its website.

More broadly, Meyer says, “ending fluoridation … based on weak or misrepresented evidence is not a precaution, it’s negligence.”

Juneau remains without fluoridated water. In Calgary, though, residents voted in 2021 to bring it back. With 62 percent of voters opting to reintroduce fluoride, the margin was higher than it was in the 1989 vote that brought fluoride to Calgary in the first place. Guichon says McLaren’s study, combined with “determined advocacy,” helped bring the electorate to the polls.

“More people voted to reinstate fluoride than voted for the mayor. So that’s a success,” Meyer says. “But in America, we are entering a dark time.”

https://www.sciencenews.org/article/fluoride-drinking-water-dental-health#:~:text=Science%20reveals%20what%20happened.&text=In%20cities%20that%20have%20stopped%20adding%20fluoride,experience%20more%20tooth%20decay%2C%20studies%20have%20shown
 

Oriana: THE REAL CULPRIT: SUGAR

The whole debate over fluoridation seems to miss a much bigger point. It’s not lack of fluoride that causes cavities; it’s mainly the consumption of sugar and high-glycemic carbohydrates. Sugar really does rot your teeth — and sugar is added to a huge number of processed foods.

Chocolate seems to be one happy exception — apparently the cocoa polyphenols overcome the potential damage from the added sugar. Even so, why not enjoy the added health benefits of darker chocolate? Its relative bitterness is the sign of goodness.

Harmful oral bacteria also thrive on certain amino acids. I once made the mistake of using powdered glycine as a sweetener. Within a matter of months, I began suffering from toothaches. I ended up running up huge dental bills. Too late, alas, I remembered a biochemist’s warning against powdered amino acids. It’s not only sugar that rots your teeth. Certain amino acids — actually most of them — are just as bad. (I don’t mean natural proteins — I mean the artificially isolated, powdered amino acids that are sold in the “health food” section. Amino acids sold in capsules seem OK).

*


FLUORIDATION AROUND THE WORLD

The addition of fluoride to drinking water is far from universal.

Australia may have just been through an election campaign, but it’s fair to say that certain communities have been more fired up about water fluoridation – a non-issue at federal level – than anything postulated by Tony Abbott and Kevin Rudd.

The debate over whether to add fluoride to water supplies has flirted with the unhinged, an anti-fluoride protester at a public meeting in Lismore last week ominously telling NSW’s chief medical officer, “I have friends in Syria, do you know of sarin gas?” (Sarin contains fluorine.)

The NSW opposition has said it has had enough, sponsoring a motion that would compel local authorities to add fluoride to water, ending what NSW Labor calls the “anti-fluoridation circus.”

Use of fluoride in drinking water to strengthen teeth is strongly backed by the Australian Dental Association, scientific studies and state governments, which have attempted to shoot down scientifically unproven theories that fluoride can cause everything from allergies and arthritis to cancer and bone fractures.

However, the use of fluoride is far from universal, prompting warnings that children in areas without fluoridated water are suffering far higher rates of tooth decay than those who consume the substance.

Water fluoridation was introduced to Australia in the 1960s and every state and territory now provides it to 70% or more of its population. There are discrepancies – for example, 96% of people in NSW are provided fluoridated drinking water, compared with 70% of those in the Northern Territory and 86% in Queensland.

So how do other countries compare?

USA:

Grand Rapids in Michigan became the first city in the world to have fluoridated drinking water in 1945. Now, more than 204m people in the US have access to fluoridated drinking water – roughly two thirds of the population.

Canada:

Canada has been held up by anti-fluoride campaigners as a standard bearer for a fluoride backlash. Rates of water fluoridation vary wildly between provinces – about three quarters of the population in Ontario, compared with just 4% in British Columbia – but several high-profile decisions have bolstered the anti-fluoride cause.

About 30 Canadian municipalities have banned fluoride in recent years, most notably the region of Waterloo in 2010, followed by Calgary in 2011.

New Zealand:

The New Zealand government says it “strongly” recommends the adoption of water fluoridation. About half of the population has access to fluoridated water. However, Christchurch’s mayor has ruled out adding the substance to the city’s “perfect” water and Hamilton voted to remove fluoride in June.

UK and Ireland:

Just 10% of the UK’s population – or about 6m people – get either naturally fluoridated water or artificially added fluoride.

As in other countries, there are regional variances – the West Midlands provides fluoridated water to 84% of the population, compared with just 2.6% in Yorkshire.

Meanwhile, Ireland is one of the more enthusiastic adopters of water fluoridation, with nearly three quarters of the population having access to fluoridated water, although it appears the tide is turning, with the political party Sinn Féin recently backing a bill that would introduce a prison term of up to five years for adding fluoride to the water.

Continental Europe:

Just four European Union countries back fluoride on a national scale, and nations such as Germany, the Netherlands and Sweden have discontinued water fluoridation.

Anti-fluoride activists claim the example of continental Europe shows the widespread unease over the health impact of fluoride, although in some cases governments have stopped adding it due to the adoption of other methods to improve dental health.

There are pockets of fluoride uptake – 11% of the Spanish population, for example. But, significantly, Germany halted its water fluoridation in the 1970s and France never started.

However, proponents state that no country has banned the practice outright and point to the fact that many European nations add fluoride to salt.

Brazil:

Water fluoridation has been taking place in Brazil since the 1950s, and the government has recently ramped up efforts to provide fluoridated water to as many cities as possible.

About two thirds of Brazilian cities now have fluoridated water, and studies show that results have generally been positive.

China:

China embarked upon a pursuit of water fluoridation for about 20 years before backing away entirely from it in the 1980s. Parts of the country have high levels of naturally occurring fluoride, which one study has linked to developmental difficulties in children.

https://www.theguardian.com/world/2013/sep/17/water-fluoridation

*


DECLINE OF CARIES PREVALENCE AFTER THE CESSATION OF WATER FLUORIDATION IN THE FORMER EAST GERMANY

In contrast to the anticipated increase in dental caries following the cessation of water fluoridation in the cities of Chemnitz (formerly Karl-Marx-Stadt) and Plauen, a significant fall in caries prevalence was observed. This trend corresponded to the national caries decline and appeared to be a new population-wide phenomenon.

Additional surveys (N=1017) in the formerly fluoridated towns of Spremberg (N=9042) and Zittau (N=6232) were carried out in order to test this unexpected epidemiological finding.

Pupils from these towns, aged 8/9-, 12/13- and 15/16-years, have been examined repeatedly over the last 20 years using standardized caries-methodological procedures. While the data provided additional support for the established fact of a caries reduction brought about by the fluoridation of drinking water (48% on average), it has also provided further support for the contention that caries prevalence may continue to fall after the reduction of fluoride concentration in the water supply from about 1 ppm to below 0.2 ppm F.  

Caries levels for the 12-year-olds of both towns significantly decreased during the years 1993-96, following the cessation of water fluoridation. In Spremberg, DMFT fell from 2.36 to 1.45 (38.5%) and in Zittau from 2.47 to 1.96 (20.6%).

These findings have therefore supported the previously observed change in the caries trend of Chemnitz and Plauen. The mean of 1.81 DMFT for the 12-year-olds, computed from data of the four towns, is the lowest observed in East Germany during the past 40 years. The causes for the changed caries trend were seen, on the one hand, in improvements in attitudes toward oral health behavior and, on the other hand, in the broader availability and application of preventive measures (F-salt, F-toothpastes, fissure sealants, etc.).

There is, however, still no definitive explanation for the current pattern and further analysis of future caries trends in the formerly fluoridated towns would therefore seem to be necessary.

https://www.researchgate.net/profile/Gerardo-Maupome/publication/227643338_Pattern_of_dental_caries_following_the_cessation_of_water_fluoridation/links/5f0c90d5a6fdcca32ae6920b/Pattern-of-dental-caries-following-the-cessation-of-water-fluoridation.pdf

Oriana:

I don’t have an opinion on water fluoridation. But I do have a strong opinion on the need for a sugar-free diet. Sugar (sucrose) really does rot teeth, and there is no excuse for using it. We finally have excellent substitutes, including xylitol, erythritol, and allulose (these three do not leave a bitter aftertaste). Harmful bacteria that reside in the human mouth can’t digest sugar alcohols.

*
SHINGLES VACCINE MAY LOWER THE RISK OF HEART DISEASE

One out of every three adults around the world will develop shingles — a reactivation of the varicella-zoster virus. The varicella-zoster virus is the same virus that causes chickenpox.

Adults ages 50 and older can be vaccinated against shingles. The vaccine provides more than 90% protection against developing shingles.

Past studies show that in addition to providing protection against shingles, the vaccine may also provide other health benefits, including a potentially lower risk for dementia, as well as heart conditions such as heart attack and stroke.

“Shingles has traditionally been regarded as an infectious disease,” Sooji Lee, MD, researcher in the Center for Digital Health in the Medical Science Research Institute at Kyung Hee University Medical Center in South Korea, told Medical News Today.

“Previous studies suggested an association between shingles and chronic conditions such as cardiovascular disease. This points to a potential link between infections and chronic diseases. This is why further investigation into the broader impact of shingles vaccination is essential,” she explained.

Lee is the first author of a new study that has found people who receive the shingles vaccine have a 23% lower risk of cardiovascular events, such as heart attack, stroke, or coronary heart disease, with this protective benefit lasting for up to eight years post-vaccination.

The findings were recently published in the European Heart Journal.

https://www.medicalnewstoday.com/articles/shingles-vaccine-can-lower-heart-disease-risk-23-new-study#Shingles-vaccine-may-lower-risk-for-any-cardiovascular-events

Capsule summary:

Receiving a specific type of shingles vaccine may provide a 23% lower risk of cardiovascular events like stroke or heart failure for up to 8 years. With reports of other shingles vaccines protecting against the risk of dementia, scientists are trying to understand the mechanism underlying these unintended benefits.

*
COULD HIV DRUGS HELP PREVENT DEMENTIA?

Researchers at UVA Health have found that a class of HIV drugs called nucleoside reverse transcriptase inhibitors (NRTIs) may significantly reduce the risk of developing Alzheimer’s disease.

Their large-scale analysis of United States health insurance data revealed that patients taking these medications had up to a 13% lower risk of Alzheimer’s disease each year.

Based on these findings, the team is calling for clinical trials to test whether these drugs could be used to help prevent Alzheimer’s.

The team had earlier discovered a possible biological mechanism explaining how the drugs might offer protection against Alzheimer’s.

https://www.medicalnewstoday.com/articles/hiv-drugs-help-prevent-alzheimers-disease-nrti

*
TAXI DRIVERS AND AMBULANCE DRIVERS HAVE A LOWER RISK OF ALZHEIMER’S


Researchers have found that the risk of death due to AD is markedly lower in taxi and ambulance drivers compared with hundreds of other occupations. And the reason could be that these drivers develop structural changes in their brains as they work.

Drawing a connection between Alzheimer's disease and work

In the past two decades, small studies demonstrated that London taxi drivers tend to have an enlargement in one area of the hippocampus, a part of the brain involved with developing spatial memory. Interestingly, that part of the brain is one area that's commonly damaged by AD.

These observations led to speculation that taxi drivers might be less prone to AD than people with jobs that don't require similar navigation and spatial processing skills.

A recent study explores this possibility by analyzing data from nearly nine million people who died over a three-year period and had occupation information on their death certificates. After accounting for age at death, researchers tallied Alzheimer's-related death rates across 443 different jobs. The results were dramatic.

Taxi and ambulance drivers were much less likely to die an AD-related death than people in other occupations. AD accounted for 0.91% of deaths of taxi drivers and 1.03% of deaths of ambulance drivers. Among chief executives, AD accounted for 1.82% of deaths, which is close to the average for the general population. While these differences may seem small, they translate to more than 40% fewer deaths related to Alzheimer's among taxi and ambulance drivers.
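For readers who want to check the “more than 40% fewer” figure, here is a minimal arithmetic sketch. It assumes, as the article suggests, that the chief-executive rate (1.82%) can stand in for the population average:

```python
# Relative reduction in AD-related death rates, using the
# percentages quoted from the Harvard Health article.
taxi = 0.91       # % of taxi-driver deaths attributed to AD
ambulance = 1.03  # % of ambulance-driver deaths attributed to AD
average = 1.82    # % for chief executives, close to the general average

for label, rate in [("taxi", taxi), ("ambulance", ambulance)]:
    reduction = (1 - rate / average) * 100
    print(f"{label}: {reduction:.0f}% fewer AD-related deaths")
# taxi: 50% fewer AD-related deaths
# ambulance: 43% fewer AD-related deaths
```

Both reductions come out above 40%, consistent with the article's claim.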

This benefit did not seem to extend to others with jobs involving navigation. For example, aircraft pilots (2.34%) and ship captains (2.12%) had some of the highest rates of death due to AD. Bus drivers (1.65%) were closer to the population average but still not nearly as low as taxi and ambulance drivers.

Other types of dementia did not follow this pattern. Rates of death due to dementia other than AD were not lower among taxi and ambulance drivers.

One possible explanation is that jobs requiring frequent real-time spatial and navigational skills change both structure and function in the hippocampus. If these jobs help keep the hippocampus healthy, that could explain why AD-related deaths — but not deaths due to other types of dementia — are lower in taxi and ambulance drivers. It could also explain the older studies that found enlargement in parts of the hippocampus in people with these jobs.

And why aren't bus drivers, pilots, and ship captains similarly protected? The study authors suggest these other jobs involve predetermined routes with less real-time navigational demands. Thus, they may not change the hippocampus as much.

The findings could be due to chance, especially because there were just 10 AD-related deaths among taxi drivers. Even a small number of overlooked deaths due to AD could sway the results.

And even if driving a taxi or ambulance could lower your risk of AD-related death, what's the impact of GPS technology now in widespread use? If these jobs now require less navigational demand due to GPS, will the protective effect of these jobs evaporate?

https://www.health.harvard.edu/blog/two-jobs-may-lower-the-odds-of-dying-from-alzheimers-disease-but-why-202505063098

*
ending on beauty:

MY GRANDMOTHER’S HAIRNETS
 
Other grandmothers knitted.
Mine only crocheted.
And exclusively hairnets.

Ever since I was a toddler,
I remember her that way,
with a little silver hook,

spiraling around and around
the nothing at the top.
Endless hairnets! 

She kept her hair short.
Even after eighty,
she was only beginning to gray.

Her hairnets were brown
or black, the yarn so fine
the hairnet hardly showed.

It was not about need.
It was about that spiraling
around empty space,

the eye of wisdom that opens
when you come to know
how in one moment

you can lose all except
your very
soul. Everything else

is a ball of yarn.
It’s about the flight
of the hook. 

~ Oriana

below: my grandmother Veronika's first ID photo after Auschwitz:


