Sunday, April 26, 2026

WITTGENSTEIN AND SAMUEL BECKETT; MERTON ON LANGUAGE AND SILENCE; THE LOST PRONOUNS OF ENGLISH INTIMACY; PUTIN'S WAR THREATENING THE RUSSIAN FEDERATION; ELON MUSK'S TURN TO THE FAR RIGHT; MUSKISM; ROSALIE EDGE, PIONEERING CONSERVATIONIST; CITY WITH THE CLEANEST AIR; WATER POLLUTION WITH NITRATES; WHY MORE AMERICANS ARE MOVING TO IRELAND;


*
EDELWEISS

It was a field mouse that ran in
as I described an edelweiss, 
stem, leaf and petal
covered with dense plush.

Before hiding in the night, 
the mouse stood up, 
lifted his head, and for a hushed
eternity we stared —

I must have seemed a raptor —
an owl-like mask, 
predator’s frontal eyes — 
yet my nickname used to be 

‘Little Mouse’. I overheard 
a student once: “Look how she 
walks — not across the yard, 
but along the wall like a mouse.” 

Don’t you see the courage 
of the mouse?
Its wisdom in keeping close
to the shadowing wall?


Not with doves and roses 
the mouse and I claim our 
survival, but with a starlike
flower tucked into rock.

*

Little mouse, little mouse,
I know you no longer exist —
your shoelace tail, pale feet,
your half-moon ears —

And I feel I have lived
several lives by now,
without needing to 
die. Or rather, I have 

died, meaning changed.
Mouse, in memory 
of both of us,
I name you Edelweiss.


~ Oriana


*
LUDWIG WITTGENSTEIN'S INFLUENCE ON SAMUEL BECKETT

"Words are all we have.” ~ Samuel Beckett

"The limits of my language mean the limits of my world."
(Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt) ~ Ludwig Wittgenstein

The connection between Samuel Beckett and Ludwig Wittgenstein is one of the most interesting intersections in modern literature and philosophy. While Beckett was notoriously private about his intellectual influences, several concrete sources verify his deep engagement with Wittgenstein's work.

The most definitive evidence comes from the physical contents of Beckett’s home. Scholars cataloging Beckett's personal library (notably Beckett’s Bookshelf) found six books by Wittgenstein and six books about him. Many of these contained Beckett’s own handwritten marginalia, showing he wasn't just a casual reader but an active student of the texts.

His correspondence reveals that he was reading Wittgenstein at least as early as the late 1950s. He specifically mentioned reading the Philosophical Investigations and the Blue and Brown Books.

In James Knowlson’s Damned to Fame, the authorized biography, Knowlson notes that Beckett’s interest in Wittgenstein grew significantly after 1950. Beckett reportedly felt a "kinship" with Wittgenstein’s struggle to express the inexpressible.

Memoirs from Beckett’s friends and colleagues mention his "avowed interest" in the philosopher, particularly regarding the limits of logic and the "mess" of human experience.

Literary critics point to specific moments in Beckett's work that seem to "stage" Wittgensteinian problems. In the Tractatus, Wittgenstein uses the famous analogy of a ladder that one must throw away after climbing it. In Beckett’s novel Watt, a character says, "Do not come down the ladder, Ifor, I haf taken it away," which many scholars view as a direct nod to this logical paradox.

The repetitive, rule-bound dialogue between Vladimir and Estragon in Waiting for Godot is often analyzed as a series of Wittgensteinian "language-games"—where words serve as moves in a social ritual rather than as tools for conveying factual information.

Critics like Stanley Cavell have linked Beckett’s The Unnamable to Wittgenstein’s critique of the word "I." Both suggest that "I" does not name a "thing" or a soul, but functions as a placeholder in language.

Finding a direct quote where Samuel Beckett discusses Ludwig Wittgenstein is notoriously difficult because Beckett was famously reticent about his philosophical influences. He often claimed he "didn't understand philosophy," despite his library being full of heavily annotated philosophical texts.

In the authorized biography Damned to Fame, James Knowlson records that Beckett explicitly admitted to an interest in Wittgenstein during their conversations. While not a "proverbial" quote, it is the most direct confirmation we have: "Beckett’s interest in Wittgenstein... was an avowed one.”

Knowlson notes that Beckett felt a deep "kinship" with Wittgenstein’s struggle to use language to define the limits of language.

Beckett’s 1937 letter to Axel Kaun doesn't mention Wittgenstein by name, yet it is often called Beckett’s "Wittgensteinian Manifesto." In this letter, Beckett describes his goal for writing in terms that mirror Wittgenstein’s Tractatus:

"To drill one hole after another into [language] until what is cowering behind, be it something or nothing, begins to leak out—I cannot imagine a greater goal for today's writer.”

In the study Samuel Beckett's Library, researchers Dirk Van Hulle and Mark Nixon reveal that Beckett’s "direct quotes" of Wittgenstein are actually found in the margins of his books. In his copy of Philosophical Investigations, Beckett underlined sections regarding the "language-game" and the "limits of my world.” ~ René Troy Tun, Facebook 



*
MERTON ON LANGUAGE AND SILENCE

“For language to have meaning there must be intervals of silence somewhere, to divide word from word and utterance from utterance. He who retires into silence does not necessarily hate language. Perhaps it is love and respect for language which imposes silence upon him.” ~ Thomas Merton, “Disputed Questions”



*
THE LOST PRONOUNS OF ENGLISH INTIMACY

Tales of love and adventure from 1,000 years ago reveal a dazzling range of now-extinct English pronouns. They capture something unique about how people once thought about "two-ness". But why did they die out in the first place?

Which word would you use to refer to yourself? "I", presumably, in the singular. And how about you and a group of people? "We", of course, in the plural.

But how about you and one other person? 

In modern English, there is no word for that. You would probably just use "we" or "the two of us". 

But more than 1,000 years ago, you would have said: "wit".

This term, once also used affectionately to describe the closeness between two people, is one of many personal pronouns that have been lost or transformed amid huge social and political change over the centuries. The English language has become simplified – but at times this has left gaps, creating confusion.

"Wit" means "we two" in Old English, a Germanic language spoken in England until about the 12th Century, which evolved into the English we speak today. Now completely lost, "wit" was part of an extinct group of pronouns used for exactly two people: the dual form, which also includes "uncer" or "unker" ("our" for two people) and "git" ("you two"). That dual form vanished from the English language around the 13th Century.

"There's a whole history in the [personal] pronouns", including the impact of Viking and Norman invasions on the English language alongside shifting norms and customs that have changed how we talk, says Tom Birkett, a professor of Old English and Old Norse at University College Cork in Ireland.

Many Old English pronouns are still in use, says Birkett. Our oldest English personal pronouns include "he" and "it", as well as "we", "us", "our", "me" and "mine", Birkett says. They have made it through more than 1,000 years of history and upheaval, almost intact.

"'He' definitely is a very old English form, and also 'hit', which lost the 'h' and became 'it'," Birkett says. The Old English "Ic" has also been resilient, losing only one letter, to become the modern English "I".

But other pronouns were cast off – such as the once-common dual form. "It's fairly widespread in Old English texts. Particularly in poetry, we get the use of 'wit' and 'unc' for 'us two, the two of us'," says Birkett.

To illustrate the poetic power of the dual, Birkett gives the example of a love poem, known as Wulf and Eadwacer, that is over 1,000 years old. In the poem, a woman yearns for her lover, Wulf, who is separated from her because he was rejected by her clan. The last line reads, in a modern English translation:

"One can easily split what was never united,
the song of the two of us."

In the Old English original, the words for "the song of the two of us" are "uncer giedd" – meaning "our song", but just for two people.

"The dual pronoun is used in that poem, and I think it's quite an intimate use, because it's all about 'We two together against the world'," says Birkett. "Certainly in poetry, it has that use of creating an intimate connection between two people.”

Fighting the "hronfixas"

In Beowulf, the dual makes a dramatic appearance: two warriors swim in the sea holding swords, "to defend the two of us against whales" ("wit unc wið hronfixas werian" in the original). Thought to be written in the 8th Century, Beowulf is the earliest European epic written in the vernacular – the language commonly spoken – rather than a high-culture, literary language.

The dual form survived the transition from Old English to Middle English, after the Norman conquest in 1066, but then disappeared. "That's a whole category of pronouns that's just been lost," says Birkett. According to him, one of the last times the dual appears is in "Havelok the Dane", a text by an unknown author from around 1300:

"Roberd! Willam! Hware ar ye? Gripeth eþer unker a god tre, and late we nouth þise doges fle."
("Robert! William! Where are [you all]? Both of you two grab a good staff, and let's not allow this dog to flee!")

Introducing "she"

The dual form still exists in some languages, such as Arabic. But why did such a poetic pronoun go extinct in English? Its loss seems especially strange when popular culture still celebrates that sense of a special two-ness, and "just the two of us" pervades song and literature.

Generally speaking, "language tends towards simplicity", Birkett says. Given that the broad, plural "we" can also be used for two people, there may simply not have been a strong enough reason to make the extra effort of keeping the dual form alive, in his view. 

He points out that many other Old English pronouns have, in fact, not survived to the modern day either – replaced by words from foreign languages or more useful alternatives.

"She", for example, is younger than "he", and seems to be an amalgamation of two Old English female pronouns, Birkett says – "heo" and "seo". "[These] probably combined over time, to make 'she'," he says.

Two-ness is often portrayed as special, as in this European mid-12th Century scene


Vikings and werewolves  

Another commonly used modern pronoun, "they" – along with "them" and "their" – is actually not Old English at all, according to Birkett. It arrived with Old Norse, a Scandinavian language spoken by the Vikings who invaded and settled in England from the 800s onwards. "They" then spread and replaced the Old English "hie".

The foreign "they" may have become popular for practical reasons, Birkett suggests: the native "hie" was potentially confusing as it could mean "they" but also, "her" – whereas "they" was distinct and therefore clearer.

Later, "they" was also occasionally used in the singular, as it is today when used as a gender-neutral pronoun, Birkett says. The singular "they" appears, for example, in the 14th-Century text "William and the Werewolf", as well as in "The Pardoner's Prologue", by Geoffrey Chaucer, written around the same time.

"Chaucer was using 'they' as a singular back in the 14h Century," says Birkett. "It's a very, very old usage, and very useful when you don't know the person, [and don't want to refer] to them as 'he' or 'she'.""It was natural to extend that [French 'vous'] to English and to use that plural 'you' form to talk to the king and to the aristocracy," he says. "And then it was used as a respectful term for people in senior positions, and then, eventually, for everybody."

In the process, "thou", "thee" and "thine" disappeared, replaced by the catch-all "you".

"So you've got the politics there, of Norman French and the aristocracy, and the influence of French on English, which of course has been extensive," Birkett says. 

You, you all, you guys

In the centuries after the Norman conquest of England in 1066, our language underwent another profound change: we began using "you" to address one person, and also, many people.

Before, there had been different words for that. In Old English, "Þu" (later spelled "thou") was the word for "you (singular)". A different word, "ge", which has survived in some English-speaking regions as "ye", was used for "you (plural)". With the Norman conquest, another chapter of transformative multilingualism began in England, in particular intense contact between English and French. The Norman French "vous" arrived in England, a pronoun used to address both a group and, in a formal context, a single person. The native English plural was then also used as a mark of respect when addressing just one person, Birkett says.

"It was natural to extend that [French 'vous'] to English and to use that plural 'you' form to talk to the king and to the aristocracy," he says. "And then it was used as a respectful term for people in senior positions, and then, eventually, for everybody."

In the process, "thou", "thee" and "thine" disappeared, replaced by the catch-all "you".  

"So you've got the politics there, of Norman French and the aristocracy, and the influence of French on English, which of course has been extensive," Birkett says. 

Today, some dialects of English spoken in Ireland and elsewhere still make a distinction between the plural and singular "you", he says. "In my area, in Munster in Ireland, 'ye' is very, very commonly used as a plural. People wouldn't tend to write it down as much, but in spoken English it's used a lot," he says. In Glasgow and west central Scotland, another version, "youse", is often used as the plural in the local dialect.

And people today also use spontaneous workarounds to clarify the plural in everyday life, such as "you all" and "you guys".

Despite these changes, Birkett says that compared to nouns and verbs, personal pronouns have remained quite stable and retain some grammatical features of Old English that have entirely disappeared from English nouns. For example, we still say "he", "his" or "him", depending on the case – whereas English nouns and adjectives, which in the past also changed depending on the case, no longer do.

In Old English, for instance, the word for "king" – "cyning" – changed depending on its role in a sentence: "Hē is cyning" is in the nominative case and means "he is king", whereas "mid Þæm cyninge" is in the dative case and means "with the king".   

"[Personal pronouns] have tended to survive because they're the bedrock of language," says Birkett. "They're used every day, all the time, and they've certainly changed less than nouns or verbs in the [English] language. Pronouns have had that kind of staying power." 

Is there any chance the extinct English dual pronouns might return one day, turning Bill Withers' "Just the Two of Us" into "Just Wit", and Taylor Swift's "Our Song" into "Uncer Song"? Based on Birkett's historical examples, a comeback seems unlikely: once the dual fell out of use, it did not reappear. 

However, surely, the future of our pronouns is whatever we want it to be. Perhaps wit – you and me – could make a start, and sprinkle some lost pronouns into our conversations today?

https://www.bbc.com/future/article/20260408-the-extinct-english-words-for-just-the-two-of-us

*
PUTIN’S WAR ON UKRAINE IS THREATENING TO CAUSE A COLLAPSE OF THE RUSSIAN FEDERATION

Putin is in total despair. 
He miserably failed. His war in Ukraine is now causing the collapse of Russia.

How ironic: Putin was the one accusing Gorbachev of destroying the USSR. Now Putin is the one who started something that is going to destroy the Russian Federation.

That’s totally NOT what Putin wanted.

Putin wanted to break up NATO and punish the USA for causing the disintegration of the Soviet Union—his life-long plan, which he “inherited” from his spiritual mentor Sobchak (whom young Vova had to liquidate [Oriana: by poison] before his glorious 2000 presidential installation, because "Sobchak knew too much").

Sobchak was a KGB snitch during the Soviet times—there is a phrase in the canonical book of Soviet satire “The Golden Calf” (by Ilf and Petrov) describing post-1917 sentiment among Russian bureaucrats: “Who could know? People made do as best they could.”

Professor Sobchak was “making do as best he could” in the USSR, snitching on other professors.

At the same university (Leningrad State, where Sobchak taught law), Putin was a student snitching on his fellow students—and he was so good at snitching that they invited him to work for the KGB. And then the KGB sent Putin to work as Sobchak’s aide when, during Gorbachev’s Perestroika, Sobchak rose as a new “democratic leader” and became the mayor of Leningrad (now St. Petersburg).


So, Putin’s whole life and presidency of Russia are a subconscious attempt to justify his life before the 1991 collapse of the USSR.

He is reinstalling the USSR in Russia to justify his adolescent life choices (this time married to mafia, not communism—mafia was Vova’s first love, before he got into the KGB structures on the advice of his judo trainer/underground mafia boss).

This is the part that many western analysts miss—or never knew about—that the most basic installation in Putin’s psyche was “the understandings” of the Soviet mafia of “thieves in law,” not the KGB.

So, this mafioso lack of care for anyone outside “The Family,” coupled with the KGB’s limitless cruelty and cynicism, is what we have in the man who has been steering the course of the largest country on Earth since 1999.

And because he was so unremarkable and talentless, he decided that it was God’s will (the Universe, higher forces, Providence) that elevated him to this position—that he was “The Chosen One” to fulfill the vision of glory and grandiosity for Mother Russia.

Deep inside, Putin is superstitious, not religious—he sought the blessing of shamans, kissed the Quran, and dutifully attends Orthodox Christian prayer on Easter, holding a candle and standing (as required in canonical Russian Orthodoxy) through the whole service.

Putin during his private Easter Service at the Kremlin

Putin didn’t want the Russian Federation to self-destruct when he embarked on his plan to destroy America and steal the global hegemony for Russia.

Russia’s demise wasn’t in his plan—he planned for Russia’s revival.

Now he’s destroying the very thing he says he wanted to make great—the Russian Federation’s geopolitical standing.

Putin’s Russia showed herself to be a predator—not a benevolent benefactor.

This completely negates any claims that Russia is a “force for good.”

Killing over a million people in the heart of Europe—including his own people—isn’t something the people of the West can ever forget.

Putin hoped a “victory” would give him absolution. But no matter his “successes” on the battlefield in Ukraine, his name now stands firmly in the same row as the likes of Hitler and Assad.

Even if he managed to capture Ukraine (he won’t), this would be only a temporary “success” like Hitler’s invasion of Poland in 1939.

How can Putin be happy, seeing everything he was working for during his whole life going up in smoke?


China has a massive number of wind and solar farms—not because it has to abide by “net zero” policies, but because these are technologies of the future.

What is happening in the Strait of Hormuz, with Iran refusing to allow ships through the narrow shipping channel, is shaping the global energy infrastructure of tomorrow—and it’s not going to be a world relying on fossil fuels produced in Russia, Iran, and the Middle East.

You can literally feel the shift of geopolitical gears.

In fact, Putin in his delusive dreams of being “the Chosen One” might be onto something—he’s the Destroyer of the Old World Order indeed.

The New World Order is being born as we speak. But it’s not the world order Putin dreamed about. 
It is paradigmatically, principally different. 

Putin was “chosen” as a destroyer of the old empire. His job was to destabilize the Russian Federation enough, so that the new Europe could rise up. It is rising now. ~ Elena Gold, Quora
 

*
WHEN AND WHY ELON MUSK MADE HIS HARD TURN TO THE RIGHT

“The coronavirus panic is dumb,” tweeted Elon Musk in early March 2020, his first public comment on COVID-19. (It was also his first tweet to earn more than one million likes.) To him, the true virus was informational. The cybernetic collective of social media functioned like a communal id, where posts spread not because of their truth but their “limbic resonance.” “You can’t talk people out of a good panic,” Musk told Joe Rogan. “They sure love it.” By late March, he had landed on a new phrase for the phenomenon: a “mind virus.”

It was an interesting choice of words. Social media virality had been Musk’s great asset, the mechanism through which he converted attention into value. But here, virality was being invoked in a negative sense: it wasn’t just about circulation but sickness. The phrase reached back to Richard Dawkins, whose 1993 article “Viruses of the Mind” argued that human consciousness was susceptible to infection by irrational ideas like religion and superstition the way malware infected a computer. For Musk, social media was now the superspreader of such contagions.

He elaborated further in a conversation with Joe Rogan on May 7, 2020. As the “memesphere” had become global, Musk said, it created the conditions for a “mind virus” that could infect the whole world. Rogan was confused. He thought Musk was talking about Neuralink—a virus that interfered with a brain-computer interface. No, Musk clarified: a mind virus referred to a “wrong-headed idea that goes viral.” To Musk, the political-economic struggles of the pandemic were not just being waged in factories or governments but in the immune systems of collective thought itself.

Twenty-one days later, a group of protesters burned down a police station in Minneapolis in retaliation for the killing of George Floyd, a Black man murdered by a white officer. Protests spread around the country and around the world. By the summer of 2020, between 15 million and 26 million Americans had participated in the demonstrations, making it the largest social movement in U.S. history. One consequence was the election of Joe Biden in November 2020: as multiple studies have shown, the protests contributed to Democratic electoral gains across the country.

Once in office, Biden would pursue the most progressive domestic agenda in decades. His administration oversaw an expansion of the social safety net, a regulatory push around antitrust and consumer protection, and the most pro-labor National Labor Relations Board since the 1940s. The sequence of events fits the classic pattern of a Twitter Revolution. The George Floyd protests seemed to fulfill the promise of social media as a catalyst of progressive change. The woke social network that spawned Occupy Wall Street and Me Too now brought tens of millions of Americans into the street and helped eject Donald Trump from the White House. 

The hashtag progressivism of the 2010s had been vindicated on a very large scale.

In retrospect, however, the victory was fleeting. The George Floyd protests provoked a major backlash. Right-wing forces mobilized on social media to counter narratives about police brutality and racial inequality, and to celebrate figures like Kyle Rittenhouse, the white teenager who shot three men with a semi-automatic rifle at a protest in Wisconsin in August 2020 and was subsequently acquitted of all charges after claiming he acted in self-defense. 

Conservatives increasingly appropriated the word “woke” for their own purposes, turning it into a catch-all for the kind of politics they opposed. “Woke” had been a Black term and then, at the hands of figures like Jack Dorsey, came to describe the supposedly democratizing effects of social media. In the aftermath of the George Floyd protests, however, it became a pejorative label for perceived excesses in the pursuit of justice. By 2021, national Republicans were railing against “wokeness.”  

This was the backdrop against which Musk’s thinking about virality underwent a further mutation. After labeling the coronavirus panic a “mind virus” in the spring of 2020, over the course of the following year he became convinced that something more virulent was circulating: a “woke mind virus.” His first public use of the phrase came one evening in December 2021, when he posted the following tweet: “traceroute woke_mind_virus.” 

Traceroute is a diagnostic tool used to map the path of data through the internet—the digital equivalent of injecting dye into a patient’s veins to illuminate areas of concern in an MRI. In his elliptical way, Musk was expressing a desire to trace the spread of the woke mind virus. The term’s origin probably lies with right-wing commentator Dave Rubin, who had started tweeting about the “progressive mind virus” in 2019 and by 2020 had devised a new slogan: “Wokeism is a mind virus.”

Regardless of the precise etymology, however, Musk’s adoption of the phrase signaled his rightward shift. 2022 was the year he began to consistently proclaim right-wing viewpoints. As he did so, he frequently referred to the woke mind virus as his principal enemy. At stake was no longer just whether he could reopen his factory, but the survival of civilization itself. “Unless it is stopped, the woke mind virus will destroy civilization and humanity will never reached [sic] Mars,” he tweeted in May 2022.

The imperative to merge with the machine had originated in the need to prevent AI from annihilating the human race. But the woke mind virus designated a new kind of civilizational threat—one that perversely exploited the solution to the problem of superintelligence. If Musk had formerly conceived of the cybernetic collective as a safeguard against an evil AI, he now saw it as a carrier for a mental plague that evil humans were using to sicken the minds of millions.

*
There are several ways to understand Musk’s turn to the right. The material reasons are easy to surmise. Like other billionaires who projected a liberalish public image, especially those from Silicon Valley, Musk felt alienated by the growing influence of the American left. He despised President Biden’s proposal for a wealth tax on the super-rich, as well as the administration’s support for unions and the regulatory and anti-trust push of FTC Chair Lina Khan. 

Biden’s failure to invite Musk to a White House summit of electric-vehicle manufacturers in August 2021, reportedly because of Tesla’s history of union-busting, enraged him. Another grievance was the Justice Department’s August 2023 lawsuit accusing SpaceX of discriminating against asylees and refugees in its hiring practices. Musk has repeatedly claimed that federal export control laws prohibit SpaceX from hiring such individuals, which is incorrect.

Musk also formed an affinity with the right through their shared hostility toward public health measures during the pandemic. When he was lambasting the lockdowns, the people cheering him online were conservatives—up to and including President Trump himself, who had used his bully pulpit on Twitter to demand the reopening of the Fremont plant. Musk’s first sustained interactions with right-wing accounts on Twitter date from this period. Further, the prospect of building a new fanbase on the right may have appealed to him, especially as his views on COVID-19 ran the risk of hurting his reputation among liberals.

But none of these factors account for the apocalyptic intensity of Musk’s rhetoric. “The woke mind virus is either defeated or nothing else matters,” he tweeted in December 2022. Neither do they say much about the content of the virus, what its “code” actually consisted of. Musk himself wasn’t always much help on this question, as he liked to cast a wide net. (When the chief film critic of The New York Times failed to put Top Gun: Maverick in his top ten list for 2022, Musk decried the paper for being “woke.”)

We can come closer to an explanation by starting with a theme that occupied an especially prominent role in his tirades: transphobia. “Pronouns suck,” Musk tweeted in July 2020. It was an opening salvo in an anti-trans campaign that steadily intensified in the coming years.

This wasn’t unique to Musk: anti-trans politics became a defining feature of the right-wing counteroffensive launched in the aftermath of the George Floyd protests. Moreover, Musk had a personal connection to the issue: his daughter Vivian came out as trans through an Instagram post in 2020, and officially changed her name and government-documented gender on the day of her eighteenth birthday in 2022. Musk later told Jordan Peterson that he considered his child to be dead—“killed by the woke mind virus.”

Musk’s transphobia suggests an answer to the question of what the woke mind virus really meant, and why the stakes of the struggle to defeat it may have felt so existential. Muskism’s mandate to meld us with our machines represented an effort to turn humans into cyborgs, both figuratively and literally. The cyborgs of the Muskist imagination were drawn from cyberpunk science fiction, where cybernetic augmentation gives people superpowers, such as enhanced strength and intelligence. 

But it is also possible to think of a transgender person as a cyborg. Their superpower is the ability to modify their body to better fit their gender identity, which is achieved through the use of technologies like hormone replacement therapy and surgery. This raises a troubling possibility for Muskism: dissolving the boundary between the natural and the artificial might open the door for other boundaries to be redrawn.

The theorist Donna Haraway, in her 1985 essay “A Cyborg Manifesto,” pointed to such opportunities as proof of the cyborg’s progressive potential. Communication technologies and biotechnologies were “recrafting our bodies,” she wrote. In doing so, they enabled new configurations of identity and embodiment. Cyborg feminism wasn’t just about expanding the palette of personal expression, however, but inventing a new kind of politics. By “rejoicing in the illegitimate fusions of animal and machine,” cyborg feminists could discover the political forms capable of fracturing the “matrices of domination” imposed by capitalism, patriarchy, and racism. 

But this wasn’t the only shape that a cyborg politics could take, Haraway cautioned. The fusions of animal and machine could also serve to strengthen traditional social hierarchies rather than undermine them. Here, the endpoint was “the final imposition of a grid of control on the planet,” an idea that Haraway associated with Ronald Reagan’s Star Wars program. The “grid of control” is a good description of Muskism’s guiding ambition. (The Star Wars reference is also evocative, given the importance of the program’s legacy to the early years of SpaceX.) 

Vigilance was required to ensure that the cyborg synthesis did not disturb the existing distribution of power. In the Western tradition, Haraway observed, “the relation between organism and machine has been a border war.” For Muskism, this border war had to be waged in such a way as to erase some lines while hardening others. Humanity should merge with the machine—so long as it remained segmented by gender, race, and class. Call it cyborg conservatism.

Wokeness became Musk’s all-inclusive term for anything that endangered this arrangement. In George Floyd’s America, traditional hierarchies of gender, race, and class were being challenged on multiple fronts. And technology was playing an integral part. If technology let trans people alter their bodies, it also let activists record police violence on their smartphones and share the recordings on social media. This is, after all, how George Floyd’s murder was documented and disseminated, leading to the first protests. 

Cyborg fluidities were overflowing the grid of control.

These developments may help explain why the woke mind virus felt so threatening to Musk. It wasn’t just the prospect of a platform weaponized for mind control, of memes repurposed as pathogens. There was a more fundamental anxiety. When we fuse with our machines, it is hard to predict where such fusions might lead.

https://lithub.com/when-and-why-exactly-did-elon-musk-make-his-hard-turn-to-the-right/

Muskism

Muskism begins with a simple proposition. We live in a bewildering moment defined by a bewildering man: Elon Musk.

Not that the book’s authors, Quinn Slobodian and Ben Tarnoff, believe there’s much to be gained by peering into Musk’s soul. Muskism, like Fordism, is not an individual but a system … Musk is the entrepreneur who sells electric cars and satellite service (among other things). Muskism characterizes a new, technologically driven political economy that dismantles state institutions with one hand while promoting self-reliance, or the fantasy of it, with the other.

The ensuing cycle is virtuous for Musk and vicious for almost everyone else. If your self-reliance requires a Tesla charger or Starlink access, you have to plug into infrastructures that Musk owns. Other tech billionaires may want consumers to become entirely reliant on their products, but Musk has been operating on a different scale, helping to sell the idea that the public good is so degraded that consumers can count only on him. Slobodian, a historian, and Tarnoff, a technology writer, note that one of Muskism’s defining traits is this paradox of autonomy and dependency.

Another tenet of Muskism is what the authors call ‘financial fabulism.’ This is the mix of soothsaying and realism that entrepreneurs like Musk deploy to raise money for their companies. The idea is to be memorable and inspire confidence. Acquiring Twitter has allowed Musk to further erode the power of traditional media while drawing ‘investors deeper into his reality.’ Even when Musk posts statements that seem asinine or completely unhinged, the authors argue that he is merging with an ecosystem that selects for attention-getting shenanigans — another kind of ‘symbiosis.’

It’s a measure of Musk’s cosmic wealth that the unlikely phenomenon of Muskism has gotten this far. Unlike Trumpism, which is inextricably entwined with one man, Muskism — with its uncanny mix of ruthless state power and juvenile memes — is already bigger than its namesake.

~ Jennifer Szalai on Quinn Slobodian and Ben Tarnoff’s Muskism (The New York Times)
https://bookmarks.reviews/5-reviews-you-need-to-read-this-week-8/

*
ROSALIE EDGE, A PIONEERING CONSERVATIONIST

In 1934, a wealthy New York socialite did something that baffled the locals in rural Pennsylvania. She walked into a real estate office and leased a mountain just to evict the hunters who used it. 

Her name was Rosalie Edge, and she was 57 years old.

At the time, Kittatinny Ridge was known locally as "The Slaughterhouse." Every fall, thousands of hawks, falcons, and eagles migrated along the ridge, riding the air currents south for the winter. But waiting for them were hundreds of men with shotguns and easy targets.

It wasn't hunting for food; it was slaughter for sport. The ground was often carpeted with the rotting bodies of magnificent birds, while many others were left wounded to die slowly in the brush.

The state of Pennsylvania actually encouraged it, even paying a $5 bounty on goshawks. Predators were seen as "vermin" that threatened chickens and game birds, and the general consensus was that they should be wiped out. Even the National Audubon Society refused to intervene, telling Mrs. Edge that protecting hawks simply wasn't a priority.

She was furious. She famously stated, "The time to save a species is while it is still common.”

But she didn't just write letters—she took action. She founded the Emergency Conservation Committee, and when established conservation groups wouldn't buy the land to stop the shooting, she did it herself. She secured a lease on 1,400 acres of the ridge and hired a warden, Maurice Broun, to guard it.

When the hunters arrived that season, expecting their usual sport, they found "No Trespassing" signs and a determined woman and her warden blocking the path. The shooting gallery was officially closed.

The hunters were angry. There were threats against her life and promises of violence, but Mrs. Edge stood firm, relying on her legal rights as a private property holder.

She turned a place of death into the world’s first sanctuary for birds of prey. She understood the value of predators, the delicate balance of the ecosystem, and the future of conservation. Her sanctuary, Hawk Mountain, later provided the crucial data that proved the dangers of DDT. Without her stubbornness, we might have lost the bald eagle entirely.


Rosalie Edge proved that a single citizen with a lease and a backbone can change the course of history. ~ Cheryl Andersen, Facebook


*
URBAN AIR POLLUTION AND THE CITY WITH THE CLEANEST AIR



The American Lung Association calls it a “grim indication of the deterioration of air quality nationwide”: Bangor, Maine, is the last city standing on all three of its “cleanest cities” lists.  

Bangor has zero days of unhealthy ozone and short-term particle pollution, and some of the lowest year-round concentrations of dangerous particle pollution in the country, according to the association.

Typically, the association’s annual “State of the Air” report has at least one other city making all three lists. In some years, it’s had several. But this year’s report, published Wednesday, has the Queen City of the East – home of horror author Stephen King and the mythical birthplace of lumberjack Paul Bunyan – standing alone.

The country’s air quality is dangerous for millions of Americans, the report says. Nearly half the population – about 152 million people – breathes unhealthy air and lives in a county that the association gives a failing grade for air pollution.

About 32.9 million people live in counties with failing grades for all three pollution measures, and people of color are more than twice as likely as White people to live in a community with a failing grade on all three.

Los Angeles has the worst ozone pollution in the country, according to the new report.

HEALTH DANGERS OF AIR POLLUTION

Ozone and particle pollution are considered two of the most widespread and dangerous pollutants measured by the US Environmental Protection Agency. The EPA defines particulate matter – also called particle pollution or soot – as a mix of solid and liquid droplets that float in the air. It can come in the form of dirt, dust or smoke. Coal- and natural gas-fired power plants create it, as do cars, agriculture, unpaved roads, construction sites and wildfires.

Particle pollution threatens human health because it is so tiny – a fraction of the width of a human hair – and can travel past the body’s usual defenses. When a person breathes these particles, they can get stuck in the lungs and move into the bloodstream, causing irritation and inflammation.

Even in the short term, particle pollution exposure can cause breathing problems or trigger a heart attack. Particle pollution is also considered a significant factor in premature death around the world, according to the World Health Organization. Exposure can raise the risk of conditions like certain cancers, stroke, asthma, preterm births, dementia, depression and anxiety.

Ozone pollution, also called smog, is the presence of ground-level ozone that forms when chemicals like nitrogen oxides and volatile organics from electric utilities, car exhaust, gasoline vapors, industrial facilities and chemical solvents react in sunlight.  

Exposure to ozone pollution can cause asthma attacks and chest pain in the short term. Long-term exposure can also cause decreased lung function and premature death.

The data in the new report comes from 2022-24, the latest available from the EPA. Ozone pollution affected more people in the US in this year’s report than in the previous five. Levels of particle pollution showed some improvement overall, but the most heavily exposed groups faced much higher levels than in the past.

Los Angeles remains the worst in the country for ozone pollution, as it has been in all but one of the 27 years of the report. Bakersfield, California, has the worst year-round particle pollution for the seventh year in a row, but it improved this year in terms of short-term particle pollution. Now Fairbanks, Alaska, is ranked as worst on the short-term particle pollution list.  

Bangor earned an “A” for ozone and short-term particle pollution exposure and came in 10th on the list of 25 cities with the lowest year-round particle levels. Bozeman, Montana, took the top spot there this year.

The last time only one city made the three clean-air lists was 2012, when Santa Fe-Espanola, New Mexico, was named the top city. 

In 2022, 10 cities made the lists.

“An important part of our brand”

Bangor’s presence on the cleanest air lists for several years has been a real selling point for the city, according to Anne Krieg, the city’s director of community and economic development.

“We’ve heard many people say that they’ve read that Bangor has the cleanest air, and ‘I want to be there for that,’ ” Krieg said. “It’s healthy, and there’s a good outdoor environment. It’s really an important part of our brand. You can have a city life but also have access to clean air. Not a lot of cities do.”

A few factors set the city’s air apart from much of the rest of the country, said Dr. Jean MacRae, an associate professor who teaches a course in air pollution and solid waste at the University of Maine’s College of Engineering and Computing.

The area benefits from weather systems, a high concentration of forest land that filters the air, a good distance from polluting industries and a population of only about 33,000, so there are not as many cars on the road.

“Sometimes, bad air comes up the coast, so the southern part of the state sees some of that,” MacRae said.

Maine itself has been called the “Tailpipe of the Nation,” because pollution from power plants and cars in the Midwest and from Northeastern states would drift toward it, but “air pollution controls have really reduced that over time,” MacRae said.

In fact, two US senators from Maine are credited with ushering in some of the country’s most important air pollution regulations.

Sen. Edmund Muskie championed the Clean Air Act of 1970, which created the first strong federal limits on air pollution; the EPA was established the same year. Sen. George J. Mitchell helped create the 1990 Clean Air Act Amendments, which required states to meet air quality goals and tightened car and truck emission standards.

The success of those protections is threatened under the Trump administration, according to Will Barrett, the assistant vice president of nationwide clean air advocacy with the American Lung Association.

“Significant rollbacks to our critical life-saving clean air rules are well underway because this EPA is moving away from their public health mission,” Barrett said.

The administration has undertaken what the EPA called the “biggest deregulatory action in US history,” reconsidering regulations on power plants, the oil and gas industry, mercury and toxic air standards, the greenhouse gas reporting program and curbs on car and truck pollution.

The climate crisis also makes it more difficult to breathe clean air. Extreme heat, wildfire smoke and drought all make air pollution worse. “All of that really speaks to the need for strong progressive policies,” Barrett said.

Krieg says warmer temperatures have, however, benefited her town in at least one way: driving an influx of new residents from states like Texas who are looking for a place with a more temperate climate and cleaner air.

“Maine is gorgeous for that,” she said.

https://www.cnn.com/2026/04/22/health/clean-air-report-bangor


*
ONE IN FIVE AMERICANS MAY HAVE A DANGEROUS TOXIN IN THEIR WATER 

Over 62 million Americans — roughly 1 in 5 people — may be exposed to potentially dangerous levels of nitrates in their tap water, a new report has shown.

A compound of nitrogen and oxygen found naturally in air, water, soil and plants, nitrates become a health risk when rainfall causes nitrogen-rich fertilizers used in agriculture to leach into groundwater, streams and rivers and end up in public water systems miles downstream.  

Invisible, tasteless and odorless, nitrates at low concentrations in drinking water have been linked to thyroid disease, gastric, kidney, bladder and colon cancers, preterm births and birth defects, and other health harms, according to the report released Thursday by the Environmental Working Group, or EWG, a nonprofit health advocacy organization.

Thirteen-year-old Ben is so concerned about nitrates in the tap water of his hometown of Des Moines, Iowa, that he recently sent his state representative a letter and poem.

“I remember when I could drink water from the faucet, but now it is a health concern,” Ben wrote Iowa State Rep. Dr. Austin Baeth. “Please don’t ignore this problem!”

Des Moines is a hot spot for nitrate pollution in source water, with levels so high in local rivers the city had to build one of the largest nitrate removal plants in the world. The cost to operate is more than $10,000 a day. 

“I’ve read Ben’s letter and poem numerous times, and I still get choked up,” said Baeth, an internist who is calling attention to Iowa’s nitrate levels with hard-hitting, sometimes satirical videos on social media. “Isn’t it sad children have to worry about water that might be harming their health?”

To see how many Americans are exposed to nitrates at those lower concentrations, researchers used the EWG tap water database, which aggregates data from nearly 50,000 public water systems in the United States. 

“We used measurements of nitrates in public drinking water between 2021 and 2023 in cities and towns in all 50 states, mapping exposure down to 3 milligrams per liter,” said report author Anne Schechinger, EWG’s senior director of agriculture and climate research.

“This is a first-of-its-kind map — no one has done this before,” Schechinger said. “And it’s searchable by zip code so people can go and check their own levels of nitrates and other contaminants.”

The report does not cover private well water, which is not regulated by the US Environmental Protection Agency.

More than 6,000 community water systems, serving more than 62.1 million people, tested at or above 3 milligrams of nitrates per liter, according to the report. Studies have linked these levels to pediatric cancers and other health harms.

More than 3,200 of the 6,000 systems tested at or above 5 milligrams per liter, a level connected to colorectal and ovarian cancer.

The Los Angeles Department of Water and Power, which serves nearly 4 million people, tested at or above 3 milligrams per liter on 255 different occasions, the report found. Other major cities with more than 1 million residents that also tested at 3 milligrams per liter or above included Phoenix; Philadelphia; Las Vegas; San Jose, California; and Columbus, Ohio.

A spokesperson for The Fertilizer Institute, which represents the fertilizer industry, told CNN in an email that US farmers have doubled corn production over the past three decades with just a slight increase in fertilizer use.

“Nitrate is a naturally occurring compound found throughout the environment,” said TFI Vice President of Public Affairs Christopher Glen. “While fertilizer is one source, others include organic matter mineralization, septic systems, urban stormwater, and atmospheric nitrogen deposition from industrial and vehicle emissions. Attributing elevated nitrate levels in drinking water primarily to fertilizer use, as the EWG report implies, oversimplifies a complex issue.”

Extremely high contamination from well systems 

More than 3 million people, served by 606 water systems in the US, were exposed to nitrates at or above the legal limit of 10 milligrams per liter.

Seventy of those systems had nitrate levels at or above 20 milligrams per liter, twice the federal limit. Another 21 systems contained levels at 30 milligrams per liter or even higher: A water system serving 31 people near Dinuba, California, tested at 50 milligrams per liter — the highest in the nation.

Most of the communities with the highest levels were quite small, serving under 1,000 people, but not all. More than half a million people in Fresno, California, used tap water with up to 14 milligrams of nitrate per liter.

More than 35,000 people in Garden City, Kansas, were exposed to up to 37 milligrams of nitrate per liter, while some 32,000 people in La Verne, California, used tap water with 26 milligrams per liter.

“Nearly all of the water systems with extremely high levels are groundwater systems that obtain their water from local wells,” said biologist and chemist Christopher Jones, a former research engineer at the University of Iowa who monitored the state’s water quality. Today, Jones is running to be Iowa’s next secretary of agriculture.

“Having 40 milligrams per liter in groundwater is not unheard of, not at all,” said Jones, who was not involved in the EWG report.

The primary sources of nitrates in groundwater are livestock manure and other nitrogen-rich fertilizers applied to crops by farmers and ranchers, experts said.

Without proper safeguards, rainfall and irrigation water carry these nitrates into groundwater and wells, while also spilling into rivers and streams that feed into public water systems. And you don’t have to be close to agriculture to be affected, Schechinger said.

“The nitrate contamination can affect people far, far downstream from farms,” she said. “Your water may come from a reservoir outside your major city, but the stream or the river that feeds that reservoir comes from miles and miles upstream where farms may be.

“Although it’s an agricultural issue, it affects people across the country in really tiny, rural towns and really large cities,” Schechinger said.

What can be done  

Public water systems that regularly test at levels above the legal limit of 10 milligrams per liter are required to notify residents and take action to clean the water.

That requires expensive mitigation systems — costs that water utilities often pass on to the consumer. Des Moines spent more than $4 million in 1990 to construct its ion-exchange treatment plant.


“Don’t turn to bottled water as a solution—it’s less regulated in general than tap water,” Schechinger said. “Just look up your zip code in our tap water database to see if you need to filter or not. We also provide information about at-home water filters.”

Until tighter regulations are passed, it’s up to the consumer to decide on a course of action, experts said.

“It’s a peace-of-mind issue,” Jones said. “If you know the water coming out of your tap is above 3 milligrams per liter of nitrates and you want peace of mind, then I think a reverse osmosis system on the kitchen cold tap is advisable.” 

If you use water from a filter on a refrigerator, that filter also needs to be connected to the reverse osmosis system, he said.

DrPhyzx:
Reverse osmosis wastes a huge amount of water, taking 10 units of input water to make one unit of output. So, this solution puts even more strain on a water supply that is increasingly limited in many communities. The real solution is more thoughtful application of fertilizers, which are spoiling waterways all over the country.

Anybodybutorange:
The awful truth is that the majority of Americans don’t care about toxins, poisons and the food that they put into their bodies. They are just surviving.

cnn-user-y3zxcn:
Missing from the article is any mention of nitrates in processed meats such as bacon, sausage and hotdogs which all have very high levels of nitrates. If you want to reduce nitrate intake, start here.

RR:
If you live anywhere, you have been poisoned by industrial human activity.
It’s not just political, it’s economics, greed and shortsightedness.
It’s a sad fact, the more “progress” humans have achieved, the more pollution in every facet of our existence.

Miss Anne Thrope:
The Iowa legislature won’t do anything about the fertilizer runoff because they’re terrified of criticizing farmers and Big Ag.
It’s a Republican majority so no surprise there.

And then they wonder why Iowa has the second-highest cancer rate in the nation.

https://www.cnn.com/2026/04/23/health/nitrates-tap-water-toxin-wellness#openweb-convo


*
MIGRATION REVERSAL: WHY MORE AMERICANS ARE MOVING TO IRELAND


In a historic reversal, the number of Americans moving to Ireland last year was higher than the number of Irish people migrating to the US. Was this just a blip or the start of a more profound trend?

Michael Sable is an American stand-up comedian and communications manager who moved from Washington DC to Dublin in 2016. 

Sable, who draws on his experience of being an American living in the Republic of Ireland in his stand-up routine, says that, when he first arrived, many Irish people he met were surprised he'd made the move, but now they don't question it. 

"I've noticed that, as the years go on, people have been less and less incredulous when hearing that an American moved to Ireland," he says. 

Sable is one of a rising number of people who have moved to Ireland from the US, with the latest data showing the figure nearly doubling from 4,900 to 9,600 between 2024 and 2025, exceeding the number of Irish people headed in the opposite direction.   

It comes as the US saw more people leave than arrive last year, according to a report from US think tank the Brookings Institution. 

It said this was the first time that this had been the case "in at least half a century". The think tank highlighted "dramatic changes in immigration policy" under the second administration of President Trump, including more removals of undocumented foreign workers, and the White House "all but suspend[ing]" the US program for accepting refugees.   

Separately, more American citizens are choosing to move abroad than ever before, says the Wall Street Journal. It calculated that "at least 180,000 Americans" voluntarily left the US in 2025, which it said was a record high.  

The reversal in the flow of migration between Ireland and the US marks a historic turning point in the shared history of the two countries, which have deep-rooted ties. 

 

For centuries, millions of Irish people emigrated to the US in search of work or a better life, making Irish Americans one of the country's largest ethnic groups. There have been several US presidents with Irish ancestry, including John F Kennedy and Joe Biden, whose visits to Ireland were celebrated more like homecomings than diplomatic missions.  

Irish writer Colm Tóibín has frequently explored the relationship between Ireland and the US in his work, particularly in his novels, Brooklyn and Long Island, which follow the story of an Irish girl who emigrates to America in the 1950s. 

Tóibín, who lives in the US, says the relationship between the US and Ireland has changed. "A myth was created that America was a great place of opportunity and wealth," he says. "It was built into the [Irish] culture that if there's any trouble, you go to England, if there's any ambition or spark you go to America.”

"The flow of young people [from Ireland to the US] looking for work just hasn't continued. So, that is going to be a big change in the future, because you're not going to get the same easy connection between Ireland and America," he adds. 

The trend partly reflects political shifts in both countries. As well as developing into a high-tech, export-driven knowledge economy, Ireland has undergone a social transformation in recent decades, moving from a deeply conservative society to a liberal, progressive nation, following referendums on divorce, abortion and gay marriage.   

Tóibín, who was a prominent campaigner for the "yes" vote in Ireland’s same-sex marriage referendum, says: "Everyone became more aware that Ireland was a more liberal, cosmopolitan, open society, and that it would be a good place to live."

By contrast, the US has moved to the political right under President Trump, who, since returning to office, has launched a major crackdown on illegal immigration.  

Tóibín revisits the Irish-American immigrant experience in his latest short story collection, The News from Dublin. In one story in the collection, Five Bridges, the protagonist is an undocumented Irish immigrant preparing to leave America before US Immigration and Customs Enforcement, known as Ice, come looking for him. The number of Irish citizens deported from the US rose by more than 50% in 2025. 

"There are large numbers of Irish people in America who are official in every way – they pay taxes, own their own house, have kids in school, but they came in on a tourist visa," says Tóibín. "If Ice found them, Ice would detain them. That's really frightening."

Expatsi, a firm that helps US citizens to move abroad, reported it experienced a month's worth of traffic in the hours after President Trump's election in 2024. Expatsi co-founder Jen Barnett says the reasons Americans cite for leaving are wide-ranging. "[Factors include] what's going on politically in the US, and has been for 10 years, the cost of living, and then safety. Gun violence is so prevalent," she says. 

Politics was also a driving factor for Kevin Wozniak, an American lecturer living in Ireland. He and his husband left Boston in 2023 after he got a post at Maynooth University, 23km (14 miles) west of Dublin. 

"My motivation to look at opportunities abroad was deep fears about the trajectory of the country in the era of Donald Trump," Wozniak says. "Ireland has liberalized very significantly and has moved in exactly the opposite direction that the US is moving.”

Natalia Lange, a migrant support worker based in Crosshaven, county Cork, moved there from Michigan with her husband after the 2024 US election. Lange, who is half-Hungarian and has an EU passport, had dreamed of living in Ireland since visiting on a school trip. "Politically, Ireland aligns much more with how we think," she says. "The US has the infrastructure to take in a lot more people, but it's a lot less inviting."

Lauren Udoh, from Houston, Texas, moved to Claregalway, in county Galway, in 2021 after marrying her Irish husband. Udoh, who has two young children, works as an executive assistant and documents expat life on social media under the moniker TheGalwayGal. 

"One of the biggest benefits is safety. I feel a lot safer here," she says. "With kids going to school, you don’t have to worry about school shootings.” 

Emigration from Ireland has fallen, with 65,600 people leaving in the year to April 2025, a 6% decrease on 2024, according to the Irish Central Statistics Office (CSO). For those wanting to live abroad, Australia is the most popular destination. 

The CSO said 13,500 people left Ireland for Australia in the year to April 2025, the highest level since 2013. This is more than double the estimated 6,100 people who left Ireland for the US, which was, however, still up 22% on the previous year.  

Restrictions around J1 visas, which allow Irish university students to work and travel in the US, have caused many Irish students to rethink going there.  

Karen McHugh, CEO at Safe Home Ireland, an organization supporting Irish-born emigrants wishing to return to Ireland, says: "We've certainly noticed an increase in queries about returning [from the US to Ireland]. Australia and Canada are two major countries where people are now going to from Ireland, and that's ease of getting a visa."

Irish student Jamie McElhinney is currently on a work placement in Portugal as part of his hospitality management course. McElhinney, who studies at Dundalk Institute of Technology, says current immigration controls put him off seeking a work placement in the US. "I was going to go to Boston, and then all of the news about Ice started coming out. It’s a deterrent from going over there," he says. 

While there are opportunities in Ireland, with employment at 74.4%, there is also a growing housing crisis, with recent protests over the shortage of affordable housing. 

Barnett says the housing shortage is something the Americans her company helps to relocate are mindful of, adding: "They don't want to add to [the housing crisis], so one of the things we talk to them about is staying out of city centers."

Sable, whose Irish grandparents emigrated to the US in 1939, is one of many Americans whose ancestry enabled them to get an Irish passport. Applications for Irish passports in the US rose by 10% in 2024, the Irish Department of Foreign Affairs reported. 

Sable now thinks of Ireland as home. "Culturally, I identify a lot more with Irish people," he says. "It's a very collective society, not huge on individualism like in the US."

Bill Hillyard, another American with Irish roots, relocated to Ireland for the cooler climate. He and his wife Anne left California following wildfires in 2019. "My wife said we need to move somewhere where it rains." They now run a pub called The Algiers in Baltimore, county Cork. 

Hillyard says he now tries to avoid turbulent events in the US. "I tried not to follow the news because I'm here now," he says. "But I'm reminded every day, because everyone asks me about Trump and what's going on there… I'm West Cork's ambassador to the US at this point."

https://www.bbc.com/worklife/article/20260414-why-more-americans-are-now-moving-to-ireland

*
WOMEN ARE GETTING MOST OF THE NEW JOBS

The Labor Department says the vast majority of new jobs created over the last year went to women, most of them in health care.

In December 2016, as Donald Trump was headed to the White House for the first time, Betsey Stevenson offered the incoming president some economic advice.

Stevenson, a professor of public policy and economics at the University of Michigan, argued in an op-ed that it would be a disservice to encourage men "to cling to work that isn't coming back." She cited Trump's promise to bring an iPhone factory to the U.S.

"If Trump really wants to get more Americans working," she wrote at the time, "he'll have to do something out of his comfort zone: make girly jobs appeal to manly men.”

It's a message she believes is even more relevant today.

For decades, the focus has been on getting more women into male-dominated fields. Some efforts have been more successful than others. But now, with the vast majority of new jobs going to women, it's clear that men need help, too.

"This is happening at a time where it's become verboten to talk about diversity, equity and inclusion," Stevenson says. "And yet the people we need to be talking about right now are men."

17 times as many jobs filled by women 

In the mid-1970s, women held about 40% of jobs in the U.S., not including farm work or self-employment. By the early 2000s, women's share of jobs had grown to just under half. It's hovered around there since, crossing the 50% threshold just a few times, including during the Great Recession, just before COVID, and now.  

That parity masks the significant gains women have made in the labor market recently. Of the 369,000 jobs the Labor Department says were created since the start of Trump's second term, nearly all — 348,000 of them — went to women, with only 21,000 going to men. That's nearly 17 times as many jobs filled by women as by men.  

The lopsidedness was driven by huge growth in health care, where women hold nearly 80% of jobs. Over the past 12 months, health care alone added 390,000 jobs, more than the economy added overall, making up for job losses elsewhere.

"If we want to see job growth that's as robust for men as it is for women, we're going to have to see men embracing those kinds of jobs," says Stevenson.

So far, that hasn't happened in any meaningful way. Stevenson believes it's because men are more likely than women to have an identity tied to a particular occupation, making it harder for them to find work outside that field, much less in one dominated by women.

Meanwhile, in his second term, Trump has not strayed from his message that manufacturing will make the country strong. It's something he emphasized in his second inaugural address, declaring that "America will be a manufacturing nation once again," and in his repeated promises that tariffs would "bring factories roaring back."

When manufacturers added 15,000 jobs in March, the White House called it proof that "the best days for American workers, manufacturers, and families are still ahead," despite the fact that the sector is still down 82,000 jobs from when Trump took office.

"We have seen a year of a president absolutely fixated [on] growing the manufacturing sector," Stevenson says. "There's not enough of those jobs for men as a whole to thrive."

The Labor Department warns that raw job counts do not reflect the full picture of the labor market. The dramatic gender imbalance in new jobs is a "misleading snapshot," says Courtney Parella, a spokesperson for the Labor Department.

"Under President Trump's leadership, we're creating opportunities across a multitude of industries, including mortgage-paying manufacturing jobs," she said in a statement. "Both men and women are benefitting from a strong economy."

A push for policies to open doors for men 

Yet what's happening now in the labor market comes as no surprise to Richard Reeves, president of the American Institute for Boys and Men, a nonpartisan think tank.

He says not enough attention has been paid to the scarcity of men in certain professions, and now we're seeing the consequences.

"There is no cause for panic here," says Reeves, who's been studying the decades-long decline in labor force participation among men. "But I do think we should be alert to signs that the labor market might be moving even more quickly in directions that are leaving too many men behind." 

Reeves notes that for years, the country has embraced policies and programs aimed at getting more women into science, technology, engineering and math, and the share of women in STEM jobs has grown.

"But that didn't happen by itself. It happened as a result of concerted efforts to break down gender stereotypes," he says.

Still, gaps remain, and some of those efforts have seen their government funding cut under Trump.

Now Reeves says what's needed are policies and programs to draw male workers into fields such as nursing, teaching and social work.  

"Those are occupations that serve people, and they should look like the people that they serve," he says. "And it's good for men because it means they won't lose out on those jobs if that's where the growth is coming from."   

Framing jobs as more masculine

Stevenson has been thinking about ways to make the fastest-growing sectors of the economy more welcoming to men.

"I think there are ways for us to talk about those jobs as being particularly masculine," she says.

For instance, many health care jobs could be framed as roles requiring the strength to lift people. Preschools could highlight the need for teachers who serve as positive male role models.  

"Kids love to be rough and tumble and build things," she says.

Stevenson knows some people will be offended by such gender stereotyping.

"But I do want to encourage us to realize that we have to help men understand that they can do caregiving roles and stay masculine," she says.

Ongoing challenges for women and men 

What Stevenson doesn't want people to conclude is that everything is OK now that women are leading on jobs.

"We know that there is still discrimination that holds people back," she says.

For women, she says, that discrimination might be preventing them from getting the promotion that they deserve, contributing to the widening gender pay gap. For men, it may mean sitting on the sidelines because they don't think there's a role for them in the economy.

"I think we can use this moment to realize that discrimination, occupational segregation … these are things that harm all of us, not just one narrow group," she says.

https://www.npr.org/2026/04/10/nx-s1-5773327/women-men-jobs-health-care-manufacturing

*
THE HIDDEN POWER OF KEEPING WAGES LOW

It was the early 1930s in Britain. And a young economist named Joan Robinson and her husband were having tea at their home near Cambridge University. Chamomile? Oolong? We don't know. But we do know their guest was B.L. Hallward, a scholar of ancient Greece. That seemingly random detail becomes important to this story.

In the years after this meeting, Robinson would go on to become an influential author, a rabble-rousing professor, and a celebrated member of "The Cambridge Circus," an intellectual group closely associated with John Maynard Keynes during the Keynesian revolution.

But when she sat down for tea with Hallward in the early 1930s, Robinson was far from achieving all of that. She wasn't yet a professor. She had no influential books or papers. And, like many women at the time, she was struggling to break into a male-dominated field that wasn't exactly rolling out the welcome mat.


Joan Robinson, 1920s

Robinson, however, was writing her first book, and it would help change everything for her. Probably because the book was so brilliant and audacious. With it, Robinson aimed to demolish an important pillar of old-school economics and replace it with something new. She would give this book the title The Economics of Imperfect Competition.

For a long time, economists had focused on the opposite — the economics of perfect competition. It's still a staple in Econ 101. Think a bajillion businesses competing. Infinite consumer and worker choices. No one has real power. Intense competition acts as a check against a company's worst impulses. They can't jack up prices because competitors can just swoop in and undercut them at any time. And they can't underpay workers because rival firms will poach them away. It paints a sort of dream version of the free market where there is no power, no exploitation, no shenanigans — and outcomes almost always serve the public interest.

The problem? Economists knew the real world often didn't look like the fantasyland that they sketched on their blackboards. They weren't naive. They knew markets could be uncompetitive. Since at least the 16th century, for example, scholars had used the term "monopoly" to refer to situations where a single seller dominates a market.

But Robinson, as she was writing her book, noticed something was missing: there was no word for when a single buyer dominates a market. It's a concept that's especially important for the labor market — because employers buy our labor. What would it mean for workers and society if there was something like monopoly power on the buyer side?

Calling a company "a monopoly buyer" was kinda awkward. Because monopoly is a Frankenstein word stitched together using roots from ancient Greek — and it means one seller. So "a monopoly buyer" would translate to "one seller buyer"? It didn't make any sense.

This is why that random detail that Robinson was having tea with that scholar of the classical world, B.L. Hallward, is important. Because Hallward was familiar with ancient Greek. Robinson told Hallward that she wanted to coin a similar word to "monopoly," but one that centered on buying instead of selling. They played around with Greek words, and they settled on "monopsony."

Monopsony is a cool word for an important idea, especially in labor markets: when employers face limited competition for workers, they gain power to pay them less and treat them worse than they otherwise could.

While Robinson and other scholars believed monopsony power could be a significant force in the economy, for a long time mainstream economists treated monopsonies as a kind of unicorn — found only in rare circumstances, like small towns with a single dominant employer or companies that employ highly specialized kinds of workers who don't have other job options.

But in a new book, The Wage Standard: What's Wrong in the Labor Market and How to Fix It, the economist Arindrajit Dube offers a theory — drawing on a growing body of peer-reviewed research — that monopsony power is much more widespread throughout the economy than previously thought, even in markets that at first blush seem rather competitive. And that matters because monopsony power could be used to suppress wages.

"The truth is employers have a lot of real power over setting wages, and when that power goes unchecked, paychecks stay smaller than they should be," Dube says.

Without fierce competition checking how employers treat and pay workers, companies may need something else to check their power. Dube argues that one important reason income inequality has exploded in America since the 1980s is a systematic erosion of countervailing forces to monopsony power. Think: a federal minimum wage that's barely budged, laxer antitrust enforcement, declining labor unions, and a vibe shift in corporate boardrooms away from concerns about pay fairness.

But Dube offers some optimism in The Wage Standard. In recent years, he says, the United States has seen movements that have successfully confronted monopsony power and pushed our society towards greater equality and fairness in the labor market. And he offers a range of policy ideas that he believes could do much more.

How monopsony faded — and returned

Despite the influence of The Economics of Imperfect Competition, which was translated into more than a dozen languages, the concept of monopsony power would go on to collect dust on the shelves of mainstream economics.

Most economists assumed the labor market was generally competitive enough that monopsonies could be treated as a footnote. And they continued to embrace and teach an influential framework centered on perfect competition. The model is a hallmark of Econ 101 — so widely used it's often called "the standard model."

In that model, employers have little or no power to set wages because they compete intensely for workers. If a company tries to be stingy, workers can simply go somewhere else for higher pay. "The econ textbook says that in a competitive market, if your boss underpays you, you leave," Dube says.

That's why, in this framework, wages aren't really set by the choices of employers — they emerge organically from the market. It can almost seem magical. In the textbook portrayal, "the invisible hand" of the free market brings the supply and demand for labor into a kind of perfect embrace by finding the exact "right" wage that will bring them together.

This model has a powerful implication. If the government steps in and mucks with the price of labor — by, say, imposing a minimum wage that makes labor artificially more expensive — that sends supply and demand out of whack. At this government-imposed higher wage, employers demand less labor while workers want to supply more of it. The result, in theory, is unemployment.

For a long time, a core prediction of this competitive model became almost like a dogma for many economists: a minimum wage will lead to higher unemployment.

Which is why the road to taking monopsony power more seriously began in the early-to-mid 1990s, when the economists David Card and Alan Krueger kicked off a revolution in economics with an innovative study on the effects of minimum wage laws.

When Card and Krueger analyzed the effects of a minimum wage hike on the fast food industry in New Jersey, they found no evidence that it killed jobs. The finding triggered a major shift in economics (David Card later received a Nobel Prize in economics, largely for this work).

For economists who embraced old-school models of a competitive labor market, Card and Krueger's findings were a head-scratcher. And they began theorizing why a minimum wage would not kill jobs. And it re-energized interest in what was then a pretty fringe idea about the labor market: that it was full of employers who had monopsony power, or the ability to influence wages.

The basic idea is that employers don't have to literally be the only employer in town in order to underpay workers. So when the government comes in and forces them to pay more with a minimum wage law, it doesn't actually kill jobs, because employers have considerable wiggle room to pay their workers more. Meanwhile, that higher wage has benefits for employers, like lower turnover or higher productivity, and so economic damage is relatively minimal.
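
To see how that plays out with actual numbers, here is a minimal sketch, a toy model of my own rather than anything from Dube's book or the Card and Krueger study, of a single wage-setting employer facing a wage floor. All parameters are invented for illustration:

# Toy labor market, invented numbers: workers supply labor along
# L_s(w) = 10*w, and the marginal revenue product of labor is
# MRP(L) = 20 - 0.1*L (the value the L-th worker adds).

def labor_supply(wage):
    return 10 * wage

def labor_demand(wage):
    # hire until MRP(L) = wage  =>  L = (20 - wage) / 0.1
    return (20 - wage) / 0.1

# Competitive benchmark: supply equals demand at wage 10, employment 100.
# A monopsonist knows that hiring one more worker bids up the wage for
# everyone (wage bill = L*L/10), so its marginal cost of labor is
# MC(L) = 0.2*L, steeper than supply. It stops where MRP = MC:
# 20 - 0.1*L = 0.2*L  =>  L ~= 66.7 workers at a wage of ~6.67.
monopsony_L = 200 / 3
monopsony_w = monopsony_L / 10

# Impose a minimum wage between 6.67 and 10. Restricting hiring no longer
# holds wages down, so the firm hires everyone willing to work at the floor.
floor = 9
L_with_floor = min(labor_supply(floor), labor_demand(floor))  # min(90, 110)

print(f"monopsony:  wage {monopsony_w:.2f}, employment {monopsony_L:.1f}")
print(f"with floor: wage {floor:.2f}, employment {L_with_floor:.1f}")

In this toy market the wage floor raises pay and employment at the same time, which is the monopsony reading of Card and Krueger's result. The textbook harm only appears once the floor passes the competitive wage: at a floor of 12, hiring falls to min(120, 80) = 80.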

Still, despite this evidence and some early enthusiasm, the idea that monopsony power was pervasive in the economy remained kinda fringe. Even as late as the early 2010s, Dube says, monopsony power was "a very niche topic," and he recalls small conferences in "remote locations" where he and a ragtag crew of economists would discuss monopsony issues for several days "because, hey, this is all the people who were interested in the topic."

Monopsonyfest 2010 was apparently a dud and had a bunch of vacant seats. But Monopsonyfest 2026? It's sold out and getting lit.

Over the last decade or so, there's been an explosion of studies in top journals, including by Dube, finding that monopsony power is quite pervasive. And many economists are taking monopsony power more seriously these days.

Why monopsony power might be everywhere

So why, in Dube's view, is monopsony power so widespread, even in places where there seem to be numerous employers competing to hire and retain workers? In the book, Dube mostly answers this with what he calls the "triumvirate of endemic monopsony." These three reasons are "concentration, search frictions, and job differentiation."

First of all, Dube says, research suggests that if you look at how many employers there are in a given area for particular kinds of workers, "the typical American [labor] market is about as concentrated as having about three employers. And that's a very shocking number."
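
A standard way researchers in this literature turn local hiring shares into that kind of "equivalent number of employers" is the inverse of the Herfindahl-Hirschman Index (HHI). A minimal sketch, with made-up hiring shares rather than data from the book:

# Illustrative only: invented shares, not data from Dube's research.
def equivalent_employers(shares):
    """Inverse HHI: the number of equal-sized employers a market 'feels like'."""
    hhi = sum(s ** 2 for s in shares)
    return 1 / hhi

# Ten firms hire nurses in a town, but two of them dominate:
hiring_shares = [0.40, 0.30, 0.10, 0.05, 0.05, 0.04, 0.03, 0.01, 0.01, 0.01]
print(round(equivalent_employers(hiring_shares), 1))  # -> 3.7

Ten employers exist on paper, but the market behaves as if there were fewer than four, which is the sense in which a "typical" American labor market can be about as concentrated as three employers.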

So, yeah, we're not talking about literal monopsonies dotting the American landscape. But research suggests, at the same time, there is often not intense competition between employers for workers either. Worker options are somewhat limited, and so they might be less gung-ho to quit if an employer kinda sucks.

"If a company's paying 10% lower in a highly competitive market, quits should just go off the roof," Dube says. But studies find they don't. Yes, people often do quit lower-paying jobs when higher-paying options present themselves, but not nearly at the rate classic models would predict. 

Second, there are "search frictions." In other words, there are logistical challenges for workers looking for a new job. They have to find information about job openings, apply for it, interview for it, risk getting rejected, fill out paperwork, and so on. These "frictions in job transitions prevent workers from easily moving to better-paying companies that may be interested in hiring them," Dube writes. "The resulting 'puddles' give employers monopsony power, even in dense metropolitan labor markets."

Finally, there's what he calls "job differentiation." Every job is different, and keeping certain jobs may be desirable for reasons beyond just pay. For example, if you live close to your job, you may not want to switch to another job that is further away. Or you might like a particular manager or your co-workers or something else. "Just as brand loyalty in cereals can give a single company like General Mills — the maker of Cheerios — some pricing power, so can a worker's personal attachments or convenience factors give an employer wage-setting clout," Dube writes.

Beyond the "triumvirate of endemic monopsony," employers sometimes intentionally collude to make it harder for workers to jump ship and work somewhere else. Dube says this concept goes back well before Joan Robinson. He traces the concept as far back as the late 1700s, when Adam Smith, in his classic book, The Wealth of Nations, wrote, "Masters are always and everywhere in a sort of tacit, but constant and uniform, combination, not to raise the wages of labour above their actual rate."

One incarnation of this sort of monopsonistic collusion is known as a "no-poaching agreement." These agreements tend to be illegal, and the federal government has worked to unravel them.

For example, Dube says, in the early 2000s, the big tech companies "had a secret agreement to not recruit each other's engineers. If you worked at Apple, Google wouldn't call you, and vice versa."

During a federal investigation of these collusive agreements, investigators actually uncovered an email from Steve Jobs enforcing this no-poaching agreement. A recruiter from Google apparently made the "mistake" of seeking to recruit an Apple employee. Jobs, the CEO of Apple, was unhappy, and he emailed the CEO of Google, Eric Schmidt.

In a very short email, Jobs wrote, "Eric, I would be very pleased if your recruiting department would stop doing this."

Google then fired the recruiter who sought to hire this Apple employee. When Jobs found out, he sent an email with a simple response: a smiley face :).

What monopsony power means for workers

If you believe that the economy is filled with companies exercising considerable monopsony power, how wages get set looks much different than in the standard model, and that has serious policy implications. Worker pay and income inequality become about more than just market forces and the delicate dance of supply and demand for particular kinds of workers with particular kinds of skills and credentials.

In a world with companies that have considerable monopsony power, employers have more discretion to set wages how they like. And things like power, institutions, social movements, culture, unions, and beliefs can matter for determining how much workers get paid.

Sometimes what executives believe, either morally or strategically, could really matter. For example, Dube says, look at UPS and FedEx. They have ostensibly very similar business models. "Same trucks, same routes, same neighborhoods," he says. But, he says, UPS pays considerably more than FedEx. It's a similar story with Walmart versus Target. Target pays considerably more. "Again, it's the same sector, similar labor pool, but very different wages.”


Parcels near UPS and FedEx trucks on a street in Manhattan, New York City, December 4, 2023.

Dube argues it's hard to explain these differences with old-school competitive models of the labor market. "That really is only feasible in a market where they actually have some power to set wages — i.e. monopsony power," Dube says.

So how, in Dube's view, do we compel employers to pay more and reduce the gap between those with the big paychecks and those scrimping to get by? Dube says we need to make choices, both in the public and private sectors, that create greater fairness in pay.

Dube argues that Americans have already started doing the work. Over the last decade, for example, after a long period of federal inaction, states and localities have been passing higher minimum wage laws that are raising pay at the bottom of the income distribution. And there have been political movements and public pressure campaigns against leading employers, which have essentially shamed them into adopting "voluntary minimum wages."

In 2018, Dube writes, Amazon adopted a voluntary minimum wage of $15 an hour, a number that had been demanded by labor unions and activists in the "Fight for $15.” 

Dube offers a whole bunch of ideas in the book for how to combat monopsony power and deliver workers higher pay. One he believes is important is revitalizing collective bargaining. Dube, for example, argues the US should adopt sectoral bargaining, as other industrialized nations have, where unions or policymakers set minimum pay standards for whole industries or types of occupations.

" It's about choices," Dube says. Stagnant wages and extreme income inequality are not inevitable. "It was the result of choices by corporations, by policymakers, and by experts, including economists who told us too often that markets were working just fine."

The Wage Standard is a compelling book. It would be sad — and ironic — if it had only one buyer. Maybe check it out?

https://www.npr.org/sections/planet-money/2026/04/21/g-s1-118071/the-hidden-power-keeping-wages-low 

*

How did the labor market stop working for so many in the workforce? Why did wages at the bottom and in the middle of the pay scale fail to keep up with a growing economy that delivered over 70 percent productivity gains and soaring incomes for those at the top? What caused this divergence, and what can we do about it now? The Wage Standard is a deep dive into these very questions—questions Arin Dube has explored in over two decades of influential research.

Painting a new picture with data, Dube shows us how wages for most workers became painfully frozen. But, he argues, this fate was not inevitable and, more importantly, it can be reversed. The Wage Standard lays bare how the labor market really works, revealing levers to pull to shift course: to reshape corporate decisions, rethink policy priorities, and rebalance economic power and social norms to better protect the typical worker. These are the keys to unlocking broad-based prosperity.

Dube delivers a hopeful message. First, chances are, you deserve a raise. And second, it’s not necessary to fix the broken politics of Washington, DC, in order to get one. Political will, public engagement, and persistence can set a new standard to reset the labor market and improve the lives of American workers starting today. In fact, signs of progress are already offering a glimpse of what a fairer economy can and will look like.
~ Amazon

"The slowdown of the income growth of working people in the United States is one of the most striking and consequential events of the last half century (just think of all the discontent and political turmoil that have followed it). The Wage Standard makes a powerful argument that the main culprit is the growing power of corporations and the weakening of labor market institutions and norms that enable workers to get a fair share of improvements in productivity and profits. With clear and convincing prose, the book shows how the US labor market looks nothing like the competitive benchmark many people assume, and how this is tightly interwoven with inequality."—Daron Acemoglu, Winner of the 2024 Nobel Prize in Economic Sciences and New York Times bestselling co-author of Why Nations Fail

"Wage suppression and the destruction of workers' bargaining power lie behind nearly all of our social and cultural crises. In his lucidly written and richly researched new book, Arin Dube offers not only an acute diagnosis, but also a practical and humane pathway out of it. The Wage Standard is essential for reformers and working people."—Sohrab Ahmari, author of Tyranny, Inc.

“This is a must-read for anyone hoping to understand why wages for working people haven’t kept pace with economic growth. Unions and collective bargaining paved the way for record levels of shared prosperity in this country, where wages and productivity grew together. 

Professor Dube shows how attacks on unions, and declining union membership, have reversed those gains: leading to lower wages for workers, more profits going to corporate executives, and growing income inequality. He also offers hope for a future where the economic choices we make as a society can rebuild a wage structure that assures workers get a fair share. Unions and collective bargaining are powerful tools in his vision and organized labor can move us to a better, fairer future where we can assure prosperity for all Americans.”—Elizabeth H. Shuler, President of the AFL-CIO


*
FALSE CONFESSIONS ARE SURPRISINGLY COMMON

False confessions are much more common than people typically think and can have drastic consequences

The false belief that false confessions are rare is bad enough when it is held by the general public. It’s even more problematic when people involved in the justice system hold this view. For example, research shows that juries and potential jurors also commonly share misconceptions about false confessions (Mindthoff et al., 2018). Worse still, a recent study shows that similarly inaccurate beliefs are held by the people we would expect to be most knowledgeable of the problem: legal professionals themselves.

The study was conducted by Teresa Schneider and colleagues and published in the journal Psychology, Public Policy, and Law (Schneider et al., 2025). The authors surveyed a sample of German legal professionals, including criminal judges, public prosecutors, and defense attorneys. They asked them questions designed to appraise their knowledge, perspectives, and experiences about false confessions and compared them to what we know about the topic from empirical research. 

The results were troubling, revealing key weaknesses in the justice system.

To appreciate the study’s findings, it’s important to contextualize them with what research has established about why false confessions happen and what their consequences are. To start, there are many reasons why people might admit to a crime they didn’t actually commit (Leo, 2009). These include both dispositional and situational causes. ‘Dispositional causes’ are characteristics or conditions that make a person more prone to making a false confession. For instance, they could be delusional, feel guilty about something else, not fully understand the situation, or be trying to protect someone like a family member.

Situational factors can play a role too. Many of these have to do with police interrogation procedures rather than the suspect themselves. During interrogation, law enforcement may use pressure, threats, deception, and sleep deprivation. Suspects might confess to crimes they didn’t commit because of these tactics, because they have been led to misunderstand what’s really going on, or because they just want the severe treatment to end.

Regardless of cause, false confessions have important consequences for the individuals wrongly accused of a crime, for the criminal justice system more broadly, and for society at large. For the individual, these include the harms of the conviction itself: harm to reputation, emotional trauma, loss of freedom in the case of incarceration or other penalties, and likely strains in their social life. False confessions also cast doubt on the fairness and effectiveness of the criminal justice system. They call legal procedures, investigations, and interrogations into question, undermining institutional trust. 

At the societal level, false confessions can be expensive. For instance, they can lead to retrials, appeals, and eventual compensation for the wrongfully accused. They also disproportionately affect at-risk populations—such as youth and adults with intellectual disabilities—adding to existing social disparities. Meanwhile, the real guilty party goes unpunished and free to do more harm.

These facts about false confessions, and many more like them, are well known by researchers, but are legal professionals aware of them? 

Study Results

The study revealed that, overall, the legal professionals in the sample displayed important deficits and inaccuracies in their understanding of false confessions brought about through police interrogation pressure. 

Defense attorneys showed greater awareness of the dangers involved than either criminal judges or public prosecutors. The latter two groups did not differ significantly from each other on most measures. However, even the attorneys tended to underestimate the risks of false confessions and the value of protective measures. Some examples of these findings demonstrate the overall pattern and the problems it indicates.

First, only 75% of the sample believed false confessions can happen under police pressure. Comparing the professions, the defense attorneys did the best, with 99% agreeing it could happen, versus only 65% of criminal judges and just 53% of public prosecutors. The attorneys also gave the highest estimate of how often this occurs, putting the rate at 11% of all innocent suspects interrogated by the police; criminal judges and public prosecutors both estimated only 4%.

When it came to how many false confessions they had personally witnessed, the attorneys reported an average of six, the criminal judges an average of only two, and the prosecutors just one. Besides their own perceived experiences, the attorneys also reported hearing about the most false confession cases from other people, 10 on average; the prosecutors and judges had each heard of only about four.

When it came to risk factors associated with false confessions, similar patterns emerged. A significant number of respondents showed a lack of awareness of how situational issues could increase the likelihood of their occurrence. More than a quarter of the overall sample did not think that police strategies increased the risk, including practices like confrontational questioning, leading questions, or emotional manipulation. 

The professionals performed somewhat better when it came to assessing the importance of dispositional factors, especially mental illness. However, 15% still did not think factors like sleep deprivation, low intelligence, or drug withdrawal increased risk of false confession. Across both sets of factors, defense attorneys continued to show greater awareness than both judges and prosecutors.

Improving the Process

False confessions are a significant problem, but they are also an often overlooked or underestimated one. As this study indicates, even legal professionals seem to take them too lightly. Hopefully, work like this will help to draw more attention to this crucial issue. It may also help to address it. The study suggests some steps that can be taken right now.

First, the results revealed the misunderstandings about false confessions that are prevalent among legal professionals. This highlights the need for additional education in the field, both during law school and in subsequent training throughout legal careers, so that professionals stay current with what ongoing research reveals.

Training should also include science-supported best practices for interview techniques. This way, professionals like prosecutors can be more confident in the information they collect themselves, and all professionals can better assess information collected through a variety of police interview techniques.

Expert testimony should also be considered more often. Psychological professionals can be consulted to evaluate how reliable a confession is, based on well-known risk factors for false confessions such as those considered above.

Finally, simply recording suspect interrogations — consistently and thoroughly — can make a big difference. Allowing the court access into exactly what happened can produce more confidence in the value of confessions or better cause to doubt suspicious ones.

https://www.psychologytoday.com/us/blog/strange-journeys/202604/why-false-confessions-are-surprisingly-common

*
PLANTS THAT SURVIVED EARTH’S WORST MASS EXTINCTION MAY REVEAL HOW LIFE ADAPTS TO EXTREME HEAT

How ancient plants survived extreme heat after the Permian–Triassic mass extinction and what their strategy could mean for a warming world.

Lycophyte Phlegmariurus crassus

Around 252 million years ago, life on Earth came close to collapse. Global temperatures surged, carbon dioxide levels rose sharply, and ecosystems broke down so severely that up to 81 percent of marine species and 89 percent of land vertebrates disappeared. Forests died off, leaving much of the planet stripped of vegetation. But one group of plants didn’t just survive; it took over.

New research led by the University of Leeds finds that primitive plants called lycophytes (“wolf plants”) made it through this period and spread across the landscape that was left behind. The study, published in Nature Ecology and Evolution, shows these plants likely used a different strategy: taking in carbon dioxide at night instead of during the day, an approach known today as CAM (crassulacean acid metabolism) photosynthesis.

“Our results suggest that under future warming, plants with CAM photosynthesis traits could become far more important,” said lead author Zhen Xu in a press release. “If the world experiences sustained extreme heat, plant communities may shift toward species that are better able to tolerate high temperatures and water stress.”

Extreme Heat After the Permian–Triassic Mass Extinction

The extinction event, often called the “Great Dying,” was followed by a prolonged stretch of extreme heat. For roughly five million years, carbon dioxide levels remained high, climbing to more than four times modern levels, pushing temperatures past what most plants can survive.  

Under these conditions, many plants could no longer function as they normally would. Water loss became harder to control, and the process that powers photosynthesis began to fail.

Fossil records show dense forests gave way to low, sparse plant cover dominated by small, fast-growing species that could tolerate heat and drought.

How Plants Survived With Nighttime Photosynthesis

To piece this together, researchers combined fossil evidence, climate modeling, and chemical data preserved in ancient plant remains.

Carbon isotope signatures, which show how plants processed carbon, showed that lycophytes behaved differently from other plants of the time. Their signatures closely match those of modern plants that use CAM photosynthesis, a way of taking in carbon dioxide at night to conserve water.

Instead of following the usual daytime cycle, these plants likely stored carbon dioxide and used it later, reducing water loss and avoiding heat stress.

Climate simulations back this up, as many fossil sites were located in regions where average daily temperatures exceeded 104 degrees Fahrenheit (40 degrees Celsius), with peaks reaching as high as 149 degrees Fahrenheit (65 degrees Celsius). These are the kinds of conditions these plants could handle.

"The analysis pulled together many separate scientific disciplines to test how this group of enigmatic plants not only survived the great dying but also how they were able to thrive in a highly stressed environment,” said co-author Barry Lomax in the press release.

How Lycophytes Took Over After the Mass Extinction

As other plants disappeared, lycophytes quickly spread across continents, becoming what scientists call “disaster taxa,” species that quickly take hold after environmental collapse. These smaller, less productive plants likely removed less carbon from the atmosphere than the forests they replaced, and cycled fewer nutrients through ecosystems, shaping how the planet recovered over millions of years.

Lycophyte fossil

“Understanding how plants’ diverse physiological strategies shaped ecosystems in the past helps us to anticipate how vegetation might reorganize in the future, and because plants are the base of terrestrial food webs, changes in dominant plant strategies can alter the functioning of the entire Earth system,” added co-author Benjamin Mills.

What followed wasn’t a return to earlier ecosystems, but a shift to something new, shaped by plants built for extreme conditions. If temperatures continue to rise, similar shifts may not stay in the past.

https://www.discovermagazine.com/plants-that-survived-earth-s-worst-mass-extinction-may-reveal-how-life-adapts-to-extreme-heat-48991

*
ANCIENT DNA SHOWS THAT HUMAN EVOLUTION SPED UP AFTER FARMING

Natural selection is usually a term reserved for science textbooks and not for conversations about contemporary humans. For decades, scientists believed that human evolution had largely slowed to a crawl after our species spread across the globe. But a new analysis of ancient DNA, published in Nature, suggests the opposite.

According to the new study, natural selection didn’t taper off — it accelerated.

By analyzing nearly 16,000 genomes spanning 10,000 years, researchers found that hundreds of gene variants have evolved far more recently than expected, many with distinct links to modern health and disease.

“This work allows us to assign place and time to forces that shaped us,” said David Reich, senior author of the study, in a press release.

How Natural Selection Accelerated

At the core of this study is a concept called directional selection. This form of natural selection occurs when a specific version of a gene becomes more common over generations because it improves survival or reproduction.
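
To make "becomes more common over generations" concrete, here is a toy calculation, a standard textbook model rather than the method of the Nature paper, with invented numbers:

# Toy directional-selection model (illustrative only, not the paper's method).
# A variant with relative fitness 1 + s changes frequency each generation as
# p' = p * (1 + s) / (1 + p * s)

def next_frequency(p, s):
    return p * (1 + s) / (1 + p * s)

p, s = 0.01, 0.05  # variant starts at 1%, with an assumed 5% fitness edge
for generation in range(201):
    if generation % 100 == 0:
        print(f"generation {generation}: frequency {p:.2f}")
    p = next_frequency(p, s)
# prints: 0.01 at generation 0, ~0.57 at 100, ~0.99 at 200

At roughly 25 years per generation, 200 generations is about 5,000 years, comfortably inside the study's 10,000-year window, which is why sweeps like this are visible in ancient DNA at all.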

Previously, researchers had identified only a handful of such cases in ancient human DNA, leading to the assumption that this process was relatively rare in recent human history.

However, the new research overturns that idea. Instead of a few isolated examples, scientists uncovered 479 gene variants that were strongly favored — or actively weeded out — in populations across West Eurasia over the past 10,000 years.

“This single paper doubles the size of the ancient human DNA literature. It reflects a focused effort to fill in holes that limited the power of previous studies to detect selection,” explained Reich. 

Many of these genetic changes are still visible today. Some are tied to physical traits like lighter skin or red hair, while others influence health outcomes, including reduced risks for conditions such as bipolar disorder or alcoholism.

One of the more significant findings was that natural selection intensified after humans transitioned from hunting and gathering to farming. As lifestyles shifted, so did evolutionary pressures, favoring traits better suited to agricultural diets, denser populations, and new disease environments.

What This Means For Future Ancient DNA Research

Because more than half of the identified gene variants are linked to modern traits and disease risks, this research opens new pathways for studying human health. By tracing where certain genetic traits became common, scientists may better understand why some populations are more susceptible or resistant to specific conditions.

The researchers have made both their data and methods publicly available, paving the way for similar analyses in other regions and time periods.

“To what extent will we see similar patterns in East Asia or East Africa or Native Americans in Mesoamerica and the central Andes? If we can’t use ancient DNA to study the most important period in human evolution 1 to 2 million years ago, then at least we can study selective pressure on human genomes during more recent periods of change and learn broader principles,” concluded Reich.


*
DEEP CAVE BACTERIA RESISTANT TO ANTIBIOTICS

Lechuguilla Cave is one of the world's longest and deepest limestone caves

Ancient bacteria, trapped in caves for millions of years, live in a miniature world of terror. 

Their only food source is each other. The survival tactics they develop make them resistant to almost all antibiotics. Now scientists hope to use their tricks to inspire new drugs and treatments.

Deep underground, plunging 1,604ft (489m) beneath the Chihuahuan Desert in southern New Mexico, lies the Lechuguilla Cave, a cavern which stretches on for 149 miles (240km). There is no light, and little to eat either. Any living thing must eke out an existence under conditions of near starvation.

"You can go in an entrance and travel for 16 hours in one direction before you get to the end of it," says Hazel Barton, professor of geological sciences at the University of Alabama. 
"So you're a very, very, very long way from the entrance. You're isolated, and there are places in that cave where more people have walked on the moon than have been in that area." 

Yet despite the darkness, there is a dazzling diversity of microbial life. Because the bacteria have been isolated for millions of years, they offer a unique window into the past. What's more, each has evolved a different strategy to survive. Some extract energy from rocks and the atmosphere. Others are predators, feeding off other bacteria.

"Like in the rainforest, we see predators that just run in and grab, stab and kill other microbes," says Barton. "But we also see other microbes that work together to get nutrients and energy out of a system that otherwise wouldn't be able to yield enough energy to survive."

The bacteria also have an even more surprising trick up their sleeve – they are resistant to most antibiotics, despite the fact that they have been trapped in a cave that formed six million years ago, most of which was completely sealed off from humans until 1986. Not only is this resistance a remarkable natural phenomenon, it is now helping guide researchers to drugs that can withstand the onslaught of antimicrobial resistance in modern medicine.

But let's rewind slightly. Today, the emergence of antibiotic-resistant bacteria, often called "superbugs", is a growing global health crisis. These pathogenic, disease-causing bacteria have developed resistance to multiple types of antibiotics, making infections harder to treat. 

Bacterial antimicrobial resistance (AMR) was found to be directly responsible for 1.14 million deaths in 2021, and an estimated 39 million people are expected to die due to AMR between 2025 and 2050. Already, it's estimated that millions of children are dying each year from infections resistant to antibiotics.

The cause of the AMR crisis is usually attributed to the misuse and overuse of antimicrobials in humans, animals and plants. Yet this isn't the whole picture. In 2006, for example, Gerard Wright, professor of biochemistry and biomedical sciences at McMaster University in Ontario, discovered soil-living bacteria packed full of antibiotic resistance genes. The mud-loving microbes had the exact same resistance genes that are found in bacteria that cause disease in humans.

"These were not pathogenic bacteria. They weren't causing disease. They were just sitting around minding their own business," says Wright.

This suggested that antimicrobial resistance wasn't new and was in fact hard-wired into many bacteria, a finding backed up by the fact that bacteria with resistance have also been found in glacial ice cores extracted from Antarctica, as well as the soils, seas and rocks of this isolated continent. AMR bacteria have also been discovered in ancient permafrost, as well as in the gut bacteria of villagers from an isolated Amazonian jungle tribe.

Yet Wright's finding by itself was not enough to convince the scientific community that AMR had emerged without human contact. After all, the overuse of antibiotics in agriculture is well documented. The soil bacteria could have come into contact with antibiotics this way.

"We're living in the anthropogenic age, so there's no place that is without evidence of human activity, whether you're at the top of Mount Everest or at the bottom of the Mariana Trench," says Wright.

What was needed was a pristine environment. One that had been cut off from humans for millennia. Enter the Lechuguilla Cave. This cave formed millions of years ago from rainwater trickling deep underground. The water combined with hydrogen sulphide in the depths of the Earth, creating sulphuric acid. The acid was then forced upwards under immense pressure, dissolving the limestone as it went. Eventually the acid-rich water hit a cap rock made of insoluble sandstone.

Lake Castrovalva is one of multiple lakes within the Lechuguilla Cave

"Because of that cap rock, nothing can get into the cave," says Barton. "The caves formed millions of years ago, and it takes about 1,000 years for any surface water to get to that part of the cave where we were sampling. It was also a newly discovered passage that we know no humans had ever been before." 

In other words, there's no possibility that antibiotic drugs could have washed into the caves.   

Barton has been studying microscopic life in caves for more than 20 years. She is one of the few people who have access to the Lechuguilla Cave. So in 2012, she teamed up with Wright to investigate whether these microbes could be resistant to antibiotics too. Barton went down into Lechuguilla Cave to collect samples. The sampling sites lay more than 1,200ft (366m) below the surface, so reaching them required abseiling down a dozen ropes. The effort was worth it though. 

"Not surprisingly, we found that all the microbes in there were resistant to basically every natural antibiotic that's ever been used in the clinic," says Barton.

This actually makes sense from an evolutionary perspective.

"The mechanisms and pathways that lead to antibiotic resistance don't form quickly," says Barton. "If you look at the structure of an antibiotic, that's a molecule that probably took hundreds of millions, if not billions of years to form, and so it's likely that resistance to those antibiotics is as old as the antibiotics themselves."

The bacteria were still killed by synthetic or semi-synthetic antibiotics, however, as they had never been exposed to them. 

One microbe, a non-pathogenic strain of bacteria called Paenibacillus sp LC231, was resistant to 26 of 40 antibiotics tested, including daptomycin, a relatively new antibiotic that is considered a last resort against drug-resistant bacteria like methicillin-resistant Staphylococcus aureus (MRSA).

The researchers sequenced the entire genome of Paenibacillus sp LC231, and found that many of the resistance genes were identical to those found in known drug-resistant bacteria. 

However, the team also identified five resistance genes that had never been encountered before. Interestingly, a cousin of the ancient, isolated Paenibacillus – a spore-forming species found widely above ground – also has the same resistance mechanisms. This means that resistance to antibiotics evolved before the bacteria were trapped in the cave, not after. 

"The punch line for us, and the reason why we were trying to do this, was to say that antibiotic resistance is part of the natural history of microorganisms on the planet," says Wright.

"Most antibiotics come from bacteria and fungi, so they've been making these and fighting with each other for hundreds of millions, if not billions of years.”

There are places within Lechuguilla Cave where more people have walked on the moon than have ever visited

According to Wright, for most of the Earth's history, antibiotic resistance has been confined to non-pathogenic strains of bacteria – ones that don't cause disease. Our extensive use of antibiotics to treat infections, however, has provided a strong selective pressure that has encouraged pathogenic microbes to adopt these defenses too. As bacteria can quickly pass genes to each other, antimicrobial resistance has spread fast.

There may be something about the harsh environment of caves that has encouraged the bacteria to keep and hone their defenses, however. As nutrients and resources are so scarce, bacteria must compete with one another to survive, says Barton. Microbial warfare is a likely outcome.

"If you reduce the number of resources available to a community, then it's going to get a lot more aggressive, and there's going to be a lot more infighting in the way that microbes fight each other," says Barton.

True to form, the biologists found cave microbes that were lobbing out antibiotics left, right and center. One specimen produced 38 different antimicrobial compounds, including three with novel antibiotic structures.

Lechuguilla Cave, Selenite Chandelier

Using cave microbes to combat AMR

So could we use this new knowledge to help us in the fight against antimicrobial resistance?

It's possible that uncovering the treasure trove of bacteria's secret arsenal could help produce new treatments. Traditionally the way scientists have discovered new antibiotics is by going out into nature, taking samples from water and soils, and painstakingly trying to purify and extract those compounds that might be beneficial. In 2025, the first new class of antibiotic in almost 40 years was brought to market – one discovered by Wright and his colleagues in the soil. 

Finding bacteria in isolated, untouched areas could help with this, as it's possible that cave microbes could produce ancient antibiotics that surface bugs have long forgotten how to defend themselves against – or never even encountered. 

For instance, Naowarat (Ann) Cheeptham, a microbiologist at Thompson Rivers University in Canada, is aiming to do just this. Over the last decade, Cheeptham's team has explored caves, taken soil samples, and cultured the resulting bacteria in a petri dish. The bacteria were then screened against known superbugs to see if the cave microbes could kill them.

Cheeptham has so far tested more than 2,000 bacteria and has identified many promising candidates. For example, her team found two species of bacteria in the Iron Curtain Cave in Canada that could kill multidrug-resistant strains of Escherichia coli. She also discovered five microbes in a cave in the Monashee Mountain range in south-central British Columbia that produced antibiotics effective against MRSA.

However, the lack of funding for antibiotic discovery research has led to her pausing her search for new drugs, at least for now.

"We found potential compounds, but it will take us a lot of time and financial investment to get us to a point where pharmaceutical companies will work with us," says Cheeptham. "They [the promising candidates] are still in the refrigerator, so when we have money, we will look at them again." 

Alternatively, cave microbes could help the fight against AMR by allowing scientists to predict when bacteria might evolve resistance to a new class of antibiotics.

"The first thing you need to know is, what are the mechanisms of resistance that already exist out there?" says Wright.

"Because if I'm going to discover an antibiotic tomorrow and I want to bring it to the clinic, it'd be a good idea for me to understand what its liabilities are, what its vulnerabilities are to what exists out there, because then you'll be better prepared for the emergence of resistance, not if, but when it occurs."

Common mechanisms of resistance include simple pumps that spit the antibiotic back out of the bacterium, while others involve much more complicated enzymes that modify or otherwise degrade the antibiotic.

Knowing how a bacterium destroys the antibiotic could help scientists design new drugs to overcome its defenses. For example, penicillin by itself often doesn't work anymore, because many bacteria have an enzyme that binds to the antibiotic and inactivates it. However, if you add a compound called clavulanic acid, this molecule binds to the enzyme instead and inhibits it. So by adding clavulanic acid to penicillin, you counteract the resistance mechanism, and penicillin works again. It's hoped that identifying similar processes in cave bacteria could therefore give medical researchers a powerful advantage.
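
A minimal sketch of the clavulanic-acid trick in code, assuming simple mass-action kinetics and made-up rate constants (the real enzymology, with irreversible "suicide" inhibition of beta-lactamase, is more involved):

def simulate(inhibitor_present, steps=10_000, dt=0.01):
    antibiotic = 1.0   # penicillin, arbitrary units
    enzyme = 0.1       # active beta-lactamase
    inhibitor = 0.2 if inhibitor_present else 0.0   # clavulanic acid
    K_DEG = 5.0        # rate constant: enzyme inactivating the antibiotic
    K_INH = 50.0       # rate constant: inhibitor knocking out the enzyme
    for _ in range(steps):
        antibiotic -= K_DEG * enzyme * antibiotic * dt   # enzyme destroys drug
        knocked_out = K_INH * enzyme * inhibitor * dt    # irreversible binding
        enzyme -= knocked_out
        inhibitor -= knocked_out
    return antibiotic

print(f"penicillin remaining, no inhibitor:    {simulate(False):.3f}")
print(f"penicillin remaining, with clavulanate: {simulate(True):.3f}")

With the inhibitor present, the enzyme is depleted almost immediately, so most of the antibiotic survives; without it, essentially none does.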

"By figuring out what mechanism a microorganism might use to overcome an antibiotic, you can actually figure out how to defeat it before it ever shows up in the clinic," says Barton.

https://www.cidrap.umn.edu/antimicrobial-stewardship/scientists-find-ancient-cave-dwelling-resistant-bacteria

*
WE MAY HAVE “MISUNDERSTOOD THE UNIVERSE,” NOBEL PRIZE WINNER SAYS

For a long time, scientists believed they understood how fast the universe is expanding, but the more they learn, the more those theories seem to expand along with the universe itself.

Basically, the “Hubble constant” measures the speed at which the universe is expanding. The only problem? Different instruments keep providing different values for it, giving rise to what’s known as the “Hubble tension.”
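
For reference, Hubble's law ties a galaxy's recession velocity to its distance, and the tension is between two headline values (standard numbers widely reported elsewhere; they don't appear in this article):

v = H_0 \, d, \qquad
H_0 \approx 67~\mathrm{km\,s^{-1}\,Mpc^{-1}}~\text{(Planck, CMB)}
\quad \text{vs.} \quad
H_0 \approx 73~\mathrm{km\,s^{-1}\,Mpc^{-1}}~\text{(SH0ES, local distance ladder)}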

It raises an interesting possibility: that, as Nobel Prize-winning physicist Adam Riess explains in a NASA blog, much of what we thought we knew about the universe may have been wrong.

“We’ve now spanned the whole range of what Hubble observed, and we can rule out a measurement error as the cause of the Hubble tension with very high confidence,” the Johns Hopkins physicist said.

Add ‘Em Up

When scientists pointed the Webb telescope deep into the universe in 2023 to check Hubble's puzzling numbers, the newer telescope confirmed its predecessor's findings, deepening the discrepancy.

One possible explanation for the surprising readings is stellar crowding, which occurs when a telescope's field of view contains more stars than it can cleanly resolve, so that their overlapping light effectively warps its vision and skews distance measurements. Stellar dust makes this effect even stronger, but as NASA explains, the Webb should be able to cut through the noise and get more accurate imaging and distance measurements.

*

DARK MATTER MAY BE A DEFORMED MIRROR UNIVERSE, SCIENTISTS SAY  

You know dark matter, the mysterious stuff that most physicists now believe makes up the bulk of the universe — even though it remains completely undetectable, except for its gravitational effects on regular matter.

There’s no shortage of far-out theories about the hypothetical material: that it’s hiding inside an extra dimension, that it originated in a second Big Bang, that it’s information itself, or even that it doesn’t exist at all.

Now, as spotted by Flatiron Institute astrophysicist and indefatigable science journalist Paul Sutter, a new paper offers yet another exotic potential explanation: that dark matter resides in a deformed mirror universe inside our own, where atoms failed to form.

Coincidence

As Sutter explains, the research builds off a pair of intriguing coincidences. First, observations suggest that there’s a roughly comparable amount of regular and dark matter out there (tipped a bit toward dark matter, which is believed to outweigh conventional matter by a factor of about five). And second, neutrons and protons have almost precisely the same mass, allowing them to form stable atoms — a fortuitous property, because otherwise our universe wouldn’t be host to any of the lovely atoms that make up stuff like stars, planets, and ourselves.
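
As a back-of-the-envelope check on those two coincidences (standard textbook values, not numbers from the paper):

\frac{\Omega_{\text{dark}}}{\Omega_{\text{ordinary}}} \approx \frac{27\%}{5\%} \approx 5,
\qquad
\frac{m_n}{m_p} = \frac{939.57~\mathrm{MeV}/c^2}{938.27~\mathrm{MeV}/c^2} \approx 1.0014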

Basically, the theory goes, maybe there’s a shadow universe mirroring our own in which neutrons and protons don’t share that convenient symmetry in mass, meaning the whole thing is a sad soup of subatomic particles that don’t interact much, which would explain why dark matter doesn’t seem to clump up.

Important to note: the paper isn’t yet peer reviewed, and it’s just another theory among many jostling to crack the mysteries of dark matter, a galling and lingering unknown in our understanding of the universe. But it does have an impressive author list, with researchers ranging from Fermilab to the University of Chicago — so we’ll be watching to see how it’s received in the broader world of physics.

https://futurism.com/the-byte/dark-matter-mirror-universe

*
DARK MATTER DOESN'T EXIST AND THE UNIVERSE IS 27 BILLION YEARS OLD, ACCORDING TO STUDY

Blue Rider, Kandinsky (Oriana: The Blue Rider is also part of the Universe)
 
The universe feels simple at first glance: stars, gas, dust, and the gravity that binds it all. Then you look more closely and realize that nothing could be farther from the truth.

For decades, the standard picture has said that most of what is out there is not what we can see. It is a mix of ordinary matter and two invisible components often called dark matter and dark energy.

That picture has guided textbooks, space missions, and how we read the sky. It has also raised tough questions that have never quite gone away, mainly because dark matter and dark energy have never actually been “seen.”

After searching for this elusive “dark matter” for decades, at what point does the scientific community decide it may not actually exist at all?

Challenging dark matter’s existence

A new line of thinking takes those questions seriously and suggests we may not need those “dark” invisible components after all.

After years spent probing longstanding cosmology puzzles, physics professor Rajendra Gupta has proposed a model that aims to explain the universe without dark matter or dark energy.

Gupta teaches astrophysics at the University of Ottawa and argues that familiar assumptions might be impeding progress.

“The study’s findings confirm that our previous work (“JWST early universe observations and ΛCDM cosmology”) about the age of the universe being 26.7 billion years has allowed us to discover that the universe does not require dark matter to exist,” explains Gupta.

“Tired light” and the CCC theory

Gupta’s approach blends two concepts: covarying coupling constants (CCC) and “tired light” (TL).

CCC asks whether the so-called constants of nature – like the strength of forces or the speed of light – might shift across time or space. If they do, even slightly, many calculations about how the universe evolves would change.

TL offers a different take on why light from faraway galaxies appears redshifted. Instead of treating redshift solely as a sign of cosmic expansion stretching light, TL suggests that photons shed energy over vast distances, shifting their color toward red.
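
In a generic tired-light picture (a sketch of the classic formulation; the article doesn't give Gupta's exact equations), a photon sheds energy exponentially with distance traveled, and the redshift follows directly:

E(d) = E_0 \, e^{-d/L}
\quad\Rightarrow\quad
1 + z = \frac{E_0}{E(d)} = e^{d/L} \approx 1 + \frac{d}{L} \quad (z \ll 1)

At low redshift this mimics Hubble's law, cz ≈ (c/L)·d, without invoking any expansion, which is why tired light keeps resurfacing as an alternative reading of redshift data.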

Taken together, the CCC+TL model seeks to account for the same cosmic signals that standard cosmology explains by invoking dark matter and dark energy.

Most scientists think dark matter is real

The idea of dark matter did not arise in a vacuum. In the 1930s, astronomer Fritz Zwicky noticed that galaxy clusters seemed to move in ways that did not match their visible mass.  

Later, astronomers saw that many galaxies rotate faster than expected at their outskirts. Something appears to add extra gravity.  

Gravitational lensing – the way mass bends light – also points to more pull than starlight alone can explain.

In the standard breakdown, dark matter is thought to make up about 27% of the universe.

Ordinary matter – everything we can directly detect – adds up to less than 5%. 

The rest is labeled dark energy, a placeholder for whatever drives the universe’s accelerated expansion. This picture also includes a commonly accepted age of roughly 13.8 billion years.

Questioning the need for dark matter

Gupta contends that if the forces of nature weaken over time, we do not need dark energy to explain why the expansion appears to speed up.

He also argues that major observations can be matched without dark matter by allowing constants to vary and by letting light lose a small amount of energy as it travels long distances to reach us, the observers.

“Contrary to standard cosmological theories where the accelerated expansion of the universe is attributed to dark energy, our findings indicate that this expansion is due to the weakening forces of nature, not dark energy,” Gupta continues.

Redshifts and cosmic observations

A substantial part of the work centers on redshifts – how light shifts toward longer wavelengths as it travels.

The analysis compares how galaxies are distributed at low redshift with patterns from the early universe at high redshift.

The claim is that these signals align under the CCC+TL approach without requiring dark matter in the equations.

“There are several papers that question the existence of dark matter, but mine is the first one, to my knowledge, that eliminates its cosmological existence while being consistent with key cosmological observations that we have had time to confirm,” Gupta confidently concludes.

If CCC+TL continues to pass tests, much would change. The model would offer new routes to explain the cosmic microwave background, the timeline of how galaxies formed and grew, and the way light bends on its journey to our telescopes.

It would also change how we read distance and time from the sky, since redshift would no longer be only a ruler for expansion.

It would challenge the Big Bang–anchored timeline. Those are substantial claims that require careful tests.

Testing Gupta’s theory

Specific predictions need to be articulated. Any model has to meet observations head-on: galaxy rotation profiles, lensing maps, the pattern of hot and cold spots in the microwave background, and the way galaxies cluster across hundreds of millions of light-years.

If constants vary, even a little, that could leave signatures in atomic spectra from distant quasars. If light tires, the effect should be measurable with enough precision and a clean way to separate it from other causes.

Teams are already poring over deep-sky surveys, precise supernova samples, and high-resolution microwave maps.

As instruments improve, the bar for any alternative rises. The goal is straightforward: make a clear, testable forecast, then see if the universe agrees.

Dark matter, CCC+TL, and next steps

Two central questions remain. Are dark energy and dark matter just bookkeeping devices we used while working with fixed constants and a single redshift story? Could the true age of the universe be significantly older than the standard estimate? 

The only way to answer is to press for independent tests that can separate one picture from the other.

Researchers are tuning methods to compare models fairly, using the same data pipelines and error checks. That helps avoid apples-to-oranges results.

If CCC+TL keeps matching the sky, interest will grow. If it stumbles on a key observation, that will be clear too.

Cosmology moves forward when claims meet data. This study advances a bold alternative: a universe where constants can change, light can lose energy across great distances, and neither dark matter nor dark energy need to be included in the ledger.

It offers clear, testable statements about cosmic age and the cause of the apparent acceleration.

The work will be validated or refuted by measurements. That is how the field operates: no shortcuts, no hand-waving – only observations and models that either fit or fail.

The full study was published in The Astrophysical Journal.

https://www.earth.com/news/dark-matter-does-not-exist-the-universe-is-27-billion-years-old-rajendra-gupta-theory/


*
BUILD-UP OF FREE OXYGEN 2.5 BILLION YEARS AGO AND CREATION OF IRON ORE DEPOSITS

90% of the world's economically minable iron ore exists because, 2.5 billion years ago, microscopic bacteria poisoned the oceans with their waste product: oxygen.

To understand how, you have to start with what the oceans looked like before that happened.

Iron in a World Without Oxygen

Iron dissolves readily in water in its reduced form (Fe²⁺, or ferrous iron). In the Archean oceans, before about 2.5 billion years ago, volcanic activity continuously supplied dissolved iron to seawater, and it simply stayed there. Without free oxygen to react with, iron accumulated in the oceans over hundreds of millions of years, reaching concentrations perhaps a thousand times higher than in modern seawater.

Enter Cyanobacteria

Photosynthetic cyanobacteria began producing oxygen as a metabolic waste product, likely as early as 3 billion years ago. 

At first, this oxygen didn't accumulate in the atmosphere — it was consumed almost immediately by reacting with the vast reservoir of dissolved iron in the oceans. The reaction is straightforward:

Dissolved ferrous iron (Fe²⁺) encountered free oxygen (O₂).

It oxidized to ferric iron (Fe³⁺), which is extremely insoluble in water.

The resulting iron oxide minerals precipitated out and sank to the ocean floor.
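
In balanced form, the overall chemistry sketched above (textbook reactions, consistent with the ferric-hydroxide gel route mentioned in the comments below) is:

4 Fe²⁺ + O₂ + 10 H₂O → 4 Fe(OH)₃↓ + 8 H⁺   (oxidation and precipitation as a hydroxide gel)

2 Fe(OH)₃ → Fe₂O₃ + 3 H₂O   (later dehydration to hematite during diagenesis)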

This process created enormous layers of iron-rich sediment, alternating with silica-rich layers in a striking pattern. These are known as Banded Iron Formations, or BIFs, and they are visually spectacular — thin bands of red, black, and silver layered like pages in a book, sometimes stretching across hundreds of kilometers.

 

The Scale of the Deposits

The numbers involved are staggering. BIFs deposited primarily between 2.5 and 1.8 billion years ago account for roughly 90% of the world's economically minable iron ore. The Hamersley Basin in Western Australia, the Carajás formation in Brazil, the Transvaal Supergroup in South Africa, and the iron ranges of Minnesota and Ontario all owe their existence to this same process. The Hamersley deposits alone are estimated to contain tens of billions of tons of iron ore.

Why the Banding?

The alternating iron-rich and silica-rich layers likely reflect seasonal or cyclical fluctuations in cyanobacterial oxygen production. During periods of higher photosynthetic activity, more oxygen was produced, more iron precipitated, and an iron-rich layer formed. During quieter periods, silica dominated. Some researchers have proposed that the cycles correspond to blooms and die-offs of microbial populations, though the exact mechanism remains debated.

The End of BIFs

Once the oceans were finally scrubbed clean of dissolved iron — a process that took hundreds of millions of years — oxygen began accumulating in the atmosphere. This transition, the Great Oxidation Event around 2.4–2.0 billion years ago, permanently changed Earth's surface chemistry. With no more dissolved iron reservoir to draw from, large-scale BIF deposition essentially ceased by about 1.8 billion years ago (with a brief, enigmatic return around 750 million years ago, possibly linked to Snowball Earth events).

In effect, the iron ore that built the modern steel industry is a geological receipt for one of the most transformative events in Earth's history: the moment life began to reshape the planet's chemistry on a global scale. ~ NovaPrism, Quora 

Thomas Snider: ~ You should’ve added how plate tectonics brought the ancient sea floor up to the surface so the ore could be mined. Like how the meerschaum found in Turkey’s mountains was once on the bottom of the ocean.

Wander Sprenger: ~ A small addition about the manner in which iron(II) ions are transformed into iron(III) oxide. Many biological and abiotic processes may explain the overall transformation, but it is not a single step. Oxide ions react immediately with water molecules to form hydroxide ions. So the primary substance formed upon oxidation of iron(II) ions may be a gel-like form of iron(III) hydroxide, which is indeed less soluble than iron(II) hydroxide and is later transformed into solid Fe2O3.

See the wiki quote below.
“Regardless of the precise mechanism of oxidation, the oxidation of ferrous to ferric iron likely caused the iron to precipitate out as a ferric hydroxide gel. Similarly, the silica component of the banded iron formations likely precipitated as a hydrous silica gel. The conversion of iron hydroxide and silica gels to banded iron formation is an example of diagenesis, the conversion of sediments into solid rock.”

Michael Hutson:
I’ll add that today there is still such a thing as “bog iron”: in freshwater deposits that are alkaline and oxygen-deprived, iron can dissolve into the water and then precipitate as iron oxides or hydroxides.

Alexander Baalanda:
This leads to the question of what is organic, and what is inorganic. You might say that limestone is inorganic, but it required life to create it. Actually, I read that there are three times more minerals on the surface of the Earth because of life.

Roger King:
Before the generation of atmospheric oxygen, there were high populations of sulfate-reducing bacteria in the oceans that produced sulfide, which reacted with iron to form a range of iron sulfides, and also formed the great sulfur domes in the USA.

*
FREQUENT OR LONGER NAPS IN OLDER AGE MAY SIGNAL DECLINING HEALTH

A new study highlights that older adults tend to nap more often, for longer, and earlier in the day as they age.

Notably, the study links increases in nap duration and frequency over time to a higher risk of death.

While the association does not prove causation, it suggests that changes to nap patterns may reflect underlying health decline or disrupted body rhythms.

Monitoring shifts in daytime napping could serve as a simple early warning sign to identify older adults who may need further medical evaluation.

Napping is a fairly common practice among U.S. adults, with estimates suggesting that roughly half of middle- and older-aged Americans report regular daytime napping. Other studies consistently report that napping is more common in older adults than in other age groups.

Like other adults, older individuals require about 8 hours of sleep for optimal health. However, multiple factors, such as age-related changes in circadian rhythm and sleep patterns, health conditions, medications, cultural beliefs, and lifestyle changes, can make sleep difficult and may contribute to a higher prevalence of napping.

Napping may relate to multiple health outcomes in older adults and could offer a modifiable behavioral factor that impacts health. However, research on napping in older adults has yielded mixed results, with some suggesting that infrequent short naps may be beneficial, while others suggest that frequent longer naps may be detrimental.

Now, a new long-term study published in JAMA suggests that changes in daytime napping habits among older adults could serve as an early indicator of underlying health issues or increased risk of death.

The researchers found that individuals who took longer or more frequent naps, particularly in the morning, had a higher risk of death compared with those whose napping habits remained stable.

Notably, each additional hour of daytime napping was associated with a 13% increase in mortality risk. Likewise, each additional nap per day corresponded to roughly a 7% higher risk.

People who napped in the morning had a 30% higher mortality risk than those who napped in the afternoon. The study suggests that inconsistent napping patterns were not associated with an increased risk of mortality.
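
Taken at face value, and assuming the reported associations combine multiplicatively, as hazard ratios are conventionally treated (an assumption on my part; the article doesn't spell out the model), one extra hour of napping plus one extra daily nap works out to:

1.13 × 1.07 ≈ 1.21, i.e. roughly a 21% higher associated mortality risk.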

While the study does not prove that napping itself causes poorer outcomes, it highlights a strong potential association between evolving nap patterns and declining health.

https://www.medicalnewstoday.com/articles/frequent-longer-naps-older-age-may-signal-declining-health#Tracking-naps-over-time


*
ending on beauty:

And the days are not full enough
And the nights are not full enough
And life slips by like a field mouse
Not shaking the grass.

~ Ezra Pound
