Pont Neuf, Paris
*
LE CHIEN
~ Billy Collins
*
STEVENSON’S TREASURE ISLAND
He was in his early thirties, gravely ill with the tuberculosis that had him coughing up blood. To entertain a bored twelve-year-old boy, he wrote a pirate adventure that shaped everything we know about pirates today.
Stevenson was spending a rainy summer in Scotland with his new wife, Fanny, and her restless twelve-year-old son, Lloyd Osbourne. Lloyd was drawing, and Stevenson—ever the storyteller—asked to see the sketch.
It was a map of an imaginary island.
Stevenson took the drawing and made it his own. He added landmarks like “Skeleton Island” and “Spy-glass Hill.” He drew a big, clear “X” to mark where treasure was hidden and named the map “Treasure Island.”
To keep Lloyd entertained, Stevenson began writing a chapter a day, reading it aloud to the captivated household. He wrote the entire first draft in a stunningly quick fifteen days, driven by the pure fun of invention.
When Treasure Island was published in 1883, it didn’t just tell an adventure story—it invented the pirate genre as we know it today.
Before Stevenson, real historical pirates were simply violent criminals. They didn’t bury their loot (they spent it), didn’t sing cheerful shanties, and were not the romanticized figures we imagine.
Stevenson, however, created tropes so vivid and irresistible that they became the standard template for all pirate fiction that followed:
The “X marks the spot” on a treasure map.
Pirates with a peg leg and a parrot on their shoulder (embodied by the unforgettable Long John Silver).
The Jolly Roger flag as the universal symbol of the pirate ship.
Swashbuckling action and a hunt for buried gold.
The core of the book’s genius was Long John Silver.
Silver, the one-legged cook who is secretly the pirate mastermind, was revolutionary. He wasn’t a cartoon villain. He was charismatic, intelligent, genuinely fond of the young protagonist Jim Hawkins at times, yet capable of chilling ruthlessness.
He became one of literature’s first great morally ambiguous characters, a complex, charming rogue who showed readers that evil isn’t always obvious or ugly. This characterization influenced storytelling for generations.
Treasure Island became a phenomenon, giving the sickly writer financial security and international fame. He went on to write other classics, including Kidnapped and Strange Case of Dr. Jekyll and Mr. Hyde.
But his health continued to decline. Seeking a cure, Stevenson eventually sailed to the South Pacific and settled in Samoa. The warm climate gave him a few more productive years, where the locals affectionately called him “Tusitala”—the teller of tales.
He died in 1894 at the age of 44, having lived longer than his doctors had predicted, and was buried on Mount Vaea, overlooking the sea.
Stevenson proved that a short, difficult life spent battling illness can produce an eternal legacy. He took his stepson’s boredom and his own vivid imagination and created a world of adventure that still defines pirates today.
All because a dying man, confined to bed, just wanted to entertain a twelve-year-old boy.
Stevenson proved that our greatest limitations—like severe illness or financial worry—can become the driving force behind our greatest acts of creation.

He reminds us that imagination is the ultimate escape, and that even a life lived on borrowed time can leave an eternal legacy. ~ We Are Human Angels, Facebook
*
HOW RUSSIANS SEE UKRAINIANS

On the blackboard: "Once again we will have to destroy Berlin and enter Paris, Vienna."
Russian TV host and Kremlin propagandist Vladimir Soloviev’s core audience consists of boomers who are still mentally living in the shadow of the Cold War, plus a cohort of middle-aged losers, the so-called turbo-patriots, who support Russian aggression in Ukraine and avidly consume resentment-focused propaganda that gives them some sense of community and a meaning for all the bad choices they have made in life. His ratings have been falling, but the Kremlin keeps him in prime time as an angry dog, perfect to unleash on genteel Westerners.
There is however a very strong whiff of tragicomedy about this whole enterprise.
At the beginning of the invasion, Soloviev was almost crying on air as he watched the war destroy everything he had so carefully built for his family in Italy: his spacious villas, his massive vineyards, his languid evenings by the pool with a glass of red wine in hand. All of it was taken from him as the price of publicly supporting Putin’s invasion. He has since learned to redirect that anger at the West.
The quote at the WEF [World Economic Forum] stand was likely written by my daughter’s friend’s mother. She is a chief writer at the Soloviev Talk Show and pens most of the memorable quotes for him. Soloviev hires mainly well-educated liberals because he is a lifelong liberal himself, from a nice Moscow liberal family.
The author of those quotes wanted to leave for Argentina after the war broke out, but her now former husband, though a liberal himself, refused to go. To get ahead in her journalistic career with few options available, she took a job at the Zvezda army TV channel and then at the Soloviev Talk Show, and then divorced her husband, who was critical of her choice of employment, to marry an odious propagandist.
During a New Year celebration at this girl’s father’s apartment, she talked my daughter into waving a yellow-and-blue balloon and dirty socks at the TV screen while Putin was giving his New Year speech, as a form of protest.
This was absolutely hilarious, because the next day this very girl would be traveling with her mother and her mother’s new husband, a Kremlin propagandist, and they would be worshiping Putin as the provider of their considerable wages.
This mental asylum we are living in is the direct result of the failure to establish the American political model that was amicably forced upon us in the 1990s.
If the Russians had managed to adapt to the framework of a two-party system, they would have been able to channel their energies into the narrative of one political party against the other, or vice versa, and wage culture wars. They wouldn’t be static and pretentious; rather than living vicariously through news about ICE shenanigans, they would have plenty of action of their own to bring some excitement into their dull lives.
The liberals wouldn’t have to pretend that they aren’t liberal, wouldn’t have to seek jobs under false pretenses, and wouldn’t have to say things they don’t believe in. They would have their own mass media and would be honest with themselves and their audience.
Conservatives who love their strong leader wouldn’t have to hire fake conservatives for lack of genuine ones. They would have fewer channels and would have to get active rather than rest on their laurels, repeating the same nonsense ad nauseam that nobody dares to oppose, lest he be sent to jail or out of a tall window.
What an exciting country Russia would be! Everyone is fighting everyone else with no side prevailing for long. What a dynamic system it would be.
In America, if you’re a contrarian but manage to build up a large support base, it can pay off once your opinions become part of the mainstream.
In Russia, it would take an actual revolution for your contrarian opinions to become mainstream while the supreme leader is still in power.
Hence there are many opinions that run contrary to the established narrative about the war in Ukraine that nobody will ever tell you (unless very drunk).
Unfortunately, in Russia there is no fundamental trust in, or belief in, a law-based system. Even that liberal dad I mentioned above, with all his anti-Putin rhetoric, doesn’t understand why laws are more important than unwritten rules.
“This is a moral thing to do, and it’s more important than whatever is written on paper.” All those lonely heroes in Hollywood movies who take justice into their own hands: that’s the Russian mindset in a nutshell.
As such, any struggle for power will always end with the winners writing their own rules, because it feels right. And those rules will always be tipped in their favor until all critics are vanquished.
This is why it is hard to say definitively what Russians think about Ukrainians. When there is a criminal article on “discrediting the Russian armed forces” that can land you in jail for the wrong words spoken, thoughts stay hidden from the public.
In any case, this very much depends on the person, but I will offer some generalizations by cohort for easier navigation of the issue.
The reality check was the partial mobilization of September 2022, when, rather than go to war, millions of Russian men fled or hid.
What you would call the boomer generation, to which President Putin belongs, perceives Ukrainians and Russians as the same people, sharing a Soviet mindset, culture, and traditions. Of course, when we say Soviet, we mean Russian.
They have been the greatest supporters of the war and the most ardent consumers of Kremlin propaganda, but they are also the ones with no skin in the game: they won’t be called up, and they own their homes but depend on the state for the monthly pension checks in the mail.
Another dimension is the Kremlin’s narrative that Ukraine’s independence movement has traditionally been supported by Western powers with the singular goal of weakening or destroying Russia, and that this time is no exception.
The generation between the boomers and millennials are a mixed bag.
On the one hand, they have provided the bulk of the recruits for the war, having been brought up by tough-as-nails fathers who taught them that it is their duty to defend the fatherland. They won’t fight for free, but they will for money.
There are millions of urban residents who came of age during the democratic 90s and perceive the West as a model to be emulated rather than fought against.
In Moscow and the big cities there is no particular resentment toward the West, as both the state and the people feel they shouldn’t burn bridges, thinking ahead to a post-peace-deal future; but the deeper you go into the boondocks, the stronger the anti-Western sentiment grows.
RIA Novosti ~ Schoolchildren will be taught how to assemble drones during lessons on the fundamentals of homeland defense.
The millennials most strongly associate themselves with the new post-Soviet Russian identity and Ukraine is extraneous to it. They don’t necessarily perceive it as an enemy, but as “other” at odds with their civilization. I guess it’s sort of what the Americans think about places like North Korea or Iran.
As for Ukrainian ethnicity itself, there is very little antagonism. Russia has been promoted as a multi-ethnic country.
Prolonged war has increasingly become part of Russians’ lives and even identity. There is a hierarchy, of course: an ethnic Uzbek would be at the bottom, while a Ukrainian, on the contrary, would be high up, though still a rung or two below a Russky, and in need of some condescending tutoring to explain that the real world is not what they think it is, and more complex.
And this is indeed a complex topic and I can go on but it suffices for now. ~ Misha Firer, Quora
Belladonna:
My relatives in Moscow support Putin wholeheartedly (quite ironic, because they come from the only line in my family with no members in the Red Army or the Communist Party). On the other hand, a brother and a nephew of my Ukrainian teacher seem to hold a different opinion.
My friend (who speaks Russian at home, by the way) cut ties with her aunt too. Unlike my relatives, the aunt called her in February 2022, saying something like: “Don’t worry, we will liberate you soon.” My friend used to be amused by it, until her brother’s death (he was a Ukrainian soldier).
John Doyle:
“Russian TV host and Kremlin propagandist Vladimir Soloviev’s core audience are boomers who are still mentally living in the shadow of the Cold War and a cohort of middle-aged losers, the so-called turbo-patriots who support Russian aggression in Ukraine and avidly consume resentment-focused propaganda that gives them some sense of community and meaning for all the bad choices that they have made in life.”
in America we have pretty much the same….Sean Hannity and MAGA.
Oriana:
Western readers may be puzzled by what’s written on the blackboard. To understand the psychology of it, you need to be aware that the Western world has a vision of the future, of progress, of new frontiers. By contrast, the “Russian World” (Russkiy Mir) feeds on its mythical glorious past and seems to yearn for a return to the days when the Soviet Union was feared by the West. Putin’s most successful threat, as far as his Russian TV audience goes, is “We can repeat.”
Repeat what? Why, the glorious past — crowned with entering Berlin.
*
OLIVINE MAY HAVE GIVEN LIFE A JUMP START
A mineral common throughout the solar system nudges a reaction that produces sugar molecules from formaldehyde.
Olivine, named for its green color, is the most common mineral in Earth’s upper mantle and widespread throughout the solar system.
Olivine can form as a gemstone, peridot, but it’s anything but preciously rare. On Earth, it’s the most abundant mineral in the upper mantle and a main component of the basalt rocks that form oceanic crust. Beyond Earth, scientists have found olivine everywhere from Mars, where it’s been spotted by rovers, to asteroids. It’s even present in cosmic dust.
Now, new research published in Earth and Planetary Science Letters has suggested that this cosmically cosmopolitan mineral can help turn formaldehyde, a toxic gas, into sugars. The finding could help explain how the earliest organisms obtained sugars, which they use for energy and to build genetic molecules such as RNA and DNA.
Today, life creates its own amino acids, fats, sugars, and nucleic acids, the critical components that create and sustain many biological processes. But the first cell couldn’t have produced its own building blocks; they must have formed through reactions that don’t involve life.
Scientists have long thought that the formose reaction, which takes in formaldehyde and spits out a mix of different sugars, may have played a role in creating at least one component of the building blocks of life. Formaldehyde is a simple molecule consisting of a carbon atom bonded to two hydrogen atoms and one oxygen atom. Though rare on Earth’s surface today, it is common in the universe and has been identified in asteroids, in comets, and even in the interstellar medium. It’s possible that impacts delivered formaldehyde to early Earth.
Before Life
The formose reaction begins with the formation of a molecule called glycolaldehyde from two formaldehyde molecules. Glycolaldehyde is the simplest molecule that follows the general chemical formula for sugars.
That first step is the hardest, said study author and astrochemist Vassilissa Vinogradoff of Aix-Marseille University in France. Without a catalyst, it is slow to produce glycolaldehyde, and this holds up the rest of the formose reaction. So it needs a boost, especially when the reaction takes place in water.
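In textbook terms (a standard rendering of the chemistry, not a formula taken from the study itself), that slow initiating step is simply the dimerization of formaldehyde:

```latex
% Rate-limiting first step: two formaldehyde molecules combine
2\,\mathrm{CH_2O} \;\longrightarrow\; \mathrm{HOCH_2CHO} \quad \text{(glycolaldehyde)}
% Glycolaldehyde already fits the general sugar formula
\mathrm{C}_n(\mathrm{H_2O})_n \quad \text{with } n = 2
```

Until a catalyst speeds up this dimerization, almost no glycolaldehyde exists to feed the rest of the cycle, which is why the first step bottlenecks the whole reaction.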
Once glycolaldehyde forms, Vinogradoff said, the formose reaction becomes self-sustaining. It consumes formaldehyde to regenerate glycolaldehyde and churn out more and more monosaccharides, or simple sugars. (The name itself is a portmanteau of formaldehyde and aldose, one category of monosaccharide produced by the reaction.)
Given enough time, this cyclical reaction could produce “kilograms, tons, maybe megatons of sugars,” said organic chemist Oliver Trapp of Ludwig Maximilian University in Munich, Germany, who was not involved in the study. Showing how this reaction could get started under natural conditions, he said, is a “remarkable step.”
Vinogradoff and her colleagues wondered whether olivine-rich rocks, which would have been abundant on early Earth and elsewhere in the young solar system, might have been the catalyst for the planet’s first formose reactions.
The researchers reacted formaldehyde with finely ground olivine-rich rock that they carefully cleaned to avoid contamination with organic molecules. (They also set aside some formaldehyde and olivine separately as controls.) The reaction chambers were filled with water and kept warm and oxygen-free. Such conditions are not unlike those that scientists expect existed at hydrothermal vents on early Earth or within watery asteroids. After 2, 7, and 45 days, the team took samples and measured the reaction products using a technique called multidimensional gas chromatography.
“This reaction forms many, many, many compounds, and analysis is very, very difficult,” said prebiotic chemist Yoshihiro Furukawa of Tohoku University in Japan, who was not involved in the study. “The authors used a state-of-the-art analysis.”
The experiments revealed that in the presence of olivine, glycolaldehyde—and sugars—formed much more efficiently. Olivine also helped the reaction produce more complicated sugars such as glucose. Chemical models suggest that the surface of olivine interacts with formaldehyde molecules in a way that makes it easier for their carbon atoms to form bonds with each other.
Because olivine is widespread, the right conditions for making sugars from formaldehyde could have occurred—and could still occur—throughout the solar system, from the seafloor of early Earth to the interiors of asteroids.
Olivine’s ubiquity makes the finding relevant to not just one hypothesis for the origin of life, but several. The mineral is already central to a hypothesis placing the origin of life at hydrothermal vents in Earth’s primordial ocean. The warm, alkaline fluids at these vents come from a reaction between olivine and seawater.
Other hypotheses involve the delivery of organic molecules from space. Furukawa and his colleagues recently identified sugars, including ribose, a key component of genetic molecules, in meteorites. Because certain asteroids contain olivine and formaldehyde, the new results could help explain the existence of such space sugars.
“Olivine is a common mineral,” Vinogradoff said. “That’s the interesting point.” That it could catalyze such a potentially important reaction is something “we had not imagined before.”
https://eos.org/articles/olivine-may-have-given-life-a-jump-start
*
AGNES POCKELS AND SURFACE CHEMISTRY
Agnes Pockels was only nineteen when she noticed something unusual in the dishwater. The year was 1881, and she stood at the sink in her family home in Brunswick, Germany, observing how grease glided across the surface, how soap altered everything, and how the surface of the water itself behaved in puzzling ways. Most people would have finished washing the dishes and moved on. Agnes wrote it down. In her diary she noted, “1881: I have discovered the abnormal behavior of the water surface.”

She longed to study physics formally. At the Municipal High School for Girls, she had shown remarkable enthusiasm for science. But in Germany at that time, universities did not admit women. Even if they had, she was needed at home: her father had returned from military service in Italy weakened by malaria, her mother was chronically ill, and someone had to manage the household. “Like a soldier,” she later said, “I stand firm at my post caring for my aged parents.” Yet soldiers have time to think while they keep watch—and Agnes spent those hours thinking about surface tension.
Her younger brother Friedrich studied physics at the University of Göttingen and sent her his textbooks. She devoured them, teaching herself the theory and mathematics she was not allowed to learn in a classroom. Then she returned to the kitchen sink with new determination. She needed a method to quantify what she saw. So she built one.
In 1882 she designed a device she called the Schieberinne, or sliding trough—essentially a simple metal container filled with water and equipped with a movable partition that allowed her to compress or expand the surface area. To measure surface tension, she placed a tiny disk—about six millimeters wide, roughly the size of a button—on the water’s surface and used a delicate balance to measure the force needed to pull it free. “1882: I have developed the Schieberinne,” she wrote. “1883: I had a large Schieberinne made.”
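A rough back-of-the-envelope estimate (my own, not a figure from the source) shows the scale of force such a balance had to resolve: the pull-off force on a small disk is on the order of the surface tension times the wetted perimeter, so for clean water and a 6 mm disk,

```latex
F \;\approx\; \gamma\,\pi d
  \;\approx\; (0.072~\mathrm{N/m}) \times \pi \times (6\times 10^{-3}~\mathrm{m})
  \;\approx\; 1.4\times 10^{-3}~\mathrm{N},
```

roughly the weight of a 0.14-gram mass (ignoring meniscus corrections), which is why a delicate balance was essential.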
With this homemade setup, Agnes embarked on ten years of independent research. She tested how oils, soaps, and household chemicals altered water’s surface tension. She discovered that even minuscule impurities could drastically affect the surface. She found that compressing a surface covered with certain substances produced a consistent tension up to a critical point—after which the tension suddenly changed. She had identified the moment when a single layer of molecules, only one molecule thick, formed on the surface. She calculated that one molecule occupied about twenty square angstroms of space. This threshold would later become known as the Pockels Point.
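For context, the standard monolayer arithmetic behind an area-per-molecule figure (formalized later in Langmuir’s trough work; the symbols here are illustrative, not Pockels’s own notation): spreading a known mass $m$ of a substance of molar mass $M$ over a measured surface area $A$ gives an area per molecule of

```latex
a \;=\; \frac{A}{N_{\text{molecules}}} \;=\; \frac{A\,M}{m\,N_A},
```

where $N_A$ is Avogadro’s constant. Compressing the film to the critical point identifies the area at which the molecules are closely packed, from which values on the order of twenty square angstroms follow.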
She conducted all this work alone—no lab, no mentors, no collaborators, no financial support—just a woman in her kitchen, making remarkably precise measurements and recording them meticulously. And she had no way to publish them. “I was not able to directly publish the scientific results,” she said later, “partially because the publishing houses here were unlikely to accept contributions from a woman and partially because I had no sufficient information regarding work carried out by others on the same topic.” She felt completely isolated, unsure whether anyone else was even exploring similar questions.
Then, in 1890, she read an article in a German scientific journal. It described the work of Lord Rayleigh, a prominent English physicist, who was studying water surfaces and asking questions strikingly similar to hers.
Agnes made a bold decision. She wrote to him.
On January 10, 1891, she sent Rayleigh a twelve-page letter in German, summarizing a decade of research. She explained her equipment, her observations, and her conclusions. Her tone was modest to the extreme: “My Lord, will you kindly excuse my venturing to trouble you with a German letter on a scientific subject? … For various reasons I am not in a position to publish them in scientific periodicals, and I therefore adopt this means of communicating to you the most important of them.” She told him he could use her findings however he wished: “I completely leave it up to you to use my modest work and this information.”
Rayleigh read the letter and immediately understood its significance. On March 2, 1891, he forwarded it to the editor of Nature—the leading scientific journal in the English-speaking world—with a note: “I shall be obliged if you can find space for the accompanying translation of an interesting letter which I have received from a German lady, who with very homely appliances has arrived at valuable results respecting the behaviour of contaminated water surfaces… The later sections seem to me very suggestive, raising, if they do not fully answer, many important questions.”
Ten days later, on March 12, 1891, Agnes Pockels’s research appeared in Nature under the title “Surface Tension.” She was twenty-nine years old. She had never attended a university. And her kitchen experiments had become part of the scientific literature. A 1971 paper would later call her letter “a landmark in the history of surface chemistry.”
The world nearly missed it. A different reaction from Rayleigh—a dismissal of a letter from an unknown German homemaker, a lack of trust, or a desire to claim credit—might have buried her work forever. Countless women in science have been lost that way. Rayleigh chose otherwise. Because he did, Agnes Pockels’s name endured.
After her publication in Nature, her circumstances began to shift. She continued corresponding with Rayleigh and went on to publish fourteen papers over the next decades, mostly in German journals. In 1893, the University of Göttingen offered her laboratory space, but she declined; her parents still depended on her. Nevertheless, she kept working, improving her methods. She discovered that even airborne dust could disrupt her measurements—an insight professional labs would not match for years. She measured molecular film thicknesses as small as thirteen angstroms. She laid the foundation for an entire scientific discipline.
In 1917, the American chemist Irving Langmuir adopted her technique to study oil films at General Electric. Building upon her sliding trough, he demonstrated the existence of monolayer films and made groundbreaking discoveries about molecular surfaces that transformed physical chemistry. In 1932, Langmuir won the Nobel Prize in Chemistry. His apparatus is known as the Langmuir-Blodgett trough—though some refer to it as the Langmuir-Pockels trough, a name it deserves more widely.
Public recognition reached Agnes only late in life. In 1931, she received the Laura R. Leonard Prize from the German Colloid Society. In 1932—the same year Langmuir received the Nobel Prize—the Technical University of Braunschweig awarded her an honorary doctorate in engineering. She was the first woman to receive it, and remains the only one to this day. She had waited four decades. “I learned to my great joy,” she wrote, “that my work is being used by others for their investigations.”
Agnes Pockels died on November 21, 1935, in Brunswick, the city where she had lived her entire life. She was seventy-three. She never married, never held a paid scientific post, and never stopped wondering about the behavior of dishwater.
Today, the Agnes Pockels Laboratory at the Technical University of Braunschweig introduces children—especially girls—to chemistry through simple, hands-on experiments. No theoretical background needed. No advanced tools. Only curiosity, observation, and the question that once captivated Agnes herself: Why does water behave the way it does?
The motto could have been hers: learning by doing.
With a metal trough, a button, a balance, and forty years of determined, solitary investigation, she showed that a brilliant scientific mind does not require an official laboratory to change the world.
https://www.facebook.com/share/p/1KVx2VQb95/
*
WHY GERMANY LOST THE TECH WAR FROM DAY ONE OF WW2
Many people believe that the Allies only became smarter and better toward the end of the war. That is wrong. In fact, the Allies held an enormous technological advantage from 1939.
It was not about some new gun or a bigger bomb. It was about the fuel that made everything go.
The Power of Real Oil
As soon as the war commenced, the Allies (the UK in particular) enjoyed a colossal advantage: an unending supply of oil tankers arriving from Texas and Persia.
This gave them the finest high-octane fuel in the world.
Their engines could run harder and faster without breaking.
The Second-Rate Struggle of Germany
German synthetic fuel factory
The Allies had the best oil, whereas Germany was stuck with second best. The Germans had no oil fields of their own and were forced to attempt producing synthetic oil and fuel.
It was low-octane.
It was low-quality.
Since their fuel was not first-grade, their engines were effectively second-grade as well. In a war of engines, the side with poor fuel is already losing.
The 1,000-Mile Gas Station
Ploesti, Romania, was the source of virtually all of Germany’s natural oil. It was a ticking time bomb.
It was 1,000 miles away from Berlin.
It was perilously close to the Soviet border: only 43 miles away.
The German army relied on a gas station that was too far from home and too near its enemies. A single strike there could paralyze the entire German war machine.
In 1939, the Allies led the technology battle because they possessed the best oil in the world. Germany was never a step ahead, attempting to fight a high-speed war on low-speed fuel. Before the Germans even realized anything was wrong, the Allies were miles ahead.
~ Tyson Arnold, Quora
David Eliezer:
The Germans also simply didn’t have enough fuel. They tried to compensate by making artificial fuel from coal using the Fischer-Tropsch process, but it was expensive, didn’t produce enough, and was not of high enough quality (it lacked aromatic molecules).
The Allies also had excellent chemical engineering, which enabled them to produce exceptionally high quality fuels, with very high octane.
*
MISHA FIRER: RUSSIA’S CRASHING ECONOMY
Russian winter has returned to the Motherland. In Kamchatka, which is also a proverbial term for the boondocks, snowdrifts have reached the fifth floor of Soviet-era apartment blocks. KGB agents have to devise new methods of execution.
There’s a Russian saying: “A man is a boy who against all odds has survived into adulthood.”
With the crashing economy, Russian employers feel almighty. They can do whatever they want and face no negative consequences for their unfair actions, at least not yet. They lower wages, withhold wages for months, and fire people on the spot without severance pay.
The line of applicants for every position is miles long, as unemployment has hit an all-time high. And no, you won’t hear about it from Kremlin-controlled news sources.
The latest line of consolidated attack on employees is legislation that permits bosses to send their staff to a psychiatrist if their mental acuity is deemed to have deteriorated.
In Russia, your boss can literally commit you to an insane asylum, and Moscow has a record number of mental institutions among European capitals, which explains why this society is so lacking in critical thinking. If you speak up, there’s a greater chance you wind up lobotomized than jailed.
As local residents flee the coal-mining city of Vorkuta, above the Arctic Circle, they sell their apartments for the price of a garage in Moscow. IT developers snap up the apartments and move in to enjoy the snowy wasteland and work in peace.
Twelve Soviet aircraft, nine Tu-204/214s, one An-148, and two Il-96s, scrapped after forty years of service, have been returned to service with such airlines as Burning Wings and YouScare. Sorry, I mean Red Wings and Utair.
Two more restored Tu-204s are planned for delivery to operators this year, good for the several months before they fall out of the sky. Scrapped Boeing 747s whose service lives have expired are also being reinstated, patched up with knockoff aircraft parts.
Reminds me of that joke:
A nervous passenger asks a flight attendant:
"Do your airline planes crash a lot?"
"Only once."
~ Misha Firer, Quora, January 20, 2026
*
“SOCIALISM IS A HATE CRIME”
It is remarkable that, despite its long record of failure, socialism is now more popular than ever among college students and in progressive precincts of the Democratic Party, at least judging by the cult status of figures such as Bernie Sanders and Alexandria Ocasio-Cortez. Now an avowed socialist has been elected mayor of New York, the commercial capital of the United States and home to that great capitalist institution, the stock market. Even more recently, socialists here and around the world have spoken out in unison against the arrest of Nicolás Maduro, the socialist dictator of Venezuela.
It is ironic that these socialists, along with their supporters and fellow travelers, like to censor conservatives for, allegedly, promoting “hate” and “division.” On that basis, they have banned conservative speakers from appearing on college campuses, and just a few years ago urged Twitter and Facebook to close the accounts of conservatives who spoke out against socialism.
This raises the question: given the historical record, why don’t we label socialism as a hate crime?
After all, the evidence for socialism’s malignant effects is obvious to anyone with sufficient curiosity to open a history book. Socialists are responsible for the murder, imprisonment, and torture of many millions and perhaps hundreds of millions of innocent people during the ideology’s heyday in the middle of the twentieth century. That history of murder and tyranny continues on a smaller scale today in the handful of countries living under the misfortune of socialism—Cuba, North Korea, and (most recently) Venezuela.
How do socialists escape the indictment that they are purveyors of tyranny and mass murder?
Some of them deny that Stalin, Mao, and others were true socialists, or, equally absurdly, assert that true socialism has never really been tried. But socialism has been tried many times in many places and has always failed.
Senator Sanders, Alexandria Ocasio-Cortez, and Mayor Mamdani claim that they are for something called “democratic socialism,” a more peaceful version of the doctrine, but that’s what Lenin, Mao, and Castro said until they seized power and immediately began to sing a different tune. “Democracy” and “diversity” are what they say when out of power; tyranny and raw power are what they practice once in power. That is the tried-and-true technique of all socialist movements.
R. J. Rummel, a noted scholar of political violence and totalitarian movements, coined the term “democide” to describe large-scale government killings for political purposes—in other words, politically motivated murder. While communists and socialists have not had a monopoly on democide, these movements have been responsible for far more political killing than any other political movement or form of government in the modern era.
After looking at the facts, Rummel, writing in 1993, drew this conclusion:
In sum the communists probably have murdered something like 110,000,000, or near two-thirds of all those killed by all governments, quasi-governments, and guerrillas from 1900 to 1987. Of course, the total itself is shocking. It is several times the 38,000,000 battle-dead that have been killed in all this century’s international and domestic wars. Yet the probable number of murders in the Soviet Union alone—just one communist country—well surpasses the human cost of wars.
Rummel suspected that the estimate of 110 million killed may be too low. In fact, he believed the death toll from socialist democide in the twentieth century may be as high as 260 million. Below is a breakdown of the bloody record of socialist murder and violence in the twentieth century.
The Soviet Union was the first large-scale experiment in socialism, commencing with the Bolshevik Revolution in October 1917. For those who like to think that there is a meaningful distinction between communism and socialism, it should be noted that USSR stands for the Union of Soviet Socialist Republics. Whatever Lenin and Stalin thought they were doing, they agreed they were engaged in a socialist enterprise.
Rummel wrote that “the Soviet Union appears the greatest mega-murderer of all, apparently killing near 61,000,000 people,” with Stalin being directly responsible for at least 43 million of these deaths, mostly via forced labor camps and government-induced famines. There are even higher estimates, with some historians suggesting that as many as 126 million people died as a result of Soviet policies in the twentieth century.
In what has come to be known as the Holodomor, in the early 1930s Stalin’s government killed millions of peasants, most of them Ukrainians, who resisted collectivization or failed to meet mandated production quotas. Several distinguished historians have documented this catastrophe. Robert Conquest, in The Harvest of Sorrow (1986), estimated that 11 million people died of starvation or outright murder in European sections of the Soviet Union between 1932 and 1934. Anne Applebaum, in her book Red Famine: Stalin’s War on Ukraine (2017), agreed with Conquest’s estimate and showed that these deaths arose as a consequence of deliberate Soviet policy.
A few years later, between 1936 and 1938, Stalin orchestrated a campaign of repression and terror that, according to Conquest’s The Great Terror (1990), led to the murder of some 700,000 people who were judged to be opponents of the socialist regime. Many of those killed were leaders of the 1917 Bolshevik revolution whom Stalin came to regard as traitors or rivals for power. Some historians judge the toll of Stalin’s terror to have been greater than one million killed.
At the time, and for decades thereafter, Western apologists either denied that killings on this scale had occurred or justified them as necessary to maintain the regime. It was only in 1956, when Nikita Khrushchev admitted to some of Stalin’s crimes, that Western fellow travelers reluctantly acknowledged their monstrous scale.
Then there is the awkward example of Nazi Germany, a regime rivaled in horror and mass murder only by Stalin’s Soviet Union and Mao’s China. Rummel does not include the Nazis in his calculations of socialist democide, though this may be judged an oversight on his part, because Nazism was in fact a socialist movement. The term “Nazi” was shorthand for Hitler’s political party, the National Socialist German Workers’ Party (NSDAP in German). Hitler and his henchmen were socialists, albeit of a somewhat different stripe than Lenin and Stalin.
The scale of Nazi murder across nearly the whole of the European continent is difficult to quantify. Rummel, whose estimates mirror those of other scholars, concluded that the Nazis killed perhaps twenty-one million innocent people via outright murder, including six million Jews murdered in concentration camps and many other groups killed by Nazi institutional practices such as forced labor, “euthanasia,” forced suicides, and medical experimentation.
We now come to the deadliest socialist regime of them all: Communist China. Following the Communist victory in the Chinese Civil War in 1949, Mao Zedong launched a series of campaigns that put him in the same league as Stalin and Hitler in terms of the number of people murdered, tortured, and imprisoned.
In the first phase of Mao’s rule, from 1948 to 1951, Mao sought to destroy the property-owning class by killing at least one landlord in every village via public execution. One of Mao’s deputies said in 1948 that as many as thirty million landlords would have to be eliminated. Hundreds of thousands were shot, buried alive, dismembered, and otherwise tortured to death in the early years of the regime. Mao and his comrades killed at least 4.5 million Chinese during this period, according to estimates compiled by Rummel and confirmed by other scholars.
Mao, alas, was just getting started. During the 1950s the Chinese Communists carried out murder campaigns against Christians and other undesirables, causing the deaths of thousands and perhaps hundreds of thousands of innocent people.
In the so-called Great Leap Forward (1958–62), a misnomer if ever there was one, Mao accelerated his campaign for collectivization and industrialization, emulating Stalin’s policies of the 1930s, and with eerily similar results. Frank Dikötter’s carefully researched book Mao’s Great Famine (2010) concludes that a staggering 45 million Chinese were killed via murder, torture, starvation, and imprisonment over that four-year period.
In Tombstone: The Great Chinese Famine, 1958–62 (2012), the journalist Yang Jisheng, using government sources, placed the number of “unnatural” deaths at 36 million, as Communist officials seized land and produce from peasants to redistribute elsewhere and systematically killed any and all who stood in the way of the regime’s collectivist policy. Some have described this episode as the single greatest mass murder in the recorded history of the world.
In 1966 Mao launched the Great Proletarian Cultural Revolution, designed to purify Communist Chinese ideology by purging remaining capitalist and traditional elements. This is the stock response among socialists when confronted with the failure of their schemes: counterrevolutionary elements are to blame. The brutal campaign of state-sponsored murder, torture, and persecution went on for a full decade through different phases of insanity, finally ending with Mao’s death in 1976.
Merle Goldman, a noted scholar of modern China, estimates that as many as a hundred million people were persecuted during the Cultural Revolution, and that between five and ten million people were killed via executions, communal massacres, and starvation. Rummel placed the death toll from the Cultural Revolution at 7.7 million, with many millions more suffering persecutions of various kinds. The Chinese government today is understandably embarrassed by this barbaric episode in its recent history and has withheld records that would allow scholars to arrive at a more exact estimate of the numbers killed, injured, and persecuted.
Thus, over a period of just three decades, Mao’s socialist government was responsible for the killing of some fifty to sixty million Chinese, most of those casualties being incurred in three brutal episodes of political cleansing and socialist “reform.”
In total, the three “super socialists”—Stalin, Hitler, and Mao—were thus responsible for the murders of well over a hundred million people between the years 1930 and 1976. In the Hall of Fame of socialism, these three occupy exalted platforms.
Let us now move to the “minor leagues” of socialism. In Cuba, Rummel estimated that Castro’s government killed at least 73,000 people for political reasons, and perhaps as many as 140,000, in a country with a population of eleven million today but just six million when Castro seized power in 1959. He staged hundreds of public executions after taking power, imprisoned thousands of opponents—real or suspected—and seized property from landowners and foreign corporations.
Compared to his Communist brethren, Castro appears almost humane in terms of the “modest” scale of his killings. In reaching this conclusion, however, one must leave to one side Castro’s wish to launch a nuclear attack against the United States in 1962, in retaliation for the U.S. demand for the removal of offensive Soviet nuclear weapons from the island. Like other socialists, Castro was ever ready to consider extreme measures.
*
In Cambodia between 1975 and 1979, the socialist Khmer Rouge regime, under the leadership of Pol Pot, murdered some two million people in a country with a population of only seven million, according to estimates compiled by Rummel and verified by a war-crimes tribunal set up in 2001 by a successor government in Cambodia.
Below is Rummel’s summary of this catastrophe:
In proportion to its population, Cambodia underwent a human catastrophe unparalleled in this century. Out of a 1970 population of probably near 7,100,000, Cambodia probably lost slightly less than 4,000,000 people to war, rebellion, man-made famine, genocide, politicide, and mass murder. The vast majority, almost 3,300,000 men, women, and children (including 35,000 foreigners), were murdered within the years 1970 to 1980 by successive governments and guerrilla groups. Most of these, a likely near 2,400,000, were murdered by the communist Khmer Rouge.
Pol Pot and his comrades sought to follow Mao’s lead and purge the socialist movement of impure elements. Doing so meant the massacre of religious and national minorities, intellectuals, and city dwellers. Hundreds of thousands of victims were murdered in the “killing fields,” various sites across the country where Khmer Rouge soldiers and officials carried out executions and buried victims in mass graves. This slaughter ranks near the top of the list of socialist atrocities in terms of the proportion of the population killed.
Some socialists and fellow travelers have blamed the U.S. war in Vietnam for the slaughter, apparently because socialists are liable to act like madmen if provoked. It was, of course, to prevent this kind of lunacy that the United States intervened in the first place in Southeast Asia.
The Democratic People’s Republic of Korea (North Korea) must be judged the most bizarre of all socialist states, which is saying something in light of the standard established by the regimes listed above. The fact that the whole country is an open-air prison camp with a regimented population does not make it much different from other socialist regimes. The country is unusual in having a dynastic government run by the Kim family (now in its third generation of rule), with the hereditary succession written into the fundamental law of the country.
Rummel estimated that, in a country of twenty-five million people, between 700,000 and 3.5 million people have been murdered in the North Korean democide, with a reasonable midpoint being around 1.6 million. It is difficult to quantify the victims, because North Korea is a closed society. Rummel judged that the great proportion of those killed by the regime died in prison camps from forced labor, starvation, and illness.
During the Korean War, Communist officials followed North Korean troops as they advanced into South Korea and systematically massacred South Korean civilians perceived to be anti-communists. They then repeated these massacres as North Korean troops retreated northwards. In addition, the regime impressed some 400,000 South Koreans into its army, a large proportion of whom died after being forced into the most dangerous or laborious assignments. North Korea also failed to account for many thousands of American prisoners of war.
The contemporary case of Venezuela is different from other experiments in socialism because of the relative absence of democide, at least to the extent catalogued above. Venezuelan socialism has instead resulted in economic collapse and social chaos. In Venezuela, socialists did not seize power by violent revolution but were initially elected by the voters, similar to Hitler’s accession to power. In socialist regimes elsewhere, the kind of economic failure now taking place in Venezuela has provoked repression, extrajudicial decrees, the elimination of legal protections, and mass murder. Beginning under Hugo Chávez and continuing under Maduro, legal and constitutional protections have evaporated in Venezuela, but the regime did not resort to large-scale killings, perhaps because it is no longer a practical option. Now that is progress.
Venezuela was one of the more prosperous South American countries for most of the twentieth century, owing to a diversified economy and, more recently, to abundant oil reserves that allowed the country to accumulate export surpluses. Oil profits promoted a higher standard of living in the country, though they also drew more labor and capital into the oil industry and put the country’s economy at the mercy of the ebb and flow of international prices. When Chávez won the presidency in 1998, he moved quickly to nationalize the oil industry, raise taxes on corporations, and redistribute land. He also supported a revised constitution for the country giving the president a longer term and more power and granting new social rights to the population.
Rising oil prices in the early years of the regime allowed Chávez to increase social spending and distribute funds to constituent groups, even as foreign corporations withdrew capital from the country. Since socialists do not believe in the price system, Chávez did not fully understand that oil prices could go down as well as up. In the event, oil prices collapsed in the great recession of 2008, leading to inflation, collapse of the currency, capital flight, and general economic chaos—all inevitable consequences of socialist policies.
In addition, Chávez and Maduro mismanaged the country’s oil industry, expelling foreign interests, failing to invest in new technology, and subjecting it to state ownership. In 1998, before socialists came to power, Venezuela produced 3.5 million barrels of oil per day; in 2025 that number is down to around one million barrels per day. This is yet more proof that, despite wanting to run everything, socialists are incapable of running anything except a prison camp.
In response to protests and mounting opposition, the socialist government cracked down on critics. In 2013, Maduro, Chávez’s successor, requested an enabling law to permit him to rule by decree. The next year he created the “Ministry of Supreme Happiness” to coordinate government social programs. The measures did not “work,” if by that term we mean a return to prosperity and stability; of course, they are never going to “work,” since socialism is an ideological doctrine rather than one of workable economics. The ongoing crisis in Venezuela is a direct result of these failed policies.
To make matters worse, the regime has sought to export its troubles around the region and to the United States, by running drugs and encouraging gang members to enter the United States via an open southern border during the Biden years. This brought down a criminal indictment on Maduro from the United States, which he never thought would be enforced. In the event, the Trump administration arrested him and threatened to bring that country’s unfortunate experiment in socialism to an end.
Some say that Venezuelan voters chose this course when they elected Chávez and then Maduro, and so deserve to reap what they have sown. But given how flagrantly the regime rigged elections, it would be unfair to blame the poor people of Venezuela, many of whom oppose the socialist government. Others may have voted for the socialists out of naiveté or misplaced hope, just as some Americans have done recently in New York’s mayoral election.
The question has often been asked: why does the same thing happen over and over again wherever socialism has been tried? Socialist plans and policies—central planning, five-year plans, collectivization of agriculture, nationalization of industry, the concentration of power into the hands of a few—lead inevitably to economic collapse and repression, and often large-scale killing. Socialism always and everywhere begins with idealistic promises and ends in barbarism.
F. A. Hayek answered this question as long ago as 1944 when he published The Road to Serfdom, his classic critique of socialism. At that time, the socialist experiment was still in its early stages with just two examples from which to draw lessons, the Communist regime in Russia and Hitler’s Nazi regime in Germany. The brutal history of socialism was yet to play out fully in the post-war era, but the lessons Hayek drew from Stalin and Hitler would turn out to apply perfectly to Mao, Castro, the Kim dynasty, and all of the socialist tyrants that came later.
As Hayek pointed out, in socialist movements there is a tendency for the most brutal and unscrupulous people to rise to the top, because they are the types willing to take whatever steps are necessary to seize absolute power, and they relish exercising it. Lenin, Stalin, Hitler, Mao, and Pol Pot were not the kinds of people one might have encountered in faculty lounges or middle-class town meetings. They were blackguards and thugs one and all, which is the key trait needed to rise to the top in a movement in which power goes to those willing to use extreme measures for the sake of “progress.”
Socialist policies, moreover, are always going to fail because it is impossible for central planners to allocate capital, goods, and services efficiently across a large economy. Capitalism has the price system to aggregate that information; socialism has planners who know little about how the economy actually works. When there arose shortages of food or housing or military equipment—when socialist policies failed—leaders were faced with a choice of admitting failure and abandoning the socialist path or doubling down on their policies and preserving their power. It was in their nature to choose the latter course and thus to press forward with more extreme measures, which typically involved the identification of “counterrevolutionary” scapegoats. From there it was but a few steps to the catastrophic outcomes described above: show trials, terror famines, mass starvation, cultural revolutions, killing fields, and democide.
New York voters who elected a socialist mayor are unlikely to face the worst of these consequences, since they reside in just one city in a free country where mass arrests or mass killings of the kind cited above will not be permitted. But, if history is a guide, they are likely over the next few years to deal with rising crime, deteriorating city services, failed experiments, wasted public funds, people and corporations fleeing the city, and extremist rhetoric designed to cover for the accumulating failures. It is possible that the damage done will reach the point where New York’s decline becomes irreversible. ~ James Piereson
https://newcriterion.com/dispatch/socialism-is-a-hate-crime/
*
WHICH GOD HAS FAILED?
Political-intellectual currents in the aftermath of the collapse of Soviet Communism continue to stimulate compelling questions, some of them timeless, others closely linked to the historic event. How do people acquire strong political beliefs and commitments, and why do they retain them even after they have proven to be destructive, foolish, contradictory, or irrational, as the case may be? Under what conditions do idealism and fanaticism become indistinguishable? How do moral and political values intersect?
Such questions remain relevant because of the familiar spectacle of fanatics slaughtering, with a clear conscience, their perceived enemies to advance a cause and rid the world of undesirables. Puzzling questions are also raised by the seemingly unyielding disposition of some intellectuals to overcome the “cognitive dissonance” between historical evidence and their residual commitments. Clinging to such beliefs is especially remarkable when specific historical events and revelations have decisively invalidated them, when their ethical substance has been shown to disappear without a trace in the process of their attempted realization.
Over a decade after the fall of Soviet Communism, it is apparent that numerous major Western public figures, opinion-makers, and intellectuals have preserved some of their core beliefs, if not in the defunct political systems themselves, at least in the supporting ideas; the collapse did not discredit these ideas in their eyes. Their continuing devotion is of great intellectual-historical and psychological interest.
Daniel Singer’s belief that “the tragic abortive attempt [in the former Soviet Union] proves nothing about the impossibility … of building socialism” is quite typical. Cornel West gets around the problem simply by declaring that “Marxist thought becomes even more relevant after the collapse of communism in the Soviet Union and Eastern Europe than it was before.”
Louis Althusser accomplished the same by, in Tony Judt’s words, “remov[ing] Marxism altogether from the realm of history, politics, and experience, and thereby [rendering] it invulnerable to any criticism of the empirical sort.” John Cassidy in an article on “The Return of Karl Marx” proposed that “Marx’s legacy has been obscured by the failure of Communism”—another way of suggesting that Marxism had little to do with Communist states.
A survey of media responses to the 150th anniversary of the publication of the Communist Manifesto, published in the journal Society, found widespread reverence and “unrestrained celebration.”
https://newcriterion.com/article/which-god-has-failed/
"The Soviet Union was never a socialist country. It was a fascist country." ~ Misha Iossel
*
WOMEN WHO CHOOSE MOTHERHOOD SOLO VIA A SPERM DONOR
Lucy's journey to becoming a mother began with IVF and donor sperm, a choice she made during the pandemic after she realized how much she missed seeing her sister's and friends' children.
She jokingly told her parents that she could have a child on her own and recalls: "I expected them to laugh it off but they said I should and got excited about it.
"I wasn't expecting that reaction and it made me think I actually should do it," she told Radio 4's Woman's Hour.
In her 20s, Lucy was engaged and always thought she would be a mother. When she found herself single just before her 30th birthday she says she went through a "real period of grief around what if that doesn't happen for me".
Lucy's first son is now almost three years old and she is pregnant again with sperm from the same donor.
She doesn't know his identity or even what he looks like.
"I look at my boy all the time and think about how much he looks like the donor, but it's impossible to know and it doesn't matter because he just looks like himself."
She's excited to give birth to her second child and says it will be "interesting to see what the new baby looks like and whether they will look similar or have similar traits.”
The number of mothers deciding to have a baby solo is rising rapidly. Data from the HFEA, the UK's fertility regulator, show that in 2019, 3,147 single women in the UK received fertility treatment with donor sperm. By 2022 — the most recent year for which figures are available — the number had risen to 5,084, an increase of over 60%.
Nina Barnsley, director of UK-based charity Donor Conception Network, says one of the biggest factors for women choosing the solo route is time, "both in fertility and wanting children at a certain stage in life."
Open conversation
Barnsley says choosing proactively to be a solo mum can come with additional emotional, societal and practical challenges.
Many women can expect questions around who their donor is and while it's "usually well meaning, it can feel intrusive."
Lucy says she's been open from the start about how she had her son.
She has already begun explaining his conception to him using language she describes as "simple but honest."
Most importantly, she wants him to "develop confidence in talking about it".
"I don't want him to think his family isn't as acceptable or as solid as someone who has two parents."
Lucy ignores those who call her decision selfish.
"A child's happiness isn't about having one parent or two, it's about love, care and time.”
While Lucy knew that choosing this route meant she would be a single mother, she never felt alone as part of the plan was for her parents to be heavily involved.
During her pregnancy in 2023, her mum became seriously ill, a turn that reshaped not just her parenting plan, but her world.
Last year, when her son was 18 months old, Lucy's parents died within six weeks of each other.
"There were times when I thought, how am I going to do this but it was a case of just having to navigate it because there was no choice."
Yet, in those months of illness and loss, her son helped, she says.
"He made everything better because it was a huge distraction."
Kim, who is now 30, is the grown-up child of a mum who, like Lucy, chose to become a solo mother via a sperm donor.
He says the absence of a father never felt like a void and he never resented his mum, Emily, or wished his family was any different.
What has shaped him more was the way Emily, a retired social worker, raised him, rather than how she conceived him.
"Having seen how much she did by herself, it has given me a strong sense of independence," he says.
He adds that he doesn't understand those who think his mum's decision to go it alone was selfish.
"The real selfish thing to do is have a child when you're not absolutely sure that you want one."
After growing apart from a long-term partner in her 20s, Emily wasn't sure about getting into another relationship, and when she realized she could have a baby without a boyfriend, she knew "that's the way it was going to be."
She says the best part about parenting alone was not having to negotiate or compromise.
"Once I'd made a decision, no matter how hard, I never had to compromise and I could always have it my way."
The 72-year-old says she has no regrets about how things turned out, and that her son has become "just the sort of person I'd have asked for and he couldn't be a more ideal son for me."
https://www.bbc.com/news/articles/clye738nxz0o
*
THREE MASSIVE CHANGES DUE TO GLOBAL WARMING
World leaders are heading into the final days of COP30, the United Nations climate meeting in Brazil. They are trying to agree on how to curb global warming and pay for the costs of an increasingly hotter planet.
For the past eight years, one of the primary objectives of the annual negotiations has been to limit global warming to 1.5 degrees Celsius, compared to the temperatures in the late 1800s. That temperature goal was established after a landmark international scientific report laid out the catastrophic effects of exceeding that amount of warming.
But that goal is no longer plausible, scientists say. Humanity has not cut planet-warming pollution quickly enough, and the planet will exceed 1.5 degrees Celsius of warming, likely in the next decade, according to a recent United Nations report.
However, all is not lost. If countries can cut overall greenhouse gas emissions in half by 2035, scientists say the planet would quickly return to lower levels of warming.
"We must move much, much faster on both reductions of emissions and strengthening resilience," U.N. climate chief Simon Stiell told world leaders at COP30. Right now, countries are pursuing policies that would cut emissions by just 12% by 2035.
"The science is clear: We can and must bring temperatures back down to 1.5 [degrees Celsius] after any temporary overshoot," Stiell said.
If countries follow through on current promises to reduce greenhouse gas emissions, the latest estimates suggest that Earth's temperature will top out around 2.5 degrees Celsius of warming this century.
The latest science also makes clear the profound human costs of exceeding 1.5 degrees Celsius of warming, even temporarily. The planet has warmed about 1.3 degrees Celsius, according to the World Meteorological Organization. And communities are already experiencing more dangerous storms, flooding and heat waves.
When the planet heats up beyond 1.5 degrees, the impacts don't just get slightly worse. Scientists warn that massive, self-reinforcing changes could be set off, with devastating impacts around the world.
Such changes are sometimes called climate tipping points, although they're not as abrupt as that term would suggest. Most will unfold over the course of decades. Some could take centuries. Some may be partially reversible. But they all have enormous and lasting implications for humans, plants and animals on Earth.
And every tenth of a degree of warming makes these tipping points more likely, according to a new report from 160 international climate researchers.
CORAL REEFS COULD BE GONE FOREVER

[Photo: Bleaching coral in Kahalu'u Bay in Kailua-Kona, Hawaii. Corals are highly sensitive to heat, and as the oceans warm, the future of reefs is in peril.]
For coral reefs, the tipping point may have already begun. Widespread coral die-offs have been seen around the globe as ocean temperatures heat up, making it the first domino to fall, according to a new report.
By overall area, coral reefs are a tiny part of the ocean. But they're a bedrock ecosystem for marine life, supporting an estimated 25% of all species.
Corals are highly sensitive to heat. When marine heat waves hit, corals come under stress, and they expel the algae that live inside them and that they need to survive. The reefs then turn a ghostly white color.
A bleaching event doesn't necessarily mean the end for a coral reef. Corals have the ability to recover, given enough time. But repeated heat waves, as seen at Australia's Great Barrier Reef and off the coast of Florida, can kill a reef, leading to the collapse of the ecosystem.
Oceans are also becoming more acidic as they absorb the carbon dioxide emitted by humans from burning fossil fuels. That also stresses corals, making it difficult for them to build their skeletons.
High ocean temperatures caused a global coral bleaching event in 2023-24, the second in the last 10 years. If the world passes 2 degrees Celsius of heating, an estimated 99% of the world's coral reefs could be lost. The damage is happening faster than scientists expected. Combined with the effects of pollution and human development, half of all reefs worldwide will be in unlivable conditions by 2035, according to a recent study from the University of Hawai'i at Mānoa.
"The coming decades will bring, I think, unprecedented change for both these reef systems and humanity in general," says Erik Franklin, professor at the Hawaii Institute of Marine Biology at the University of Hawai'i at Mānoa, who worked on the study.
It's estimated that half a billion people around the world depend on coral reefs for food, income and livelihoods. Losing reefs would destabilize many countries, along with risking extinction for marine life that can only be found on coral reefs.
“There are entire societies and economies that are built around reef systems, especially in equatorial and tropical regions," Franklin says. "So these societies will be in dire straits.”
Many scientists are searching for "refuges" — pockets of the ocean where conditions might remain livable for coral reefs. They're also selectively breeding corals, both in Florida and Australia, boosting the corals' natural abilities to withstand heat. The hope is that it might help corals hold on, surviving just long enough until humans can get their heat-trapping emissions under control.
ICE SHEETS IN GREENLAND AND WEST ANTARCTICA COULD COLLAPSE
Ice sheets are the massive frozen expanses that cover Greenland and Antarctica and contain about two-thirds of the freshwater on Earth. Climate change is already causing them to melt and raising sea levels around the world.
Snow and ice are melting more quickly than they are being replaced on the world's largest ice sheets. That's causing the ice sheets to get out of balance and rapidly destabilize, sending enormous amounts of freshwater into the ocean and driving global sea level rise.
But if the Earth lingers at, or above, 2 degrees Celsius of warming, as it is on track to, that melting will steadily accelerate. Scientists warn that will cause parts of the ice sheets to collapse, sending massive amounts of water into the world's oceans.
The million-dollar question is how quickly that collapse will occur. "Collapse tends to be a bit of a loaded word. People think of it like a building collapse," says Ian Joughin, a glaciologist at the University of Washington who has spent decades studying how giant glaciers move and change.
"Maybe a better timescale for an ice sheet [collapsing] is the Roman Empire," Joughin explains. Like a dying empire, the ice sheets in Greenland and West Antarctica are huge. It will take decades or even centuries for them to disintegrate.
This year marks the 29th year in a row that Greenland has lost more ice than it gained.
In 2021, rainfall was recorded at the ice sheet's highest point, rather than snow, a sign that warmer temperatures were triggering widespread melting.
As temperatures continue to warm, scientists say the 2-mile-thick ice sheet in Greenland is getting out of balance. Snow and ice are melting faster than they're being replaced, and as the ice melt accelerates, the process is difficult to stop.
Research suggests that the collapse of the West Antarctic ice sheet may already be underway. A massive glacier there, which covers an area about the size of the state of Washington, is melting quickly in response to climate change and could splinter into the ocean in the coming decades.
The Getz Ice Shelf in West Antarctica. Scientists are working to figure out exactly how quickly ice in West Antarctica is collapsing into the sea. The answer has profound implications for coastal communities around the world.
If that glacier melts entirely, it will add so much water to the oceans that sea levels will rise about 2 feet. If the entire West Antarctic ice sheet melts, scientists estimate that sea levels will rise about 12 feet.
Due to their enormous size, ice sheets have a huge amount of inertia. Once the melt process gets underway, it's difficult to stop.
"It takes a few hundred years to really get going," says Joughin. "And it's kind of a snowball effect, where the faster it goes, the more it's going to go."
But it will take a long time for people around the world to feel the most extreme effects of that melt. "It could be anywhere from two or three hundred years to a thousand years," says Joughin.
If humans slow down the pace of global warming, it will help slow down the pace of ice melting, giving the billions of people who live along coastlines more time to adapt.
PERMANENTLY FROZEN GROUND IS THAWING
Climate change is causing permafrost — the permanently frozen ground in the Arctic — to thaw. And as the Earth approaches 2 degrees Celsius of warming, that thawing ground will cause both local and global problems.
Let's start local. When permafrost thaws, the ice that's trapped in the ground turns into water and drains away. "It can have really profound consequences," says Merritt Turetsky, a permafrost researcher at the University of Colorado Boulder. "We can see lakes draining overnight. We can see ecosystems becoming much drier in some areas, because the permafrost was actually holding the water up at the surface."
That's because when the ground is frozen, it's impermeable to moisture, like the lining of a bathtub. "When it thaws, we pull the drain out of the bathtub," Turetsky explains.

Scientist Keith Larson [in red jacket] walks past a pond formed by thawing permafrost in Sweden.
Thawing permafrost has profound impacts for the millions of people who live in the Arctic. In many places, the land is sinking as it thaws, cracking the foundations of buildings, buckling roads and runways and kinking pipelines. That will accelerate as the Earth heats up more.
Thawing permafrost also has global climate implications. Permanently frozen ground is like the world's freezer: millennia of dead plants and animals are locked up in permafrost.
"When permafrost thaws it's a little like losing power to your freezer. That food starts to rot," explains Ted Schuur, a permafrost expert at Northern Arizona University. Bacteria and fungi start to digest the carbon-rich soil, releasing planet-warming methane and carbon dioxide into the atmosphere.
Basically, it's a self-reinforcing feedback loop: human emissions cause the planet to heat up, that heat thaws permafrost, and the thawing permafrost releases still more emissions.
In recent years, advances in Arctic data collection have allowed scientists to measure greenhouse gas emissions from permafrost more accurately, says Schuur. The upshot has been sobering. "This new science is showing that this is happening right now," he explains. The so-called tipping point has already begun.
But how much extra carbon Arctic permafrost ultimately releases in the future is up to humans. "The faster we can decarbonize society today, the more permafrost carbon we can keep in the Arctic ground where it belongs," says Turetsky — for example, by using renewable energy sources instead of burning fossil fuels.
https://www.npr.org/2025/11/19/nx-s1-5593087/climate-tipping-points-cop30-brazil-coral-glaciers-carbon
*
ENORMOUS RESERVOIR OF FRESH WATER FOUND OFF THE EAST COAST
Researchers are figuring out how a giant fresh water reservoir ended up under the ocean off the East Coast.
A giant reservoir of "secret" fresh water off the East Coast that could potentially supply a city the size of New York City for 800 years may have formed during the last ice age, when the region was covered in glaciers, researchers say.
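A quick back-of-the-envelope check gives a sense of the scale the "800 years" claim implies. The daily consumption figure below is an assumption, not a number from the article — roughly 1 billion gallons per day is a commonly cited estimate for New York City's water use:

```python
# Rough scale check of the "supply NYC for 800 years" claim.
# Assumption (not from the article): NYC uses ~1 billion gallons/day.
GALLONS_PER_M3 = 264.17

nyc_daily_m3 = 1e9 / GALLONS_PER_M3            # ~3.8 million m^3/day
implied_volume_m3 = nyc_daily_m3 * 365 * 800   # total over 800 years

# ~1,100 billion m^3, i.e. on the order of 1,100 cubic kilometers
print(f"Implied volume: {implied_volume_m3 / 1e9:,.0f} billion m^3")
```

Under that assumption, the claim implies a reservoir holding on the order of a thousand cubic kilometers of fresh water.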
Preliminary analyses suggest the reservoir, which sits beneath the seafloor and appears to stretch from offshore New Jersey as far north as Maine, was locked in place under frigid conditions around 20,000 years ago, hinting that it formed in the last glacial period due, partly, to thick ice sheets.
Last summer, researchers went on an expedition to follow up on reports from the late 1960s and early 1970s of fresh water beneath the seafloor off the East Coast. "It was quite the project and sort of a lifelong dream," Brandon Dugan, the expedition's co-chief scientist and a professor of geophysics at the Colorado School of Mines, told Live Science.
The research voyage, known as Expedition 501, lasted three months and extracted 13,200 gallons (50,000 liters) of water from beneath the seafloor in three locations off the islands of Nantucket and Martha's Vineyard. The results aren't finalized yet, but so far it looks as if the reservoir might stretch farther underground than early reports suggested, meaning it might be even bigger than previously thought.
Dugan and his colleagues also think they know what created the reservoir thanks to preliminary radiocarbon, noble gas and isotope analyses, he said.
Fresh water in the region was first reported 60 years ago by the U.S. Geological Survey (USGS), during offshore mineral and energy resource assessments between Florida and Maine. "In a very peculiar way, they found fresh water in the sediment beneath the ocean," Dugan said. "In the 1980s, some of the USGS people came up with ideas of how that fresh water could get there. Then it went quiet for a while — no one was talking about it."
In 2003, Dugan and Mark Person, a professor of hydrology at the New Mexico Institute of Mining and Technology, rediscovered these records and came up with three ideas of how fresh water could end up beneath the ocean. One way a submarine freshwater reservoir can form is if sea levels stay very low for a long time and rainfall seeps into the ground. Then, when sea levels rise again over hundreds of thousands of years, that fresh water gets trapped in the underlying sediment, Dugan said.
A second possibility is that tall mountains close to the ocean funnel rainwater directly down into the seabed from their high elevation point, he said. And thirdly — related to the first hypothesis — a freshwater reservoir can form under the ocean if ice sheets expand, causing sea levels to drop. Meltwater collects at the bottom of ice sheets because they grind against the bedrock, producing heat. The huge weight of the ice sheet then pushes that water into the ground, trapping it beneath layers of sediment.
More than two decades later, the researchers are finally close to getting an answer, with preliminary data indicating that most of the fresh water came from glaciers some time during the last ice age (2.6 million to 11,700 years ago). "We kind of ruled out the large topography for New England, because we don't have big mountains next to the coast," Dugan said. However, "there might be a rainfall component" blended in the glacier water, he said. "You can imagine that in front of a glacier you have rainfall, so it's probably a mixed system."
Expedition 501 extracted water samples from sites 20 to 30 miles (30 to 50 kilometers) off the coast of Massachusetts. The researchers drilled down to 1,300 feet (400 meters) below the seafloor, which was deep enough to reveal a thick layer of sediment saturated with fresh water sitting beneath a layer of salty sediment and an impermeable "seal" of clay and silt.
"We have a seal at the top [of the fresh water] that keeps the seawater above from the fresh water below," Dugan said. This seal is strong enough to separate the two layers now, but it wasn't robust enough to stop a glacier from forcing water down through it — if that is what happened. "Whatever emplaced that water didn't care if there was a seal. There was enough energy to flush it with fresh water," he said.
Salinity measurements showed that water freshness in the reservoir drops with distance from the shore, but it stays well below ocean salinity in the areas studied last summer. The drill site closest to Nantucket and Martha's Vineyard had a salt content of 1 part per 1,000, which is the maximum safe limit for drinking water. Farther offshore, salt content was 4 to 5 parts per 1,000, and at the farthest site, the researchers recorded 17 to 18 parts per 1,000 — or about half of the ocean's average salt content.
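For context, those figures can be compared against average seawater at roughly 35 parts per thousand — a standard oceanographic reference value, not one stated in the article:

```python
# Salinity at the three drill sites, in parts per thousand (ppt),
# relative to average seawater (~35 ppt, a standard reference value
# assumed here, not given in the article).
SEAWATER_PPT = 35.0

sites = {"nearest shore": 1.0, "farther out": 4.5, "farthest": 17.5}
for name, ppt in sites.items():
    # farthest site comes out near 50%, matching "about half"
    print(f"{name}: {ppt} ppt = {ppt / SEAWATER_PPT:.0%} of seawater")
```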
"The important part was we collected all the samples we need to address our primary questions," Dugan said. "When we're done drilling and we pull our equipment out, the holes collapse back in and seal themselves up.”
Now, scientists are studying the reservoir in finer detail, including any microbes, rare earth elements, pore space — which can help researchers better estimate the reservoir's size — and the age of the sediments, which will help narrow down when it formed. More definitive results about how and when the reservoir formed are expected in about one month's time, Dugan said.
"Our goal is to provide an understanding of the system so if and when somebody needs to use it, they have information to start from, rather than recreating information or making an ill-informed choice," he said.
https://www.livescience.com/planet-earth/rivers-oceans/enormous-freshwater-reservoir-discovered-off-the-east-coast-may-be-20-000-years-old-and-big-enough-to-supply-nyc-for-800-years?utm_source=firefox-newtab-en-us
*
ENOUGH FRESH WATER IS LOST FROM CONTINENTS TO MEET THE NEEDS OF 280 MILLION PEOPLE
Earth's continents are losing 4 Olympic swimming pools' worth of fresh water every second, with dire consequences for jobs, food security and water availability.
Earth's continents are drying up at an alarming rate. Now, a new report has painted the most detailed picture yet of where and why fresh water is disappearing — and outlined precisely how countries can address the problem.
Continental drying is a long-term decline in fresh water availability across large land masses. It is caused by accelerated snow and ice melt, permafrost thaw, water evaporation and groundwater extraction. (The report's definition excludes meltwater from Greenland and Antarctica, the authors noted.)
"We always think that the water issue is a local issue," lead author Fan Zhang, global lead for Water, Economy and Climate Change at the World Bank, told Live Science in a joint interview with co-author Jay Famiglietti, a satellite hydrologist and professor of sustainability at Arizona State University. "But what we show in the report is that ... local water problems could quickly ripple through national borders and become an international challenge.”
Continents have now surpassed ice sheets as the biggest contributor to global sea level rise, because regardless of its origin, the lost fresh water eventually ends up in the ocean. The new report found this contribution is roughly 11.4 trillion cubic feet (324 billion cubic meters) of water each year — enough to meet the annual water needs of 280 million people.
"Every second you lose four Olympic-size swimming pools," Zhang said.
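The report's two headline figures are consistent with each other. A minimal check, taking 2,500 cubic meters as the volume of an Olympic pool (the standard minimum, an assumption not stated in the report):

```python
# Check "4 Olympic pools per second" against the annual loss figure.
OLYMPIC_POOL_M3 = 2500                 # assumed minimum Olympic pool volume
SECONDS_PER_YEAR = 365.25 * 24 * 3600

annual_loss_m3 = 324e9                 # report's ~324 billion m^3/year
pools_per_second = annual_loss_m3 / SECONDS_PER_YEAR / OLYMPIC_POOL_M3
print(f"{pools_per_second:.1f} pools per second")   # ~4.1

# The imperial figure matches too: 324 billion m^3 is ~11.4 trillion ft^3.
print(f"{annual_loss_m3 * 35.3147 / 1e12:.1f} trillion ft^3")
```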
Far-reaching impacts
The report was published Nov. 4 by the World Bank. Its results are based on 22 years of data from NASA's GRACE mission, which measures small changes in Earth's gravity resulting from shifting water. The authors also compiled two decades' worth of economic and land use data, which they fed into a hydrological model and a crop-growth model.
The average amount of fresh water lost from continents each year is equivalent to 3% of the world's annual net "income" from precipitation, the report found. This loss jumps to 10% in arid and semi-arid regions, meaning that continental drying hits dry areas such as South Asia the hardest, Zhang said.
This is a growing problem. In a study published earlier this year, Zhang, Famiglietti and their colleagues showed that separate dry areas are rapidly merging into "mega-drying" regions.
"The impact is already being felt," Zhang said. Regions where agriculture is the biggest economic sector and employs the most people, such as sub-Saharan Africa and South Asia, are especially vulnerable. "In sub-Saharan Africa, dry shocks reduce the number of jobs by 600,000 to 900,000 a year. If you look at who are the people being affected, those most hard hit are the most vulnerable groups, like landless farmers."
Countries that don't have a large agricultural sector are also indirectly affected, because most of them import food and goods from drying regions.
The consequences for ecosystems are dramatic, too. Continental drying increases the likelihood and severity of wildfires, and this is especially true in biodiversity hotspots, the report found. At least 17 of the 36 globally recognized biodiversity hotspots — including Madagascar and parts of Southeast Asia and Brazil — show a trend of declining freshwater availability and have a heightened risk of wildfires.
"The implications are so profound," Famiglietti told Live Science.
The biggest culprit
Currently, the biggest cause of continental drying is groundwater extraction. Groundwater is poorly protected and undermanaged in most parts of the world, meaning the past decades have been a pumping "free-for-all," Famiglietti said. And the warmer and drier the world gets due to climate change, the more groundwater will likely be extracted, because soil moisture and glacial water sources will start to dwindle.
However, better regulations and incentives could reduce groundwater overpumping. According to the report, agriculture is responsible for 98% of the global water footprint, so "if agriculture water use efficiency is improved to a certain benchmark, the total amount of the water that can be saved is huge," Zhang said.
Globally, if water use efficiency for 35 key crops, such as wheat and rice, improved to median levels, enough water would be saved to meet the annual needs of 118 million people, the researchers found. There are many ways to improve water use efficiency in agriculture; for example, countries could change where they grow certain crops to match freshwater availability in different regions, or adopt technologies like artificial intelligence to optimize the timing and amount of irrigation.
Countries can also set groundwater extraction limits, incentivize farmers through subsidies and raise the price of water for agriculture. Additionally, the report showed that countries with higher energy prices had slower drying rates because it costs more to pump groundwater, which boosts water use efficiency.
Overall, water management at the national scale works well, according to the report. Countries with good water management plans depleted their freshwater resources two to three times more slowly than countries with poor water management.
Virtual water trade
On the global scale, virtual water trade is one of the best solutions to conserve water if it is done right, Zhang said. Virtual water trade occurs when countries exchange fresh water in the form of agricultural products and other water-intensive goods.
Global water use increased by 25% between 2000 and 2019. One-third of that increase occurred in regions that were already drying out — including Central America, northern China, Eastern Europe and the U.S. Southwest — and a big share of the water was used to irrigate water-intensive crops with inefficient methods, according to the report.
There has also been a global shift toward more water-intensive crops, including wheat, rice, cotton, maize and sugar-cane. Out of 101 drying countries, 37 have increased cultivation of these crops.
Virtual water trade can save huge amounts of water by relocating some of these crops to countries that aren't drying out. For example, between 1996 and 2005, Jordan saved 250 billion cubic feet (7 billion cubic meters) of water by importing wheat from the U.S. and maize from Argentina, among other products.
Globally, from 2000 to 2019 virtual water trade saved 16.8 trillion cubic feet (475 billion cubic meters) of water each year, or about 9% of the water used to grow the world's 35 most important crops.
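The unit conversions in the last two paragraphs check out, and the 9% share implies a total crop-water figure the article doesn't state directly — a rough derivation, not a number from the report:

```python
# Cross-check the cubic-feet/cubic-meter conversions and derive the
# total crop water use implied by the "9%" figure (an inference, not
# a number stated in the article).
FT3_PER_M3 = 35.3147

jordan_m3 = 7e9        # Jordan's savings, 1996-2005
global_m3 = 475e9      # annual global savings from virtual water trade

print(f"Jordan: {jordan_m3 * FT3_PER_M3 / 1e9:.0f} billion ft^3")    # ~247
print(f"Global: {global_m3 * FT3_PER_M3 / 1e12:.1f} trillion ft^3")  # ~16.8
print(f"Implied total crop use: {global_m3 / 0.09 / 1e9:,.0f} billion m^3/yr")
```

The Jordan figure comes out near 247 billion cubic feet, which the article rounds to 250; the implied total water use for the 35 key crops is on the order of 5,300 billion cubic meters per year.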
"When water-scarce countries import water-intensive products, they are actually importing water, and that helps them to preserve their own water supply," Zhang said.
However, virtual water trade isn't always so straightforward. It might benefit one water-scarce country but severely deplete the resources of another country. One example is the production of alfalfa, a water-intensive legume used in livestock feed, in dry regions of the U.S. for export to Saudi Arabia, Famiglietti said. Saudi Arabia benefits from this exchange because the country isn't using its water to grow alfalfa, but aquifers in Arizona are being sucked dry, he said.
Reasons for optimism
The solutions identified in the report fall into three broad categories: manage water demand, expand water supply through recycling and desalination, and ensure fair and effective water allocation.
If we can make those changes, sustainable fresh water use is "definitely possible," Zhang said. "We do have reason to be optimistic."
Famiglietti agreed that small changes could go a long way.
"It's complicated, because the population is growing and we're going to need to grow more food," he said. "I don't know that we're going to 'tech' our way out of it, but when we start thinking on decadal time scales, changes in policy, changes in financial innovations, changes in technology — I think there is some reason for optimism. And in those decades we can keep thinking about how to improve our lot.”
https://www.livescience.com/planet-earth/enough-fresh-water-is-lost-from-continents-each-year-to-meet-the-needs-of-280-million-people-heres-how-we-can-combat-that#:~:text=Continents%20have%20now%20surpassed%20ice%20sheets%20as,annual%20water%20needs%20of%20280%20million%20people.
*
CHINA HAS PLANTED SO MANY TREES IT’S CHANGED THE ENTIRE COUNTRY’S WATER DISTRIBUTION
Huge "regreening" efforts in China over the past few decades have activated the country's water cycle and moved water in ways that scientists are just now starting to understand.
The Great Green Wall is a huge regreening initiative in China's north aimed at slowing desertification.
China's efforts to slow land degradation and climate change by planting trees and restoring grasslands have shifted water around the country in huge, unforeseen ways, new research shows.
"We find that land cover changes redistribute water," study co-author Arie Staal, an assistant professor of ecosystem resilience at Utrecht University in the Netherlands, told Live Science in an email. "China has done massive-scale regreening over the past decades. They have actively restored thriving ecosystems, specifically in the Loess Plateau. This has also reactivated the water cycle."
Three main processes move water between Earth's continents and the atmosphere: evaporation and transpiration carry water up, while precipitation drops it back down.
Evaporation removes water from surfaces and soils, and transpiration removes water that plants have absorbed from the soil. Together, these processes are called evapotranspiration, and this fluctuates with plant cover, water availability and the amount of solar energy that reaches the land, Staal said.
"Both grassland and forests generally tend to increase evapotranspiration," he said. "This is especially strong in forests, as trees can have deep roots that access water in dry moments.”
China's biggest tree-planting effort is the Great Green Wall in the country's arid and semi-arid north. Started in 1978, the Great Green Wall was created to slow the expansion of deserts. Over the last five decades, it has helped grow forest cover from about 10% of China's area in 1949 to more than 25% today — an area equivalent to the size of Algeria. Last year, government representatives announced the country had finished encircling its biggest desert with vegetation, but that it will continue planting trees to keep desertification in check.
Other large regreening projects in China include the Grain for Green Program and the Natural Forest Protection Program, which both started in 1999. The Grain for Green Program incentivizes farmers to convert farmland into forest and grassland, while the Natural Forest Protection Program bans logging in primary forests and promotes afforestation.
Collectively, China's ecosystem restoration initiatives account for 25% of the global net increase in leaf area between 2000 and 2017.
But regreening has dramatically changed China's water cycle, boosting both evapotranspiration and precipitation. To investigate these impacts, the researchers used high-resolution evapotranspiration, precipitation and land-use change data from various sources, as well as an atmospheric moisture tracking model.
The results showed that evapotranspiration increased more overall than precipitation did, meaning some water was lost to the atmosphere, Staal said. However, the trend wasn't consistent across China, because winds can transport water up to 4,350 miles (7,000 kilometers) away from its source — meaning evapotranspiration in one place often affects precipitation in another.
The researchers found that forest expansion in China's eastern monsoon region and grassland restoration in the rest of the country increased evapotranspiration, but precipitation only increased in the Tibetan Plateau region, so the other regions experienced a decline in water availability.
"Even though the water cycle is more active, at local scales more water is lost than before," Staal said.
This has important implications for water management, because China's water is already unevenly distributed. The north has about 20% of the country's water but is home to 46% of the population and 60% of the arable land, according to the study. The Chinese government is trying to address this; however, the measures will likely fail if water redistribution due to regreening isn't taken into account, Staal and his colleagues argued.
Ecosystem restoration and afforestation in other countries could be affecting water cycles there, too. "From a water resources point of view, we need to see case-by-case whether certain land cover changes are beneficial or not," Staal said. "It depends among other things on how much and where the water that goes into the atmosphere comes down again as precipitation."
Leanhoser:
The study is incomplete if it is limited to the content of this excellent article. It has been proven that the Amazon requires continuous forest for moisture to migrate to higher elevations. Areas of clear-cutting between forest patches have halted the migration of moisture up to forests at higher elevations. Evapotranspiration is thus critical to local or nearby forest precipitation, even if some of it may escape to higher altitudes and fall as more distant precipitation. This nonetheless shows that the biosphere is a comprehensive system on the hydrology front. One could say that precipitation worldwide is benefiting from China's reforestation, or any reforestation for that matter.
The Realist:
China is doing a great thing. Australia should follow China's lead. We have so much land that could be converted to forests, which would help reduce heat on the continent. Help reduce CO2 emissions, help with climate change targets. It would also massively increase wildlife habitat.
Russell Thomas:
Global warming in the last 20 years has really kicked off, especially in the last 5 years. So with warming air and sea temperatures, I would conclude this is having a greater effect on China's rainfall than its reforestation project. It's a shame China cannot stop building new coal power plants while it is increasing green energy.
RapidRinger:
China is doing more to invest in renewables like solar and wind than any other country on Earth. And they are rapidly expanding nuclear power capabilities.
*
TRINITY: A LATE COMER
Paul's writings don't reflect the “trinity” as christians think of it today. Many of the earliest christians, including Paul, had a binitarian theology in which they worshipped both Yahweh and Jesus as two distinct persons of the same substance, like a human King and his son, the Crown Prince, who share the same DNA.
For instance, Philippians 2:5-8 doesn’t say Jesus is the same person as god, only that he has some sort of “equality” with god, probably as a human son has equality with his father while still being under his father’s aegis — if, for instance, his father were the king and he were a prince.
5 In your relationships with one another, have the same mindset as Christ Jesus: 6 Who, being in very nature God, did not consider equality with God something to be used to his own advantage; 7 rather, he made himself nothing by taking the very nature of a servant, being made in human likeness. 8 And being found in appearance as a man, he humbled himself by becoming obedient to death—even death on a cross! (Philippians 2:5-8)
There was no mention of the Holy Ghost as the third person of this royal family.
For instance, every time Paul greets other christians in his seven letters deemed authentic by the majority of bible scholars, he greets them in the binitarian formula of “God the Father and the Lord Jesus Christ.” (1 Thessalonians 1:1, Galatians 1:3, 1 Corinthians 1:3, Philemon 1:3, 2 Corinthians 1:2, Philippians 1:2, Romans 1:7)
We see this binitarian theology in the greetings above and in this verse:
But to us there is but one God, the Father, of whom are all things, and we in him; and one Lord Jesus Christ, by whom are all things, and we by him. (1 Corinthians 8:6)
Paul is making a distinction between the father being “one god” and Jesus being “one lord” and there is no mention of the Holy Ghost as a separate person.
Paul’s binitarianism is also evident in Philippians 2:6-11.
Where is the Holy Ghost in such “relationship” passages?
The Holy Ghost was more of an emanation of Jesus in early christianity — Jesus’s presence once he was no longer walking the earth.
Jesus didn’t know everything that his father knew, such as when he would return to earth. Did his father not trust Jesus, or was it all nonsense that made no ultimate sense?
MARK, circa 95 AD
The author of the first-written gospel also knew nothing about the list above, except for some earthly miracles of Jesus, which suggests that the big fish was getting bigger and fishier.
Mark’s version of Jesus was fully human and was adopted as a human son of god at his baptism. Mark’s human Jesus became angry, impatient and frustrated. He used mud and spit in some of his faith healings, like the shamans of other religions. That proved an embarrassment to later gospel writers who would correct what they saw as Mark’s theological errors.
I believe Mark was written circa 95 AD or later because he rather obviously based his trials of Jesus of Nazareth on the trials of Jesus bar Ananias in the Antiquities of Josephus (circa 93 AD).
David Fitzgerald in Jesus: Mything in Action agrees that we have “no solid evidence that Mark was written any earlier than the year 100.”
Clement of Rome (died circa 100 AD) was apparently unaware of any gospel.
As of the end of the first century we lack any clear evidence of a gospel being in circulation.
MATTHEW, circa 95-110 AD
Matthew was clearly using Mark as his primary source. Matthew may have been quoted in letters of Ignatius dated to around 107-110 AD.
LUKE, circa 110-130 AD
Luke was based primarily on Mark but appears to have borrowed from Matthew and Antiquities (circa 93 AD) as well.
JOHN, circa 120–180 AD
The author of John seems to have been aware of all three earlier gospels. The first early church father to quote the Gospel of John by name was Theophilus of Antioch, around 180 AD.
The author of the last-written gospel, John, elevated Jesus to a preexistent cosmic deity who was “with god” before the universe was created, and was, indeed, the Creator of the Universe!
But this version of Jesus was still subordinate to the father.
Jesus said, “My father is greater than I.” (John 14:28)
2ND CENTURY CHURCH FATHERS
Opinions about the relationship between father and son continued to be divided. Why wasn’t the Holy Ghost leading christians into “all truth” as they claimed?
Some second century church fathers saw Jesus as divine but subordinate to the father (Subordinationism), while others said father and son were not distinct persons but different “modes” of the same being, like an actor playing two different roles (Monarchical Modalism). Meanwhile, some christians called Gnostics said the Old Testament god was evil, and Jesus was an agent of a different god, who was truly good!
It was not until Tertullian (c. 155 - 220 AD) that the "trinity" had a name. But Tertullian spoke of “two gods” and considered the Holy Ghost to be an emanation of both the father and the son. So his was not a "trinity" of equals. When Tertullian spoke about the relationship, he left out the Holy Ghost:
“The Father is the whole substance, but the Son is a derivation and portion of the whole, as He Himself acknowledges: ‘My Father is greater than I.’ In the economy, the Son is second to the Father... yet they are not divided in essence but distinct in order.” (Against Praxeas, 9)
“We believe that there is one God, but under this dispensation, as it is called, that there is also a Son of this one God, His Word, who proceeded from Him, and through whom all things were made, and without whom nothing was made.” (Against Praxeas, 2)
Arius of Alexandria (c. 250 - 336 AD) claimed that since the father was indivisible, the son could not share the father's substance and must have been created, meaning there was a time when the son did not exist. As a created being, the son was therefore subordinate to the father.
Other church fathers disagreed with Arius and insisted that the father and son were “of one substance.”
Where was the Holy Ghost in this debate and why hadn’t the Holy Ghost pointed out that it was part of a “trinity”?
4TH CENTURY AD: “WITH A LITTLE HELP FROM MY PAGAN FRIENDS”
Finally a pagan Roman emperor, Constantine, put his foot down and insisted that the silly squabbling christians decide who Jesus was. Thus in 325 AD, after much rancorous debate and a few excommunications, the Council of Nicea (not the Holy Ghost) decided arbitrarily that the father and son were of one “substance,” whatever that means.
The one-“substance” formula was probably adopted to avoid accusations of polytheism.
The Nicene Creed refuted Arianism and declared the full divinity of the Son:
“We believe in one God, the Father Almighty... and in one Lord Jesus Christ, the only-begotten Son of God, begotten of the Father before all worlds, Light of Light, very God of very God, begotten, not made, being of one substance (homoousios) with the Father.”
But what about the junior partner?
Well, three is a more divine number than two!
The more gods the merrier, as long as they are of one “substance”!
Basil the Great (c. 329-379 AD) defended the Holy Ghost’s divinity:
“The distinction between the Father, Son, and Holy Spirit is clearly defined: the Father is unbegotten; the Son is begotten; the Holy Spirit proceeds from the Father. Yet the unity of their nature is preserved, for the three are one, in a shared divinity.” (On the Holy Spirit)
The Council of Constantinople (381 AD) expanded the Nicene Creed to include the Holy Spirit:
“We believe in the Holy Spirit, the Lord and Giver of life, who proceeds from the Father, who with the Father and the Son is worshiped and glorified, who has spoken through the prophets.”
All the earliest church fathers subscribed to Subordinationism, with God the Father being superior and Jesus subordinate.
What a long, strange trip it’s been! ~ Michael Burch
*
BENEFITS OF NON-SMOKE NICOTINE
When we hear the word “nicotine,” most of us think about vaping, cigarettes, addiction, and lung cancer. But beneath this stigma lies an interesting paradox: nicotine, the same compound that makes tobacco so addictive, also possesses enormous therapeutic potential as a treatment for neurological and cognitive disorders.
A SACRED PLANT
Long before Europeans colonized the Americas, Indigenous peoples understood tobacco as a sacred medicine. Traditional tobacco, also known as Nicotiana rustica, was and still is an important component of healing ceremonies. Unlike today’s commercial tobacco products, traditional tobacco was used ceremonially and often it was burned rather than inhaled.
Contemporary Indigenous healers in both North and South America continue to use tobacco to treat pain and respiratory conditions and to promote wound healing. This traditional use is very different from today’s commercial non-Indigenous tobacco consumption. The Indigenous use of tobacco emphasizes respect, ceremony, and medicinal application rather than habitual use.
The distinction between the sacred or ceremonial use of tobacco and commercial use is critical to understanding nicotine's therapeutic potential. Traditional tobacco contains natural compounds in their pure form, without the hundreds of chemical additives found in modern cigarettes that create addiction and cause cancer.
The Neuroscience of Nicotine
Modern research is beginning to validate what Indigenous peoples already know: nicotine possesses unique neuroprotective and cognitive-enhancing properties. Nicotine works by binding to receptors in the brain called “nicotinic acetylcholine receptors.” These receptors are important in learning, memory, attention, and neuroprotection.
When nicotine activates brain receptors, it sets off a chain reaction. Receptors known as α7 receptors allow calcium to flow into neurons, which triggers a cellular pathway that also gets activated by brain chemicals that help neurons stay alive and healthy. This causes the cell to make protective proteins that act like shields, preventing brain cells from dying when they're under stress or attack.
Nicotine also exhibits anti-inflammatory effects in the brain. It activates the "cholinergic anti-inflammatory pathway," which reduces the production of inflammatory chemicals known as “cytokines” while preserving anti-inflammatory signals. This dual action of neuroprotection plus anti-inflammation makes nicotine an intriguing potential treatment for neurodegenerative diseases where neuronal death and inflammation play key roles.
Promising Research in Parkinson's Disease
One of the most exciting areas of nicotine research involves Parkinson's disease. Studies have consistently shown that smokers have a significantly reduced risk of developing Parkinson's disease. This appears to be a dose-dependent effect, with heavier smokers experiencing greater protection.
Animal studies have demonstrated that nicotine can protect dopaminergic neurons, which are the brain cells that are damaged in Parkinson's disease. In laboratory models, exposure to nicotine before or during exposure to toxins protects against damage and preserves motor function.
Researchers have found that nicotine provides this protection by reducing levels of a protein called SIRT6 that causes neurons to die in Parkinson's disease. Brain tissue from Parkinson's patients shows increased levels of this protein, while tissue from tobacco users shows reduced levels. This explains how nicotine protects brain cells and suggests that nicotine therapy might prevent or slow Parkinson's progression.
Cognitive Enhancement and Alzheimer's Disease
The potential for nicotine therapy extends beyond Parkinson's to Alzheimer's disease and other neurodegenerative diseases. Nicotinic acetylcholine receptors are progressively lost during Alzheimer's disease, and current Alzheimer's medications work partly by boosting the activity of the remaining receptors.
A study performed at Vanderbilt University Medical Center showed that nicotine can improve attention, memory, and cognitive processing in both healthy individuals and those with mild cognitive impairment. The improvements appear to be more pronounced in individuals carrying the APOE4 gene variant, which increases Alzheimer's risk.
The Anti-Inflammatory Connection
One of nicotine's most important therapeutic mechanisms involves its anti-inflammatory effects. Chronic inflammation in the brain contributes to all neurodegenerative diseases, and nicotine's ability to reduce inflammation may explain many of its protective effects.
Receptors known as "α7 nicotinic receptors" act as a switch that can reduce inflammation. When activated by nicotine, these receptors suppress the production of inflammatory molecules. This anti-inflammatory action may explain why nicotine shows promise for conditions as diverse as Parkinson's disease, Alzheimer's, and even inflammatory bowel diseases.
Clinical Applications and Current Research
The therapeutic potential of nicotine is being explored through various delivery methods that avoid the harmful effects of smoking. Patches that are applied to the skin are the most widely studied approach. These patches provide steady nicotine levels without the toxins found in tobacco smoke.
In addition to improving memory in individuals with mild cognitive impairment, clinical trials have shown nicotine patches can also improve memory in healthy elderly individuals.
Nicotine has also shown promise as a treatment for depression, attention-deficit hyperactivity disorder, Tourette’s syndrome, and schizophrenia.
Nicotine appears to be safe when used appropriately. Unlike tobacco smoke, pure nicotine does not contain carcinogens and is already approved for smoking cessation. Side effects are generally mild and include skin irritation from patches, sleep disturbances, and occasional nausea.
The Path Forward
The research on nicotine as a medicine represents a fascinating convergence of ancient wisdom and modern science. Indigenous peoples have recognized tobacco's medicinal properties for centuries and they continue to use it in controlled, ceremonial contexts that emphasize healing rather than habitual consumption.
Contemporary research is revealing the molecular mechanisms behind this traditional wisdom, showing how nicotine can protect neurons, reduce inflammation, and enhance cognitive function.
As we continue to struggle with rising rates of neurodegenerative diseases and cognitive decline, the therapeutic potential of nicotine deserves serious scientific consideration. Clinical trials will be crucial in determining whether pharmaceutical nicotine can be developed into a neuroprotective medicine.
This research reminds us to approach traditional medicines with respect and scientific curiosity rather than dismissal. It seems that Indigenous peoples, who first recognized tobacco's healing properties, understand something that allopathic medicine is only now starting to appreciate: that compounds found in nature often possess both the potential for harm and the possibility of life-changing healing.
https://www.psychologytoday.com/us/blog/the-leading-edge/202506/the-hidden-healing-power-of-nicotine
THE HISTORY OF NICOTINE
1492: Europeans discover tobacco.
Tobacco is thought to have been first encountered by Europeans when Christopher Columbus explored the Americas. He is said to have brought a few tobacco leaves and seeds back with him to Europe.
The 1500s: Nicotine gets its name
A French diplomat called Jean Nicot — after whom nicotine is named — began to popularize the use of tobacco throughout Europe, introducing the substance to France in 1558, Spain in 1559 and England in 1565.
The 1700s: The tobacco industry grows
The industry gradually grew throughout this century, with tobacco mainly being produced for pipe-smoking, chewing and snuff. Cigarettes were introduced in the early 1700s but didn’t become popular until the American Civil War.
1763: Tobacco kills pests
Tobacco was first used successfully as an insecticide in 1763 thanks to the toxic properties of nicotine.
1828: Nicotine = poison
Nicotine was first isolated from tobacco and identified as a poison by two German scientists — Wilhelm Heinrich Posselt, a doctor, and Karl Ludwig Reimann, a chemist.
1880: The rise of cigarettes
The tobacco industry exploded when a machine was first patented to mass-produce paper cigarettes. From this point onwards, cigarettes became much easier to produce, giving rise to the dawn of many major tobacco corporations.
The 1900s: Minors banned from smoking
By the end of the 19th century, lawmakers had begun to realize the harmful effects of nicotine. This led to laws being passed that banned retailers from selling nicotine to minors.
1964: Smoking linked to health conditions
The Surgeon General of the U.S. published a report linking smoking with heart disease and lung cancer.
1994: Nicotine = addiction
The US Food and Drug Administration (FDA) officially recognized nicotine as an addictive drug that produced dependency during the mid-1990s.
NICOTINE AS A STIMULANT
When the body is exposed to nicotine, you will often experience a ‘kick’. This is partly caused by nicotine stimulating the adrenal glands, resulting in the release of the hormone adrenaline.
This surge of adrenaline is what stimulates the body, causing an immediate release of glucose and increasing your heart rate, breathing activity, and blood pressure.
Nicotine exposure also makes the pancreas produce less insulin, causing a slight increase in blood sugar, or glucose.
NICOTINE AS A SEDATIVE
Nicotine also has an indirect impact on the brain. In a similar way to drugs like heroin or cocaine, nicotine causes dopamine — a brain chemical that affects emotions, movements, and sensations of pleasure and pain — to be released in certain regions of the brain. These increased dopamine levels then leave you feeling happier and more satisfied, adding to that ‘kick’ experience.
However, as you build up more of a tolerance to nicotine over time, you will often require a higher dose to enjoy the same effects.
Depending on the dose you take, nicotine can also act as a sedative. This is because, once it reaches the brain, it can trigger the release of beta-endorphin — a hormone known for its ability to reduce anxiety, ease emotional distress and create a sense of well-being that makes it easier to get to sleep.
THE BENEFITS OF NICOTINE
While nicotine is mainly associated with the harmful effects of smoking, it can also offer certain benefits. These include:
Increased levels of alertness, euphoria and relaxation
Improved concentration and memory — due to increased activity of the acetylcholine and norepinephrine neurotransmitters
Reduced anxiety — due to increased levels of beta-endorphin
THE SIDE EFFECTS OF NICOTINE
Nicotine can cause a wide range of side effects across several organs and systems of the human body. These largely impact the brain, heart and gastrointestinal system, causing a variety of signs and symptoms that it’s important to look out for.
Side effects on the brain
Dizziness and lightheadedness
Irregular and disturbed sleep
Bad dreams and nightmares
Possible restriction of blood flow
Side effects on the gastrointestinal system
Nausea and vomiting
Dry mouth
Indigestion
Peptic ulcers
Diarrhoea
Heartburn
Side effects on the heart
Altered heart rate and rhythm
Increased risk of blood clots and atherosclerosis (a condition in which fatty materials build up in your arteries, causing them to become narrowed or blocked, and making it difficult for blood to flow through)
Increased blood pressure
Enlarged aorta
Increased risk of coronary artery disease and stroke
Nicotine exposure through vaping
Vaping is a relatively recent phenomenon that involves inhaling a vapor created by using an electronic cigarette (e-cigarette).
It is becoming an increasingly popular trend amongst younger generations, with many people now using it as an alternative to smoking.
However, using an e-cigarette will still often expose you to nicotine — the highly addictive chemical found in tobacco — and can result in the same variety of side effects listed above.
Depending on the country you live in (or where you buy your e-cigarettes), vaping may even expose you to several other dangerous chemicals, each of which can create various health complications and potentially harm the body. These include:
Diacetyl – a food additive used to deepen e-cigarette flavors that can also damage small passageways in the lungs
Formaldehyde – a toxic chemical that can cause lung disease and contribute to heart disease
Acrolein – a chemical more commonly used as a weed killer that can damage the lungs
UK regulations require producers to provide ingredient information and do not permit certain compounds such as formaldehyde, acrolein and acetaldehyde in e-cigarettes or e-cigarette liquids sold in the UK. However, other markets may not be as stringent.
It is also worth noting that although UK regulations around ingredients are stricter, we simply do not know enough about the potential risks of e-cigarette use.
Other health complications
Vaping using an e-cigarette can lead to several health-related issues.
Bronchiolitis obliterans, for example, is more commonly known as ‘popcorn lung’ and is associated with an inflammatory obstruction of the lung’s tiniest airways called bronchioles.
When these bronchioles become damaged and inflamed by chemical particles or respiratory infections, this can lead to extensive scarring that eventually blocks the airways. One of the chemicals known to cause popcorn lung is diacetyl, which is found in many e-cigarette liquids.
Lipoid pneumonia is also commonly associated with e-cigarette use. This condition is caused when fat or oil gets into the lungs, causing the air sacs to become inflamed and fill with fluid. Since e-cigarette liquids often contain oils, this explains the link between vaping and certain cases of lipoid pneumonia.
Vaping can also potentially lead to a collapsed lung, also known as pneumothorax.
Collapsed lungs are typically caused by an abnormal accumulation of air in the space between the lungs and the chest cavity, but can also happen when air blisters on the top of the lungs rupture. These blisters are normally harmless unless they burst, and both smoking and vaping have been shown to increase the risk of rupture.
Vaping and lung cancer
The World Health Organisation labels e-cigarettes as ‘undoubtedly harmful’ and recommends that they should not be used and, at the very least, be heavily regulated.
This is because many of the chemicals commonly found in e-cigarette liquids in some countries, such as formaldehyde, nitrosamines and toluene, have carcinogenic properties which, over time, can cause lung cancer.
We will never know the damage of newly introduced cancer and disease ‘risk factors’ until they have had time to be monitored and tested — and by that time, it may be too late for people who have already experienced serious illness from vaping.
https://www.echelon.health/nicotine-the-good-the-bad-and-the-ugly/
IMPORTANT: SIDE EFFECTS OF PURE NICOTINE
Addiction: Nicotine is a highly addictive substance, and prolonged use can lead to dependence and withdrawal symptoms.
Cardiovascular Risks: Nicotine acts as a stimulant and acutely increases heart rate and blood pressure, which poses risks, particularly for individuals with pre-existing cardiovascular conditions.
Side Effects: Potential side effects include dizziness, nausea, sleep disturbances, and skin irritation (from patches).
WHY NICOTINE IS ADDICTIVE
Nicotine targets nicotinic acetylcholine receptors (nAChRs) in the brain, activating reward pathways.
It increases dopamine release, creating feelings of pleasure, focus, and alertness, reinforcing use.
Long-term use desensitizes some receptors, leading to increased excitation and stronger cravings.
*
VERY DIFFERENT PSYCHIATRIC DIAGNOSES SHARE COMMON GENES
Substance use disorders cluster together, supporting a unified addiction-liability dimension.
A 2026 Nature cross-disorder genomics study demonstrated that different psychiatric disorders cluster together.
The study showed 14 common psychiatric disorders shared substantial genetic liability within five clusters.
Alcohol, cannabis, opioid, and nicotine use disorders cluster together genetically into one brain disorder.
~ By integrating genome-wide data from a million-plus people with 14 psychiatric disorders, a 2026 Nature study marks a turning point in psychiatric science, showing many common psychiatric diagnoses are not genetically separate diseases. Instead, they share genetic liability (five genomic factors) across major psychiatric diagnoses, shaped by neurodevelopmental biology and life experiences. These findings challenge the disease model in psychiatric diagnoses and the Diagnostic and Statistical Manual of Mental Disorders (DSM).
Psychiatric manual categories still help with communication and treatment planning, but are imperfect proxies for personalized medicine or treatment predictions. The study confirmed what some clinicians already knew: How patients respond to treatment matters more than the disorder’s name. Also, comorbidity is common, and patients often move between diagnoses in the same cluster.
Depression, anxiety, and posttraumatic stress disorder (PTSD) often present together because of overlapping tendencies like negative emotions and stress sensitivity. However, when a patient has both major depressive disorder and generalized anxiety disorder, DSM psychiatry considers them two separate illnesses. The new genetic data suggest that these disorders cluster together due to shared neurodevelopmental risks. Cross-disorder psychiatric genomics shows high comorbidity and genetic overlap. Patients often move within diagnostic clusters over time, such as anxiety preceding depression, and psychotic symptoms preceding a bipolar diagnosis. These are not random transitions but expressions of a latent liability interacting with environment, maturation, trauma, and life experiences. Genomic phenotypes may help psychiatry move away from the trend of stacking one diagnosis on another in a single person.
Substance Use Disorders May Be One Disorder
Cross-disorder genomics strongly reinforces substance addiction as a single disease category: Alcohol, cannabis, opioid, and nicotine use disorders cluster together genetically. Substance use disorder (SUD) diagnoses support a shared addiction-liability dimension across major substances. Reflecting differences in reward sensitivity, impulsivity, stress reactivity, and executive control, some individuals are more susceptible to developing SUDs, regardless of which drug they encounter first.
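The idea that diagnoses group by shared genetic liability can be sketched with a toy clustering example. All of the correlation values below are invented for illustration only (the actual study used far more sophisticated genomic modeling on real data); the sketch simply shows how pairwise genetic correlations can be turned into clusters of disorders:

```python
# Illustrative sketch only: hierarchical clustering of a small, made-up
# genetic-correlation matrix, mimicking how cross-disorder studies group
# diagnoses into shared-liability clusters. Every number here is hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

disorders = ["AlcoholUD", "CannabisUD", "OpioidUD", "NicotineUD",
             "Depression", "Anxiety", "PTSD"]

# Hypothetical pairwise genetic correlations (rg): high within the
# substance-use group and within the internalizing group, low between them.
rg = np.array([
    [1.00, 0.65, 0.60, 0.55, 0.25, 0.20, 0.25],
    [0.65, 1.00, 0.58, 0.52, 0.22, 0.18, 0.24],
    [0.60, 0.58, 1.00, 0.50, 0.28, 0.21, 0.26],
    [0.55, 0.52, 0.50, 1.00, 0.24, 0.19, 0.22],
    [0.25, 0.22, 0.28, 0.24, 1.00, 0.70, 0.62],
    [0.20, 0.18, 0.21, 0.19, 0.70, 1.00, 0.60],
    [0.25, 0.24, 0.26, 0.22, 0.62, 0.60, 1.00],
])

# Convert correlation to distance (1 - rg), condense, and cluster.
dist = squareform(1.0 - rg, checks=False)
Z = linkage(dist, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

for d, c in zip(disorders, labels):
    print(f"{d}: cluster {c}")
```

With these toy numbers, the four substance use disorders fall into one cluster and the three internalizing disorders into another, mirroring the study's finding that genetic liability, not diagnostic labels, drives the grouping.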
Highly stereotyped behavioral patterns present with these addictions, and compulsive use, escalation, relapse, drug-seeking, and use despite harm are all hallmarks of addictive conditions, regardless of the drugs involved. Drug testing and laboratory data are commonly utilized to confirm a diagnosis. With alcohol use disorder (AUD), for example, the carbohydrate-deficient transferrin (CDT) test indicates chronic heavy drinking, unlike immediate blood alcohol tests. Directly observable intoxication, physical exam findings, and withdrawal syndromes reinforce the diagnosis.
Cannabis, opioids, cocaine, benzodiazepines, amphetamines, and fentanyl can be detected through routine toxicology screens and forensic testing. In contrast, there are no laboratory tests or measurable markers for most psychiatric diagnoses. Clear diagnosis, medical-psychosocial-psychiatric interventions, long-term outcomes, and research evidence have helped personalize treatment. Experts know polysubstance use is the rule rather than the exception. Because of this, an SUD diagnosis more consistently predicts the course of the illness, treatment choices, and responses than other diagnoses.
In practical terms, this new research supports integrated treatment approaches across substances and undermines stigmatizing interpretations of compulsive use. SUDs are not caused by poor judgment or inadequate willpower; they are brain diseases caused by a substance use-related neuroadaptation of reward, motivation, stress, and executive control brain circuits. These changes persist long after detoxification; thus, relapse risks remain high, even after prolonged abstinence. Abstinence without ongoing support is associated with persistent vulnerability and relapse risk, especially under cue- or stress-exposures. Addiction is best viewed as a chronic, relapsing brain disease rather than a behavioral problem.
These large-scale genomic studies show substance use disorders cluster together genetically, supporting a unified addiction-liability dimension and addictive disease rather than drug-specific diseases. Comorbidity between SUDs and other psychiatric disorders should be anticipated because shared liability increases risks across domains. Treatment must be integrated: Treating mood or psychotic symptoms without addressing existing addictions is rarely successful, and treating addiction without addressing underlying psychiatric vulnerability increases relapse risk. Addiction represents a unitary brain disorder with shared susceptibility rather than being a collection of drug-specific illnesses; polysubstance use is logical and should be considered the rule rather than the exception.
Conclusion
This new, large-scale genomic study demonstrates that 14 common psychiatric disorders share substantial genetic liability, helping explain their high comorbidity and frequent diagnostic transitions across the lifespan. The new study’s finding that alcohol, cannabis, opioid, and nicotine use disorders cluster within a single genetic liability dimension provides powerful biological confirmation of addiction as a brain disease with multiple phenotypic expressions. SUDs have predictable clinical courses, established chronic disease management models, and psychosocial and other treatments clearly related to disease mechanism and course.
Long-term management is also essential. Alcoholics Anonymous and Narcotics Anonymous (AA/NA) describe addiction as a unified "malady," considering alcoholism and other addictions a single disorder rather than separate diseases. AA's abstinence and 12-step approach are built on understanding addiction as a profound, overwhelming disorder. This view is in sync with the modern view of addiction as an acquired chronic brain disease. Like diabetes or hypertension, addiction requires ongoing monitoring, medication when indicated, behavioral intervention, and relapse prevention. The convergence of genetics, neurodevelopment, circuitry, and a predictable course reframes addiction as a medical condition, not a moral lapse.
In a field often criticized for diagnostic ambiguity, SUDs stand out as a model of what biologically anchored psychiatric diagnosis could look like—and as psychiatry moves toward personalized medicine, mechanism-based classification, and prediction, addiction medicine is unusually well-positioned to lead rather than follow.
SCIENTISTS FOUND A GENE THAT CAUSES MENTAL CONDITIONS—AND THE MOLECULE THAT MAY TREAT THEM
Researchers discovered that some variants of the gene known as GRIN2A can cause certain conditions all on their own.
Until now, it was thought that most mental conditions were caused by a combination of genes.
Even though GRIN2A conditions can be inherited, an amino acid that occurs in the body can be used as a potential treatment.
The potential causes of various mental conditions have long been surrounded by endless (and sometimes controversial) debate. While genetics are a possible factor—especially because such conditions tend to run in families—many theories involving inheritance have suggested that these conditions arise from more than one gene. That thinking is about to be turned on its head.
Researchers from the University of Leipzig in Germany, led by geneticist Johannes Lemke, have now discovered that some variants of the gene GRIN2A can cause certain mental conditions on their own. GRIN2A gives instructions for the synthesis of GluN2A, a neural protein found in regions of the brain linked to speech and language. It also uses electrical signaling to activate neurons, which then transmit signals to the brain and influence brain development, neuroplasticity, memory, and even sleep.
Some GRIN2A variants shorten the gene's coding sequence and were previously known as contributing factors for schizophrenia. Known as GRIN2A(null) variants, they also have a connection to epilepsy and intellectual disabilities, though whether they had anything to do with psychiatric conditions (other than schizophrenia) was unknown. Lemke, however, found that they can also cause anxiety, mood, and neurodevelopmental conditions that often appear early in life.
“Our findings establish GRIN2A(null) as the first monogenic predisposition to a broad spectrum of early-onset [mental conditions], including early-onset schizophrenia,” he said in a study recently published in the journal Molecular Psychiatry. “GRIN2A(null)-related phenotypes frequently but not necessarily comprise ID as well as a usually self-limiting childhood epilepsy but may also present as an isolated [mental condition].”
Only one other gene, SETD1A, had previously been associated with schizophrenia, and even it was not the cause entirely on its own; rare variants of other genes that affected genetic coding were also found to be contributing factors in certain individuals. Lemke and his team surveyed subjects who had diagnoses related to GRIN2A variants, specifically hoping to learn the psychiatric symptoms experienced by their subjects, the persistence of these symptoms, the age of their onset, and their patients’ treatment histories.
The team was also interested in comparing how prevalent mental conditions were in the group of subjects with epilepsy to how prevalent they were in the group without. Both GRIN2A(null) and another variant, GRIN2A(missense), were analyzed. Almost 70% of the subjects carried GRIN2A(null) variants and 30% carried GRIN2A(missense) variants. Carriers of GRIN2A(null) had a much higher incidence of inherited conditions: of those 84 subjects, 23 had mental conditions, 13 had mood conditions (such as bipolar), 12 had clinical anxiety, 8 had psychotic conditions, 3 had personality conditions, and one had an eating disorder.
With the gene behind so many conditions identified, the researchers also found a potential treatment. L-serine is an amino acid that is important for brain function, since it is a precursor to the neurotransmitters that convey messages from neuron to neuron. Because it also protects neurons, it was previously seen as a possible therapy for neurological and neurodegenerative conditions, but Lemke’s trials with four of his subjects suggested that it could benefit people with conditions caused by a GRIN2A(null) variant: there was a significant reduction in symptoms such as hallucinations and paranoia.
“Treatment responses to L-serine…[were] positively influenced in particular [mental and behavioral conditions] of several individuals,” he said. “It remains open and should be subject to clinical trials whether such beneficial treatment responses stay limited to GRIN2A(null)-related [mental conditions] or might be observed even beyond.”
*
THE TIMING OF CANCER TREATMENTS
Every cell in the human body operates on an intricate internal schedule, governed by circadian rhythms that synchronize our biological processes with the 24-hour cycle of day and night. Coordinated by a master clock in the brain called the suprachiasmatic nucleus, these cellular clocks control essential bodily functions including sleep-wake cycles, hormone production, immune function, and metabolism. When these internal clocks are disrupted, the consequences can be profound, potentially increasing our vulnerability to diseases including cancer.
Chi Van Dang, Bloomberg Distinguished Professor of Cancer Medicine at Johns Hopkins University and CEO and scientific director of the Ludwig Institute for Cancer Research, is at the forefront of research exploring the connection between circadian biology and cancer: how the circadian clock affects tumor biology, and how these clocks can be exploited and even manipulated for therapeutic purposes.
Dang has dedicated his career to understanding the molecular mechanisms driving cancer development and progression. He has spent decades investigating how cancer cells hijack normal cellular processes to fuel their uncontrolled growth. With support from the National Institutes of Health, his work has revealed critical insights into cancer metabolism and the genetic changes that transform healthy cells into malignant ones.
Dang's interest in circadian cancer biology began with a molecular discovery while studying the MYC oncogene, a cancer-driving gene that acts as a switch, altering metabolic pathways in cancer cells. Dang found that MYC proteins bind to the same DNA sequence to regulate gene expression as circadian clock proteins.
"This overlap sparked a key question for me: Could these two systems be interconnected?" Dang says. "It is incredibly fascinating to learn how many things are actually controlled by our circadian clocks, but we didn't realize it. Everyone is talking about fighting cancer and developing new treatments, but most people typically don't factor in our circadian rhythm. There's so much more to be discovered.”
Dang says federal support has allowed him to follow new leads in the fight against cancer.
"My NIH-funded work over the years has led to industry efforts in creating medicines targeting metabolism for the treatment of cancer," Dang says. "Ongoing federal research support is crucial for opening up new research avenues and hopes for new cures. Our current work on how the circadian clock and diet affect cancer immunotherapy responses should provide key insights that will improve outcomes for cancer patients."
Dang spoke with the Hub about the connection between the body's internal clock and cancer, why the timing of medical interventions could be as crucial as the treatment itself, and what health care providers, researchers, and the general public need to know about circadian biology's role in medicine.
What is the connection between circadian rhythm and cancer?
The connection between circadian rhythm and cancer is supported by extensive research across multiple domains. Large epidemiological studies following tens of thousands of people have shown that night shift work is associated with an increased risk of breast cancer, with longer exposure leading to higher risk. The International Agency for Research on Cancer has classified night shift work as "probably carcinogenic to humans."
Animal studies reinforce this connection. Mice with genetically induced lung cancer developed significantly more tumors when exposed to disrupted light cycles that simulate night shift work, and mice genetically engineered to lack the BMAL1 gene, which is crucial for maintaining the body's circadian rhythm, developed more tumors and died sooner.
The other aspect to consider is the internal clock specifically of the cancer cells. Analysis of the Cancer Genome Atlas, which contains genetic data from tens of thousands of human tumors, revealed that many cancers, including liver, breast, lung, and pancreatic cancers, showed a disrupted genetic pattern associated with the loss of the clock, meaning that these cancer cells no longer had an internal clock.
Interestingly, not all cancers respond the same way to circadian disruption. While most show increased tumor burden, in two cancers involving stem cells, leukemia and glioblastoma, disrupting the internal clock actually resulted in higher survival rates, suggesting that these cancers rely on a steady circadian rhythm.
*
The evidence that some therapies are more effective at certain times of day than at others is becoming increasingly solid. There are some chemotherapies that work better at specific times of day, for instance. Another example is that patients who receive radiation therapy in the afternoon experience more side effects than patients who receive it in the morning.
An area that is developing incredibly rapidly at the moment and shows great promise for treating cancer is immunotherapy. Across multiple clinical trials in recent years, it was shown very clearly that patients who receive immunotherapy in the morning do better than patients who get it in the afternoon. We're just starting to understand the underlying mechanisms.
Immunotherapy awakens lymphocytes, the body's immune cells, to fight against cancerous cells in a tumor. These professional killer cells travel in and out of the tumor while fighting it; they don't stay in the tumor. The fascinating thing that was discovered only recently is that they do this in a circadian fashion—lymphocytes enter the tumor more in the morning than later in the day. So administering a drug to activate these killer cells at a time of day when more of them are entering the tumor allows them to kill more effectively.
One focus of my research right now is going back through existing data in patient records to see if we can substantiate what has been observed in the clinic. We always have patients who respond to treatment and, unfortunately, patients who don't respond. I want to see if we can find out when these patients received treatment, and if links between timing of treatment and its effectiveness are specific to certain types of cancers.
There are some practical issues that are going to be a challenge. In a clinic, we can't fit all of the patients into a limited time window. We need to find innovative ways to work around this physical limitation. One approach would be to investigate if there are ways to deliver some of these sophisticated therapies to people in their homes.
Another possibility—which still needs much more research to see how and if this can be done—could be to reset a patient's internal clock. Our circadian clocks are set by the sun. If we could figure out how to pharmacologically mimic the daily resetting of the clock, patients treated in the afternoon could potentially receive the same benefits as those treated in the morning.
It would also be interesting to see if changing meal times could adjust a person's internal clock. We know that our internal clocks function better with time-restricted eating. There have also been studies that showed that animals with time-restricted feeding exhibited lower cancer incidence than animals whose feeding time was not restricted. There are currently clinical studies being done to investigate whether time-restricted eating leads to better outcomes for cancer therapies in people.
Are there other classes of drugs that could benefit from circadian-informed timing?
Yes, research has shown that many drugs work better when taken at specific times of the day. For example, it has been shown time and time again that taking a low dose aspirin in the evening lowers a person's blood pressure more than if taken in the morning. Many people take aspirin to decrease the risk of heart attacks but don't know that taking it at a certain time of day increases its effectiveness. Another example is statins, medications that are broadly used for high cholesterol. It has been shown very clearly that statins are most effective when taken at night, because that's when the levels of the enzymes they block are highest.
A lot of drug metabolism is done in the liver. Within the liver, various enzymes are responsible for metabolizing these drugs, and their number and activity levels increase and decrease in a circadian rhythm. This means if you take a medication at a certain time of day when the enzymes that break it down are highly active, the drug may be metabolized more quickly. This dynamic underscores the importance of understanding the timing of drug administration in relation to circadian biology, an aspect that is often overlooked.
The first piece of the puzzle is education: There needs to be more education about circadian biology. This should start in medical school. Health care providers should be knowledgeable about the fact that circadian biology affects physiology and pathology in order to most effectively treat their patients. And then providers, in turn, need to educate patients and the community at large about circadian biology. For medications, this can be as simple as letting patients know when you prescribe a medication that it's most effective when taken at a certain time of day. It's also important to educate the public about other aspects of circadian biology that impact their health. Several studies have shown that eating at the wrong time of day disrupts a person's clock, so eating at the proper times is important for your health.
The second piece is that there should be a greater investment in research to optimize drug effectiveness by taking advantage of the circadian clock. Pharmaceutical companies should take more initiative to enhance their products by prioritizing research in this area, and when a new drug is developed, researchers should investigate its pharmacokinetics—the way the body absorbs, distributes, metabolizes, and excretes the drug—at different times of day. There is so much more to be discovered in this area that can make a real difference in clinical practice and patient outcomes.
We are currently investigating how disrupting the circadian clock in cancer cells of various types of cancer—such as breast cancer, melanoma, and colon cancer—affects tumor growth and immune system responses. Our preliminary findings suggest that the impact is tumor-type specific. For instance, removing the clock in breast cancer cells leads to faster tumor growth, so a disrupted clock gives the tumor cells an advantage. My hypothesis is that the body's clock acts as an "orchestra leader" for all of the cells in our body: at night, metabolism winds down and the body rests, but cancer cells ignore these signals and continuously proliferate.
We are also exploring the impact of manipulating diets, specifically through time-restricted diet and fasting, as food is a cue for our internal clocks. So far, we have seen slower tumor growth and enhanced effectiveness of immunotherapy as a result of the diet manipulation. We think this may somehow be resetting the clock. Our next steps involve examining whether these dietary manipulations still yield benefits when the circadian clock is disrupted in tumor cells or even the whole organism, as well as investigating the mechanisms involved in the immune response and how exactly they are impacted by the body's clock.
https://hub.jhu.edu/2025/10/06/circadian-rhythms-and-cancer-treatments/
*
THE SURPRISING BENEFITS OF STANDING ON ONE LEG
Balancing on a single limb can be surprisingly challenging as we get older, but training yourself to do it for longer can make you stronger, boost your memory and keep your brain healthier.
Unless you're a flamingo, standing delicately poised on one leg probably isn't something you invest a lot of time in. And depending on your age, you might find it surprisingly difficult.
Balancing on one leg generally doesn't take a lot of thought when we are young. Typically our ability to hold this pose matures by around the age of nine or 10. Our balance then peaks in our late 30s before declining.
If you're over the age of 50, your ability to balance on a single leg for more than a few seconds can indicate a surprising amount about your general health and how well you're aging.
But there are also some good reasons why you might want to spend more time wobbling about on one pin – it can bring a range of benefits to your body and brain, such as helping to reduce the risk of falls, building your strength and improving your memory. This deceptively simple exercise can have an outsized effect on your health as you age.
"If you find that it's not easy, it's time to start training your balance," says Tracy Espiritu McKay, a rehabilitation medicine specialist for the American Academy of Physical Medicine and Rehabilitation. (More on how to build a one-legged training regime into your day later in this article.)
Why care about your balance?
One of the main reasons doctors use standing on one leg as a measure of health is its link with the progressive age-related loss of muscle tissue, or sarcopenia.
From the age of 30 onwards, we lose muscle mass at a rate of up to 8% per decade. By the time we reach our 80s, some research has suggested that up to 50% of people have clinical sarcopenia.
This has been linked to everything from diminished blood sugar control to waning immunity against diseases, but because it affects the strength of various muscle groups, it is also reflected in your ability to balance on one leg. At the same time, people who practice one-legged training tend to be less vulnerable to sarcopenia in their later decades, as this simple exercise helps keep the leg and hip muscles honed.
"The ability to stand on one leg diminishes [with age]," says Kenton Kaufman, director of the motion analysis laboratory at Mayo Clinic in Rochester, Minnesota. "People are over 50 or 60 when they start to experience it and then it increases quite a bit with each decade of life after that."
There is also another, more subtle reason that makes our ability to balance on one leg important – how it links to our brain.
This deceptively simple pose requires not only muscle strength and flexibility, but your brain's ability to integrate information from your eyes, the balance center in the inner ear known as the vestibular system and the somatosensory system, a complex network of nerves that help us sense both body position and the ground beneath us.
"All of these systems degrade with age at different rates," says Kaufman.
This means that your ability to stand on one leg can reveal a lot about the underlying state of key brain regions, says Espiritu McKay. This includes those involved in your reaction speeds, your ability to carry out everyday tasks and how quickly you can integrate information from your sensory systems.
All of us experience a certain amount of brain atrophy, or shrinkage, with age, but if this starts happening too quickly, it can impede your ability to remain physically active and live independently in your later years, and increase your risk of falling. Data collected by the Centers for Disease Control and Prevention has shown that unintentional falls, typically caused by a loss of balance, are the leading cause of injuries among over-65s in the US. Researchers say that practicing single-leg exercises can be a good way of reducing this fall risk.
According to Kaufman, falls often come down to waning reaction times. "Imagine you're walking along, and you trip over a crack in the sidewalk," he says. "Most often, whether you fall or not isn't a strength issue, but it's whether you can move your leg fast enough, and get it to where it needs to be, to arrest your fall."
Spookily, your ability to stand on one leg even reflects your short-term risk of premature death. Take the findings of a 2022 study, which found that people unable to hold a single-legged pose for 10 seconds in mid-to-later life were 84% more likely to die from any cause over the following seven years. Another study took 2,760 men and women in their 50s and put them through three tests – grip strength, how many times they could go from sitting to standing in a minute and how long they could stand on one leg with their eyes closed.
The single-leg stance test proved to be the most informative for their disease risk. Over the next 13 years, those able to stand on one leg for two seconds or less were three times more likely to have died than those who could do so for 10 seconds or more.
According to Espiritu McKay, this same pattern can even be seen in people who have been diagnosed with dementia – those who can still balance on one leg are experiencing a slower decline. "In Alzheimer's patients, researchers are actually finding that if they're unable to stand on one leg for five seconds, it usually predicts a faster cognitive decline," she says.
Training your balance
The better news is that research increasingly shows that we can do a lot to reduce the risks of these age-related problems by actively practicing standing on one leg. Such exercises – which scientists refer to as "single leg training" – can not only hone your core, hip and leg muscles but also improve your underlying brain health.
"Our brains aren't fixed," says Espiritu McKay. "They're pretty malleable. These single leg training exercises really improve the balance control and actually change how the brain is structured, especially in regions that are involved in sensory motor integration and your spatial awareness."
Balancing on one leg can also boost our cognitive performance while performing tasks by activating the pre-frontal cortex of the brain, with one study showing it can even improve the working memory of healthy young adults.
Espiritu McKay recommends that everyone over 65 do single leg training exercises at least three times a week to improve their mobility and reduce their future fall risk, though ideally, she says, it should become part of your daily routine.
There may be greater benefits from starting this kind of training even earlier in life.
Claudio Gil Araújo, an exercise medicine researcher at the Clinimex clinic in Rio de Janeiro who led the 2022 study linking one-legged standing with premature death risk, suggests that everyone over 50 carry out a self-assessment of their ability to stand on one leg for 10 seconds.
"This can be easily incorporated into your daily activities," he says. "You can stand for 10 seconds on one leg and then switch to the other while brushing your teeth. I also recommend doing this both barefoot and with shoes on, because they're slightly different."
This is because wearing shoes produces different levels of stability compared to being barefoot.
Studies have shown that a combination of strength, aerobic and balance training exercises can reduce risk factors associated with falls by 50%, and this connection may also explain why activities such as yoga or tai chi, which often involve holding single-leg poses, have been linked with healthy aging. Kaufman points to a study which found that tai chi was linked with a 19% decrease in the risk of falls.
Most optimistically, Gil Araújo has found that with persistence and consistency, it's possible to retain good balance even well into your nineties, and possibly even beyond.
"At our clinic, we assessed a woman who was 95 and able to successfully hold a single-leg stance for 10 seconds on either foot," he says. "We can train and improve the performance of our biological systems until the last days of our life, even if you're a centenarian.”
https://www.bbc.com/future/article/20260114-the-surprising-benefits-of-standing-on-one-leg
*
ending on beauty:
Aspen tree, your leaves gaze white into the dark.
My mother’s hair never turned white.
Dandelion, so green is Ukraine.
My fair-haired mother did not come home.
Raincloud, do you dally by the well?
My quiet mother weeps for all.
Round star, you coil the golden loop.
My mother’s heart was seared by lead.
Oak door, who ripped you off your hinges?
My soft mother cannot return.
~ Paul Celan, translated by Kerry Shawn Keys