*
MY REAL LIFE
I only pretend
to live at home, in Warsaw,
propaganda posters
rotting in the rain.
In the green of the green
of my heart, I am deep
in the Amazon jungle,
searching for the trembling
drop of sap that could save
humanity from some terrible disease.
Phosphor eyes flame in the foliage.
Everything glitters
and slithers. I unspool
the enormous river —
The teacher asks about
oil fields in Romania,
pig iron in Sweden;
the priest wants to know
why I stopped going to church.
I dismiss them all.
My hand is the map of
a greater destiny: I untwine
the passion of vines,
enter crumbling gates
of abandoned empires;
hear the silence of the
square-faced gods
still demanding daily sacrifice —
as I sit at my desk,
drizzle snailing
long tears down the pane.
But beyond the gray
blazes river-sky,
living gold of a sunset
on the Amazon.
Hour after hour
I sit over homework,
subtracting, dividing,
always making errors —
tropical forests
sing in my braided hair.
~ Oriana
*
MARY BENNET: THE BENNET SISTER FOR TODAY
Based on the acclaimed novel The Other Bennet Sister, a new TV imagining of the life of Pride and Prejudice character Mary Bennet has won the hearts of British viewers. Here's why this Austen character is so relatable today.
Mary Bennet and her mother
"Her mother was right, she neither glowed nor bloomed," writes Janice Hadlow in her 2020 novel The Other Bennet Sister, a reimagining of the life of Mary Bennet. Mary is one of the five Bennet sisters in Jane Austen's Pride and Prejudice – the one who is most often dismissed and forgotten. Despite early-19th Century conventions, she has no interest in marriage, or societal occasions: she'd rather have her head in a geology book. Oh, and to her family's dismay, she wears spectacles. But how else is she meant to read?
"Austenmania" is ever-growing, and 2026 alone will see three new major Austen adaptations across film and TV. From zombie comedies to murder mysteries, servant perspectives and Bollywood musicals, there have been endless spin-offs of Austen's beloved stories and characters. Even Mary, who is a minor character in Pride and Prejudice, has not been ignored in modern literature. There are a host of books about what she might have got up to when Elizabeth Bennet wasn't around – including a historical spy trilogy.
Now The Other Bennet Sister has been adapted for the small screen by Sarah Quintrell and Maddy Dai. Telling Mary's story through her own eyes, the show has been a huge hit with British audiences: 7.3 million viewers watched the first episode, and there have been swarms of Gen-Z TikTok fan edits. Why has this interpretation of the "plain" middle child sparked such devotion in viewers, particularly younger ones?
One way to answer that question is to go back to Austen's novel. Sandwiched between the two attractive, sensible elder sisters, Jane and Elizabeth, and the immature younger sisters, Kitty and Lydia, Mary is an outlier. She is neither beautiful, nor silly. So, where does that leave her, and what did Austen intend to do with this character?
Mary's function in Pride and Prejudice is showing "the different ways girls can be," says Sandie Byrne, professor of English at the University of Oxford, UK. "Not all girls about whom you might write are beautiful and perfect. Austen is always mocking the sentimental novel and the gothic novel, in which the heroines are beautiful and talented... She's saying, some girls are like Mary.”
Austen also demonstrates one of the truths of motherhood, Byrne says: "Children aren't always loved, and children aren't always loved equally."
A relatable heroine
When we meet Mary (played by Ella Bruccoleri) in the BBC TV series, she is clumsy, awkward and finds conversation a struggle, spouting random bits of knowledge to carry her through. There are hints such as these throughout that – in the modern day – she might be considered neurodivergent. Mary reminds us that children can be raised in the same household, but have very different perspectives and experiences of life – a familiar feeling for many.
And even if viewers don't relate directly to her, they will at least root for her. "[People say,] I want to see her do well, because we recognize a woman who's been held up to standards in society that she can't reach, that she's having a value system placed on her," the show's writer Quintrell tells the BBC.
Mary's journey properly begins with the death of her father, Mr Bennet (Richard E Grant). As per the inheritance laws of Regency England, his cousin Mr Collins (Ryan Sampson) has become the owner of the Bennet estate, Longbourn, and he swiftly moves in with his new wife before the family have had a moment to grieve. (In a nice touch, Lucy Briers, the original Mary Bennet in the BBC's 1995 TV adaptation of Pride and Prejudice, plays housekeeper Mrs Hill.)
As the only unmarried sister, Mary receives a proposition from her aunt and uncle, the Gardiners (Indira Varma and Richard Coyle), to stay with them in London. It's a far better option than being the full-time companion to her mother, Mrs Bennet (Ruth Jones). Mary relentlessly seeks but cannot gain the approval of her critical, self-absorbed mother, whose signature phrase is "You have no compassion on my poor nerves!"
Mary's move to London, away from the shadows of her sisters and the scathing remarks of her mother, allows her to blossom. She is shown choosing increasingly colorful fabrics for her dresses – a sign that she is beginning to embrace her authentic self. The theme of fashioning your own identity is a timeless one, but it's especially relevant for young people today.
"I think we have a very similar thing today with social media. You have all this messaging, which is telling you to make yourself into a certain person in order to be acceptable to society," Bruccoleri, who plays Mary, tells the BBC. "I think it's a really narrow definition… this is what perfect looks like, and this is what you have to be."
One character who encourages Mary to make her own choices, and helps her to flourish, is Mrs Gardiner. Through gentle encouragement and invitation, she provides the maternal warmth that was lacking in Mrs Bennet – and this gives the series its truest love story. "Mrs Gardiner is a really good example of how we can be with young people in terms of not judging, and not telling them what we think they should do and how we think they should live… but shepherding them," Quintrell says.
Still, Mrs Bennet has a small redemption arc, as the audience is reminded that her actions are a product of society: she needs to save five daughters – and herself – from ruin. "I don't think she's a villain," Bruccoleri says. "I think she's trying to show Mary love in a very practical way, but it's not what Mary needs or is particularly helpful to her.”
The protagonist of The Other Bennet Sister has struck a chord with viewers
The series even manages to convey the humanity in Caroline Bingley and Mr Collins, two of Austen's more snooty and pompous characters, "which really speaks to Mary's ability to see people for who they are," Quintrell says. This is one reason why the series works so well: it offers fresh perspectives on characters that have existed for centuries.
And let's not forget the romance. By being herself, Mary attracts two potential suitors – the endearing Tom Hayward (Dónal Finn) and charming William Ryder (Laurie Davidson). Hayward and Mary's relationship is delicate and considered, founded in particular on their love of reading (not to mention, he also wears spectacles).
In an iconic scene, Mary sits back and relaxes as Ryder and Hayward pull her rowboat to shore. As both men are handsome, intelligent and likable, Mary's agency takes full effect when she is faced with another life-changing decision – which she makes in her own time, and on her own terms.
The Other Bennet Sister is a delight because it shows that all girls have value, however quirky or bookish. And the warmth and humor the series brings to Mary's idiosyncrasies – from her wonky still-life paintings to her enthusiastic bird-call imitations – underline this message.
Above all, the series shows the joy to be found as a young adult in discovering one's own agency and unique sense of self, however intense the pressures and expectations of the world around us. As the show's star Ella Bruccoleri puts it: "I just love that this story is about trying to shut out that noise, and about listening to your own instincts.”
https://www.bbc.com/culture/article/20260501-why-mary-bennet-is-austens-most-relatable-heroine
*
WHY ENGLISH LOST THE DISTINCTION BETWEEN FORMAL AND INFORMAL ADDRESS WHILE ITS NEIGHBORS KEPT IT
Modern English uses the historically formal, aristocratic you for everyone, from a newborn baby to a reigning monarch. We didn't lose our formal pronoun; we lost our informal one.
In 17th-century England, addressing an acquaintance with the word "thou" was a profound insult that could ruin a business deal or provoke a physical fight. The language's formal-informal distinction faded not because speakers became more egalitarian, but because they became exceptionally anxious about social status.
Historically, English did not have a formal and informal split. In Old English, thou (and its object form thee) was simply the singular pronoun, while ye (and you) was the plural. This changed following the Norman Conquest in 1066. The French-speaking elite brought with them the practice of using a plural pronoun to show respect to a single person of higher status, similar to the French vous. English speakers adopted this custom, turning the plural you into a formal singular pronoun, while thou became the informal singular.
By the Early Modern period, the rules of address were complex and fraught with social peril. A person was expected to use you for social superiors and strangers, and thou for family, servants, and social inferiors. However, the rapid rise of a wealthy merchant class began to blur the rigid lines of the traditional feudal hierarchy. It became increasingly difficult to gauge exactly where a stranger or an associate stood on the social ladder.
This ambiguity created a daily linguistic minefield. Using thou to address someone who considered themselves an equal or superior was deeply offensive. When Attorney General Sir Edward Coke prosecuted Sir Walter Raleigh in 1603, he famously weaponized the pronoun to show his extreme contempt, snarling in court, "I thou thee, thou traitor!"
To avoid accidentally insulting anyone, people began defaulting to the polite you in almost all interactions. The shift was driven by self-preservation and courtesy. Over the course of the 17th and 18th centuries, you expanded to cover nearly every social interaction. The Religious Society of Friends (Quakers) famously pushed back, insisting on using thou to emphasize the equality of all people, a practice that earned them intense public mockery and even imprisonment.
Meanwhile, neighboring languages like French, German, and Spanish maintained their distinctions. These languages were often guided by centralized linguistic academies and a cultural insistence on verbalizing social rank, which anchored their pronouns in place.
English, lacking a formal language academy and driven by the shifting tides of a highly mobile middle class, took a different path. English speakers simply adopted polite over-correction until the informal pronoun vanished entirely. ~ Sepia Glyphs, Quora
*
JEAN-PAUL SARTRE ON WHY SOME PEOPLE EMBRACE HATE
What makes a person hate a particular group of people and support policies that harm them? Writing in 2020, comparative literature scholar Judith Greenberg looked at French existentialist Jean-Paul Sartre’s classic essay “Anti-Semite and Jew” in light of twenty-first-century American politics.
Sartre wrote the essay in 1944, as people around the world were learning the extent of the atrocities committed during the Holocaust. He was also grappling with the continuing power of anti-Jewish sentiment in France at the time.
Greenberg takes issue with the way that Sartre understood Jews themselves. Ignoring Jewish religious belief, history, and culture, Sartre equated Jewish identity with legitimate fear of antisemitic violence. He claimed that “it is the anti-Semite who creates the Jew.”
However, Greenberg notes that Sartre’s real interest wasn’t in Jewish perspectives but in what makes a bigot. In fact, Sartre explicitly wrote that in other contexts the same scapegoating purpose could be served by Black or Asian people. Today, Greenberg writes, we might also substitute immigrants, Muslims, or members of the LGBTQ community.
Sartre viewed antisemitism as a solution for the fundamental human problems of anxiety and alienation. In particular, he focused on how “being-for-others”—existing with the awareness of others’ perceptions—creates tension through the risks of exposing one’s inadequacies. Because of this, anyone may become overwhelmed by a social world with many different perspectives and demands, and by the possibility of getting things wrong.
To Sartre, antisemites are people who suffer from this insecurity and fear and are unwilling to do the work of adapting to cultural change and learning new things. This leaves them vulnerable to propaganda offering simple answers. The antisemite also revels in the release of constraints imposed by living peacefully in society with others and finds comfort in joining crowds of people like them.
“When gathered among other like-minded people, they celebrate their resistance to difference and difficulty,” Greenberg writes. “In creating fear for the scapegoat, they displace their existential anxiety and bolster themselves with a false sense of superiority.”
In dominating more vulnerable people, Sartre wrote, the antisemite chooses “to be nothing save the fear he inspires in others” and finds his identity in others’ reactions. Greenberg suggests that this resembles what we might call narcissism—though that’s a psychoanalytic term that Sartre would not have used.
Sartre argued that antisemitism should have no place in universities or legitimate public discourse because it is an unreasoned “passion” rather than an idea that can be sensibly debated. He described how antisemites weaponize absurdity and “delight in acting in bad faith” to “intimidate and disconcert” those trying to counter them with reasoned arguments.
While acknowledging that antisemites may have good characteristics—a loving husband, a generous and conscientious citizen—Sartre claimed that their hatred ultimately defines them.
“A man who finds it entirely natural to denounce other men cannot have our conception of humanity,” he wrote. “He does not see even those whom he aids in the same light as we do.”
Greenberg suggests this remains a useful lens for looking at bigotry today.
https://daily.jstor.org/why-do-people-embrace-hate-sartre-has-an-answer/?utm_source=firefox-newtab-en-us
*
WHY RELIGIOUS PEOPLE TEND TO BE HAPPIER
Across eight studies, one variable statistically explained why religious people tend to be happier: the belief that “the world needs me.”
The relationship between religion and happiness disappears when you control for this belief.
It’s not about ego. The belief that the world needs you reflects purpose and belonging, not superiority.
Service, caregiving, and reflection may help anyone—religious or not—develop this belief.
For decades, psychologists have observed a consistent pattern: religious people tend to report being happier and more satisfied with their lives than non-believers. But why this happens has been much harder to pin down. Is it the social support of belonging to a faith community? A stronger sense of meaning? Hope in the face of hardship? Or perhaps, as the medieval philosopher Thomas Aquinas once suggested, simply the comfort of contemplating God?
Here at the Penn Primals Project, our main research program is focused on primal world beliefs—people’s most basic assumptions about the nature of reality. There are 26 primal world beliefs, such as the belief that the world is abundant versus barren, interesting versus dull, and so forth. Primal world beliefs aren’t religious doctrines or personality traits, but views that people have about what is objectively true of the world generally. Researchers suspect they might function as lenses that color how people interpret new experiences and information.
In our study, recently published in the Journal of Positive Psychology, we analyzed data from eight samples involving more than 3,000 American adults, using multiple different measures of happiness and religiosity. Across all of them, we found the same age-old correlation: religious people were (on average) happier.
In sample after sample, one primal world belief played the biggest role
This happened not just in a few samples, but was remarkably consistent across all of them. No other world belief explained the link as well. So, it wasn't just that religious people tend to think more positively about life; they had to be positive in this one highly specific way.
The belief that the world “Needs Me” fully explained the religiosity-wellbeing relationship.
Across all samples and religions we examined, people who were more religious were more likely to believe that the world needs them. That belief, in turn, had to be present for religious people to get that wellbeing bump.
Once this belief was controlled for, religiosity did not predict well-being at all.
Whether our participants were healthy adults, people living with illness, or those who had faced trauma, the same pattern held.
What is this belief exactly?
This belief is the view that the universe is like this enormous jigsaw puzzle, with one particular—even unique—piece missing, and you are that piece. You, specifically, have an important part to play in this world. You are not simply interchangeable in the cold calculus of the universe’s machinery.
To measure this belief, study participants note how much they agree or disagree with four statements:
The universe needs me for something important.
The world needs me and my efforts.
Life has an important part for me to play.
It feels like the world doesn’t really need me for anything.* (responses are reversed)
Humans thrive on being needed
The point is not self-glorification, but need-person fit.
Importantly, this belief is not the same as narcissism or self-importance. Several studies show that people who strongly endorse “the world needs me” don’t necessarily think they’re better than others—they might simply feel that they have a specific role to play. A doctor might feel her hospital needs her especially; a father might feel his children need him especially; activists might believe they have a unique contribution to make to their cause.
Why it might matter—for everyone
For religious people, perhaps there are ways that one’s faith can promote this primal world belief even more. Many religious traditions already teach that each person has a divine purpose, that their life fits into a larger plan. By emphasizing these teachings, perhaps believers could enjoy even higher well-being.
These findings are especially hopeful for secular folks. Seeing the world as a place that needs you does not require you to be religious. Feeling needed doesn't have to come from your career. Other people might focus on their roles as a parent, a dog owner, or someone who brings people together by throwing great dance parties.
Our research suggests that non-religious people might have a path to well-being that perhaps they have not considered: even without religious faith, perhaps people can cultivate the belief that the world "needs me" through reflection, service, and meaningful roles like volunteering, caregiving, mentoring, or creative work.
Humans thrive on being needed. To be needed is to belong, to matter. Religiosity may foster greater happiness, but in practice it may only do so alongside the belief that the universe has a role for you. Regardless of your religious views, finding that role—or just being more aware of the important roles you already have—may make you a happier person. 
https://www.psychologytoday.com/us/blog/primal-world-beliefs-unpacked/202605/why-are-religious-people-happier
*
A REVERSAL OF THE RELIGIOUS GENDER GAP
A recent Gallup survey has found a surprising surge in religiosity among young American men. It has also found that young American women report the least religiosity among all groups Gallup surveyed.
Surprisingly, these findings have reversed a long-standing gender gap regarding religiosity among Americans.
It may come as a surprise to learn that, in the same couple of weeks in which the U.S. Secretary of Defense repeatedly used religious language to describe the actions of the U.S. military in Iran; the U.S. President posted an AI-generated image of himself on Truth Social that resembled pictures of Jesus (an image which at least one prominent American Christian labeled “blasphemous”); the U.S. Vice President (a recent convert to Catholicism) advised Pope Leo to “be careful when he talks about matters of theology”; and a marathon public reading of the entire Hebrew and Protestant Bibles by various notable figures, including the President, is slated, there were much more arresting developments concerning religion in America—at least for scholars of American religions.
They would surely reserve that distinction, instead, for some startling new findings from a recent Gallup survey about religiosity among adults in America.
The Importance of Religion in the Lives of Young Adult Males in America
Focusing on what columnist Christine Emba, in a piece for the New York Times, describes as “highly networked urban areas... and Ivy League campuses,” various journalists have speculated that religiosity appears to be on the rise recently in America. Emba notes, however, that although conversions may be up in those settings, overall, the long-term trends have not changed. Polls and surveys have tracked a many-decades-long, gradual decline in self-reported religiosity and attendance at religious services among Americans.
It is precisely because of those long-standing trends that the results of this recent Gallup survey have piqued scholars’ interest. Although they do not indicate any reversal of those patterns, they did reveal two unanticipated developments.
The first was a comparatively sudden leap in the percentage of young adult males (ages 18 to 29), from 28 percent in 2023 to 42 percent in 2025, who reported that religion was “very important” in their lives.
The survey also found, within that same demographic group, an increase (from 33 percent to 40 percent) in Gallup’s principal behavioral measure of religiosity, namely, attendance at religious services once or more each month.
Is the Religion Gender Gap Reversing?
Such a surge in religiosity among young American men is certainly noteworthy, but it is the second unexpected finding in the Gallup survey that is even more attention-grabbing. It is young American women (again, ages 18-29) who are now the least likely demographic group that the Gallup survey measured to say that religion is important in their lives, and these same young women (39 percent) now report attending religious services less often than do males (40 percent) of the same age.
These findings reverse what is, perhaps, the best-known pattern in all of social scientific research on religions. In most cultures and most religions, females are more likely than males to affirm the importance of religion in their lives and more likely to pray daily. Women are also more likely to attend religious services. The only exceptions to that pattern of attendance arise among religious groups, for example, Muslims and Orthodox Jews, that restrict (at least some) public expressions of religiosity to males.
The Gallup finding about declining religiosity among young American women is all the more striking because all of these gender gaps—about prayer, attendance, and the importance of religion—have generally proven more pronounced among Christians, compared with other major religions (and of course, Christians remain the largest religious group in the U.S.).
Even more telling, perhaps: All of these gender gaps (until now!) have generally proven more pronounced among American Christians, in particular, compared with Christians in other developed countries, such as Canada and the United Kingdom. Theoretical proposals for explaining the familiar gender gap in religiosity have appealed to various biological, psychological, familial, social, and economic variables or to combinations of some or all of the variables to which these theories point. This new gender gap reversal in America poses an explanatory challenge for them all.
https://www.psychologytoday.com/us/blog/why-religion-is-natural-and-science-is-not/202605/a-reversal-of-the-religious-gender-gap
Oriana:
While the subject of why some people need religion much more than others is no doubt multi-factorial, a few variables stand out. One important factor is education. Those with the least formal education seem the most religious — and until fairly recently, women received less education than men.
But this educational gap has reversed in recent decades, especially when it comes to post-high school education.
~A 2017 study found that Americans with college degrees are roughly three times more likely to identify as atheist or agnostic (11%) than those with only a high school education or less (4%) ~
Coming from a mono-Catholic country, I was fascinated to find a wild variety of belief among Americans. While the people in my immediate circle are overwhelmingly secular, some of them have developed personal beliefs that fit their needs. Sometimes it’s the need for growth and learning that leads them to believe in multiple reincarnations; sometimes they want to speak again to their dead loved ones, and lean toward accounts of near-death experiences, which tend to emphasize meeting dead relatives and beloved dead pets rather than the Boss.
Most people seem unwilling to call themselves atheists (within a week of coming to the US, I was warned never to call myself an atheist). They prefer to say that they believe “there is Something out there.” Many experience the universe as endowed with cosmic intelligence and “psychoid,” i.e. responsive to their deep desires. Others miss the sense of community that comes from belonging to a church, and become Unitarians, a denomination that doesn’t force them to believe any particular dogma and leaves the definition of the divine permissively fluid.
I think religion will further decline but not die out — something I ardently hoped for in my post-Catholic youth. I hope it will evolve toward becoming more nurturing and less concerned with vicious punishment for non-observance, much less non-belief. What I see is religion becoming less collective and more personal – and indeed even more nurturing, supportive, and forgiving compared to the old-time hell-fire religion. Some (even Catholics) indeed suggest getting rid of the concept of hell – which upsets old-style believers – "But without the fear of hell, no one would follow Jesus Christ!" (a direct quotation from a Catholic).
*
WHY BRITAIN ABOLISHED SLAVERY
In the 21st century there has evolved a general consensus about abolition: that Britain turned its back on the slave trade (in 1807) and slavery (in 1833) for economic reasons. This view derived from Eric Williams’ Capitalism and Slavery (1944), which also rejected the previous emphasis on the role played by William Wilberforce and outraged Christianity.
Yet though Williams’ argument – that slavery had helped British industrialization, but that it was inappropriate in a world of free trade – gained popular and scholarly support, it overlooks the profound cultural changes in British life which helped pave the way for abolition. New studies – such as David Richardson’s Principles and Agents (2022) which throws new light on the intellectual roots of abolition – oblige us to rethink the entire story.
The rise of the British abolition movement is curious because it was a rejection of a system that had been unchallenged for two centuries. Britain (and many others) had developed a thriving industry on the backs of armies of enslaved Africans. The extreme brutality of the system was widely known from the start. Then, in the space of 50 years, Britain denounced that system for its inhumanity.
This was perhaps the most spectacular example of a much broader shift in humanitarian attitudes. Across the 18th and 19th centuries a humane sensibility emerged which challenged a range of entrenched cruelties such as violence in the penal code, in military matters, against apprentices and children, and blood sports – the forerunner to the RSPCA was founded in 1824. All this was nurtured by rising literacy and the spread of print, democratic voices, and nonconformity.
This broader revulsion at violence assisted the emergence of abolitionist sentiment, helped, from the late 18th century, by the work of Baptist and Methodist missionaries in the Caribbean, such as Reverend John Smith who was accused of being behind the Demerara revolt in 1823. The violent suppression of slave dissent came to be seen as the persecution of fellow Christians.
Both the slave trade and slavery were ended by Acts of Parliament, and that involved protracted political agitation to persuade both Houses that they were no longer in Britain’s best interests. It was here that Wilberforce and other activists were important, but the political success of abolition was driven by a broader shift in public feeling.
When British abolitionism began in 1787, the Atlantic slave system was enormous, thriving, and profitable: Britons were then forcibly transporting around 50,000 Africans a year to Caribbean colonies where half a million people were held in captivity. Why would Britain end such an established and successful system? Morality provides one answer. Evangelical Christianity experienced a resurgence in the late 18th century, not only among Quakers (who had long opposed slavery) but also among Anglicans and Methodists.
The men who first organized against the slave trade, such as Thomas Clarkson, William Wilberforce, and Granville Sharp, were drawn from such evangelical sects, and their religious fervor would sustain them through the exhausting 20-year-long abolitionist campaign. Once that fervent vanguard began to publicize the horrors of Atlantic slavery via speeches, pamphlets, and posters, they found a receptive audience in a more religiously minded British public that increasingly opposed violence and coercion. That public was also eager to back a campaign that promised ‘Moral Capital’ – national renewal after the humiliating loss of the American colonies.
While morality was important, there were also more hard-nosed reasons for British abolition. By 1787 Caribbean slavery was being eclipsed by Britain’s growing trade to India, and so the end of slavery did not imperil the survival of the Empire. Britain also still stood to profit from slavery after abolition: ‘freed’ Caribbean slaves were forced to work as ‘apprentices’ on their former owners’ plantations for five years and then received no compensation for their enslavement; and the forced labor of enslaved people on American cotton plantations – who would not be emancipated until 1865 – helped to power Britain’s Industrial Revolution.
The ending of slavery was also crucial to the establishment of Britain’s hegemony during the 19th century. Abolition proved a useful screen for the colonization of Africa, with African rulers who refused to sign anti-slavery treaties attacked and annexed. Abolition provided a pretext for Britain to police international sea lanes and to diplomatically pressure rival states. Ending slavery was therefore a true moral crusade – but one that did much to further colonization and the growth of Britain’s Empire.
The British abolished slavery in their Caribbean colonies for several reasons, not least among which were the recurrent rebellions of the enslaved. Historians now refer to the agency of the enslaved in their own emancipation as ‘revolutionary emancipation’.
Armed attempts by the enslaved to free themselves were certainly older than the British anti-slavery movement, which began by the late 18th century. However, planter accusations that white British abolitionism fomented the servile revolts of Barbados (1816), Demerara (1823), and Jamaica (1831) initiated a solid connection between black and white anti-slavery.
In their parliamentary speeches, publications, and personal correspondence, William Wilberforce and his abolitionist colleagues referred specifically to these rebellions to justify their demand for amelioration and then abolition. In response to the Demerara servile war, for example, Thomas Clarkson was convinced that the slaves’ rising was a mandate to the British abolitionists ‘not to delay or suspend but to renew and redouble our exertions’.
On 15 May 1823 Thomas Fowell Buxton warned his fellow parliamentarians that, facing the recurrent threat of servile revolts:
Security is to be found – and is only to be found – in justice towards that oppressed people. If we wish to preserve the West Indies – if we wish to avoid a dreadful convulsion – it must be by restoring to the injured race those rights which we have too long withheld.
The Jamaican revolt of 1831-32 brought even greater urgency and apocalyptic warnings. The minute book of the London Anti-Slavery Society recorded after a meeting in 1832 that:
it is only by the interposition of Parliament can any hope be entertained of peacefully terminating its unnumbered evils or any security afforded against the recurrence of those bloody and calamitous scenes which have recently afflicted Jamaica.
The revolutionary conduct of the enslaved was effective. It radicalized British anti-slavery rhetoric and, therefore, provided the fiery material that persuaded Parliament – faced with the prospect of continued servile war – to abolish slavery in August 1834.
Debates over why Britain abolished slavery are tied up in ideas about how Britain understands itself as a nation and empire. It was a process that required the confluence of multiple factors. The politicization of this history has pitted intellectual theorizations against each other, with economic explanations deemed in conflict with the notion of anti-slavery as a moral phenomenon. This has created an unhelpful dichotomy when a synthesis would offer a more considered understanding.
Debate about the causes of abolition stemmed from the publication of Capitalism and Slavery by Trinidadian historian Eric Williams. Williams interrogated the economic preconditions for abolition and challenged the idea that it was humanitarian sentiment alone that led to the dismantling of slavery, although he acknowledged the importance of anti-slavery as a mass movement.
Racial slavery was a feature of Britain’s Atlantic empire, and was supported through the intertwining of mercantilism, monopolies, preferential taxation, and the Navigation Acts. These measures created a closed system of trade which ensured the competitiveness of slave-produced commodities into the 19th century, even as parts of the Caribbean, especially older colonies such as Jamaica, began to falter economically owing to issues such as soil exhaustion.
Historians have contested whether the slave economy was in decline before abolition; some suggest that this weakness contributed to its downfall while others argue that Britain committed an act of economic suicide in the service of its anti-slavery principles.
The Caribbean was not a homogenous economy and the profitability of slavery varied between colonies. For example, British Guiana continued to be highly profitable up to abolition.
After the loss of the Thirteen Colonies Britain increasingly turned eastward to its informal empire in India. As Britain industrialized it sought new markets. Free trade began to emerge as a governing principle, unseating the old economic ideas that underpinned slavery. New interest groups pushed for reform, including changes to parliamentary representation which impacted on pro-slavery lobbying power. Diminished economic and political standing, abolitionist pressure, and an uprising by enslaved people in Jamaica in 1831-32 all contributed to why Britain abolished slavery when it did.
https://www.historytoday.com/archive/head-head/why-did-britain-abolish-slave-trade?utm_source=Newsletter&utm_campaign=1bbec03580-EMAIL_CAMPAIGN_2017_09_20_COPY_01&utm_medium=email&utm_term=0_fceec0de95-1bbec03580-1214148&mc_cid=1bbec03580
*
SHOULD HOMEWORK BE ABOLISHED?
A few days into the new semester this January, the LaSalle Parish school district in rural Louisiana made a pronouncement: No more homework.
Since then, none of the 2,500 students in this district — from the youngest learners up through high school seniors — have been required to do schoolwork at home. Parents can request practice problems if they'd like, Superintendent Jonathan Garrett said, but that work won't be mandatory or graded.
Homework assignments, it turned out, were among the biggest sources of complaints Garrett had heard from parents and students over the years.
"When there was a negative feeling about school, it usually stemmed from what kids are bringing home, the frustrations they feel completing that, and that parents and guardians feel trying to help them complete it," he said in an interview.
Beyond that, Garrett said the move was driven by concerns – shared by many educators – that much of the homework students are assigned – especially in math – is needlessly repetitive, takes too long to complete and hasn't adapted to the challenges posed by Artificial Intelligence.
The response to Garrett's announcement was swift — and overwhelmingly positive. The message is the district's most "liked" post on Facebook by far this year, with hundreds of shares — many of them by parents from neighboring parishes asking how they could get their own schools on board.
The scope of the district's no-homework guidance is new, but it follows a trend that educators and researchers have been noticing for years: More teachers are moving away from homework.
Federal survey data shows that the amount of math homework assigned to fourth and eighth grade students, in particular, has been steadily declining for the past decade.
Some educators and parents say this is a good thing — students shouldn't spend six or more hours a day at school and still have additional schoolwork to complete at home. But the research on homework is complicated.
Some studies show that students who spend more time on homework perform better than their peers. For example, a longitudinal study released in 2021 of more than 6,000 students in Germany, Uruguay and the Netherlands found that lower-performing students who increased the amount of time they spent on math homework performed better in math, even one year later.
Other studies, however, suggest homework has minimal effects on academic performance. A 1998 study of more than 700 U.S. students led by a researcher at Duke University found that more homework assigned in elementary grades had no significant effect on standardized test scores. The researchers did find small positive gains on class grades when they looked at both test scores and the proportion of homework students completed.
More homework was also associated with negative attitudes about school for younger children in the study.
"The best educators figured out a long time ago that we can control what we can control," and that's what happens during the school day, Superintendent Garrett said, not homework. "There has been a shift away from it naturally anyway, and I felt like this made it equitable across our entire school system.”
In math especially, students need practice
The debate over homework has swung back and forth for more than a century, and the tide of public opinion has shifted every few years. It's likely to continue changing for a simple reason: Researching homework is a challenge.
There's no good way to isolate the amount of time spent on homework and its effects on students, because it may take one student five minutes to complete the same math problem that another student spent 45 minutes on. That extra time doesn't necessarily result in the struggling student performing better than the student who grasped the assignment more quickly.
However, just like playing the violin or hitting a baseball, or any other skill that requires training, there is evidence that students need practice to master academic subjects, particularly in math.
Some experts worry the overall decrease in homework could be a problem for math achievement, at a time when math scores across the country are already at a dismal low.
"The best argument for homework is that mathematical procedures require practice, and you don't want to waste classroom time on practice, so you send that home," said Tom Loveless, a researcher and former teacher who has studied homework.
The effects of AI on homework
Generative artificial intelligence has added a new wrinkle to the homework debate, too. More than half of teens said they used chatbots to help with schoolwork, and 1 in 10 said they used virtual assistants to do all or most of their schoolwork, according to a recent survey by Pew Research Center.
A different survey of teachers by the EdWeek Research Center found that 40 percent said homework assignments had decreased over the past two years, and of those, 29 percent said it was because students' use of AI had lessened the value of homework.
Between 1996 and 2015, very few fourth graders — between 4 and 6 percent — reported being given no math homework the previous night, according to surveys from the Nation's Report Card. By 2024, that percentage was up to more than a quarter. There was a similar trend for eighth graders.
Ariel Taylor Smith, senior director of the Center for Policy and Action at the National Parents Union, a nonprofit that advocates for parents, has seen this trend in her own fourth grader's public elementary school class in Vermont, whose teacher doesn't assign homework.
"The thing they point to is that it's an equity issue, and not all parents have the same availability and ability to support their students," said Smith.
She believes, however, that students should do some homework without the help of their parents. "I would make the argument that if a kid is really far behind in school, that's an equity issue. They need the additional time to practice."
Smith said she and her mother create their own homework now for her son: reading exercises and flash cards in math. Kids, she said, "need more practice. … Sometimes, you do have to practice the boring stuff, like math."
Not everyone feels this way about homework. For Jim Malliard's two children in Franklin, Pa., adverse experiences at school became a barrier to completing homework.
"It became a fight because the kids had so much school-based anxiety from trauma and bullying at school that they didn't want to deal with school when they got home," said Malliard, whose kids attended a public high school.
Malliard, who writes about education issues and is a full-time caregiver to his wife, doesn't think his children were overburdened with homework at their school, but he also doesn't believe they were benefiting from it.
"The teachers would tell us homework only takes 15 minutes a night — sure, if a kid sits there and does it right away and is attentive and wants to do it," Malliard said. "It was getting to be an hour for us."
He eventually enrolled his children in a virtual charter school, which they attended for the rest of their K-12 schooling.
How much is enough?
Over the years, research has attempted to answer the thorny question of how much homework is appropriate, with varying degrees of success.
Education groups and researchers generally recommend 10 minutes of homework each night per grade level. But it's almost impossible to assign work that will take every student the same amount of time to complete, and research has shown there are harmful effects from too much time spent on homework.
https://www.npr.org/2026/04/28/nx-s1-5795647/should-schools-get-rid-of-homework
*
DOES IT STILL MAKE SENSE TO SEND YOUR CHILDREN TO COLLEGE?
A few weeks ago, while I was dealing with taxes, it occurred to me that the money my wife and I were putting away in a college fund for our children might be better used somewhere else. This wasn’t a novel musing, but it felt particularly pressing as I watched my account balance go down, a portion of its resources funneled into something that can’t be touched for at least the next nine years.
When my nine-year-old daughter graduates from high school, in 2035, I asked myself, will the landscape of higher education look the way that it does now? Will it still be as expensive? Do I actually need to squirrel away money for tuition, or should I just put what I have into a stable-growth account so that later I can cash it in to buy her an apartment, an iPhone, and whatever other tools she needs to deal with a world governed by our coming A.I. robot masters? (Maybe a machete and a copy of “My Side of the Mountain.”)
For the next few weeks of this column, I will dig into questions about the viability of the American university system. The pressures on higher education seem extraordinary, even to someone like me, who is generally convinced that real change is rare, perhaps especially when it comes to America’s tried-and-tested system for replicating its élites. Private and state universities have had their funding cut by the Trump Administration, professors report rampant A.I.-assisted cheating by their students, and seemingly every week brings a new report about how nearly all entry-level white-collar jobs—whether they’re in consulting, insurance, finance, management, or the sciences—will be replaced by friendly chatbots that may or may not someday destroy the world.
A recent survey found that more than one in four college students in America believe that their tuition was not a good investment, at a time when more than forty per cent of college graduates between the ages of twenty-two and twenty-seven hold a job that does not require a college degree. According to Pew, seven in ten Americans think that the “higher education system in the United States is going in the wrong direction,” with most respondents expressing concern about the high price of tuition. With all this happening, should I continue to contribute to my children’s 529s?
The short, easy, and most likely correct answer is yes—I should assume that, when my nine-year-old reaches high school, she will go through the familiar gantlet of academic competition and spend much of her time building a résumé for college-admissions committees. I should also assume that the cost of whatever college she attends will not come down during the next nine years.
The university system in America has survived worse than A.I.: pandemics, wars, campus unrest, massive open online courses, the internet. If colleges seem impervious to revolutions in information technology, is it possibly because their actual appeal has less to do with the transfer of knowledge than their administrators might want to admit?
As the economist Bryan Caplan has observed, “The main function of education is not to teach useful skills (or even useless skills), but to certify students’ employability. By and large, the reason our customers are on campus is to credibly show, or ‘signal,’ their intelligence, work ethic, and sheer conformity.”
As long as college remains a way for upwardly mobile kids to stand out from one another, and as long as employers believe that a better college degree is a sign of a better potential worker, the American university system should survive, even if teaching methods change.
*
Nonetheless, it seems a bit odd that, when it comes to predictions about our A.I. future, which typically range from friendly revolution to organ-harvesting apocalypse, declarations about higher education have been relatively mellow. Granted, many of the commentators offering these predictions are employed by traditional universities, and might tend to believe more strongly in the enduring relevance of the academy.
There are exceptions: the OpenAI C.E.O. Sam Altman has suggested that his own kid might not attend college; Howard Gardner, a psychology professor at Harvard, recently surmised that A.I. will significantly shorten the time children need to be in school. But the consensus is that college will still exist in ten or twenty or thirty years, a forecast that, for a parent of two staring down future tuition bills, is a bit disappointing.
Even some pundits who are open to A.I. as a major development agree that higher education isn’t going anywhere. Tyler Cowen, for instance, Caplan’s colleague in George Mason University’s economics department, has argued that more instruction time should be devoted to A.I. in American classrooms—and mused that A.I. might help students better understand the Odyssey—but maintains that the traditional subjects and pedagogy of higher education should largely remain intact.
Sal Khan, the founder of the free online-learning service Khan Academy, has launched a partnership with TED and the Educational Testing Service called the Khan TED Institute, which aims to provide a “world-class higher education accessible throughout the world at a radically low cost.” (Around ten thousand dollars, he says; details are a bit thin. The institute’s website is filled with a lot of pablum about opening “new pathways into the AI economy where skill-based measurement becomes the critical link between learning and livelihood.”)
But Khan doesn’t see his latest venture as a wholesale replacement for the brick-and-mortar university; he has described it as a reasonably priced alternative that can keep pace with a world that is changing “very, very fast.” (Khan also believes that tutoring, which is both effective and expensive, could ultimately be done by A.I. agents, making one-on-one instruction more accessible, though one of the parties would be a robot.)
Scott Galloway, a professor, a popular podcaster, and perhaps the most influential public voice on the value of a university education, has declared that “this narrative that A.I. is going to destroy higher education is such ridiculous bullshit.” Higher education could drastically change soon, he says, if tech giants start partnering with prestigious universities to expand their enrollment through online degrees, thereby effectively shutting down hundreds of smaller, private colleges. But those changes would be driven by supply and demand, rather than a fundamental shift in opinion about whether it’s still good to go somewhere, in person, to learn things.
I don’t believe that these thinkers are necessarily wrong to dismiss the idea that enormous changes will come to higher education during the next two decades; as long as Americans want to distinguish their children from other children, the hierarchical college system will prevail. But these defenses of higher education feel almost performatively cynical, especially for an institution that has traditionally draped itself in high-flown sentiment about the pursuit of truth and the shaping of young minds, or whatever. (The motto splashed on all the brochures for my alma mater was “The Best Four Years of Your Life.” They were not, but I recall genuinely believing that they would be.)
I also wonder if the skeptics might be overstating the power of inertia, especially at a time of extremely low public trust in all institutions, not just those of higher education. In the world of prestige media that includes The New Yorker, for example, it has long been much harder to break in without an Ivy League degree, and that remains the case; but the draw of working at a legacy-media institution has also never been weaker. Would a fifteen-year-old hellbent on a journalism career be best served by working himself to the bone both academically and extracurricularly to get into Harvard, or should he just start a Twitch stream and get to work?
Reasonable people can disagree about that. But I feel certain that most of the ambitious fifteen-year-olds who already know what they want to do these days would choose the self-made option—particularly if they come from families that can’t easily afford college tuition, let alone thousands of dollars in supplemental application prep. A.I. might not factor directly into such a decision for an aspiring reporter, but the already impressive abilities of large language models to hone research, approximate historical knowledge, and target potential sources would soften any disadvantages that this hypothetical student might suffer from skipping college.
Can college really be laid so bare and survive? Will the roughly sixty per cent of recent high-school graduates who invest in higher education still see the value of it if they come to believe—rightly or wrongly—that the whole knowledge part of college has been replaced by an agreeable chatbot? Our hypothetical ambitious fifteen-year-old is exceptional, of course, and certainly not the bellwether for today’s disaffection about higher education. Few teen-agers know what they want to do in life, and it’s not always good for kids of that age to limit their choices.
What I find concerning, however, is that so many other white-collar industries and professions—finance, consulting, the law—are even more institutional in their thinking than the media is. They, too, are held in low esteem by the public, and that decline in trust has frayed the traditional line of thinking that you should join one storied institution, a university, to later work at another. If we agree that college primarily serves a credentialing process that stamps select young people as worthy of work, and, if we agree that A.I. helps to expose it as such, might we not conclude that, at some point, people will collectively stop paying into the system, or will start seeking out other, less expensive credentials?
I do not think that A.I. will singlehandedly destroy college. But I do think that it will accelerate an already growing disillusionment with higher education.
In 2013, seventy-four per cent of eighteen-to-thirty-four-year-olds polled by Gallup said that a college education was “very important.” By 2019, three years before the public adoption of ChatGPT, that number had dropped to forty-three per cent; it fell again, in 2025, to thirty-five per cent, a decline that represented the steepest drop among all age groups that were surveyed. This drop might level off at some point, simply because most things regress to previous norms. But I cannot come up with any reason why the trend would reverse direction without radical changes to cost and access at the types of élite colleges that facilitate class mobility.
What seems likely is a winner-takes-all scenario, in which the élite schools and flagship state universities survive on account of their cultural, financial, and reputational advantages, while other schools die out, leading to either a huge expansion in enrollment among the survivors (unlikely) or a steady drop in the number of young people who seek out a four-year degree. That may be a good outcome, but the gospel that I grew up with—the idea that everyone should get a college education not only for upward mobility but also to explore reading, thinking, and writing for their own sake—will be dead.
The future of college as we know it may rely on the ability of people who have a stake in the credentialing economy to convince the youth that there is still value in classroom instruction, in writing papers without A.I. assistance, in talking to imperfect humans about misshapen ideas. But they will be making this case to a generation of students who learned many things—skateboarding, the piano, cooking—from YouTube, and who have been able to ask Claude to assist them in every academic endeavor they’ve undertaken. [Oriana: This reminds me of a storm over the introduction of the first simple calculators.]
Who will be the most receptive audience for this sales pitch? Probably those who trust institutions the most, and who can sacrifice some efficiency for an outdated but fancy stamp of approval—in other words, the children of the wealthy and educated. But, when you consider that the vast majority of students at élite private colleges—which is to say, this same group—already use A.I. in nearly all aspects of their academic lives, it can seem as though this fight has already been lost.
College will still exist as a place—or, at least, as a website or app—that employers will use to distinguish one applicant from another. But will it still look the way it does today, with thousands of campuses around the country, of varying reputation, quality, financial health, and philosophical missions? For now, we have only the questions.
https://www.newyorker.com/news/fault-lines/will-ai-make-college-obsolete?utm_source=firefox-newtab-en-us
*
HOW NATURE HEALS BOMB CRATERS
In February 1945, towards the end of the second world war, a German V2 rocket struck Walthamstow Marshes in east London. The explosion tore a crater into the marshland. Left untouched, it slowly filled with water, sediment … and life. Today, this wartime scar has become a thriving pond.
[Photo caption: A drone’s view of the Irpin valley, where abandoned positions, cratered ground and standing water mark the frontline of the 2022 battle for Kyiv]
“It’s small but it really punches above its weight,” says Luke Boyle, a ranger for the Lee Valley Regional Park Authority, as he kneels at the edge to examine aquatic plants sprouting their early spring shoots. “We can’t manage the hydrology here, so it is actually a vital part of the ecosystem – it supports a range of plants, insects and amphibians, more than you might expect,” he says.
Walthamstow’s Bomb Crater Pond lies within a fenced-off section of the marshes, protected as part of a site of special scientific interest. Its clear waters provide a year-round refuge for wildlife in an otherwise highly managed urban landscape. More than a million people visit Walthamstow Marshes each year, most unaware that the modest pond near the perimeter fence began its existence due to an instrument of destruction.
“It’s like an engine room for the marshes,” Boyle says. “Despite its size, it supports the wider ecosystem around it.”
Unlike most wetlands, there are no sluice gates here, no managed hydrology – the pond’s natural depth means it holds water all year round, reliable and clean. The cattle drink from it too, and their hooves disturb the ground around the margins, creating a patchwork of habitats that allow different species to take hold.
“In winter, you get snipe and lapwing out on the wetlands,” Boyle says, scanning the margins of the pond. “Then, by late spring, this becomes a breeding ground: newts and grass snakes. The dragonflies and mayflies come, butterflies along the edges.” Frogs and herons are regulars.
“Last year was good nationally for butterflies,” he adds. “But, here, it’s like that every year.”
Among the pond’s residents is creeping marshwort, one of Britain’s rarest aquatic plants, recorded at only two sites in the UK. It is too early in the season to see it, but Boyle knows it is there, somewhere beneath the surface, waiting. “We watch that carefully,” he says. “It’s about keeping the balance between open water and the vegetation that wants to take over.” Under a countryside stewardship agreement overseen by Natural England, he is required to maintain at least 80% open water, pulling out reedmace by hand when it encroaches.
It is, he adds, his favorite spot on the marsh. As he speaks, the Stansted Express flashes past on the elevated line beyond the fence. A grey heron circles above as if signaling it would like to land.
The power of small ponds
Small ponds have been one of ecology’s most underestimated assets. “Historically, because they are small, ponds have been dismissed as insignificant,” says Prof Jeremy Biggs, CEO of Freshwater Habitats Trust. “In fact, the evidence shows the opposite: across a landscape, ponds support a wider range of freshwater plants and animals – including more rare and protected species – than other freshwater habitats, such as big rivers or lakes.”
Part of the reason is counterintuitive. Bigger water bodies attract bigger problems. Rivers accumulate diffuse pollution from the land they drain. Lakes receive runoff from vast catchments. Ponds, by contrast, are small enough to avoid the systems that damage larger waters. Nobody routes a sewage outflow into a pond – it’s too small to dilute it.
“Ponds are powerful precisely because they are small,” Biggs says. “Many still hold clean water – something now increasingly rare in the wider countryside.” That, he adds, is why creating and protecting clean-water ponds is such an effective way to support freshwater biodiversity. They are also astonishingly varied – acidic or alkaline, shaded or open, grazed or undisturbed – and create a mosaic of conditions across any landscape that no single large water body could replicate.
“Darwin famously proposed that life began in a ‘warm little pond’, and freshwater species have spent millions of years evolving to colonize exactly these kinds of small, still waters,” says Biggs. “While it takes many decades or even centuries for most types of habitats to become established, wildlife arrives at new ponds almost immediately, and these small waters can become ecologically rich within just a few years.”
Evidence from bomb craters makes this vivid. Biggs says: “At Tommelen in Belgium, 144 ponds – many created unintentionally by second world war bombs – now form a nature reserve supporting seven amphibian species, including great crested newts and tree frogs.
“At Ashley Range in the New Forest, craters left by bouncing bomb tests, including one made by a 10,000kg bomb, have filled with water to create a network of ponds in heathland. These examples demonstrate how quickly ponds can become colonized. If the ponds are clean and in the right place, they can become rich habitats for a wide range of species.”
Every crater tells a different story [UKRAINE]
Some 1,500 miles (2,400km) east of Walthamstow, the ground tells a different story. Since Russia’s invasion of Ukraine in February 2022, the scale of destruction written into the landscape defies comprehension. Satellite imagery studies have identified more than 600,000 craters in just two southern regions (Mykolaiv and Kherson) alone. Across the country, researchers estimate the number now runs into millions.
The human cost has been devastating: tens of thousands killed, millions displaced, entire cities reduced to rubble. For Anastasiia Splodytel, a Ukrainian soil scientist who has worked across the country’s devastated regions, each crater marks a moment of violence.
What happens to the land afterwards is what she has spent years trying to understand. The answer, she says, is never simple. “It all depends on who is standing at the edge: their professional background, how they perceive what they see and a range of natural factors: soil type, the landscape-geochemical structure and the type of weapon used.”
A bomb does not simply tear a hole. The shock wave disrupts the layered structure of soil built up over centuries, deeply embeds metal fragments and recasts the microrelief.
Some of the most extreme cases that Splodytel has encountered involve phosphorus munitions. White phosphorus burns at up to 2,760C – hot enough that fertile black soil turns into cracked stone.
Ukraine’s chernozem, its famous black earth that is among the most productive agricultural soil on the planet, can be reduced in seconds to something inert. The thermal shock kills what lies beneath: the bacteria, fungi and microorganisms that drive fertility and regulate moisture.
Heavy metal contamination in individual craters does not always reach catastrophic levels, often rising only modestly above the natural background before stabilizing. But Splodytel is careful about what reassurance that offers. “The main threat is not toxicity, but the loss of soil fertility – disruption of soil structure and reduction in microbial populations.”
Explosive compounds in the soil have barely been studied, and for communities living nearby, invisible contamination is easily underestimated. “It is precisely this underestimation that presents a significant threat,” she says.
Yet there is hope for even the most damaged soils Splodytel has examined. “I was pleasantly surprised by how quickly nature begins to heal itself without waiting for human intervention.” In shallower craters, she has observed accelerated pedogenesis – the rapid reformation of soil structure, new layers beginning to build almost before the dust has settled. Something, quietly, is trying to return.
For Bohdan Prots, a Ukrainian ecologist who has spent decades studying his country’s natural landscapes, there are few straightforward answers. His starting point is pragmatic. In up to 90% of cases, the right course of action is simply to fill a crater with clean soil and return the land to use. “It is not as big a deal as people think when they talk about craters causing huge pollution. Not when you are looking at a single explosion.”
Scale changes everything. In the heavily bombarded zones of eastern Ukraine, where explosions have occurred every few meters, the cumulative effect is catastrophic, with entire forests reduced to bare earth, soil overturned to depths of several meters and the layered structure that took centuries to form pulverized beyond function.
And then there are the mines. “The problem is not really the crater,” Prots says. “The problem is unexploded devices.”
He describes a colleague’s father in the Kherson region who, on just half an acre of land, discovered 36 unexploded devices over three years, even after military deminers had certified the area clear multiple times. There is, he notes, a well-known rule: one year of war equals an average of 10 years of demining. Ukraine, now four years in, already faces 40 years of that work ahead.
Yet he keeps coming back to something he has observed across a lifetime in the field. Growing up in western Ukraine, decades after the Second World War, he found old bomb craters in the forest – depressions slowly silting over, but still functioning as miniature reservoirs. “They were already not as deep as before,” he recalls, but were “still a good resource for animals to drink and frogs to breed”.
He pauses over the observation for a moment. “Diversity of habitats, even created by war, leads to species diversity.”
Nature’s return is not always straightforward. The first species to arrive in damaged habitats are often pioneer or invasive ones, filling the vacuum before native plants and animals can re-establish. “People see green and think it is fine,” Prots says. “But green is not always beneficial.”
Each landscape must be read on its own terms, he says, whether that is wetland, forest, farmland or urban ground. Every crater carries a different story, depending on the soil, location, power of the explosion, intensity of the battle and whether clean water is present.
When asked about the postwar future – the tens of thousands of water-filled craters that will remain across Ukraine once the fighting ends – Prots’s answer is measured but not without hope.
“When it is in the forest,” he says, “it becomes a biodiversity hotspot. At the bottom of these explosions, which go three to eight meters deep, there will always be water in the end, even in very dry periods. It is like establishing a small wetland.”
He pauses. “And that is a place where animals are always eager to come.”
https://www.theguardian.com/environment/2026/may/06/london-to-ukraine-how-nature-is-thriving-in-bomb-craters?utm_source=firefox-newtab-en-us
*
HAVING MORE CHILDREN THAN DESIRED MAKES PARENTS UNHAPPY
The question of whether having children makes parents happy is not easy to answer.
There is no single number of children that makes a parent maximally happy. One essential factor that has often been neglected in psychological research on having children and psychological well-being in parents is fertility desires. For someone with absolutely no desire to have any children, not having children may not lead to any negative feelings. In contrast, someone who has a strong wish to have children may be emotionally devastated if they are not able to conceive. Thus, to truly understand the relationship between having children and psychological well-being in parents, psychological studies should assess fertility desire.
A new study on how a mismatch between the desired and the actual number of children affects well-being
A new study entitled “How a Mismatch Between Actual and Desired Fertility Relates to Well-Being Across Adulthood”, just published in the Journal of Personality, investigated how the difference between the desired and the actual number of children affects parents’ psychological well-being (Buchinger and co-workers, 2026). The research team, led by Laura Buchinger from the University of Berlin, analyzed data from the so-called German Socio-Economic Panel Study.
Overall, data from more than 23,000 volunteers were included in the dataset. All volunteers answered the question “How many children would you ideally have?” and indicated how many children they actually had. Based on the answers to these two questions, the scientists divided the overall dataset into five distinct groups:
People who had chosen to be childfree
People who were involuntarily childfree
Parents who had exactly the number of children they wanted
Parents who had children, but fewer than they had desired
Parents who had children, but more than they had desired
All volunteers indicated how satisfied they were with their lives overall and how satisfied they were with several domains of their lives, such as work life and family life. Moreover, several additional pieces of information about the volunteers were gathered, such as the region they came from, the quality of childcare in their region, and their religion.
Results of the study
Only one group showed a strong decline in psychological well-being compared to the other groups.
Across all groups, the average actual number of children was 1.56, and the average ideal number of children was 2.35.
Overall, four of the five groups showed similar psychological well-being: both childfree groups, parents who had exactly as many children as they had desired, and parents who had children, but fewer than they wanted.
Only one group had substantially lower psychological well-being than the other four across all age groups: parents who had more children than they had desired to have.
People who were involuntarily childfree showed an age-dependent effect. While younger people who were involuntarily childfree showed no impairment in psychological well-being, older people who remained involuntarily childless showed less life satisfaction than the other groups.
Religion, social norms, and childcare infrastructure did not have any substantial effects on the results.
Takeaway:
In a study with more than 23,000 volunteers, the scientists showed that people who had more children than they had desired had lower psychological well-being than the other groups.
While the scientists could not conclusively answer the question of why this is the case, they discuss that financial constraints and a loss of autonomy may be factors linked to lower well-being in parents who have more children than they had desired. Also, older people who were childless despite wanting children showed lower psychological well-being. These results show that fertility desires are a crucial factor for understanding whether having children makes a couple happy.
https://www.psychologytoday.com/us/blog/the-asymmetric-brain/202604/having-more-children-than-desired-makes-parents-unhappy
*
HOW GENES AND ENVIRONMENT SHAPE OUR PERSONALITIES
Laurie Clarke delves into the devilishly complex forces that shape our personalities – and the new research revealing ever more about how our genes do, and don't, make us who we are.
In 2009, Abdelmalek Bayout faced a nine-year prison sentence in Trieste, Italy, for stabbing and killing a man who had mocked him in the street. Aiming to reduce the sentence, his lawyer made an unusual legal argument.
His client's DNA, he said, indicated the presence of the "warrior gene", a mutation that decades of scientific research had tied to aggressive behavior. Because of this, the argument went, he couldn't be held fully accountable for his actions. The appeal was successful: a year was sheared off Bayout's sentence.
From the 1990s, evidence had accumulated of some kind of link between violent behavior and a variant of a gene called monoamine oxidase A, or MAOA. By 2004, it had earned the media-friendly moniker of the "warrior" gene.
Since then, however, our understanding of how genes influence traits and behaviors has deepened significantly. "Initially, people thought that behaviors were influenced by a few genes with very large effects," says Aysu Okbay, assistant professor of psychiatry and complex trait genetics at Amsterdam UMC in the Netherlands. "That has been completely debunked."
Instead, over the past 15 years, a far more nuanced picture has emerged. Even traits thought to be highly heritable, like height, have proven far more complicated to isolate on the genome than once assumed.
Now, though, new methods for large-scale genetics studies are beginning to widen the picture. By revealing ever more about how our genes do – and don't – make us the people we are, they are yielding new insights into the devilishly complex forces that shape human nature.
The age-old question
People have long been fascinated by the extent to which our temperament and the trajectory of our lives are set at birth. Still, the origins of “personality”, the relatively stable pattern of thoughts, feelings and attitudes that make up an individual, have proved difficult to pin down.
The question of "nature or nurture" was popularized in its current sense by English polymath (and the founder of eugenics) Francis Galton, who in 1875 helped pioneer a way of studying traits in twins. But his methods were rudimentary, and it wasn't until the 1920s that scientists began comparing the similarity of identical twins, who share 100% of their DNA, with fraternal twins, who share only 50%.
Twin studies have been popular ever since. Today, scientists have converged on the idea that personality consists of five broad dimensions: openness, conscientiousness, extraversion, agreeableness and neuroticism (often called the Big Five personality traits). And many twin studies have now examined whether these personality dimensions are passed down genetically.
A comprehensive 2015 meta-analysis of more than 2,500 twin studies published between 1958 and 2012, covering almost 18,000 complex human traits, found (unsurprisingly) that identical twins are typically more similar than fraternal twins. But their personalities are by no means identical.
For the 568 traits that were descriptions of temperament or personality, the study found that 47% of differences could be attributed to genetic differences. The remaining portion, it concluded, must be accounted for by environmental influences. Other studies seem to support this – only around 40-50% of personality differences are genetic.
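A note on where a number like 47% comes from: one classic back-of-the-envelope method, Falconer’s formula (not necessarily the exact model used in this meta-analysis), doubles the gap between the identical-twin and fraternal-twin correlations for a trait:
h² ≈ 2 × (r_identical − r_fraternal)
With purely illustrative figures, if identical twins correlate at 0.60 on a trait and fraternal twins at 0.365, the estimate is 2 × (0.60 − 0.365) = 0.47, that is, about 47% of the differences attributed to genes.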
In 1979, American psychologist Thomas Bouchard set out to track down twins separated in infancy. He found that identical twins raised apart were often strikingly similar.
Most famously, Bouchard came across identical twins called Jim who had been separated at birth and reunited at age 39. "The twins were found to have married women named Linda, divorced, and married the second time to women named Betty," he wrote in a 1990 study. "One named his son James Allan, the other named his son James Alan, and both named their pet dogs Toy."
Critics, however, have argued that Bouchard's studies contained methodological flaws, and noted that such coincidences could easily occur between unrelated persons, if one drew from enough data.
Twin studies have always been an inexact art, often relying on estimates based on the differences between twins and other family members. But around 2010, huge strides in genetics began opening up other exciting new avenues to scientists interested in measuring personality differences.
The missing heritability problem
The human genome is an unwieldy beast: there are 23 pairs of chromosomes, containing around 20,000 genes between them. These are further subdivided into about three billion “base pairs” – the smallest unit in the genome – which are typically conceptualized as pairs of letters that unfurl in a particular sequence.
All humans share 99.9% of their DNA, meaning only a minuscule 0.1% of the genome accounts for our differences. While this helpfully limits the surface area that scientists need to examine, it still leaves several million base pairs to rake through. Despite the 2000s yielding cheaper and more easily accessible genomic data, locating the source of our differences within it has proved far trickier than once expected.
The past 15 years, though, have seen an explosion of genome-wide association studies – a method that examines millions of the smallest parts of the genome that can vary among humans, and tries to find associations between these and different personality traits.
The early days of these studies struggled to consistently identify DNA variants related to personality. We now understand one reason for this: human traits are “polygenic”, with many different genetic variants each contributing a tiny effect, and those effects add up across the whole genome. For complex traits like personality, the effects could be spread across thousands of DNA variants.
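To make “tiny effects that add up across the whole genome” concrete, here is a minimal sketch in Python of how a so-called polygenic score is computed, using entirely made-up numbers rather than data from any real study: each variant is assigned a small effect size, each person carries 0, 1 or 2 copies of the associated allele, and the score is simply the effect-weighted sum.

import random

random.seed(0)
n_variants = 10_000

# Each variant's effect on the trait is tiny (illustrative values centered on zero).
effects = [random.gauss(0, 0.01) for _ in range(n_variants)]

# Each person carries 0, 1 or 2 copies of the trait-associated allele at each variant.
genotype = [random.choice([0, 1, 2]) for _ in range(n_variants)]

# The polygenic score is the effect-weighted sum across all variants.
polygenic_score = sum(e * g for e, g in zip(effects, genotype))
print(f"Polygenic score summed over {n_variants} variants: {polygenic_score:.3f}")

No single variant moves the total by much; only the sum across thousands of them does, which is one reason such enormous samples are needed to detect each individual effect.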
But even when combining a range of different DNA variants, the effects on personality remain smaller than anticipated. Heritability estimates currently span from 9% to 18% for Big Five personality traits, far below the 40% suggested by twin studies. What explains this "missing heritability"?
Perhaps stronger genetic effects will be discovered as these studies recruit more participants and improve their designs, and as our understanding of how different genes interact grows.
Today, though, when comparing the heritability estimates from twin and genome-wide association studies, it’s hard to know which is true, says Okbay. "It's probably somewhere between the two."
What about 'nurture'?
If it's possible that "nature" contributes less than we once thought, it might be tempting to attribute more of our personality to "nurture": the circumstances we grew up in, the people who surround us, the life events that shape our unique histories. It turns out, though, that understanding how our environment shapes our personality is just as complex.
Since studies show that personalities can change over time, you might assume that winning the lottery or losing a leg might trigger a transformation. But it turns out that one-off major life events only have a negligible impact on who we are. Factors like how we are raised or our social interactions also account for only a small portion of personality differences, studies repeatedly find. And while marriage might make one slightly less open, and childbirth may marginally reduce extraversion, taken individually, these events don't dictate much of who we become.
We now know personality differences are polygenic and poly-environmental: many genes and many small life experiences combine to create who we are.
Exposure to certain kinds of trauma during childhood has been found to predict psychopathology and poorer cognitive functioning in later life, which can manifest in personality variables such as increased neuroticism. But adversity experienced as an adult seems to be far less consequential.
“That’s been the big surprise in this research area… that if a big traumatic life event happens to you in adulthood, it doesn’t leave this huge trace,” says Brent Roberts, professor of psychology at the University of Illinois at Urbana-Champaign, US.
The trauma narrative is beloved by popular culture – the idea that we experience personal growth as a result of the bad things that happen to us. But "trauma doesn't make you who you are", says Roberts.
What about the first environment we ever experience, floating in the amniotic sac? A growing body of research suggests that mothers experiencing stress during pregnancy could impact the temperament of their unborn child – part of a hypothesized phenomenon called “fetal programming”.
For example, a 2022 study found that mothers who experienced greater fluctuations in stress had infants who expressed more fear, sadness and distress at three months old. There is not yet a clear understanding of why this happens, though an epigenetic mechanism – meaning changes in the gene expression rather than the DNA itself – is one of the candidates under consideration.
Overall though, researchers have concluded that in addition to being polygenic, personality differences are "poly-environmental". Like the many DNA variants across the genome that add up to a given trait, each of our life experiences exerts a small effect, which together combine to have a greater impact.
Genetic and environmental impacts also interact in ways we haven't yet fully grasped. For one, the environment appears to be able to activate or switch off certain genetic predispositions. "Genetic predisposition does not mean that in every environment, people behave in the same manner," says Jana Instinske, research assistant in the department of psychology at Bielefeld University in Germany.
A way through
These are incredibly knotty problems, but, at least on the genetics front, scientists claim to be making breakthroughs with the latest genome-wide association studies. The key? Hugely increasing the number of participants – with the latest analyzing hundreds of thousands or even millions of people's genetic data at once.
"It's only now that we have sufficiently many individuals and genotype samples," says Okbay. "With this many small effects, you need really, really large samples to be able to detect them."
Studies conducted in the past decade have turned up hundreds of DNA variants associated with each of the Big Five personality traits. "A lot of the focus right now is on getting [the genomes of] more and more people, so we can discover more and more genes and build on what others have done before," says Daniel Levey, assistant professor of psychiatry at Yale University in the US.
More studies of people with non-European ancestry are needed, however, adds Levey. "There are going to be very important cultural differences that we're missing out on by being laser-focused on one group," he says.
We are still far from understanding exactly what the tiny permutations across the many pages of our genetic code tell us about how personalities take shape. But some interesting findings are already emerging.
Levey's study, for example, suggested that CRHR1, a gene related to the regulation of the body's stress response, is strongly linked to neuroticism in nervous system tissues. This gene has previously been linked to psychiatric illnesses including depression, anxiety and OCD, all of which are also associated with neuroticism. It suggests that this personality trait is closely tied to how the body naturally responds to stress.
Another highly anticipated study currently being peer-reviewed provides evidence for theories situating the seat of personality in the prefrontal cortex – the area of the brain responsible for complex functions like planning and decision-making. It finds that associations for all Big Five traits (except agreeableness) are enriched in genes expressed in this part of the brain.
Interestingly, the study says, since dopaminergic neurons were not "among the most enriched neuron types", it could present a challenge to neurobiological theories of personality that posit an outsized role for dopamine in mediating extraversion and openness.
Many caveats and unknowns remain, even for the most studied areas of behavioral genetics such as the connections between violence and the so-called warrior gene. Studies indicate that in some groups of males, both the presence of certain moderator genes and certain environmental risk factors (such as an abusive upbringing) could increase the potential for violent behavior in certain scenarios. But the results are far from clear cut.
So far, efforts to boil human behavior down to a handful of genes or life events have failed. It turns out that humans are just far more complex.
What emerges above all is the mutability of the human condition, says Instinske. "It's not that if you have a certain genetic predisposition, you will always, throughout your entire life, behave in a certain way."
https://www.bbc.com/future/article/20260501-nature-vs-nurture-how-much-of-our-personalities-are-determined-at-birth
*
MICRO-FLUCTUATIONS IN THE TIME BETWEEN HEARTBEATS
Until, that is, Kirillov began paying attention to heart rate variability, one of the many health data points tracked by his smartwatch. A more complex metric than heart rate – the number of times the heart beats per minute – heart rate variability reflects how the time between heartbeats fluctuates. A growing body of research suggests it’s an indicator of cardiovascular health, stress levels, exercise capacity and more, allowing dedicated trackers to make more informed decisions about their fitness regimes and lifestyles.
Now, if Kirillov is on the fence about whether it's better to take a day off or grind it out in the gym, he consults his heart rate variability score. Since adopting that habit, "I feel like I'm in better balance with myself", he says. He's such a convert, he even launched an app dedicated to tracking stress using heart rate variability data.
As wearables become ever-more ubiquitous and research on heart variability accumulates, more people are joining Kirillov in keeping this score, says Deepak Bhatt, director of the Mount Sinai Fuster Heart Hospital in New York City, US.
But do you actually need to track heart rate variability, and what can you learn if you do?
WHAT IS HEART RATE VARIABILITY?
"You want a heart to beat more or less regularly," Bhatt says. When the heart beats extremely irregularly, it's classified as an arrhythmia, which in serious cases can result in complications such as stroke or heart failure.
But even a healthy heart has some variation in the time between its beats, Bhatt says. These variations are tiny, measured on the order of milliseconds (one millisecond is a thousandth of a second). And when looking at changes on this scale, "a higher variability, in general, is considered better" than a lower one, Bhatt says.
There's no single ideal heart rate variability score, as it varies by age, fitness level, sex, tracking device and calculation method. But one wearable brand says the average score for its users, who tend to be active and health-conscious, is 65 milliseconds for men and 62 milliseconds for women. And there's huge variation by age group: the average score for 25-year-olds is 78, compared to 44 for 55-year-olds.
While the maths behind these numbers is complex, you can think of them as approximations of the average fluctuation in the time intervals between heartbeats in milliseconds.
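As a rough illustration of that arithmetic, here is a minimal sketch in Python of one common heart rate variability metric, the RMSSD (root mean square of successive differences between beats), using made-up beat intervals rather than any particular wearable’s data or proprietary algorithm:

import math

# Hypothetical time intervals between consecutive heartbeats, in milliseconds.
rr_intervals_ms = [812, 845, 790, 860, 828, 841, 805, 833]

# Differences between each beat-to-beat interval and the next one.
diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]

# Root mean square of those successive differences, in milliseconds.
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
print(f"RMSSD: {rmssd:.1f} ms")  # larger values mean more beat-to-beat variability

Devices and apps use different windows and formulas, so scores are not directly comparable between trackers, which is one more reason to watch your own trend rather than any single reading.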
Shooting for a high heart rate variability may seem counterintuitive, since a low resting heart rate suggests someone has good cardiovascular fitness. High heart rate variability, though, is a way to measure how well your nervous system is cycling between its "fight-or-flight" stress response and its "rest-and-digest" relaxation response.
Here's how it works. If you need to outrun a predator – or just go out for a jog – your nervous system triggers a range of physiologic responses that give you energy and acuity. Among other effects, your heart rate rises. When it does, your heart rate variability drops because the heart has to keep beating at a fast and steady pace to sustain you.
[Image caption: Breathwork can help to regulate your heart rate variability, with some experts recommending that people set aside time every day to practice it.]
When you’re back at rest, the nervous system should calm everything back down. In this relaxed state, your heart rate naturally beats at a more variable pace – for example, speeding up a little when you inhale, then slowing down when you exhale.
A high average heart rate variability "shows that your system can, when it needs to, quickly change heart rate and blood pressure to match the environment or to match the circumstances", says Dennis Larsson, a postdoctoral research fellow at Kiel University in Germany who has studied heart rate variability. This suggests it can spring into action when something is stressful but relax again when something doesn't have to be stressful.
Other research has found that people with conditions including post-traumatic stress disorder, dementia and schizophrenia often have lower-than-normal scores. In some cases, when people with psychiatric diagnoses receive treatments such as psychotherapy or transcranial magnetic stimulation, their heart rate variability subsequently improves, suggesting the nervous system is working better, according to a 2025 research review. (Other studies, however, have shown that psychiatric treatments, such as certain antidepressant medications, can lower heart rate variability as part of their broader effects on the nervous system.)
A low heart rate variability, on the other hand, suggests you're getting stuck in one state – most commonly, that stressed-out fight-or-flight mode. Modern life, after all, is full of stressors that can rev up the nervous system, from traffic jams to work deadlines.
Consider an automated temperature control system in a building. Ideally, the system should adjust to small variations in outdoor climate to keep you comfortable inside. If the system gets stuck at one temperature – blasting at high heat even on an unseasonably warm spring day, say – that's not a good thing. You'll be left sweltering (and tempted to call the repairman). Your body isn't so different. When your system is in proper balance, it should be highly responsive to different internal and external cues.
What heart rate variability says about your health
Cardiologists use heart rate variability, along with other metrics, to assess how well your heart is working and look for warning signs of disease. Bhatt's research, for example, suggests heart rate variability data can help identify atrial fibrillation, a potentially serious form of arrhythmia.
Some athletes also use their heart rate variability score to assess how well their body is recovering from strenuous physical efforts. Ideally, heart rate variability should dip during a hard workout, then rise again afterwards. If it stays depressed for days after a gym session, that suggests the body needs extra rest to get back to full strength.
Because it reflects stress and nervous system health, heart rate variability also seems to be a strong indicator of mental health. A 2023 research review found that, across most studies, heart rate variability tends to be lower among people with anxiety and depression, compared to people without these diagnoses. Someone with clinical anxiety is in "a continuous state of stress or duress," says Larsson. "There, you see a continuously reduced level of heart rate variability," signaling that their body is stuck in fight-or-flight mode.
These conclusions must be taken with caution, however. Many studies of heart rate variability and mental health are small, unreplicated and subject to a common problem in the field: there are lots of ways to measure heart rate variability – monitoring people for five minutes compared to a full day, say. Some devices are also more accurate than others, which makes it hard to standardize findings.
Could heart rate variability be a treatment target?
Some researchers think that purposely manipulating heart rate variability could be an effective way to treat various mental and physical health conditions.
Breathwork is perhaps the most accessible way to regulate your heart rate variability, because the heart naturally speeds up and slows down in time with your inhalations and exhalations, says Tim Herzog, a licensed clinical psychologist in Virginia, US, who is also a certified biofeedback practitioner. Herzog recommends that people set aside about 20 minutes, twice a day, to practice slow, mindful breathing – like inhaling for four seconds, then exhaling for six.
More research is needed, though. There are different ways of practicing breathwork besides this, and experts need to work out which is best. Still, some studies suggest it's a promising path to follow. Researchers have found that when people with mental health conditions, including PTSD and depression, practice structured breathing meant to boost their heart rate variability scores, their mental health symptoms tend to decrease.
Other studies outside the mental health realm – albeit mostly small and preliminary ones – have shown that such breathwork programs can result in better sleep, lower blood pressure and lessened chronic pain.
Not all scientists are convinced that heart rate variability needs to be purposely altered, though.
Larsson considers heart rate variability "a metric to look at what the underlying conditions are", but not something that needs to be directly treated.
Bhatt agrees. Heart rate variability often improves when people start adopting healthy behaviors, such as exercising or getting consistent sleep, but "it's a chicken-and-egg sort of thing", he says. "Is the heart rate variability improving, per se, what's important? Or is it what led to it improving?"
How should you track heart rate variability?
Lots of consumer wearables track heart rate variability. But some are dramatically more accurate than others, cautions Karin Steere, an associate professor at the University of Puget Sound in Washington, US. Her research suggests devices that fasten around the chest do a better job than more common styles worn around the wrist.
No matter what kind of wearable you use, she says, remember that heart rate variability is most useful when assessed over time, not at a single moment. Heart rate variability is supposed to change throughout the day. When you're out for a run, your heart rate variability will naturally look different from what it does at rest. So, looking at a single score will tell you less than watching how it changes over time.
"Every morning, take your HRV, see what that looks like, and then think about what just preceded that," Steere recommends. "Did I have a really good night's sleep? Did I have a couple glasses of wine the night before?"
Over time, as you get a sense of your baseline and how your heart rate variability changes depending on your health and behavior, you can use that data to help you make decisions and track progress, Herzog says.
Maybe you were already feeling sluggish and your heart rate variability data drives home that your system is overtaxed and needs rest. Or perhaps you've just started a new exercise regimen but aren't seeing physical results yet. Since heart rate variability tends to improve with exercise, a higher score could encourage you that your gym sessions really are working. Tracking it, if anything, "ends up really enhancing subjective awareness", Herzog says.
And if all of this sounds overwhelming? Don't let your heart skip a beat over it, Bhatt says.
Heart rate variability can be helpful or interesting for people who are highly motivated to track their health data.
But, in Bhatt's view, there are plenty of metrics that are easier to understand – and probably more important – than variability, such as heart rate, blood pressure, weight, waist circumference and cholesterol levels. "Every adult should know those numbers," Bhatt says, "and most people aren't even doing that".
https://www.bbc.com/future/article/20260506-heart-rate-variability-can-say-a-lot-about-your-health-and-heres-how-to-track-it
*
SIX CANCERS RISING FASTER IN YOUNGER ADULTS THAN IN OLDER ONES
Six cancer types are rising faster in younger adults than in those who are older in at least five countries, a new study of global cancer incidence shows, and two types — colorectal and uterine — are becoming both more common and more deadly among the young.
The massive study combed through data from two large cancer databases to better understand the recent rise of cases in adults under 50, a trend that belies the traditional understanding of the disease as one that disproportionately affects the elderly. Though still relatively rare among those in middle age and younger, the rising incidence of several cancer types in that cohort has raised concern among experts.
The work, published in November, painted a disturbing yet complex picture that varies globally according to cancer type, sex, and national context. The study examined cases that occurred between 2000 and 2017 and found 13 cancers on the rise in those under 50 in at least 10 countries, and six cancers — colorectal, cervical, pancreatic, prostate, kidney and multiple myeloma — rising faster in younger adults than in older adults in at least five.
The trends of both higher incidence and mortality in those under 50 occurred in fewer countries — in five for uterine cancer and, for colorectal cancer, three nations for females and five for males.
Colorectal cancer, particularly in North America, Europe and Oceania, drew particular attention from the authors, who said that 10 percent of global cases already occur in those under 50.
They cited estimates that, by 2030, colorectal cancer incidence in those ages 20 to 34 will rise by 90 percent, and, in those ages 35 to 49, by 46 percent.
The news is better, however, for late-onset colorectal cancer, with several countries showing incidence declining, likely in part due to screening programs that target older adults and detect precancerous growths early.
https://news.harvard.edu/gazette/story/2026/02/six-cancers-rising-faster-in-younger-adults-than-older-ones/
*
WHY SCARS NEVER DISAPPEAR
I am a clumsy guy. If there are sharp corners nearby, I’ll bash into them. If there’s a surface underfoot with even a light sheen of polish, I’ll take a tumble. You don’t need to take my word for it. A quick look at my knees, which have become knitted with a patchwork of small scars, tells the story.
I can trace some of these marks back years, and have accepted that they will be on my body for life. But what gives? Why don’t our bodies remove old scars? The answer goes to the heart of how our bodies have adapted to protect us.
Why do some injuries not cause scarring?
“The skin is our protection against the external environment,” says Dr. Corey Maas, an associate clinical professor at the University of California, San Francisco and founder of the Maas Clinic. “It’s a remarkable organ. It’s very important that its integrity be maintained.”
The skin consists of three layers. From outermost to innermost, these are the epidermis, dermis, and fat layer or hypodermis.
After our skin is damaged, a cascade of biological processes fires up. If an injury only damages the epidermis, the wound will typically heal without scarring.
But if the injury goes deeper, a scar will form. All scars, big or small, are “designed to repair the skin and restore to you all the continuity and the protective mechanisms that the skin exhibits for the entire body,” says Maas. In other words, our body’s priority is to get the skin strong enough to repel invading microbes—not make it look pristine.
How do scars form?
There are several stages involved in scar formation. The body first forms a blood clot to prevent bleeding, which then dries into a scab.
The immune system then sends specialized cells into the clot to beat back any microbes that may have snuck their way in through the wound. To do so, these cells release specific chemicals (called cytokines), which help prevent infection and send out a loudspeaker message to the body that it’s time for a cleanup in the skin aisle.
In response, more specialized cells in the skin called fibroblasts kick into action. These cells start releasing a type of biological scaffolding, known as the extracellular matrix, made up of molecules like long, fibrous proteins such as collagen. These tough proteins increase the scar tissue’s strength.
While a wound might close quickly, the full process of restoring the skin’s layers can take months or even years.
Can you have too much scarring?
A fully formed scar is made of tough, dense bundles of collagen and other connective tissue, with no sweat glands or hair follicles. This messy mix of hardened tissue isn’t like other skin. There are fewer cells to be renewed and replaced.
“Those collagen molecules are there forever,” says Maas, creating a tough, fibrous tissue that keeps scars on our bodies for years, decades, or even a lifetime. Sometimes our bodies overdo it on collagen production, resulting in large or raised scars.
In its urgency to seal the rip in its protective outer coating, the body piles on extra collagen. This can produce red, raised scars that stay within the site of the original injury, called hypertrophic scars. In some cases, the resulting scar even extends far beyond the original injury. These are called keloid scars. Keloid scars can become itchy or painful as they grow. If they form too close to a joint, they can even impede movement. Surgical removal of keloids can cause them to grow back even larger.
How to look after your scars
Scars can fade and become less prominent over time as initial deposits of disorganized collagen are replaced with flatter, more ordered layers. But even this overhauled tissue looks different from normal skin, which is why scars rarely disappear completely.
Maas says that doctors can alter factors such as a scar’s discoloration and depth through cosmetic procedures, and that steroids can reduce redness. But the most important consideration, he adds, is good wound management.
Keep the wound clean. If it’s an open wound, keep it covered with fresh dressings. If the wound is closed up, Maas recommends keeping it covered with a thin layer of ointment. He says that some doctors prefer scars to dry up, but in his view, it’s important to protect against microbes while a wound heals.
But scars aren’t all bad. They’re a physical record of the experiences you’ve gone through. A scarred knee might fondly recall a tumble in the playground. A burn scar conjures memories of a busy dinner party. These marks wouldn’t have such power if they simply disappeared after a short time.
https://www.popsci.com/science/why-scars-never-disappear/?utm_source=firefox-newtab-en-us
*
EATING MORE EGGS MAY LOWER THE RISK OF DEMENTIA
Some existing research has suggested that egg consumption could benefit brain health as we age, with one recent study indicating that eating one egg per week was linked to lower Alzheimer’s risk.
A new study now claims that eating eggs at least five times a week is linked to a lower likelihood of receiving an Alzheimer’s diagnosis.
The study authors emphasize that moderate egg consumption is part of a balanced diet, which benefits health overall.
However, some questions remain about whether or not the relationship between egg intake and brain health is causal.
Research shows that dietary cholesterol from moderate egg consumption does not contribute to higher levels of “bad” cholesterol in the human body and thus does not heighten heart disease risk.
In fact, there is evidence to suggest that the high nutritive content of chicken eggs could bring several health benefits, including better protein synthesis in muscles, and increased satiety (the sensation of being full) that can aid weight management.
A study published in The Journal of Nutrition in July 2024 even found a link between egg consumption and a lower risk of Alzheimer’s disease.
Compared to no consumption, having eggs 1 to 3 times per month was linked to a 17% lower risk of Alzheimer’s, and having eggs 2 to 4 times per week was linked to a 20% lower Alzheimer’s risk.
Eggs contain several nutrients that may support brain health:
choline, which is essential for producing acetylcholine, a neurotransmitter involved in memory
lutein and zeaxanthin, which accumulate in the brain and may help reduce oxidative stress
omega-3 fatty acids (including DHA), important for neuronal structure and function
vitamin B12, which plays a role in reducing homocysteine levels and supporting neurological function
high-quality protein and tryptophan, which are involved in neurotransmitter pathways.
https://www.medicalnewstoday.com/articles/eating-eggs-5-times-a-week-linked-to-lower-alzheimers-risk#Eggs-and-brain-health-research-Where-to-from-here-on
*
CURE FOR MALE PATTERN BALDNESS?
Research suggests the cure for male pattern baldness might be a type of sugar.
The team simulated testosterone-based balding in mice and treated them with deoxyribose sugar, which stimulated blood vessel formation and ultimately caused hair regrowth.
Researchers say that the sugar treatment is just as effective as minoxidil (the active ingredient in Rogaine), a hair loss treatment currently on the market.
A 2-deoxy-D-ribose (2dDR) sugar gel shows potential to treat hair loss as effectively as 5% minoxidil (Rogaine). A 2024 study found that applying this natural sugar topically encourages hair growth by stimulating new blood vessels, boosting blood supply to follicles. Results showed similar efficacy to minoxidil, potentially offering a non-toxic alternative.
Mechanism: The sugar (2-deoxy-D-ribose) promotes angiogenesis—the formation of new blood vessels—which increases nutrient delivery to follicles, promoting the growth phase.
Study Findings: In a study on androgenic alopecia in mice, the gel showed 80-90% effectiveness in restoring hair growth compared to control groups, matching the results of standard minoxidil.
Potential Advantages: Researchers believe this natural, non-toxic sugar might have fewer side effects than other treatments and could treat hair loss caused by various conditions, including chemotherapy.
Current Status: While highly promising, the studies have been conducted on mice. Further research and human clinical trials are necessary to confirm efficacy and determine appropriate dosages, which could take years.
Products: Early products containing 2dDR are appearing on the market.
Not Yet Proven in Humans: The current evidence for deoxyribose hair growth is primarily based on animal studies. (AI overview)
Deoxyribose is a five-carbon sugar found primarily as a structural component of deoxyribonucleic acid (DNA), forming the backbone of the DNA polymer by alternating with phosphate groups. It is present in all living cells, including those of animals, plants, and microorganisms, as part of the genetic material.
Deoxyribose differs from ribose (found in RNA) by lacking one oxygen atom at the 2’ position.
Oriana:
Ribose (the sugar) is available online as a sweetener. It’s a so-so sweetener (inferior by far to xylitol, and especially to allulose), and it does nothing for hair growth. Ribose has been suggested as a preferred sweetener for cardiac patients.
Interesting how a tiny change in chemical structure can result in a profound change in function.
*
ending on beauty:
DEEP RED GLASS
Hours shrouded in black and dove
secrets of Tuesday
buried in the rain
River windows in sepia
and silver cascading
from April’s naked arms
~ Sutton Breiding













