*
HER HEART WAS BROOK
Anna Livia Plurabelle to James Joyce
Creator: I won’t believe in you
until I see your pronouns.
How you tremble to be slowly
translated. But tomorrow will you say,
Here’s the gift of my absence
so you can marry a god –
And I yes, but without
the Sanskrit of your caresses,
in what confluence do I dare
to spell myself? I hate to
disappoint you, but my favorite
word is mollusk.
Mother Church? Don’t even
mention that transvestite,
Maculate Father, giving birth
to a daughter unembraceable as water.
Plurability is your plume,
the slippery shadow between my breasts
urging you to more deltas
and meandering curves.
Anna was: Anna Perenna,
two-headed mother of time.
Livia is: for a river,
the sea is only a pretext.
Plurabelle is to be.
I’m sentenced to
myself, greener than all kisses,
and I yes, but the.
~ Oriana
*
"In the name of Annah the Allmaziful, the Everliving, the Bringer of Plurabilities, haoed be her eve, her singtime sung, her rill be done, unhemmed as it is uneven! ~ James Joyce, Finnegans Wake
The River Liffey, Dublin
*
riverrun, past Eve and Adam’s, from swerve of shore to bend of bay ~ the opening of Finnegans Wake
Moanday, Tearday, Wailsday, Thumpsday, Frightday, Shatterday ~ somewhere in the middle of Finnegans Wake
Then who but Crippled-with-Children would speak up for Dropping-with-Sweat?
Bag bag blockcheap, have you any will?
Sick us a sock with some sediment in it for the sake of our darning wives.
No peeping, pimpadoors!
The keys to. Given! A way a lone a last a loved a long the ~ the ending of Finnegans Wake
*
NORA AND LUCIA: WHAT SORT OF MOTHER ABANDONS HER DAUGHTER?
Baffled by Nora Joyce’s abandonment of her daughter, Annabel Abbs, author of The Joyce Girl, asks why some mothers can’t or won’t love their female offspring
118 years ago, Lucia Joyce was born out of wedlock to James Joyce and Nora Barnacle, in a pauper’s hospital in Trieste, Italy. Nora returned from hospital to a hovel where Joyce lay prostrate with rheumatic fever. Her fractious baby daughter had a squint, her toddler son was boisterous and demanding, she had no local friends or family, and she disliked the Italian heat.
Nora longed to be married and had been forced to lie about her marital status in order to give birth in a hospital. All in all, not an auspicious start. But was it bad enough to make her abandon her only daughter, 25 years later, when Lucia needed her most?
The breakdown in relations between Lucia and her mother preoccupied me for several years. As a mother of three daughters, I’m familiar with the challenges of this intimate but often complex relationship. As clinical psychologist Linda Blair says: “The mother-daughter relationship is the hardest and most complicated relationship there is.” My mother had a particularly fraught relationship with her own mother, who she believes never loved her. I grew up imbued with the knowledge that some mothers never love their daughters.
Giorgio and Lucia in Trieste, 1914
And yet I was bewildered by how easily and ruthlessly Nora abandoned her daughter. Lucia, a talented, aspiring dancer, was on the verge of success when she was struck down by mental illness at the age of 25, following a series of personal and professional setbacks. She spent the rest of her life in mental hospitals, finally dying in an English asylum hundreds of miles from friends and family.
It was her father who visited her, wrote to her, sent her gifts and enlisted friends to keep an eye on her. Nora was ominously quiet, although we know she helped care for Lucia initially. But after Joyce’s death Nora’s abandonment became callously clear. For the last 12 years of her life, Nora never visited Lucia. Not once. It appears that she never wrote to her and that she begrudged the expenditure on her daughter’s hospitals and carers. Why, I wondered, would a mother cease caring about a daughter?
Of course there was the shame associated with mental illness then. Was Nora too ashamed of Lucia to visit her? Letters from Joyce before his death suggest Nora was frightened of Lucia, whose rage was clearly directed at her mother. So was it fear that stopped her communicating with Lucia?
At the same time as these questions plagued me, a close friend’s daughter was sectioned with severe mental illness. “She hates me and hits me and I can’t love her anymore,” confessed my friend, tearfully. And yet my friend persevered, giving up her job to care for a daughter whose rage (like Lucia’s) was vented on her mother. Her daughter recovered and their relationship is now recovering too.
Could the same have happened to Lucia, if Nora had persevered? I turned to a psychologist friend for help. “It’s possible they never bonded as mother and baby,” said my friend. She referred me to a recent Turkish study that found daughters who hadn’t bonded with their mothers during infancy were more likely to have psychological and personality problems in adulthood.
Then she suggested I look closely at the early days of Nora and Lucia’s relationship. This wasn’t as easy as it sounded. I was dependent on a few remaining letters, but I found several references to how difficult Lucia had been as a baby, to boils on Lucia’s face that embarrassed Nora, and to Joyce’s concern that Lucia was put quickly onto a bottle – unlike her brother who’d been breast-fed for eighteen months.
When Lucia was two, Nora handed her children over to Joyce’s sister and then to a local girl, so she could take in laundry to make ends meet. The more I researched Lucia’s story, the more I wondered if Lucia’s breakdown, and Nora’s subsequent abandonment of her, were rooted in their early years together.
Back I went to my psychologist friend who suggested I explore the early childhood experiences of Nora, saying “If a mother can’t bond with her baby it’s highly likely to be related to her own experience with her own mother.” I scurried back to my books and found that Nora had been abandoned by her own mother, sent to live with her grandparents as a toddler. In a typically Catholic family where women produced baby after baby, it was common for some children to be fostered out to other family members.
Nora’s biographer, Brenda Maddox, believed Nora “felt cheated of mother-love herself”. But did the children at home fare any better? Mothers exhausted by childbirth, possibly malnourished, sometimes with undiagnosed post-natal depression, may not have bonded easily – resulting in unintentional emotional abandonment. Was this, I wondered, something symptomatic of the old Catholic culture with its insistence on wedlock and its prohibition of birth control? Had the Catholic emphasis on procreation resulted in generation after generation of “emotionally abandoned” children?
Oliver James, the eminent child psychologist, would probably answer in the affirmative. In his latest book, Not in Your Genes, he says the crucial period for damaging your children is up to the age of three. In an earlier newspaper interview he blamed his mother for his later inner rage: “Basically I was fucked up by my mother. She had four children under the age of five and found it difficult to cope.”
And yet Nora only had two children. Nor was she incapable of bonding. She had a deeply important relationship with her son, Giorgio. My psychologist friend had an answer for this too, explaining that in cultures where sons are prized over daughters, mothers often expend more emotional effort on raising a boy. She highlighted Italy and Ireland as two European countries with historical reasons for once prizing males over females, giving rise to so-called Mama/Mammy cultures. So was this also part of the emotional geography of the Joyce family?
When I checked my sources again I found that, yes, Nora and Joyce had hoped for another son. And yes, according to a Joyce family friend, Lucia felt “of little importance … and only the brother counted.” My mother’s experience was not dissimilar. Although she wasn’t appreciated or supported by her mother, her brother had always enjoyed full maternal devotion.
But the final clue to Nora’s treatment of Lucia lies hidden in the text of Joyce’s Ulysses. In the infamous soliloquy at the end of the novel, Molly (widely agreed to be based on Nora) rambles at length about her feelings of jealousy towards her daughter, Milly.
Could Nora have envied Lucia’s youth, brains, talent, the opportunities she had? Joyce certainly thought so. In his private notes to Lucia’s psychoanalyst, he accused Nora of harboring envious feelings towards their daughter. So was this why Nora abandoned Lucia to a mental asylum? My mother’s experience bears this out. She believes her own mother was driven to spite out of envy.
For a generation of women forced to have children they didn’t want and denied the chance for any role in public life, it was almost unendurable to watch their daughters grasping new opportunities. As my mother excelled at school, so her own mother’s cruelty increased. As Lucia became increasingly successful as a dancer, so Nora saw how very limited her own opportunities had been. My mother won a competition to become a journalist on Vogue, enabling her to escape to London. Lucia didn’t have the same luck.
Although this isn’t an exclusively Irish problem, I can’t help wondering if it’s yet another reason for the very high figures of young, single women emigrating from Ireland during the last century.
We now know that a good mother-daughter relationship has long-term implications. A study from the University of Georgia found that, even more than other family dynamics, the mother-daughter relationship determines a girl’s future self-esteem and relationship skills. The same study found that overly critical mothers had daughters with reduced self-confidence, fewer social abilities and greater risk of mental health problems. I remind myself of this every time I’m on the brink of suggesting my teenage daughters dress/talk/behave in a different way. It’s a salutary lesson – and one I’ll always thank Nora and Lucia Joyce for.
https://www.irishtimes.com/culture/books/nora-and-lucia-joyce-what-sort-of-mother-abandons-her-daughter-1.2734082
*
Nuvoluccia in her Lightdress: Lucia Joyce’s Mental Illness in Finnegans Wake
Whatever spark or gift I possess has been transmitted to Lucia, and it has kindled a fire in her brain. ~ James Joyce
For several years, the campaign in Ireland to fight stigma associated with mental disorders had Lucia Joyce, daughter of James Joyce, as its mascot, and in 1997, “Lucia Week” was launched in order to raise awareness of schizophrenia. The initiative came to an end because of the economic crisis, but Lucia was a well-chosen symbol for such a campaign. She was born in Trieste, Italy, on July 26, 1907. From youth she demonstrated an artistic talent, and she pursued a career as a modern dancer.
However, she was also psychologically fragile. Her moody and irritable character succumbed to repeated emotional breakdowns, until she became overtly psychotic in 1930, while dating the young Samuel Beckett. After that, recurring episodes of psychotic breakdown coincided with events related to sexuality and family life, such as her parents’ marriage (summer 1931), her own official engagement (in 1932), and her father’s birthday (on Feb. 2, 1932, and Feb. 2, 1934).
Lucia had always had a morbid attachment to her father, who did everything he could to have her cured, even if he often rejected the psychiatric diagnoses provided by the clinicians who were consulted.
Lucia’s father always treated her with understanding, care, and affection, and in Finnegans Wake he alluded to the fragile and poetic soul of the daughter as “Nuvoluccia in her lightdress,” a tender image that blends together an Italian diminutive, “little cloud” (“nuvoletta”), and assonance with “Lucia.” He did not think she was crazy; rather, he considered her special, a “fantastic being,” with her own private language and a mind “as clear and as unsparing as the lightning”.
In 1934, Carl Jung, who briefly treated Lucia, described father and daughter as “two people going to the bottom of a river, one falling and the other diving,” although he was reluctant to diagnose her fully. Such intuition that Lucia’s suffering reflected a similar latent disposition in her father was later echoed by Lacan, who suggested that Joyce’s writing was the supplementary cord that kept him from psychosis.
After 1934, Lucia was admitted to the Burghölzli Psychiatric Clinic in Zurich, where she was finally diagnosed with schizophrenia. In 1935 she was admitted to an asylum in Ivry-sur-Seine, France, and in 1951 she was transferred to St. Andrew’s Hospital in Northampton, England, where Beckett sometimes visited her. She died there at the age of 75, in 1982. In an interview, Carl Jung called her Joyce’s “anima inspiratrix,” explaining, “If you know anything of my Anima theory, Joyce and his daughter are a classical example of it. She was definitely his ʻfemme inspiraticeʼ, which explains his obstinate reluctance to have her certified.”
Carol Shloss further explored this idea and suggested that Lucia was indeed her father’s muse for Finnegans Wake. Notably, the novel has neither a truly narrative plot nor a conventional character construction, and it relies, rather, on sound, rhythm of language, and multilevel wordplay to convey the essence of Joyce’s narrative style.
*
Annabel Abbs, author of The Joyce Girl, asks why Lucia Joyce, a beautiful woman and talented dancer, was left to languish for 50 years in an English asylum
I’d studied Joyce at university, some 25 years earlier, but knew nothing of his daughter. I was instantly struck by the juxtaposition of the world-famous, iconic figure of Joyce and his “unknown” daughter. Everything about her intrigued me. I wanted to know more about this beautiful girl who’d studied modern dance in 1920s Paris.
I wanted to know what happened between her and Samuel Beckett, and between her and the American artist, Alexander Calder. Why had her lovers had such stellar careers, while her own promising career as a dancer languished? I wanted to know what it was like to have a father deemed to be a genius pornographer, and a mother who’d been an uneducated chambermaid – another odd juxtaposition. I wanted to know how it felt to be at the heart of one of the most exciting periods in artistic history.
But most of all I wanted to know why she was left, friendless and forgotten, in an English mental asylum for 50 years. What had happened to make her own mother and brother abandon her so ruthlessly?
Hoping to find some answers, I turned to a 600-page biography written a decade earlier by an American Joyce scholar called Carol Loeb Shloss. The biography was meticulously researched and yet entire swathes of Lucia’s life were unaccounted for. Why? Because most of her letters (to her, from her, about her) had been purposefully destroyed. Her medical records had been burnt. She spent four months in analysis with the legendary Carl Jung in Switzerland. He too had destroyed all his notes.
Poems and a novel she’d written had also been lost or destroyed. Lucia’s life seemed to be little more than a few bald facts strung together and viewed through the lens of other people, many of whom seemed entirely unreliable. And yet newspaper reviews (which I quote in the novel) raved about her talent.
I began reading anything I could find on 1920s Paris and the Joyce family. I went to the National Archives and the James Joyce Centers in Zurich and Trieste, trawling through clippings, photographs and previously-censored material. The more I researched, the angrier I became. Joyce’s daughter had been obliterated from history, her voice smothered. I realized that if I wanted to understand and experience her life, I would have to use the facts gleaned from my research – and imagine the rest.
Only a novel was going to give me the emotional truth of Lucia. Only fiction could provide the emotional access to the past I was looking for. To experience her life, both in the intense, claustrophobic Joyce household and colorful, creative, jazz-age Paris needed imagination, not another biography or history book. And, as Doris Lessing famously said, “There’s no doubt fiction makes a better job of the truth.”
I began writing Lucia’s story, fueled by outrage. I’d never attended any writing courses, but plowed on regardless. As my research gathered pace, I was horrified to uncover a pattern of young, newly liberated women being carted off to asylums: Lucia’s sister-in-law; her first boyfriend’s sister (who was the first French translator of Joyce’s Dubliners); and Zelda, the wife of F. Scott Fitzgerald (who studied ballet alongside Lucia). All appear in my novel – and all ended up with diagnoses of schizophrenia. Lucia and Zelda both died in asylums. They had both longed to be professional dancers. They had both, it seemed to me, lived in the shadows of more successful men.
These young women were also victims of the rapid change sweeping through the developed world. The 1920s was a time of huge change – cars, cameras, telephones and radios were altering the lives of everyone. In Paris, hems were up and stockings were down as young women embraced change and all it promised.
The River Liffey flowing through County Kildare

But beneath the glamor and glitter lay a dark underbelly. As I wrote The Joyce Girl, I noticed similarities with our own period. Today, technology and social media have revolutionized our world and yet beneath the glossy Technicolor of Instagram and Facebook lurks a similarly dark underbelly, with soaring rates of mental health problems among the young. The more I researched, the more I saw parallels between the 1920s and the 2010s – as new generations (particularly, but not exclusively, female) struggled to adapt to new values, to new ways of behaving and seeing themselves. Hence I decided to give my first-year profits to a charity called YoungMinds.
*
While many children grow up with the sense of a largely absent father, having a writer as a father results in something quite different – a father who is present in body but absent in spirit ~ Annabel Abbs
*
Lucia’s story was particularly interesting because she very much wanted to be a modern woman, and yet her parents retained a strongly Irish sense of propriety – in spite of Joyce’s image as a radical writer changing the face of fiction. It was here, in the father-daughter element of the story, that Lucia’s story resonated at a more personal level. Like Lucia, I grew up with a poet-father who exiled himself in order to pursue his art. The Joyces went to Italy and adopted Italian as their lingua franca. We went to Wales and learned Welsh. Like the Joyces, we lived in relative poverty, moving frequently during the first 10 years of my life.
My childhood was very much the “lite” version – but the similarities enabled me to understand how she might have felt. While many children grow up with the sense of a largely absent father, having a writer as a father results in something quite different – a father who is present in body but absent in spirit. Joyce worked and wrote obsessively. He was convinced of his own genius. He refused to compromise the integrity of his art for anything – or anyone. My father was similarly compulsive. Like Lucia and her brother, Giorgio, my siblings and I grew up in thrall to the creative will. Reading Ulysses for the third time reminded me of how particularly absent Joyce must have been from his children’s childhood. But there the similarities ended.
The Joyce Girl is my attempt to resurrect Lucia and give her a voice, to experience 1920s Paris as she might have done, to understand why her life fizzled out in the way it did. I hope I’ve done her a scrap of the justice she deserves.
https://www.irishtimes.com/culture/books/why-was-james-joyce-s-daughter-lucia-written-out-of-history-1.2687082
*
Some have suggested that Joyce’s daughter, Lucia, and her schizophrenic word-salad speech inspired Finnegans Wake.
“Whatever spark or gift I possess has been transmitted to Lucia and it has kindled a fire in her brain.” —James Joyce, 1934
Most accounts of James Joyce’s family portray Lucia Joyce as the mad daughter of a man of genius, a difficult burden. But in this important new book, Carol Loeb Shloss reveals a different, more dramatic story. Her father loved Lucia, and they shared a deep creative bond.
Lucia was born in a pauper’s hospital and educated haphazardly across Europe as her penniless father pursued his art. She wanted to strike out on her own and in her twenties emerged, to Joyce’s amazement, as a harbinger of expressive modern dance in Paris. He described her then as a wild, beautiful, “fantastic being” whose mind was “as clear and as unsparing as the lightning.” The family’s only reader of Joyce, she was a child of the imaginative realms her father created, and even after emotional turmoil wrought havoc with her and she was hospitalized in the 1930s, he saw in her a life lived in tandem with his own.
Though most of the documents about Lucia have been destroyed, Shloss painstakingly reconstructs the poignant complexities of her life—and with them a vital episode in the early history of psychiatry, for in Joyce’s efforts to help her he sought the help of Europe’s most advanced doctors, including Jung. In Lucia’s world Shloss has also uncovered important material that deepens our understanding of Finnegans Wake, the book that redefined modern literature.
https://www.goodreads.com/book/show/1301226.Lucia_Joyce
from Wikipedia:
Lucia Anna Joyce (26 July 1907 – 12 December 1982) was an Irish professional dancer and the daughter of Irish writer James Joyce and Nora Barnacle. Once treated by Swiss psychiatrist Carl Jung, Joyce was diagnosed as schizophrenic in the mid-1930s and institutionalized at the Burghölzli psychiatric clinic in Zurich. In 1951, she was transferred to St Andrew's Hospital in Northampton, where she remained until her death in 1982. She was the aunt of Stephen James Joyce, who was the last descendant of James Joyce.
*
Lucia Anna Joyce was born in the Ospedale Civico di Trieste on 26 July 1907. She was the second child of Irish writer James Joyce and his partner (later wife) Nora Barnacle, after her brother Giorgio. As her parents were expatriates living in Trieste, Lucia's first language was Italian. In her younger years, she trained as a dancer at the Dalcroze Institute in Paris. She studied dancing from 1925 to 1929, training first with Jacques Dalcroze, followed by Margaret Morris, and later with Raymond Duncan (brother of Isadora Duncan) at his school near Salzburg. In 1927, Joyce danced a short duet as a toy soldier in Jean Renoir’s film adaptation of Hans Christian Andersen's "La Petite marchande d’allumettes" (The Little Match Girl). She furthered her studies under Lois Hutton, Hélène Vanel, and Jean Borlin, lead dancer of the Ballet suédois (Swedish Ballet).
In 1928, she joined "Les Six de rythme et couleur," a commune of six female dancers that were soon performing at venues in France, Austria, and Germany. After a performance in La Princesse Primitive at the Vieux-Colombier theatre, the Paris Times wrote of her: "Lucia Joyce is her father's daughter. She has James Joyce's enthusiasm, energy, and a not-yet-determined amount of his genius. When she reaches her full capacity for rhythmic dancing, James Joyce may yet be known as his daughter's father.”
On 28 May 1929, she was chosen as one of six finalists in the first international festival of dance in Paris held at the Bal Bullier. Although she did not win, the audience, which included her father and the young Samuel Beckett, championed her performance as outstanding and loudly protested the jury's verdict. It has been alleged that when Lucia was 21, she and Beckett (who was her father's secretary for a short time) became lovers. Their relationship lasted only a short while and ended after Beckett, who was involved with another woman at the time, admitted that his interest was actually in a professional relationship with James Joyce, not a personal one with Joyce's daughter.
At the age of 22, Joyce, after years of rigorous dedication and long hours of practice, decided "she was not physically strong enough to be a dancer of any kind." Announcing she would become a teacher, she then "turned down an offer to join a group in Darmstadt and effectively gave up dancing."
Her biographer Carol Shloss, however, argues that it was her father who finally put an end to her dancing career. James reasoned that the intense physical training for ballet caused her undue stress, which in turn exacerbated the long-standing animosity between her and her mother Nora. The resulting incessant domestic squabbles prevented work on Finnegans Wake. James convinced her she should turn to drawing lettrines to illustrate his prose and forgo her deep-seated artistic inclinations. To his patron Harriet Shaw Weaver, James Joyce wrote that this resulted in "a month of tears as she thinks she has thrown away three or four years of hard work and is sacrificing a talent.”
Mental illness and later life
Lucia Joyce started to show signs of mental illness in 1930, including a time period during which she was involved with Samuel Beckett, then a junior lecturer in English at the École normale supérieure in Paris. In May 1930, while her parents were in Zurich, she invited Beckett to dinner, hoping “to press him into some kind of declaration.” He flatly rejected her, explaining that he was only interested in her father and his writing.
By 1934, she had participated in several affairs, with her drawing teacher Alexander Calder, another expatriate artist Albert Hubbell, and Myrsine Moschos, assistant to Sylvia Beach of Shakespeare and Company. As the year wore on, her condition had deteriorated to the point that James had Carl Jung take her in as a patient. Soon after, she was diagnosed with schizophrenia at the Burghölzli psychiatric clinic in Zurich.
In 1936, James consented to have his daughter undergo blood tests at St Andrew's Hospital in Northampton. After a short stay, Lucia Joyce insisted she return to Paris, the doctors explaining to her father that she could not be prevented from doing so unless he had her committed. James told his closest friends that "he would never agree to his daughter being incarcerated among the English.”
Lucia Joyce returned to stay with Maria Jolas, the wife of transition editor Eugene Jolas, in Neuilly-sur-Seine. After three weeks, her condition worsened and she was taken away in a straitjacket to the Maison de Santé Velpeau in Vésinet. Considered a danger to both staff and inmates, she was left in isolation. Two months later, she entered the maison de santé of François Achille Delmas at Ivry-sur-Seine.
In 1951, Joyce was transferred back to St Andrew's Hospital. Over the years, she received visits from Beckett, Sylvia Beach, Frank Budgen, Maria Jolas, and Harriet Shaw Weaver, who acted as her guardian.
In 1962, Beckett donated his share of the royalties from his 1929 contributory essay on Finnegans Wake in Our Exagmination Round His Factification for Incamination of Work in Progress to help pay for her confinement at St Andrew's.
In 1982, Lucia Joyce had a stroke and died on 12 December. She is buried in Kingsthorpe Cemetery.
*
Each year on Bloomsday (16 June), extracts from James Joyce's Ulysses and other readings related to his life and works are read at Lucia Anna Joyce's graveside. In 2018 on Bloomsday, Letters to Lucia, a play written by Richard Rose and James Vollmar in which characters from Lucia's life, including Samuel Beckett, Kathleen Neel, Nora Barnacle-Joyce and Joyce himself appear, was performed by the Triskellion Irish Theatre Company at the graveside.
Her mental state, and documentation related to it, is the subject of a 2003 study, Lucia Joyce: To Dance in the Wake, by Carol Loeb Shloss, who believes Lucia Joyce to have been her father's muse for Finnegans Wake. Making heavy reference to the letters between Lucia and her father, the study became the subject of a copyright misuse suit by the James Joyce estate. On 25 March 2007, this litigation was resolved in Shloss's favor.
Professor John McCourt, of the University of Macerata, a prize-winning Joyce scholar, trustee of the International James Joyce Foundation, and co-founder and director of the International James Joyce symposium held at Trieste, wrote in A Companion to Literary Biography (ed. Richard Bradford, Wiley Blackwell, 2019) that Shloss, in her "sometimes obsessive" book, "seeks very deliberately to depose Nora (Joyce's wife) as Joyce's chief muse... in doing so, it overplays its hand with exaggerated claims about Lucia's genius and about her importance to Joyce's creative process and vindictively harsh judgments on most members of the Joyce family and circle.”
The book's "most damaging legacy is the cottage industry of derivative versions of Lucia that it has helped to spawn... the key source for a whole series of writings about Lucia that uncomfortably mix fact and fiction" including The Joyce Girl (2016) by Annabel Abbs, of which McCourt wrote "With Abbs, the perverse cycle of interest in Lucia comes full circle. We are back in the territory of fiction fraudulently posing as biography"; he considered the book "a prime contender for the worst Joyce-inspired 'biography' ever". The book was also the subject of criticism in the Irish Times and Irish Examiner regarding the author's "unsubstantiated speculations" regarding incest between Lucia and her brother, and the sources of her mental illness.
In 1988, Stephen Joyce (the grandson of James Joyce) had all the letters written by Lucia that he received upon her death in 1982 destroyed. Stephen Joyce stated in a letter to the editor of The New York Times that "Regarding the destroyed correspondence, these were all personal letters from Lucia to us. They were written many years after both Nonno and Nonna [i.e., Mr. and Mrs. Joyce] died and did not refer to them. Also destroyed were some postcards and one telegram from Samuel Beckett to Lucia. This was done at Sam's written request.”
In 2004, Lucia Joyce's life was the subject of Calico, a West End play written by Michael Hastings, and, in 2012, of the graphic novel Dotter of Her Father's Eyes by Mary and Bryan Talbot. A play exploring her life, titled L, was performed to a limited audience in Concord Academy from 14 to 16 April 2016. It was written and directed by Sophia Ginsburg. 2016 saw the release of Annabel Abbs's biographical novel, The Joyce Girl; in 2018, she was the subject of a novel by Alex Pheby, titled Lucia. Lucia Joyce is the protagonist of the "Round the Bend" chapter of Alan Moore's 2016 novel Jerusalem. Set at the Northampton clinic where she spent her final years, the chapter is written in the style of her father's Finnegans Wake.
DOES ANYONE UNDERSTAND FINNEGANS WAKE?
Carl Jung was diplomatic when he said this, but you can read between the lines:
"The book is a specimen of a schizophrenic mind—not that Joyce himself was schizophrenic, but his mental processes in Ulysses resemble the associative fragmentation seen in schizophrenia."
In a 1932 letter to Joyce, Jung wrote this: "Your Ulysses has presented the world such an upsetting psychological problem that repeatedly I have been called in as a supposed authority on mental disorders.
I had to tell them that Ulysses was no more pathological than modern art in general—it is only ‘unpsychological’ in the sense that it does not submit to the usual laws of human psychology."
It’s hard to know whether Jung was being naive in writing this (he did say some really unbelievable things about UFOs, after all), or tactful and polite in calling the work unpsychological (which is a nonsense word, like much of what is found in Ulysses and FW).
Despite his initial fascination, Jung later admitted, "I had an uncle who thought he was the wheatfield and the reaper at the same time. Ulysses is like that—Joyce is both the machine and its operator."
Virginia Woolf had this to say on FW:
"An illiterate, underbred book… the book of a self-taught working man, egotistical, insistent, raw, striking, and ultimately nauseating."
D.H. Lawrence was attacking Ulysses rather than FW, but his verdict applies equally well to it, since FW is even worse:
"Ulysses is the dirtiest, most indecent, obscene thing ever written—a deliberate, stoked-up madness—a madman’s book."
In Time and Western Man (1927), Wyndham Lewis attacked Joyce’s so-called stream-of-consciousness with:
"Joyce is the Catholic clown of literature, a time-obsessed pedant drowning in his own verbal diarrhea."
Edmund Wilson:
"Finnegans Wake is Joyce’s downfall—an unintelligible labyrinth where genius drowned in its own cleverness."
And not just artists, authors and psychologists. Even philosophers. For example, in The Open Society and Its Enemies (1945), Popper criticizes modernist obscurantism: "There is a cult of incomprehensibility in modern art and literature—a pretentious attempt to simulate depth through deliberate confusion."
Later, in interviews, he singled out FW as "a self-referential word-game masquerading as profundity—philosophically empty."
The analytic philosopher John Searle has also dismissed Finnegans Wake in The Construction of Social Reality (1995): "Joyce’s later work is a deliberate sabotage of linguistic conventions—an interesting experiment, but not meaningful communication."
And so forth. ~ Jan M. Savage, Quora
*
Oriana: "A GENIUS DROWNED IN ITS OWN CLEVERNESS"
Some have suggested that Joyce’s daughter, Lucia, inspired Finnegans Wake. Scholars agree that Joyce’s muse for Ulysses was his wife, Nora (a model for the character of Molly), but his muse for Finnegans Wake was Lucia, with her schizophrenic speech. But simply establishing the identity of Joyce’s muse takes us only so far. Whether its source was schizophrenia, senile chatter, or sheer joy in creating multilingual puns: is the result worth the time it takes to decipher a single paragraph?
I have a copy of Finnegans Wake, and dip into it once in a great while. When I do, I read a paragraph — or try to — for the sake of completely erasing mundane reality and bathing in sheer “poly-language,” if I may coin a term. It’s a special kind of brain exercise, and now and then it brings pleasure (I confess that “Shatterday” delights me, as does "riverrun"). I suspect that for readers who don’t have a fascination with language(s), Joyce’s last book is a waste of time. I share Edmund Wilson’s view that this is an example of “genius drowned in its own cleverness.”
Vladimir Nabokov called Finnegans Wake "a tragic failure." Joyce spent seventeen years writing it, producing a lengthy novel (???) that no one reads.
*
GAZA: STARVATION OR DECEPTION?
Palestinian mother Samah Matar holding her son Youssef in a shelter in Gaza City on July 24, 2025.
You’ve sure got to wonder about all those claims that Israel is sparking starvation in Gaza when every single picture meant to illustrate it turns out to be fake.
Forget starvation: “The subjects have cystic fibrosis, rickets, or other serious ailments,” the FP reported.
Hmm. So all the world media desperate to offer clear evidence of famine have yet to find even a single Gazan healthy before the war but now suffering from actual starvation, and not other illnesses.
How, then, can anyone believe claims that hunger is widespread, let alone imply Israel is to blame?
Some reports actually seem like intentional deception: A July 29 Guardian story on Gaza famine depicted Youssef Matar as “malnourished” but omitted that he suffers from cerebral palsy — even though Reuters had noted it just days earlier.
Other outlets offer sad excuses for their fake photos: CNN blamed “agencies and local journalists” in Gaza for its story on Mosab al-Debs, 14, “suffering from malnourishment” — without mentioning that a head injury had left him paralyzed.
Similarly, the New York Times described Atef Abu Khater, 17, as healthy pre-war but recently malnourished, failing to note that a psychological shock had led him to refuse food. As its excuse, the paper cited how an “official” (which, in Gaza, means Hamas-directed) report listed “the cause of his death as severe malnutrition.”
“We failed to ensure we weren’t just passing along propaganda” is a pretty poor excuse for any news professional.
Plus: Uncovering the “missing context” for these 12 subjects “didn’t require in-depth, on-the-ground reporting,” the FP reporters noted. “It took minutes and required nothing more than a computer with stable internet connection.”
That was too tough for CNN and the Times?
And the scandals are still coming: On Monday, the BBC admitted a Gaza woman it said died of starvation actually had leukemia.
This is a pattern of shocking malpractice across multiple “high-quality” institutions.
It doesn’t just mean you can’t believe their reports of Gaza famine, it means you can’t trust any of their reporting on any Israel-Palestinian issue … or maybe on anything at all.
Phil Koprowski:
First clue is the mother isn’t starving.
Dr Feelgood:
In 2000 it was the fake death of a kid, al Durrah. At that time the light came on for many of us that Palestinian news outlets and their Euro friends were Pallywood [“Pal” stands for “Palestine”]…Hollywood style fake stories. Then more recently there was the story of the IDF allegedly bombing a hospital in Gaza killing hundreds…then it turned out that the rocket was shot by Hamas, it landed in the hospital’s parking lot, and killed maybe a couple dozen. At some point, if you are intelligent and fair, you have to disbelieve everything that comes from Pal news sources, whether in Gaza, Europe, New York Times, wherever. I believe what the Israeli government says about war statistics unless there is lots of convincing evidence that it’s wrong and if that happens the Israeli sources will admit their mistake.
Man on the street:
Hamas has won the war of BS propaganda
Tim Scott:
Pure propaganda. Love the Madonna-like images with the woman (Mother...Hamas?) wearing the long hijab with the baby's back always turned to the viewer to show the spine.
Sorry, Hamas, you ain't foolin' no one no more.
*
ISRAEL APPROVES A SETTLEMENT PROJECT IN THE WEST BANK
The project, located in a tract of land known as E1 just east of Jerusalem, has been under discussion for more than two decades. While repeatedly advanced, it was frozen for years under U.S. pressure as successive administrations sought to preserve the viability of negotiations toward a two-state solution. The international community overwhelmingly considers Israeli settlement construction in the West Bank to be illegal under international law and a serious obstacle to peace.
Israeli Finance Minister Bezalel Smotrich holds a map that shows the E1 settlement project during a press conference near the settlement of Maale Adumim, in the Israeli-occupied West Bank, Thursday, Aug. 14, 2025
Far-right Finance Minister Bezalel Smotrich, a former settler leader and one of the government's most outspoken voices on the issue, celebrated the approval as a direct rebuke to Western countries that recently announced plans to recognize a Palestinian state. "The Palestinian state is being erased from the table not with slogans but with actions," Smotrich declared. "Every settlement, every neighborhood, every housing unit is another nail in the coffin of this dangerous idea."
Prime Minister Benjamin Netanyahu has long rejected the idea of a Palestinian state alongside Israel, insisting instead on maintaining open-ended control over the West Bank, annexed east Jerusalem, and the Gaza Strip. Israel seized all three territories during the 1967 Middle East war, and Palestinians have consistently sought them as the foundation of their future state.
The E1 project includes roughly 3,500 apartments that would connect the existing settlement of Maale Adumim with Jerusalem. Critics say the location is strategically significant because it represents one of the few remaining territorial links between the West Bank cities of Ramallah in the north and Bethlehem in the south. Cutting that corridor, they warn, would not only fragment Palestinian population centers but also cripple the geographic contiguity of any future state.
Though Ramallah and Bethlehem are only 14 miles apart, Palestinians already face arduous restrictions traveling between them. To make the journey, they must endure long detours through multiple Israeli checkpoints, often spending hours on what should be a short trip. The hope had been that in the framework of a negotiated settlement, the E1 area could serve as a direct and functional link between the two cities.
"The settlement in E1 has no purpose other than to sabotage a political solution," said Peace Now, an Israeli group that tracks settlement growth. "While the consensus among our friends in the world is to strive for peace and a two-state solution, a government that long ago lost the people's trust is undermining the national interest, and we are all paying the price."
The approval comes as Palestinians face mounting challenges in the West Bank. The war in Gaza has drawn international attention, but in the West Bank violence and displacement have surged. In recent months, there has been an uptick in settler attacks on Palestinian communities, military raids, and tighter checkpoints, as well as Palestinian attacks targeting Israelis.
More than 700,000 Israeli settlers now live in the West Bank and east Jerusalem.
Smotrich also announced the approval of 350 new homes in the settlement of Ashael, near Hebron, during the same meeting. Israel's government, dominated by religious and ultranationalist politicians with strong ties to the settlement movement, has placed settlement growth at the center of its agenda. Smotrich himself has been granted Cabinet-level authority over settlement policy, underscoring the influence of the movement within Netanyahu's coalition.
If the process proceeds quickly, infrastructure work in E1 could begin within months, with home construction starting in about a year. For many Palestinians and rights groups, that timeline adds urgency to what they see as a pivotal moment for the future of the West Bank—and for any remaining hope of a two-state solution.
How many countries recognize Palestine as a state?
As of today, around 140 of the 193 United Nations member states recognize Palestine as a sovereign state. Recognition surged in the late 20th century following the Palestine Liberation Organization's 1988 declaration of independence, and it has grown steadily, particularly among countries in Africa, Asia, the Middle East, and Latin America.
However, major Western powers—including the United States, most of Western Europe, and several others—have withheld recognition, arguing that Palestinian statehood should come through direct negotiations with Israel. The split in international recognition reflects the deep divisions in global diplomacy over the Israeli-Palestinian conflict, and recent moves by some European nations to recognize Palestine have reignited debate on the issue.
https://www.newsweek.com/israel-west-bank-settlement-palestinian-state-2116429
*
TRUMP AND INDIA
The irony of Trump 2.0 is, for all his bombast about America First, he may very well end up being remembered for the massive changes he is making to the global order, for better and/or worse. I'm trying to be optimistic about the endgame in Ukraine, but for all I know the deal Trump ends up making will be looked at in time as something akin to Munich in 1938, when the Europeans handed over the Sudetenland to Hitler in exchange for him promising to chill out. Remember, Neville Chamberlain arrived home to London from that summit hailed as a hero and a peacemaker. I am hopeful Trump is not walking a similar path with Putin.
India is the biggest democracy in the world. 1.4 billion people and counting. A hugely influential force in the Pacific, where, like it or not, the next Big War will probably play out.
Our relationship with the Indians over the past 25 years has been one of mutual respect based on similar goals... or at least it was until recently. Trump, for reasons that remain hazy, has taken a particularly aggressive posture toward New Delhi and the Modi government, even though he and the populist Modi are alike in many ways.
We've hit India with some of the sharpest tariffs of any of our trading partners. A week from today, the effective tariff rate on Indian goods goes from 25% to 50%. India has a reputation for being a protectionist economy, but its mean tariff rate is under 5%, falling from 56% in 1990. As developing economies develop, they tend to lower their trade barriers, which is exactly what India is doing. Why Trump believes bullying the developing world is appropriate behavior for a global hegemon remains one of the more puzzling questions of his trade agenda. He's doing it to Brazil, too, but there his interests are purely political.
For India, Trump has said the high tariff rate is punitive in nature—punishment for India continuing to buy oil from Russia and effectively helping to underwrite Moscow's war machine. I caught Scott Bessent on CNBC yesterday, where he accused the Indians of "profiteering" from the war by essentially buying Russian crude, refining it and selling it back at a higher price on the open market. I certainly get why we'd want them to stop doing this, but India is buying so much Russian oil because the oil is being sold at a steep discount.
Isn't that just good business, capitalizing on the dynamics of the market? It's a bit rich for Bessent, a former hedge-fund guy, to be complaining about what he referred to as "opportunistic arbitrage," i.e., the exact strategy that has made people like him unimaginably wealthy, often at the expense of American businesses and workers. I digress.
Another reason for Trump's new hardline tack on India has to do with the brief shooting war between India and Pakistan that erupted in May, after an April terror attack by Pakistan-based militants that killed 26 tourists in Kashmir. Trump has made it a point to take credit for bringing those two nuclear powers to the table and hammering out a ceasefire. The Pakistanis rushed to thank him, and even put him up for the Nobel he covets. India, though, has not credited Trump for brokering that ceasefire and, in fact, scoffs at the notion of any external mediation in its long-running conflict with its neighbor. It seems like Trump has noted this affront to the peacemaker-in-chief persona he is trying to cultivate and is retaliating accordingly.
Beyond all that, there's also a deep undercurrent of anti-Indian sentiment that runs through MAGA, which Trump is probably aware of. I spend entirely too much of my precious waking hours trawling right-wing message boards and X threads to see what they're talking about (just as I do for the left, which can be equally tedious), and the racism and xenophobia toward Indians coming from the MAGA grassroots is pretty shocking... and I am not easily shocked. A lot of it has to do with this belief that Indians are scamming the H-1B visa program and taking jobs away from Americans, particularly in the high-paying tech field. But there are only 400K H-1Bs approved every year. Foreign workers are an easy scapegoat for the broader trend of AI displacing labor, and the fact that these tech companies want to get the best labor at the cheapest cost. Again, free markets. Don't hate the player…
Meanwhile, Trump is getting cozier with the Pakistanis, inviting their army chief to the White House, despite the fact that Pakistan is in bed with Beijing, harbored bin Laden and helped the Taliban take back Kabul from us in 2021. Not exactly our friends. At the same time, we're taking a hostile posture toward our actual friends who operate a sprawling democracy in a very strategically important region, where we need all the help we can get to counter China's rise. All the while, China continues to get the soft touch with lower tariffs than India while also avoiding sanctions for buying its own Russian oil.
As Nikki Haley writes in a Newsweek op-ed today on this topic: "The United States should not lose sight of what matters most: our shared goals. To face China, the United States must have a friend in India.”
https://www.newsweek.com/1600-what-trumps-beef-india-2116237
*
“Trump is the American Hitler, and immigrants are the new Jews.” ~ Dr. John Gartner
*
HEDY LAMARR
In 1933, a beautiful young Austrian woman took off her clothes for a movie director. She ran through the woods, naked. She swam in a lake, naked. The most popular movie in 1933 was King Kong. But everyone in Hollywood was talking about that scandalous movie with the gorgeous young Austrian woman.
Louis B. Mayer, of the giant studio MGM, said she was the most beautiful woman in the world. The film was banned practically everywhere, which of course made it even more popular and valuable. Mussolini reportedly refused to sell his copy at any price.
The star of the film, called "Ecstasy," was Hedwig Kiesler. She said the secret of her beauty was "to stand there and look stupid." In reality, Kiesler was anything but stupid.
At the time she made Ecstasy, Kiesler was married to one of the richest men in Austria. Friedrich Mandl was Austria's leading arms maker. His firm would become a key supplier to the Nazis. Mandl used his beautiful young wife as a showpiece at important business dinners with representatives of the Austrian, Italian, and German fascist forces. One of Mandl's favorite topics at these gatherings — which included meals with Hitler and Mussolini — was the technology surrounding radio-controlled missiles and torpedoes.
As a Jew, Kiesler hated the Nazis. She abhorred her husband's business ambitions. Mandl responded to his willful wife by imprisoning her in his castle, Schloss Schwarzenau. In 1937, she managed to escape. She drugged her maid, snuck out of the castle wearing the maid's clothes and sold her jewelry to finance a trip to London. (She got out just in time. In 1938, Germany annexed Austria. The Nazis seized Mandl's factory; he was half Jewish. Mandl fled to Brazil. Later, he became an adviser to Argentina's iconic populist president, Juan Peron.)
In London, Kiesler arranged a meeting with Louis B. Mayer. She signed a long-term contract with him, becoming one of MGM's biggest stars. She appeared in more than 20 films. She was a co-star to Clark Gable, Judy Garland, and even Bob Hope. Each of her first seven MGM movies was a blockbuster. But Kiesler cared far more about fighting the Nazis than about making movies.
At the height of her fame, in 1942, she developed a new kind of communications system, optimized for sending coded messages that couldn't be "jammed." She was building a system that would allow torpedoes and guided bombs to always reach their targets. She was building a system to kill Nazis. By the 1940s, both the Nazis and the Allied forces were using the kind of single frequency radio-controlled technology Kiesler's ex-husband had been peddling.
Most of you won't recognize the name Kiesler. And no one would remember the name Hedy Markey. But it's a fair bet that anyone of a certain age reading this post will remember one of the great beauties of Hollywood's golden age — Hedy Lamarr. That's the name Louis B. Mayer gave to his prize actress.
~ posted by Hedy Habrah, Facebook
Although she died in 2000, Lamarr was inducted into the National Inventors Hall of Fame in 2014 for the development of her frequency-hopping technology. This achievement led to Lamarr being dubbed “the mother of Wi-Fi” and of other wireless technologies like GPS and Bluetooth. (~ Google)
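For the technically curious, the anti-jamming idea behind frequency hopping is easy to sketch. The toy Python model below is not the Lamarr-Antheil patent mechanism (which synchronized the hops mechanically, in the manner of piano rolls); the channel count nods to the patent's famous 88 frequencies, but the seed, message, and function names are all hypothetical illustrations.

```python
import random

# Toy model of frequency hopping: sender and receiver derive the same
# pseudorandom channel schedule from a shared seed, so a jammer parked
# on one frequency corrupts only the symbols that land on it.

CHANNELS = 88          # nod to the 88 frequencies in the 1942 patent
SEED = 1942            # shared secret synchronizing both parties

def hop_sequence(seed, length):
    """Pseudorandom channel schedule known to both parties."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(length)]

def transmit(message, seed):
    """Pair each symbol with the channel it is sent on."""
    return list(zip(hop_sequence(seed, len(message)), message))

def jam(airwaves, jammed_channel):
    """A fixed-frequency jammer wipes out one channel only."""
    return [(ch, None if ch == jammed_channel else sym)
            for ch, sym in airwaves]

def receive(airwaves, seed):
    """Follow the same schedule; jammed symbols show up as '?'."""
    schedule = hop_sequence(seed, len(airwaves))
    return "".join(sym if ch == expected and sym is not None else "?"
                   for expected, (ch, sym) in zip(schedule, airwaves))

message = "STEER TORPEDO LEFT"
airwaves = jam(transmit(message, SEED), jammed_channel=37)
print(receive(airwaves, SEED))  # only ~1/88 of symbols are lost
```

The design point the sketch makes: because the hop schedule is shared secret knowledge, a single-frequency transmitter of the kind Mandl peddled is trivial to jam, while a hopping signal degrades only fractionally.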
*
CASH PAYMENTS TO MOTHERS SAVE BABIES’ LIVES
To save the lives of infants and small children living in low- and middle-income countries, there are a handful of tried and tested tools, like anti-malarial drugs, bed nets and vaccines. The results from a massive experiment in rural Kenya suggest another: cash.
Infants born to people who received $1,000, no-strings-attached, were nearly half as likely to die as infants born to people who got no cash, according to a report published Monday by the National Bureau of Economic Research. Cash cut mortality in children under 5 by about 45%, the study researchers found, on par with interventions like vaccines and anti-malarials.
"This paper is really well done, and the result itself is pretty stunning," says Heath Henderson, an economist at Drake University who wasn't involved in the study. Historically, it's been "difficult to study the impacts of cash transfers on mortality with any sort of rigor," he says.
"This study is different," he says, and suggests cash can help people get life-saving care.
Over the past decade or so, the idea of simply giving people living in poverty cash has gained traction, in part by evidence that it can work. The best evidence comes from what researchers call randomized controlled trials. In this set-up, an experimental group gets cash, a control group doesn't, and researchers look for differences in measurable outcomes, like income or savings, to understand what difference cash made.
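To make that trial logic concrete, here is a minimal sketch in Python. The two mortality rates are the figures reported below; the sample size, seed, and the simulation itself are illustrative assumptions, not the study's actual design or data.

```python
import random

# Toy illustration of the randomized-controlled-trial logic described
# above. The mortality rates come from the article; the sample size and
# the simulation itself are hypothetical.

random.seed(0)
N = 50_000                    # births per arm (made-up sample size)
CONTROL_RATE = 40 / 1000      # infant deaths per birth without cash
TREATMENT_RATE = 21 / 1000    # infant deaths per birth with cash

def simulate_arm(births, rate):
    """Count deaths among a number of births at a given mortality rate."""
    return sum(random.random() < rate for _ in range(births))

control_deaths = simulate_arm(N, CONTROL_RATE)
treated_deaths = simulate_arm(N, TREATMENT_RATE)

# Random assignment means the arms differ only in the cash transfer,
# so the difference in death rates estimates its causal effect.
print(f"control: {1000 * control_deaths / N:.1f} deaths per 1,000 births")
print(f"treated: {1000 * treated_deaths / N:.1f} deaths per 1,000 births")
print(f"estimated effect: {1000 * (control_deaths - treated_deaths) / N:.1f} "
      "fewer deaths per 1,000 births")
```

The point is simply that randomization licenses the causal reading; the hard part, as Miguel explains below, is that rare outcomes like child deaths demand enormous samples, which is why the team's census covered over 100,000 births.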
While studies have found clear links between cash transfers and economic well-being, health has been harder to pin down, especially for the most dire health outcomes.
"Infant and child mortality in rural Kenya is an order of magnitude higher than it is in the U.S.," says Edward Miguel, an economist at the University of California Berkeley and study co-author. "But it's still a relatively rare event to have a child die. Statistically speaking, that means we need a really large sample size to have precise and reliable estimates of the effect of cash on child mortality."
$1,000 to 10,000 families
In 2014, the nonprofit GiveDirectly began a massive experiment. Over the next three years, they gave $1,000 to over 10,000 low-income households across 653 villages in Western Kenya.
"It was designed as a randomized control trial," says Miguel. "So some areas got more cash. Some got less cash, and we can study the impact of that cash."
To study that impact, Miguel and his colleagues collected a lot of data. They completed a kind of birth census for all children that had been born and died before age 5 over the previous decade in the study area. "We ended up collecting data on over 100,000 births. It took a year to do."
They found that cash had major benefits for infant and child mortality, especially when it was delivered close to birth.
Cash payments were associated with a 48% drop in infant mortality, from roughly 40 deaths per 1,000 births to about 21 deaths. Deaths of children under five were 45% lower in households who got cash, dropping from 57 to 32 per 1,000 births.
Cash played an outsized role in reducing deaths during birth and in the few weeks after; such deaths fell by 70% compared to controls. "That really pointed toward a key role for access to health services right at the moment of delivery being very important," says Miguel.
Why cash cut deaths
For many living across rural sub-Saharan Africa, getting to a health facility, and paying for care there, can be difficult, especially when pregnant.
"When I worked in rural parts of Uganda, one of the things that was really clear for pregnant women was they did not attend antenatal care, because it's so difficult to get to a health care facility," says Miriam Laker-Oketta, GiveDirectly's senior research adviser.
"You're making the decision between, should I go for antenatal care and have my family sleep hungry, or should I stay home and hope that my baby is fine because I'm not feeling sick and we can have a meal that day," she says, since often women would have to forgo work for a day to go to the doctor. "Those are some of the decisions people have to make."
Extra cash seems to make those decisions easier, as long as health care facilities weren't too far away.
Cash made the biggest difference for families who live roughly 30 minutes or less away from a health care facility staffed with physicians. When the distance is greater, the benefits of cash for infants start to wane, though they do not disappear entirely.
The researchers saw 45% more hospital deliveries among pregnant people who received cash than among those who didn't. It's often more expensive to deliver at a hospital than at a smaller clinic, says Laker-Oketta. "We've given people the means to access the care that they need and not to make some of these really difficult choices between getting care and feeding a family."
The extra cash also helped put more food on the table. Children were about 44% less likely to go to bed hungry in households that received cash, the study found. Women who got cash while pregnant also worked about half as much — roughly 21 fewer hours per week — in their first trimester and the months after delivery than women who didn't get cash. Work in these rural areas can often be physically taxing, says Laker-Oketta.
"That's great for the mother's health, but also gives time for her baby to develop well," she says. "She's also available after the baby is born to take the child to any early health visits."
A 'very important’ data point
Altogether, the results impressed Aaron Richterman, a physician who studies poverty reduction at the University of Pennsylvania and wasn't involved in the study.
"It's one data point, but it's a very important data point. We can be very certain that in this case, the cash caused these benefits in mortality that we're seeing," he says. In an environment of shrinking foreign aid, he says cash could offer a simple way of reducing infant mortality.
Just how big a difference cash could make may depend, in part, on how readily people in other locations can use the extra money to get health care.
"I think this paper underscores the point that it's really adequate access to health care that's making all the difference," said Henderson, the Drake economist and author of the book Poor Relief: Why Giving People Money Is Not The Answer To Global Poverty. "It just so happens that in this particular place, people needed cash to access health care."
That's likely the case in many places across sub-Saharan Africa after years of investment in bolstering health care systems, says Laker-Oketta, but not all.
"The answer is not we give cash alone, or we just focus on improving the health care system," she says. "What's obvious in this study is that you need both to be working together."
https://www.npr.org/sections/goats-and-soda/2025/08/18/g-s1-83197/infants-health-cash-aid-kenya
*
YOU DON’T HAVE TO BE PHYSICALLY PERFECT TO BE BEAUTIFUL
Lillie Langtry and Sarah Bernhardt
New research reveals that physical attractiveness is more about personal compatibility than meeting universal standards
Although it’s often dismissed as superficial, the question of whether one is beautiful is undoubtedly important. Being seen as beautiful can impact a person’s life prospects. Apart from making us coveted sexual partners, it has a so-called halo effect, whereby those considered attractive are also perceived positively on other traits such as kindness and competence.
We start hearing about this early in childhood as we encounter beauty in the tales of princesses and villains showing us what is good and bad. As parents adorn us with hairdos, clothes and accessories, we learn that beauty matters for how others perceive and treat us, and we begin caring about the impressions we make. By adulthood, the concept of beauty has become so ingrained that we are an easy target for the multi-billion-dollar industry that dictates beauty standards and that promises us a successful and fulfilled life – if only we purchase the right products.
To many, this might paint rather an unfair and demoralizing picture. Yet, the good news is that recent findings have drastically revised our understanding of beauty, revealing that it is far less about meeting certain aesthetic norms than previously assumed.
Scientific interest in beauty dates back to the 19th century when the English polymath Sir Francis Galton used the newly developed photographic technique to average human faces. He noted that an average face combined from many others looked more beautiful than individual faces. This led him to speculate that those people seen as most attractive are of a generic type with few irregularities. Famous beauties of the day, such as Lillie Langtry – a friend of Oscar Wilde and a suspected mistress of the Prince of Wales – would have been considered beautiful, according to Galton’s theory, not because they were physically exceptional, but because their features more closely resembled the general population than was true for most other individuals. Moreover, he saw beauty as a marker of genetic quality, signaling one’s value as a sexual partner.
Later research using modern image-editing software replicated Galton’s main findings: the majority of observers really do prefer the average of many faces over the individual faces themselves. Other digital manipulations of faces have shown that, by and large, people prefer symmetrical over non-symmetrical faces, and faces that emphasize the typical sexual characteristics of women and men. For example, women tend to have higher cheekbones than men do and, if their faces exaggerate this trait, they stand out as more beautiful. Likewise, men tend to have a more angular and dominant jaw when compared with women. Hence, male faces emphasizing this trait are typically perceived as more masculine and more attractive.
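The digital descendant of Galton’s composite photography is simply pixel-wise averaging of aligned face images. A bare-bones sketch, assuming the images are already grayscale, identically sized, and landmark-aligned (the alignment step, which real studies rely on, is omitted here):

```python
import numpy as np

def composite(faces):
    """Return the pixel-wise average of aligned grayscale face images.

    `faces` is a sequence of 2-D arrays of identical shape. Real
    pipelines first warp each face so eyes and mouth line up.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in faces])
    return stack.mean(axis=0)

# usage: avg_face = composite([img_a, img_b, img_c])
```

Averaging smooths away each face’s irregularities, which is exactly why, on Galton’s account, the composite tends to look more typical, and hence more attractive, than its ingredients.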
Taken together, these early scientific efforts supported Galton’s idea that there are natural beauty standards and that people who meet them are seen as more attractive. Yet, more recent studies by my colleagues and me have begun to challenge such a one-sided perspective. An important limitation of the earlier research was that it always involved averaging observers’ judgments of beauty, essentially removing any individual variation in beauty preferences.
Novel statistical tools have allowed us to overcome this limitation, though applying them makes data analyses more complex and less intuitive.
Our approach entails using a form of a regression called mixed-effect modelling, which can estimate different sources of variance in attractiveness ratings, including the preferences unique to each observer and the influence of the faces themselves. If variability among individual observers explains more of the data, one can conclude that individual tastes outweigh beauty standards.
On the other hand, if variability among the judged faces explains more of the data, one can conclude that beauty standards outweigh individual tastes. To date, not many studies have adopted mixed-effect modeling techniques, but those that have show that individual tastes contribute significantly to judgments of beauty. In other words, universal beauty standards are not as important as previously claimed.
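A toy simulation conveys what this variance partitioning does; the numbers are invented, and published analyses use proper mixed-effects software rather than this by-hand decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
n_raters, n_faces = 100, 100

# Each rating = the face's shared appeal (a "beauty standard")
# + the rater's private, face-specific taste.
face_appeal = rng.normal(0, 1.0, n_faces)
private_taste = rng.normal(0, 1.0, (n_raters, n_faces))
ratings = face_appeal + private_taste

# The variance of per-face mean ratings isolates the shared standard;
# what remains within each face's column reflects individual taste.
shared = ratings.mean(axis=0).var()
taste = (ratings - ratings.mean(axis=0)).var()
print(f"shared-standard variance: {shared:.2f}")
print(f"private-taste variance:   {taste:.2f}")
```

With the two sources given equal weight here, about half the rating variance tracks the face itself and half tracks the rater; a split of that kind is what leads researchers to conclude that individual tastes rival universal standards.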
Let’s say you have a personal preference for brown eyes and a crooked smile. When judging another’s beauty, at first the appeal of their face to you will be affected equally by natural standards (such as how average or symmetrical it is) and by your own idiosyncratic preferences. But other researchers have found that, as time goes by, the balance swings in favor of individual tastes. Across minutes, hours or days, the allure of another’s brown eyes and that crooked smile will tend to increase.
This means that the beauty standards uncovered by early research and driving the beauty industry matter. Yet, they don’t matter as much as we initially thought, as personal likes and dislikes can be just as important. Indeed, one may speculate that in contributing to our sense of beauty, standards and individual tastes serve different functions. As proposed by Galton, common preferences for opposite-sex individuals may be useful in enhancing reproductive success. Individuals whose features are approaching a social group’s prototype promise to endow offspring with traits that have been tried and tested and that are thus typical, regular and fit for survival.
By contrast, individual tastes may lead one to interact with those others who maximize compatibility or ‘teamability’. Given research suggesting that we can glean aspects of someone’s personality from their facial appearance, such as how sociable, anxious or trustworthy they are, perhaps we are attracted to those with whom we are likely to get along or who might complement our own strengths and weaknesses.
Our research has also shown that attractiveness is much more than a pretty face. This isn’t so surprising when you consider that, outside of the research laboratory, we don’t just look at each other’s faces; we see, hear and smell the whole person, from the tone of their voice to the way they move. What may surprise you is that the attractiveness of these different aspects of a person tends to correlate. When we asked our participants to rate the attractiveness of isolated voices, body motion or scents, we found they produced similar judgments to when we asked them to rate the same target’s facial photograph. That man or woman with the brown eyes and crooked smile that you find so appealing – it’s likely you’ll also be drawn to the sway of their gait and the huskiness in their voice.
What’s more, just as for faces, we found that the attractiveness ratings for these different information channels were shaped by both beauty standards and individual tastes. A logical conclusion to draw is that what makes a person beautiful to someone else must come from ‘within’ and have a shared effect on that person’s various surface characteristics. How exactly this unfolds and what aspects of a person’s biology translate into observable nonverbal traits relevant for attraction awaits further research.
Before we uncover these hidden mechanisms, our findings already have significant implications for how you should think about your own beauty. They highlight that your attractiveness goes beyond universal aesthetic norms. It’s as much, or more, about whether you are compatible with other individuals you encounter in life.
So, are you beautiful? The science of attraction teaches us the answer to this question is likely ‘yes’. Although most of us may not meet the heralded standards of the beauty industry, we are attractive to at least a few others we encounter because our personal characteristics and values align with theirs. Moreover, chasing beauty by improving our looks can take us only so far, because beauty – rather than being strictly visual – is perceived in every sense, and depends on processes that work from the inside out. Thus, beauty is not a verdict but a dialogue with people who matter – one where our presence, in all its multisensory richness, writes its own answer.
*
“ONE AND DONE”: THE TREND TO HAVE ONLY ONE CHILD
Many parents who initially thought they wanted multiple children have just one.
Rates of less than two births per woman are becoming the global norm; the U.S. rate is at its lowest.
The one-child family is the fastest-growing family unit.
We all imagine the family we want. For some, it’s replicating the family of a friend or our own. It might include many siblings, one sibling, or no siblings at all.
Historically, what has been considered the traditional family—a boy for you and a girl for me—has dominated most people’s thinking. Although many still say they want two children, the reality is that’s not what’s happening. Rather, the one-child family is the fastest-growing family unit in the U.S. and throughout developed countries.
Those of childbearing age demonstrate no need to fit the bygone family formula—two parents, two kids. Nonetheless, many feel a nagging pressure to persist in this approach to family. That highlights the importance of recognizing status quo expectations for what they are, instead of giving in to them.
Like many women today, I was older when I had a child. Within a month of my child's birth, people were quizzing me: When are you having another? How can you do that to your child? I started to wonder what could be so problematic about having one child and embarked on a decades-long investigation of only children.
Being “One and Done”
Today more than ever, the single-child stereotypes don’t hold up to the science, and fortunately they are fading fast: Only children are not particularly lonely or selfish or bossy as children or adults. In fact, the benefits of being an only child and being the parent of one are substantial.
Here’s some of what I learned when gathering stories and research for my new book, Just One: The New Science, Secrets & Joy of Parenting an Only Child. The thoughts and comments are emblematic of the increasing acceptance of “one and done.” The changing attitudes go a long way in explaining the significant drop in birthrates now occurring.
The U.S. birthrate is now the lowest it has ever been, at 1.6 children per woman, according to Statista—and we’re not alone. “Fertility levels of less than two births per woman are becoming the global norm,” the United Nations notes in its “World Fertility 2024” report released earlier this year. “In over half of all countries and areas (55 percent), with more than two-thirds of the global population, the fertility level is below 2.1 births per woman.”
Juliet was 43 when she gave birth to her son, and it was the expense of infertility treatments that led her to forgo having more children. Such costs are frequently the deciding factor for parents who choose to have only one child. “When I was younger, I thought two was my number. As I got older, I worried about my fertility,” she says. “To have a baby took two expensive rounds of IVF, and, of course, they were not covered by insurance. We felt lucky to have a viable embryo and then fortunate to have a healthy child. We agreed to call it quits. We decided not to tempt the fates anymore.”
Similarly, Ingrid, who also started her family at a later age, took the pressure off herself when she accepted that she didn’t have the fortitude to face infertility drugs or the possible sadness of another miscarriage.
You could simply conclude that a second child is not right for you, that one more tiny human to raise may unravel your work-family balance or the intimacy you have with your partner. Or you may determine that a second child is not financially feasible or that you want to prioritize your career.
Acknowledging your desires and limitations is helpful. Self-awareness can help you decide. “I know myself; I’m a lazy, disorganized person who could not manage a larger family,” Francine confesses. “I know that about me.”
Well-documented research shows that mothers’ happiness and mental health drop as more children arrive. An Australian study of more than 20,000 families, led by Leah Ruppanner, who teaches sociology at the University of Melbourne, reviewed data collected over a period of 16 years. The subjects entered the study when their children were 1 year old. The researchers found that having a second child affects parents’ mental health.
“Prior to childbirth, mothers and fathers report similar levels of time pressure. Once the first child is born, time pressure increases for both parents,” Ruppanner concluded. “Yet this effect is substantially larger for mothers than for fathers. Second children double parents’ time pressure, further widening the gap between mothers and fathers.” Time pressures and the stress they created didn’t diminish as children got older. Ruppanner’s findings held when children reached adolescence, a time when they tend to be more difficult and demanding.
Park the Guilty Feelings
No matter what the research reports and whether by choice or circumstance, many feel guilty or conflicted when they decide to stop after having one child. Then they move on. They realize that one child is just right for their family, irrespective of their preconceived notions.
“I simply didn’t have a ‘valid’ reason for having only one child, so everyone would think I was selfish," one woman said. "Recognizing my fear of how I was perceived by others was a pivotal moment in realizing what I truly wanted, rather than what society told me I should want.
“I ultimately knew that my one child was perfect for me, for my family, and she was enough. Being the parent of one child, I was enough.”
https://www.psychologytoday.com/us/blog/singletons/202508/is-an-only-child-enough
*
‘HOMO BIGHEAD’: NEWFOUND HUMAN SPECIES ROAMED CHINA’S WOODLANDS WITH EXTRA-LARGE HEADS
Early humans of Homo juluensis had a large head shape, with measurements notably larger than those of Neanderthals and Homo sapiens.
A Denisovan in the jungle
Scientists have announced the discovery of a new human species, Homo juluensis, following extensive research published in Nature Communications. Professor Christopher J. Bae from the University of Hawaii and Xiujie Wu from the Chinese Academy of Sciences led the study, which sheds light on the diversity of ancient human populations in East Asia.
Homo juluensis lived approximately 300,000 years ago in eastern Asia, specifically roaming the woodlands of northeastern China. The fossils designated as Homo juluensis are fragmented and include several pieces of skull, jaw, and some teeth, as reported by Folha de S.Paulo. The remains of at least 16 individuals have been found, exhibiting unique characteristics such as larger skulls and teeth than Neanderthals and Homo sapiens, according to El Tiempo.
Bild reported that the early humans of Homo juluensis had a large head shape, with measurements notably larger than those of Neanderthals and Homo sapiens. However, scientists emphasize that head size does not necessarily indicate superiority of intelligence, as noted by Euronews Turkish.
The brain volume of Homo juluensis individuals could be quite large, in some cases reaching 1,700-1,800 cubic centimeters, while the average brain volume of modern humans is about 1,200 cubic centimeters, as reported by Correio Braziliense. Despite their larger skulls, it is questionable whether Homo juluensis were more intelligent than modern humans; Professor Christopher J. Bae cautioned that the size disparity does not necessarily indicate greater intelligence, according to Bild.
Researchers were particularly intrigued by the size of the teeth of Homo juluensis. El Tiempo reported that the teeth significantly exceed in size those of Neanderthals and Homo sapiens, indicating unique adaptations. This led the team to compare the dental characteristics of Homo juluensis with those of the Denisovans, a mysterious group of ancient humans known primarily through DNA evidence and a few physical remains.
"The molars from Xujiayao of our type specimen are also quite large," Christopher Bae commented. "One of the things that always stood out about the molars from Denisova was that they were quite large," he added. The proposed relationship between Homo juluensis and Denisovans is based mainly on similarities in dental characteristics, particularly molar size and bite surfaces.
However, more research is needed to confirm the connection between Homo juluensis and the Denisovans. BioBioChile reported that the relationship based on similarities between jaw and tooth fossils needs to be tested with more research.
"The East Asian record is making us recognize how complex human evolution is in broader terms and really forces us to revise and rethink our interpretations of various evolutionary models to better match the growing fossil record," Professor Bae stated.
Homo juluensis were capable of remarkable things. They manufactured stone tools, indicating a high level of adaptation and complex social connections, as reported by Proceso. Bild noted that Homo juluensis likely processed animal hides for clothing, possibly for protection against the cold, and survived by hunting animals. They hunted wild horses in small groups, using all parts of the animals for sustenance, including meat, marrow, bones, and skins.
"They probably hunted in groups—surrounded and attacked things like horses," Christopher Bae said, referring to Homo juluensis.Professor Bae stated. "Life in northern China is not exactly easy; especially in winter, it gets very cold. They processed the hides of hunted animals with stone tools," he added.
The study suggests that Homo juluensis organized into independent small groups and communities. El Tiempo reported that researchers estimate they formed small hunting communities, which may have contributed to their vulnerability due to small group living and population size.
The decline of the Homo juluensis population could be attributed to drastic climatic changes of the Late Quaternary, an era marked by repeated glacial periods. Primera Hora noted that this period was characterized by major climatic changes, including a glacial period that brought a colder and drier climate, contributing to the extinction of Homo juluensis.
"They genetically subjugated indigenous populations like the Neanderthals and the Juluensis," Bae said. This suggests that Homo juluensis began to disappear as they integrated with the first modern humans who came to China about 120,000 years ago, according to Euronews Turkish.
The discovery of Homo juluensis adds to the understanding of the morphological and genetic diversity of ancient humans in Asia during the Pleistocene. "The diversity among human fossils from East Asia is greater than we expected," the authors stated, according to La Nación.
"This study clarifies a hominin fossil record that has tended to include anything that cannot easily be assigned to Homo erectus, Homo neanderthalensis, or Homo sapiens," Bae said.
https://www.jpost.com/science/science-around-the-world/article-831441
(my thanks to Charles)
*
ARE WE ON OUR WAY TO THE SIXTH MAJOR EXTINCTION?
Churning quantities of carbon dioxide into the atmosphere at the rate we are going could lead the planet to another Great Dying
Daniel Rothman works on the top floor of the building that houses the Massachusetts Institute of Technology (MIT) Department of Earth, Atmospheric and Planetary Sciences, a big concrete domino that overlooks the Charles River in Cambridge, Massachusetts. Rothman is a mathematician interested in the behavior of complex systems, and in the Earth he has found a worthy subject.
Specifically, Rothman studies the behavior of the planet’s carbon cycle deep in the Earth’s past, especially in those rare times it was pushed over a threshold and spun out of control, regaining its equilibrium only after hundreds of thousands of years. Seeing as it’s all carbon-based life here on Earth, these extreme disruptions to the carbon cycle express themselves as, and are better known as, “mass extinctions.”
Worryingly, in the past few decades geologists have discovered that many, if not most, of the mass extinctions of Earth history – including the very worst ever by far – were caused not by asteroids as they had expected, but by continent-spanning volcanic eruptions that injected catastrophic amounts of CO2 into the air and oceans.
Put enough CO2 into the system all at once, and push the life-sustaining carbon cycle far enough out of equilibrium, and it might escape into a sort of planetary failure mode, where processes intrinsic to the Earth itself take over, acting as positive feedback to release dramatically more carbon into the system. This subsequent release of carbon would send the planet off on a devastating 100-millennia excursion before regaining its composure. And it wouldn’t matter if CO2 were higher or lower than it is today, or whether the Earth was warmer or cooler as a result. It’s the rate of change in CO2 that gets you to Armageddon.
This is because the carbon cycle is happy to accommodate the steady stream of CO2 that issues from volcanoes over millions of years, as it moves between the air and oceans, gets recycled by the biosphere, and ultimately turns back into geology. In fact, this is the carbon cycle. But short-circuit this planetary process by overloading it with a truly huge slug of CO2 in a geologically brief timespan, beyond what the Earth can accommodate, and it may be possible to set off a runaway response that proves far more devastating than whatever catastrophe set off the whole episode in the first place.
There could be a threshold that separates your run-of-the-mill warming episodes in Earth history – episodes that life nevertheless absorbs with good humor – from those that spiral uncontrollably toward mass extinction.
While it has been more than 60m years since the planet surpassed such a threshold, by Rothman’s calculation we are about to set the planet on just such an ancient and ominous trajectory, one that may take millennia to eventually arrive at the destination of mass extinction, but that may be all but inevitable once we have pushed off from shore.
It turns out that there are only a few known ways, demonstrated in the entire geologic history of the Earth, to liberate gigatons of carbon from the planet’s crust into the atmosphere. There are your once-every-50m-years-or-so spasms of large igneous province volcanism, on the one hand, and industrial capitalism, which, as far as we know, has only happened once, on the other.
Mass extinctions aren’t just very bad things. They are not civilization-halting pandemics, like Covid-19, that kill far less than 1% of a single species of primate. Mass extinctions are not what happen when the world loses a quarter of its vegetation and a third of North America is sterilized, as happened only 20,000 years ago when mile-thick ice sheets plowed over Canada. They are not Yellowstone super eruptions, three of which have detonated in a little over the past 2m years – each of which would have devastated modern agriculture and industrial civilization, but none of which had any effect on global biodiversity. These are part of the bargain of living on Earth. Life wouldn’t have made it this far if it were vulnerable to the sorts of routine indigestion that are part of the workaday operation of a volcanic planet.
But while ours is a sturdy planet, resilient to all manner of unthinkable insults to which it is regularly subjected, once every 50-100m years, something truly very, very bad happens. These are the major mass extinctions when conditions on Earth’s surface conspire to become so vile everywhere that they exceed the adaptive capacity of almost all complex life.
A burning area of Amazon rainforest reserve, south of Novo Progresso in Para state, Brazil.
Five such times in the history of animal life this devastation has reached (and in one case far exceeded) the somewhat arbitrary cutoff of wiping out 75% of species on Earth, and so garnered the status of “major mass extinction”. These are known in the paleontology community as the big five (though dozens of other minor mass extinctions of varying severity appear in the fossil record as well). The most recent of the big five struck 66m years ago, a global catastrophe sufficient to end the age of gigantic dinosaurs.
It left behind a 110-mile crater, one discovered in 1978 under Mexico’s Yucatán Peninsula by geophysicists working for the Mexican state oil company Pemex. The size and shape of the crater implied that a six-mile-wide asteroid instantaneously put a 20-mile-deep hole in the ground, followed, three minutes later, by an (extremely temporary) 10-mile-high mountain range of exploding molten granite – 76% of animal species were taken down in the maelstrom.
By comparison, the devastation wreaked by humans on the rest of the living world is relatively mild, perhaps clocking in at less than 10%.
Well, at least for the time being. According to an influential 2011 Nature study by palaeobiologist Anthony Barnosky, if we keep it up at our current rate of extinctions, we could jump from our (still horrifying) ranks of a minor mass extinction into the sixth major mass extinction anywhere from three centuries to 11,330 years from now, indistinguishable to future geologists from an asteroid strike. Even worse, there could lurk tipping points along the way, in which the world’s remaining species fall away almost all at once, like the nodes of a power grid failing in concert during a network collapse.
Given how catastrophic the impact of humans on the biosphere has been already, it’s chilling to think that the crescendo of our mass extinction might still lie in front of us.
*
In our planet’s history, one stretch of time stands as uniquely instructive – uniquely hapless, volatile and deadly – when it comes to CO2 overdoses. Three hundred million years ago, the planet repeatedly lost control of its carbon cycle and suffered 90m years of mass extinctions, including two of the biggest global catastrophes of all time – both CO2-driven nightmares. In one case, it nearly died. It was felled, in the words of the palaeontologist Paul Wignall, by “a climate of unparalleled malevolence”. At the very end of the Permian period (252m years ago), enough lava erupted out of Siberia and intruded into the crust that it could have covered the lower 48 US states a kilometer deep.
The rocks left behind by these ancient lava flows are known as the Siberian Traps. Today, the Traps produce spectacular river gorges and plateaux of black rock in the middle of Russia’s boreal nowhere. The eruptions that produced them, and that once covered Siberia in 2m square miles of steaming basalt, are in a rare class of behemoths called Large Igneous Provinces (Lips).
Lips are by far the most dangerous thing in the Earth’s history, with a track record far more catastrophic than asteroids. These once-an-epoch, planet-killing volcanoes are of a different species entirely than your garden-variety Tambora or Mount Rainier or Krakatau, or even Yellowstone. Imagine if Hawaii was created not over tens of millions of years and scattered across the Pacific, but in brief pulses in less than 1 million years, and all in one area (and sometimes emerging through the centers of continents).
Lips are the Earth’s way of rudely reminding us that our thin rocky surface, and the gossamer glaze of green goo that coats it, sits atop a roiling, utterly indifferent planetary drama. It’s one in which titanic currents of rock draw down entire ocean plates to the center of the world to be destroyed and reborn. When this process suffers a hiccup, Lips gush out of the crust like tectonic indigestion, leaving gigantic swaths of the Earth buried in volcanic rock. Depending on the pace and size of these eruptions, if they’re big enough and fast enough, they can destroy the world.
At the end of the Permian, in the greatest mass extinction of all time, these eruptions would have featured terrifying explosions, no doubt inducing brief volcanic winters and acid rain. There was also widespread mercury poisoning, and toxic fluorine and chlorine gas, which would have been familiar to suffocating soldiers in the First World War trenches. Most importantly – and most unfortunately, for life – billowing out of the Earth in the biggest catastrophe in history was a planet-deranging amount of carbon dioxide.
The Putorana plateau, formed about 250m years ago, in Krasnoyarskiy Krai, northern Russia.
Curiously, as the Siberian lava has been dated ever more precisely, it turns out that it wasn’t until 300,000 years into the eruptions – and after two-thirds of this lava had already erupted, flooding the northern reaches of Pangaea in steaming rock miles thick – that this worst mass extinction of all time actually began. This is strange. These volcanoes would have been pumping out all the usual nightmare stuff this entire time, putting industrial polluters to shame – and doing so for hundreds of millennia before the mass extinction began.
There would have been uncountable, unthinkably violent eruptions, and noxious storms of acid rain. But the biosphere is tough. And as bad as it was, turning a third of Russia into a volcanic hellscape, it doesn’t explain why, after all those countless centuries of misery, life suddenly winked out en masse, even at the bottom of the ocean, on the other side of the planet.
What was the mechanism for the mass extinction? “You can rule the lavas out,” says Seth Burgess, a geologist at the US Geological Survey. But something about these Siberian volcanoes must have dramatically changed after 300,000 years, when the world quickly disintegrated. So what was it?
The planet started burning fossil fuels.
The result was a flux of carbon into the system so massive that it overwhelmed the planet’s ability to regulate itself and pushed the world out of equilibrium.
All on their own, volcanoes emit lots of CO2: as much as 40% of the gas from a venting volcano can be carbon dioxide. But after Siberia had been smouldering at the surface for countless generations, something far more menacing began to cook below. Colossal 1,000ft-thick sheets of magma, stymied in their ascent to the surface, instead started spreading sideways into the rock far underground, like incandescent rhizomes, baking through the underworld. This is when everything went to hell.
These massive magma roots were burning through an old layer cake of Russian rock eight miles thick. The quarter-billion-year pile of strata had accumulated in the vast Tunguska basin: the remnants of bygone salt flats and sandstones, but more catastrophically, carbon-rich limestone and natural gas deposits from ancient seas, and coals from ages past. The magma cooked through all these fossil fuels and the carbon-rich rock underground on contact, and detonated spectacular gas explosions that shattered the rock far above, erupting at the surface as half-mile craters that spewed carbon dioxide and methane into the air by the gigaton.
After hundreds of thousands of years of familiar surface eruptions, the volcanoes had suddenly started burning through the subterranean world on a massive scale and began acting like enormous coal-fired power plants, natural gas plants and cement factories. “The burning of coal,” one scientist writes of the end-Permian extinction, “would have represented an uncontrolled and catastrophic release of energy from Earth’s planetary fuel cell.” The Siberian Traps suddenly started to emit far too much CO2, and far too quickly for the surface world to accommodate it.
*
Here is a plausible sequence of events at the end of the Permian. First, and most simply: the excess CO2 trapped more energy from the sun on the surface of our planet – a simple physical process that was worked out by physicists more than 150 years ago. And so the world helplessly warmed – models and proxies both point to about 10C of warming over thousands of years – pushing animal and plant physiology alike to their limits. It’s also a simple physical fact about our world that for every degree it warms, the atmosphere can hold about 7% more water, so, as the temperature climbed and the water cycle accelerated, storms began to take on a menacing, drowning intensity. As the ocean warms as well, it holds less oxygen.
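The roughly-7%-per-degree figure is the Clausius–Clapeyron relation for saturation vapor pressure; plugging in standard textbook values (a back-of-the-envelope check, not a calculation from the article):

```latex
\frac{d \ln e_s}{dT} \;=\; \frac{L_v}{R_v T^2}
\;\approx\; \frac{2.5\times10^{6}\ \mathrm{J\,kg^{-1}}}
{\left(461\ \mathrm{J\,kg^{-1}\,K^{-1}}\right)\left(288\ \mathrm{K}\right)^{2}}
\;\approx\; 0.065\ \mathrm{K^{-1}},
```

that is, about 6-7% more water vapor per degree of warming near Earth's present surface temperature.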
Unfortunately, living in hot water is hard work, so the luckless animals in it required more oxygen to live, not less. Thus, as the ocean got hotter and more stagnant, the creatures in it began to fall away, and the seas began to empty. Making matters worse, the carbon dioxide in the air diffused into these gasping seas as carbonic acid (H2CO3). The entire global ocean became more acidic as a result, and the water was robbed of the chalky carbonate dissolved in it, which animals used to build their shells. In these souring seas, the creatures became brittle and sickly, or even failed to form shells in the first place.
As this sea life was decimated, the global marine food web began to teeter and collapse. Meanwhile, the ecosystem on land was being destroyed by wildfires (themselves spewing even more CO2 into the air) and lashed by violent storms. Terrestrial wreckage washed into the ocean, blasting the coastal seas with decaying vegetation and minerals weathered out of the land, such as phosphorus, that acted as plant food, fueling massive algae blooms offshore. The oceans, already wanting for oxygen from the heat, now began to suffocate in earnest as the algae blooms died and decomposed.
Tengiz, one of the world’s largest oilfields, on the northeast shore of the Caspian Sea, Kazakhstan.
As the CO2 continued to issue from the Siberian Traps in massive and unrelenting belches, the planet became hotter still, and the oceans didn’t have a chance in hell. CO2 was now pushing the planet outside the limits of complex life. And just as these lifeless, anoxic, hot seas began to spread, a specter from the Earth’s ancient past was renewed on this dying planet.
Unlike most life on Earth with which we’re familiar, primitive anaerobic bacteria, having evolved aeons ago on an all-but-breathless world, don’t need oxygen to burn their food. For some, sulphate will do the trick. And on this rotting, suffocating world, this microbial life became ominously ascendant, breathing out hydrogen sulphide (H2S) as exhaust.
Unfortunately, hydrogen sulphide is mercilessly toxic, instantly killing humans (and creatures like us), as it sometimes does today in manure pits, or around oil pads like those in Texas’s Permian basin. And so this dark cloud of primeval life spread insidiously through the deep and even into the shallows. The world was now very, very hot, very stormy, almost totally denuded of vegetation, with acidifying, anoxic oceans that belched unsparingly poisonous gas from an ancient microbial metabolism that killed anything that came near it.
On the other side of the planet from the eruptions, once-forested polar South Africa became so denuded of life that rivers that once happily curved and twisted – their banks anchored by living plant roots – now rushed straight over the scoured landscape in braided, sprawling arroyos. Unearthly hot and dry seasons razed the forests with fire, then alternated with apocalyptic superstorms that washed it all away. The animals that had stocked the now-vanished forests for millions of years vanished as well. In the rocks, fungal spores strangely appear in the fossil record all over the world, heralding the collapse of the biosphere. Even insects, whose sheer numbers usually cushion them against mass death, struggled to hold on.
While the heat devastated life at the poles, the Earth’s searing midsection had become plainly unearthly. As CO2 sent global temperatures soaring, the ocean in the tropics became as hot as “very hot soup”, perhaps sufficiently hot, even, to power outlandish 500mph “hypercanes” that would have laid waste to the coasts. In the continental interiors, the temperature would have leaped even further off the charts.
In the planet’s most miserable hour, much of its surface came to resemble less Earth as we know it than the feed from a lander probe on some hopeless and barren exoplanetary outpost. Earth, in its darkest hour, was losing its Earthiness. In fact, the postapocalyptic ocean was so vacant that carbonate reefs all over the world came to be built again in the recovery not by animals such as the archaic corals and lamp shells that were driven extinct, but by calcified mounds of bacterial slime.
Everywhere. Even a short hike from my apartment in Boulder, Colorado, brings me face-to-face with this stromatolite rock from the end of the world, left behind by foul microbial mats. In the Colorado Front Range, where Earth history has been lifted out of the ground, tilted sideways and ornamented with ponderosa pine, one encounters this hummocky red rock laid down, layer by layer, by microbes in a deathly sea 252m years ago. It is wedged between more prosaic sandstones from the Carboniferous before it, and the dinosaur-trampled beach sands of the Mesozoic after it, hogbacks of which loom like a backstop behind Denver – the geology of happier times. But the implications of this brief wedge of bacterial rock, and a global ocean momentarily dominated by mounds of calcifying slime, are truly frightening.
Before long, almost every living thing on the planet was dead. The interiors of the continents were silent except for hot, howling winds that swept over the wastes – a dry desolation that alternated with punishing, unearthly storms that smelled like death. The oceans, whose open seas once flashed iridescent with shoals of bobbing spirals and tentacles, and whose nearshore reefs were once dappled fire-engine red to ultraviolet by life, were now putrid, asphyxiating, empty and covered in slime. Every gear of the grand, intricately interlocking biogeochemical machinery of this planet became jammed, decoupled or spun hopelessly out of control.
Complex life, as a subset of this global geochemical churn, unraveled as well. All from adding too much CO2. If there is a geologic precedent for what industrial civilization has been up to in the past few centuries, it is something like the volcanoes of the end-Permian mass extinction.
Now let’s pull back from the brink. However similar to this era our modern experiment on the planet might first appear, it’s worth acknowledging, even stressing, that the end-Permian climate catastrophe was truly, surpassingly bad.
And on a scale unlikely ever to be matched by humans. Upper estimates for how much carbon dioxide the fossil-fuel-burning Siberian Traps erupted, ranging up to 120,000 gigatons, defy belief. Even lower estimates, of say 30,000 gigatons, constitute volumes of CO2 so completely ridiculous that matching it would require humans to not only burn all the fossil fuel reserves in the world, but then keep putting ever more carbon into the atmosphere for thousands of years. Perhaps by burning limestone for fun on an industrial scale for generations, even as the biosphere disintegrates. As it is, industrial civilization could theoretically generate about 18,000 gigatons of CO2 if the entire world pulled together on a nihilistic, multicentennial, international effort to burn all the accessible fossil fuels on Earth.
But while the sheer volume of CO2 generated by the Siberian Traps dwarfs our present and future output, that total was achieved over tens of millennia. What is alarming, and why it’s worth talking about the Siberian Traps in the same breath as industrial civilization, is that even in comparison with those ancient continent-spanning eruptions, what we’re doing now seems to be unique.
It turns out that the focused, highly technological effort to find, extract and burn as much of the world’s fossil-fuel reservoir as is economically feasible, as fast as possible, has been extremely prodigious at getting carbon out of the crust – even compared to the biggest Lips in Earth history. In fact, the best estimate is that we’re emitting carbon perhaps 10 times faster than even the mindless, undirected Siberian volcanoes that brought about the worst mass extinction ever.
This matters because it’s all about the rate. There’s almost no amount of carbon you can pump into the atmosphere that, given enough time, Earth couldn’t buffer itself against. Volcanic CO2 is supposed to enter the system. Without it, none of this works: the climate wouldn’t be habitable, life would run out of raw material, and oxygen would run out. But everything in moderation.
To maintain its homeostasis, the planet continuously scrubs CO2 from the atmosphere and oceans so that it doesn’t build up and cook the planet. But this process is very slow on a human timescale. It buries this CO2 in coal, oil and gas deposits, and, most importantly, in ocean sediments that turn to carbonate rock over millions of years. When more modest-sized eruptions inject a massive slug of CO2 into the atmosphere, threatening to overwhelm this process, the Earth has several emergency handbrakes.
The oceans absorb the excess carbon dioxide, becoming more acidic, but in their millennial overturn they bring these more acidic surface waters to the seafloor on the downdraft of the planet’s great ocean currents. There they dissolve the seafloor’s carbonate sediments – the massive carpeting of tiny seashells at the bottom of the ocean, laid down by life over millions of years – and buffer the seas in the exact same way that a Tums settles an upset, acidic stomach.
This is the first line of defense in the carbon cycle, and it works to restore ocean chemistry over thousands of years. Eventually, these forces work to restore the carbon cycle and coax the Earth back from the edge. On a world without humans or especially catastrophic Lips, these feedbacks usually suffice to rescue the planet. The excess CO2 is removed and transmuted to rock; the temperature eventually falls; and the pH of the ocean is restored over hundreds of millennia.
So it’s not just the amount of CO2 that enters the system that matters, it’s also the flux. Put a lot in over a very long time and the planet can manage. But put more than a lot in over a brief enough period of time and you can short-circuit the biosphere.
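A minimal box model makes the rate argument concrete. In this sketch the excess carbon decays back toward equilibrium on a single fixed timescale, which is a caricature of the real carbon cycle, and every parameter value is invented for illustration:

```python
def peak_excess(total_gtc, years, tau=10_000.0, dt=1.0):
    """Euler-integrate dC/dt = E(t) - C/tau and return peak excess carbon.

    C is excess atmospheric carbon (GtC); emissions of `total_gtc`
    are spread evenly over `years`; tau is a crude removal timescale.
    """
    steps = int(years / dt)
    rate = total_gtc / years          # constant emission flux, GtC/yr
    c, peak = 0.0, 0.0
    for t in range(steps + int(5 * tau / dt)):  # run well past the pulse
        emission = rate if t < steps else 0.0
        c += (emission - c / tau) * dt
        peak = max(peak, c)
    return peak

total = 5_000  # same cumulative emission in both scenarios
print(f"released over 200,000 yr: peak = {peak_excess(total, 200_000):,.0f} GtC")
print(f"released over 200 yr:     peak = {peak_excess(total, 200):,.0f} GtC")
```

Same total, wildly different outcomes: dribbled out over 200 millennia, the excess never climbs far above a couple of hundred gigatons, while the 200-year pulse arrives almost entirely before removal can act. That is the short-circuit the passage describes.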
Unfortunately, the rate at which humans are now injecting CO2 into the oceans and atmosphere today far surpasses the planet’s ability to keep pace. We are now at the initial stages of a system failure. If we keep at it for much longer, we might see what actual failure really means.
If you want to overwhelm the system in a shorter time frame and shove the carbon cycle dangerously out of equilibrium, you need a much more intense infusion of CO2 into the oceans and atmosphere – faster than biology or weathering can save you. The modern global industrial effort to find, retrieve and burn as much ancient carbon buried in the Earth’s crust as possible in a matter of mere centuries might be up to the task.
Adapted from The Story of CO2 Is the Story of Everything: A Planetary Experiment, published by Allen Lane
https://www.theguardian.com/environment/2025/aug/19/a-climate-of-unparalleled-malevolence-are-we-on-our-way-to-the-sixth-major-mass-extinction
*
WHAT CAUSES GOOD PEOPLE TO DO BAD THINGS
1. Past traumas and adverse life events
The past is a powerful dictator of how you act in the present — even if you aren’t aware of it.
Dr. Rod Mitchell, a psychologist from Calgary, Alberta, explains: “Good people might act harmfully when triggered by unresolved traumas. Like a shadow from the past, these traumas can momentarily eclipse their inherent human kindness.”
Trauma and adverse life events impact everyone differently, and for some people, the effects can lead them to make bad decisions.
For example, childhood trauma can result in insecure attachment in adulthood. Insecure attachment may lead to validation-seeking that develops into an online affair.
Or, having experienced starvation in the past might make you steal and stockpile food from local stores.
In both scenarios, the intention is not to harm others. The behaviors stem from unresolved negative experiences and feelings from the past.
2. Survival mode
Survival mode, when you feel you’re backed into a corner, is another reason why good people do bad things, according to Dr. David Tan, a child and forensic psychiatrist from Bay of Plenty, New Zealand.
An example would be if you lost your job and were facing major financial hardships, like the loss of your home. Feelings of desperation, overwhelming stress, and pressure to provide for your family might make you consider unethical options like fraud or theft.
3. The need to belong
Humanity is inherently social and has survived throughout history through group cooperation and collaboration. Wanting to feel a sense of belonging among your peers is natural.
“One reason why good people may sometimes do bad things is due to societal pressure or influence,” says Lindsey Tong, a licensed clinical social worker from Woodland Hills, California. “The innate desire to fit in and feel accepted can lead someone to act against their better judgment.”
She gives the example of engaging in harmful gossip about a co-worker even when you know it’s not right.
“This behavior doesn’t make them inherently bad, but highlights how external influences can sway even the most well-intentioned individuals,” she says.
4. Lack of self-awareness
In-the-moment recognition of your feelings and your position in the world around you is known as self-awareness. According to Tong, it’s often a factor in why good people sometimes do bad things.
“When someone isn’t in tune with their own emotions or values, they may unintentionally act in a way that goes against their beliefs,” she says. “This can happen when someone is under stress or feeling overwhelmed and doesn’t take the time to reflect on their actions.”
An example, says Tong, would be saying hurtful things during an argument because you’re too caught up in your own feelings of hurt and frustration. Rather than acknowledging those feelings in yourself, you end up projecting that pain onto someone else.
5. The greater good
“The end justifies the means” is a saying derived from the literary works of Italian philosopher Niccolo Machiavelli. It implies that a positive result merits any negative action necessary to achieve it.
This sense of control, says Mitchell, is another reason why good people do bad things.
“Sometimes, good people take actions they believe are for the ‘greater good’, not realizing the harmful consequences,” he says. “This misjudgment stems from an overestimation of their ability to control or predict outcomes.”
An example would be going to extreme lengths of civil disobedience, like damaging property, to draw attention to urgent societal issues, like climate change.
6. Misguided justice
When you feel you’ve been wronged, retaliation in an equally negative manner can feel justified. You didn’t start the interaction; you’re just responding in kind, right?
“We may do bad things when we’ve justified certain actions as being the right thing to do but for misguided and twisted motivations, like when someone decides to retaliate for some perceived slight or offense,” says Tan.
An example would be cheating on your significant other in retaliation for finding out they had been unfaithful.
7. Mental health disorders
Living with certain mental health disorders can affect how you view the world and how you interact with others. Conditions like narcissistic personality disorder (NPD), for example, characteristically feature low empathy, a diminished ability to relate to the feelings and thoughts of others.
“There are some folks with certain types of personality disorders that might increase their tendencies to think in these sorts of ways but it doesn’t necessarily make them ‘bad,’” says Tan.
Other conditions, like depressive disorders and anxiety disorders, can cause changes in your behavior that may be seen by others as negative. Sleeping through commitments and neglecting responsibilities in major depressive disorder (MDD), for example, is one way symptoms can come off as “bad” behaviors.
The perception of good vs. bad
When someone you’ve viewed as “good” does something considered bad, it can shake your perception of them to the core. Because you’ve only seen them in a positive light, knowing they’ve done something negative can make you wonder if you ever knew them at all.
But doing something bad doesn’t automatically make someone a bad person. It doesn’t always speak to some hidden nature or deliberate desire to be deceitful or hurtful. Often, it’s a sign that a person is experiencing inner turmoil.
“I don’t think anyone is inherently bad but clearly people can and do act badly,” explains Tan. “People are complicated and we are all capable of doing the wrong thing — sometimes for the right reasons. Of course, it’s equally true that sometimes we do the right things for the wrong reasons.”
Traits of human goodness
What traits indicate human goodness remains a topic of debate. “Good” and “bad” are subjective descriptions, meaning they’re based on individual perception.
Tong, Mitchell, and Tan all agree, however, that kindness, compassion, and empathy top the list. These traits, along with others like courage and self-agency, mold the long-standing patterns of behavior associated with being “good.”
“These individuals create positive relationships by being understanding and kind,” states Mitchell. “They don’t just avoid harming others but also work to support and uplift those around them. This approach to being ‘good’ is about actively contributing to the community’s health and happiness.”
Takeaway:
Long-standing patterns of behavior are how society defines a “good” or “bad” person, but all people are capable of positive and negative actions.
Sometimes people viewed as “good” do bad things. They make mistakes, act without thinking, and react in ways dictated by past experiences.
Empathy, kindness, and compassion are common traits seen among people under the banner of “good.” Understanding that good people sometimes do bad things can help you improve those traits in yourself.
https://psychcentral.com/health/reasons-why-good-people-do-bad-things#takeaway
*
RELIGIOUS FEELINGS ACTIVATE NEURAL REWARD CIRCUITS IN THE SAME WAY AS SEX AND DRUGS
Religious and spiritual experiences activate the brain reward circuits in much the same way as love, sex, gambling, drugs and music, report researchers at the University of Utah School of Medicine. The findings will be published Nov. 29 in the journal Social Neuroscience.
“We’re just beginning to understand how the brain participates in experiences that believers interpret as spiritual, divine or transcendent,” says senior author and neuroradiologist Jeff Anderson. “In the last few years, brain imaging technologies have matured in ways that are letting us approach questions that have been around for millennia.”
Specifically, the investigators set out to determine which brain networks are involved in representing spiritual feelings in one group, devout Mormons, by creating an environment that triggered participants to “feel the Spirit.” Identifying this feeling of peace and closeness with God in oneself and others is a critically important part of Mormons’ lives — they make decisions based on these feelings; treat them as confirmation of doctrinal principles; and view them as a primary means of communication with the divine.
During fMRI scans, 19 young-adult church members — seven females and 12 males — performed four tasks in response to content meant to evoke spiritual feelings. The hour-long exam included six minutes of rest; six minutes of audiovisual control (a video detailing their church’s membership statistics); eight minutes of quotations by Mormon and world religious leaders; eight minutes of reading familiar passages from the Book of Mormon; 12 minutes of audiovisual stimuli (church-produced video of family and Biblical scenes, and other religiously evocative content); and another eight minutes of quotations.
During the initial quotations portion of the exam, participants — each a former full-time missionary — were shown a series of quotes, each followed by the question “Are you feeling the spirit?” Participants responded with answers ranging from “not feeling” to “very strongly feeling.”
Researchers collected detailed assessments of the feelings of participants, who, almost universally, reported experiencing the kinds of feelings typical of an intense worship service. They described feelings of peace and physical sensations of warmth. Many were in tears by the end of the scan. In one experiment, participants pushed a button when they felt a peak spiritual feeling while watching church-produced stimuli.
“When our study participants were instructed to think about a savior, about being with their families for eternity, about their heavenly rewards, their brains and bodies physically responded,” says lead author Michael Ferguson, who carried out the study as a bioengineering graduate student at the University of Utah.
Based on fMRI scans, the researchers found that powerful spiritual feelings were reproducibly associated with activation in the nucleus accumbens, a critical brain region for processing reward. Peak activity occurred about 1-3 seconds before participants pushed the button and was replicated in each of the four tasks. As participants were experiencing peak feelings, their hearts beat faster and their breathing deepened.
In addition to the brain’s reward circuits, the researchers found that spiritual feelings were associated with the medial prefrontal cortex, which is a complex brain region that is activated by tasks involving valuation, judgment and moral reasoning. Spiritual feelings also activated brain regions associated with focused attention.
“Religious experience is perhaps the most influential part of how people make decisions that affect all of us, for good and for ill. Understanding what happens in the brain to contribute to those decisions is really important,” says Anderson, noting that we don’t yet know if believers of other religions would respond the same way. Work by others suggests that the brain responds quite differently to meditative and contemplative practices characteristic of some eastern religions, but so far little is known about the neuroscience of western spiritual practices.
The study is the first initiative of the Religious Brain Project, launched by a group of University of Utah researchers in 2014, which aims to understand how the brain operates in people with deep spiritual and religious beliefs.
https://neurosciencenews.com/neurotheology-mpfc-reward-5622/
*
CHRISTIANITY AND HELL
The resurrection offers a very strange and disturbing “hope” to Christians: that they will live happily in heaven while billions of their fellow human beings, including potentially their loved ones and friends, suffer in an infinitely cruel, purposeless hell, for guessing wrong about which religion to believe.
This bizarre belief sets Christianity apart from major religions that have no concept of an eternal hell, such as Judaism, Buddhism, Hinduism, the Baha'i Faith, and Sikhism.
It also bears noting that Christian Universalists do not believe in an eternal hell.
~ Michael Burch, Quora
*
A “GOLDEN OLDIE” ABOUT BEING AN ABOMINATION IN THE EYES OF THE LORD
On her radio show, Dr. Laura Schlessinger, an observant Orthodox Jew, said that homosexuality is an abomination according to Leviticus 18:22 and cannot be condoned under any circumstance.
The following response, an open letter to Dr. Laura penned by a US resident, was posted on the Internet. It's funny as well as informative:
Dear Dr. Laura:
Thank you for doing so much to educate people regarding God's Law. I have learned a great deal from your show, and try to share that knowledge with as many people as I can. When someone tries to defend the homosexual lifestyle, for example, I simply remind them that Leviticus 18:22 clearly states it to be an abomination ... End of debate.
I do need some advice from you, however, regarding some other elements of God's Laws and how to follow them. Leviticus 25:44 states that I may possess slaves, both male and female, provided they are purchased from neighboring nations.
A friend of mine claims that this applies to Mexicans, but not Canadians. Can you clarify? Why can't I own Canadians?
I would like to sell my daughter into slavery, as sanctioned in Exodus 21:7. In this day and age, what do you think would be a fair price for her?
I know that I am allowed no contact with a woman while she is in her period of menstrual uncleanliness (Lev. 15:19-24).
The problem is how do I tell? I have tried asking, but most women take offense.
When I burn a bull on the altar as a sacrifice, I know it creates a pleasing odor for the Lord (Lev. 1:9).
The problem is my neighbors. They claim the odor is not pleasing to them. Should I smite them?
I have a neighbor who insists on working on the Sabbath. Exodus 35:2 clearly states he should be put to death.
Am I morally obligated to kill him myself, or should I ask the police to do it?
A friend of mine feels that even though eating shellfish is an abomination, Lev. 11:10, it is a lesser abomination than homosexuality. I don't agree. Can you settle this? Are there 'degrees' of abomination?
Lev. 21:20 states that I may not approach the altar of God if I have a defect in my sight. I have to admit that I wear reading glasses. Does my vision have to be 20/20, or is there some wiggle-room here?
Most of my male friends get their hair trimmed, including the hair around their temples, even though this is expressly forbidden by Lev. 19:27. How should they die?
I know from Lev. 11:6-8 that touching the skin of a dead pig makes me unclean, but may I still play football if I wear gloves?
My uncle has a farm. He violates Lev. 19:19 by planting two different crops in the same field, as does his wife by wearing garments made of two different kinds of thread (cotton/polyester blend). He also tends to curse and blaspheme a lot. Is it really necessary that we go to all the trouble of getting the whole town together to stone them (Lev. 24:10-16)? Couldn't we just burn them to death at a private family affair, like we do with people who sleep with their in-laws? (Lev. 20:14)
I know you have studied these things extensively and thus enjoy considerable expertise in such matters, so I'm confident you can help.
Thank you again for reminding us that God's word is eternal and unchanging.
Your adoring fan,
James M. Kauffman, Ed.D., Professor Emeritus, Dept. of Curriculum, Instruction, and Special Education, University of Virginia
(It would be a damn shame if we couldn't own a Canadian)
*
WHAT MAKES SUPER-AGER BRAINS MORE RESISTANT TO AGING?
Recent research identified some unique brain features of ‘superagers’: people at least 80 years old whose cognitive performance resembles that of people decades younger.
The research also found that superagers tended to be more sociable than their peers.
Why some people retain better cognitive function than others as they age is an active area of scientific study.
A study recently published in Alzheimer’s and Dementia details the unique features of a group of superagers. These people meet certain word recall cognitive criteria in later life.
The research suggests that superagers are very sociable; it also identified unique brain characteristics of this group, such as higher levels of von Economo neurons, also known as “spindle neurons.”
These unique brain cells appear to be involved in emotional processing and social cognition.
What makes a person a ‘superager’?
This research looked at “the first 25 years of the Northwestern University SuperAging Program.” The program seeks to determine whether it is possible to avoid the decline in brain capacity that comes with age, and to identify the possible biological phenotype, or observable traits, related to this avoidance.
The term superaging was developed by the Northwestern Alzheimer’s Disease Research Center (ADRC).
Superagers are people 80 years old or older who achieve a certain score on a test called the Rey Auditory Verbal Learning Test. Superagers’ scores are similar to those of people between the ages of 56 and 66, and superagers were also at least average for their age in other areas of cognitive function.
Right now, there are 133 active participants in the Northwestern ADRC Clinical Core.
Drawing on brain donations, researchers have conducted 77 autopsies to examine the brain features of deceased participants.
Researchers did not pinpoint a lifestyle linked to superaging. Some participants followed a healthy lifestyle while others followed less healthy patterns.
Superagers also appeared to have similar medical problems to their neurotypical peers.
However, superagers were noted as being sociable, enjoying extracurricular activities, and expressing extraversion. They were also more likely to rate their relationships positively than their peers.
Using neuroimaging, researchers found that superagers did not display the cortical thinning (a loss of thickness in the brain’s outer layer) that nonsuperagers experienced.
While more research is needed to determine whether superagers start out with larger brains, researchers suggest that cortical thinning happens more slowly in superagers.
They also identified an area of the brain called the anterior cingulate that showed greater cortical thickness than in younger neurotypical participants. This area of the brain is involved in functions like emotion and social behavior.
The anterior cingulate gyrus also contained higher levels of nerve cells called von Economo neurons, even in comparison with younger individuals. Researchers think that superagers might have this higher nerve density from birth.
Researchers also looked for neurofibrillary tangles, a protein buildup in neurons that can be present in Alzheimer’s disease as well as in normal aging.
Overall, researchers found that superagers had less neurofibrillary pathology than their peers. For example, they observed fewer neurofibrillary tangles in superagers’ rhinal cortices, brain regions involved in memory.
Superagers’ brains may be more resistant to cognitive decline
Researchers concluded that “there are at least two pathways to the maintenance of youthful memory capacity in old brains.” They suggest that this type of brain could resist the start of neurofibrillary pathology and be resilient to the cognitive effects of neurofibrillary pathology.
Furthermore, they observed that another type of neuron was larger in superagers. This difference may help a specific brain pathway resist changes like neurofibrillary degeneration, or it could be a reactive change that confers resilience.
When looking at plasma biomarkers, researchers also found that superagers had lower levels of p-tau181, which they note is consistent with the lower levels of neurofibrillary degeneration.
The findings further support the idea that superagers have enhanced functionality, at multiple levels, in a component of the brain called the cortical cholinergic system. This system can be affected both in Alzheimer’s disease and in normal aging.
Finally, researchers observed differences in the microglia of superagers. Microglia are cells in the brain that help control the microenvironment of the central nervous system.
Superagers also had fewer activated microglia in the white matter; an accumulation of activated microglia is something that happens in physiological aging. Preliminary findings suggest that microglia in superagers may have distinct features, and the authors note the need for more research in this area.
In their publication, the authors also included a case study of one superager who was highly independent until she experienced a stroke near the end of her life.
On postmortem examination, researchers observed certain characteristics: the amygdala and hippocampus, for example, were similar to those of a younger person, and there was a “low density of neurofibrillary tangles and pretangles.”
Kaushik Govindaraju, DO, of the Medical Offices of Manhattan and a contributor to Labfinder, who was not involved in the study, noted the following about the research to Medical News Today: “We have thought that mental decline with aging is inevitable and even expected/anticipated. We marvel at elderly people who have good memories because for as long as humanity has existed, we have been told and have seen that this is not the biological norm. This research may push back against this in an unprecedented way.”
Study limitations and continued research
This research provides more information on a possible superaging phenotype, but has limitations. For one thing, it examined a fairly small number of participants, and recruitment methods could have impacted the study sample.
This particular paper also did not report certain information, such as the gender or ethnic breakdown of the group. The research is ongoing, and this paper covered the first 25 years of the program. Some reported data, like the biomarker data, was based on preliminary findings, so more research is needed.
Certain eligibility requirements, such as being able to attend in-person visits in Chicago, may also affect the research, and the methods of data collection, such as the use of surveys, are important to note.
Researchers also pointed out that current methods for staging neurofibrillary changes might need to be reevaluated, since they do not reflect the presence of undamaged neurons.
They cite one superager who had some neurofibrillary degeneration but also a higher number of normal neurons, something that might not be seen in neurotypical peers with the same amount of neurofibrillary degeneration.
More research is required to see what features are present from birth in superagers, as well as how the results may apply to the general population. More research into the distinct differences in superagers’ brains and why they are present may also be helpful.
What can we learn from superagers?
This research could lead to strategies that help “typical” agers. Alexandra Touroutoglou, MSc, PhD, an assistant professor of neurology at Harvard Medical School and director of Imaging Operations at the Frontotemporal Disorders Unit at Massachusetts General Hospital, who was not involved in the recent research, noted the following general benefits of studying people who age well: “Superagers are exciting because they show that age-related memory decline is not necessarily inevitable. So much of aging research is focused on looking at pathology and disorder, trying to work backwards to what went wrong. But there are things we can learn from those who age exceptionally well. Studying those people who age best could point the way to new treatments, either in terms of interventions or lifestyle changes, that could prolong cognitive health for all the rest of us who age in more typical ways.”
“This will be a game changer in avoiding senescence,” said Hurst. “I hope this article serves to help our society recognize and elevate the value many of those in our community can contribute in years long past retirement, and help medical professionals see our patients in their ability instead of their numerical age.”
https://www.medicalnewstoday.com/articles/what-makes-superager-brains-more-resistant-to-aging#What-can-we-learn-from-superagers
*
CENTENARIAN BLOOD TESTS GIVE HINTS OF THE SECRETS TO LONGEVITY
Centenarians tend to have lower levels of glucose, creatinine and uric acid from their sixties onwards.
Centenarians, once considered rare, have become commonplace. Indeed, they are the fastest-growing demographic group of the world’s population, with numbers roughly doubling every ten years since the 1970s.
How long humans can live, and what determines a long and healthy life, have been of interest for as long as we know. Plato and Aristotle discussed and wrote about the aging process over 2,300 years ago.
The pursuit of understanding the secrets behind exceptional longevity isn’t easy, however. It involves unravelling the complex interplay of genetic predisposition and lifestyle factors and how they interact throughout a person’s life. Now our study, published in GeroScience, has unveiled some common biomarkers, including levels of cholesterol and glucose, in people who live past 90.
Nonagenarians and centenarians have long been of intense interest to scientists as they may help us understand how to live longer, and perhaps also how to age in better health. So far, studies of centenarians have often been small scale and focused on a selected group, for example, excluding centenarians who live in care homes.
Huge dataset
Ours is, to date, the largest study comparing biomarker profiles measured throughout life in exceptionally long-lived people and in their shorter-lived peers.
We compared the biomarker profiles of people who went on to live past the age of 100, and their shorter-lived peers, and investigated the link between the profiles and the chance of becoming a centenarian.
Our research included data from 44,000 Swedes who underwent health assessments at ages 64-99. They were a sample of the so-called AMORIS cohort. These participants were then followed through Swedish register data for up to 35 years. Of these people, 1,224, or 2.7%, lived to be 100 years old. The vast majority (85%) of the centenarians were female.
Twelve blood-based biomarkers related to inflammation, metabolism, liver and kidney function, as well as potential malnutrition and anemia, were included. All of these have been associated with aging or mortality in previous studies.
The biomarker related to inflammation was uric acid – a waste product in the body caused by the digestion of certain foods. We also looked at markers linked to metabolic status and function including total cholesterol and glucose, and ones related to liver function, such as alanine aminotransferase (Alat), aspartate aminotransferase (Asat), albumin, gamma-glutamyl transferase (GGT), alkaline phosphatase (Alp) and lactate dehydrogenase (LD).
We also looked at creatinine, which is linked to kidney function, and iron and total iron-binding capacity (TIBC), which is linked to anemia. Finally, we also investigated albumin, a biomarker associated with nutrition.
Findings
We found that, on the whole, those who made it to their hundredth birthday tended to have lower levels of glucose, creatinine and uric acid from their sixties onwards. Although the median values didn’t differ significantly between centenarians and non-centenarians for most biomarkers, centenarians seldom displayed extremely high or low values.
For example, very few of the centenarians had a glucose level above 6.5 mmol/L (about 117 mg/dL) earlier in life, or a creatinine level above 125 µmol/L.
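The parenthetical conversion is simple unit arithmetic: glucose has a molar mass of about 180 g/mol, so a value in mmol/L converts to mg/dL by multiplying by roughly 18. A minimal Python sketch (my own illustration, not part of the study):

    # Convert glucose from mmol/L (common in Europe) to mg/dL (common in the US).
    # mg/dL = mmol/L * (molar mass in mg/mmol) / (10 dL per liter)
    GLUCOSE_MOLAR_MASS = 180.16  # g/mol, equivalently mg/mmol

    def glucose_mmol_to_mgdl(mmol_per_l):
        return mmol_per_l * GLUCOSE_MOLAR_MASS / 10

    print(round(glucose_mmol_to_mgdl(6.5)))  # prints 117, the figure above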
For many of the biomarkers, both centenarians and non-centenarians had values outside of the range considered normal in clinical guidelines. This is probably because these guidelines are set based on a younger and healthier population.
When exploring which biomarkers were linked to the likelihood of reaching 100, we found that all but two (Alat and albumin) of the 12 biomarkers showed a connection to the likelihood of turning 100, even after accounting for age, sex and disease burden.
The people in the lowest of the five groups for levels of total cholesterol and iron had a lower chance of reaching 100 years compared to those with higher levels. Meanwhile, higher levels of glucose, creatinine, uric acid and the markers of liver function were associated with a decreased chance of becoming a centenarian.
In absolute terms, the differences were rather small for some of the biomarkers, while for others the differences were somewhat more substantial.
For uric acid, for instance, the absolute difference was 2.5 percentage points. This means that people in the group with the lowest uric acid had a 4% chance of turning 100 while in the group with the highest uric acid levels only 1.5% made it to age 100.
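To make the absolute-versus-relative distinction concrete, here is a minimal Python sketch using only the two figures reported above (the variable names are mine):

    # Chance of reaching 100, by uric acid group, as reported above
    p_lowest_uric_acid = 0.040    # 4% of the lowest-uric-acid group turned 100
    p_highest_uric_acid = 0.015   # 1.5% of the highest-uric-acid group did

    # Absolute difference, in percentage points
    print(round((p_lowest_uric_acid - p_highest_uric_acid) * 100, 1))  # 2.5

    # Relative difference: the lowest group was about 2.7 times as likely
    print(round(p_lowest_uric_acid / p_highest_uric_acid, 2))          # 2.67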
Even if the differences we discovered were overall rather small, they suggest a potential link between metabolic health, nutrition and exceptional longevity.
The study, however, does not allow any conclusions about which lifestyle factors or genes are responsible for the biomarker values. It is reasonable to think, though, that factors such as nutrition and alcohol intake play a role. Keeping track of your kidney and liver values, as well as glucose and uric acid, as you get older is probably not a bad idea.
That said, chance probably plays a role at some point in reaching an exceptional age. But the fact that differences in biomarkers could be observed a long time before death suggests that genes and lifestyle may also play a role.
https://getpocket.com/explore/item/centenarian-blood-tests-give-hints-of-the-secrets-to-longevity?utm_source=firefox-newtab-en-us
*
THE DUAL NATURE OF URIC ACID
Uric acid, the end product of purine metabolism in humans, is widely recognized for its strong antioxidant properties, particularly in the bloodstream. However, it's important to understand the nuances of its role and the potential dual nature of uric acid, acting as both an antioxidant and a pro-oxidant depending on the context.
Uric acid as an antioxidant
Uric acid is a potent scavenger of various reactive oxygen species (ROS), including singlet oxygen and hydroxyl radicals.
It plays a significant role in protecting cells from oxidative damage, particularly in the central nervous system (CNS), where it may help reduce the risk of neurodegenerative diseases like Parkinson's and Alzheimer's, according to the National Institutes of Health (NIH).
It also contributes to maintaining the integrity of the blood-brain barrier.
Uric acid's dual nature
While uric acid effectively scavenges ROS in the hydrophilic environment of the plasma, its ability to scavenge lipophilic radicals and break radical chain propagation within lipid membranes is limited.
Under certain conditions, such as high concentrations or in the presence of specific oxidants like peroxynitrite or metal ions, uric acid can exhibit pro-oxidant effects, leading to the formation of harmful radicals and potential cellular damage.
This pro-oxidant activity may be associated with the development of conditions like hypertension and metabolic syndrome.
In essence, uric acid can be seen as a strong antioxidant, especially in the context of plasma and the nervous system. However, its pro-oxidant potential under certain conditions highlights the importance of maintaining balanced uric acid levels for overall health and avoiding the risks associated with both excessively high and low levels.
Higher-than-normal uric acid can result from:
Not excreting enough uric acid from the body, sometimes because of dehydration but more often because of kidney disease.
Drinking alcohol, which increases the risk of gout and gout flares.
Eating a high-purine diet that includes a lot of red meat, shellfish, sweets, sugary sodas and high-fructose corn syrup. (Fructose from fruit can also contribute to high uric acid.)
Obesity.
Diabetes.
Certain medications, including some used for arthritis, such as cyclosporine (Neoral) and tacrolimus (Prograf).
Low uric acid can be due to:
Rare inherited disorders that decrease uric acid production.
Fanconi syndrome, which causes the filtering tubes in your kidneys to excrete too much uric acid and other substances.
Diabetes.
Anti-gout drugs such as allopurinol (Zyloprim).
Pregnancy.
Malnutrition.
A family history of hypouricemia.
Uric acid is produced when the body breaks down purines — natural substances found in every cell and in most foods. It’s mainly flushed out through the kidneys, but uric acid is much more than a waste product. It is a double-edged sword, increasing the risk of some health problems and helping prevent others.
High and Low Uric Acid: Risks and Benefits
Uric acid is usually considered high when it’s over 7 milligrams per deciliter (mg/dL) for men (and those who were male at birth) and over 6 mg/dL for women (and those who were female at birth). Low uric acid is defined as less than 2 mg/dL.
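Laid out explicitly, those cutoffs amount to a small decision rule. A minimal Python sketch of the ranges described above (the function name and labels are mine, not the article's):

    # Serum uric acid ranges in mg/dL, per the cutoffs described above
    def classify_uric_acid(level_mg_dl, male_at_birth):
        # High is over 7 mg/dL for men, over 6 mg/dL for women; low is under 2.
        high_cutoff = 7.0 if male_at_birth else 6.0
        if level_mg_dl < 2.0:
            return "low"     # hypouricemia
        if level_mg_dl > high_cutoff:
            return "high"    # hyperuricemia
        return "normal"

    print(classify_uric_acid(7.5, male_at_birth=True))    # high
    print(classify_uric_acid(6.5, male_at_birth=False))   # high
    print(classify_uric_acid(1.8, male_at_birth=True))    # low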
You’ve probably heard about high uric acid, or hyperuricemia, because it’s the biggest risk factor for gout — a particularly painful form of arthritis. It’s important to note that the vast majority of people with hyperuricemia never develop gout.
High uric acid is also linked to uric acid kidney stones and chronic kidney disease. In some studies, it’s associated with high blood pressure and heart failure as well as metabolic syndrome — a group of symptoms that increase your chances of diabetes, stroke and heart disease.
Low uric acid, or hypouricemia, gets less attention because it affects far fewer people — only about 0.5% of the population. Yet it’s associated with serious neurologic disorders, including Alzheimer’s disease, Parkinson’s disease and amyotrophic lateral sclerosis (ALS), reduced kidney function and a painful nerve condition called trigeminal neuralgia.
Higher uric acid is thought to help protect against these disorders. Low uric acid is also associated with kidney damage after vigorous exercise (called exercise-induced kidney injury) and uric acid kidney stones.
Most mammals have an enzyme that breaks down uric acid so it can be easily flushed out of the system. Only humans and certain apes lack this enzyme, making low — and especially high — uric acid more likely.
Safe Uric Acid Levels
If you’re taking anti-gout drugs — usually because you have several gout flares a year, joint damage or skin nodules called tophi — your doctor may try to keep your uric acid level below 6 mg/dL. For people with long-standing or aggressive disease, the target may be even lower. If you have high uric acid but no symptoms, treatment isn’t needed, though your doctor may want to keep an eye on it. Low uric acid that doesn’t cause symptoms usually isn’t a concern, either. But because low levels are associated with neurological problems, you may want to add more purine-rich foods to your diet, with a focus on healthier options like fish, fruit and full-fat dairy.
Purine-Rich Foods
Known to increase uric acid, many of these foods also cause inflammation, affect your heart health and may set the stage for diabetes.
Red meat, especially organ meats like liver and kidney
Alcohol, especially beer
Sugary drinks, candy and desserts
Saturated fats in red meat, butter, cream, ice cream and coconut oil
Sweetened or unsweetened fruit juice, except cherry juice
Some types of seafood, such as shellfish, anchovies and tuna, used to be off-limits for people with gout. Now the health benefits of moderate amounts of fish are thought to outweigh potential harm.
https://www.arthritis.org/diseases/more-about/high-low-uric-acid-symptoms-how-stay-in-safe-range and AI overview on Google
*
Current studies demonstrate that uric acid may exert neuroprotective actions in Alzheimer’s disease and Parkinson’s dementia, with hypouricemia representing a risk factor for a quicker disease progression and a possible marker of malnutrition. Conversely, high serum uric acid may negatively influence the disease course in vascular dementia. Further studies are needed to clarify the physio-pathological role of uric acid in different dementia types, and its clinical-prognostic significance.
https://pmc.ncbi.nlm.nih.gov/articles/PMC6115794/
*
ending on beauty:
I THOUGHT OF YOU
I thought of you and how you love this beauty,
And walking up the long beach all alone
I heard the waves breaking in measured thunder
As you and I once heard their monotone.
Around me were the echoing dunes, beyond me
The cold and sparkling silver of the sea —
We two will pass through death and ages lengthen
Before you hear that sound again with me.
~ Sara Teasdale (photo: David Whyte)