*
WALKING ON WATER
After the storm, the clouds
like half-blown milkweed tufts
fade in the widening sky. I still
don’t know how we survive
our youth, how in a matchstick boat
we cross that wind-clawed sea. When I
look back, I see no boat. I must have
walked on water, holding fast
And like Peter, we step
and start walking across the storm —
hardly even on faith —
call love.
~ Oriana
Mary:
The Storm on the Sea of Galilee, Rembrandt, 1633
*
The Sea of Galilee's stormy reputation is due to its geographical location and the resulting temperature differences and wind patterns. Situated in the Great Rift Valley and surrounded by mountains, it is prone to sudden, violent storms when cold, dry air descends from the mountains and collides with the warmer, moister air over the lake. The lake's shallow depth further exacerbates the impact of these winds, allowing the water to be whipped up rapidly.

PROUST ON VERMEER’S VIEW OF DELFT
Proust’s À la recherche du temps perdu is one of the greatest novels ever written about paintings. And perhaps most unforgettable (at least for me) is what he wrote about Vermeer’s View of Delft, when Bergotte, the narrator’s favorite author, drags himself out to see the work on the day he dies.
He nearly doesn’t make it, suffering from a massive dizzy spell.
But he knows it is important and so picking himself up, he forges on to the museum. Making his way past several other pictures which he finds pointless and vacuous, he finally arrives in front of Vermeer’s Delft.
He’d been primed to locate a certain stretch of yellow in the work by a review he’d read before leaving his apartment. It concerned a patch of yellow wall in the picture “so well painted that it was, if one looked at it by itself, like some priceless specimen of Chinese art, of a beauty that sufficed by itself.”
This is when Bergotte says those haunting words about art and life:
~ “That’s how I ought to have written,” he said. “My last books are too dry, I ought to have gone over them with a few layers of color, made my language precious in itself, like this little patch of yellow wall.”
Meanwhile, he was not unconscious of the gravity of his condition. In a celestial pair of scales there appeared to him, weighing down one of the pans, his own life, while the other contained the little patch of wall so beautifully painted in yellow. He felt that he had rashly sacrificed the former for the latter… He repeated to himself, “Little patch of wall, with a sloping roof, little patch of yellow wall.” ~
Proust loved the painting and in real life had traveled to Holland to see it in 1902, thereafter considering it to be the best painting in the world.
But where is the little patch of yellow wall?
“petit pan de mur jaune”
I suppose it doesn’t matter.
We always read these articles online (at least I do, anyway) about what people regret as they are dying, and how it’s always the same: more time sharing meals with family, more time in nature, more time spent with friends. But what about regrets over not living up to our ideals?
Proust, for his part, was glad to have sacrificed “his life” for art. I am reading a fantastic art memoir right now by the Pulitzer Prize-winning author Benjamin Moser. Moving to Holland as a young aspiring writer, he struggles at first to find his place in a strange land… and it is art that provides his entry point, especially Dutch Golden Age painting.
His words about Proust and Vermeer are very moving (and I truly loved this gorgeous book):
Vermeer died at forty-three and Proust at fifty-one. I had no expectation of living up to their standard, but I still felt encouraged by its existence. I thought it nobler to aspire to their urge to self sacrifice — to bear in mind those scales. I recalled their admonishment to prefer the little patch of yellow wall to life itself. I didn’t; I hadn’t.
But one oughtn’t be depressed for failing to live up to ideals. One ought to be depressed by not having tried!
And finally, speaking of art as life’s great consolation (something explored throughout Moser’s memoir), I am also reading Ferdia Lennon’s novel Glorious Exploits, winner of the Waterstones Debut Fiction Prize. Set in 412 BC during the Peloponnesian War, the story begins when two out-of-work potters start spending time in the deadly mines of Syracuse, where thousands of Athenian war prisoners are being worked to death.
The suffering in the mines recalls what one reads about the death camps of WWII. And because it was Athens that attacked Syracuse, causing tremendous human loss, the people of Syracuse think nothing of inflicting great cruelty on the prisoners… that is, until these two potters decide to use the prisoners, who after all come from glorious Athens, the seat of what was the greatest culture in the world, to put on a full-blown production of Medea.
The author in an interview said the idea came to him when reading a description in Plutarch of how some of those defeated Athenians survived by quoting lines from Euripides, the most popular dramatist amongst the Sicilians. He said:
I thought, OK, that’s my story: who were these Syracusans who left Athenian prisoners to die in this open-air pit, yet were so fascinated by their drama that they’re willing to save them in exchange for these precious lines?
The book is brilliant.
I love historical fiction told in a modern voice and this one is in contemporary Dublin slang.
Do you have a favorite painting in the world? For me the “best picture in the world” is Resurrection by Piero della Francesca. But I’ve never been to Delft!
~ Leanne Martin, Facebook
Resurrection by Piero della Francesca
Oriana:
As with music, my favorite paintings vary with time and stages of life. And I’m drawn to so many paintings. One of them is Leonardo’s Virgin of the Rocks. It is the closing image of this blog.
*
“The forest was shrinking, but the trees kept voting for the axe because its handle was made of wood and they thought it was one of them.” ~ René Morten
*
FORGETTING: TO BE FEARED OR CELEBRATED?
Often, forgetfulness is a mere inconvenience: that name, date or task that simply slipped through the cracks. But, sometimes, it’s downright unsettling to forget something. A friend asked me the other day: ‘Remember that hilarious dinner we had there a few years ago?’ And when for the life of me I couldn’t, I felt as if a slice of my existence had been cast into oblivion.
The ancient Greeks harbored a similar, if more pronounced, terror of forgetting. Plato associates forgetting with ‘non-being’, nothingness. Homer’s heroes do heroic things in order to achieve kleos (fame), and thereby defeat the destruction that comes with being forgotten. (As one of the Seven Wise Men said: ‘You will obtain memory through deed.’) Perhaps as a kind of buttress against the fear of forgetting, they anointed Mnemosyne, memory, the mother of the nine muses.
But I enjoyed learning the other day that this negative view of forgetting wasn’t shared by all ancient peoples. Daoism positively celebrates forgetting, indeed raises it to the status of an art. Zhuangzi, a founder of the tradition, urges people to master this art in order to gain a glimpse of Dao (the way), the eternal substratum of our passing world. As the philosopher Xia Chen writes, Zhuangzi’s idea is that the more of the world we’re able to forget – be it morality, history, the arts – the more we’re able to discover our true self, shaving off all that’s inessential to get down to the pith that we ultimately are.
Now, I don’t know if that will be of help when I inevitably confront the next lost memory, but it’s good to remember, if possible, that there’s a certain, subtle benefit in forgetting. ‘Only by forgetting,’ wrote the German philosopher Hans-Georg Gadamer, ‘does the mind have the possibility of total renewal.’
I'm the youngest by far of five children. My mother was 35 when she conceived me in 1951, so chagrined by this chronological indiscretion that she tried to hide the pregnancy from her sister. My mortified oldest brother didn’t want to tell his high-school friends that a new baby was on the way, but it was a small town. Word spread.
My mother’s age and my late arrival in the family felt burdensome to me too, especially when I started school in 1957 and met my classmates’ mothers. They were still having babies! Still piling their children into cars and heading off to picnics at the river or hikes into the lava-capped, wildflower-rampant plateau outside town. They still had to mediate hair-pulling and toy-snatching. But by the time I started first grade, my siblings were gone, the oldest three to college and the youngest to a residential school four hours away, and we went from a very noisy household to a very quiet one.
My family has told me stories about those years before everything changed. How my oldest brother nicknamed me ‘Ubangi’ because my hair grew in tight fat curls close to my head. How my other brother liked to ambush me around corners with a toy crocodile because it never failed to make me shriek in terror. How my oldest sister carried me around like a kangaroo with her joey. But I can offer very few stories of my own from those early years.
My strongest recollection is a constant straining to be with my brothers and sisters. I remember having to go to bed when it was still light out, kicking at the sheets as I listened for their voices coming down the hall or through the windows from the back yard. Sometimes I could smell popcorn. The next morning, I’d search the living room rug for their leftovers and roll the unpopped kernels around in my mouth. I do remember that, probably because it was something that played out night after night – our father loved popcorn.
Several years ago, I thought I might have the chance to recover that lost past when we were all tightly clustered together in one house. My brothers had driven to Bucks Lake up in the Sierras of northeastern California where, until I was around three years old, our family had leased a house every summer to escape the Sacramento Valley heat. They found our old cabin unchanged. Even a table built by a local sawmill was still in the living room. They knocked on the door and, weirdly enough, my younger brother knew the current lessee. He invited them in and then invited the rest of us back for a look.
With our father, we set off a few months later, up highways that narrowed into dusty roads through dark pines and past bright stony summits. When we got to the cabin, my siblings scattered to claim their favorite outdoor spots, but I was rooted near the car, struck by how much this place differed from what I thought I remembered.
I recalled that the water was a long walk across a sandy beach from the house; I had an image of my mother standing on that wide beach, her dress whipped by the wind, her hand cupped near her mouth. But the pebbled shoreline was just a few feet away. I recalled the spine of a dam jutting from the water not far from the house, a perilous and sudden cliff at the edge of the lake that my siblings had once ventured too close to. But even though the lake is a man-made one, the dam wasn’t visible from the house.
I followed my father inside, where the tininess of the kitchen fascinated him. He kept opening cabinet doors and laughing as they banged each other in the narrow aisle. ‘Mother just hated this kitchen!’ he said. ‘She always made big breakfasts – eggs and sausage and pancakes – and as soon as she finished cleaning up, you kids would come running back in the house wanting lunch.’
I didn’t remember that. I didn’t remember the table. I didn’t remember anything about the place. My siblings tugged me through the house, pointing out where everyone had slept – they said I had been in a little alcove in the hallway, though I recalled staying in my parents’ room and watching them sleep in the early morning light. They pointed out other features tied to the life that we all lived in the cabin, eager for me to remember, but there was nothing. I even dropped to my knees and circled the living room at toddler level, peering at dusty windowsills and sniffing at the knotholes in the pine walls and running my fingers over the floorboards. Nothing.
I now know that it would have been unusual for me to remember anything from that time. Hardly any adult does. There is even a term for this – childhood amnesia, coined by Sigmund Freud in 1910 – to describe the lack of recall adults have of their first three or four years and our paucity of solid memories until around the age of seven. There has been some back and forth over a century of research about whether memories of these early years are tucked away in some part of our brains and need only a cue to be recovered.
That’s what I was hoping when I revisited our old cabin with my siblings. I intended to jostle out a recalcitrant memory with the sights, sounds, smells and touch of the place. But research suggests that the memories we form in these early years simply disappear.
Freud argued that we repress our earliest memories because of sexual trauma but, until the 1980s, most researchers assumed that we retained no memories of early childhood because we created no memories – that events took place and passed without leaving a lasting imprint on our baby brains.
Then in 1987, a study by the Emory University psychologist Robyn Fivush and her colleagues dispelled that misconception for good, showing that children who were just 2.5 years old could describe events from as far as six months into their past.
But what happens to those memories? Most of us assume that we can’t recall them as adults because they’re just too far back in our past to tug into the present, but this is not the case. We lose them when we’re still children.
The psychologist Carole Peterson of Memorial University of Newfoundland has conducted a series of studies to pinpoint the age at which these memories vanish. First, she and her colleagues assembled a group of children between the ages of four and 13 to describe their three earliest memories. The children’s parents stood by to verify that the memories were, indeed, true, and even the very youngest of the children could recall events from when they were around two years old.
Then the children were interviewed again two years later to see if anything had changed. More than a third of those age 10 and older retained the memories they had offered up for the first study. But the younger children – especially the very youngest who had been four years old in the first study – had gone largely blank. ‘Even when we prompted them about their earlier memories, they said: “No, that never happened to me,”’ Peterson told me. ‘We were watching childhood amnesia in action.’
In both children and adults, memory is bizarrely selective about what adheres and what falls away. In one of her papers, Peterson trots out a story about her own son and a childhood memory gone missing. She had taken him to Greece when he was 20 months old, and, while there, he became very excited about some donkeys. There was family discussion of those donkeys for at least a year. But by the time he went to school, he had completely forgotten about them. He was queried when he was a teenager about his earliest childhood memory and, instead of the remarkable Greek donkeys, he recalled a moment not long after the trip to Greece when a woman gave him lots of cookies while her husband showed the boy’s parents around a house they planned to buy.
Peterson has no idea why he would remember that – it was a completely unremarkable moment and one that the family hadn’t reinforced with domestic chitchat. To try to get a handle on why some memories endure over others, she and her colleagues studied the children’s memories again. They concluded that if the memory was a very emotional one, children were three times more likely to retain it two years later.
Dense memories – if they understood the who, what, when, where and why – were five times more likely to be retained than disconnected fragments. Still, oddball and inconsequential memories such as the bounty of cookies will hang on, frustrating the person who wants a more penetrating look at their early past.
To form long-term memories, an array of biological and psychological stars must align, and most children lack the machinery for this alignment. The raw material of memory – the sights, sounds, smells, tastes and tactile sensations of our life experiences – arrive and register across the cerebral cortex, the seat of cognition. For these to become memory, they must undergo bundling in the hippocampus, a brain structure named for its supposed resemblance to a sea horse, located under the cerebral cortex.
The hippocampus not only bundles multiple input from our senses together into a single new memory, it also links these sights, sounds, smells, tastes, and tactile sensations to similar ones already stored in the brain. But some parts of the hippocampus aren’t fully developed until we’re adolescents, making it hard for a child’s brain to complete this process.
‘So much has to happen biologically to store a memory,’ the psychologist Patricia Bauer of Emory University told me. There’s ‘a race to get it stabilized and consolidated before you forget it. It’s like making Jell-O: you mix the stuff up, you put it in a mould, and you put it in the refrigerator to set, but your mould has a tiny hole in it. You just hope your Jell-O – your memory – gets set before it leaks out through that tiny hole.’
In addition, young children have a tenuous grip on chronology. They are years from mastering clocks and calendars, and thus have a hard time nailing an event to a specific time and place.
They also don’t have the vocabulary to describe an event, and without that vocabulary, they can’t create the kind of causal narrative that Peterson found at the root of a solid memory. And they don’t have a greatly elaborated sense of self, which would encourage them to hoard and reconsider chunks of experience as part of a growing life-narrative.
Frail as they are, children’s memories are then susceptible to a process called shredding. In our early years, we create a storm of new neurons in a part of the hippocampus called the dentate gyrus and continue to form them throughout the rest of our lives, although not at nearly the same rate. A recent study by the neuroscientists Paul Frankland and Sheena Josselyn of the Hospital for Sick Children in Toronto suggests that this process, called neurogenesis, can actually create forgetting by disrupting the circuits for existing memories.
Our memories can become distorted by other people’s memories of the same event or by new information, especially when that new information is so similar to information already in storage. For instance, you meet someone and remember their name, but later meet a second person with a similar name, and become confused about the name of the first person. We can also lose our memories when the synapses that connect neurons decay from disuse. ‘If you never use that memory, those synapses can be recruited for something different,’ Bauer told me.
Memories are less vulnerable to shredding and disruptions as the child grows up. Most of the solid memories that we carry into the rest of our lives are formed during what’s called ‘the reminiscence bump’, from ages 15 to 30, when we invest a lot of energy in examining everything to try to figure out who we are. The events, culture and people of that time remain with us and can even overshadow the features of our aging present, according to Bauer. The movies were the best back then, and so was the music, and the fashion, and the political leaders, and the friendships, and the romances. And so on.
Of course, some people have more memories from early childhood than others do. It appears that remembering is partly influenced by the culture of family engagement. A 2009 study conducted by Peterson together with Qi Wang of Cornell and Yubo Hou of Peking University found that children in China have fewer of these memories than children in Canada. The finding, they suggest, might be explained by culture: Chinese people prize individuality less than North Americans and thus may be less likely to spend as much time drawing attention to the moments of an individual’s life. Canadians, by contrast, reinforce recollection and keep the synapses that underlie early personal memories vibrant.
Another study, by the psychologist Federica Artioli and colleagues at the University of Otago in New Zealand in 2012, found that young adults from Italian extended families had earlier and denser memories than those from Italian nuclear families, presumably as a result of more intense family reminiscence.
But it doesn’t necessarily take a crowd of on-site relatives to enhance a child’s recollection. Bauer’s research also points to ‘maternal deflections of conversation’, meaning that the mother (or another adult) engages the child in a lively conversation about events, always passing the baton of remembering back to the child and inviting him or her to contribute to the story.
‘That kind of interaction contributes to the richness of memory over a long period of time,’ Bauer told me. ‘It doesn’t predict whether a given event will be remembered, but it builds a muscle. The child learns how to have memories and understands what part to share. Over the course of these conversations, the child learns how to tell the story.’
Borrowing Bauer’s Jell-O analogy, I’ve always suspected that my mother had a tinier hole in her Jell-O mould than mine, which allowed her to retain information until it was set into memory. She seemed to remember everything from my childhood, from my siblings’ childhoods, and from her own first six years. Intensely, she recalled the fight between her mother and father, when her mother wound up getting knocked out cold and her father forced her to tell visiting neighbors that his wife was sleeping.
The day my grandmother packed up my mother and her sister and moved them from Nebraska to Nevada, with their unwanted household goods strewn across their lawn for the townspeople to pick through and haggle over. The day the doctor took out my mother’s appendix on the kitchen table. The day she wet her pants at school and the nuns made her walk home in weather so cold that her underwear froze. I wondered if her memories were so sharp because these were all terrible events, especially compared with my presumably bland early years.
I now suspect that my mother’s ability to tell the story of her early life also came from the constellation of people clustered at the center of it. Her young mother, bolting from a marriage she was pressured into and retreating to her brother’s crowded house, her two girls held close. And her sister, three years older, always the point and counterpoint, the question and response.
My mother and her sister talked their lives over to such an extent that it must have seemed as if things didn’t really happen unless they had confided them to each other. Thus, ‘Don’t tell Aunt Helen!’ was whispered in our house when something went wrong, echoed by ‘Don’t tell Aunt Kathleen!’ in our cousins’ house when something went amiss there.
I might have a very large hole in my Jell-O mould, but I also wonder if our family’s storytelling and memory-setting apparatus had broken down by the time I came along. My brothers and sisters doted on me – I’m told this and I believe it – but it was their job to be out in the world riding horses and playing football and winning spelling bees and getting into various kinds of trouble, not talking to the baby.
And sometime between my being born and my siblings leaving, our mother suffered a breakdown that plunged her into 20 years of depression and agoraphobia. She could go to the grocery store only with my father close to her side, steering the cart, list in hand. Even when she went to the beauty salon to have her hair cut and styled and sprayed into submission, my father sat next to her reading his Wall Street Journal as she cured under one of those bullet-head dryers. When we were home, she spent a lot of time in her room. No one really knows when my mother’s sadness and retreat from the world began – and she’s not around to tell us now – but it might have started when I was very young. What I remember is silence.
Our first three to four years are the maddeningly, mysteriously blank opening pages to our story of self. As Freud said, childhood amnesia ‘veils our earliest youth from us and makes us strangers to it’. During that time, we transition from what my brother-in-law calls ‘a loaf of bread with a nervous system’ to sentient humans.
If we can’t remember much of anything from those years – whether abuse or exuberant cherishing – does it matter what actually happened? If a tree fell in the forest of our early development and we didn’t have the brains and cognitive tools to stash the event in memory, did it still help shape who we are?
Bauer says yes. Even if we don’t remember early events, they leave an imprint on the way we understand and feel about ourselves, other people, and the greater world, for better or worse. We have elaborate concepts about birds, dogs, lakes and mountains, for example, even if we can’t recall the experiences that created those concepts. ‘You can’t remember going ice-skating with Uncle Henry, but you understand that skating and visiting relatives are fun,’ Bauer explained. ‘You have a feeling for how nice people are, how reliable they are. You might never be able to pinpoint how you learnt that, but it’s just something you know.’
And we are not the sum of our memories, or at least, not entirely. We are also the story we construct about ourselves, our personal narrative that interprets and assigns meaning to the things we do remember and the things other people tell us about ourselves. Research by the Northwestern University psychologist Dan McAdams, author of The Redemptive Self (2005), suggests that these narratives guide our behavior and help chart our path into the future. Especially lucky are those of us with redemptive stories, in which we find good fortune even in past adversity.
So our stories are not bald facts etched on stone tablets. They are narratives that move and morph, and that’s the underpinning to much of talk therapy. And here is one uplifting aspect of aging: our stories of self get better. ‘For whatever reason, we tend to accentuate the positive things more as we age,’ McAdams told me. ‘We have a greater willingness or motivation to see the world in brighter terms. We develop a positivity bias regarding our memories.’
I can’t make myself remember my early life with my siblings nearby and my mother before her breakdown, even if I revisit the mountain idyll where the summers of that life unfolded. But I can employ the kinder lens of aging and the research by these memory scientists to limn a story on those blank pages that is not stained with loss.
I am by nature trusting and optimistic, traits that I’ve sometimes worried are signs of intellectual weakness, but I can choose to interpret them as approaches to the world developed by myriad, if unrecalled, experiences with a loving family in those early years. I don’t remember, but I can choose to imagine myself on my siblings’ laps as they read me stories or sang me songs or showed me the waving arms of a crawdad from that mountain lake. I can imagine myself on their shoulders, fingers twined in their curly Ohlson hair.
I can imagine them patiently feeding me the lines to The Night Before Christmas, over and over, hour after hour, day after day, because someone had to have done it – my mother told me that I could recite the whole poem when I was two years old. Not that they remember doing this, because most of them were teenagers by then and off having the kinds of encounters with people and culture that would define their sense of self for years to come. But I’ll imagine and reconstruct it, both for me and for them. Because our pasts had to have had a lot of that kind of sweetness, given our lucky loving bonds today. We’ve just forgotten the details.
https://aeon.co/essays/where-do-children-s-earliest-memories-go
Charles:
My favorite concept is that what we forget can still shape what we become and be a positive aspect of life. In real estate the most important element is location, location, location, and in memory it’s also location! Fascinating.
Mary:
*
ABSOLUTE HOPE
There is a sort of hope relevant to cultural loss, what Gabriel Marcel in Homo Viator (1951) calls absolute hope. This is not directed to any specific outcome. It is an open, patient waiting for the good, which can be known only upon experiencing it. As Marcel describes it, when we have patience for someone else, we respect their internal rhythm. We don’t try to force them to our rhythm, nor do we merely abandon them to their own devices. We have confidence in the other person in a way that respects their pace. It is this quality of patience, of not rushing, that is relevant to the process of the transformation of identity in cultural bereavement. ~ Jelena Markovic
https://aeon.co/essays/identity-is-painfully-contingent-for-the-culturally-bereaved
*
viator = traveler (cf VIA, road)
*
TO REMEMBER BETTER, YOU NEED TO ENCODE PROPERLY
To encode information better, go through these FOUR steps:
Focus on it
Organize it in relation to your other knowledge
Understand it, and
Relate it to something you already know
If you get stuck, use CONTEXTUAL DETAILS as retrieval cues. A common mistake when struggling to remember something is to try generating all the possibilities for what it could be. A better approach is to think about contextual details from when you first stored the information, such as where you were and what else was going on at the time.
https://psyche.co/guides/how-to-get-better-at-remembering-names-and-shopping-lists
*
A NEUROSURGEON’S GUIDE TO CONQUERING FEARS

When you’re in a situation where you feel like you don’t have control, it can be easy to become paralyzed by fear and unable to move forward. This often leads to bad decision-making. But understanding how your brain works can provide you with a path for getting back on track and sticking to your goals, says Dr. Mark McLaughlin, author of Cognitive Dominance: A Brain Surgeon’s Quest to Out-Think Fear.
“It’s possible to train your brain to think differently when events make you feel stymied and you don’t know what to do,” he says.
As a practicing neurosurgeon, McLaughlin is no stranger to intense situations. To deal with the stress of his job and enhance his performance, he created a quadrant system to harness the strengths of the brain’s hemispheres. While most of us don’t make life-or-death decisions in an operating room, McLaughlin says the methodology can be applied to any situation that induces fear.
Understand the Hemispheres
First, you need to understand how the brain processes information in order to leverage its capabilities. While the idea of “left-brain or right-brain dominance” is a myth, certain tasks are localized in specific areas of the brain. The left hemisphere is where some aspects of logical thinking occur, so it’s often considered to be the objective, goal-oriented part of the brain. And the right hemisphere is more subjective, looking at the big picture and providing our gut reactions. “This is where stories land that teach us how to act and behave in the world,” says McLaughlin.
Each hemisphere has strengths and weaknesses. For example, if someone delivers bad news, your left brain can get carried away focusing on all of the things that could go wrong, while the right brain takes in clues from facial expressions or gestures that could provide context or additional information. By understanding how the hemispheres work together, you can better judge how to move forward.
Use a Quadrant System
Instead of letting fear control your actions, be mathematical about your reaction and decision-making process by graphing out an unexpected event on an X and Y axis, says McLaughlin.
“X is your left hemisphere. It’s logical, objective, and scientific. It’s what we can all agree on in a materialistic way,” he explains. “Y is your right hemisphere. It’s big picture and subjective, where things have meaning.”
When you experience an unexpected, stressful event, ask yourself, what are the objective unmistakable facts, and what does the event subjectively mean to you? And do you perceive the facts and thoughts to be positive or negative?
If the subjective thoughts and objective facts are both negative, you are in the lower left-hand quadrant. This is the quadrant you’d be in if you lose a loved one or are forced to close your business. “This is the all-is-lost quadrant,” says McLaughlin. “You’ve done everything right and something goes wrong. That’s when you’re throwing your arms in the air.”
If the objective facts are positive but your subjective thoughts are negative, you are in the bottom-right quadrant. This could be the feeling you get when you get a great job, but your boss is toxic. “It’s a calm-before-the-storm feeling when you experience a sense of anxiety about the future,” says McLaughlin.
When your subjective thoughts are positive, but the objective facts are negative, you are in the upper-left quadrant, which is what McLaughlin calls the resilience quadrant. For example, you may have lost your job, but found time to complete the book you always wanted to write.
“You hear people say, ‘That was an unpleasant experience, but it turned out to be the best thing that ever happened to me because I grew and became better,'” says McLaughlin. “You are birthing new skills in this quadrant.”
And when the subjective and objective are both positive, you are in the upper right-hand quadrant. “People often call this quadrant ‘flow,'” says McLaughlin. “It’s when we experience a level of clicking and everything is working in concert with what we believe and is of meaning in our lives. It’s a wonderful feeling.”
Unfortunately, you can’t live in flow. The other three quadrants are necessary and make life interesting and fun, says McLaughlin. “It’s a heroic journey you go through to get back up to flow,” he says.
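Just as a rough sketch of my own (not anything from McLaughlin's book), the quadrant logic boils down to two yes/no questions: is the objective side positive, and is the subjective side positive? In Python it might look like this, with the labels taken from the descriptions above:

# Illustrative only: X = objective facts, Y = subjective meaning.
# The function name and quadrant labels are mine, mirroring the text above.
def quadrant(objective_positive: bool, subjective_positive: bool) -> str:
    if objective_positive and subjective_positive:
        return "upper right: flow"
    if objective_positive:
        return "lower right: calm before the storm"
    if subjective_positive:
        return "upper left: resilience"
    return "lower left: all is lost"

# Example: the facts are bad (a lost job) but the meaning is positive
# (time to finish the book) -- the resilience quadrant.
print(quadrant(objective_positive=False, subjective_positive=True))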
Moving Through the Quadrants
To create change, realize that you’re always going to be in one of the quadrants. If you’re dealing with negative facts or feelings, you are working from a place of cognitive dissonance, which is the mental discomfort you get when events are at odds with your values or actions. However, you can train your brain to use cognitive dominance, overcoming the situation to get into a new quadrant.
“It’s looking at the situation, and saying, ‘Okay, I know I’m in this quadrant. How do I get to where I want to go?'” says McLaughlin.
In order to move to a new quadrant, you first have to have a clear definition of who you are. “Are you a leader? Helper? Server? What is the best version of you?” McLaughlin asks.
Next, identify one micro-goal that you can take to move the needle one notch over. Make decisions that move in a positive direction on the Y-axis. “Even one step will give you a jolt of dopamine that can create a new network in the brain,” says McLaughlin.
For example, McLaughlin recently had a young patient who died on the operating table. In order to handle the heart-breaking experience, he tapped into the quadrants.
“I was in the all-is-lost quadrant, but I had to speak to his family,” he says.
“I thought about what I could do that was consistent with who I believe I am, which is a caring doctor. I talked to his family in the most caring, compassionate way possible, telling them that every chance to save him had been explored. I made sure I talked to them in the right setting with support people around. And I made myself available to them if they had questions in the future. Taking those steps helped me move up the Y-axis.”
Knowing where you are is comforting and is the starting point for adapting to change, says McLaughlin. “You can see the big picture and know that something will pass,” he says. “You can also see what you need to do to climb up that Y-axis.”
https://getpocket.com/explore/item/a-neurosurgeon-shares-his-effective-strategy-for-overcoming-fears?utm_source=firefox-newtab-en-us
*
WHY PUTIN WON’T USE NUKES IN UKRAINE
In a boastful interview on state TV, Russian leader Vladimir Putin said he “hopes” he won’t have to nuke Ukraine.
The Kremlin tyrant downplayed the need for nuclear weapons against Ukraine but wouldn’t rule them out in response to a question about strikes on Russian territory.
“They wanted to provoke us so that we made mistakes,” Putin said, according to a translation. “There has been no need to use those weapons … and I hope they will not be required.
“We have enough strength and means to bring what was started in 2022 to a logical conclusion with the outcome Russia requires.”
Putin has made startling comments about Russia’s nuclear stockpile on several occasions throughout the bloody war Moscow unleashed on Ukraine in 2022.
Shortly after Russia’s unprovoked invasion, Putin put his military’s nuclear forces on high alert. Putin formally reduced the requirements for the Kremlin to deploy its nuclear weapons last year following Ukrainian attacks on the western Russian city of Kursk.
That cleared the way for Russia to use nukes against any nation that attacks its territory and has the backing of a nuclear power.
Ukrainian ambassador to the US Oksana Markarova denounced Putin in response to his provocative remarks about nuclear weapons and urged countries to take his threat seriously.
“At this point, it doesn’t matter how we interpret what he says,” Markarova told CBS News’ “Face the Nation” on Sunday. “We just have to believe what he says and understand what he says. He is a threat, not only to Ukraine but also to anyone who believes that nations should live peacefully.”
President Trump has been trying to broker a peace deal between the two warring countries. In February, he ripped Ukrainian President Volodymyr Zelensky in a fiery Oval Office encounter, accusing his counterpart of gambling with World War III.
President Trump met with Ukrainian President Volodymyr Zelensky before Pope Francis’ funeral last month.
Recently, Trump has begun to sound more glum about the prospects of getting a deal done between the two sides.
“Maybe it’s not possible to do,” Trump told NBC’s “Meet the Press” in an interview that aired Sunday.
“There’s tremendous hatred, just so you understand, Kristen,” he said, referring to reporter Kristen Welker. “We’re talking tremendous hatred between these two men [Putin and Zelensky] and between, you know, some of the soldiers, frankly. Between the generals.”
Russia previously rejected Trump’s pitch for a full cease-fire, offering instead a narrower pause limited to strikes on energy infrastructure.
But Russia then upped its attacks on Ukrainian civilians, drawing outrage from Trump. Russia has also proposed a temporary cease-fire from May 8 to May 10 to mark the 80th anniversary of the Soviet Union’s defeat of Nazi Germany.
Last week, the Trump administration approved a $50 million weapons package to Ukraine and the two sides inked a new mineral rights agreement.
Russia, which annexed Crimea in 2014, has been fighting to lock down control of the four Ukrainian regions of Donetsk, Kherson, Luhansk and Zaporizhzhia, which it claims to have annexed.
So far, Moscow lacks complete territorial control over any of them.
Putin called reconciliation with Ukraine “inevitable.”
https://nypost.com/2025/05/04/us-news/putin-hopes-russia-wont-have-to-nuke-ukraine-as-he-brags-on-state-tv-about-logical-conclusion-of-war/
*
THE POSITIVE RESULTS OF ISRAEL’S WAR AGAINST HAMAS
- 80% of the hostages have been returned.
- Rocket fire from Gaza is down by over 99%.
- Around 20,000-30,000 Hamas terrorists eliminated.
- Most of Hamas’s senior leadership has been eliminated.
- The US is backing a plan to evacuate Gaza.
- Thousands of Hezbollah terrorists eliminated.
- 80% of Hezbollah’s capabilities destroyed.
- Hezbollah’s senior leadership has been eliminated.
- The Iranian axis has been fractured.
- Iran pushed out of Syria, militias dismantled.
- The Syrian army’s capabilities crushed. The IDF controls Mount Hermon in southern Syria.
- Iraqi militias have pulled out of the war against Israel.
- The Houthis are taking unprecedented strikes.
View of Israel looking south from Rosh HaNikra caves
*
JEWISH DEMOGRAPHY OF ISRAEL IS CHANGING
In 2024, following the October 7th attacks and subsequent war, Israel had a rarely-seen negative migration balance.
On January 1st 2025, as the world was transitioning into the second quarter of the 21st Century, the State of Israel crossed the threshold of 10 million inhabitants, surpassing the size of countries such as Austria, Hungary, and Switzerland, not to mention Denmark, Finland and Ireland.
The significance of the 10 million mark is largely symbolic; due to the gradual changes of sociodemographic processes, the gains or losses of a thousand or so people do not mean anything in terms of the real thrust of a society. Still, the new round number is suggestive for at least two reasons.
The first is that the Zionist leaders and activists David Ben-Gurion and Yitzhak Ben-Zvi published a spirited and somewhat free-floating pamphlet at the beginning of the 20th Century, anticipating that the population of the Land of Israel would one day reach 10 million. They predicted this at a time when there were only 50,000 Jews out of a total population of 600,000 people living in the Land of Israel, constituting just 0.5% of world Jewry.
Today, as the total population from the River to the Sea approaches 15 million, the 7.2 million Jews living in Israel constitute 45.5% of the total number of Jews worldwide and (together with half a million of their non-Jewish family members) 77% of the State of Israel’s total legal residents.
Second, at 10 million, the diminutive and protective appellative of ‘small nation’ – often claimed to justify whatever may be going wrong in the country – cannot apply any more. A grown-up Israel is called upon to take greater responsibility for its actions, to be less dependent on the help and support of others – namely world Jewish communities – and, on the contrary, to offer enhanced support to other Jews worldwide.
The Israeli population has changed over the years, and it continues to do so.
The cultural make-up of the Israeli Jewish population has changed over the years, primarily due to immigration, which, since March 2022, has come particularly from Russia (though not so much from Ukraine). But in recent years, Israel’s growth has happened primarily due to robust natural increase.
However, the October 7th massacre and the kidnapping of over 250 hostages to Gaza may represent a tragic rupture of the growing population pattern. The subsequent war on several fronts, with hundreds of military and civilian losses, the dislocation of hundreds of thousands in the north and in the south, and the prolonged enrollment of vast numbers of young adults in the reserve army, has caused an unprecedented socioeconomic and existential crisis.
In 2024, based on the assessment of those who left the country in 2023 and did not return one year later, Israel had an utterly unusual negative international migration balance. Such a negative balance has only previously occurred very occasionally, a few times in the 1980s, once in the 1950s, and once in the 1920s. However, when it did, the main reason for leaving Israel was an underlying economic crisis. In the present case, about 80,000 Israelis left the country, about 30,000 returned from previous long-term stays abroad, and about 30,000 were new immigrants.
One possible way to read these data is that the eventual total annual deficit of about 18,000 in 2024 can be attributed mainly to the rapid departures of 14,000 people following the horrors of October 2023. Yet, in many respects, given the challenging circumstances, we might have expected the negative migration balance in Israel to have been significantly larger. In reality, Israelis showed a high degree of resilience – the data also suggest that many returned to the country immediately after the attacks. Whether the migration balance will return to positive depends on the outcome of military operations and Israel’s capacity to return to fast economic growth.
Tel Aviv residential housing
The symmetric view of emigration from Israel is that those leaving Israel help sustain some of the Jewish communities around the world that are declining due to aging and low fertility rates. JPR’s report on Jews in the Netherlands reveals that the Dutch Jewish population is growing mainly due to the immigration of young Israelis, to the point now where most of the Jewish children living in the Netherlands were either born in Israel, or to an Israeli parent.
There is a strong correlation between a nation’s socioeconomic development level and the percentage of Jews among its citizens, to the point that the Index of Human Development can nearly predict the Jewish presence in a specific country. Huge transfers of Jews from Eastern Europe and Muslim countries to the more economically prosperous and developed countries of North America, Western Europe and Israel have completely transformed the global map of Jewish life over the past Century.
Today, more than 85% of Jews worldwide are concentrated in only two countries: the US and Israel. But Jewish dispersion is not over, as over 100 countries and territories are each home to 100 Jews or more. The global Jewish population is still not as large as it was in 1939, but within the next decade, the worldwide numbers might finally reach the level they were at before the Holocaust.
Religious diversity characterizes Israel and, to some extent, other Jewish communities as well, with the proportion of haredim increasing everywhere thanks to their high fertility levels. This is especially true in Israel, where the state supports the many adult men who do not serve in the army, do not participate in the labor force, and devote their time to studying more intensively than elsewhere.
The prospective differential growth of the different religious sectors in Israel could potentially create an entirely different social structure by the mid-21st Century, with lower enrollment in the military, more diffused poverty, and a plurality of haredim in Israel’s educational system. That is, unless the pendulum of self-segregation and rejection of modernization reverses course, in which case substantial human resources would become available for the general development of Israeli society and of Jewish communities throughout the world.
https://www.jpr.org.uk/insights/israels-jewish-demography-changing-and-it-so-diasporas
*
What factors affect Jewish migration?
Since Abraham, the son of Terach, the first Jewish migrant, a vast amount of geographical mobility has been a central feature of the Jewish experience. The centers of gravity of the Jewish presence around the globe have shifted repeatedly due to periods of migration, shaping and reshaping Jewish culture and the modes of interaction between Jewish minorities and the surrounding hegemonic societies.
From their initial origins and prolonged location in the ancient Middle East, Jews moved West following the rise of Islam. Their demographic center shifted from Western to Eastern Europe in the late Middle Ages. Massive migration waves to transoceanic countries, followed by the Second World War and the Holocaust, made the United States the main pole of Jewish life in the early twenty-first century. Today, the state of Israel has recovered its pristine and symbolic role as the prime land of Jewish presence worldwide.
The feasibility and volume of modern large-scale Jewish migration mostly reflect the fall of empires. The decline of the old European powers at the turn of the nineteenth and twentieth centuries, until the First World War, stood behind the massive transatlantic migratory current. The end of the Ottoman Empire allowed the establishment of a new order in the Middle East, enabling immigration to Palestine. The decline of the British Empire was necessary for the State of Israel's independence. The end of the French colonial power anticipated the departure of North African Jews to the West and Israel. The fall of the Soviet Empire opened the doors to Jewish migration waves from former Soviet Union countries.
In each instance, the new order that was created necessitated and enabled a large-scale outflow of Jews who had been underprivileged or discriminated against under the old regime or had suddenly lost much of their previous role as mediators between the elite rulers and the local population.
The role of factors such as antisemitism and violence against Jews, or the rise of terrorism, cannot be denied in the build-up of migration motivations but appear minor. In Argentina, the bombing of the Israeli Embassy and, more dramatically, of the AMIA Jewish community center building in Buenos Aires in the 1990s, was followed only by a pale emigration echo compared to the substantial wave that followed the bankruptcy of Argentina's Federal Bank in 2001. The timing of Jewish emigration from France in the 2010s did not correspond with the most significant terror attacks in the country, either against Jewish targets or in general. Instead, it followed the curve of unemployment in that country.
In fact, over the last thirty years, more than 70% of the annual and country-by-country rates of global aliya to Israel can be explained statistically by unemployment rates in the countries of origin and in Israel. Ideological factors related to the pull of absorbing countries – primarily Israel – have been necessary but insufficient to determine any sudden and large-scale deviation from the customary small trickle of more convinced people. However, it would be a mistake to dismiss it lightly: ideology and culture have played a role in the choice of the country of destination.
https://www.jpr.org.uk/insights/why-do-jews-migrate-and-when
*
JEWISH IMMIGRATION TO AMERICA
The size and character of the American Jewish community have been defined by the more than 3.5 million Jews who have immigrated since the 17th century
Over 3.5 million Jews have immigrated to the United States since the first Jews arrived back in the 17th century. As a result, the vast majority of American Jews are descended from people who came to America from someplace else.
Today, America’s Jewish community is largely Ashkenazi, Jews who trace their ancestry to Germany and Eastern Europe. However, the first Jews to arrive in what would become the United States were Sephardi, tracing their ancestry to Spain and Portugal.
The first Sephardi settlers arrived in New Amsterdam in 1654 from Brazil. For several decades after, adventurous Sephardi and Ashkenazi merchants established homes in American colonial ports, including New Amsterdam (later New York), Newport, Philadelphia, Charleston and Savannah. While Ashkenazi Jews outnumbered Sephardi ones by 1730, the character of the American Jewish community remained Sephardi into the early 19th century.
All of the early Jewish communities were Sephardi-style “synagogue-communities”: the community and the synagogue were one and the same. Even if some leaders were Ashkenazi, they followed the Western Sephardi liturgy and adhered to Sephardi customs. Early American synagogues also seated congregants in the traditional Sephardi manner: women upstairs, men downstairs and everyone seated around the perimeter. They resembled and maintained ties with Western Sephardi congregations elsewhere, such as Amsterdam, London and the West Indies.
Sephardi hegemony ended in the United States in the early decades of the 19th century. Sephardi immigrants nevertheless continued to arrive on America’s shores, initially from Holland and the West Indies, later from the disintegrating Ottoman Empire and still later from Arab lands, the latter now known as Mizrahi (Eastern) Jews.
Some 50,000-60,000 Eastern Sephardi Jews immigrated to the United States between 1880 and 1924, many of whom spoke Ladino (Judeo-Spanish). More arrived following the 1965 Immigration Act, which ended four decades of quotas and made immigration to the United States easier. Today, an estimated 250,000-300,000 Sephardi Jews of different backgrounds live in the United States, comprising 3-4% of the total U.S. Jewish population.
Central Europeans
Between 1820 and 1880, America’s Jewish population ballooned from 3,000 to 250,000, a rate of growth 15 times greater than that of the U.S. as a whole. An estimated 150,000 Jews emigrated to America during these years, the overwhelming majority young German-speaking Central European Jews from Bavaria, Western Prussia, Posen and Alsace. Like the Catholics and Protestants who emigrated from these lands, Jews were spurred to leave by famine, economic dislocation and political discontent.
But Jews emigrated at a rate almost four times that of their non-Jewish neighbors, for they additionally faced severe restrictions on where they could live, what kind of work they could pursue, how they practiced Judaism and even, in some cases, whether they could marry. For them, America represented both economic opportunity and religious freedom.
Overall Jewish emigration from Central Europe peaked in the 1850s — partly in response to the failed liberal revolutions of 1848, partly in response to the antisemitism that followed them, and mostly because of a dramatic rise in food prices and a sharp decline in real wages across the region. While immigration subsequently slackened, German-speaking Jews continued to arrive in America well into the 20th century – 250,000 of them, according to one estimate, by World War I alone.
German-speaking Jews took advantage of America’s expanding frontier and burgeoning market economy. Often beginning as peddlers, they fanned out across the country, spreading the fruits of American commerce to the hinterland, building up new markets and chasing after opportunities. They also carried Judaism with them, spreading it literally from coast to coast.
By the Civil War, the number of organized Jewish communities with one or more established Jewish institutions reached 160, and individual Jews lived in about 1,000 other American locations, wherever rivers, roads or railroad tracks transported them.
German-speaking Jews transformed American Judaism. The synagogue-communities gave way to communities of (competing) synagogues, most of them Ashkenazi in one form or another and many of them conducted in German. Where Sephardi Jews had venerated ancestral custom and tradition, many German-speaking Jews looked to modernize Judaism in various ways, while a percentage abandoned religion altogether.
Some, influenced by liberal religious currents in America and Europe, embraced what came to be known as Reform Judaism, with heightened attention to decorum, vernacular sermons, abbreviated services and a relaxed approach to Jewish laws and customs. Others looked to connect as Jews through fraternal organizations, the best-known being B’nai B’rith. German-speaking Jews also took advantage of new technologies to advance Judaism’s message. Books, periodicals and other publications — in English, Hebrew and German — promoted Jewish education, connected Jews one to another and helped Jews defend themselves.
Eastern Europeans
The unification of Germany in 1871 diminished German-Jewish immigration to the United States, but at that very time East European Jewish immigration to America’s shores began to increase. Violent attacks (known as pogroms) led many to risk life and fortune in the new world, but the root causes of the mass migration lay deeper — in overpopulation, oppressive legislation, economic dislocation, forced conscription, wretched poverty and crushing despair, coupled with tales of wondrous opportunity in America and offers of cut-rate steerage travel. Once again, Jews emigrated at a much higher rate than their non-Jewish counterparts. Between 1880 and the onset of restrictive immigration quotas in 1924, well over two million Jews from Russia, Austria-Hungary and Romania settled in the United States.
The majority of East European Jews spoke Yiddish and found jobs in rapidly growing cities on the East Coast and Midwest, especially New York and Chicago, rather than as peddlers on the (fast-shrinking) frontier. Many became involved in the garment industry, as well as in cigar manufacturing, food services and construction. They became active in the labor movement’s struggles to improve conditions for workers; in socialism, communism and Zionism; and in efforts to assist Jews abroad.
They also reinvigorated Orthodox Judaism and then the Conservative Movement, which simultaneously promised to be both religiously traditional and modern. By the time mass immigration ended, in 1924, they had reshaped the whole character of the American Jewish community. It now numbered some 3.5 million Jews, mostly of East European descent, and had become the second-largest Jewish community in the world after Eastern Europe.
Yiddish culture — in the form of drama, journalism, poetry, prose and later film — flourished in American Jewish immigrant neighborhoods. Some of the cultural works they produced, since they were not subject to censorship, impacted Europe too. Immigrants and their children likewise became involved in music, the arts and scholarship. The most successful among those whose parents spoke Yiddish, like Leonard Bernstein and Barbra Streisand, eventually made major contributions to the broader culture. The legacy of East European Jewry thus continues to shape both the American Jewish community and America as a whole.
Later Immigrants
The immigrant quotas imposed by law in 1924 greatly reduced, but did not completely foreclose, Jewish immigration to the United States. Some Jews still received quota certificates and immigrated. Others crossed over from Canada or Mexico hoping not to get caught. Still others, such as pulpit rabbis, enjoyed quota exemptions under the law. For humanitarian reasons, about 200,000 European Jewish refugees gained entry in the late 1930s and 40s, some just prior to World War II and some soon afterward.
In the decades following the revised 1965 Immigration Act, six other major groups of Jewish immigrants arrived on America’s shores. The largest by far, at least 500,000, were Jews from the former Soviet Union, who left following the collapse of Communism. Another 60,000-80,000 Persian Jews fled Iran following the 1979 Iranian Revolution. Thousands of Jews from Latin America immigrated to the United States in response to revolution, unrest, persecution and economic collapse.
Tens of thousands of Jews from Arab lands immigrated (often via Israel) after being driven out by nationalist Arab governments and hostile Islamic neighbors. As many as 12,000 Jews from South Africa moved to the United States during the tumultuous apartheid era and its aftermath. And over 100,000 Israeli Jews live in the United States. Unlike other Jewish immigrants to America’s shores, many Israeli-Americans speak of returning to their homeland at some point in their lives.
The 2020 Pew survey of American Jews reports that about 10 percent of those over 18 were born abroad. Today, as in the past, immigration shapes the size and character of the American Jewish community.
https://www.myjewishlearning.com/article/jewish-immigration-to-america-three-waves/
*
VICTORIAN NOVELS HIGHLIGHT THE FRAGILITY OF PUBLIC HEALTH
Between 40% and 50% of children in the U.S. didn’t live past the age of 5 during the first half of the 19th century.
Thomas Worth’s 1872 illustration for the Household Edition of The Old Curiosity Shop highlights the grief of Little Nell’s grandfather at her death.
Modern medicine has enabled citizens of wealthy, industrialized nations to forget that children once routinely died in shocking numbers. Teaching 19th-century English literature, I regularly encounter gutting depictions of losing a child, and I am reminded that not knowing the emotional cost of widespread child mortality is a luxury.
In the first half of the 19th century, between 40% and 50% of children in the U.S. didn’t live past the age of 5. While overall child mortality was somewhat lower in the U.K., the rate remained near 50% through the early 20th century for children living in the poorest slums.
Threats from disease were extensive. Tuberculosis killed an estimated 1 in 7 people in the U.S. and Europe, and it was the leading cause of death in the U.S. in the early decades of the 19th century. Smallpox killed 80% of the children it infected. The high fatality rate of diphtheria and the apparent randomness of its onset caused panic in the press when the disease emerged in the U.K. in the late 1850s.
Multiple technologies now prevent epidemic spread of these and other once-common childhood illnesses, including polio, tetanus, whooping cough, measles, scarlet fever and cholera.
Closed sewers protect drinking water from fecal contamination. Pasteurization kills tuberculosis, diphtheria, typhoid and other disease-causing organisms in milk. Federal regulations stopped purveyors from adulterating foods with the chalk, lead, alum, plaster and even arsenic once used to improve the color, texture or density of inferior products. Vaccines created herd immunity to slow disease spread, and antibiotics offer cures to many bacterial illnesses.
As a result of these sanitary, regulatory and medical advances, child mortality rates have sat below 1% in the U.S. and U.K. for the last 30 years.
Victorian novels chronicle the terrible grief of losing children. Depicting the cruelty of diseases largely unfamiliar today, they also warn against being lulled into thinking that widespread child death could never return.
Routine death meant relentless grief
Novels tapped into communal fears as they mourned fictional children.
Little Nell, the angelic figure at the center of Charles Dickens’ wildly popular “The Old Curiosity Shop,” fades away from an unnamed illness over the last few installments of this serialized novel. When the ship carrying the printed pages with the final part of the story pulled into New York, people apparently shouted from the docks, asking if she had survived.
The public investment in, and grief over, her death reflects a shared experience of helplessness: No amount of love can save a child’s life.
Eleven-year-old Anne Shirley of “Green Gables” fame became a hero for pulling 3-year-old Minnie May through a dramatic battle with diphtheria. Readers knew this as a horrendous illness in which a membrane blocks the throat so effectively that a child will gasp to death.
Children were familiar with disease risks. While typhus runs rampant in “Jane Eyre,” killing nearly half the girls at their charity school, 13-year-old Helen Burns is struggling against tuberculosis. Ten-year-old Jane is filled with horror at the possible loss of the only person who has ever truly cared for her.
An entire chapter deals frankly and emotionally with all this dying. Jane cannot bear separation from quarantined Helen and seeks her out one night, filled with “the dread of seeing a corpse.” In the chill of a Victorian bedroom, she slips under Helen’s blankets and tries to stifle her own sobs as Helen is overtaken with coughing. A teacher discovers them the next morning: “my face against Helen Burns’s shoulder, my arms round her neck. I was asleep, and Helen was – dead.”
The disconcerting image of a child nestled in sleep against another child’s corpse may seem unrealistic. But it is very like the mid-19th-century memento photographs taken of deceased children surrounded by their living siblings. The specter of death, such scenes remind us, lay at the center of Victorian childhood.
Fiction was not worse than fact
Victorian periodicals and personal writings remind us that death being common did not make it less tragic.
Darwin agonized at losing “the joy of the Household” when his 10-year-old daughter Annie succumbed to tuberculosis in 1851.
The weekly magazine “Household Words” reported the 1853 death of a 3-year-old from typhoid fever in a London slum contaminated by an open cesspool. But better housing was no guarantee against waterborne infection. President Abraham Lincoln was “convulsed” and “unnerved,” his wife “inconsolable,” watching their son Willie, 11, die of typhoid in the White House.
In 1856, Archibald Tait, a former headmaster of Rugby who was then Dean of Carlisle and later became Archbishop of Canterbury, lost five of his seven children in just over a month to scarlet fever. At the time, according to historians of medicine, this was the most common pediatric infectious disease in the U.S. and Europe, killing 10,000 children per year in England and Wales alone.
Scarlet fever is now generally curable with a 10-day course of antibiotics. However, researchers warn that recent outbreaks demonstrate we cannot relax our vigilance against contagion.
Forgetting at our peril
Victorian fictions linger on child deathbeds. Modern readers, unused to earnest evocations of communal grief, may mock such sentimental scenes because it is easier to laugh at perceived exaggeration than to frankly confront the specter of a dying child.
“She was dead. Dear, gentle, patient, noble Nell was dead,” Dickens wrote in 1841, at a time when a quarter of all the children he knew might die before adulthood. For a reader whose own child could easily trade places with Little Nell, becoming “mute and motionless forever,” the sentence is an outpouring of parental anguish.
These Victorian stories commemorate a profound, culturally shared grief. To dismiss them as old-fashioned is to assume they are outdated because of the passage of time. But the collective pain of a high child mortality rate was eradicated not by time, but by effort. Rigorous sanitation reform, food and water safety standards, and widespread use of disease-fighting tools like vaccines, quarantine, hygiene and antibiotics are choices.
And the successes born of these choices can unravel if people begin choosing differently about health precautions.
While tipping points differ by illness, epidemiologists agree that even small drops in vaccine rates can compromise herd immunity. Infectious disease experts and public health officials are already warning of the dangerous uptick of diseases whose horrors 20th century advances helped wealthy societies forget.
People who want to dismantle a century of resolute public health measures, like vaccination, invite those horrors to return.
*
THE SOVIET PLAN TO REVERSE RIVER FLOW WITH NUCLEAR EXPLOSIONS
To the west of Russia's Ural Mountains lies a picturesque body of water called Nuclear Lake. It's difficult to access, and visitors have to travel north by boat along the Kolva and Visherka rivers from the small town of Nyrob, where the tsars once exiled their political opponents. The lake itself, which is about 690m (2,300ft) at its widest point, is not linked directly to the dozens of nearby waterways, and the final approach is on foot along a boggy track. To get to its shores, you have to pass rusting metal signs warning you are entering a "radiation danger zone" and that drilling and construction are forbidden. Large earth mounds snake around the edge of the lake.
"The water was transparent," says Andrei Fadeev, a Russian blogger from the city of Perm, who travelled to Nuclear Lake on a sunny day in the summer of 2024. "I liked it," he says, even though his dosimeter showed spots where radiation levels were higher than usual. "There wasn't an atmosphere of a threat or something. On the contrary… I think the northern taiga [boreal forest] has just recaptured the place."
Nuclear Lake was formed on 23 February 1971 when the Soviet Union simultaneously fired three nuclear devices buried 127m (417ft) underground. The yield of each device was 15 kilotons (about the same as the atomic bomb dropped on Hiroshima in 1945). The experiment, codenamed "Taiga", was part of a two-decade long Soviet program of carrying out peaceful nuclear explosions (PNEs).
In the 1970s, the USSR used nuclear devices to try to send water from Siberia's rivers flowing south, instead of its natural route north. The project was a grand failure – but 50 years on, the idea still won't completely go away.
In this case, the blasts were supposed to help excavate a massive canal to connect the basin of the Pechora River with that of the Kama, a tributary of the Volga. Such a link would have allowed Soviet scientists to siphon off some of the water destined for the Pechora, and send it southward through the Volga. It would have diverted a significant flow of water destined for the Arctic Ocean to go instead to the hot, heavily populated regions of Central Asia and southern Russia.
This was just one of a planned series of gargantuan "river reversals" that were designed to alter the direction of Russia's great Eurasian waterways. Redirection was intended to alter not just the Volga, but also several Siberian rivers, sending water thousands of kilometers southward via canals and reservoirs.
Blogger Andrei Fadeev in the vicinity of Nuclear Lake in 2024
Years later, Leonid Volkov, a scientist involved in preparing the Taiga explosions, recalled the moment of detonation. "The final countdown began: …3, 2, 1, 0… then fountains of soil and water shot upward," he wrote. "It was an impressive sight." Despite Soviet efforts to minimize the fallout by using a low-fission explosive, which produces fewer atomic fragments, the blasts were detected as far away as the United States and Sweden, whose governments lodged formal complaints, accusing Moscow of violating the Limited Test Ban Treaty.
Fifty years later, Nuclear Lake is a half-forgotten tourist curiosity. But it is also a physical reminder of one of the Soviet Union's last mega-projects – river reversal – and the extraordinary lengths to which the Kremlin was prepared to go to make it happen.
The idea of using canals and dams to redirect freshwater from Russia's north-flowing rivers had been around for a century by the time of the blasts, tempting successive Russian regimes. Perhaps most famously, it was proposed by the writer Yakov Demchenko in an 1871 booklet called: "On flooding the Aral-Caspian lowlands to improve the climate of adjacent countries." Later, it was raised as a possibility by Soviet planners under Stalin in the 1930s.
The appeal was simple: some of the huge volumes of water flowing through Siberia and northern Russia could be "utilized" by sending them to the more arid lands of Central Asia and southern Russia. Agriculture is a lucrative prospect in the Eurasian heartlands, where there are many more people than in the freezing Russian north. The redirected water, planners hoped, could also help save the Aral Sea, which had seen catastrophic water loss in recent decades because its tributaries were over-exploited for agriculture.
For Russia's rulers, "this huge flow of water into the Arctic Ocean was going nowhere useful", says Douglas Weiner, a historian at the University of Arizona specializing in Soviet environmental policy. "It's this big bauble of a resource that's not being used. It's a huge resource. So, there is always this tempting idea that we can somehow find a way to use it."
The closest the Soviet Union came to realizing river reversal was in the 1970s and early 1980s. In this period, hundreds of millions of rubles were poured into developing the project, which involved nearly 200 scientific research institutes, enterprises, and scientific production organizations, and, according to some estimates, 68,000 people.
Arid regions of the former USSR, including the Aral Sea, were seen to be in great need of Siberia's water
Not only did Soviet ideology suggest that nature could be transformed into a rational tool to help build socialism, but prestige projects were a key part of Cold War competition with the West. Plus, demand for water was skyrocketing. "This period saw the active development of irrigated agriculture, it became clear that our own resources of water were insufficient, populations were growing, and existing production technologies were quite water-intensive," says Mikhail Bolgov, a surface water expert at Russia's Institute for Water Problems (this Institute, which still operates in Russia today, was a leading advocate for river reversal during the Soviet period). "And there was already an understanding that the Aral Sea would disappear if irrigation was continued at such a scale."
Soviet planners were inspired by history's great water amelioration projects (including Roman aqueducts), and claimed they did not want to redirect whole rivers, just a small percentage of the water in Siberian river basins. Finally, they believed that they might be able to save not only the Aral Sea, but also the Caspian Sea and the Azov Sea, which were both also recording significant drops in water level.
At the same time, river reversal was a colonial project, appealing both to those in the Kremlin with imperialist views, as well as local leaders in the Central Asian republics who believed it would be a way of channelling money and influence. "[It] was connected with bringing modern technology and Slavic settlers to those regions as a way of incorporating them," says Paul Josephson, a professor of Russian and Soviet history at Colby College in Waterville, Maine.
Many were bewitched by the sheer ambition. "The same magic of its big scale was supposed to infinitely inspire its advocates and belittle its opponents," wrote the leading Soviet opponent of the scheme, hydrologist and writer Sergei Zalygin, in his 1986 book Turnabout. "We are the greatest and you are against us – how could that be?!"
In addition to the Volga, those laboring over river reversal in the 1970s focused on two Siberian rivers – the Ob and Irtysh. They planned the construction of a 1,500km-long (930-mile) canal using hundreds of PNEs that would, when completed, channel up to 10% of the water from the basins of the Ob River and Irtysh River to Kazakhstan, Uzbekistan and Turkmenistan. A Communist Party resolution in May 1975 envisaged Siberian water would first arrive in Central Asia in 1985, and that the whole project would be completed by 2000.
It wasn't to be. From the moment serious discussions about river reversal began, there was opposition from scientists and experts. In the early 1980s, however, this opposition spiraled into the sort of broad-based public campaign that was highly unusual in the tightly controlled Soviet Union. There were essays in journals, letters to officials, even novels and poems about the folly of the project. In Ballad About Freedom, Soviet poet Fazil Iskander wrote: "It's completely impossible to know what's going on in the head of the regime / whether they want to wring the neck of the northern rivers, or steal the Gulf Stream!"
Intellectuals like Zalygin raised a whole series of objections: the project's eye-watering cost, which may have run to hundreds of billions of dollars; its wastefulness; the settlements and culturally significant sites that would have been flooded; the flawed science they alleged lay at its heart; bureaucratic self-aggrandizement; and a myriad of potentially devastating environmental consequences.
Historian Josephson says that, when he did research at the Institute for Water Problems in Moscow in the late 1980s, he was permitted by the then-director, Grigory Voropaev, a leading advocate for the scheme, to see the official environmental impact report. It was, Josephson realized, completely inadequate. "It boggled my mind to see such conclusions as 'we anticipate local and manageable environmental impacts'," Josephson says.
In fact, there were concerns that diverting the water southward could mean anything from the destruction of unique habitats, to dangerous climatic change, says Josephson. "Ice would set on southward into the rivers earlier and deeper into Siberia. There would be flora and fauna that would transfer from Siberia to Central Asia. There are just so many things that could have happened," he says. "The intellectuals, whether they were trained in biology and environment, or literary types, understood that the scale of the project made it impossible to contain in terms of its environmental impact."
Perhaps the final nail in the coffin was the Chernobyl nuclear disaster in 1986, which not only consumed a huge amount of money, but pushed environmental concerns up the political agenda. Four months after the Number Four Reactor at the Chernobyl Nuclear Power Plant exploded, Soviet leader Mikhail Gorbachev cancelled the river reversal project. While some have said this was the result of public pressure, others believe it was the astronomical cost, at a time when depressed oil prices were causing financial problems for the Kremlin. "Everything was set to go," says historian Weiner. "But realistically I don't think they would have done it because they didn't have the money."
It may have seemed that river reversal as a serious prospect died with the Soviet Union, which collapsed five years later. But advocates for the project in senior positions in the Russian government continued to speak out in its defense. In 2008, for example, then Moscow Mayor Yuri Luzhkov published a book called "Water and Peace" that argued in favor of re-directing Siberian rivers to Central Asia.
And, as recently as February 2025, two Russian scientists argued in an article in Russian daily newspaper Nezavisimaya Gazeta that technical advances since the 1980s make river reversal more feasible, and that it aligns well with Moscow's geopolitical "pivot to the East" that has followed the breakdown in relations with the West over the full-scale invasion of Ukraine.
Some academics in both Russia and the West have even suggested that reducing the amount of relatively warm water flowing into the Arctic Ocean could help mitigate the effects of global warming. But this is strongly disputed by others, who say it would have the complete opposite effect.
Tom Rippeth, a professor of physical oceanography at Bangor University in Wales, published a paper in 2022 modeling the effects of Siberian river reversal, which showed it could have disturbed the structure of the Arctic Ocean, causing a warmer, saltier layer of water to rise, and dramatically accelerated the melting of sea ice. "If you upset nature's balance, there are a lot of unintended consequences," he says.
Despite a present lull in political interest in river reversal, historian Josephson predicts that, one day, the idea will resurface – although perhaps with China substituted for Central Asia as a destination for Russian water. "The project will not die," he says. "Russia is a resource empire – it survives by selling its resources. So, it makes sense for Russia, ultimately, in some place and time, to work with the Chinese to transfer water from Siberia across the border to farming regions of northern China."
Even some of those who successfully campaigned in the 1980s to stop the Soviet Union from diverting the great Eurasian waterways were never convinced their victory was final. In their book Lessons of Ecological Failures, Soviet academics Alexander Yanshin and Arkady Melua argued that river reversal would, one day, make a comeback – not least because of competition over water, and rising populations in Central Asia.
"The question about diverting some of the sources of Siberian rivers to Central Asia will most likely be raised again in the third millennium," they wrote in 1991. "However, it's obvious that this will require the development of another project."
Ultimately, the nuclear explosions that created Nuclear Lake, one of the few physical traces left of river reversal, were deemed a failure because the crater was not big enough. Although similar PNE canal excavation tests were planned, they were never carried out. In 2024, the leader of a scientific expedition to the lake announced radiation levels were normal.
But blogger Fadeev says there were some places where the radiation was still significantly elevated, more than half a century after the blasts. After having done a lot of research into radiation, he decided to remain cautious. "I didn't go for a swim," he says.
Oriana:
The beached ship stands there like a symbol of the whole Soviet project: a grand experiment pursued no matter the cost, financial and environmental, that fortunately came to nothing except some remnant radiation.
The Aral Sea, once the fourth largest inland sea in the world, is no more. Only 10% of it remains; the rest has turned into a desert — yet another monument to human destructiveness.
But the first attempts to introduce vegetation are under way. There is a local bush, black saxaul, that is effective at holding the soil, thus preventing erosion and mitigating sand storms. There is now an active effort to plant more saxaul to at least prevent the spread of the desert.
*
RUSSIA IN AND AROUND SUBWAY STATIONS
In the underpass of the Belorussky Train Station on Leningrad Highway in Moscow, Russia, I saw a drunk combat soldier decked out in the uniform of the special military operation lying on the floor unconscious.
Three police officers stood staring at him. One of them kicked the soldier in the shin. There was no reaction and he said to his colleagues: “He looks dead.” The combat soldier didn’t make it to the train to get to the front and passed out.
Russian propagandists stopped eulogizing China as a close ally. Anna Shafran and Alexander Losev in the program “Strategy of National Security” (which basically comes down to “let’s nuke everyone!”) confessed:
“Beijing’s friendship with Moscow is becoming less and less obvious and has been repeatedly discredited by the actions of the Chinese elites. China remains alone on the battlefield and is greatly weakened, which means that pinpoint nuclear strikes by the United States on individual regions of China are more than possible and acceptable.”
I’ll translate for you from Contrived Russian. Anna Shafran says, “Burn in nuclear hell, Chinese friends. We will not defend you, just as we didn’t defend Assad in Syria!”
The Kremlin envisions the multi-polar world as a cold, lonely place. It has no God, no soul, no compassion or empathy; everyone is out for their own interests and occasionally lobs nuclear strikes with the nonchalance of an alcoholic popping open a vodka bottle.
A woman selling canned vegetables in below-freezing temperatures, her feet buried in the snow by the metro station.
She couldn’t clear the snow around her selling spot. She didn’t bring crates to make her wares more presentable to potential customers. Her planning horizon is a few hours: she wants to make a few hundred rubles to supplement her pension and survive another day, nothing more.
The wise elder priest Afanasiy writes Orthodox Science Fiction, a new literary genre in which Russia crushes superpowers, eliminating them with righteous fire.
There are oceans of blood, large troop movements over long distances, unknown aircraft akin to flying saucers, and here, too, the new trend: betrayal by the Chinese comrades.
China is on Russia’s side, but after a while, tempted by the biggest bribe in history (power over the territory of Siberia), it withdraws its troops and allows the allied forces to stab the Russians in the back.
Tsar Ivan, heir of Tsar Putin, sits on the throne of the Muscovy Empire and continues to fight, this time against the Chinese.
Russians pray harder than ever before. By then, all the schools and universities have been shut down and churches built in their place.
Through the strenuous efforts of the pious patriots, holy fire comes down from heaven and melts the Chinese military kit. Glorious servants of the Lord, the holy Angels, carry away the traitors’ souls into the deepest pits of hell to burn them forever.
Having defeated the Chinese with the help of the Lord, Tsar Ivan proclaims himself an emperor and sets out to fight Muslims in Asia and then in North Africa.
Russian hockey player Alexander Ovechkin plays for Putin’s Team on Red Square when he’s not scoring for the Washington Capitals in the United States.
It’s a sporting goods store owned by a native Ukrainian, located just two miles from the Kremlin, where commie-era elites promise every day to obliterate Western countries and turn them into radioactive ash.
If this doesn’t cause you to experience cognitive dissonance, I don’t know what I can throw at you that would. ~ Misha Firer, Quora
*
THE CULT OF STALIN CONTINUES
Sentimental Russians bring flowers and take selfies in front of the restored bas-relief of Soviet premier Joseph Stalin at Taganskaya underground station. Stalin allegedly killed more Russians than Hitler.
Stalin means “man of steel.” He industrialized the Soviet Union and transformed it into a global superpower. I heard a tear-stricken man in an Adidas T-shirt plead with the statue:
“Please, please, Joseph Vissarionovich, send this woman with bleached hair in a leopard dress and my in-laws to a GULAG forced labor camp in Siberia. I want them all to die in terrible pain from malnutrition and exhaustion, digging permafrost with a pick to find gold to pay Americansky capitalists for factory equipment.”
“The USSR still exists,” said Russian presidential adviser Anton Kobyakov.
According to him, the legal procedure for dissolving the Soviet Union in 1991 was violated. Kobyakov is right: the Soviet Union exists in Russian hearts, the most glorious open-air prison ever built. Freedom proved too much to bear, and even Putin turned out to be not Stalin enough.
There were no presidents in the USSR, which means the current one was elected in violation of the laws of the USSR, and is illegitimate. And the Russian Federation is a 404 country.
"If the USSR is not dissolved, then logically it turns out that the Ukrainian crisis is an internal process," Putin's adviser Kobyakov is sure. He called for a legal assessment of the collapse of the USSR.
Stalin, the genocidal premier who resettled whole nations randomly for no reason and whose show trials were permitted to have only one verdict, “to be shot,” is an important part of the Russian cultural code.
Russians often complain that Putin is weak. He spares them untold suffering and miserable death by the tens of millions. Kudos for the president’s mismanaged pandemic that killed a million, and the special military operation that buried another million, but Russians are not satisfied with this paltry death harvest.
And so Russians have transformed the passage at the metro station with the controversial landmark into a shrine and a popular photo spot. Six new Stalin statues were inaugurated last year, and more are on the way that will hopefully be conducive to increasing the body-per-hectare ratio on the battlefield. ~ Misha Firer, Quora
*
HEALTH ISSUES IN WIVES SIGNIFICANTLY INCREASE DIVORCE, PARTICULARLY IN OLDER COUPLES
“...To have and to hold from this day forward, for better, for worse, for richer, for poorer, in sickness and in health…”
These vows can be traced back to the Church of England’s first Book of Common Prayer of 1549, which drew on earlier medieval rites. While they’ve taken on hundreds of different forms and alterations in the years since, their message has remained the same: a promise of faithfulness, cherishing, and commitment.
However, according to February 2025 research from the Journal of Marriage and Family, one of these promises appears to be far more conditional than we’d like to believe. The shocking findings of the European study depict a gendered pattern in “silver splits”—that is, divorces among couples over the age of 50.
A New Divorce Pattern Among Adults Between 50 and 64 Years Old
In the United States, late-life divorce statistics have changed dramatically over the past few decades. In 1989, only about 5 out of every 1,000 adults over the age of 50 went through a divorce. By 2010, this rate had doubled to 10 per 1,000—and has remained relatively steady since.
A similar pattern can be observed in many European countries, including England and Wales. In some nations, such as France and Belgium, the rate of these “silver splits” is even higher. This growing trend has sparked immense interest among researchers, particularly in terms of why so many long-term marriages are breaking down at increasing rates.
As such, in their February 2025 study, psychological researchers Daniele Vignoli, Giammarco Alderotti, and Cecilia Tomassini set out to investigate a particularly pressing question: How does health influence divorce among older couples?
Their study examined data from 25,542 European heterosexual couples between the ages of 50 and 64, collected over an 18-year span from 2004 to 2022. What they found was deeply unsettling.
When both partners remained in good health, divorce rates stayed relatively stable. Likewise, when the husband fell ill but the wife remained healthy, the likelihood of divorce did not significantly increase. However, the pattern shifted drastically when the wife was the one who fell ill.
In marriages where the wife developed a serious illness, the divorce rate was statistically significantly higher. Similarly, when wives experienced physical limitations that made daily tasks difficult, the likelihood of divorce also increased.
This suggests a stark imbalance in how illness affects marital stability—one that raises several concerning questions about gender roles, caregiving, and commitment in later-life relationships.
When “In Sickness and in Health” No Longer Holds True
It’s worth noting that the authors of the 2025 study themselves acknowledge that further qualitative research is needed to completely understand the finer details behind this pattern. That said, even the everyday person could surmise that these results cannot be attributed solely to the stress that comes with health struggles. Entrenched gender roles more than likely play a significant part, too.
The deep-seated expectation that a wife will always ensure that the home runs smoothly is so ingrained that any deviation from this role may feel like, or be legitimately considered, a rupture in the marital bond.
Over decades, these roles have been reinforced through socialization processes—beginning in childhood—where girls are subtly taught to value caregiving, domestic skills, and the maintenance of the home. Young boys, on the other hand, are very rarely given the same instruction in tasks such as cooking, cleaning, or child rearing.
A significant body of research suggests that these gendered expectations have persisted, despite how much societal attitudes are shifting within the younger generations. And in many older marriages, traditional norms remain even more strongly in place—with women continuing to carry the mental load of managing household tasks and caring responsibilities.
To husbands, the failure of a wife to fulfill these roles due to illness can be perceived as a breach of sorts in the marital contract—a promise made “in sickness and in health.” As such, when the pillar of domestic management is suddenly weakened, some husbands may feel that the foundational, perhaps even the most important, vow has been broken.
Yet, objectively, it’s this very mindset that breaks the vow. “In sickness and in health” shouldn’t require a woman to place domestic labor above her own well-being for the sake of the marriage. Rather, it should mean that if she can no longer fulfill these responsibilities, her husband can and must step in—just as the researchers suggest wives do when the roles are reversed.
It goes without saying that expecting women to shoulder these duties alone, in the first place, is both archaic and unrealistic. These responsibilities should always be shared between spouses. In reality, however, this sadly isn’t always the case—not even when wives face health struggles.
But when a husband becomes ill, the societal expectation isn’t that the wife will naturally step into the caregiving role; in most cases, this is already her role. An ailing husband doesn’t unsettle the established dynamic of who manages the home—as women are typically pre-socialized to be the caregivers.
In all likelihood, this asymmetry is one of many byproducts of historically sexist expectations, where cooking, cleaning, and caregiving are viewed as an almost innate responsibility for women. A 2023 study from the Journal of Business and Psychology notes that, even in contemporary settings, the division of household labor remains heavily skewed in favor of women.
This division is self-perpetuating: Young boys grow up with little to no role models for household management. Often, as a result, they enter marriage with the unspoken (or even spoken) expectation that their partners will handle these responsibilities. In many older marriages, where gender norms from years gone by remain unchallenged, this expectation remains stubbornly entrenched.
In this sense, when a wife’s illness disrupts her ability to manage the home, this societal imbalance is very likely what undermines the stability of the marriage. Appallingly, it seems this means that the promise “in sickness and in health” can be interpreted differently depending on which partner falls ill.
https://www.psychologytoday.com/us/blog/social-instincts/202505/new-research-reveals-a-concerning-emerging-divorce-pattern
*
THE AHA MOMENTS IN COUPLES’ THERAPY
Couples therapy can be intense and uncomfortable at first: Inevitably, there are awkward pauses. There are revelations that are hurtful to hear and bursts of anger. And all of this while a third party listens.
But eventually you and your partner get into the swing of things at your therapist’s office ― or the Zoom square you’re sharing, if it’s teletherapy.
Even better, you start to learn fundamental things about your relationship and the way you and your partner engage with one another: Maybe you learn about attachment styles and realize that you’re anxiously attached while your partner is avoidant, which has caused a lot of misunderstanding and strife in your relationship. Or perhaps you learn to ask “Do you want comfort or solutions?” when discussing something that bothers one of you.
“Aha” moments and lessons like that can be game changers in relationships. Below, married couples who’ve attended marriage therapy share their “aha moment” and talk about how it changed their relationship for the better. (Their responses have been edited lightly for clarity and length.)
~ “My husband and I have been married for seven years and became first-time parents during the thick of the pandemic. No visitors were permitted in the hospital, and family couldn’t visit as they sought vaccinations. So while couples with new babies usually have their village to support them, it was just the two of us. Becoming new parents is one of the most stressful events even in the best relationship. The isolation of parenting during COVID magnified the stress.
As I fell more in love with my baby, my marriage was slowly crumbling. Our sweet baby wasn’t the only one crying and screaming; we joined her voice in our home, fighting each other. Our conflict resolution differences under stress and sleep deprivation became magnified. We were wired to address conflict in very different ways in our lives.
“We sought help in facing this crisis. The biggest thing we learned in therapy, and continue to work on daily, is how we fight. Learning how to communicate in ways that are not tearing each other down is essential. Even more critical, we learned the consequences to our relationship of continuing to fight. ‘Pause before we react’ is a tool our therapist taught us, and we continually work on it. We are more mindful of the results of attacking back. Of course, that doesn’t help our relationship or daughter. Pausing helps us remember that by responding when we are triggered, we are almost guaranteeing an end to our relationship.” ― Vanessa Watson-Hill, a psychotherapist in New Jersey
~ “Monotony used to be a challenge for my husband, Daniel, and I. We’ve been together for 14 years. We’d get into emotional routines, and the boredom would make us shut each other out of our internal worlds. Therapy helped us understand that no matter how much we think we know about one another, there is always more to discover. Always. We’re comrades, but we’re also beautiful strangers. At any given moment, there are things going on in my husband’s head that I can’t see, which I find endlessly thrilling. And whenever I try to mine those things, I discover things about myself I haven’t conceptualized before. It’s an exhilarating give-and-take that not only saves us from boredom but also makes both of us feel seen.
“Since we learned that, even arguments have become more fulfilling. It’s allowed us to let go of expectations about how a relationship should work, which makes us more accepting of our shortcomings. We are gentler with each other and more invested. I can’t tell you how gratifying it is to feel that my partner cares enough to look for the mysteries in me. It makes me feel desired. It creates a beautiful reciprocity.” ― Micah Unice, a medical administrator in Salt Lake City, Utah
~ “We learned the importance of a 30-minute weekly marriage meeting and asking, ‘What do you need?’
“My husband and I have been married for 15 years and have been going to marriage counseling for over six years. We started attending not because there was a crisis but because we ― well, I ― wanted us to be able to communicate in a way that reduced the tension I felt in the relationship and made everything feel easier.
“One of the most helpful pieces of structure that we’ve introduced into our lives because of therapy is a Saturday morning, 30-minute conversation in which we review the last week and look forward to the next week. We have two small children and a house and lives of our own, so life can get busy.
“Figuring it out ahead of time has made our lives easier. And especially because I tend to be the planner (which I literally am by profession) and my husband the ‘go along to get along’ type (a wonderful type to have, by the way, during the COVID quarantine), this structure really lowers my stress about keeping the household running without me needing to maintain an iron grip on it.
“The single most important question we’ve learned to ask is, ‘What do you need?’ Let’s say my husband is angry ― about what is less important. He’s venting. I instinctively start to spin all sorts of (usually totally wrong) tales in my own head about how he’s feeling or what he’s thinking. It creates an uncomfortable atmosphere that I really wish would go away.
“If I simply ask him, ‘What do you need right now?’ then it usually leads quickly to him stating out loud what he needs and what, if anything, I can do for him. No more catastrophizing or guessing on my part. He feels cared for. And sometimes (!!) there’s even something I can do to make his life better.” ― Meg Bartelt, a financial planner in Bellingham, Washington
~ “We learned how much our families of origin affect how we behave in our marriage.”
“My husband, Josh, and I have been married for 13 years. We began going to therapy when our oldest was 3 and our twins were under a year old. My parents actually saw some signs of fracture in our relationship. Having gone through a rocky season with infant twins and a toddler early in their marriage (I have twin brothers three years younger than me), they offered to pay for therapy and watch our kids.
“The biggest revelation for us was when our therapist began to really integrate some family systems theory into our sessions. I don’t think either of us realized how much our families of origin affected how we handle conflict and decisions. My family is loud and debates and fights and lays it all out on the table, and then quickly repairs and moves on from the situation. That can be good but also bad ― sometimes problems need time to simmer and feelings can get steamrolled in the effort of getting back to ‘normal.’ Josh’s family is much quieter about conflict, more likely to hold things inside and quietly stew or be introspective. Again, good and bad. Time to think is good, but stuffing feelings down is damaging.
“These family systems totally shaped how we viewed our relationship. For Josh, a giant blow-up was devastating, whereas for me, it was a release of pressure so that things could return to homeostasis. For me, when Josh holds things in or is introspective, it feels tense and awful. We haven’t entirely unlearned these patterns and probably never will. We have, however, learned to recognize how our partner is dealing with a situation and see it through a different lens.”― Meg St-Esprit, a part-time staff writer at Romper and freelance journalist and content writer who lives in Pittsburgh, Pennsylvania
~ “After our sons’ autism diagnoses, our counselor suggested we discuss how we truly felt to each other.”
“My wife and I have been together for five years, and we have four children in total and two sons together. Recently we discovered both of our sons have autism. Because of our busy day-to-day lives, we rarely discussed in detail how we really felt on the inside or how our lives would be drastically changed through our sons’ diagnosis. I wasn’t knowledgeable about autism and was fearful of the unknown, so I began to withdraw when it came to figuring out autism-related things, like behavioral therapy and speech therapy. Instead, I focused more on what I knew I could do, which was housework and taking care of the boys. I didn’t realize my lack of interest in their autism was an issue until my wife and I got into a heated argument about how she needed me to be more involved in that part of their lives.
“While in couples counseling, our counselor suggested we discuss how we truly felt to each other, and afterwards we entered into a new realm of intimacy with one another. The discussions strengthened our bond and were integral in helping us face our new reality together with love, communication and understanding. I realized my family needed me to be present in every aspect of their lives, not just the parts I wasn’t afraid of. The discussions helped me face my fears and extinguish them as I became more available and hands-on with their autism, and it helped me become a better husband and father.” ― Shon Hyneman, a content creator who lives in the Austin, Texas, area
~ “We learned to use ‘I’ language instead of accusatory ‘you’ language.”
“My fiancé and I recently got engaged and had our first child together, but as we were planning our wedding, it suddenly dawned on us that we had some unhealthy communication styles that we wanted to address in premarital couples counseling prior to walking down the aisle.
“Prior to counseling, I always avoided the difficult conversations with him because one of us would either get too defensive or too prideful to accept criticism about ourselves, and the conversation would go left and nothing would be resolved.
“Thankfully, in couples therapy I learned how to be a reflective listener, which taught me how to actively listen and give my fiancé the opportunity to speak freely. In our conversations, we learned to use the ‘I’ instead of ‘you’ technique that taught me to say, ‘I feel hurt when you do X,Y, Z,’ instead of saying, ‘You hurt my feelings.’ By simply learning how to redirect the emphasis on trying to understand each other rather than focusing on winning the conversation or counter-arguing, we learned how to communicate on a deeper level.” ― Brittney, a stay-at-home mom in Missouri
~ “We learned the four biggest predictors of divorce.”
“My husband and I have one of those meet-cute stories that people ‘aww’ over, and it really set the tone for our early relationship. It was fast-paced, exciting and full of promise. Of course, the honeymoon phase eventually ended, and we were left with no tools to get us through the rest of our lives together.
“Couples therapy has provided numerous long-lasting benefits for my husband and me, but the most powerful takeaway has been learning psychologist John Gottman’s Four Horsemen of the Apocalypse, the key indicators of divorce he discovered through his research of couples: They are criticism, defensiveness, contempt and stonewalling. Being able to identify these as they occur has helped us immediately take a step back, regroup and then re-approach each other and the situation.
“In our short seven-year relationship, we’ve had job transitions, deaths in the family, cross-state moves, financial difficulties, life-threatening pregnancies, sick children and many other major stressors. Having an open dialogue and recognizing the signs of a struggling relationship have helped us face these stressors head-on and come out the other side with our relationship not just intact but strengthened.” ― Jemma, a graduate student who works in marketing and lives in Washington state
~ “We see each other as teammates, we work together on housework and we ask, ‘Is this a listen or a fix it?’”
“Luis and I attended premarital counseling and have been in therapy ever since. We have had some months where we don’t actively go because our therapist says we are OK, but we try to go for maintenance and, honestly, we enjoy it ― both of us do. It’s like going on a date for us. When we leave, we feel rejuvenated and like we just learned something completely new about each other, even after 17 years and three children.
“I would say we have learned three key things in therapy that have greatly helped our relationship. One, we both understand that we are a team and teammates don’t try to hurt each other. We understand that if arguments or issues arise, it’s us against the issues, and not one against the other.
“The second lesson I learned was how to ask him to get certain things done around the house. As Africans from Cape Verde, we were both raised with the habit that the house has to be clean at all times and everything must be in order. I used to get upset when I asked him to do certain things and they wouldn’t get done. The conflict happened when I wouldn’t tell Luis when I needed them done. We are both full-time parents and full-time professionals, so it’s busy. Our therapist suggested I make a list of things I needed done and include deadlines as well. In this way, I could communicate what I needed and he would figure out a good time in his schedule to get them done. So, for example, I can write: ‘1. Change the bathroom light bulb (no later than tomorrow); 2. Fix my laptop (the end of the weekend).’ These were clear asks with deadlines. If the deadlines were unrealistic, we could discuss further.
“Lastly, we learned about ‘Is this a listen or a fix it?’ My husband loves finding solutions to anything he sees as a problem. However, sometimes I just need a listening ear and a shoulder. So in therapy he learned to say, ‘Is this a listen or a fix it?’ instead of telling me ‘You should do this’ or ‘You need that.’” ― Terza Lima-Neves, a professor of political science who lives in Charlotte, North Carolina
~ “We learned how to deal with our drastically different communication styles.”
“When Mandi and I chose to engage in couples therapy recently, we had come to the realization that we were communicating in a way that had become unhealthy for us, our child, our sex life and just our relationship in general. Mandi has ADD, and I am a bit more Type A than I’d like to admit, so we were butting heads over simple things like cleaning the house, parenting... you know, all of the stuff that comes along with a relationship.
Our disagreements had gotten hotter and hotter, and her defense mechanism of shutting down and threatening to leave was getting tiresome for us both. She didn’t want to leave, but knew it was a button to push to stop the conversation (I’ve been married twice before). So we decided to enter therapy and found a killer therapist, which was really hard to find.
“For me, the ‘aha’ moment came when I had to make some realizations about myself and how I communicate. I had to own a lot of my own stuff before I could get into how I reacted to her. First and foremost, I learned that I sit on a thing that is really heavy to me but might mean nothing to her. I let myself spin out about that, and once that particular problem is solved, I keep going and will find any and all things negative that pull me into a really nasty spiral. The ‘aha’ came when I realized how miserable my own behavior was making those around me. I had to work very hard, and still do, to catch myself when the spiral begins. I now have the tools to tell Mandi it’s starting, and she knows to let me just go work through it without pushing on the conversation. I also know to not try to enter the conversation in that state.” ― Brian Rickel, a dean of arts, media and entertainment at a community college in Sacramento, California
~ “We learned each other’s love languages.”
“Many people hear ‘couples counseling’ and they automatically assume something is wrong with the relationship. That was not the case in ours. I wanted to be in therapy as a way of keeping our relationship healthy.
“One of our main ‘aha’ moments in therapy was when our therapist had us identify our main love language. For years we were each showing love the way we ourselves identify it, but we learned in therapy we should love each other the way the other views love. In our early 20s, the world was not talking about love languages, but we somehow made it work. Our therapist taught us how to prioritize how the other identifies love ― a sacrifice that is easy to make when you’re open to learning and growing. My husband’s is ‘acts of service,’ and mine is ‘words of affirmation.’ So he started to give me the reassurance I needed and I started taking things off his to-do list. We are nine years strong, have been in therapy for five years and our daughter, Sunset, will be 1 in September!” ― Billi Sarafina Greenfield, a writer, mother and business owner in Southern California
*
THE PERSISTENCE OF ATENOLOL AND OTHER TREATMENTS THAT DON’T WORK
According to the Centers for Disease Control and Prevention, about one in three American adults have high blood pressure. Blood pressure is a measure of how hard your blood is pushing on the sides of vessels as it moves through your body; the harder the pushing, the more strain on your heart. People with high blood pressure are at enormously increased risk for heart disease (the nation’s No. 1 killer) and stroke (No. 3).
So it’s not hard to understand why Sir James Black won a Nobel Prize largely for his 1960s discovery of beta-blockers, which slow the heart rate and reduce blood pressure. The Nobel committee lauded the discovery as the “greatest breakthrough when it comes to pharmaceuticals against heart illness since the discovery of digitalis 200 years ago.” In 1981, the FDA approved one of the first beta-blockers, atenolol, after it was shown to dramatically lower blood pressure. Atenolol became such a standard treatment that it was used as a reference drug for comparison with other blood-pressure drugs.
In 1997, a Swedish hospital began a trial of more than 9,000 patients with high blood pressure, who were randomly assigned to take, for at least four years, either atenolol or a competitor drug designed to lower blood pressure. The competitor-drug group had fewer deaths (204) than the atenolol group (234) and fewer strokes (232 compared with 309). But the study also found that both drugs lowered blood pressure by the exact same amount, so why wasn’t the vaunted atenolol saving more people?
That odd result prompted a subsequent study, which compared atenolol with sugar pills. It found that atenolol didn’t prevent heart attacks or extend life at all; it just lowered blood pressure. A 2004 analysis of clinical trials—including eight randomized controlled trials comprising more than 24,000 patients—concluded that atenolol did not reduce heart attacks or deaths compared with using no treatment whatsoever; patients on atenolol just had better blood-pressure numbers when they died.
“Yes, we can move a number, but that doesn’t necessarily translate to better outcomes,” says John Mandrola, a cardiac electrophysiologist in Louisville who advocates for healthy lifestyle changes. It’s tough, he says, “when patients take a pill, see their numbers improve, and think their health is improved.”
The overall picture of beta-blockers is complex. For example, some beta-blockers have been shown clearly to reduce the chance of a stroke or heart attack in patients with heart failure. But the latest review of beta-blockers from the Cochrane Collaboration—an independent, international group of researchers that attempts to synthesize the best available research—reported that “beta-blockers are not recommended as first line treatment for hypertension as compared to placebo due to their modest effect on stroke and no significant reduction in mortality or coronary heart disease.”
Researchers writing in The Lancet questioned the use of atenolol as a comparison standard for other drugs and added that “stroke was also more frequent with atenolol treatment” compared with other therapies. Still, according to a 2012 study in the Journal of the American Medical Association, more than 33.8 million prescriptions of atenolol were written at a retail cost of more than $260 million.
There is some evidence that atenolol might reduce the risk of stroke in young patients, but there is also evidence that it increases the risk of stroke in older patients—and it is older patients who are getting it en masse. According to ProPublica’s Medicare prescription database, in 2014, atenolol was prescribed to more than 2.6 million Medicare beneficiaries, ranking it the 31st most prescribed drug out of 3,362 drugs.
One doctor, Chinh Huynh, a family practitioner in Westminster, California, wrote more than 1,100 atenolol prescriptions in 2014 for patients over 65, making him one of the most prolific prescribers in the country. Reached at his office, Huynh said atenolol is “very common for hypertension; it’s not just me.” When asked why he continues to prescribe atenolol so frequently in light of the randomized, controlled trials that showed its ineffectiveness, Huynh said, “I read a lot of medical magazines, but I didn’t see that.” Huynh added that his “patients are doing fine with it” and asked that any relevant journal articles be faxed to him.
Brown, the Washington University cardiologist, says that once doctors get out of training, “it’s a job, and they’re trying to earn money, and they don’t necessarily keep up. So really major changes have to be generational.”
Data compiled by QuintilesIMS, which provides information and technology services to the health-care industry, show that atenolol prescriptions consistently fell by 3 million per year over a recent five-year period. If that rate holds, atenolol will finally stop being prescribed just under two decades after high-quality trials showed that it simply does not work.
Just as the cardiovascular system is not a kitchen sink, the musculoskeletal system is not an erector set. Cause and effect is frequently elusive.
Consider the knee, that most bedeviling of joints. A procedure known as arthroscopic partial meniscectomy, or APM, accounts for roughly a half-million procedures per year at a cost of around $4 billion. A meniscus is a crescent-shaped piece of fibrous cartilage that helps stabilize and provide cushioning for the knee joint. As people age, they often suffer tears in the meniscus that are not from any acute injury. APM is meant to relieve knee pain by cleaning out damaged pieces of a meniscus and shaving the cartilage back to crescent form. This is not a fringe surgery; in recent years, it has been one of the most popular surgical procedures in the hemisphere. And a burgeoning body of evidence says that it does not work for the most common varieties of knee pain.
Something like the knee version of the oculostenotic reflex takes hold: A patient comes in with knee pain, and an MRI shows a torn meniscus; naturally, the patient wants it fixed, and the surgeon wants to fix it and send the patient for physical therapy. And patients do get better, just not necessarily from the surgery.
A 2013 study of patients over 45 conducted in seven hospitals in the United States found that APM followed by physical therapy produced the same results as physical therapy alone for most patients. Another study at two public hospitals and two physical-therapy clinics found the same result two years after treatment.
A unique study at five orthopedic clinics in Finland compared APM with “sham surgery.” That is, surgeons took patients with knee pain to operating rooms, made incisions, faked surgeries, and then sewed them back up. Neither the patients nor the doctors evaluating them knew who had received real surgeries and who was sporting a souvenir scar.
A year later, there was nothing to tell them apart. The sham surgery performed just as well as real surgery. Except that, in the long run, the real surgery may increase the risk of knee osteoarthritis. Also, it’s expensive, and, while APM is exceedingly safe, surgery plus physical therapy has a greater risk of side effects than just physical therapy.
At least one-third of adults over 50 will show meniscal tears if they get an MRI. But two-thirds of those will have no symptoms whatsoever. (For those who do have pain, it may be from osteoarthritis, not the meniscus tear.) They would never know they had a tear if not for medical imaging, but once they have the imaging, they may well end up having surgery that doesn’t work for a problem they don’t have.
For obvious reasons, placebo-controlled trials of surgeries are difficult to execute. The most important question then is: Why, when the highest level of evidence available contradicts a common practice, does little change?
For one, the results of these studies do not prove that the surgery is useless, but rather that it is performed on a huge number of people who are unlikely to get any benefit. Meniscal tears are as diverse as the human beings they belong to, and even large studies will never capture all the variation that surgeons see; there are compelling real-world results that show the surgery helps certain patients.
“I think it’s an extremely helpful intervention in cases where a patient does not suffer from the constant ache of arthritis, but has sharp, intermittent pain and a blockage of motion,” says John Christoforetti, a prominent orthopedic surgeon in Pittsburgh. “But when you’re talking about the average inactive American, who suffers gradual onset knee pain and has full motion, many of them have a meniscal tear on MRI and they should not have surgery as initial treatment.”
Still, the surgery—like some others meant for narrower uses—is common even for patients who don’t need it. And patients themselves are part of the problem. According to interviews with surgeons, many patients they see want, or even demand, to be operated upon and will simply shop around until they find a willing doctor. Christoforetti recalls one patient who traveled a long way to see him but was “absolutely not a candidate for an operation.” Despite the financial incentive to operate, he explained to the patient and her husband that the surgery would not help.
“She left with a smile on her face,” Christoforetti says, “but literally as they’re checking out, we got a ding that someone had rated us [on a website], and it’s her husband. He’s been typing on his phone during the visit, and it’s a one-star rating that I’m this insensitive guy he wouldn’t let operate on his dog. They’d been online, and they firmly believed she needed this one operation and I was the guy to do it.”
So, what do surgeons do? “Most of my colleagues,” Christoforetti says, “will say: ‘Look, save yourself the headache, just do the surgery. None of us are going to be upset with you for doing the surgery. Your bank account’s not going to be upset with you for doing the surgery. Just do the surgery.’”
Randomized, placebo-controlled trials are the gold standard of medical evidence. But not all RCTs, as they are known, are created equal. Even within the gold standard, well-intentioned practices can muddle a study. That is particularly true with “crossover” trials, which have become popular for cancer-drug investigations.
In cancer research, a crossover trial often means that patients in the control group, who start on a placebo, are actually given the experimental drug during the study if their disease progresses. Thus, they are no longer a true control group. The benefit of a crossover trial is that it allows more people with severe disease to try an experimental drug; the disadvantage is the possibility that the study is altered in a manner that obscures the efficacy of the drug being tested.
In 2010, on the strength of a crossover trial, Provenge became the first cancer vaccine approved by the FDA. A cancer vaccine is a form of immunotherapy, in which a patient’s own immune system is spurred by a drug to attack cancer cells. Given the extraordinary difficulty of treating metastatic cancer, and high expectations following the abject failure of other cancer vaccines, the approval of Provenge was greeted with ecstatic enthusiasm. One scientific paper heralded it as “the gateway to an exciting new paradigm.” Except, Provenge did not hinder tumor growth at all, and it’s hard to know if it really works.
Provenge was approved based on the “IMPACT study,” a randomized, placebo-controlled trial initially meant to see whether Provenge could stop prostate cancer from progressing. It didn’t. Three-and-a-half months into the study, the cancers of patients who had received Provenge and those who had received a placebo had advanced similarly. Nonetheless, patients who received Provenge ultimately had a median survival time of about four months longer than those who received the placebo. Due to the way in which the IMPACT trial unfolded, however, it’s hard to tell if Provenge was truly responsible for the life extension.
Because Provenge did not halt tumor growth, many of the patients who began the study on it also started to receive docetaxel, a chemotherapy drug that is well established to treat advanced prostate cancer. The cancers of the patients on a placebo were also progressing, so they were “crossed over” and given Provenge after a delay. Their cancer continued progressing, and after another delay, many of them also got docetaxel. In the end, fewer patients in the group that started on a placebo received docetaxel, and, when they did, they got it later in the study. So Provenge may have worked, but it’s impossible to tell for sure: Was the slightly longer survival of one group because they got Provenge earlier or because the other group got docetaxel later?
The year after Provenge was approved, the federal government’s Agency for Healthcare Research and Quality issued a “technology assessment” report examining all of the evidence regarding Provenge efficacy. The report says there is “moderate” evidence that Provenge effectively treats cancer, but it also highlighted the fact that more patients who got Provenge at the beginning of the seminal trial also received more and earlier chemotherapy. The report concludes that the effect of Provenge is apparent “only in the context of a substantial amount of eventual chemotherapeutic treatment.” In other words, it is unclear which effects in the trial were due to Provenge and which were due to chemotherapy.
“The people who went on docetaxel went on it because their disease was progressing, so you’ve already broken the randomization,” says Elise Berliner, director of AHRQ’s Technology Assessment Program. Prasad, the oncologist who advocates for higher standards of preapproval evidence, is less diplomatic: “If the treatment were Pixy Stix, you’d have a similar effect. One group gets Pixy Stix, and when their cancer progresses, they get a real treatment.”
The larger issue has nothing to do with Provenge specifically but with the way it gained FDA approval. Therapies are frequently approved for use based on clinical trials that can’t actually prove whether they work. “Clinical trials almost all have issues like this one,” Berliner says, “and it’s very hard to do randomized controlled trials after drugs are approved.”
According to a new paper in the Journal of the American Medical Association Oncology, even when cancer drugs clearly do work in trials, they often don’t work or work substantially less well in the real world, perhaps because subjects in trials are not representative of typical patients. Berliner is hoping to expand and improve registries that track large numbers of real-world patients as an additional source of information. “I’ve been here for 15 years producing these reports,” she says, “and I’m getting frustrated.”
Ideally, findings that suggest a therapy works and those that suggest it does not would receive attention commensurate with their scientific rigor, even in the earliest stages of exploration. But academic journals, scientists, and the media all tend to prefer research that concludes that some exciting new treatment does indeed work.
In 2012, a team of scientists from UCLA published an article in the prominent New England Journal of Medicine, the most cited medical journal in the world, showing that deep brain stimulation—delivered via electrodes implanted in the brains of Parkinson’s patients—improved spatial memory, a lot. The study was understandably small—just seven subjects—as there are only so many people with electrodes already implanted in their brains.
It was covered in outlets like The New York Times (“Study Explores Electrical Stimulation as an Aid to Memory”), The Wall Street Journal (“Memory Gets Jolt in Brain Research”), and LiveScience (“Where Did I Park? Brain Treatment May Enhance Spatial Memory”). The NEJM itself published an editorial in the same issue noting that the study was “preliminary, is based on small samples, and requires replication” but was worth following up with “well-designed studies.”
Given the potential impact, an international team led by Joshua Jacobs, a biomedical-engineering professor at Columbia University, set out to replicate the initial finding with a larger sample. “If it did indeed work, it would be a very important approach that could help people,” Jacobs says. The team took several years and tested 49 subjects, so that their study would give more statistically reliable results.
The scientists were rather stunned to find that deep brain stimulation actually impaired spatial memory in their study. It was a disappointing result, but they were encouraged to show that brain stimulation could affect memory at all—a step toward figuring out how to wield such technology—and they felt an obligation to submit it to the NEJM. That is how science is supposed to work, after all, because failing to publish negative results is recognized to be a massive source of scientific misinformation.
Replication of results in science was a cause célèbre last year, due to the growing realization that researchers have been unable to duplicate a lot of high-profile results. A decade ago, Stanford’s Ioannidis published a paper warning the scientific community that “Most Published Research Findings Are False.” (In 2012, he co-authored a paper showing that pretty much everything in your fridge has been found to both cause and prevent cancer—except bacon, which apparently only causes cancer.)
Ioannidis’s prescience led his paper to be cited in other scientific articles more than 800 times in 2016 alone. Point being, sensitivity in the scientific community to replication problems is at an all-time high. So Jacobs and his co-authors were bemused when the NEJM rejected their paper.
One of the reviewers (peer reviewers are anonymous) who rejected the paper gave this feedback: “Much more interesting would have been to find a set of stimulation parameters that would enhance memory.” In other words: The paper would be better if, like the original study, it had found a positive rather than a negative result. (Last spring, ProPublica wrote about heavy criticism of the NEJM’s reluctance to publish research that questioned earlier findings.) Another reviewer noted that, in most of the subjects, the electrodes in the replication study were placed differently than in the original study. So Jacobs and his co-authors analyzed results only from patients with the exact same electrode placement as the original study, and the findings were the same.
Three of the authors wrote back to the NEJM, pointing out errors in the reviewer comments; they received a short note back saying that the paper rejection “was not based on the specific comments of the reviewers you discuss in your response letter” and that the journal gets many more papers than it can print. That is, of course, very true, particularly for important journals. Neuron, one of the most prominent neuroscience-specific journals, quickly accepted the paper and published it last month. (It did not receive the media fanfare of the original paper—or almost any at all—although The Wall Street Journal did cover it.)
The same week the paper appeared in Neuron, Columbia University held a daylong symposium to discuss the replication problem in science. The president of the National Academy of Sciences and the director of the U.S. Office of Research Integrity spoke—so too did Jeffrey Drazen, editor in chief of the NEJM. Jacobs was in the audience.
In the final Q&A, Jacobs stepped up to one of the audience microphones and asked Drazen if journals had an obligation to publish high-quality replication attempts of prominent studies, and he disclosed that his team’s had been rejected by the NEJM. Drazen declined to discuss Jacobs’s paper, but he said that “as editors, we’re powerless,” and the onus should be on the replication researchers, or “the complainant,” as he put it, “and the [original paper] author to work together toward the truth. We’re not trying to say who’s right and who’s wrong; we’re trying to find out what we need to know. Veritas, to advance human health, it’s that simple.”
Jacobs did not find the answer that simple. He found it strange. On a panel about transparency and replication, Drazen seemed to be saying that journals, the main method of information dissemination and the primary forum for replication in science, could do little and that “complainants” need to sort it out with de facto defendants. Many doctors, scientists, patient advocates, and science writers keep track of new developments through premier publications like the NEJM. The less publicly a shaky scientific finding is challenged, the more likely it becomes entrenched common knowledge.
Of course, myriad medical innovations improve and save lives, but even as scientists push the cutting edge (and expense) of medicine, the National Center for Health Statistics reported last month that American life expectancy dropped, slightly.
There is, though, something that does powerfully and assuredly bolster life expectancy: sustained public-health initiatives.
Medicine can be like wine: Expense is sometimes a false signal of quality. On an epochal scale, even the greatest triumphs of modern medicine, like the polio vaccine, had a small impact on human health compared with the impact of better techniques for sanitation and food preservation. Due to smoking and poor lifestyle habits, lung cancer—which killed almost no Americans in the early 20th century—is today by far the biggest killer among cancers.
Thankfully, public pressure to curb smoking has put lung-cancer deaths in rapid decline since a peak in the 1990s. Deaths from lung cancer should continue to diminish, as they are tightly correlated to smoking rates—but with a 20-year lag; that is, lung cancer deaths will decline 20 years after smoking rates decline.
The health problems that most commonly afflict the American public are largely driven by lifestyle habits—smoking, poor nutrition, and lack of physical activity, among others. In November, a team led by researchers at Massachusetts General Hospital pooled data from tens of thousands of people in four separate health studies from 1987 to 2008. They found that simple, moderate lifestyle changes dramatically reduced the risk of heart disease, the most prolific killer in the country, responsible for one in every four deaths.
People deemed at high familial risk of heart disease cut their risk in half if they satisfied three of the following four criteria: didn’t smoke (even if they smoked in the past); weren’t obese (although they could be overweight); exercised once a week; ate more real food and less processed food. Fitting even two of those categories still substantially decreased risk. In August, a report issued by the International Agency for Research on Cancer concluded that obesity is now linked to an extraordinary variety of cancers, from thyroids and ovaries to livers and colons.
At the same time, patients and even doctors themselves are sometimes unsure of just how effective common treatments are, or how to appropriately measure and express such things. Graham Walker, an emergency physician in San Francisco, co-runs a website staffed by doctor volunteers called The NNT that helps doctors and patients understand how impactful drugs are—and often are not. “NNT” is an abbreviation for “number needed to treat,” as in: How many patients need to be treated with a drug or procedure for one patient to get the hoped-for benefit?
In almost all popular media, the effects of a drug are reported by relative risk reduction. To use a fictional illness, for example, say you hear on the radio that a drug reduces your risk of dying from Hogwart’s disease by 20 percent, which sounds pretty good. Except, that means if 10 in 1,000 people who get Hogwart’s disease normally die from it, and every single patient goes on the drug, eight in 1,000 will die from Hogwart’s disease. So, for every 500 patients who get the drug, one will be spared death by Hogwart’s disease. Hence, the NNT is 500. That might sound fine, but if the drug’s “NNH”—“number needed to harm”—is, say, 20 and the unwanted side effect is severe, then 25 patients suffer serious harm for each one who is saved. Suddenly, the trade-off looks grim.
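To make the arithmetic explicit, here is a minimal sketch in Python (purely illustrative, not from the article): the NNT is simply the reciprocal of the absolute risk reduction, and the fictional Hogwart’s-disease numbers above fall straight out of it.

# Illustrative only: re-running the fictional Hogwart's-disease numbers above.
def number_needed(baseline_risk, risk_with_drug):
    # Patients who must take the drug for one to benefit (or be harmed):
    # the reciprocal of the absolute risk difference.
    return 1 / abs(baseline_risk - risk_with_drug)

baseline = 10 / 1000                      # 10 in 1,000 die without the drug
with_drug = baseline * (1 - 0.20)         # a 20% relative risk reduction: 8 in 1,000
nnt = number_needed(baseline, with_drug)  # 1 / 0.002 = 500 treated per death prevented
harmed_per_saved = nnt / 20               # if the NNH is 20: 500 / 20 = 25 harmed per life saved
print(nnt, harmed_per_saved)              # 500.0 25.0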
Now, consider a real and familiar drug: aspirin. For elderly women who take it daily for a year to prevent a first heart attack, aspirin has an estimated NNT of 872 and an NNH of 436. That means if 1,000 elderly women take aspirin daily for a decade, 11 of them will avoid a heart attack; meanwhile, twice that many will suffer a major gastrointestinal bleeding event that would not have occurred if they hadn’t been taking aspirin. As with most drugs, though, aspirin will not cause anything particularly good or bad for the vast majority of people who take it. That is the theme of the medicine in your cabinet: It likely isn’t significantly harming or helping you.
“Most people struggle with the idea that medicine is all about probability,” says Aron Sousa, an internist and senior associate dean at Michigan State University’s medical school. As to the more common metric, relative risk, “it’s horrible,” Sousa says. “It’s not just drug companies that use it; physicians use it, too. They want their work to look more useful, and they genuinely think patients need to take this [drug], and relative risk is more compelling than NNT. Relative risk is just another way of lying.”
Even remedies that work extraordinarily well can be less impressive when viewed via NNT. Antibiotics for a sinus infection will resolve symptoms faster in one of 15 people who get them, while one in eight will experience side effects. A meta-analysis of sleep-aid drugs in older adults found that for every 13 people who took a sedative, like Ambien, one had improved sleep—about 25 minutes per night on average—while one in six experienced a negative side effect, with the most serious being increased risk for car accidents.
“There’s this cognitive dissonance, or almost professional depression,” Walker says. “You think, ‘Oh my gosh, I’m a doctor, I’m going to give all these drugs because they help people.’ But I’ve almost become more fatalistic, especially in emergency medicine.” If we really wanted to make a big impact on a large number of people, Walker says, “we’d be doing a lot more diet and exercise and lifestyle stuff. That was by far the hardest thing for me to conceptually appreciate before I really started looking at studies critically.”
Historians of public health know that most of the life-expectancy improvements in the last two centuries stem from innovations in sanitation, food storage, quarantines, and so on. The so-called First Public-Health Revolution—from 1880 to 1920—saw the biggest lifespan increase, predating antibiotics or modern surgery.
In the 1990s, the American Cancer Society’s board of directors put out a national challenge to cut cancer rates from a peak in 1990. Encouragingly, deaths in the United States from all types of cancer since then have been falling. Still, American men have a ways to go to return to 1930s levels. Medical innovation has certainly helped; it’s just that public health has more often been the society-wide game changer. Most people just don’t believe it.
In 2014, two researchers at Brigham Young University surveyed Americans and found that typical adults attributed about 80 percent of the increase in life expectancy since the mid-1800s to modern medicine. “The public grossly overestimates how much of our increased life expectancy should be attributed to medical care,” they wrote, “and is largely unaware of the critical role played by public health and improved social conditions determinants.” This perception, they continued, might hinder funding for public health, and it “may also contribute to overfunding the medical sector of the economy and impede efforts to contain health care costs.”
It is a loaded claim. But consider the $6.3 billion 21st Century Cures Act, which recently passed Congress to widespread acclaim. Who can argue with a law created in part to bolster cancer research? Among others, the heads of the American Academy of Family Physicians and the American Public Health Association. They argue against the new law because it will take $3.5 billion away from public-health efforts in order to fund research on new medical technology and drugs, including former Vice President Joe Biden’s “cancer moonshot.”
The new law takes money from programs—like vaccination and smoking-cessation efforts—that are known to prevent disease and moves it to work that might, eventually, treat disease. The bill will also allow the FDA to approve new uses for drugs based on observational studies or even “summary-level reviews” of data submitted by pharmaceutical companies. Prasad has been a particularly trenchant and public critic, tweeting that “the only people who don’t like the bill are people who study drug approval, safety, and who aren’t paid by Pharma.”
Perhaps that’s social-media hyperbole. Medical research is, by nature, an incremental quest for knowledge; initially exploring avenues that quickly become dead ends is a feature, not a bug, of the process. Hopefully the new law will in fact help speed into existence cures that are effective and long-lived. But one lesson of modern medicine should by now be clear: Ineffective cures can be long-lived, too.
https://getpocket.com/explore/item/when-evidence-says-no-but-doctors-say-yes?utm_source=firefox-newtab-en-us
*
HOW SEAGRASS IS HARMED BY NOISE POLLUTION
Noise pollution affects the structures within seagrass that help the marine plant detect gravity and store energy.
From the whirring propellers that power our ships, to the airguns we use to search for oil, we humans have created a cacophony in the ocean. For years, scientists have known that human-generated noise pollution can hurt marine animals, including whales, fishes, and scallops. However, the damaging effect of noise pollution is, apparently, not limited to animals with ears, or even animals at all. A first-of-its-kind study has shown that at least one species of seagrass, a marine plant found off the coast of nearly every continent, also suffers when subjected to our acoustic chaos.
Scientists have recently discovered that Neptune grass, a protected seagrass species native to the Mediterranean Sea, can experience significant acoustic damage when exposed to low-frequency artificial sounds for only two hours. The damage is especially pronounced in the parts of the plant responsible for detecting gravity and storing energy.
The research was led by bioacoustician Michel André, director of the Laboratory of Applied Bioacoustics at the Polytechnic University of Catalonia in Spain, who says he was inspired to conduct this research a decade ago after he and many of the same colleagues who worked on the current study revealed that cephalopods suffer massive acoustic trauma when exposed to low-frequency noise. Cephalopods lack hearing organs, but they do have statocysts—sensory organs used for balance and orientation. Similar to a human’s inner ear, statocysts sense the vibrational waves we interpret as sound.
“This totally shifted our vision and our approach to noise pollution,” says André, because until that point, researchers had focused on concerns for whales and dolphins, which use sound to mate, find food, communicate, and navigate. But thousands of marine animals, from corals to jellyfish, possess statocysts, opening up the possibility that human-generated sounds could be having much farther-reaching effects. While seagrasses don’t have statocysts, they do have a very similar sensory organ called an amyloplast. These gravity-sensing cellular structures help underwater plants push their roots down through seafloor sediments. That similarity led the scientists to want to test the effects of noise on plants.
In their latest experiment, André and his colleagues used a loudspeaker to blare tanks of Neptune grass with a dynamic mix of artificial sounds with frequencies from 50 to 400 hertz, spanning the range typically associated with human activity. After exposing the seagrass to two hours of this low-frequency mixed tape, the team used electron microscopes to examine the amyloplasts inside the seagrass’s roots and rhizomes, the underground stems that store energy as starch.
The acoustic damage was acute, and worsened over the next five days. Starch levels inside the seagrass’s amyloplasts dropped precipitously. The symbiotic fungus that colonizes Neptune seagrass’s roots, and is likely involved in boosting nutrient uptake, didn’t fare well in response to the din either.
Aurora Ricart, a marine ecologist at Maine’s Bigelow Laboratory for Ocean Sciences who was not involved in the research, says she was shocked by the results, but glad to see seagrass getting attention. She points out that seagrasses, especially Neptune seagrass, sequester lots of carbon dioxide out of the atmosphere by storing it as starch. Over time, seagrass meadows build up in layers, locking carbon in several-meter-thick mats that can persist for thousands of years.
“If the sound is affecting the starch,” Ricart says, “then carbon metabolism within the plant is going to change, for sure. And this might have effects on the role the plants have on carbon sequestration at the bigger scale.”
According to André, the discovery that noise pollution affects seagrass is just the beginning.
“There is no reason to think that other plants should not suffer from the same trauma,” he says.
https://www.smithsonianmag.com/science-nature/seagrass-harmed-noise-pollution-180978290/
*
NO AFTERLIFE IN THE OLD TESTAMENT?
Michael Burch: There is no mention of an afterlife in the alleged first-written books of the bible, the so-called books of Moses.
This means an afterlife was never mentioned to or by Adam, Eve, Noah, Abraham, Isaac, Jacob/Israel, Moses, et al.
Flash forward 3,000 years to around 1,000 BC.
The bible says Solomon was the wisest man who ever lived, or ever will live. According to the bible, Solomon wrote Ecclesiastes, in which he said “all is vanity” or “all is futile” and that human beings are no better than beasts because like beasts, they cease to exist when they die.
Someone who believes in a life of bliss in heaven would not call life “vanity” and ultimately “futile.”
The wisest man of all did not agree with Catholic popes or Protestant pastors.
The Old Testament never mentions “hell” or any possibility of suffering after death.
But if there was an afterlife, the possibility of hell would be the single most important thing for parents to know. Should they give birth if their children might go to hell? What should they tell their children to do, to avoid going to hell?
But the subject was never broached by Yahweh and his prophets.
“Hell” was clearly a very late invention of the bible’s authors.
Where did the idea originate?
The idea of an afterlife was probably a very late invention of the authors of the Old Testament, picked up during the Babylonian captivity during the sixth century BC, nearly a thousand years after the alleged time of Moses. ~ Michael R. Burch, Quora
Bill Ireland: AFTERLIFE IN THE PSALMS AND JOB
The fact that the Torah doesn’t record references to an afterlife doesn’t mean that the concept didn’t exist.
True, the afterlife isn’t a major theme through most of the Tanach (Old Testament). But it was King David who had a vivid revelation of a life after death:
“I will dwell in the house of the Lord forever.” – Psalm 23:6
“All who go down to the dust will kneel before him—those who cannot keep themselves alive.” – Psalm 22: 29
“You will not abandon me to the realm of the dead, nor will you let your faithful one see decay.” – Psalm 16:10
And the book of Job records this remarkable statement: “After my skin has been destroyed, yet in my flesh I will see God.” – Job 19:26
*
EXAMPLES OF CONTRADICTIONS IN THE BIBLE
In the original version of the first-written gospel, Mark, no one saw or spoke to Jesus after the alleged empty tomb was discovered.
The second-written gospel, Matthew, also ended on a note of doubt, with some of the eleven disciples doubting the resurrection. How is that possible if Jesus was being touched and handled, as the later-written gospels claim?
All notes of doubts disappeared with the Superman Jesus of Luke, John and Acts.
Are we seeing the evolution of christian beliefs, over time, in the way the New Testament’s five historical (or pseudo-historical) books were written?
In the synoptic gospels Jesus taught via pithy parables, but in the gospel of John he never told a parable, speaking instead in long, windy, repetitive sermons.
In the synoptic gospels Jesus kept his identity hidden, but in John he loudly and boldly revealed his true identity.
Do such radical changes tell us that Jesus was being seen very differently by the time John was written?
If we compare what the earliest christian fathers wrote, to what christian fathers of the third and fourth centuries wrote, we can see radical new beliefs emerging, such as the “virgin birth” and the “trinity.”
Jesus had been transformed from the fully human messiah of Mark, to the demigod “born of a virgin” in Matthew and Luke, to the preexistent god and Creator of the Universe in John.
Historians should consider how the evolution of Jesus in the gospels parallels the evolution of christian beliefs in the writings of the church fathers.
When we do that, we find some of the contradictions make sense, not because they are not contradictions (they are), but because the bible was changing radically in order to keep up with radical new christian beliefs.
In a nutshell (pun intended), fourth-century Jesus was very different from first-century Jesus.
John Spencer:
If the Bible was what believers say it is, there would be no need for a whole apologetics industry to explain anything away.
If the Bible is what believers claim it is — the authoritative and infallible “Word of God”, then none of this would be necessary. An omniscient God would be perfectly aware that people would come along in the future and find all these problems with the scriptures. An omnipotent God would have no difficulty whatsoever in inspiring people to write his truth accurately.
The reality is, the errors, inconsistencies and contradictions in the gospels — as in other parts of the Bible — are proof (if proof be needed) that the Bible is no more a revelation of a God or gods than any other purported ancient holy book.
This all started because 18th- and 19th-century evangelicals foolishly promulgated the absurd (and ironically unbiblical) doctrine of Bible ‘inerrancy’, largely in response to the work of German critical Bible scholars. It then became open season for skeptics and atheists, who easily found myriads of errors and contradictions to throw at the fundamentalists.
The result was a kind of arms race of who could outdo whom — it was like a game of theological ‘whack-a-mole’ in which a skeptic would raise a problem passage, an apologist would attempt to answer it, and then on to the next one. That is still going on today.
Surely it would have been better not to have put the idea into people’s heads that the Bible was infallible in the first place. A bit like LDS with the Book of Mormon — point out an error or contradiction, and their response is, “Meh, so what?” and a shrug.
Mike Pitamber:
Also, if “believers” truly believed (and were not hypocrites), there would be no need for them to go around trying to convert non-believers; “by their fruits you shall know them” would be evident. And this goes for ALL claim-to-be-religious people, not just Christians.
*
THE ORIGIN OF THE WORD “VATICAN”
Some scholars regard it as an Etruscan loan-word, unrelated to vates "soothsayer, prophet, seer" — but most others connect the two via the notion of "hill of prophecy" (compare Latin vaticinatio "a foretelling, soothsaying, prophesying," vaticinari "to foretell"). Compare vaticinate.
VATES: 1620s, "poet or bard," specifically "Celtic divinely inspired poet" (1728), from Latin vates "soothsayer, prophet, seer," from a Celtic source akin to Old Irish faith "poet," Welsh gwawd "poem," reconstructed in Watkins to be from PIE root *wet- (1) "to blow; inspire, spiritually arouse" (source also of Old English wod "mad, frenzied," god-name Woden; see wood).
VATICINATE
"to prophecy, foretell," 1620s, a back formation from vaticination or else from Latin vaticinatus, past participle of vaticinari "foretell, predict," from vates (see vates) + formative element -cinus. Related: Vaticinated; vaticinating; vaticinal; vaticinator; vaticinatress.
*
DOES WHAT YOU EAT AFFECT HOW WELL YOU SLEEP?
Tania Whalen finds it impossible to get enough sleep between her shifts, working early mornings and nights as a fire brigade dispatcher in Melbourne, Australia. So to help her power through a long night of answering emergency calls and sending out crews she would often take snacks to work.
Tania was also a regular at the fire station vending machine, buying crisps or chocolate most night shifts. It was a diet she knew wasn't doing her much good - she was piling on the pounds — and yet it was difficult to resist.
And Tania's behavior was not unusual. When people haven't had enough rest, they crave food.
"Some rather fiendish changes unfold within your brain and your body when sleep gets short, and set you on a path towards overeating and also weight gain," says Prof Matthew Walker, director of the Center for Human Sleep Science at the University of California.
When we're awake for longer, we do need more energy, but not that much — sleep is a surprisingly active process and our brains and bodies are working quite hard, Prof Walker says. Despite that, when deprived of sleep we tend to overeat, taking in two or three times (or more) the extra calories we actually need.
LEPTIN AND GHRELIN
That is because sleep affects two appetite-controlling hormones, leptin and ghrelin. Leptin will signal to your brain that you've had enough to eat. When leptin levels are high, our appetite is reduced. Ghrelin does the opposite — when ghrelin levels are high, you don't feel satisfied by the food that you ate.
In experiments it has been shown that when people are deprived of sleep these two hormones go in opposite directions — there's a marked drop in leptin, which means an increase in appetite, while ghrelin rockets up, leaving people unsatisfied.
It's like double jeopardy, Prof Walker says. "You're getting punished twice for the same offense of not getting sufficient sleep."
Why might this be happening? Prof Walker thinks there is an evolutionary explanation.
Animals rarely deprive themselves of sleep, unless they are starving and need to stay awake to forage for food. So when we don't have sufficient sleep, from an evolutionary perspective, our brain thinks that we may be in a state of starvation and will increase our food cravings to drive us to eat more.
And not having enough sleep doesn't only affect how much we eat, but also what we eat.
A small study carried out by Prof Walker showed that participants were more likely to crave sugary, salty and carbohydrate-heavy foods when they were sleep-deprived.
None of which is good news for tired night-shift workers like Tania Whalen. In fact, the situation may be even worse for them since it's not just what they're eating that is a problem, but when they're eating it too.
Our bodies are primed to follow a regular 24-hour rhythm, says Dr Maxine Bonham, an associate professor of nutrition dietetics and food at Monash University, Melbourne. "We expect to work and eat and exercise during the day, and we expect to sleep at night, and our body is geared to do that. So when you work a night shift, you're doing everything in opposition to what your body's expecting."
And that means we struggle to process food when we eat night-time meals.
Eating at night can lead to higher glucose levels and more fatty substances in the blood, as the body is less able to break down and metabolize nutrients in the small hours, says Dr Bonham.
Shift workers are known to be more at risk of weight gain, type-2 diabetes and cardiovascular disease.
People who work at night are also more likely to be overweight. They may eat out of boredom or to stay awake, and there may not be anything healthy for them to eat, just a vending machine or takeaway food.
All this has inspired Dr Bonham and her colleagues to set up an experiment to see if they can help people who work night shifts lose excess weight and improve their overall health.
They've recruited some 220 shift workers who want to lose weight, and have been putting them on a variety of diets for a six-month period. Tania Whalen, the fire brigade dispatcher, signed up to follow a fasting program: for two days each week, she had to consume just 600 calories in 24 hours.
"It was tough," Tania says. "I was quite worried that I wouldn't be able to do it. Some weeks it felt like a 12-hour shift took 20 hours.”
But she stuck with it, distracting herself by reading, playing games, going for walks and drinking gallons of peppermint tea.
The study's results are not yet in, but Tania feels like it's been a positive experience that's prompted her to make other changes — for example, she now walks 5km every day. "I certainly have more energy and more desire to move, and I have lost a considerable amount of weight," she says.
Interestingly, Tania thinks it has helped her sleep better, too. "Even in the limited hours that I get, I don't toss and turn as much and I've stopped snoring mostly, or so my husband tells me."
It's not clear whether this improvement is down to the new diet, the exercise, the weight loss, or something else entirely, but it does raise the question of whether what you're eating can affect how you're sleeping.
Until now we've been talking about how sleep — or a lack of it — can affect what you eat. But what if you could eat your way to a good night's sleep?
Dr Marie-Pierre St-Onge is a nutrition and sleep researcher in New York. She had spent years studying the impact of insufficient sleep on diet when, in 2015, she was contacted by the committee drawing up the Dietary Guidelines for Americans. Should they, the committee asked, be advising people on what to eat to improve their sleep?
"My first reaction was, 'Why didn't I think of that?'"
Melatonin, the hormone that promotes sleep and which rises in the evening, comes from a dietary amino acid called tryptophan. "So, if the hormone that regulates your sleep is produced entirely from an amino acid that must be consumed in the diet, then it makes sense that diet would be important in regulating sleep," she says.
And yet Dr St-Onge couldn't find any studies focusing on this relationship. So she and her team began looking at research into other health matters, which had recorded participants' sleep and dietary habits. Examining that data, a clear pattern emerged, she says.
Individuals who followed a Mediterranean-style diet — eating lots of fruits and vegetables, fish and whole grains — had a 35% lower risk of insomnia than those who didn't, and were 1.4 times more likely to have a good night's sleep.
So what is so sleep-inducing about that diet? Foods such as fish, nuts and seeds are high in melatonin-producing tryptophan. And a number of small studies have shown that some specific foods such as tomatoes, tart cherries and kiwi fruit, which contain melatonin, may help people fall asleep more easily and stay asleep for longer.
There are also foods to avoid before bed. Most people know about caffeine, which is a stimulant, but perhaps they don't realize salty foods can make you thirsty, which can disturb your sleep. Dr St-Onge's study also suggests eating sugary foods may lead to a more disturbed night. Her team is looking into why that may be.
Studies examining the influence of food on sleep are still few in number, and small in size, so Dr St-Onge says little can be taken as scientific fact. However, they do raise the possibility that eating certain foods can lead to better sleep.
https://getpocket.com/explore/item/the-surprising-links-between-what-you-eat-and-how-well-you-sleep
*
NEW THERAPIES FOR CANCER
In 2012, clinicians at the Children’s Hospital of Philadelphia treated Emily Whitehead, a 6-year-old with leukemia, with altered immune cells from her own body. At the time, the treatment was experimental, but it worked: The cells targeted the cancer and eradicated it. Thirteen years later, Whitehead is still cancer-free.
The modified cells, called CAR-T cells, are a form of immunotherapy, where doctors change parts of the immune system into cancer-attacking instruments. About five years after Whitehead’s treatment, the first CAR-T drugs were approved by the FDA and were heralded, along with immunotherapy more broadly, as one of the most promising modern cancer treatments. Today, there are seven FDA-approved CAR-T therapies, including the one used to treat Whitehead.
Since then, however, studies have linked CAR-T to fatal complications due to treatment toxicity, and the treatment has had a harder time addressing certain types of cancers, particularly solid tumors affecting the breast and pancreas, although some small clinical trials have been starting to show positive results for solid cancers. “After a decade, a decade and a half, we arrive at the point that there are patients who respond, most of the patients still do not respond,” said George Calin, a researcher at University of Texas MD Anderson Cancer Center.
Now experts say that new therapies are beginning to overcome challenges that previous treatments couldn't, providing safer, more targeted delivery directly to tumors. These include drugs that contain radioactive substances, called radiopharmaceuticals, which are used to diagnose or treat cancer; medications that can influence the genes that spur or suppress tumor growth; and therapeutic cancer vaccines.
These approaches have shown promise in the lab, and researchers and companies are now conducting various stages of human clinical trials to explore their effectiveness. And some promising treatments have even gained approval by the Food and Drug Administration. The hope is that improving on these strategies will ultimately help treat even the most resistant types of cancer.
Despite researchers’ excitement for innovative treatments, there is rampant online misinformation and there are occasions in which companies have been found to tout and sell fake cures, said Kathrin Dvir, an oncologist and researcher at Moffitt Cancer Center.
But other scientists remain optimistic about the future of cancer research, Calin said: “All the time in science, you have to open the door with something new.”
mRNA therapeutic vaccines for cancer, which use messenger RNA as blueprint material so the body can create proteins that are unique to the tumor to help elicit an immune response, may offer several advantages. The shots can be personalized, for instance, to the patients’ own tumors, said Siow Ming Lee, an oncologist at University College London Hospitals and one of the lead researchers of the trial. Other vaccines are also in the works. “We are in this sort of new era now,” he said.
Another type of genetic molecule could also be a target to help treat cancer. Some RNAs, called microRNAs, can act on genes that are responsible for tumor growth. Researchers like Calin are developing small molecules that bind to cancer-related microRNAs, to turn them off and try to halt the disease’s spread.
With FDA approvals, human clinical trials underway, and promising preclinical data for many of these therapies, the researchers who spoke to Undark said that the future appears bright. “We’re not just seeing these dramatic improvements in outcomes and survival for patients with some indications, but the quality of life,” Lewis said.
As more of these latest cancer technologies get approved for treatment, new approaches can bring new problems, experts say. For example, with radiotherapeutics, one big challenge is to source enough radioisotopes for the drugs and to have a specialized workforce to handle radioactivity, said Lewis. For microRNAs, it’s tricky to identify exactly which type to target for a particular cancer, Calin emphasized.
And there are also companies that are trying to capitalize on new, unproven technologies and drugs prematurely. The company ExThera Medical, for instance, has been charging patients tens of thousands of dollars for unproven therapies, according to a recent report by The New York Times.
“All over the world, there are many so-called new therapeutics that are not well tested and not well developed,” said Calin. Dvir encounters misinformation at her clinic almost daily, she said. “Maybe some of those have some data in the preclinical, in animal studies — it doesn’t mean that it works on the human because we need data before you expose people to those therapies.”
Although the FDA faces budget cuts, some of the researchers and clinicians that Undark spoke to insist that the agency will weed out bad science. If not, the clinicians that Undark spoke with said that they can also help guide patients toward evidence-based treatments.
Ultimately, researchers want to continue to improve these treatments to see if they might work in tandem. “I think the name of the game in the next five to 10 years is combinations,” said Dvir. Already, there are trials looking at precisely how using different approaches together might boost their ability to treat cancer, she adds. “We know that these drugs work in synergy. It’s just finding the right combination that is effective but not too toxic.”
https://undark.org/2025/05/12/cancer-therapies-new-era/?utm_source=firefox-newtab-en-us
*
SCHIZOPHRENIA AND THE IMMUNE SYSTEM
At least since Susannah Cahalan wrote the 2012 book Brain on Fire, there has been resurging speculation that psychosis might actually be a disorder of the immune system. Cahalan eventually came to understand that her psychosis was the result of an autoimmune disorder, which occurs when the immune system attacks healthy cells—and interestingly, psychosis caused by autoimmune disorders is generally not responsive to psychiatric drugs.
Autoimmune responses are not the same as a healthy immune system working to counteract a foreign biological invader. Yet the study of parasites or viruses in healthy immune systems is a current interest among researchers who study the development of schizophrenia. This has led to newer studies that distinguish between autoimmune psychosis, like what Cahalan experienced, and psychosis caused by immune systems working overtime to defend against biological threats.
THE IMMUNE SYSTEM AND SCHIZOPHRENIA: A HISTORY
The idea that schizophrenia might have something to do with the immune system is not new. It has been around at least since the influenza pandemic in 1918. At that time, psychosis was thought to be caused by cerebral inflammation, called encephalitis lethargica, in which brain tissue swells and sometimes turns red, possibly due to bacterial or viral infections (like the flu).
After the widespread panic caused by the pandemic simmered down, interest in physical diseases as an influence for psychosis fell out of fashion.
Instead, researchers turned to an idea called the dopamine hypothesis, which argued that schizophrenia was a result of an imbalance or dysregulation in the way dopamine was transmitted, synthesized, and used in the brain. This theory was dominant for many decades and led to the development of drugs that target dopamine.
A subsequent theory of interest was that of neurodevelopment—the idea that schizophrenia was a developmental disorder that occurred in adolescence. This would explain why schizophrenia mainly emerges in the young adult years. However, it did not adequately explain the varied causes of all the disorders on the schizophrenia spectrum.
Throughout this time, the idea that immunological disruptions could lead to schizophrenia never fully disappeared. It resurfaced again in the 1970s when the well-known psychiatrist E. Fuller Torrey—author of Surviving Schizophrenia—became interested in the possibility that schizophrenia was largely caused by infections and broadly biological factors.
How far has our knowledge come since then? In recent years, you may have seen research finding that those living with cats early in childhood might be more vulnerable to the development of schizophrenia. This is because cat feces can carry a parasite (Toxoplasma gondii) that could affect the human brain during critical developmental stages.
Researchers in several articles this year have summarized what we know about schizophrenia spectrum disorders and the possibility that they might occur because of a disruption in the immune system.
HOW THE IMMUNE SYSTEM WORKS
The immune system responds to physical, psychological, and pathogenic stress. When we have an infection in the body, the immune system responds by producing an effect called inflammation. Inflammation occurs because the body wants to protect itself and it also helps with the healing process. But it can also do slight damage to other organs since it produces redness, swelling, and other symptoms.
The central nervous system (CNS) has its own arm of the immune system, specifically designed to protect the brain from infection. It extracts toxins, regulates neurotransmitters, and more. Just as the body can be inflamed, the brain can also experience inflammation (known as neuroinflammation).
A substance called cerebrospinal fluid (CSF), which is a clear fluid that surrounds the brain and protects it from damage, can affect brain development through what is called the blood-brain barrier. CSF is important because whatever is in the fluid can cross over to affect brain function and activity.
There are a few possible ways that the blood-brain barrier can be weakened, including infections during pregnancy, head injuries, childhood trauma, and substance use, to name a few. When this occurs, it can leave the brain vulnerable to future pathogens and infections. In fact, changes in the immune system have been shown to affect the growth of neurons, which in turn can affect dopamine interaction.
SCHIZOPHRENIA AND IMMUNOLOGY
So, can changes in the immune system play a role in the development of schizophrenia?
In a systematic review and meta-analysis of 69 studies by Warren et al., 5,710 participants (3,180 of whom had been diagnosed with a schizophrenia spectrum disorder) were included to measure cerebrospinal fluid (CSF) and signs of inflammation. These signs include cytokines (proteins that communicate between cells to essentially tell the immune system to begin fighting off infections) and white blood cell count. When more cytokines and white blood cells are present in the bloodstream, for example, it usually indicates that there is a threat to the immune system and an increased amount of inflammation.
Overall, 3.1 percent to 23.5 percent of patients with a schizophrenia spectrum disorder had increased amounts of inflammatory proteins like cytokines. This could indicate inflammation.
However, people on antipsychotics can often experience an increase in white blood cell count while on their drugs. Information about which patients were on antipsychotics at the time of the study was not fully reported. Abnormal CSF proteins can also be present with conditions that are often diagnosed alongside schizophrenia, such as diabetes and alcohol use disorder.
LIMITATIONS AND FUTURE STUDIES
The studies had several limitations: Comorbidity with other behaviors and conditions that could affect CSF were not clearly defined or measured; secondary conditions like autoimmune encephalitis were not acknowledged; and long-term studies were not done often, to name a few.
The authors concluded that it is too early to say whether schizophrenia spectrum disorders more broadly, as well as the specific symptom of psychosis, are caused by immunological dysfunction. Psychosis could be similar to a fever—that is, it may simply be a symptom, the causes of which could stem from one of many underlying biological infections or disturbances. However, the authors note, the current evidence suggests that CSF abnormalities may contribute to various patients’ development of schizophrenia spectrum disorders.
Anti-inflammatory drugs have been developed to treat patients with schizophrenia, but those drugs do not necessarily cure psychosis in every patient. There is some interest in conducting future studies that target patients who specifically experience heightened abnormal CSF and psychosis.
Schizophrenia is very heterogeneous in symptoms—and current evidence suggests it could very well be heterogeneous in origin, too. As theories develop, potential sources of psychosis could be broken down with more precision, which could help patients better manage all forms of psychosis in the future.
https://www.psychologytoday.com/intl/blog/living-as-an-outlier/202411/can-a-broken-immune-system-cause-psychosis
WHY HUMANS DON’T HAVE FUR
Most mammals, including our closest living relatives, have hairy coats. So why did we lose ours?
If an alien race came to Earth and lined up humans in a row alongside all the other primates, one of the first differences they might observe – together with our upright position and unique form of communication – is our apparently furless bodies.
In fact, compared to most mammals, humans are remarkably unhairy (granted, with the exception of the occasional individual). A handful of other mammals do share this quality – naked mole rats, rhinos, whales and elephants among them. But how exactly did we end up in this bare state? Does it bring us any benefits today? And how do we account for the presence of thick, dense hair on some parts of our bodies?
Of course, humans actually do have lots of hair: on average we have approximately five million hair follicles across our body surface. But almost all the hair follicles on human bodies produce vellus hair: fine, short, fuzzy hair that grows from shallow follicles, different from the deeper, thicker terminal hairs found only on our heads and (after puberty) our underarms, pubic areas and, mostly among men, faces. "We technically have hair all over our bodies, it's just miniaturized hair follicles," says Tina Lasisi, a biological anthropologist at the University of Southern California who specializes in the science of hair and skin. "But it's miniaturized to the point where it functionally doesn't insulate us anymore.”
Scientists don't definitively know the reason behind this change from thicker, coarser fur to these light vellus hairs, and they also don't know exactly when it happened. Still, several theories have been suggested as to what could have sparked the loss of our hair.
The most dominant view among scientists is the so-called "body-cooling" hypothesis, also known as the "savannah" hypothesis. This points to a rising need for early humans to thermoregulate their bodies as a driver for fur loss.
During the Pleistocene, Homo erectus and later hominins started persistence hunting on the open savannah – pursuing their prey for many hours in order to drive it to exhaustion without the need for sophisticated hunting tools, which appear later in the fossil record.
This endurance exercise could have put them at risk of overheating – ergo the fur loss, which would have allowed them to sweat more efficiently and cool down faster without needing breaks.
Evidence for this theory also comes from studies that have found switches for some genes responsible for determining whether certain cells develop into sweat glands or hair follicles. "So all of these things have a related developmental pathway," says Lasisi. "If we look at that in combination with some of the things we're able to infer about genes that increased human skin pigmentation, then we're able to basically confidently guesstimate that 2-1.5 million years ago… humans probably would have lost their body hair."
A related theory set out in the 1980s suggested that the shift to an upright, bipedal posture reduced the benefit of fur for reflecting radiation away from our bodies (except on the top of our heads). And since we sweat more effectively without fur, bare skin became relatively more advantageous than keeping a coat.
But while the body-cooling hypothesis ostensibly seems to make a lot of sense, and there may be some merit to it, it fails in some realms, argues Mark Pagel, a professor of evolutionary biology at the University of Reading.
"When you study our body heat over a 24-hour period, we lose more heat at night than we want to, and so the net effect of losing your fur is that we're in a sort of energy deficit all of the time," he says. He also notes there are lots of human populations that have not done endurance running for tens of thousands of years, but none have grown their fur back, despite many now living in very cold regions of the world.
Lasisi, however, says that hyperthermia – an abnormally high body temperature – would likely have been a far bigger issue than hypothermia in equatorial Africa, where humans evolved. "It seems to me that there is a bit of a stronger pressure to not overheat, rather than one to necessarily stay warm.”
She also notes many genetic traits can become canalized – difficult to re-evolve in different ways – and that by the time humans reached colder environments, they had developed other technologies to keep warm, such as fire and clothing. They also likely developed other physiological adaptations to cold such as brown fat adaptation, she adds.
In 2003, Pagel and his colleague Walter Bodmer at the University of Oxford put forward another explanation for early human fur loss, which they called the ectoparasite hypothesis. They argued that a furless ape would have suffered from fewer parasites, a major advantage. "If you look around the world, ectoparasites are [still] an enormous problem in the form of biting flies that carry disease," says Pagel. "And those flies are all specialized to land on and live in fur and deposit their eggs in fur… Parasites are probably one of the strongest selective forces in our evolutionary history, and still are." Pagel says "nothing's come along to make us question" this hypothesis since he and Bodmer first came up with it.
The hair on our heads remained when early humans lost their fur – likely as protection from solar radiation.
Lasisi says that she wouldn't exclude the possibility of other factors contributing to fur loss. But "you really have to ask yourself, well, why would this happen in humans and not chimpanzees, not bonobos, not gorillas?" she says. "I'm inclined to focus on hypotheses that are able to suggest behaviors or migrations into places that would have set humans apart from other apes in a way that would have required hair loss.”
The aquatic ape hypothesis
Another, less plausible, theory comes from the largely dismissed aquatic ape hypothesis, first proposed in 1960. According to this idea, the apes that eventually became humans diverged from the other great apes by adapting to spend significant time in water, and the resulting adaptations would explain characteristics of modern humans such as our hairlessness and bipedalism.
The problem with this? "Anthropologically, there just isn't a shred of evidence that we evolved on beaches or near the water, [or] had an aquatic phase," says Pagel. "It's unfortunate." Other scientists have pointed out that semi-aquatic mammals like otters and water voles are extremely furry, so why would humans have lost their fur for this reason?
*
In particular, tightly coiled human hair has an intricate structure that leaves air pockets open, allowing it to dissipate heat very effectively while minimizing how much heat comes down to the scalp, she says.
"The more space you can put between where solar radiation is hitting, so the top of the hair, and what you want to protect, which is your scalp, the better off you are.” As for pubic and underarm hair Lasisi considers this could either be a so-called spandrel – a byproduct of the evolution of another characteristic – or potentially a leftover from primate ancestors that used pheromones to communicate with each other (there's no good evidence humans use pheromones today).
No matter what prompted the loss of human fur, one thing is extremely likely: it coincided with early humans evolving darker skin pigmentation as protection from the UV radiation that body hair would previously have blocked.
"It's the logical inference that we can make," says Lasisi. "It could be that some humans just ended up being born with entirely hairless bodies, and then that became an adaptation in tandem with some of those humans evolving darker skin. Or it could be that there was a slightly more gradual reduction in hair that was happening with a slightly more gradual increase in skin pigmentation.”
While it's interesting to consider how we lost our fur, it may seem less than relevant to our lives today. But research has indicated that a better understanding could have implications for people experiencing unwanted hair loss now, whether from balding, chemotherapy or medical conditions.
In early 2023, Nathan Clark, a geneticist at the University of Utah, and his colleagues Amanda Kowalczyk and Maria Chikina at the University of Pittsburgh, surveyed the genes of 62 mammals including humans to find the genetic shifts hairless mammals shared with each other to the exclusion of their furry cousins. They found that humans seemed to have the genes for a full coat of body hair, but our genome regulation currently stops them from being expressed.
They also found that when a species loses hair, it does so through changes to the same set of genes again and again, and the researchers uncovered several new genes involved in this process. "Some of those [new] genes hadn't been really characterized at all, because people hadn't done many genetic screens on presence and absence of hair in the past," says Clark. "[They] seem to maybe be master controllers that might be manipulated in the future if people wanted to stimulate hair growth.”
https://www.bbc.com/future/article/20230310-why-dont-humans-have-fur
*
THE DANGER OF SUPERHUMAN AI
Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence. Far from a harmless bit of marketing spin, the headlines and quotes trumpeting our triumph or doom in an era of superhuman AI are the refrain of a fast-growing, dangerous and powerful ideology. Whether used to get us to embrace AI with unquestioning enthusiasm or to paint a picture of AI as a terrifying specter before which we must tremble, the underlying ideology of “superhuman” AI fosters the growing devaluation of human agency and autonomy and collapses the distinction between our conscious minds and the mechanical tools we’ve built to mirror them.
Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love.
Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside.
That’s why, at a machine learning conference in September of 2023, I asked the Turing Award winner Yoshua Bengio why we keep hearing about “superhuman” AI when the products available are so far from what a human is, much less superhuman. My keynote prior to his had openly challenged this kind of rhetoric, which featured heavily in Bengio’s subsequent presentation — just as it does on his website and in his warnings to lawmakers and other audiences that humans risk “losing control to superhuman AIs” in just the next few years.
Bengio was once one of the more sober and grounded voices in the AI research landscape, so his sudden adoption of this rhetoric perplexed me. I certainly don’t disagree with him about the dangers of embedding powerful but unpredictable and unreliable AI systems in critical infrastructure and defense systems or the urgent need to govern these systems more effectively.
But calling AI “superhuman” is not a necessary part of making those arguments.
So, I asked him, isn’t this rhetoric ultimately unhelpful and misleading given that the AI systems that we so desperately need to control lack the most fundamental capabilities and features of a human mind? How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver?
Aren’t we more than that? And doesn’t granting the label “superhuman” to machines that lack the most vital dimensions of humanity end up obscuring from our view the very things about being human that we care about?
I was trying to get Bengio to acknowledge that there is a huge difference between superhuman computational speed or accuracy — and being superhuman, i.e., more than human. The most ordinary human does vastly more than the most powerful AI system, which can only calculate optimally efficient paths through high-dimensional vector space and return the corresponding symbols, word tokens or pixels. Playing with your kid or making a work of art is intelligent human behavior, but if you view either one as a process of finding the most efficient solution to a problem or generating predictable tokens, you’re doing it wrong.
Bengio refused to grant the premise. Before I could even finish the question, he demanded: “You don’t think that your brain is a machine?” Then he asked: “Why would a machine that works on silicon not be able to perform any of the computations that our brain does?”
The idea that computers work on the same underlying principles that our brains do is not a new one. Computational theories of the mind have been circulating since the 20th century origins of computer science. There are plenty of cognitive scientists, neuroscientists and philosophers who regard computational theories of mind as a mistaken or incomplete account of how the physical brain works (myself among them), but it’s certainly not a bizarre or pseudoscientific view. It’s at least conceivable that human brains, at the most basic level, might be best described as doing some kind of biological computation.
So what surprised and disturbed me about Bengio’s response was not his assumption that biological brains are a kind of machine or computer. What surprised me was his refusal to grant, at least initially, that human intelligence — whether computational at the core or not — involves a rich suite of capabilities that extend well beyond what even cutting-edge AI tools do.
We are more than efficient mathematical optimizers and probable next token generators. I had thought it was a fairly obvious — even trivial — observation that human intelligence cannot be reduced to these tasks, which can be executed by tools that even Bengio admits are as mindless, as insensible to the world of living and feeling, as your toaster. But he seemed to be insisting that human intelligence could be reduced to these operations — that we ourselves are no more than task optimization machines.
I realized then, with shock, that our disagreement was not about the capabilities of machine learning models at all. It was about the capabilities of human beings, and what descriptions of those capabilities we can and should license.
Reclaiming Our Humanity
The battle is not lost, however. As the philosopher Albert Borgmann wrote in his 1984 book “Technology and the Character of Contemporary Life,” it is precisely when a technology has nearly supplanted a vital domain of human meaning that we are able to feel and mourn what has been taken from us. It is at that moment that we often begin to resist, reclaim and rededicate ourselves to its value.
His examples might seem mundane today. He wrote about the post-microwave revival of the art of cooking as a cherished creative and social practice, one irreplaceable by even the most efficient cooking machines. Indeed, the skilled and visionary practice of cooking now carries far greater cultural value and status than it did in the late 20th century.
Similarly, the treadmill did not eliminate the irreplaceable art of running and walking outdoors just by offering a more convenient and efficient means to the same aerobic end. In fact, Borgmann thought the sensory and social poverty of the experience of using a treadmill or microwave could reinvigorate our cultural attention to what they diminished — activities that engage the whole person, that continually remind us of our place in the physical world and our belonging there with the other lives who share it. He was right.
Perhaps the ideology of “superhuman” AI, in which humans appear merely as slow and inefficient pattern matchers, could spark an even more expansive and politically significant revival of humane meaning and values. Maybe the moral and experiential poverty of AI will bring the most vitally human dimensions of our native intelligence back to the center of our attention and foster a cultural reclamation and restoration of their long-depreciated value. What might that look like? Imagine any sector of society where the machine ideology now dominates and consider how it would look if the goal of mechanical optimization became secondary to enabling humane capabilities.
Let’s start with education. In many countries, the former ideal of a humane process of moral and intellectual formation has been reduced to optimized routines of training young people to mindlessly generate expected test-answer tokens from test-question prompts. Generative AI tools — some of which advertise themselves as “your child’s superhuman tutor” — promise to optimize even a kindergartener’s learning curve. Yet in the U.S., probably the world’s tech-savviest nation, young people’s love of reading is at its lowest levels in decades, while parents’ confidence in education systems is at a historic nadir.
What would reclaiming and reviving the humane experience of learning look like? What kind of world might our children build for themselves and future generations if we let them love to learn again, if we taught them how to rediscover and embrace their humane potential? How would that world compare to one built by children who only know how to be an underperforming machine?
Or consider the economy. How would the increasingly sorry state of our oceans, air, soil, food web, infrastructures and democracies look if we stopped rewarding mindless, metastatic growth in “domestic product”?
How would the future we are headed for change if we mandated new economic incentives and measures tied to medium- and long-term indicators of health, sustainability, human development and social trust and resilience?
What if tax relief for wealthy corporations and investors depended entirely on how their activities enabled those humane indicators to rise? How would our jobs change, and how might young people’s enthusiasm for investing their energies in the workforce be boosted, if the measure of a company’s success were not simply the mechanical optimization of its share price, but a richer and longer-term assessment of its contribution to the quality of our lives together?
What about culture? How different would the future look if current efforts to use AI to replace human cultural outputs were stalled by a renewed affection for our own capacity to create meaning, to tell the world’s stories, to invent new forms of beauty and expression, to elevate and ornament the raw animal experience of living? What if, instead of replacing these humane vocations in media, design and the arts with mindless mechanical remixers and regurgitators of culture like ChatGPT, we asked AI developers to help us with the most meaningless tasks in our lives, the ones that drain our energy for everything else that matters? What if you never had to file another tax form?
What if we designed technologies like AI with and for the benefit of those most vulnerable to corruption, exploitation and injustice? What if we used our best AI tools to more quickly and reliably surface evidence of corrupt practices, increase their political cost and more systematically push corruption and exploitation toward the margins of public life?
What if populations collectively vowed to reward only those politicians, police and judges willing to take the risks of demonstrating greater transparency, accountability and integrity in governing?
Even in these more humane futures, we’d be far from utopia. But those possible futures are still much brighter than any dominated by the ideology of superhuman AI.
That doesn’t mean that AI has no place in a more humane world. We need AI to take over inherently unsafe or human-unfriendly tasks like environmental cleanup and space exploration; we need it to help us slash the costs, redundancies and time burden of mundane administrative processes; we need AI to scale up infrastructure maintenance and repair; we need AI for the computational analysis of complex systems like climate, genetics, agriculture and supply chains. We are in no danger of running out of important things for our machines to do.
We are in danger of sleepwalking our way into a future where all we do is fail more miserably at being those machines ourselves. Might we be ready to wake ourselves up? In an era that rewards and recognizes only mechanical thinking, can humans still remember and reclaim what we are? I don’t think it is too late. I think now may be exactly the time.
https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
ending on beauty:
Mary:
LEONARDO: THE VIRGIN OF THE ROCKS
These rocks like giant gourds
and the luxuriant confusion
about which chubby child
is Jesus and which John —
the angel’s geometry
and the Madonna’s closed eyes
lend an eternal calm because
it doesn’t matter which
child is which: not because
we are all Christ (and Judas too);
what matters is the umber light
and the gold-copper hair
coiling on all the figures here,
the angel’s tendrils
greening wing-ward —
and how the heavy rocks
levitate, enlaced with the pale
bronze ferns and vines and palms.
And we want to know
if the angel too
will partake of the picnic
hidden behind a rock.
But the answer of the Holy See
to all such queries is No:
the angel will be kept hungry
century after century
while we feast on his or her
green sleeve and red robe.