*
PICASSO: “SEATED BATHER”
Cruelty requires angles:
the piranha face,
the cheeks that close like teeth
her amber-
gleamed perfection slips
on days that rust in waves
hacked through
by the metalloid sea,
she sits, collapsing
arms clutch knees,
an empty basket
under the cone of nose
that beaks the air
something seeps
from the saw-toothed space
a voice
once
you loved me
~ Oriana
*
DANTE’S DIVINE AUTOFICTION
Studies of the reception and “afterlife” of classic works are becoming something of a trend. Last year, Orlando Reade’s What in Me Is Dark received well-earned praise for its tracking of the surprising career of Milton’s Paradise Lost in the centuries following its composition – not least its role in shaping a revolutionary political imagination. Dante’s Commedia (or Divine Comedy, written in the opening decades of the 14th century) is already a heavily and explicitly political text, in which the poet at times exhibits a positively Trumpian relish in imagining the defeat and torment of his enemies. But its long-term reception is about a great deal more than politics.
Joseph Luzzi has already written, movingly and insightfully, about his own journeyings with Dante during a time of profound personal grief. In this lively and engaging book, he explains how the Commedia (along with some of Dante’s other core writings) in turn attracted ecclesiastical suspicion before becoming something like a touchstone of orthodoxy; how it laid the foundations of Italian literature by more or less inventing an “Italian” language – probing, stretching, renovating the Tuscan dialect to make it a credible vehicle for the most ambitious ideas and images; how it attained the status of a definitive portrait of the Romantic sensibility, only to be claimed by 20th-century modernists – Joyce, Pound and Eliot – in a successful counter-coup.
It is a dramatic story, appropriate to Dante’s own dramatic life. Deeply involved in public life in 13th- and 14th-century Florence, Dante made some powerful enemies through his strong resistance to the extension of papal influence in the politics of the city. He ended up exiled from his native city and separated from his family; the Commedia is haunted by the themes of betrayal and homelessness and a passionate longing for the embrace of community, for true peace, internal and external.
But the poem, leading us in turn through Hell, Purgatory and Paradise, is not itself an account of some single individual’s struggles and sufferings. It is a huge irony that Romantic (including some Victorian) readers treated it as a kind of hymn to the unconquerable human spirit: Luzzi quotes Macaulay rhapsodizing about Dante as if he were some kind of Byronic action hero, and shows how Byron himself saw Dante as a paradigm for the “poet of action” – eliding Dante’s vigorous political career with the persona of the poet in the Commedia.
One of the favorite passages for readers of this stamp was the speech Dante puts into the mouth of Ulysses in the Inferno – a rousing call to self-transcending exploration fitting for the splendor and dignity of the human soul: “You were not made to live like brutes,” says Ulysses to his comrades.
Unfortunately, it is quite clear that this wonderful speech is there to illustrate just why (from Dante’s point of view) Ulysses is in Hell. His passion for endless adventure at all costs results in betrayals of his duties and the death of his friends. Luzzi has a fascinating discussion of the creative misunderstanding that Dante somehow endorses Ulysses’ pride and folly, showing how it pervades Madame de Staël’s great Romantic novel Corinne and is reflected, rather more startlingly, in Mary Shelley’s Frankenstein – though, as Luzzi rightly notes, Shelley has grasped “the formidable ironies of Ulysses’ speech” as Dante frames it, the rhetorical elevation that conceals a raging Faustian pride, indifferent to the suffering of others.
And yet, as a later chapter shows us, there is a further depth of irony to be plumbed. Primo Levi describes how, in Auschwitz, recalling the same speech of Ulysses, he experienced “a flash of disassociation, taking brief flight from his living hell”, a reminder of human dignity and solidarity. Paradoxically, Ulysses’ empty but stirring words, illustrating just why he is condemned to Hell for his arrogance, serve for a moment to release someone else from a hell of humiliation and isolation.
Luzzi has some helpful things to say about how, especially in 19th-century England and America, the understandable link between Dante and Milton as Christian epic poets led to a certain blurring of perceptions. Nothing, as Luzzi says, could be more different from Dante’s silent frozen Lucifer, eternally masticating the bodies of sinners with tears of ice on his cheeks, than Milton’s furious, manically talkative, feverishly resentful Satan.
But somehow the Romantic fascination with Milton’s picture of a lost but defiantly rebellious spirit colored the way Dante was interpreted. Luzzi discusses the way in which both poets treat free will as central to God’s gift of dignity to human beings, and suggests that Milton is drawing on Dante in this respect. But I am less convinced: there is nothing in Milton on this subject that is not part of a common heritage of Christian reflection.
What is more, where Milton appeals to human freedom most eloquently in attempting to explain the fall of Adam and Eve, Dante is far less interested in arguments from or about free will, and more concerned with the paradox that the free surrender of our desires to God creates a deeper freedom in us, because we are then released to be ourselves as we should be, in harmony with God and the world.
Milton, says Luzzi, “drew on the literary energies of the Commedia” in his ideal of the artist as well as in his theological evaluation of freedom. Possibly; but the most direct bit of Dantean influence in Milton’s work is not in Paradise Lost but in the much earlier Lycidas, where St Peter’s eloquent polemic against faithless and incompetent Church leaders is, in effect, Dantean pastiche, echoing the way Dante puts his critiques of contemporary political and ecclesiastical life into the mouths of various figures – including St Peter – encountered in the landscapes of the afterlife.
By the time Milton was writing his great epic, any conscious influence from Dante had receded far behind the models of classical epic shaping his ambitions. If Dante has the great Roman poet Virgil alongside as guide and mentor in his wanderings through Hell and Purgatory, Milton speaks directly and authoritatively as a Christian Virgil, tutored only by the “muse” of the Holy Spirit’s immediate inspiration.
This means that Milton can offer without embarrassment something that Dante could never have written – dramatic conversations in Heaven between the persons of the Trinity. The final sections of Dante’s Paradiso display the impossible struggle of human language to respond adequately to this ultimate mystery. It is visualized by the “circle-squaring” image of a human form (Jesus) interwoven with the three circling paths of eternal light that display the divine nature.
And what this raw paradox is meant to do is stir in us a renewed and purified desire that radically alters what is possible for human action, as we learn to yield to the rhythms of unlimited love moving through all things. This is the ultimate freedom, for Dante, when the will is completely in tune with the truth and the beauty of God. It is a very different theological world from Milton’s, one in which silence and delight are at the heart of everything. There are indeed passages of dense theological exposition in the Commedia, but the work is not dominated by the need to “justify the ways of God”.
*
What exactly did Dante think he was doing in the Commedia? It’s certainly not a “personal” memoir of any conventional kind; but, as several scholars have argued, it is a sort of autobiography – or perhaps autofiction. To grasp a bit of what this means, it is essential to avoid what Luzzi says is a common failing among Dante’s readers – focusing on the first section, the account of Hell and its torments. Dante is trying to make sense of his disrupted and damaged life; and so he imagines himself, at the notional half-way point of a human existence, taking stock of all that has happened to him and of all the deep hinterland of his political and religious world.
His enemies and rivals are in Hell partly so that Dante can be clear that his resistance to them is not purely personal; it is something to do with their deep antipathy to the truth. Everyone in Hell is there because they do not recognize their debt to the truth. They have preferred their own fictions, and they live, miserably, with the effects of that absurd choice.
But he is not content to leave it there; the Inferno alone would be a gruesome revenge fantasy. The poet is learning as he travels, thanks first to Virgil, then to the radiant figure of Beatrice, the young girl Dante knew in childhood who remained for him an image of clarity and integrity, and who meets him in Purgatory to conduct him to Heaven.
Purgatorio is full of souls repenting their sins but also animated by the promise of an ultimate homecoming. And Paradise is that home, where, in a dazzling variety of modes and contexts, human selves find they are released into absolute joy, both spiritual and sensuous.
The poet is learning: learning about repentance and the candor needed for it, learning from the diverse strands of wisdom represented by the souls at peace in Heaven, ranging from the great theologians and saints to the utterly unexpected figure of Cunizza da Romano, a rackety aristocratic lady whose sheer joyful generosity of spirit has triumphed over the legacy of a rather spectacular career of intrigue and affairs.
Dante and Beatrice encounter Cunizza da Romano and Folco of Marseille in the heaven of Venus, Paradiso IX. By Giovanni di Paolo (1444–50)
Dante is working to see himself against the backdrop of all this, to understand more fully his own failures and his own gifts in relation to what he thinks of as lives of conspicuous spiritual shipwreck, lives of “convalescent” hopefulness, and lives of “transhuman” delight and fulfillment. And Dante does indeed coin the word trasumanar to express a self-transcendence utterly unlike the transhumanism or post-humanism of contemporary semi-scientific myth: an irradiation and deepening of human nature, not its cancellation.
*
Luzzi is, in general, a highly reliable guide, writing with ample quotation and lucid interpretation. Now and then, there is a dropped catch. It is strange that in discussing Dante’s choice of Virgil as a guide, he ignores the role of Virgil in earlier Christian scholarly writing as an honorary “prophet” (Virgil’s Fourth Eclogue was widely read as anticipating the coming of Christ). Luzzi exaggerates and simplifies the suspicion of the medieval Church towards classical authorities in general – but in any case, of them all, it is Virgil who is by far the least problematic.
Luzzi also misleads a little when describing the difficult and much-discussed conversation on the borders of Purgatory between Virgil and Cato, who acts as a gatekeeper between Hell and Purgatory. Cato is not (as Luzzi appears to assume) a Christian convert; he is not on the purgatorial path to full redemption. But he is an exemplar of pagan integrity and courage, who merits his place only just outside the realms of grace, and has thus in some way earned the right to be, as he is, rather abrupt with Virgil.
Again, the account in Luzzi’s second chapter of the criticisms of Dante advanced by late medieval theologians is full of valuable detail; but it seems to imply that Dante’s political opposition to the Pope’s secular power was straightforwardly unacceptable in the Church of the day (incidentally, Luzzi refers to “the Vatican” as the seat of ecclesiastical power; but the Vatican Hill only became the main papal residence well after Dante’s time).
Dante’s view of the papacy is a high one, and his attacks on the popes of his day depend on precisely this. The separation of papal power from actual executive sovereignty was a vigorously argued issue of the 13th and 14th centuries, with reputable theologians on both sides of the question. Luzzi’s final chapter on the enthusiasm of modern popes for Dante suggests that any broken fences have been pretty definitively mended – even that Dante’s party has actually won the argument in the long run.
https://www.newstatesman.com/culture/books/book-of-the-day/2025/03/dantes-divine-autofiction
*
WHY FATHER’S SIDE OF FAMILY TENDS TO MISS OUT
Many people have stronger bonds with their maternal relatives.
Sonia Salari, a sociologist at the University of Utah, regularly teaches a course in family studies—and when she does, she asks her students the same question: “Who here is closest to their maternal grandmother, out of all their grandparents?” Reliably, the majority of hands shoot up. Next, she asks, “Who is closest to their maternal grandfather?” Then she asks about the paternal grandmother, and then the paternal grandfather. With each subsequent question, the number of hands dwindles. “It’s just always the same,” she told me.
Salari’s survey is a perfect segue into her lesson on what researchers call the “matrilineal advantage”: People tend to rate relationships with their mother’s side of the family more favorably. In one study, children reported having stronger bonds with their maternal grandparents, particularly with their maternal grandmothers; the authors noted that the finding seemed especially significant given that kids are more likely to live near their paternal grandparents.
Research has also found that grandparents tend to feel especially connected to their daughters’ children. When participants in one study rated how likely they would be to save a particular cousin hypothetically trapped in a burning building—admittedly an intense survey question—the majority said they would behave most altruistically toward their maternal aunt’s children, followed by their maternal uncle’s, then their paternal aunt’s, and finally their paternal uncle’s.
Although the matrilineal advantage doesn’t apply to every culture and person, it’s been well documented, especially in the U.S. and Europe. So why is this the case?
One key factor may be that women are more likely to fulfill kinkeeping roles, meaning they take on the often invisible labor of maintaining their family’s closeness. That might include calling and visiting relatives, remembering birthdays, sending holiday cards, and organizing vacations and events—along with being sensitive to everyone’s needs when those events happen.
Kathrin Boerner, a gerontologist who studies family caregiving at the University of Massachusetts Boston, told me that kinkeeping isn’t just about, say, hosting dinners. “When the dinner happens,” she said, part of the challenge is “how do I set it up so that people actually want to be in the same room together?” When the researchers behind a 2017 study placed a call for participants who identified as kinkeepers, 91 percent of their subjects were women. Another, published in 2010, examined three-generation families—and found that mothers were responsible for the large majority of caregiving and communication, followed not by fathers but by maternal grandmothers.
Given that women are more likely to maintain family networks, they’re also likely to be at the center of those networks. But the “matrilineal advantage” doesn’t just apply to the women who do the kinkeeping and caregiving—it can extend to their whole side of the family. Think about it this way: Mothers tend to do the majority of the child care and housework; one study found that women are more likely to take on the brunt of those duties even when they earn more than their spouse does.
They are also more likely to receive babysitting help from (and simply spend time with) their own family rather than their husband’s family. Eventually, then, tighter relationships can form between kids and the maternal relatives they’ve grown up around, and that can last into adulthood.
Of course, the division of labor is not so lopsided in every family—and notably, researchers have found the matrilineal advantage to be weaker in European countries where stereotypical gender roles aren’t as pronounced. But in many countries, even as equality in the workplace has advanced, there has been significantly less progress at home: Mom is still commonly the chief-executive parent. The irony is that the same patriarchal system that has historically left women with disproportionate child-care responsibility has also given many of them an invaluable closeness with family.
Lacking those tight ties can be a real loss for fathers and their relatives. In the average month in 2015, 300,000 women took parental leave compared with 22,000 men—but when men do take paternity leave, the majority of them are glad they did. Research suggests that fathers who take leave are more engaged with their kids throughout the first years of their lives. Plenty of people have argued for equitable parental leave on the grounds that it could get fathers more involved from the start in child-rearing. But it wouldn’t just benefit dads and their kids; it could strengthen children’s bonds with their whole extended family.
Granted, the matrilineal advantage can be a vicious cycle: When so many adults feel distant from their father’s side, that can reinforce the association between maternal relatives and family in general—and prevent people from assuming that paternal kin should be just as involved. But if more dads were engaged in parenting and kinkeeping in the first place, they might need to rely on their relatives for help; the more the paternal side is involved, the more the expectations for them might shift.
There’s no reason we shouldn’t give fathers the chance to start that chain reaction. “When men are in roles of caregiving, there is no evidence to suggest they’re not going to do a good job,” Salari told me. “They’re just as capable of it.” And if they’re pulled into the fold, they might bring the rest of their family with them.
https://getpocket.com/explore/item/why-dad-s-side-of-the-family-tends-to-miss-out
*
WHY IT’S GREAT TO BE AN ONLY CHILD
When I was growing up, only children were generally regarded as unfortunate souls: lonely, socially clumsy and often bullied. Partly, this was because they were unusual back then, and as those who’ve observed just about any species know, unusual individuals tend to be singled out by the pack. Today we live in a different world. From the late 1960s and 70s, the contraceptive pill, women’s increasing control over their lives and IVF meant that parents were better able to plan their families and often chose to make them smaller. The single child no longer stuck out so much.
But the stereotype has proved to be tenacious – so much so that many people still feel anxiety about the issue: parents over whether they have deprived their child of the experience of having siblings, only children that they may have missed out on a crucial part of their development.
It’s important to address these anxieties, as the trend towards smaller families shows no sign of abating. According to the Office for National Statistics (ONS), women born in 1935 had an average of 2.42 children, whereas those born in 1969 had only 1.91.
Women born in 1984, the ONS’s most recent cohort, have thus far had an average of 1.02 children (although because women today often have children when they are older, this figure may increase slightly). As of last year, Statista states there are now 3.7 million one-child families in the UK – just over 45% of the total. Based on current data, it’s estimated that by 2031 half of all UK families will be raising just one child.
As a clinical psychologist with more than 40 years’ experience working with families and children, I’d like to reassure parents that having one child is now an excellent decision – and here are some of the reasons.
First, a lot of the stigma, the source of so many difficult developmental experiences, has melted away because of the numbers game. It’s simply much less unusual to have no siblings, and less likely to draw unkind attention. This is especially welcome because a bullied child tends to avoid social situations, depriving them of the very opportunities for interaction that an only child may already have fewer of to begin with.
Second, the data that gave the stereotype of an ill-adjusted, unhappy only child a veneer of scientific credibility has been thoroughly debunked. Much of it was the result of a questionnaire that American psychologist EW Bohannon gave to 200 adults in the late 19th century. Bohannon asked respondents to recount the peculiarities of any only children they knew. From this “study” – based on secondhand opinions, biased language, and without a control group – Bohannon concluded that only children were generally spoiled, selfish, intolerant and self-obsessed. Why this description endured for so long is still unclear, but it may be in part because Bohannon was the protege of the hugely influential psychologist G Stanley Hall. Hall is reported to have said that “being an only child is a disease in itself.”
More recent, better-designed investigations have, unsurprisingly, utterly failed to uphold these claims. A meta-analysis of more than 100 studies, published by Toni Falbo at the University of Texas at Austin, concluded that, overall, the characteristics of children with and without siblings don’t differ significantly. Yet, surprisingly, the myth of the unhappy single child persists.
That isn’t to say that there aren’t any differences at all between single children and others. For example, 2016 research in China found that they are often more competitive and less tolerant of others; but they also tend to be better at lateral thinking and are more content spending time alone. And while having siblings comes with benefits, in terms of plenty of unsupervised interactions and the resulting boost to social skills, it also has drawbacks, including childhood jealousy and less access to parental attention.
Often people’s anxieties about single-child families are projected into the future. Isn’t it better to have siblings to share memories with in adulthood and who can lighten the load of caring for elderly parents should that become necessary?
Perhaps. It’s true that I’ve worked with a number of only children who complain of exhaustion as they care for their parents in later life. But I would counter that by noting that the worst relationship issues I’ve had to deal with in my clinics are not those between couples, but among adult siblings when it comes to sharing out responsibility for the care of their parents, and who’s entitled to what once they die.
Finally, are parents who have large families happier than those who have just one child? Apparently not. Research from the LSE which included British, German and Danish subjects found that subjective wellbeing is highest around the birth of a first child, but then – for mothers, although not significantly so for fathers – wellbeing gradually decreases with the birth of subsequent children.
When it comes down to it, there are advantages and disadvantages, and any disadvantages for the child can be compensated for by skillful parenting. This is perhaps the key message for mothers or fathers worried about the issue: nothing is set in stone. In the case of only children, helping them learn to share, and prioritizing flexibility – even allowing for some disorder – in day-to-day scheduling is extremely helpful, as these are some of the skills children with siblings acquire as a matter of course.
Only children may one day soon become the norm. The stigma has eased greatly, but lingers on for some in the form of niggling doubts. To those parents I say: consider having the number of children – if you wish to have children at all – that feels right for you, and ignore as best you can the unfounded stereotype of the lonely and awkward “only” child. Happiness and wellbeing depend on many factors.
For children, the most important thing is not how many siblings they have, but how they are parented: whether they know they are loved for their unique self, and how happy and well adjusted they perceive those raising them to be.
https://getpocket.com/explore/item/the-big-idea-why-it-s-great-to-be-an-only-child
Oriana:
One important advantage of being an only child is being able to inherit the entire estate. And prosperity isn't a minor matter.
As for companionship, there are other children — and later, other adults.
*
HOW TO PRACTICE FOR BETTER LEARNING
When you're trying to learn something new — like, say, making that new sales demo really sing — you need to practice. When you're trying to gain expertise, how much you practice is definitely important.
But even more important is the way you practice.
Most people simply repeat the same moves. Like playing scales on the piano, over and over again. Or going through the same list of vocabulary words, over and over again. Or, well, repeating anything over and over again in the hopes you will master that task.
Not only will your skills not improve as quickly as they could, in some cases, they may actually get worse.
According to research from Johns Hopkins, "What we found is if you practice a slightly modified version of a task you want to master, you actually learn more and faster than if you just keep practicing the exact same thing multiple times in a row.”
Why? The most likely cause is reconsolidation, a process where existing memories are recalled and modified with new knowledge.
Here's a simple example: trying to get better at shooting free throws in basketball. The conditions are fixed. The rim is always 10 feet above the floor. The free throw line is always 15 feet from the basket.
In theory, shooting from the same spot, over and over again, will help you ingrain the right motions into your muscle memory so your accuracy and consistency will improve.
And, of course, that does happen — but a better, faster way to improve is to slightly adjust the conditions in subsequent practice sessions.
Maybe one time you'll stand a few inches closer. Another time you might stand a few inches to one side. Another time you might use a slightly heavier, or lighter, ball.
In short, each time you practice, you make the conditions a little different. That primes the reconsolidation pump — and helps you learn much more quickly.
But Not Too Different — or Too Soon
But you can't adjust the conditions more than slightly. Do something too different and you'll simply create new memories — not reconsolidated ones.
"If you make the altered task too different, people do not get the gain we observed during reconsolidation," the researchers say. "The modification between sessions needs to be subtle.”
And you'll also need to space out your practice sessions appropriately.
The researchers gave the participants a six-hour gap between training sessions, because neurological research indicates it takes that long for new memories to reconsolidate.
Practice differently too soon and you haven't given yourself enough time to "internalize" what you've learned. You won't be able to modify old memories — and therefore improve your skills — because those memories haven't had the chance to become old memories.
So if you want to dramatically improve how quickly you learn a new skill, try this.
How to Learn a New Skill
The key to improvement is making small, smart changes, evaluating the results, discarding what doesn't work, and further refining what does work.
When you constantly modify and refine something you already do well, you can do it even better.
Say you want to improve a skill; to make things simple, we'll pretend you want to master a new presentation.
1. Rehearse the basic skill. Run through your presentation a couple of times under the same conditions you'll eventually face when you do it live. Naturally, the second time through will be better than the first; that's how practice works. But then, instead of going through it a third time ...
2. Wait. Give yourself at least six hours so your memory can consolidate. (Which probably means waiting until tomorrow before you practice again, which is just fine.)
3. Go a little faster. Speak a little — just a little — faster than you normally do. Run through your slides slightly faster. Increasing your speed means you'll make more mistakes, but that's OK — in the process, you'll modify old knowledge with new knowledge — and lay the groundwork for improvement. Or ...
Go a little slower. The same thing will happen. (Plus, you can experiment with new techniques — including the use of silence for effect — that aren't apparent when you present at your normal speed.) Or ...
Break your presentation into smaller parts. Almost every task includes a series of discrete steps. That's definitely true for presentations. Pick one section of your presentation. Deconstruct it. Master it. Then put the whole presentation back together. Or ...
Use a different projector. Or a different remote. Or a lavaliere instead of a headset mic. Switch up the conditions slightly; not only will that help you modify an existing memory, it will also make you better prepared for the unexpected.
4. And then, next time, slightly modify another condition.
Keep in mind that you can extend this process to almost anything: while the research focused on motor skills, the same approach can be applied to nearly any skill.
Don't do the same thing over and over again in hopes you'll improve. You will, but not nearly as quickly as when you slightly modify the conditions in subsequent practice sessions — and then give yourself the time to consolidate the new memories you've made.
Keep modifying and refining a skill you already do well and you can do it even better.
And a lot more quickly.
That's the fastest path to expertise.
https://getpocket.com/explore/item/a-johns-hopkins-study-reveals-the-scientific-secret-to-double-how-fast-you-learn
*
LEARNING TAKES RE-LEARNING
A producer for a television business show called and asked if I was available. He described the theme of the segment and asked if I had any ideas. I offered some possibilities.
“That sounds great,” he said. “We’re live in 30 minutes. And I need you to say exactly what you just said.”
“Ugh,” I thought. I’m not great at repeating exactly what I just said. So I started rehearsing.
Ten minutes later, he called to talk about a series he was developing. I almost asked him if we could postpone that conversation so I could use the time to keep rehearsing, but I figured since I had already run through what I would say two times, I would be fine.
Unfortunately, I was right. I was fine. Not outstanding. Not exceptional. Just … fine. My transitions were weak. My conclusion was more like a whimper than a mic drop. And I totally forgot one of the major points I wanted to make.
Which, according to Hermann Ebbinghaus, the pioneer of quantitative memory research, should have come as no surprise.
Ebbinghaus is best known for two major findings: the forgetting curve and the learning curve.
The forgetting curve describes how new information fades away. Once you’ve “learned” something new, the fastest drop occurs in just 20 minutes; after a day, the curve levels off.
Yep: Within minutes, nearly half of what you’ve “learned” has disappeared.
Or not.
According to Benedict Carey, author of How We Learn, what we learn doesn’t necessarily fade; it just becomes less accessible. In my case, I hadn’t forgotten a key point; otherwise I wouldn’t have realized, minutes after, that I left it out. I just didn’t access that information when I needed it.
Ebbinghaus would have agreed with Carey: He determined that even when we think we’ve forgotten something, some portion of what we learned is still filed away.
Which makes the process of relearning a lot more efficient.
As Ebbinghaus writes:
Suppose that the poem is again learned by heart. It then becomes evident that, although to all appearances totally forgotten, it still in a certain sense exists and in a way to be effective. The second learning requires noticeably less time or a noticeably smaller number of repetitions than the first.
That, in a nutshell, is the power of spaced repetition.

The premise is simple. Learn something new, and within a short period of time you’ll forget much of it. Repeat a learning session a day later, and you’ll remember more.
Repeat a session two days after that, and you’ll remember even more. The key is to steadily increase the time intervals between relearning sessions.
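The expanding-interval idea can be captured in a few lines of code. Below is a minimal Python sketch of a review schedule whose gaps grow geometrically, paired with a simple exponential forgetting model; the decay constant, starting gap, and growth factor are illustrative assumptions, not values taken from Ebbinghaus’s data:

```python
import math
from datetime import date, timedelta

def retention(days_since_review, stability):
    """Ebbinghaus-style exponential forgetting: fraction recalled
    after `days_since_review`, given a memory 'stability' in days."""
    return math.exp(-days_since_review / stability)

def review_schedule(start, sessions=5, first_gap_days=1, growth=2.0):
    """Expanding intervals: each gap between sessions is `growth`
    times longer than the previous one (1, 2, 4, 8, 16 days...)."""
    gaps = [first_gap_days * growth ** i for i in range(sessions)]
    day = start
    schedule = []
    for gap in gaps:
        day = day + timedelta(days=round(gap))
        schedule.append(day)
    return schedule

# Learn something on 1 March; review on the 2nd, 4th, 8th, 16th, then 1 April.
for d in review_schedule(date(2025, 3, 1)):
    print(d.isoformat())
```

Doubling is just a common default; any steadily growing gap captures the idea that each review should land only after the previous memory has partially faded.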
And, just as important: make your emotions work for you, not against you, by forgiving yourself for forgetting. Accept that forgetting, and the feeling that you aren’t making much progress, is actually a key part of the process.
Why?
Forgetting is an integral part of learning. Relearning reinforces earlier memories. Relearning creates different context and connections. According to Carey, “Some ‘breakdown’ must occur for us to strengthen learning when we revisit the material. Without a little forgetting, you get no benefit from further study. It is what allows learning to build, like an exercised muscle.”
The process of retrieving a memory — especially when you fail — reinforces access. That’s why the best way to study isn’t to reread; the best way to study is to quiz yourself. If you test yourself and answer incorrectly, not only are you more likely to remember the right answer after you look it up, you’ll also remember that you didn’t remember. (Getting something wrong is a great way to remember it the next time, especially if you tend to be hard on yourself.)
Forgetting, and therefore repeating information, makes your brain assign that information greater importance. Hey: Your brain isn’t stupid.
So what should I have done?
While I didn’t have days to prepare, I could still have run through my remarks once, taken a five-minute break, and then done it again.
Even after five minutes, I would have forgotten some of what I planned to say. Forgetting and relearning would have reinforced my memory since, in effect, I would have quizzed myself.
Then I could have taken another five-minute break, repeated the process, and then reviewed my notes briefly before we went live.
And I should have asserted myself and asked the producer if we could talk about the series he was developing later.
Because where learning is concerned, time is everything. Not large blocks of time, though. Not hours-long study sessions. Not sitting for hours, endlessly reading and rereading or practicing and repracticing.
Nope: time to forget and then relearn. Time to lose, and then reinforce, access. Time to let memories and connections decay and become disorganized and then tidy them back up again.
Because information is only power if it’s useful.
And we can’t use what we don’t remember.
https://www.inc.com/jeff-haden/neuroscience-a-small-dose-of-emotional-intelligence-reveal-a-simple-trick-to-learn-a-whole-lot-more-with-a-lot-less-effort.html
*
THE COST OF TRYING TO LIVE FOREVER
We are not anywhere close to conquering death. In fact, a study from the University of Illinois Chicago last year showed that though the average lifespan has gone up, the pace of improvement is slowing and the maximum lifespan hasn’t changed as much. “We’ve now proven that modern medicine is yielding incrementally smaller improvements in longevity even though medical advances are occurring at breakneck speed,” the study’s lead author Professor Jay Olshansky says. “There’s plenty of room for improvement: for reducing risk factors, working to eliminate disparities and encouraging people to adopt healthier lifestyles—all of which can enable people to live longer and healthier.” But room for improvement in living longer is very different from delusions about eliminating death.
Most of the longevity movement is not really about immortality but rather about extending life and limiting the damaging effects of aging. Of course, we all want longevity, and doing what we can to extend how long we live is great—as long as we don’t allow that to distract us from focusing on how to live. The danger of Johnson’s obsessive approach is spending so much time trying to extend your life that you never quite get around to living it.
Indeed, keeping death close—even while pushing it as far into the future as we can—has many lessons to teach us about life. Because as one headline from The Onion puts it: “World Death Rate Holding Steady at 100%.” This punchline, that no one escapes death, can of course be confirmed by multiple scientific studies.
For William Mair, professor of molecular metabolism at Harvard Chan School of Public Health, talk about immortality or radical life extension has a clear opportunity cost. “I worry that in a world where people have limited attention spans, the people who speak with confidence and say they have already cured aging will get the microphone,” he says. “We all age and we all know someone who is suffering from an age-related condition.”
Death can help us focus our attention on living our best life because there’s nothing that can teach us more about how to live life than death. Death is the most universal experience, yet we will do anything and everything we can to curtain it off, to avoid dealing with the only plot twist that we know for sure will be in our story’s last act.
Here lies the crux of the error Johnson, Kurzweil, and their followers are making: by seeing human beings solely as material beings, they have confused an immortal soul with an immortal body. As the philosopher Pierre Teilhard de Chardin put it, “We are not human beings having a spiritual experience. We are spiritual beings having a human experience.” Or, as the poet Hermann Hesse wrote, “You will be neurotic and a foe to life—so says your soul—if you neglect me.”
We neglect our souls by losing ourselves in endless busyness and never-ending to-do lists while never getting around to the big unspoken item that will eventually get its checkmark. We spend our days accumulating, acquiring, achieving. We relentlessly document our lives on social media to forever memorialize moments we never fully experience.
As Joanna Ebenstein writes in Memento Mori, in a world that prides itself on being able to control all aspects of life, death is the “ultimate insult.” There’s a reason why death has been a central part of spiritual traditions and philosophy throughout history. “The one aim of those who practice philosophy in the proper manner,” Socrates says in Plato’s Phaedo, “is to practice for dying and death.”
In ancient Rome, MM—short for “Memento Mori” which means “remember death”—was carved on statues and trees. The phrase is said to go back to a tradition in Roman victory parades in which a slave would hold a crown over the head of the triumphant general or emperor while whispering “remember you will die,” lest the victor lose his sense of perspective.
For the Stoics, accepting death and keeping it close was important to organize and give meaning to daily life. As Seneca put it, “It is not death that a man should fear, but he should fear never beginning to live.”
In Buddhism, the transient nature of life in which “all compounded things are impermanent” is a central teaching. The “maranasati” is a Buddhist meditation that specifically focuses on the awareness of death, reminding us that beginnings and endings are natural and that death is inevitable, which can awaken our appreciation for living.
For the French Renaissance philosopher Michel de Montaigne, not thinking about death was “brutish stupidity” and overcoming our fear of death is the way to liberate ourselves. “To begin depriving death of its greatest advantage over us…let us deprive death of its strangeness, let us frequent it, let us get used to it,” he wrote. “To practice death is to practice freedom.” For the existentialist German philosopher Martin Heidegger, “death opens up the question of being.” In essence, facing our mortality allows us to explore the biggest mystery: what it means to be alive.
Of course, these questions have not just been asked by late philosophers. “For the past 33 years,” said Apple co-founder Steve Jobs in 2005, “I have looked in the mirror every morning and asked myself: ‘If today were the last day of my life, would I want to do what I am about to do today?’ And whenever the answer has been ‘No’ for too many days in a row, I know I need to change something. Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life.”
When we don’t allow death into our lives, we lose the clarity, perspective, and wisdom that only death can bring. That’s why psychiatrist Elisabeth Kübler-Ross called death the key to the door of life: “It is the denial of death that is partially responsible for people living empty, purposeless lives; for when you live as if you’ll live forever, it becomes too easy to postpone the things you know that you must do.”
One of the biggest problems with believing that we’re just material beings and putting all our existential eggs in that basket is that it can diminish our appreciation of the joy, mystery, poetry and love that make life—however much we have of it—worth living.
In his book AI Superpowers, AI pioneer Kai-Fu Lee writes about how he once lived his life according to the same operating principles as the technology he was building. “I came to view my own life as a kind of an optimization algorithm with clear goals: maximize personal influence and minimize anything that doesn’t contribute to that goal,” he writes.
After being diagnosed with stage IV lymphoma, Lee went on an inner journey to reexamine his life. One person whose wisdom he sought out was a Buddhist priest, Master Hsing Yun, who told him: “Kai-Fu, humans aren’t meant to think this way. This constant calculating, this quantification of everything, it eats away at what’s really inside of us, and what exists between us. It suffocates the one thing that gives us true life: love.”
That’s the true singularity. Love is what gives life meaning and makes it worth living. No matter what we believe about God or the afterlife, we have a fundamental need to connect — with ourselves, with others, and with something larger than ourselves. And love is both the pathway to that connection and the payoff. When we see life purely in material and empirical terms, we lose access to other dimensions of life—the parts beyond reason and rationality. And what an arid and limited existence that is.
So yes, let’s do what we can to lead a long life. But accepting that our life on earth will come to an end will make us more alive to it. In a post on X, Bryan Johnson wrote that “The coolest question in existence right now is exploring if we are the first generation to not die.”
It might be a cool question, but we know the answer: there’s zero scientific evidence that we’re going to be the first generation to not die. Here’s a cooler question: how can we live a good life? For that, death is the ultimate biohack.
https://time.com/7262950/cost-of-trying-to-live-forever/?utm_source=firefox-newtab-en-us
*
“THE SPACE IS LIKE AN INSTRUMENT” — ACOUSTICS IN POST-FIRE NOTRE DAME
Performers and visitors to the famous gothic cathedral in the midst of the River Seine may notice subtle differences in the way sound bounces around its walls.
The Cathedral of Notre-Dame de Paris has borne witness to many turning points in history.
The building's striking gothic stonework has stood sentry on an island in the midst of the River Seine since the late 12th Century as coronations, wars and revolutions have unfolded in its shadow. What you might not realize, however, is that the cathedral has played a key part in shaping the music you hear when you turn on the radio or stream a playlist.
Notre-Dame was, for a brief time, the beating heart of a musical revolution – one that changed music forever and laid the foundations for many of the songs we listen to today.
Up until the middle of the 12th Century in Medieval Europe, it was common to hear the haunting melodic chants of the clergy echoing through churches and cathedrals. Gregorian chant, or plainsong, was the musical style of the day, where sacred texts were sung either by a single voice or a choir in unison – something known as monophony.
But faced with the Notre-Dame's soaring nave, ribbed vaults and towering columns, a group of composers began to try something different to take advantage of the way sound rattled around inside the building. They introduced multiple lines of melody simultaneously to produce elaborate polyphonic arrangements, or motets – the early beginnings of a musical texture that is a common feature in modern music, from jazz to pop and hip-hop.
"They had to figure out a way to sing in there that worked with the architecture," says Brian Katz, an expert on acoustics and research director at the Institut D'Alembert at Sorbonne University in France.
Katz has spent more than a decade studying the acoustics of Notre-Dame, painstakingly hanging microphones and taking measurements to reconstruct what it would have sounded like to play music, sing and preach inside the cathedral at different points in its evolution. Now, Katz hopes to return with his microphones and equipment as Notre-Dame enters a new phase. The cathedral recently reopened for the first time after the devastating fire that destroyed the roof, spire and much of the interior of the cathedral in April 2019. The reconstruction – which reportedly cost €700m (£582m, $758m) – saw the building restored stone by stone to how it had been before the blaze.
But Katz believes something may have changed – the way it sounds.
"It's hard to tell exactly how it has changed but we expect it to be more reverberant," he says. Katz has visited Notre-Dame several times to attend masses and performances since it officially reopened on 7 December 2024. The experience has been "incredibly moving", he says.
But as he sat in different parts of the cathedral, he also couldn't shake the sensation that it sounded different. A new sound system and the extensive cleaning of the organ's 8,000 pipes during the restoration work have certainly contributed to that, but he believes that the way sound interplays with the fabric of the building itself is also subtly different.
"The organist has already stated that it will take a bit of time to adapt to the subtle changes in the restored acoustics and organ," he says. "This is very typical of say a new concert hall, where the resident orchestra – and audience – need a few months to adjust to new conditions. The space is like an instrument, and one needs to become familiar with it to get the best out of it."
Katz and his research group at Sorbonne University have worked alongside the architects and builders charged with restoring the cathedral since the fire. They were eager not to lose the unique acoustics of Notre-Dame as the physical building was pieced back together.
"The next day [after the fire], I started getting calls," says Katz. "Colleagues were saying, 'you have those acoustic measurements at Notre-Dame and nobody else has any'."
Katz and his colleagues had gathered these during a series of extensive acoustic surveys of the building in 2015, where they hung microphones in 16 different positions at a time and played sounds from an omnidirectional loudspeaker – a sphere with 12 separate speakers pointing in different directions. They repeated this multiple times in different locations to build up a full acoustic model of the cathedral's interior.
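Surveys like this typically work by recording the room’s impulse response at each microphone position; reverberation time can then be estimated by backward-integrating the squared impulse response (the Schroeder method). Here is a hedged sketch on synthetic data; the sample rate and decay constant are invented for illustration, and this is not the Sorbonne team’s actual pipeline:

```python
import numpy as np

def schroeder_decay_db(impulse_response):
    """Backward-integrate the squared impulse response (Schroeder's
    method) to get a smooth energy-decay curve in decibels."""
    energy = np.cumsum(impulse_response[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def rt60_from_ir(ir, sample_rate):
    """Fit the -5 dB to -25 dB portion of the decay curve and
    extrapolate to -60 dB (a 'T20'-style estimate of RT60)."""
    decay = schroeder_decay_db(ir)
    t = np.arange(len(ir)) / sample_rate
    mask = (decay <= -5) & (decay >= -25)
    slope, _ = np.polyfit(t[mask], decay[mask], 1)  # dB per second
    return -60 / slope

# Synthetic check: exponentially decaying noise with a known RT60 of 6 s.
sr, rt_true = 8000, 6.0                  # sample rate (Hz), target RT60 (s)
t = np.arange(int(sr * rt_true * 1.5)) / sr
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * np.exp(-6.91 * t / rt_true)
print(f"estimated RT60 ~ {rt60_from_ir(ir, sr):.2f} s")
```

The envelope constant 6.91 is ln(1000)/2, so the signal’s energy falls by 60 dB over exactly `rt_true` seconds, which the estimator should recover to within a few percent.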
For those who perform inside Notre-Dame cathedral, it is an enlivening experience. The complex stonework and vaulted ceilings make it a challenging but interesting place to sing.
"The greatest privilege is to be able to sing in the evening after closing time, to prepare a concert, and to enjoy the empty cathedral and make its stones vibrate with music," says Henri Chalet, director of the Maîtrise Notre-Dame de Paris music school, which has had an association with the cathedral since it was founded in the 12th Century. "There is an extra soul in this cathedral, which everyone can define in their own way. It is clearly alive."
Gustavo Dudamel, the Venezuelan conductor who led the Orchestre Philharmonique de Radio France at the reopening of Notre-Dame on 7 December 2024, and who led performances there before the fire, agrees there is something special about the building.
"Notre Dame is one of the most magical performance spaces in the world," he says. "Of course there are challenges to coordinating any kind of live performance in such a reverberant acoustic, but the sheer transcendental nature of the cathedral turns that work into a gift."
In the aftermath of the fire in 2019, Katz's team visited the cathedral again, sending their microphones bobbing around debris and scaffolding on a robot normally used to inspect sewer pipes. When they did the measurements, it became apparent just how much of Notre-Dame's special sound had been lost.
"It's dark, you can still smell the carbon and smoke and there's basically 3-4 big holes in the roof," recalls Katz. Those holes combined with the blanket of soot had a dulling effect on the sound. The reverberation time – how long it takes for an acoustic signal to decay within a space – decreased by around 8% compared with before the fire.
When they plugged the data into their "digital twin" of Notre-Dame, they found the acoustics were similar to another key moment in the Cathedral's past.
"It corresponds to the most [sound] absorbing condition we have predicted in Notre-Dame," says Katz. "If you look at Napoleon's coronation – the records of that and the paintings of that show every surface is covered in fabric. When we try to replicate that, we end up with something quite similar to the post fire condition."
In other words, the voice of Notre-Dame had been muffled by the blaze.
It took five years and a lot of scaffolding for Notre-Dame Cathedral to be restored to its former glory after the fire
Over the months that followed the fire, there was no shortage of proposals for how the grand cathedral could be reconstructed. Architects suggested radical changes such as a glass roof or even a garden on top of the building before French President Emmanuel Macron announced the cathedral would be restored to its condition before the fire.
Even so, Katz and his colleagues were called upon to help with critical decisions that could influence the overall acoustics inside the cathedral. "One of the things that we were tasked with in the reconstruction was to evaluate the proposals for the new choir organ," says Katz. "They were thinking about moving it and making it larger." One of the proposals was to put the organ up in the triforium – the gallery tucked high above the nave, the main open space of the cathedral – rather than beneath it, where it had been before the fire.
"Putting it up higher would mean it had a better view," says Katz. "But the prediction we got from our digital twin was that most of the energy from an organ up there would stay up in the upper space. It would just go from triforium to triforium and didn't really get down to the audience in the way they wanted. So, in the end, they kept it where it was before."
"It creates this barrier between the choir side and the nave side, which affects how sound propagates from one to the other," says Katz. He experienced this for himself before the fire when he watched a performance of choir soloists.
"During the performance they would move back and forth, taking four or five steps," he says. "When they came closer to us, all of a sudden we were in a small room with them, and they would take a couple of steps back, and their voices excited the whole space, and they felt much farther away. You really get the feel of that barrier."
While Notre-Dame was reconstructed to replicate how it had been before the fire, there are a few crucial changes that Katz believes have altered the sound inside the cathedral. Among those is the removal of a long carpet that ran around much of the outer perimeter of the building's interior. It had been installed in the late 1980s to absorb footfall noise from tourists on the hard marble floor.
During their research, Katz and his colleagues had discovered recordings from an acoustic survey conducted in 1987 when there were proposals to install a new organ in the cathedral. When they compared the data with the survey Katz had conducted in 2015, it revealed a strange difference.
"There was a noticeable shift in the reverberation time, in a way that should be perceptible," Katz says. With the carpet now gone and the bare marble floor once more exposed, Katz says it is likely the reverberation will have increased again to how it was nearly half a century ago.
Many of the other soft furnishings, such as wall hangings, have also gone after being damaged in the fire, while 1,500 solid wood chairs now sit where there were once rows of chairs with woven seats.
Another change that may affect the acoustics lies with the stonework itself. During the reconstruction process the building was given what effectively amounts to a giant facemask.
"They had to remove all the lead dust so what they effectively did was spray liquid latex onto the walls and peel it off like a skin," says Katz. "It gets into all the crevices and pulls out all the dust in there. The paintwork and plasterwork is new too. So in that respect we expect it to be much more reverberant than it was before the fire."
The stonework inside the cathedral is noticeably cleaner in 2024, after its latex facemask treatment, than it was in 2017, before the fire
For those performing in the cathedral, it has taken only a little time to get used to singing with the unique acoustics inside Notre-Dame again after a five-year absence.
"The Maîtrise used to sing there all the time," says Chalet. "Then for more than five years, we discovered many different places in Paris, in France and even abroad. This allowed us to learn to adapt very quickly to a new acoustic. Now we take advantage of each service and each concert to tame this demanding but benevolent acoustic."
Both he and Dudamel say they haven't noticed any significant difference in how the space sounds during performances. Dudamel says due to the difference in the set up between the performances he led in the cathedral before and after the fire, it is particularly difficult to compare.
"No doubt acoustic studies will show differences, but honestly, the cathedral is very large and has been rebuilt exactly as it was," adds Chalet. "It is just that the stone has been cleaned which can make the sound reflect a little differently. But my memories of it before the fire are broadly the same."
The organ pipes were thoroughly cleaned during the restoration process, which may contribute to any differences in the way Notre-Dame cathedral now sounds
But both men have another theory for why the auditory experience inside the cathedral may feel different to visitors today.
"Due to the lighting – which is a lot brighter – and the cleanliness of the windows, which puts a huge amount of added brightness in the space, I think there is a palpable difference in how the musicians and audience feed that information into what they are hearing, perhaps creating the impression that the sound is brighter and clearer now," says Dudamel.
Chalet agrees. "The sight of a cathedral, the discovery of the restored chapels that you could not see before because they were so dark, gives the impression that the cathedral has become wider. The sight clearly has an impact on listening."
Certainly, there is some research that suggests visual information can affect our experience of what we hear.
Of course, to determine exactly how much it has changed will require more tests by Katz and his colleagues. But in many ways, he feels, the cathedral has been given a new life acoustically, that is potentially allowing visitors to experience sound in Notre-Dame in a way that probably hasn't been possible for hundreds of years.
"In some ways it is like a brand-new cathedral," says Katz. "One that is 800 years old."
https://www.bbc.com/future/article/20250306-notre-dame-does-the-cathedral-sound-the-same-now-it-has-reopened-following-the-destructive-fire
*
ANTARCTICA’S STRONG OCEAN CURRENT IS WEAKENING
Flowing clockwise around Antarctica, the Antarctic Circumpolar Current is the strongest ocean current on the planet. It's five times stronger than the Gulf Stream and more than 100 times stronger than the Amazon River.
It forms part of the global ocean "conveyor belt" connecting the Pacific, Atlantic and Indian oceans. The system regulates Earth's climate and pumps water, heat and nutrients around the globe.
But fresh, cool water from melting Antarctic ice is diluting the salty water of the ocean, potentially disrupting the vital ocean current.
Our new research suggests the Antarctic Circumpolar Current will be 20% slower by 2050 as the world warms, with far-reaching consequences for life on Earth.
The Antarctic Circumpolar Current keeps Antarctica isolated from the rest of the global ocean
The Antarctic Circumpolar Current is like a moat around the icy continent.
Unlike better known ocean currents – such as the Gulf Stream along the United States' east coast, the Kuroshio Current near Japan, and the Agulhas Current off the coast of South Africa – the Antarctic Circumpolar Current is not as well understood. This is partly due to its remote location, which makes obtaining direct measurements especially difficult.
The influence of climate change
Ocean currents respond to changes in temperature, salt levels, wind patterns and sea ice extent. So the global ocean conveyor belt is vulnerable to climate change on multiple fronts.
Previous research suggested one vital part of this conveyor belt could be headed for a catastrophic collapse.
Theoretically, warming water around Antarctica should speed up the current, because density changes and winds around Antarctica dictate its strength. Warm water is less dense (lighter), and this should be enough to speed up the current. But observations to date indicate the strength of the current has remained relatively stable over recent decades.
This stability persists despite the melting of surrounding ice, a factor that earlier studies had not fully explored.
Advances in ocean modelling allow a more thorough investigation of the potential future changes.
We used Australia's fastest supercomputer and climate simulator in Canberra to study the Antarctic Circumpolar Current. The underlying model, Access-OM2-01, has been developed by Australian researchers from various universities as part of the Consortium for Ocean-Sea Ice Modelling in Australia.
The model captures features others often miss, such as eddies. So it's a far more accurate way to assess how the current's strength and behavior will change as the world warms. It picks up the intricate interactions between ice melting and ocean circulation.
In this future projection, cold, fresh melt water from Antarctica migrates north, filling the deep ocean as it goes. This causes major changes to the density structure of the ocean. It counteracts the influence of ocean warming, leading to an overall slowdown in the current of as much as 20% by 2050.
Far-reaching consequences
The consequences of a weaker Antarctic Circumpolar Current are profound and far-reaching.
As the main current that circulates nutrient-rich waters around Antarctica, it plays a crucial role in the Antarctic ecosystem.
Weakening of the current could reduce biodiversity and decrease the productivity of fisheries that many coastal communities rely on. It could also aid the entry of invasive species such as southern bull kelp to Antarctica, disrupting local ecosystems and food webs.
A weaker current may also allow more warm water to penetrate southwards, exacerbating the melting of Antarctic ice shelves and contributing to global sea-level rise. Faster ice-melting could then lead to further weakening of the current, setting off a vicious cycle of slowdown.
This disruption could extend to global climate patterns, reducing the ocean's ability to regulate climate change by absorbing excess heat and carbon from the atmosphere.
The need to reduce emissions
While our findings present a bleak prognosis for the Antarctic Circumpolar Current, the future is not predetermined. Concerted efforts to reduce greenhouse gas emissions could still limit melting around Antarctica.
Establishing long-term studies in the Southern Ocean will be crucial for monitoring these changes accurately.
With proactive and coordinated international actions, we have a chance to address and potentially avert the effects of climate change on our oceans.
https://www.bbc.co.uk/future/article/20250303-the-worlds-strongest-ocean-current-is-at-risk
*
MORE AMERICANS WANT TO LEAVE THE COUNTRY AND LIVE OVERSEAS
Almost half of Americans have considered or plan to move abroad to improve their happiness, according to a Harris poll published today.
Specifically, the poll found that four in 10 Americans have at least thought about leaving the country within the next few years. And among Gen Z and millennials, almost one in five respondents reported “seriously considering” an imminent move.
The results show that Americans are becoming increasingly disillusioned with the “American Dream” as the cost of essentials like rent, healthcare, and education continues to rise.
Here are three main takeaways from the poll:
Home ownership and cost of living are top of mind.
Per the new poll, 68% of Americans are in agreement about two key statements: “These days I feel like I am surviving instead of thriving,” and “Homeownership is no longer attainable for most American citizens.”
Of those who said that they’d consider moving out of the U.S., 49% reported cost of living as their primary consideration. Dissatisfaction with the current political leadership ranked as the second highest concern.
Sentiments around cost of living revealed in this poll are backed up by several recent reports.
In early February, an update from the Labor Department showed that the consumer price index—an inflation barometer that tracks essential costs like gas, groceries, and cars—was up 3.3% compared with the previous January.
For the past six months, inflation rates have hovered above the Fed’s 2% target. Meanwhile, Zillow’s most recent Home Value Index found that, “As elevated mortgage rates dampen demand for home purchases, many potential buyers are staying renters for longer,” predicting a 3.7% rise in single-family rents for 2025.
The current economic reality can be even more disheartening for families: Based on a recent analysis by the National Women’s Law Center, the average family would need to earn at least $180,000 annually in 2025 to comfortably afford the national cost of infant care.
Who is more likely to be eyeing the exit?
According to the new Harris Poll, these converging economic factors are more likely to push away younger, non-white, and LGBTQ+ Americans.
While only 25% of Gen X and 26% of baby boomers said they’d considered moving abroad, 63% of Gen Zers and 52% of millennials said the same.
Additionally, LGBTQIA+, Hispanic, and Black respondents were all more likely to consider moving.
Dual citizenship is attractive for young Americans
Younger Americans also expressed a greater desire than their older counterparts to obtain dual citizenship, with 66% of Gen Zers and millennials affirming that they were at least somewhat interested in pursuing it for travel freedom, economic opportunities, and better access to public services.
The top 10 countries that Americans would consider moving to, in order, are as follows:
Canada
The U.K.
Australia
France
Italy
Japan
Mexico
Spain
Germany
New Zealand
https://www.fastcompany.com/91289388/more-americans-want-leave-country-live-abroad-cost-of-living?utm_source=firefox-newtab-en-us
*
LENT (by John Guzlowski)
I went shopping yesterday afternoon and was surprised no one had ashes on their foreheads. It was Ash Wednesday!
Lynchburg, Virginia, where I live, doesn’t have a large Catholic population, but still, a lot of Christians observe Ash Wednesday, so I was expecting to see people with ashes on their foreheads to mark the start of Lent.
When I got home, I contacted a few friends and asked if they had seen anyone with ashes on their foreheads. All my friends said the same thing.
There were no ashes in sight.
This surprised me. I grew up in St. Fidelis Parish, a Polish Catholic parish near Humboldt Park in Chicago, and Ash Wednesday was always a major event. You couldn’t go anywhere without seeing people with ashes on their foreheads.
Lent was a major event back in the 1950s when I was growing up. As kids we had to do what the adults did. We were supposed to fast from eating meat on Ash Wednesday, every Friday during Lent, and Holy Saturday morning. There were also church services we had to attend. Every Friday, the nuns marched us to church where we had to kneel for an hour during the Stations of the Cross.
If that wasn’t hard enough, the nuns expected each student to give up one thing he loved for Lent. My parents were strict believers in Lent. They didn’t limit my sister Donna and me to one thing. It was like my parents wanted to take all of the fun out of our lives. When I was a kid, I loved reading comic books and watching TV comedies like The Jack Benny Show and Gilligan’s Island. All of that disappeared from my life during Lent.
But that’s not all! I loved going to Saturday matinees at the local movie theater on Division Street. Every Saturday, the theater would show one comedy like Jerry Lewis’s At War with the Army, one horror movie like Invasion of the Saucer Men, and twenty cartoons. During Lent, no matter how much I pleaded with my parents, cried, and banged my head on the floor, I was not allowed to go to the movies.
Why were my parents so strict during Lent?
It took me years to figure this out, but at 76, I know why they were so demanding. They made us give up what we loved because they themselves gave up what they loved most. My mom loved to go dancing on weekends at bars and wedding receptions. During Lent, there was no dancing for her. While my mom loved dancing, my dad loved drinking. An alcoholic, he loved his vodka and pints of beer. During the 40 days of Lent, he was totally sober.
If mom couldn’t dance and dad couldn’t drink, you could bet I couldn’t watch Jerry Lewis being stupid.
~ John Guzlowski
*
PHYSICIST ALAN LIGHTMAN ON BEGINNINGS, ENDINGS, AND WHAT MAKES LIFE WORTH LIVING
How our cosmic improbability confers dignity and meaning upon our shared existence.
“What exists, exists so that it can be lost and become precious,” Lisel Mueller, who lived to nearly 100, wrote in her gorgeous poem “Immortality” a century and a half after a young artist pointed the world’s largest telescope at the cosmos to capture the first surviving photograph of the Moon and the first-ever photograph of a star: Vega — an emissary of spacetime, reaching its rays across twenty-five lightyears to imprint the photographic plate with an image of the star as it had been twenty-five years earlier, immortalizing a moment already long gone.
And yet in a cosmological sense, what exists is precious not because it will one day be lost but because it has overcome the staggering odds of never having existed at all: Within the fraction of matter in the universe that is not dark matter, a fraction of atoms cohered into the elements necessary to form the complex structures necessary for life, of which a tiny portion cohered into the seething cauldron of complexity we call consciousness — the tiny, improbable fraction of a fraction of a fraction with which we have the perishable privilege of contemplating the universe in our poetry and our physics.
In Probable Impossibilities: Musings on Beginnings and Endings, the poetic physicist Alan Lightman sieves four centuries of scientific breakthroughs, from Kepler’s revolutionary laws of planetary motion to the thousands of habitable exoplanets discovered by NASA’s Kepler mission, to estimate that even with habitable planets orbiting one tenth of all stars, the fraction of living matter in the universe is about one-billionth of one-billionth: If all the matter in the universe were the Gobi desert, life would be but a single grain of sand.
Along the way, Lightman draws delicate lines of figuring from Hindu cosmology to quantum gravity, from Pascal to inflation theory, from Lucretius to Henrietta Leavitt and Edwin Hubble — lines contouring the most elemental questions that have always animated humanity, questions that are themselves the answer to what it means to be human.
Building on his lifelong passion for harmonizing our touching human partialities with the fundamental reality of an impartial universe — our hunger for absolutes in a relative world, our yearning for permanence in a universe of constant change — he writes:
As we have struggled through the ages to fathom this strange and wondrous cosmos in which we find ourselves, few ideas have been richer than the concept of nothingness. For to understand anything, as Aristotle argued, we must understand what it is not. To understand matter, said the ancient Greeks, we must understand the “void,” or the absence of matter.
Because we are self-referential creatures — the consequence of being creatures with selves, itself the consequence of consciousness and the ceaseless electrical storm of neural firings that gives rise to our sense of self — no void troubles us more than that of our own mortality: the notion of our absence from the scene of life. It is difficult enough to grasp how somethingness could have arisen from nothingness — how the universe can exist at all. It savages the mind and its animating selfhood to consider that everything — including the subset constituting the particular something of us — could dissolve to nothingness.
It is a discomposing notion — even for a physicist without delusion about the materiality of life, with a soulful reverence for the poetry of existence. Lightman closes his essay on the science of nothingness with a sentiment of touching, inescapable humanity:
What I feel and I know is that I am here now, at this moment in the grand sweep of time. I am not part of the void. I am not a fluctuation in the quantum vacuum. Even though I understand that someday my atoms will be scattered in soil and in air, that I will no longer exist, I am alive now. I am feeling this moment. I can see my hand on my writing desk. I can feel the warmth of the Sun through the window. And looking out, I can see a pine-needled path that goes down to the sea.
Another essay, titled “Immortality,” explores this irreconcilable dissonance between the creaturely and the cosmic — the dissonance from which we make our most symphonic art as we try to fathom our existence. Lying in his hammock one summer day, Lightman observes:
A hundred years from now, I’ll be gone, but many of these spruce and cedars will still be here. The wind going through them will still sound like a distant waterfall. The curve of the land will be the same as it is now. The paths that I wander may still be here, although probably covered with new vegetation. The rocks and ledges on the shore will be here, including a particular ledge I’m quite fond of, shaped like the knuckled back of a large animal. Sometimes, I sit on that ledge and wonder if it will remember me. Even my house might still be here, or at least the concrete posts of its footing, crumbling in the salt air. But eventually, of course, even this land will shift and change and dissolve. Nothing persists in the material world. All of it changes and passes away.
Robert Fludd’s pioneering 1617 conception of non-space, long before the notion of the vacuum existed in cosmology.
And yet, in an echo of one of the book’s subtlest yet profoundest undertones, Lightman challenges our binary view of life and death. With an eye to consciousness — “the seemingly strange experience” that furnishes “the most profound and troubling aspect of human existence” — he argues that death is not the life-switch in the off position but the gradual dimming of consciousness, of our experience of aliveness, through the deterioration of its physical infrastructure.
Ever since Cecilia Payne discovered the chemical fingerprint of the universe, we have known that the atoms we are made of — seven thousand trillion trillion atoms in each of us, on average — were forged in the furnace of faraway stars. We know, too, that every cell in our bodies — the tendons that stiffen our fists and the cortices that kindle our tenderness — is made of atoms. Lightman writes:
To an alien intelligence, each of us human beings would appear to be an assemblage of atoms, humming with our various electrical and chemical energies. To be sure, it is a special assemblage. A rock does not behave like a person… When we die, this special assemblage disassembles. The atoms remain, only scattered about.
That special assemblage is what we call consciousness. A century after Virginia Woolf observed that “one can’t write directly about the soul [for] looked at, it vanishes,” Lightman writes:
The soul, as commonly understood, we cannot discuss scientifically. Not so with consciousness, and the closely related Self. Isn’t the experience of consciousness and Self an illusion caused by those trillions of neuronal connections and electrical and chemical flows? If you don’t like the word illusion, then you can stick with the sensation itself. You can say that what we call the Self is a name we give to the mental sensation of certain electrical and chemical flows in our neurons. That sensation is rooted in the material brain. And I do not mean to diminish the brain in any way by affirming its materiality. The human brain is capable of all of the wondrous feats of imagination and self-reflection and thought that we ascribe to our highest existence.
But I do claim that it’s all atoms and molecules. If the alien intelligence examined a human being in detail, he/she/it would see fluids flowing, sodium and potassium gates opening and closing as electricity races through nerve cells, acetylcholine molecules migrating between synapses. But he/she/it would not find a Self. The Self and consciousness, I think, are names we give to the sensations produced by all of those electrical and chemical flows.
An understanding of death as “the name that we give to a collection of atoms that once had the special arrangement of a functioning neuronal network and now no longer does so” renders the boundary between life and death more like a shoreline redrawn by the receding tide pool than like a coastal cliff dropping off into the abyss. And yet even as a scientific materialist with no mystical inclinations and no belief in an afterlife, Lightman remains what we all are — fundamentally human in our special assemblage of atoms — and gives voice to that fundamental humanity with uncommon splendor of sentiment:
Despite my belief that I am only a collection of atoms, that my awareness is passing away neuron by neuron, I am content with the illusion of consciousness. I’ll take it.
And I find a pleasure in knowing that a hundred years from now, even a thousand years from now, some of my atoms will remain in this place where I now lie in my hammock. Those atoms will not know where they came from, but they will have been mine. Some of them will once have been part of the memory of my mother dancing the bossa nova. Some will once have been part of the memory of the vinegary smell of my first apartment. Some will once have been part of my hand.
If I could label each of my atoms at this moment, imprint each with my Social Security number, someone could follow them for the next thousand years as they floated in air, mixed with the soil, became parts of particular plants and trees, dissolved in the ocean and then floated again to the air. Some will undoubtedly become parts of other people, particular people. Some will become parts of other lives, other memories. That might be a kind of immortality.
As if it were not staggering enough how tiny a fraction of space life animates, Lightman observes that it also animates a fraction of time — not merely in terms of the transience of any one life, but in terms of all life occupying only a slender slice of the totality of time in the universe, as the discovery of cosmic acceleration has revealed.
The cosmic brevity of “the era of life” is bookended on one end by the slow condensation of colossal gas clouds into the first stars that forged the first atoms large enough to form complex structures, after the universe had already existed for about one billion years, and bookended on the other by the eventual death of all stars when they burn out in several thousand billion years, leaving behind a dark lifeless expanse of pure spacetime.
Here we each are, each existence a summer day suspended in the hammock of spacetime.
And yet even in these cold unfeeling cosmic facts, Lightman finds reason to swell the brevity of existence with the warm feeling of kinship that makes life worth living. With an eye to his grain-of-Gobi-sand analogy, he writes:
Life in our universe is a flash in the pan, a few moments in the vast unfolding of time and space in the cosmos… A realization of the scarcity of life makes me feel some ineffable connection to other living things… a kinship in being among those few grains of sand in the desert, or present during the relatively brief era of life in the vast temporal sprawl of the universe.
[…]
We share something in the vast corridors of this cosmos we find ourselves in. What exactly is it we share? Certainly, the mundane attributes of “life”: the ability to separate ourselves from our surroundings, to utilize energy sources, to grow, to reproduce, to evolve. I would argue that we “conscious” beings share something more during our relatively brief moment in the “era of life”: the ability to witness and reflect on the spectacle of existence, a spectacle that is at once mysterious, joyous, tragic, trembling, majestic, confusing, comic, nurturing, unpredictable and predictable, ecstatic, beautiful, cruel, sacred, devastating, exhilarating.
The cosmos will grind on for eternity long after we’re gone, cold and unobserved. But for these few powers of ten, we have been. We have seen, we have felt, we have lived.
https://getpocket.com/explore/item/probable-impossibilities-physicist-alan-lightman-on-beginnings-endings-and-what-makes-life-worth?utm_source=firefox-newtab-en-us
*
THE PEASANT UPRISING OF 1525
The Great Peasants’ War was premodern Europe’s largest popular rising. Early stirrings in the southwestern corner of what is now Germany in the summer of 1524 grew to affect vast parts of the Holy Roman Empire in the first half of 1525, before final confrontations in Austria brought the uprising to an end the following year. Well over 100,000 rebels mobilized in an attempt to force a new, more equitable order.
The peasants sought a world built on Scripture, without exploitative lords. They organized in military bands, agreed sets of demands, attacked castles, monasteries, and fortified settlements, and took on the professional armies of the Swabian League as well as those of other mighty princes. And they achieved some startling successes – notably the surrender of Weinsberg town and castle on 16 April 1525 – before crushing defeats in battles fought in May and June.
Writing in 1975, Peter Blickle, the war’s most influential interpreter, argued that the peasants’ aim was nothing less than a ‘Revolution of the Common Man’. In the end, however, military force triumphed over radical vision and tens of thousands paid with their lives.
The peasants’ weapons
This eruption of anger and violence had deep roots: the Holy Roman Empire of the German Nation, a complex and fragmented polity stretching over much of central Europe, was familiar with protests. In the early 1500s a series of small ‘Bundschuh’ revolts – named after the typical peasant footwear – broke out along the Upper Rhine.
The late medieval era had seen a process of communalization in the Empire, enabling not only major cities – such as Nuremberg, which acquired extensive territory – but also village councils to run local affairs fairly independently.
This ‘bottom-up’ development was rooted in the growth of self-determined economic production, after many feudal lords decided to rely on rent rather than cultivate their lands themselves. Soon, therefore, associations of town- and countryfolk were punching above their social weight, gaining both political and religious influence. This is what the reformist preacher Johann Eberlin von Günzburg referred to when he wrote of the peasants becoming ‘witzig’ (aware) in the early 1520s.
In response, many nobles sought to bolster their powers through harsher conditions of tenure, in which peasants became serfs tied to their land and were subjected to marriage restrictions as well as humiliating death duties, such as losing their most valuable animal or garment.
That this took place in the aftermath of Martin Luther’s posting of his Ninety-Five Theses and the upheavals of the Reformation is significant: the timing of the Bauernkrieg – as the war was described while it was unfolding – tells us less about an immediate economic crisis or single trigger, and more about the potent fusion of religious fervor and longstanding social tensions brewing in the Holy Roman Empire.
Articles of war
A map of the Empire in the early 1520s reveals those areas that became embroiled in the war. Its origins can be found in lordships such as Stühlingen where, in June 1524, peasants raised the Fähnlein, a banner symbolizing armed resistance, and appointed the mercenary Hans Müller as their captain. The rising spread to northeastern parts of the Swiss Confederation, then Upper Swabia, Württemberg, Franconia, the Palatinate, and Thuringia in southern and central Germany as well as the (today French) regions of Alsace and Lorraine the following year. Areas around Tyrol in Austria were not pacified until the summer of 1526.
Word of growing unrest spread with lightning speed, infecting more and more rural areas during the peak of the revolt in spring and early summer 1525. In the Allgäu (now part of Bavaria), for example, the peasants forged a ‘Christian Association’ sealed by a sworn oath in February, according to which they pledged to stand together and defend the Holy Gospel.
Each Haufen, or armed band, nurtured particular local grudges – at Stühlingen it was rumored that the countess had asked her subjects to collect snail shells as spindles to be used by lady courtiers – but there was much inter-regional communication, and much common ground.
That common ground was expressed most clearly in the Twelve Articles, a document sometimes considered an early statement of human rights. The Articles, drawn from numerous individual complaints, were compiled from late February 1525 and then deliberated by three peasant bands – from the Allgäu, Baltringen, and Lake Constance regions – which had gathered around Memmingen in Upper Swabia.
The two contributors known by name were Sebastian Lotzer, a journeyman furrier and author of several fly-sheets promoting the Reformation, and Christoph Schappeler, an evangelical preacher in this imperial free city and a man who leaned towards Zwingli rather than Luther.
The Articles cover matters of religion (demands for communal control over the election and dismissal of clergy as well as the administration of tithes), politics (no fresh state laws), and economic grievances (regarding access to natural resources and reduction of seigneurial obligations). They culminate in a breathtakingly radical rejection of human inequality. Considering that the son of God had died for all of mankind, Article Three asks, why should some of us be considered ‘aigen leüt’, serfs owned by other people?
It has been the custom hitherto for men to hold us as their own property, which is pitiable enough, considering that Christ has delivered and redeemed us all, without exception, by the shedding of His precious blood, the lowly as well as the great. Accordingly, it is consistent with Scripture that we should be free and wish to be so.
The reference to Jesus’ passion in the main text and supporting passages from the Bible (added in the margins of some versions) point to Reformation impulses, prompting Luther – who had initially shown a degree of sympathy for rural hardship – to furiously dissociate himself from the rising in a tract which called for the extirpation of these ‘murderous hordes’. In Luther’s view, the notion of ‘freedom’ belonged exclusively to the spiritual sphere; secular rulers were divinely ordained to keep order on earth however they deemed necessary.
Despite this, following their adoption by the bands at Memmingen on 6 March, and their subsequent dissemination via the new medium of print, the Twelve Articles – and numerous related sets of demands – galvanized the rebellion and soon became its manifesto. It would have been fascinating to see how such a program might have been implemented in areas falling under sustained peasant control.
And that prospect was by no means as unlikely as it may sound, for the establishment was caught by surprise. At the time of the war’s outbreak Emperor Charles V was busy fighting in the Italian Wars, winning an important victory against the French king Francis I at the Battle of Pavia on 24 February 1525. Over the following months hundreds if not thousands of monasteries (for example Weissenau in early March), castles (Rothenfels in early May), and cities (the episcopal residence of Würzburg on 9 May and Freiburg im Breisgau on 24 May) surrendered when besieged by the rebels.
Religious houses suffered particularly fierce attacks – and consequences. Abbot Jakob Murer, who had initially engaged with the peasants’ grievances but ultimately fled when an armed band approached, captured the assault on his own Premonstratensian abbey in his famous Weissenau Chronicle. Yet rather than a mindless frenzy, Murer’s depiction of the sack suggests the deliberate targeting – and symbolic reclaiming – of particularly offensive privileges of monastic life: the excessive consumption of food (extracted from countryfolk via feudal dues) in the hall, copious accumulation of wine (replenished through tithe payments) in the cellar, and the monopolizing of natural resources (declared off limits to subjects) in the fishponds.
Peasants attack Weissenau Abbey
Some of the peasants’ gains, especially the early ones in Upper Swabia, required relatively little force, but rebel anger could erupt into brutal violence, most notably at the so-called Weinsberg massacre of 16 April, where, according to one clergyman’s account, the Count of Helfenstein and other nobles ‘were driven through the lances [of the rebels] contrary to all the rules of war and afterwards dragged out naked and let lie there’.
Only once the resources of the Swabian League – a political association of principalities and cities under the military command of Georg III, Truchseß of Waldburg – had been bolstered by imperial troops returning across the Alps after Pavia did fortunes change in the southwest.
Badly equipped bands of peasants with limited war experience stood little chance against the ranks of seasoned soldiers and mercenaries. The events in the various regional theaters unfolded and overlapped in bewildering patterns. One of the most prominent peasant coalitions – the millenarian movement led by the charismatic preacher Thomas Müntzer in Thuringia – succumbed to the joint forces of the princes of Hesse and Saxony at Frankenhausen on 14-15 May.
Further and ultimately decisive military defeats in the German heartlands followed over the next few weeks, at Würzburg in early June and at Pfeddersheim (Palatinate) later that month. The casualties taken there, the subsequent executions of captives (including Müntzer himself), and other punitive postwar measures against rebel areas resulted in a massive loss of life.
Although any such ‘confession’ needs to be treated with caution, Blesy Krieg’s account highlights the extent of the religious fervor behind the peasants’ actions. Rather than a carnivalesque release of pressure, his behavior at the priory reminds us of the influence of Luther’s explosive critique of the Church. Krieg concluded his assault by taking a stick and beating the statue of Saint William over the head. Found guilty on a further ten charges including theft, gambling debts, extortion, grievous bodily harm, threatening behavior, and attempted rape, Krieg was sentenced to death ‘on Thursday after St Bartholomew’s Day’.
Krieg mocked the Catholic mass and taunted saints. In the late 1970s Henry Cohn identified anticlericalism as a major factor spurring the peasants. This should not be conflated with a lack of belief – rural Christians across the Empire were still donating huge sums for the cure of souls and prayers for the dead on the eve of the Reformation.
But the rebels resented the ‘fat cat’ priests who exploited simple folk’s yearning for salvation. Many priests over-charged for ecclesiastical services (most notably when selling indulgences); others demanded excessive dues from their tenants (the Church owned vast swathes of land). The transgressions at Oberried described by Blesy are those of lay people who felt empowered, no longer in need of mediation by the clergy, and who had readily absorbed sentiments such as ‘the freedom of a Christian’ and ‘the priesthood of all believers’ picked up from Luther’s prewar writings.
In his early statements, Luther had granted parish congregations extensive powers, including the right to elect their own pastors. Yet he backpedalled once it became clear that they might choose the ‘wrong’ candidates, particularly those who accepted no authority beyond the Bible, rejected infant baptism as unscriptural, refused to obey secular rulers in matters of faith, and followed their own ‘inner light’.
Smeared as Schwärmer (‘spiritualists’) or Anabaptists (‘re-baptisers’) by their opponents, proponents of such ideas threatened the regime of the mainstream reformers. Since Luther envisaged religious not worldly liberation, the baton of militant leadership soon passed to more radical figures. One of them was the fiery Thomas Müntzer, who – having been forced out of his initial Wittenberg base, where he had ruffled too many feathers – advocated for a much more holistic godly society, gaining followers who named themselves ‘prophets’ and challenging princes to recognize their responsibility to implement scriptural commands in the here and now.
Müntzer believed that the godly had to take the sword in preparation for Christ’s second coming, which he and his flock saw as imminent. Having done exactly that at Frankenhausen, resulting in a devastating slaughter, Müntzer was captured and executed at Mühlhausen on 27 May. Neither the revolution nor the end of the world had materialized.
Memories of war
Despite its failure, the war had significant repercussions. Albrecht Dürer, the great Renaissance artist, contemplated (and sketched) a monument to the fallen peasant before his death in 1528. Beyond the military defeats and the brutality of the retributions – which included the burning of entire villages – the war scarred the Empire, rivaled in collective memory only by the devastations of the Thirty Years’ War in the 17th century.
Rural enthusiasm for the Reformation – built on the hope that the new religion might usher in a fairer Christian society – faded and the Empire’s towns and villages grew cautious not to rock the boat. The remaining Anabaptists drifted further away from the mainstream churches, some becoming drawn into another challenge to the system, the short-lived Kingdom of Münster in the mid-1530s.
But the peasants’ actions did have some immediate consequences. Shocked by the sheer scale of the rising, princes and lords may have exercised more caution when dealing with their subjects in the decades after the war, for example by reducing death duties. In the highly autonomous Three Leagues of the Grisons in the Central Alps (protected by alliances with the Swiss), two sets of Articles passed at Ilanz in 1524 and 1526 echoed elements of the Twelve Articles agreed at Memmingen. Here a rural polity did succeed in institutionalizing clergy elections by the parishioners while stripping the local bishop of his secular powers.
The movement also inspired radicals well beyond the early modern period. In 1850 Friedrich Engels published The Peasant War in Germany written, as he later stated, ‘while the impression of the counter-revolution just then completed was still fresh’. Engels argued that three centuries ‘have passed and many a thing has changed; still the Peasant War is not so far removed from our present struggle, and the opponents who have to be fought remain essentially the same’.
For the East German historians and political leaders living in the socialist society that Engels and Marx had helped to inspire after the Second World War, anti-feudal peasant rebels – and especially a revolutionary such as Müntzer – had far greater appeal than Luther, whom they dismissed as a prince’s servant. In the German Democratic Republic, stamps and coins carried Müntzer’s portrait, while his birthplace Stolberg received the official designation ‘Thomas-Müntzer-Stadt’.
The most remarkable manifestation of the DDR’s endorsement is Werner Tübke’s monumental 360-degree painting of the Thuringian rebels, Early Bourgeois Revolution in Germany. Tübke’s panorama was commissioned by the Ministry of Culture in 1976 and took more than a decade to complete. Still housed in the gallery built specifically for it at Bad Frankenhausen, it opened to the public on 14 September 1989, just weeks before the fall of the Berlin Wall.
500 years on
This year’s quincentenary promises to reinvigorate the field, with commemorative events – including major exhibitions in Baden-Württemberg, Thuringia, and South Tyrol – and new publications. In an openly revisionist essay, ‘Beyond the Heroic Narrative: Towards the Quincentenary of the Peasants’ War, 1525’ (2023), the leading early modernist Gerd Schwerhoff cautioned against celebratory readings:
As enticing as it may be to regard the rebellious peasants collectively as precursors of modern emancipation movements, this narrowing of perspective runs the risk of missing key features of the historical events. Those features include the fragmented and divided character of the movement, the forced involvement of many participants, and the lack of a universal vision.
In retrospect, the German Bauernkrieg was neither the first, nor last, peasants’ war. But, like its predecessor in England in 1381 and a later Swiss manifestation in 1653, it belongs to history’s knife-edge moments, where – for a few brief weeks – turning the world upside down seemed entirely possible.
What made 1525 so distinct compared to ‘typical’ premodern risings was its wide geographical spread over half of the Empire and its fusion of ‘bread and butter’ concerns with a religious ideology, as well as a captivating vision of a world without serfdom.
Mindful of their status and privileges, 16th-century lords could respond in only one way.
https://www.historytoday.com/archive/feature/great-german-peasants-war?utm_source=Newsletter&utm_campaign=591c9ca1dd-EMAIL_CAMPAIGN_2017_09_20_COPY_01&utm_medium=email&utm_term=0_fceec0de95-591c9ca1dd-1214148&mc_cid=591c9ca1dd
*
A BIT OF HUMOR
1. When one door closes and another door opens, you are probably in prison.
2. To me, "drink responsibly" means don't spill it.
3. Age 60 might be the new 40, but 9:00 pm is the new midnight.
4. It's the start of a brand new day, and I'm off like a herd of turtles.
5. The older I get, the earlier it gets late.
6. When I say, "The other day," I could be referring to any time between yesterday and 15 years ago.
7. I remember being able to get up without making sound effects.
8. I had my patience tested. I'm negative.
9. Remember, if you lose a sock in the dryer, it comes back as a Tupperware lid that doesn't fit any of your containers.
10. If you're sitting in public and a stranger takes the seat next to you, just stare straight ahead and say, "Did you bring the money?"
11. When you ask me what I am doing today, and I say "nothing," it does not mean I am free. It means I am doing nothing.
12. I finally got eight hours of sleep. It took me three days, but whatever.
13. I run like the winded.
14. I hate when a couple argues in public, and I missed the beginning and don't know whose side I'm on.
15. When someone asks what I did over the weekend, I squint and ask, "Why, what did you hear?"
16. When you do squats, are your knees supposed to sound like a goat chewing on an aluminum can stuffed with celery?
17. I don't mean to interrupt people. I just randomly remember things and get really excited.
18. When I ask for directions, please don't use words like "east."
19. Don't bother walking a mile in my shoes. That would be boring. Spend 30 seconds in my head. That'll freak you right out.
20. Sometimes, someone unexpected comes into your life out of nowhere, makes your heart race, and changes you forever. We call those people cops.
21. My luck is like a bald guy who just won a comb.
~ source unknown.
*
THE WORLD’S LARGEST ICEBERG RUNS AGROUND OFF A REMOTE ISLAND
The world's largest iceberg has run aground in shallow waters off the remote British island of South Georgia, home to millions of penguins and seals.
The iceberg, which is about twice the size of Greater London, appears to be stuck and should start breaking up on the island's south-west shores.
Fishermen fear they will be forced to battle with vast chunks of ice, and it could affect some macaroni penguins feeding in the area.
But scientists in Antarctica say that huge amounts of nutrients are locked inside the ice, and that as it melts, it could create an explosion of life in the ocean.
"It's like dropping a nutrient bomb into the middle of an empty desert," says Prof Nadine Johnston from British Antarctic Survey.
Ecologist Mark Belchier who advises the South Georgia government said: "If it breaks up, the resulting icebergs are likely to present a hazard to vessels as they move in the local currents and could restrict vessels' access to local fishing grounds.”
The stranding is the latest twist in an almost 40-year story that began when the mega chunk of ice broke off the Filchner–Ronne Ice Shelf in 1986.
We have tracked its route on satellite pictures since December when it finally broke free after being trapped in an ocean vortex.
As it moved north through warmer waters nicknamed iceberg alley, it remained remarkably intact. For a few days, it even appeared to spin on the spot, before speeding up in mid-February traveling at about 20 miles (30km) a day.
"The future of all icebergs is that they will die. It's very surprising to see that A23a has lasted this long and only lost about a quarter of its area," said Prof Huw Griffiths, speaking to BBC News from the Sir David Attenborough polar research ship currently in Antarctica.
On Saturday the 300m tall ice colossus struck the shallow continental shelf about 50 miles (80km) from land and now appears to be firmly lodged.
"It's probably going to stay more or less where it is, until chunks break off," says Prof Andrew Meijers from British Antarctic Survey.
It is showing increasing signs of decay. Once 3,900 sq km (1,500 sq miles) in size, it has been steadily shrinking, shedding huge amounts of meltwater as it moves into warmer seas. It is now an estimated 3,234 sq km.
"Instead of a big, sheer pristine box of ice, you can see caverns under the edges," Prof Meijers says.
Tides will now be lifting it up and down, and where it is touching the continental shelf, it will grind backwards and forwards, eroding the rock and ice.
"If the ice underneath is rotten — eroded by salt — it'll crumble away under stress and maybe drift somewhere more shallow," says Prof Meijers.
But where the ice is touching the shelf, there are thousands of tiny creatures like coral, sea slugs and sponge.
"Their entire universe is being bulldozed by a massive slab of ice scraping along the sea floor," says Prof Griffiths.
That is catastrophic in the short-term for these species, but he says that it is a natural part of the life cycle in the region.
"Where it is destroying something in one place, it's providing nutrients and food in other places," he adds.
There had been fears for the islands' larger creatures. In 2004 an iceberg in a different area called the Ross Sea affected the breeding success of penguins, leading to a spike in deaths.
But experts now think that most of South Georgia's birds and animals will escape that fate.
Some macaroni penguins that forage on the shelf where the iceberg is stuck could be affected, says Peter Fretwell at the British Antarctic Survey.
As the iceberg melts, it releases freshwater into the salt water, which can reduce the amount of food, including krill (a small crustacean), that penguins eat.
The birds could move to other feeding grounds, he explains, but that would put them in competition with other creatures.
The ice could block harbors or disrupt sailing when the fishing season starts in April.
"This will be the most ice from an iceberg we will have ever dealt with in a fishing season, but we are well-prepared and resourced," says Andrew Newman from Argos Froyanes.
But scientists working in Antarctica currently are also discovering the incredible contributions that icebergs make to ocean life.
Prof Griffiths and Prof Johnston are working on the Sir David Attenborough ship, collecting evidence of what their team believes is a huge flow of nutrients from Antarctic ice around the globe.
Particles and nutrients from around the world get trapped into the ice, which is then slowly released into the ocean, the scientists explain.
"Without ice, we wouldn't have these ecosystems. They are some of the most productive in the world, and support huge numbers of species and individual animals, and feed the biggest animals in the world like the blue whale," says Prof Griffiths.
A sign that this nutrient release has begun around A23a will be vast phytoplankton blooms blossoming around the iceberg. They would look like a green halo around the ice, visible in satellite pictures over the coming weeks and months.
The life cycle of icebergs is a natural process, but climate change is expected to create more icebergs as Antarctica warms and becomes more unstable.
More could break away from the continent's vast ice sheets and melt at quicker rates, disrupting patterns of wildlife and fishing in the region.
https://www.bbc.com/news/articles/c20d1xp6046o
*
THE WORLD’S STRONGEST OCEAN CURRENT IS WEAKENING
Flowing clockwise around Antarctica, the Antarctic Circumpolar Current is the strongest ocean current on the planet. It's five times stronger than the Gulf Stream and more than 100 times stronger than the Amazon River.
It forms part of the global ocean "conveyor belt" connecting the Pacific, Atlantic and Indian oceans. The system regulates Earth's climate and pumps water, heat and nutrients around the globe.
But fresh, cool water from melting Antarctic ice is diluting the salty water of the ocean, potentially disrupting the vital ocean current.
Our new research suggests the Antarctic Circumpolar Current will be 20% slower by 2050 as the world warms, with far-reaching consequences for life on Earth.
The Antarctic Circumpolar Current keeps Antarctica isolated from the rest of the global ocean
The Antarctic Circumpolar Current is like a moat around the icy continent.
The current helps to keep warm water at bay, protecting vulnerable ice sheets. It also acts as a barrier against invasive species, such as rafts of southern bull kelp and any animals hitching a ride on them, sweeping them away as they drift towards the continent. And it plays a big part in regulating the Earth’s climate.
Unlike better known ocean currents – such as the Gulf Stream along the United States' east coast, the Kuroshio Current near Japan, and the Agulhas Current off the coast of South Africa – the Antarctic Circumpolar Current is not as well understood. This is partly due to its remote location, which makes obtaining direct measurements especially difficult.
The influence of climate change
Ocean currents respond to changes in temperature, salt levels, wind patterns and sea ice extent. So the global ocean conveyor belt is vulnerable to climate change on multiple fronts.
Previous research suggested one vital part of this conveyor belt could be headed for a catastrophic collapse.
Theoretically, warming water around Antarctica should speed up the current, because density changes and winds around Antarctica dictate its strength. Warm water is less dense (lighter), and this should be enough to speed up the current. But observations to date indicate the strength of the current has remained relatively stable over recent decades.
This stability persists despite melting of surrounding ice, a phenomenon that had not been fully explored in scientific discussions in the past.
Advances in ocean modeling allow a more thorough investigation of the potential future changes.
We used Australia’s fastest supercomputer and climate simulator in Canberra to study the Antarctic Circumpolar Current. The underlying model, ACCESS-OM2-01, was developed by Australian researchers from various universities as part of the Consortium for Ocean-Sea Ice Modelling in Australia.
The model captures features others often miss, such as eddies. So it's a far more accurate way to assess how the current's strength and behavior will change as the world warms. It picks up the intricate interactions between ice melting and ocean circulation.
In this future projection, cold, fresh melt water from Antarctica migrates north, filling the deep ocean as it goes. This causes major changes to the density structure of the ocean. It counteracts the influence of ocean warming, leading to an overall slowdown in the current of as much as 20% by 2050.
Far-reaching consequences
The consequences of a weaker Antarctic Circumpolar Current are profound and far-reaching.
As the main current that circulates nutrient-rich waters around Antarctica, it plays a crucial role in the Antarctic ecosystem.
Weakening of the current could reduce biodiversity and decrease the productivity of fisheries that many coastal communities rely on. It could also aid the entry of invasive species such as southern bull kelp to Antarctica, disrupting local ecosystems and food webs.
A weaker current may also allow more warm water to penetrate southwards, exacerbating the melting of Antarctic ice shelves and contributing to global sea-level rise. Faster ice melt could then further weaken the current, setting off a vicious cycle of slowdown.
This disruption could extend to global climate patterns, reducing the ocean’s ability to mitigate climate change by absorbing excess heat and carbon from the atmosphere.
The need to reduce emissions
While our findings present a bleak prognosis for the Antarctic Circumpolar Current, the future is not predetermined. Concerted efforts to reduce greenhouse gas emissions could still limit melting around Antarctica.
With proactive and coordinated international actions, we have a chance to address and potentially avert the effects of climate change on our oceans.
https://www.bbc.co.uk/future/article/20250303-the-worlds-strongest-ocean-current-is-at-risk
*
Why do many people not believe in God?
For the same reason we don’t believe in:
Santa Claus,
The Easter Bunny,
The Tooth Fairy,
Ghosts,
Unicorns,
Mermaids … or …
Trolls under bridges (although there be plenty lurkz here methinks).
The idea of God (or any gods) is silly and childishly selfish.
Childish? Because the believer surrenders all responsibility to an imaginary entity.
Selfish? Because such belief absolves the believer of charitable consideration for others.
(Praying doesn’t count and if you think it does you’re just confirming the selfishly childish thing.) ~ Abby Hart, Quora
Another answer to why so many people don’t believe in God: Education.
Religion no longer has a monopoly on information.
~ Barry Hampe, Quora
Oriana:
Milosz’s answer: Technology, including progress in medicine. When we develop a health problem, we turn to medicine, rather than religion. That's certainly true even for the Pope. In Milosz’s view, technology provided the counter-magic to religion’s magic rituals. And technology’s great advantage was that it worked, making the world safer and more pleasant, our life less of a “valley of tears.” (Of course each solution also breeds its own problems: technology found ample use in war, as well as being a culprit in the destruction of the earth.)
For me personally, however, the answer was education — in the broadest sense of the word. I became fascinated by other mythologies: other creation myths, other myths about dying and rising gods. And there was already a seed planted by, would you believe it, the first nun who taught us about religion. When talking about Yahweh’s (I knew that tribal name; I did not know about “Elohim”) separating “the waters below from the waters above,” she said matter-of-factly, “This comes from Babylonian mythology.” At eight years old I didn’t have a clear idea of what mythology was, but I acquired more and more knowledge over the years until, at fourteen, I suddenly had this fateful thought about the Judeo-Christian beliefs: “It’s just another mythology.”
And once I saw religion as mythology, I couldn’t unsee it.
The first page of the Bible, the story of creation, is a particularly striking example of mythology. My vague expectations that in old age I might return to religion, in order to deal with the fear of death, crumbled when, in adulthood, I read the story of creation and once more had the thought, “This is such *obvious* mythology.” Adam shaped from the earth and Eve from Adam’s rib — it was no longer possible to take this literally.
And the attempt at reconciling the Judaic creation myth with science by saying “To God, one day could mean a million years,” still doesn’t really work. Add to this, “And on the seventh day God rested.” Why? Was God tired?
And what about the almost infinite number of other galaxies with their multitudes of stars and planets? If the universe is about us, humans, why would God create all those extras? Answers such as “We cannot know the mind of God,” seemed a lame evasion.
There were other problems, such as needing to see oneself as a sinner worthy of eternal damnation, but neither that nor the problem of evil were critical in my liberation from Catholicism. It was all about mythology. If we are forbidden to believe in Zeus or Wotan or Astarte, why should we be required to believe in Yahweh, under the pain of eternity in hell? And not just believe in him, but actually love him?
Even when I still believed, I found it utterly impossible to love him. And since this was the most important commandment, “to love God above all,” and there was no way I could obey this commandment, I knew I was doomed to hell. But that was another sin, the hard to grasp sin against the Holy Ghost. But let me not go into the psychological harm caused by religion — that’s a separate huge topic.
St. Yakub's, my parish church in Warsaw. By coincidence, my maternal grandfather's name happened to be Yakub. He died when I was very young. An Auschwitz survivor like my grandmother (in fact she heroically managed to save his life), he never became harsh or bitter. I was a miracle of new life to him, and I received nothing but love from him. Love and laughter, since he adored making me laugh. One afternoon he had a massive stroke, and died in front of my eyes.
*
HUMAN EVOLUTION UPDATED
About seven million years ago, though that could be way off, the chimpanzee and Homo lines split. The chimp developed one way; we went another. We don’t know who our last common ancestor was, or even who the last hominin ancestor that would produce sapiens was. But we have learned some amazing new things.
We know that, with all due respect to stories of supernatural generation of species from dirt, we aren’t a singular artifact but a hybrid hominin, with some Neanderthal, Denisovan and other heredity inside. While sapiens and others met multiple times over 200,000 years, we now realize that all non-Africans, all, stem from just one interspecies brush about 45,000 years ago.
As Homo progressed, we changed the world around us. We did this chiefly by eating all the big animals. At least we began painting the animals' pictures before driving them to extinction, and did so much earlier than had been known. Yeah we ate plant matter too; it bears pointing out that physiologically, we have to. And we do care. We always did, as indicated by the child of Neanderthals who survived — with Down's syndrome.
It's true that we met and mixed with Neanderthals many times, but only one 'gene flow event' produced all non-Africans alive today, and it was later than we thought, studies find.
Somewhere on the path out of Africa, our ancestors encountered Neanderthals, and their joint children would beget all non-African humans alive today. Now two new research papers have shed startling new light on how this happened. The papers, "Earliest modern human genomes constrain timing of Neanderthal admixture" in Nature led by researchers at the Max Planck Institute for Evolutionary Anthropology in Germany and "Neanderthal ancestry through time: Insights from genomes of ancient and present-day humans" in Science were published Thursday.
The Nature paper by Arev Sümer, Johannes Krause and colleagues analyzes seven modern humans who lived in Europe 49,000 to 42,000 years ago – the earliest humans in Europe to be studied to date. The Science paper by Leonardo Iasi, Priya Moorjani and colleagues analyzed 300 current and ancient individuals to study the timing and the duration of the intermixing.
Previous research showed three major Homo sapiens-Neanderthal admixture events: over 200,000 years ago, 120,000 to 105,000 years ago and then after 60,000 years ago. But the new work implies that all non-Africans today result from a lineage of modern humans that mixed with Neanderthals 49,000 to 45,000 years ago in a single event.
By single event we don't mean one coupling on one starry night, but a process of gene flow that may have lasted centuries or even a few thousand years when the two species overlapped, the researchers say.
The other mixing events did happen. They resulted in hybrid human-Neanderthals. We have found traces of these hybrids. The other lineages went extinct. Ours didn’t.
Caught knapping
The genetic material for the new work was recovered from seven people who lived 49,000 to 42,000 years ago at Ranis, Germany and Zlatý kůň (“golden horse”) in the Czech Republic. Though the two sites are 230 kilometers apart, the seven were related, Sümer said in a press conference.
The Zlatý kůň people were cousins of the Ranis people, fifth or sixth degree. The team also identified the earliest modern family at Ranis: a mother and daughter, found with a second- (or third-) degree cousin.
The people at Zlatý kůň and Ranis were not our ancestors: their line died out. But they had the same Neanderthal background that we do.
Which means? About 50,000 years ago a band of modern humans left Africa. Possibly while crossing through the Middle East, they mated with Neanderthals about 49,000 to 45,000 years ago. Reaching Europe, the descendants of the band split up, with one branch forming the Zlatý kůň and Ranis family, which died out. Another branch became our ancestors.
The humans and Neanderthals may have lived in proximity for thousands of years, the researchers say. That doesn't necessarily mean they were having relations for 5,000 years.
We have no idea how "it" went down, the archaeologists clarified in the press conference. There is no archaeological or cultural evidence whatsoever to temper our fancies in this context. We can't even point at clear cultural transfer.
Note the heartbreak of the Lincombian-Ranisian-Jerzmanowician cultural complex from about 45,000 years ago, Krause says.
LRJ artifacts, characterized by sophisticated leaf-shaped stone blades, have been found across Europe from Britain to Poland. Even in that space, LRJ sites are very rare, yet they badly muddied the waters because, based on the timing and geography, the complex was assumed to be a very late and highly skilled Neanderthal culture.
It turned out that modern humans had reached northern Europe by 45,000 years ago. The suggestion arose that the blades were so advanced because the Neanderthals had encountered modern humans, who graciously taught them extreme knapping.
Recent analysis of an LRJ site in Thuringia deduced that, after all, the LRJ makers were modern humans who had penetrated northern Europe that long ago. Ditto for the gorgeous Châtelperronian technology; we don’t know who made it — Neanderthals, modern humans, or hybrids.
So we have no evidence of cultural transfer between Neanderthals and modern humans, but do have solid evidence for sex, leading Priya Moorjani to observe that we were all one species.
"The differences we imagine between these groups weren't very big," she says. "They could mix and did so for a long period of time and lived side by side over time, so I think that shows we were far more similar than different. I would expect exchange of ideas and cultures.”
Maybe. The new analyses suggest our Neanderthal ancestry component took shape very fast after that single putative gene flow event, within 100 generations, Krause says, thanks to strong selection on Neanderthal genes. In other words, some Neanderthal heredity strongly supported our occupation of Europe, while other genes could have been deadly for us.
Think of it this way. You are an early modern human venturing into prehistoric Europe, which was colder, and the pathogens were different. Neanderthals had been there for hundreds of thousands of years and had adapted to it, developing immunities to the pathogens. Your hybrid children could gain immunity from the Neanderthal parent, conferring a great advantage.
But some genes would not work well for us. Some parts of our genome have heavy Neanderthal signals and others have none (and such was the case already in the earliest hybrids, Krause explains).
For instance, we ladies have almost no Neanderthal or Denisovan signals in our X chromosomes. Does that imply the sex was confined to human women with Neanderthal men? It does not. "There are regions in our genome that don't tolerate Neanderthal DNA," Krause explains. Perhaps human fetuses with a Neanderthal X chromosome weren't viable.
It bears adding: they may have mixed over centuries or a few thousand years — geologically, that’s an eyeblink, but in terms of human history, consider how much has happened in the last 7,000 years, such as the rise of civilization, Benjamin Peter points out.
Separate work implies that Denisovans, a sister species to Neanderthals (or are we all one?), survived in Southeast Asia, mixing with humans, until perhaps 15,000 years ago. Maybe they did, but we won.
We won? A little humility might be in order. The human story isn't just a story of success. "We also went extinct several times," points out Science coauthor Benjamin Peter. In fact we always went extinct in Europe, and all other modern humans in Europe joined the Neanderthals in that final void, except for one little band that didn't. The end.
*
ARCTIC RUSH: INSIDE THE 19TH CENTURY CRAZE TO REACH THE NORTH POLE
Interest in Arctic expeditions and the North Pole exploded in the 1840s and 1850s. In Great Britain, but also in the US, interest was fueled by the rescue operations to find John Franklin and the other survivors, and later the race between nations to reach the North Pole. It was Lady Jane Franklin, wife of the missing Franklin, who triggered what was called at the time Arctic fever.
The North Pole has always been associated with competition. Initially it was simply the race to get there first. But many races have since followed. I have only taken part in one, and that was to be the first to reach the Pole without the help of dogs, depots, and snowmobiles. Others have competed to be the first person to go there alone, the first person from their country, the first to go there and come back unsupported, the first to cross the Arctic Ocean in winter.
When we went there, there were other expeditions, from Russia, South Korea, the UK, and Canada, also trying to reach the North Pole. It was a busy time out on the ice. The British team were our toughest competitors. The expedition was led by the former SAS soldier Sir Ranulph Fiennes, who has been named the world’s greatest living explorer in the Guinness Book of Records. Ran, as we call him, set the record for the farthest unsupported journey north in 1986. The 1990 expedition was made up of two men, the other being the English doctor Mike Stroud. (Børge and I never understood why there needed to be a leader for two men.)
I was not aware that there was a new race to the North Pole until 1987. That year, I read an article in National Geographic magazine, the publication read by most explorers at the time, in which, having reached the North Pole on foot with aerial support, the Frenchman Jean-Louis Étienne asked: “Will anyone ever reach the Pole under his own power, entirely unassisted?” For me, the question was an invitation to try.
*
The mission of Lady Jane Franklin’s husband had been to explore the Arctic; her mission became to prompt, persuade, and pressure the Admiralty and other private institutions to search for him. Between 1848 and 1859, there was a race to find any survivors. More than fifty expeditions set out from the US and Britain.
The search team captured Arctic foxes and attached written messages to them addressed to Franklin and his men, giving the position of rescue ships. Then the animals were released again. The hope was that Franklin might shoot the foxes for food and so find out that a rescue operation was in the vicinity.
A similar message was attached to balloons, which were then released to float over the ice, water, and skerries. Private and government expeditions competed to find clues as to what might have happened. There is no record of how many people died during these rescue attempts, but having read numerous accounts of expeditions to the Arctic and subsequent rescue operations, I think that far more people died in connection with the rescue than with the expedition itself.
Six years after the race to find Franklin had begun, the explorer and surgeon John Rae, from the Orkneys, returned to London in 1854, following one such rescue expedition. He reported that the Inuits had noticed that the British refused to eat the blubber and innards from seals and walrus so essential to the Inuits’ diet, and that the British often died from scurvy and starvation as a result.
In his report to the Admiralty, Rae described the remains of some bones he had found, presumably from Franklin’s expedition, that looked as if they had been cooked and cut with a metal knife, and from “the contents of the kettles, it is evident that our wretched countrymen had been driven to the last dread resource—cannibalism.” It seems that the men of the Admiralty had grown tired of sending new expeditions north, and wanted an end to it, so they leaked this information to The Times. People were appalled. The idea that civilized men could eat each other was horrifying to the Victorians.
Readers of newspapers, books, and periodicals, and lecture audiences, had lapped up the details of countless rescue operations and the uncertainty surrounding the fate of the polar explorers. In the spirit of Edmund Burke, they were both captivated and repelled by the terrible sequence of events. But, as Burke had predicted, the readers’ experience of the sublime faded into terror when the specter of scurvy, starvation, and cannibalism loomed.
A good “sublime” story should stoke two strong emotions in the reader: first, shock or helplessness, then enormous relief at what rational people can achieve when faced with grave danger. The latter was missing from the story of Franklin’s final expedition.
Franklin’s wife, Jane, and the writer Charles Dickens were two of the most vocal critics of these claims of cannibalism. It is understandably hard to accept that your husband may have eaten one of his own crew. Dickens’s explanation of why he thought it was not possible is more interesting—and more revealing. He claimed in his own periodical, Household Words, that the English, with their superior culture, could never be tempted by what he, quite correctly, called “the last resort.”
It must have been the more primitive creatures, the Inuits, whom he preferred to call “lying savages,” who had succumbed to cannibalism. Dickens did not accuse Rae of lying, but said that the stories the Inuits had told him were nothing more than “the chatter of a gross handful of uncivilized people” and should be left untold. For Dickens, it was natural to think that a good Englishman would rather starve to death. Unlike other nations and races, they were far too civilized to eat human flesh.
*
The interest in expeditions to the North Pole through the nineteenth century to 1909, when Robert Peary and Frederick Cook claimed to have reached the Pole, must be seen in light of the technological developments in the same period. As interest in the North Pole and polar explorers like William Parry and John Franklin grew, the first steps in the globalization of news and literature propelled their stories around the world. The culture of the international celebrity was born, and polar explorers became famous throughout the greater part of Europe and eventually the US.
Until the start of the 1800s, access to literature, be it novels, poetry, or nonfiction, was limited. In the first half of the century, thanks to new technology, literature started to be mass-produced, and could be distributed more efficiently by train. More people learned how to read, gas lighting became more readily available in people’s homes for them to read by, prices became more affordable as printing became steam-powered, and, as living standards rose, people gained more free time and disposable income. The novel Frankenstein and books by William Parry and John Franklin were sold in bookshops and kiosks, often in train stations.
Telegraph lines were installed along the railways, and suddenly news could reach the towns and cities in no time at all. The expeditions and their heroic feats became headline news. Newspapers published national and international news every day. In August 1858, a telegraph cable was pulled along the seabed between the US and Great Britain. It fell apart three weeks later, and a new cable was laid in 1865, which also broke. Another cable was successfully laid the following year and cross-Atlantic communication shrank from two weeks to two minutes.
Expeditions and newspapers were heavily dependent on each other. The media needed a constant supply of new heroes and dramatic stories. And the more prestige an expedition could muster, the more money and state support followed. Newspaper barons planned and organized their own polar expeditions, and the rights to stories and books were sold before the expeditions had even set off. Sometimes journalists even became explorers themselves, as was the case with Henry Stanley, who went off in 1871 to find David Livingstone in present-day Tanzania.
*
The American polar explorer Anthony Fiala once said that one does not give up until “the command given to Adam in the beginning—the command to subdue the earth—has been obeyed.” It seems that this became a kind of creed for the US, which was starting to claim its position as a polar nation.
The expeditions were militant in their ambitions to conquer nature, map the ocean, carry out scientific studies, and acquire territories on sea, land, and ice. They were also militant in their treatment of other people and cultures, and assumed it was their duty to tame and subjugate the forces of nature. Polar history entered an era that Joseph Conrad described as Geography Militant.
Like several other polar explorers, the American Elisha Kent Kane (1820–57) was inspired to travel north when he read about the missing John Franklin. Kane lived in Philadelphia and was involved in the latter part of the US’s invasion of Mexico in 1848 and 1849. He had no experience of the Arctic, but was convinced that America should help to find Franklin. And not only should America offer its assistance, but he himself was the best person for the job.
Kane took part in two rescue operations before he put together his own expedition in 1853. It was far smaller than the British expeditions, but he set off north as captain of the brig USS Advance.
As soon as there was the slightest bit of swell, Kane was violently seasick. He also had no idea how to navigate, had no experience of leadership, and was plagued by rheumatic fever. Only one of the men on board the Advance had any relevant Arctic experience. The reason the crew was so unsuitable was that it had been hard to find volunteers; Kane had eventually been forced to take on unemployed sailors who were hanging around the harbor.
Instead of heading toward the area where Franklin might be, Kane decided to look for him somewhere he was highly unlikely to be: Smith Sound, which lies east of Ellesmere Island, closer to Greenland. His intention was in fact to find a new route to the North Pole, fighting his way through to ice-free waters, so as to be the first to reach the Pole.
By saying it was a rescue operation, he cleverly secured financial support and a boat. American sponsors liked the idea that their young nation could take over where Great Britain had failed, and save the superpower’s missing superhero.
When Kane’s expedition reached northwest Greenland, he proved that while he might be a poor sailor and inexperienced polar expedition leader, he was a sound pioneer. He brought something new to polar exploration. Unlike Parry and Franklin, he spent time with the Inuit to learn how to drive a dogsled, the best mode of transport for crossing the ice.
In 1854, Kane’s plan was to abandon the Advance when she became beset by ice, and carry on north. But he made the same mistake as countless polar explorers. He gave in to his restlessness and set off too early, when the temperature was around -40°F. He ended up having a mental breakdown along the way. This seems only natural to me, given that he had started his preparations to become an explorer only a few years before, by spending a few nights in a tent close to his home city of Philadelphia.
As they trekked north, Kane became convinced that a member of the expedition was a polar bear and gave orders for him to be shot. Fortunately, no one obeyed the order. But two of the crew died later on their return: one from tetanus and the other after his foot was amputated. Kane did not let the deaths or his own mental health deter him. He headed north again the same summer, with those who were still healthy enough to go. At some point, one of the team climbed a ridge to look around. To the northwest, he saw open water and clouds heavy with rain, and believed that the expedition had discovered the ice-free Arctic Ocean.
Pierre Berton concluded in his book The Arctic Grail that it was a combination of wind, waves, and wishful thinking. It is hard to know how convinced the man was of his discovery, but Kane, who was waiting down by the sleds, was in no doubt. His expedition had achieved what generations of explorers had dreamed of: open water all the way to the North Pole and the Pacific. The sensational discovery would justify the expedition, but still he felt they could achieve more. Like me, Kane also wanted to impress his father. His hope was to be able to “advance myself in my father’s eyes by a book on glaciers and glacial geology.”
The intention was to sail home again in late summer 1854, but the ice did not melt as they had hoped and Advance remained trapped. Kane decided that they would all stay with the boat for a second winter. They had managed to keep the temperature on board above zero the winter before, thanks to the Inuit, who had taught them how to insulate the boat with moss and peat. But the crew mutinied, and some of the men started to walk south in the hope of avoiding another winter.
Nearly a year later, on May 20, 1855, Advance was still stuck in the ice. Kane and the remaining men abandoned the boat and walked for eighty-three days to Upernavik, on the west coast of Greenland. In a farewell letter left on board, in case they too went missing and someone found the boat, Kane wrote that they had so little coal left that they would soon have to start burning the ship’s timbers. Staying aboard was no longer an option. The last things they did on board were to pray and quietly pack a portrait of John Franklin.
Only one man died on the return journey—no mean feat, given all that they lacked and the number of men with scurvy. The US dispatched two ships to find and rescue the expedition in summer 1855. Two years of arduous struggle, frostbite, hunger, and very poor hygiene had rendered the men unrecognizable. Even Kane’s brother, who took part in the rescue operation, failed to recognize his dirty, exhausted, bearded brother when they met.
https://lithub.com/arctic-rush-inside-the-19th-century-craze-to-reach-the-north-pole/
*
“THE BEST OF ALL POSSIBLE WORLDS”
The original optimist, Leibniz, was mocked and misunderstood. Centuries later, his worldview can help us navigate modern life
What does it mean to be optimistic? We usually think of optimism as an expectation that things will work out for the best. While we might accept that such expectations have motivational value – making it easier to deal with the ups and downs of everyday life, and the struggle and strife we see in the world – we might still feel dubious about it from an intellectual perspective. Optimism is, after all, by its nature delusional; ‘realism’ or outright pessimism might seem more justifiable given the troubles of the present and the uncertainties of the future.
Those of us familiar with Voltaire’s celebrated novel Candide, or Optimism (1759) might be reminded of his character Dr Pangloss, and his refrain that all must be for the best ‘in this best of all possible worlds’. As Pangloss and his company are subjected to disaster, war, mutilation, cruelty and disease, his article of faith seems increasingly blind. Pangloss, a professor of ‘metaphysico-theologico-cosmolo-nigology’, is a vicious caricature of the great German polymath Gottfried Wilhelm Leibniz, and his catchphrase is Voltaire’s snappy formulation of the German’s attempt to provide a logical argument for optimism.
Or, really, a theological argument. Leibniz didn’t set out to explain why some people are perpetually cheerful about their prospects, but why an all-powerful, all-seeing and all-loving God allows evil to exist in the world. This ‘problem of evil’ has been debated for millennia, but it was Leibniz who first attempted to reason his way to an answer, rather than look to scripture for one. His inspiration came from his realization, in the early 1680s, that the path taken by light through a system of prisms or mirrors always followed the ‘easiest’, or ‘optimal’, path from source to destination. As the contemporary philosopher Jeffrey McDonough argued in A Miracle Creed (2022), he soon realized that many other phenomena followed a similar ‘principle of optimality’ – including, perhaps, the entirety of God’s creation.
Up until that point, it had generally been assumed that the cosmos was precisely the way it had to be. Leibniz, on the other hand, argued that God could have chosen from many laws and ingredients when making the world, but some combinations would not be internally consistent. So God, in His wisdom, chose the particular combination that led to a world that was both ‘simplest in hypotheses and richest in phenomena’. That might not be a perfect world, containing no suffering or evil, but it would be the best of all possible worlds. While there might be many possible ways to make a world, there’s only one optimal way. And this view of the world came to be known as optimism.
Leibniz’s view, put forward in his book Theodicy (1710), did not win instant acclaim. Critics sniffed at the idea that God’s motives could be deciphered in this way. But it was Voltaire, almost half a century later, who dealt the killing blow. Candide proved enormously successful. Published in five countries simultaneously, it is estimated to have sold more than 20,000 copies in its first year alone. Voltaire’s acerbic depiction of Leibniz and his ideas stuck – and, since the German was long dead, he was unable to fight his corner. Even today, many people are much more familiar with Voltaire’s satirical take than Leibniz’s rational one.
But was Leibniz on to something? Could his worldview help us regain a clearer sense of how to be optimistic in the present day? In his day, the idea that the world could be arranged any differently was novel to the point of outlandishness. Over centuries, that gradually changed as it became clearer that the cosmos contains places that are nothing like Earth, and that our planet itself had been dramatically different in the past. But it shot to prominence in the middle of the 20th century, when both philosophy and physics converged on the idea that ours is only one of many possible universes – or, at least, that this is a useful way to think about certain problems in logic and quantum theory.
And many other problems, too. This intellectual respectability has turned into cultural ubiquity: the idea that there are many possible worlds is intuitively appealing in a time when ever fewer people accept the idea of a divine plan. We are more likely to believe that the future is open, with many alternative paths we could take from today to tomorrow.
Whatever its scientific merits, the multiverse has become ubiquitous as a metaphor for the way the courses of our lives, and of our societies, are set by the decisions we make, the actions we take and the accidents that befall us. Once, we might have assumed that our fates lay in the lap of the gods, or that implacable physical laws dictate our every move. Today, we think the responsibility lies with us.
That makes Leibnizian optimism more appealing than it has been for centuries. Besides the renewed relevance of its central metaphor, Leibnizian optimism reminds us that we cannot see or understand the whole design of the world. And if we attempt to fix one problem at one time and place, there will likely be consequences elsewhere that we can’t anticipate. Climate change is a byproduct of the massive economic uplift engendered by the Industrial Revolution. The collapsing demographics of many societies result in part from huge improvements in infant mortality and life expectancy. The technology that lets you read my ideas here also reshapes societal norms and disrupts our polities.
That doesn’t mean we should stop trying to improve the world: it means we should expect perfection to remain perpetually out of reach. That’s not such a bitter pill to swallow: a few technological fantasists aside, no one really expects utopia to arrive tomorrow, or the day after that. But we can still strive for the best of all possible worlds. The key to making Leibniz’s version of optimism relevant to a secular, 21st-century worldview is to make ‘the best of all possible worlds’ an aspiration, not a statement of belief.
We may or may not live in a multiverse; but we can better shape the Universe we do live in if we think about its unrealized branches and forks – whether those lie along roads not taken in the past, or the paths we might walk into the future.
This might sound abstract or fanciful. But in fact, as I detail in my book The Bright Side (2025), there are many ways in which multiversal thinking is already applied, many of them down-to-earth and serious. We devise scenarios for the future of the climate, and our reactions to it; we role-play how markets might be plunged into crisis, conflicts might unfold and pandemics be contained.
Counterfactual historians ask: ‘What if?’, the better to understand how the world of today emerged from the world of yesterday – and futurists ask the same question to understand how it might give rise to the world of tomorrow. Billions of dollars, millions of lives, the trajectory of industries, societies and nations are determined, more or less explicitly, by possible-worlds thinking.
So, yes, optimism can take the form of blind faith that the future will be brighter than the present. Most of us, in our everyday lives, need some of that faith in order to keep moving forward when times get tough, when obstacles present themselves and the future is full of unknowns. But when it comes to how we think about the world we live in, and the challenges we all face, Leibnizian optimism – the original optimism – provides a fertile way to evaluate our options and shake off our feelings of fatalism. The world, and the future, is always full of possibilities – and we can always strive to make it the best of all possible worlds.
Adapted from The Bright Side: Why Optimists Have the Power to Change the World (2025) by Sumit Paul-Choudhury, published by Canongate Books.
https://psyche.co/ideas/why-its-possible-to-be-optimistic-in-a-world-of-bad-news
*
IRON WINDS AND MOLTEN METAL RAINS RAVAGE A HELLISH HOT JUPITER EXOPLANET
"Our observations indicate the presence of powerful iron winds, probably fueled by a hot spot in the atmosphere."
Lower those finger horns. "Iron Winds and Metal Rain" may be an awesome title for an album by a heavy metal band, but it's also a fairly accurate weather forecast for a hellish exoplanet called WASP-76b. The discovery of iron winds on this world demonstrates just how truly "alien" some planets beyond our solar system are.
Located around 634 light-years away from Earth in the Pisces constellation, the strange and hostile extrasolar planet or "exoplanet" has temperatures of around 4,350 degrees Fahrenheit (2,400 degrees Celsius), hot enough to vaporize iron and cause iron rains to pummel the planet's surface.
These intimidating temperatures arise from WASP-76b's proximity to its star, which classifies it as a "hot Jupiter" planet. This proximity also causes the planet to be tidally locked to its star, meaning one side always faces its sun, blasting it with radiation in perpetuity.
Recently, a team of scientists from the University of Geneva (UNIGE) and the PlanetS National Centre of Competence in Research (NCCR PlanetS) has discovered the presence of high-speed winds carrying vaporized iron from the permanent "dayside" of the planet to its relatively cooler "nightside," which perpetually faces out into space.
Once at the nightside of WASP-76b, the iron condenses and falls as molten metal droplets due to the cooler temperatures on the planet's permanent dark side.
"This is the first time that such detailed optical observations have been made on the day side of this exoplanet, providing key data on its atmospheric structure," research lead author Ana Rita Costa Silva, a doctoral student at the Instituto de Astrofísica e Ciências do Espaço said in a statement. "Our observations indicate the presence of powerful iron winds, probably fueled by a hot spot in the atmosphere.”
https://www.space.com/hot-jupiter-exoplanet-iron-winds-metal-rains
*
MOUTH BACTERIA MAY INDICATE FUTURE BRAIN HEALTH
Certain bacteria found in people's mouths may be linked to changes in brain function as they age, experts have said.
The study, led by the University of Exeter, found certain types of bacteria were associated with better memory and attention, while others were linked to poor brain health and Alzheimer's disease.
Lead author Dr Joanna L'Heureux said: "We might be able to predict if you have the Alzheimer's gene even before you start getting problems or think about going to the doctor for a diagnosis.”
The research is in early stages but study leads say they are now investigating whether eating certain healthy foods, such as nitrate-rich leafy greens, can influence brain health by boosting certain bacteria.
Co-author Prof Anne Corbett said: "The implication of our research is profound.”
She said: "If certain bacteria support brain function while others contribute to decline, then treatments that alter the balance of bacteria in the mouth could be part of a solution to prevent dementia.
"This could be through dietary changes, probiotics, oral hygiene routines, or even targeted treatments.”
The study recruited 115 volunteers, over the age of 50, who had already carried out cognitive tests as part of another project.
Researchers split them into two groups — those with no decline in brain function and those with some mild cognitive problems.
The participants in both groups sent in mouth rinse samples that were then analyzed and the bacteria populations studied.
The university said people who had large numbers of the bacteria groups Neisseria and Haemophilus had better memory, attention and ability to do complex tasks.
However, Dr L'Heureux said she found greater levels of the bacterium Porphyromonas in individuals with memory problems.
Meanwhile, she said, the bacterial group Prevotella was linked to low nitrite levels, which were more common in people who carried the Alzheimer's disease risk gene.
Dr L'Heureux said: "We would recommend you have things like beetroot, leafy greens like spinach, rocket, lettuces, lots of salads and reduce consumption of things like alcohol and highly processed sugary foods.”
Red beets and leafy greens are some of the biggest natural sources of nitrates.
Prof Anni Vanhatalo, associate pro-vice chancellor for research and impact at the university, said: "In the future, we could collect these [mouth] samples as part of GP appointments and get them processed to give an early indication if someone is at elevated risk.”
https://www.bbc.com/news/articles/cd7djpvv25ro
*
PEOPLE WITH ADHD HAVE SHORTER LIFE EXPECTANCY
Globally, ADHD affects an estimated 2.8% of people, although experts believe that many people with the condition are likely to be undiagnosed.
As the authors of the new study explain, compared with people without ADHD, those who have the condition are more likely to experience “inequality and adversity, including educational under-attainment, unemployment, financial problems, discrimination, contact with the criminal justice system, [and] homelessness.”
Sleep problems and alcohol and other substance misuse are also more likely in ADHD, as are a number of health conditions, including cardiovascular disease.
The latest study investigates how these factors might impact the lifespan of people with ADHD in the United Kingdom.
ADHD linked to 13 medical conditions
To investigate, the researchers had access to data from 30,039 people aged 18 or older with an ADHD diagnosis. The scientists compared each of these individuals with 10 participants without ADHD matched for age, sex, and other factors.
The scientists found that physical and mental health conditions were more common in those with ADHD. In fact, those with ADHD were more likely to be diagnosed with all 13 of the medical conditions they examined.
For instance, compared with people without ADHD, those with the condition were:
17% more likely to have diabetes among males, with females at even higher risk
27% more likely to have hypertension
more than twice as likely to have epilepsy
more than twice as likely to have depression
over 10 times more likely to have a personality disorder.
Importantly, they found that life expectancy for adults with ADHD relative to the general population was:
8.64 years shorter for females
6.78 years shorter for males.
This equates to an estimated life expectancy of 73.26 years for males with diagnosed ADHD, compared with 80.03 years for those without ADHD.
For female participants with diagnosed ADHD, life expectancy was an estimated 75.15 years, compared with 83.79 years for females without a diagnosis.
Additionally, Barry K. Herman, MD, told MNT that people with ADHD “have higher rates of substance abuse. They are also less likely to attend to their physical health conditions, to keep doctor appointments, and to adhere to treatment recommendations for their ADHD and other conditions.”
Speaking with MNT, Beata Lewis, MD, an adult, child, and adolescent psychiatrist at Mind Body Seven, also not involved in this study, explained the importance of the “ripple effect.” She explained:
“Challenges with work stability often lead to financial stress, which can limit access to good healthcare, time for exercise and leisure, and clean food. Similarly, difficulties maintaining relationships can lead to social isolation. All of these factors are linked to shorter lifespans.”
According to Lewis, long-term stress in ADHD plays a pivotal role in physical health, too.
“Think of it like keeping your engine revved too high for too long — the constant stress from managing ADHD symptoms can lead to inflammation and throw off stress hormones like cortisol,” she told us. “That can cause a host of other negative health consequences.”
Dr. Barry Herman explained how focusing on diagnosis may help mitigate some of this reduction in lifespan:
“Despite improvements in diagnosing ADHD in adulthood,” he told Medical News Today, “ADHD often goes undiagnosed, and the longer the delay in diagnosing and treating adults, the more likely they are to experience all of the risk factors that accompany the disorder.”
https://www.medicalnewstoday.com/articles/adhd-linked-to-astonishing-reduction-in-life-expectancy#How-to-support-people-in-your-life-with-ADHD
*
ending on beauty:
TO PICASSO'S SEATED BATHER
Talitha cumi – Maiden arise.
Arise from your dream of death,
and shake off the sand.
Throw away your
toe-pinching sandals,
your tanning lotion
with a sticky scent.
awaits like a real ocean,
with the moon
Talitha cumi – don’t wait.
Marry your life at once.
~ Oriana