Saturday, June 8, 2024

WHY WE ARE HAVING LESS SEX; CRUEL OPTIMISM; TEN PREDICTIONS FOR TEN YEARS FROM NOW; BANNING THE BURQA; HOW ACTORS REMEMBER THEIR LINES; THERE IS NO RECESSION; A MARSHALL PLAN FOR UKRAINE? EZRA: A MOVIE TO FORGET; THE METHUSELAH GENE; PEOPLE’S LAST WORDS

 An iris in Presby Memorial Iris Gardens, N.J.; Photo: D. Goska 

*

IN THE HEAVEN OF INDRA

hangs a curtain of pearls
threaded with infinite skill:
each pearl reflects every other pearl,
suspended in the moon gleam.

We too are interlaced
more than we dare believe.
We dream of heaven
because we have known hell.  

My mother, already unconscious,
lifted her arm and reached out
as if to lace her hand with the hand
of someone waiting on the other side.

Then she went into that love.

~ Oriana


Indra, the god of rain

*
SETTLER COLONIALISM AND CONRAD’S HEART OF DARKNESS

early American outpost

~ On the final day of April 1774, the Minerva was plying the waters of the Atlantic Ocean, headed for Boston. On board were London newspapers bearing the first news of England’s response to the Tea Party. Bostonians would soon learn that, as punishment, the Crown had decided to close the port, a step that would spark a civil war in the British empire. Hours before the Minerva’s lookout spied Boston Harbor, seven Native people crossed a smaller body of water as they traveled to a meeting that they hoped would prevent a similar breakdown in relations with colonial settlers. The party knew the trip put them in mortal danger and yet they went anyway; it’s what their family had done for more than four decades.

That family had negotiated relationships between Natives and colonists since the 1730s. So despite whatever apprehensions they might have had, they climbed into their canoes and let a tributary called Yellow Creek carry them about a mile down to the Ohio River. Paddling against the current, they soon reached their destination, a small log cabin that sat just up from the eastern shore on a piece of ground known as Baker’s Bottom.

The seven Natives—four men, two women, and a baby—were invited into the cabin’s front room, where a few colonists poured them drinks and allegedly launched into a conventional ceremony of keeping peace. Things were tense, but the alcohol helped. The two groups joked with one another. One of the colonial hosts had a British regimental uniform. As they all got a little tipsy, a Native man put on the red coat and paraded around the room. Someone proposed a shooting contest as a diversion and perhaps to further clear the air. Several went outside and identified a target. You go first, the white men told the Natives, and the latter fired, leaving themselves disarmed. The trap was sprung.

There were more than a dozen colonists hiding in the back room of the cabin. When the shooting started, the crouching, silent men leapt into action, running out into the yard, advancing on the unsuspecting Natives, who turned and sprinted for their canoes. Bullets took down one man, then another and another, spilling blood over the ground as acrid gun smoke rose into the air. An elderly woman also fell. 

A second Native woman clutched her baby and ran for the water’s edge. When she realized escape was impossible, she turned and begged the men to spare the child’s life. She’s the daughter of a white man, the mother pleaded about her child, hoping that might convince them. They paused and allowed her to hand over the baby before shooting her in the face.

Three more Natives, worried about what might be happening across the way, had come looking for their relations. They were caught in the middle of the river when the colonists fired away, killing two of them, too. One, a woman, survived, but was gravely wounded and barely made it to the other side.

In all, eight Native people would perish in the horror at Yellow Creek on April 30, 1774.

*
Recently, scholars looking to interpret encounters between Native peoples and colonists have relied on the concept of “settler colonialism.” Developed largely by Australian scholars writing about what happened to Indigenous peoples there, the concept of settler colonialism has been used to argue that European settlers in the Americas did not simply take land and resources from Indigenous peoples but eliminated them after the fact by destroying their history and discrediting their way of life.

Settler colonialism posits a “double dispossession”: Europeans did more than invade Native lands—they occupied them. For theorists of settler colonialism, incidents like Yellow Creek are all too easy to understand; what happened to the eight Natives at Baker’s Bottom was not only predictable, but an inevitable and unsurprising part of the conquest of North America.

At its core, settler colonialism has a logic, a thrust, a teleology, too. Just as an early historian’s tale had frontiersmen laying the foundation of American greatness through the trials of taming the West, historians who subscribe to settler colonialism tell us that the double dispossession—the destruction of Native America—was also inescapable.

“Invasion is a structure not an event,” Australian historian Patrick Wolfe, the leading theorist of settler colonialism, has argued.

The confidence that comes from knowing how things actually did turn out is an inherent hazard on all these interpretive paths. While we can acknowledge that white colonists possessed enormous power, not least in their sheer numbers, that alone does not explain imperial encounter. Either dismissing the horror of Yellow Creek or expecting it doesn’t tell us how that particular event happened, why it mattered, or what legacies it left.

*
[“The Frontier Thesis, also known as Turner's Thesis or American frontierism, is the argument advanced by historian Frederick Jackson Turner in 1893 that the settlement and colonization of the rugged American frontier was decisive in forming the culture of American democracy and distinguishing it from European nations.” ~ Wikipedia]

*
Perhaps the problem is we are “seeing” things wrong. Maybe it would help if we went back to the 1890s and started over again. In that decade there was another writer who developed his own vision of imperial encounter. The path he laid might lead toward a better understanding of what happened in the early American backcountry.

Józef Teodor Konrad Korzeniowski was born in 1857 in what is now Ukraine, to a family of Polish nationalists who had also lived through an imperial conquest. After Russian officials exiled his father for plotting against the czar, the young family fell apart. Both parents died of tuberculosis before Józef was eleven. He eventually fled to the sea, where he signed on with the merchant marine at age sixteen, and began to call himself Joseph Konrad.

Konrad’s years afloat coincided almost exactly with the “scramble for Africa” in the final quarter of the nineteenth century, which saw European powers race to colonize that continent. At age thirty-two, with “large black eyes,” “a determined chin,” and a “thick, well-trimmed, dark brown mustache,” Konrad spent part of 1890 in central Africa, on the Congo River, witnessing imperialism. What he encountered led him to see the process of taming the wilderness very differently from the way the early historians had.

When he escaped from the Congo—he almost didn’t survive, he was so ill—he decided to give up his career as a sailor and become a writer. He anglicized his name to Conrad, and his first novel, Almayer’s Folly, would appear two years after Frederick Turner’s address to the Columbian Exposition [extolling the greatness of pioneers]. In 1899, Blackwood’s Magazine serialized a story Conrad had written about his harrowing months in central Africa. Three years later, that novella would be published as Heart of Darkness. It would come to be regarded as an essential book about European imperialism and what the exploitation of Africa revealed about human nature.

Conrad’s vision of imperial encounter, on its face, would seem to please a theorist of settler colonialism. There is, after all, no heroic mission in Heart of Darkness; there is no triumph. “The conquest of the earth,” Conrad’s narrator says, “which mostly means the taking it away from those who have a different complexion or slightly flatter noses than ourselves, is not a pretty thing when you look into it too much.” Imperialism is not an intrepid adventure that brings out humanity’s best. It is a horror.

In Heart of Darkness, the reader doesn’t find sturdy pioneers but rather haunted white men surrounded by severed heads on pikes. But stopping there would miss much of Conrad’s point. It is not enough to simply describe the violence and exploitation of imperialism. The achievement of Conrad’s novella is not a “thesis” about imperialism but a description of how people caught up in it encountered and experienced the chaos it produced.

Frederick Turner’s thesis is perhaps the best American example of how a proposition about the past can shape the future. “No myth has become more powerful, more invoked by more presidents, than that of pioneers advancing across an endless meridian,” one scholar has recently written. Historians’ sweeping interpretations can become forces in their own right.

“Onward and then onward again.” Those who came after Turner transformed his thesis into an “ideology of limitlessness,” and used it to justify American power all over the world. Settler colonialism, by contrast, suggests that the United States should reconsider its global ambitions and come to terms with how it became a superpower in the first place: via the conquest of a continent and the people who lived in North America. Turner’s interpretation celebrates; settler colonialism condemns.

Heart of Darkness is useful because it undermines the historians’ project of confidently knowing or being able to account for what happened in confusing places like the ones Conrad saw in the Congo. When he reflected on the consequences of empire, Conrad saw no logic or teleology. He saw mayhem. There is no surety in Heart of Darkness; everything that happens in the novella suggests absurdity and bewilderment. 

Upon arriving in the Congo, the book’s narrator, Charles Marlow, finds mountains being blasted apart to make room for railroads that go nowhere. He watches “incomprehensible” European gunships lobbing shells at “enemies” who cannot be seen or even identified. He recoils as he stumbles upon a “grove of death,” an idyllic place right next to the deafening Congo rapids where Native people are worked to the edge of death because they have been judged “criminals,” though they have committed no offense.

A Belgian trading company has sent Marlow to find and bring home a man named Kurtz, and he scoffs when he first sees that Kurtz, the German word for “short,” is tall and lanky. Nothing is what it seems in this bizarre world. Marlow’s perception is marred by fog or smoke or flickering shadows, and he can’t believe the things his eyes see. As one literary scholar has stated, “moments of intense bewilderment occur so frequently in Conrad’s fiction that they seem less unusual than customary.” This is especially true of Heart of Darkness.

Marlow becomes an observer of this bewilderment, registering the strangeness around him. His purpose, as a character, is to document and explicate for the reader the senselessness of empire. We are supposed to be put ill at ease by the absurdity and cruelty of the imperial Congo. 

Bewilderment, for Marlow, is a state of mind; it is a way of existing in this upside-down world. But imperialism has even more insidious effects on those who fully engage in it.

The man Marlow has been sent to the Congo to find operates on a deeper level of bewilderment. Kurtz is a veteran of empire, and it has turned him into a chaos agent. Marlow is sent to retrieve Kurtz because the latter no longer seems to work for anyone except himself.

Kurtz has grasped the power of bewilderment and is now exploiting it to his own benefit. In Kurtz’s hands, it is no longer just a description of what imperial encounter does to the world but rather a weapon to be brandished. Bewilderment, for Kurtz, is a state of play; it is a way of making this upside-down world work to one’s advantage.

In the 1890s, Conrad faced imperialism’s bewildering effects. He watched how some Europeans tried to sow even more confusion to maximize their own rewards, a strategy that further compounded the violence. He tried to render it the best way he knew, through fiction, and the resulting sentences evoked the incomprehension that is inherent to imperial encounter.

https://lithub.com/invasion-is-a-structure-not-an-event-on-settler-colonialism-and-joseph-conrads-heart-of-darkness

James Henry:
The Czech writer Josef Škvorecký has interesting discussions of Conrad's attitude to colonialism — especially Russian colonialism — in The Engineer of Human Souls and an essay "Why the Harlequin? On Conrad's Heart of Darkness." in https://quod.lib.umich.edu/...

The Free State of the Congo wasn't actually an example of settler colonialism but a form of land piracy — its colonists, when they weren't inspired by curiosity like Kurtz and Marlow in different ways, were looters who went in and went out with what they could grab, if they survived.

*

Mary: COLONIZATION: THE DREAM OF AN EMPTY LAND

Fundamental to any theory of invasion or colonization is the colonizing or invading force's conviction that it is responding to an "empty," "undeveloped" "wilderness" open to exploration and exploitation. The invaders know other people live there and have for a long time, but they do not see these people as a true civilization, because their uses of the land and their relationship to the land are so different from those of the invaders that they become almost invisible — and are seen as illegitimate and wasteful. The indigenous population is seen as savages in the wilderness, not as people with their own forms of social organization and their own relationship — a very sophisticated relationship — with their environment.

Consider the idea of a "frontier" itself. It is a demarcation between the civilized/humanized landscape and the barbaric emptiness of the Wilderness. The story of the wonderfully resourceful and brave settlers on the frontier is the story of occupying and taming a wilderness that was seen as effectively "empty." No fences, no farms, no roads, no cities: nothing they recognized as truly occupied. The indigenous peoples become simply an obstacle preventing proper use and development of the land.

In most cases, here and in Australia and South America, the first stage of the conquest occurred almost without intention. The germs brought by the invaders found an indigenous population without experience and without immune defense. They died in droves...the toll was bad enough...and then the remnant population became nothing more than a problem to be managed — by violent hostilities, both by individuals and government, always pushing and pushing them out of the way of those ambitious "frontiersmen" with their farms and families, ranches and missions and railroads.

And besides that, both here and in Australia the governments of the "settlers" instituted forced assimilation, in the disastrous coercion of taking children into cruel and abusive "schools" that tried to crush and eradicate their native language and culture by force. These horrors continued well into the 20th century.

I agree that the worsening intrusion of the government into women's lives, eradicating rights, taking away choices, limiting control over their bodies and their lives, is a terrifying development. I understand some considering drastic measures to reassert control by seeking permanent means to avoid conception — choosing tubal ligation over the risk of forced birth, or even death, when medical care is withheld until a woman is "dying enough" that the surgeon won't face prosecution as an "abortionist." Avoiding sex under such circumstances is one possible way to have some control over not only your reproductive life, but your existence itself.

I wish this wasn't such a desperate choice, but these far-right forces have created a desperate, even horrific, situation that seems to have the momentum to get much worse.

Oriana:
Regarding the first part of your commentary, I’m reminded of someone saying that we are lucky that aliens from outer space have not contacted us. If they did, the outcome could be disastrous — based specifically on our history of colonization. Aliens capable of contacting us would likely be technologically superior, and just might decide to colonize the earth. And we know all too well from our own history that it would mean an absolute horror.

About women’s rights: The best and most succinct answer I ever encountered was this: The question is not whether a fetus is a person, but whether a woman is.

*

Joseph Milosch: WHY WE ARE HAVING FEWER CHILDREN

One of the most overlooked reasons that women are having fewer children is that the rise of fascism reduces a nation’s birthrate. As this oppressive philosophy grows, the national birthrate diminishes. By supporting childbearing with a stipend and appealing to patriotism, the tyrannical government tries to increase the fertility rate. These strategies were employed by Franco, Mussolini, Hitler, and today by Putin, Orban, and the Republican Party. One of the contributing factors is that oppressive governments adopt the harsh judgment of Hindu, Jewish, Christian, and Muslim nationalists.

The severe judgment leads to sanctified retribution, which makes vengeance one virtue among the other vices that the far right values. They brand these vices noble even though they only resemble nobility when compared to the deadly vices of emasculated righteousness. They often coincide with other vices that the American gun culture views as virtuous. The foundation of these tenets is the slogan “Might Makes Right” — a slogan that makes killing the preferred method of solving problems. Thus, secular and religious nationalism complement the gun culture in promoting solutions through the threat or act of violence.

Thus, cyberbullying, stalking, abusive memes, and trolling combine with the more aggressive intimidation: road rage, mass shootings, stabbings, and spousal abuse. These actions negatively affect personal relationships. No one is immune from viewing themselves badly when they hurt others. Therefore, the immorality of fascism, nationalism, and the gun culture encourages alcohol and drugs as a method of self-medication. However, there is no remedy for soothing one’s conscience, and this treatment produces aggressive, confrontational behavior, leading to violence in the streets and at home.

Today, we know that intimate partner violence and abuse increase mortality, injury, and disability among women and newborns. Research indicates that domestic violence leads to lower birth rates and more divorces. In the US, the Republican Party represents fascists, Christian nationalists, and the gun culture. Their solution to the lower birthrate that their political beliefs contribute to is to outlaw birth control and abortion and increase the cost of prenatal, postnatal, and child care. They believe these measures will force more women to marry, become pregnant, and stay home to care for their children.

Consequently, young women take practical steps to protect themselves. Some choose to live at home with their parents. Some substitute the internet for personal interaction. Others limit sexual encounters or become single mothers. All these forms of protection lower the birth rate by impeding the creation of a safe space for romance. There are many good reasons why people have sex less often.  However, the rise of fascism is the most important contributor to social decline in the United States, not feminism or the internet.

Oriana:

Although we may not see it in our lifetime, the global human population is on track to start shrinking. Birth rates in the so-called "developed" countries are way below replacement. Given how overpopulation destroys the environment, this shrinkage is a reason to celebrate -- up to a point.

The atmosphere of coercion when it comes to childbearing certainly seems to have the opposite effect. I hope there will come a time when the men in power (let's not kid ourselves, almost all heads of state are men) will finally acknowledge that having a child has become too expensive, and not just financially. Young mothers (married or not) should be actively helped, with volunteers and paid assistants coming in so that the mother isn't alone and overwhelmed.

Oddly enough, we already provide this kind of service to families through the hospice movement, which has been very successful. If it's "hospice at home," someone comes to the dying person's home every day to help in various ways. So we now know about "dying with dignity." What about becoming a new mother with dignity? But that would mean that women are valued, mothers are valued, children are valued -- really valued, not just given the proverbial lip service by politicians. No, mothers (and fathers) and children need REAL services.

It takes not just a village -- it takes the whole culture. 

*
PUTIN PLANS TO RAISE TAXES TO FINANCE THE WAR WITH UKRAINE


The Russian government is planning a major tax reform, hoping to generate nearly 3 trillion rubles (£26bn) per year.

Nick Trickett, a senior analyst at S&P Global Commodity Insights, told independent news publication The Moscow Times: "What really supercharged it was, frankly, financing the war."

The Russian economy has been hit in recent years — not just by the cost of the invasion — but also by the Western sanctions that followed.

Moreover, the country has suffered the loss of energy sales to Europe and, since mid-2022, has been forced to turn its attention to the Indian and Chinese markets, where it sells its oil and gas at a discount.

Analyzing the tax overhaul, political scientist Ilia Matveev argued Russia may have "reached its limits.”

The expert wrote on Facebook, as reported by the Kyiv Post: "The main measures include increasing VAT and corporation tax. Hence, the population and companies will continue to finance the war.

"The question is whether the defense industry can bolster the civilian sectors (true 'military Keynesianism'). But Russia has reached its limits in terms of reducing unemployment, utilizing free production capacities and general synergies between the civil and military sectors.

"The military sectors are already growing while the civilian sectors are stagnating."

This year, Russia has allocated six percent of its GDP to defense and military spending, heightening the need for the state to collect higher contributions.

The tax reform submitted by the Russian Finance Ministry last week would ditch the current income tax rates, which stand at 13 percent for people earning less than 5 million rubles per year (£43,848) and 15 percent for those earning more.

The corporate tax rate would rise from 20 percent to 25 percent according to this plan, and the Finance Ministry said revenues from this increase would be used to support business, technology and infrastructure projects. ~ Alice Scarsi, Quora

Jim Mazza:
Plus Ukraine is targeting oil refineries to the point that production has now been reduced by around 20%, which does not help Ruzzia’s economy. [Oriana: spelling Russia with two z's is meant to evoke the SS, likening the country to Nazi Germany.]

D Block:
It’s 2024; you can’t just steal another country’s backyard for the fun of it. Get a life, Putin, because yours is coming to an end; it’s your children who will suffer.

*
HOW ACTORS REMEMBER THEIR LINES; AND WHAT IT TELLS US ABOUT THE NATURE OF MEMORY   

After a recent theater performance, I remained in the audience as the actors assembled on stage to discuss the current play and the upcoming production that they were rehearsing. Because each actor had many lines to remember, my curiosity led me to ask a question they frequently hear: “How do you learn all of those lines?”

Actors face the demanding task of learning their lines with great precision, but they rarely do so by rote repetition. They did not, they said, sit down with a script and recite their lines until they knew them by heart. Repeating items over and over, called maintenance rehearsal, is not the most effective strategy for remembering. Instead, actors engage in elaborative rehearsal, focusing their attention on the meaning of the material and associating it with information they already know. Actors study the script, trying to understand their character and seeing how their lines relate to that character. In describing these elaborative processes, the actors assembled that evening offered sound advice for effective remembering.

Similarly, when psychologists Helga and Tony Noice surveyed actors on how they learn their lines, they found that actors search for meaning in the script, rather than memorizing lines. The actors imagine the character in each scene, adopt the character’s perspective, relate new material to the character’s background, and try to match the character’s mood. Script lines are carefully analyzed to understand the character’s motivation.

This deep understanding of a script is achieved by actors asking goal-directed questions, such as “Am I angry with her when I say this?” Later, during a performance, this deep understanding provides the context for the lines to be recalled naturally, rather than recited from a memorized text. In his book “Acting in Film,” actor Michael Caine described this process well:

You must be able to stand there not thinking of that line. You take it off the other actor’s face. Otherwise, for your next line, you’re not listening and not free to respond naturally, to act spontaneously.

Michael Caine

This same process of learning and remembering lines by deep understanding enabled a septuagenarian actor to recite all 10,565 lines of Milton’s epic poem, “Paradise Lost.” At the age of 58, John Basinger began studying this poem as a form of mental activity to accompany his physical activity at the gym, each time adding more lines to what he had already learned. 

Eight years later, he had committed the entire poem to memory, reciting it over three days. When I tested him at age 74, giving him randomly drawn couplets from the poem and asking him to recite the next ten lines, his recall was nearly flawless. Yet, he did not accomplish this feat through mindless repetition. In the course of studying the poem, he came to a deep understanding of Milton. Said Basinger:

During the incessant repetition of Milton’s words, I really began to listen to them, and every now and then as the poem began to take shape in my mind, an insight would come, an understanding, a delicious possibility.

In describing how they remember their lines, actors are telling us an important truth about memory — deep understanding promotes long-lasting memories.

A Memory Strategy for Everyone

Deep understanding involves focusing your attention on the underlying meaning of an item or event, and each of us can use this strategy to enhance everyday retention. In picking up an apple at the grocer’s, for example, you can look at its color and size, you can say its name, and you can think of its nutritional value and use in a favorite recipe. Focusing on these visual, acoustic, and conceptual aspects of the apple corresponds to shallow, moderate, and deep levels of processing, and the depth of processing that is devoted to an item or event affects its memorability.

Memory is typically enhanced when we engage in deep processing that provides meaning for an item or event, rather than shallow processing. Given a list of common nouns to read, people recall more words on a surprise memory test if they previously attended to the meaning of each word than if they focused on each word’s font or sound.

Deep, elaborative processing enhances understanding by relating something you are trying to learn to things you already know. Retention is enhanced because elaboration produces more meaningful associations than does shallow processing — links that can serve as potential cues for later remembering. For example, your ease of recalling the name of a specific dwarf in Walt Disney’s animated film, “Snow White and the Seven Dwarfs,” depends on the cue and its associated meaning:

Try to recall the name of the dwarf that begins with the letter B.

People often have a hard time coming up with the correct name with this cue because many common names begin with the letter B and all of them are wrong. Try it again with a more meaningful cue:

Recall the name of the dwarf whose name is synonymous with shyness.

If you know the Disney film, this time the answer is easy. Meaningful associations help us remember, and elaborative processing produces more semantic associations than does shallow processing. This is why the meaningful cue produces the name Bashful.

https://thereader.mitpress.mit.edu/how-actors-remember-their-lines/

Oriana: It seems that the more clues, the better. "Bashful" is a relatively rare word, and it helps to know that the first letter of the name is B.

*
WHY WE ARE HAVING LESS SEX

Americans are a stunningly lonely bunch. Our decline in friendships and social time began before the pandemic, but was badly exacerbated by it. We live more of our lives online and behind a screen, with many people now working from home, schoolchildren learning online and scores of us engaging with our peers on social media more than we do in real life.

At the same time, our in-person social connections have frayed. Fewer adults are getting married or living with a partner and fewer have children, which wouldn’t be a bad thing if those deep ties were being replicated by other relationships, but they’re not; Americans have fewer friends than we did a decade and a half ago, while our families shrink, too.

Attendance at places of worship and old-school social clubs is also way down — again, not a problem if there were other institutions taking the place of these old spaces of gathering and sometimes of philosophical and moral inquiry, but there aren’t. Yes, a few people have found groups like running clubs, soccer leagues and spiritual retreats, but none of these have been the social forces that old institutions were in terms of allowing for socialization across the lines of age and class (if not so much of belief or race).

There are good reasons why these institutions – church, marriage, the nuclear family – are on the decline. They often imposed suffocating and even bigoted rules, for women especially. Some have historically excluded entire classes of people: LGBTQ+ people most obviously, but many religious groups were also unwelcoming to African Americans or other minority groups.

Women still cannot hold top positions in many religious institutions, and so many have understandably rejected these misogynistic formal patriarchies. But something has been lost, too, in our collective move away from the communal. And while we are individually freer than ever – an obvious and unalloyed good – we are also profoundly lonely.

But just because religious attendance has declined doesn’t mean that religious ideas have died. And one particularly old-school one is coming up again in our increasingly atomized, antisocial culture: celibacy.

Sex itself is less common among the usually-raring-to-go young than it’s been in decades. Researchers have not agreed on why this is happening, but theories abound, including the fact that young people have less unstructured time and spend less of the time they do have simply hanging out with friends, which probably makes for fewer opportunities for sex.

My personal theory is that social decline plays a primary role and is helped along by more feminist, empowered young women looking at a pool of young men whose sexual mindsets have been shaped by years of online porn and video games. When things like sexual choking become accepted – a dangerous act that can cause permanent brain damage – it’s not difficult to grasp why young women who feel empowered to say no decide to do exactly that.

This is not exactly feminist progress. If young women do indeed feel freer to opt out of sex they don’t want, that’s great. But it’s not clear that’s actually what’s driving the current sexual decline. And most women desire sex, too, and deserve to have sex that feels good. That too many heterosexual men seemingly can’t or won’t deliver it is a problem.

This doesn’t mean everyone needs to be having sex all the time. American society is at once hypersexual and puritanical: We are a nation where sexually explicit advertising is pervasive and used to sell everything from cat food to drain cleaner, and also where more than a dozen states have banned abortion. With the stunning successes of the anti-abortion movement, and with threats to contraception access as well, heterosexual sex can feel more perilous than ever.

Many women (and some men) are also owning their choice to opt out of sex entirely or for a period, some perhaps fueled by their negative reactions to a recent campaign by the dating app Bumble that seemed to malign sexual abstinence (amid the backlash, Bumble apologized for the ads and took them down). Even uber-sexy actress Julia Fox announced her celibacy, tying it to our cultural and political moment. “I think, with the overturning of Roe v. Wade and our rights being stripped away from us, this is a way that I can take back the control,” she told Andy Cohen on Watch What Happens Live. “It just sucks that it has to be in that way, but I just don’t feel comfortable until things change.”

In a nation that polices women’s reproductive lives and where our social abilities seem to have declined along with our social connections, celibacy can be a thoroughly rational choice for many people, women especially. One problem, though, is that conservative groups and movements are pushing celibacy too – not to give women more control, but to offer us less.

Attacks on the evils of recreational sex are core to the right-wing efforts to stigmatize and even limit access to modern contraception. As Christopher Rufo, one of the architects of the panic over critical race theory, put it on social media, “the point of sex is to create children.” His implication: Sex for fun and pleasure alone is bad, and society should implement mechanisms to discourage or penalize it.

A recent op-ed in The New York Times also made a case for celibacy that used largely secular language but contained ideas you might hear in a Catholic church sermon — a less punishing vision than Rufo’s, but still one with a particular set of assumptions about human sexuality. And one has to wonder whether, in an increasingly lonely age, removing opportunities for sexual connection is really an ideal approach.

This view – that sex is for procreation alone, or that taking sex off the table is the only or best way to forge a genuine connection with another person – often stems from very misogynist roots. And it’s also true that for many individual women and men, taking sex off the table for a period of time may be the right choice. The trick is refusing to fall into sexist ideas about what sex is for or how women should be valued.

And beyond sex, one big job we collectively have in this lonely, atomized moment is connecting with each other more, not less. That doesn’t necessarily mean forging sexual connections, but it does mean forging social ones. 

It also means considering how to create and sustain institutions that will allow us to meet offline and deepen our real-world relationships instead of defaulting primarily to online options for staying in touch, dating and making friends. A better-connected and better-socialized population, I suspect, would be a much happier one – and likely a more sexually fulfilled one as well.

https://www.cnn.com/2024/06/04/opinions/sex-life-decline-social-media-filipovic/index.html

Oriana:

Note also that a surprising percentage of young adults still live with their parents. That means less privacy.

*


A MARSHALL PLAN FOR UKRAINE? THE LESSONS FROM GERMANY

Dnipro’s Central Bus Station after a Russian attack

Even the most optimistic forecasts for the future of the war in Ukraine acknowledge that the weeks and months ahead will be difficult; while the U.S.’s latest military aid package has begun to arrive, the long delay in its passage allowed Russia to make key gains on the battlefield. While President Volodymyr Zelensky said Friday that Ukraine had largely managed to stave off Russia’s latest offensive in the Kharkiv region so far, he warned it may be just the first of several waves.

With the war’s future as uncertain as ever and any semblance of a peace agreement still a distant prospect, discussions of how to approach the reconstruction of Ukraine’s economy may seem premature. According to economist Jan Pieter Krahnen, however, if the country wants to avoid past mistakes and make the most of postwar investments, it’s now “high time” for policymakers to begin strategizing.

As Krahnen and his co-authors argue in CEPR’s new report, Ukraine’s Reconstruction: Policy Options for Building an Effective Financial Architecture, Ukraine can’t simply count on foreign grants and one-time investments to rebuild its economy after the war. For one thing, it’s no longer clear that enough money is forthcoming to even begin this process.

Mariupol before the war and now

“In the early days of the war, it was not a big topic. ‘Of course this money will come,’ [we thought]. ‘We have the E.U., we have the U.S., we have Germany, France, etc.’ There were enough donors, in a way that seemed to add up to big sums,” Krahnen tells Meduza. He continues:

But as time goes on, the support has dwindled a bit. And despite my expectation that there will be significant amounts of money available, these amounts will by no means be enough to cover the investment outlays that will be necessary not only to rebuild physical infrastructure in the country that has been destroyed but also to bring the economy up to speed.

For this reason, according to Krahnen, the grants and loans that will be available to Ukraine after the war will not be enough to make the country “competitive with other European economies” or to “orient the whole Ukrainian economy in a stronger sense than before towards its Western neighbors.” Instead, accomplishing these goals will require what Krahnen and his colleagues refer to as “building back better,” a process that he says will entail restructuring the corporate sector as well as the agricultural, energy, and raw materials sectors, among others. And for this, he explains, the available grant funding will not suffice.

“So there is a demand to access the market for private investment,” he tells Meduza. “And this is a global market. In a certain sense — price-wise, information-wise — it’s an integrated global market where Ukraine is competing with the U.S. and other countries for funding.”

To attract the “big pension funds, big institutional investors, big banking groups, and asset managers that might put their money where they think the return is,” Krahnen says, Ukraine needs to make its market attractive enough. Broadly speaking, he explains, this will require “onboarding international money and channeling it to good firms.”

“You need to create a legal framework that is suitable for such an endeavor,” he says.

Lessons from Germany

Krahnen says that while the CEPR’s new report has been met with reservations from the National Bank of Ukraine and the European Bank for Reconstruction and Development (EBRD), its recommendations seem to have resonated with Germany’s Federal Finance Ministry. According to him, that’s no coincidence: the report includes an entire chapter dedicated to Germany’s experience with the Marshall Plan and the lessons it holds for Ukraine.

“The major insight of our chapter on the Marshall Plan is that it’s not the amount of money that made the Marshall Plan successful, because that wasn’t that much after all,” he says. “But the money […] was used in Germany to build institutions rather than directly funding some infrastructure project, [in which] you do it once and it’s gone.”

Krahnen and his colleagues propose the creation of a bank in Ukraine analogous to Germany’s KfW, which was established as part of the Marshall Plan and is now the country’s second-largest bank. “[KfW] does a lot of policy work,” he explains. “For instance, we have this energy transition — something that needs a lot of support money to be viable on the household level and on the firm level.”

While KfW helps finance projects like these, it’s not responsible for finding good projects or conducting due diligence; instead, “they just do refinancing of the banks for extending the loans that they give to the client.”

While Ukraine already has a number of state-owned banks, Krahnen says that what the CEPR is proposing is a “state-owned bank [that would] get rid of all the others”: “The bank we are suggesting would not be a commercial bank that competes with other commercial banks. […] What we suggest is a state-owned bank that channels international money into the banking sector as such and follows certain conditionalities, certain principles [governing] where this money should be spent,” he explains.

While the proposal might sound on its face like a push for more centralized control of Ukraine’s economy, Krahnen says that the opposite is true: privatizing the country’s existing banks and replacing them with a Tier 2 bank — an institution whose only clients are other banks — would help Ukraine develop a more market-based economy.

According to Krahnen, this system would avoid turning the new bank into a bottleneck of red tape by leaving client selection and monitoring responsibilities with the Tier 1, customer-facing banks, while minimizing corruption and perverse incentives through mechanisms such as co-lending.

“[This] means we only put Tier 2 bank money where the Tier 1 bank also puts their own money,” he says. “That’s a mechanism to ensure that the bad projects will not be targeted with this money.”

https://meduza.io/en/feature/2024/05/21/we-don-t-want-to-produce-the-next-oligarchs


*
WHY PUTIN CAN’T GIVE UP

Russian military motorcycle

If he gives up, the best he can hope for is to be ousted from power. The worst-case scenario is that he’ll be killed.

As a student of history, Putin knows this well.

In the twentieth century, Russia/Soviet Union was shaped by five major conflicts.

The Russo-Japanese War
The First World War
The Second World War
The Soviet Afghan War
The Chechen Wars

(I’m not including the Russian Civil War or the Cold War, because the Russian Civil War was an extension of the First World War, and the Cold War was not a shooting war.)

Russia won the Second World War and the Chechen Wars…eventually. The Russian strategy for fighting wars seems to be to throw troops against an enemy until that enemy is pacified. This gives you an insight into how they’re prosecuting the War in Ukraine.

In any event, victory in these wars afforded the Russians certain prestige. After the Second World War, they became one of the world’s two superpowers.

After pacifying the Chechens, Russia didn’t regain its superpower status, but it did enter a period of relative prosperity. Putin is hoping that victory in Ukraine will replicate these successes.

On the flipside, the Russians lost the Russo-Japanese War, the First World War, and the Soviet Afghan War. What were the consequences of these losses? Revolution.

The Russian people don’t mind going to war. But they hate losing. After being defeated in the Russo-Japanese War, the Tsar endured the 1905 Revolution which limited his powers and forced the creation of a State Duma. The Tsar survived, but he was mortally wounded. After Russia’s disastrous performance in the Great War, the Tsar was deposed and murdered.

The Soviet Afghan war was likewise disastrous. Turns out that invading a mountainous and desolate nation for ambiguous reasons and with no clear measure of success is not the most effective way of reviving your economy. The last Soviet troops left Afghanistan with their tails between their legs in 1989. Two years later their country collapsed.

Russians don’t mind war. But they hate losing. When they lose, they take it out on their leaders.

Putin understands this well. It’s for this reason he has been murdering any credible threat to his leadership.

As such, he will persist with this war and hope that the West loses patience or interest and stops funding Ukraine, whereupon Russia will throw wave after wave of conscripts and prisoners at Ukrainian targets until they are destroyed. Putin will then install a favorable leader in Kyiv, and Russia will enjoy a relative renaissance while Ukraine smolders.

If he can’t do this, he’s dead.  ~ Cain Markov, Quora


Some say 80% of urban Russia looks like that.

*
EZRA (THE MOVIE): IT’S THE FATHER WHO’S THE PROBLEM CHILD

Kidnapping a child, especially an autistic one, and going on the run with him is a lousy thing to do, right? Especially if it violates a restraining order, one that triggers an Amber alert. Movies condition us to root for the protagonist, and the new film Ezra presents its audience with a challenge: can you sympathize with a guy who abducts a child and takes off on an interstate run?


Ezra is a rare motion picture that conditions its audience to at least sympathize with a child kidnapper and see things from his point of view. And despite a few virtues in the film, I found that hard to get past.

The set-up of Ezra is that Bobby Cannavale plays Max, a struggling stand-up comedian and father of a 12-year-old autistic son named Ezra (William Fitzgerald). Divorced from Ezra’s mother (Cannavale’s real-life partner Rose Byrne), Max lives in New Jersey with his dad (Robert De Niro). A one-time elite chef, De Niro’s character now works as a doorman. After a series of disagreements about Ezra’s care lead to a fight and a restraining order, Max grabs Ezra and takes off on the road, soon with his ex-wife and dad in tow.

Despite circumstances that sound like the plot of a Criminal Minds episode, the trip gives father and son a chance to bond while visiting a series of offbeat characters (including an almost unrecognizable Rainn Wilson, settling into a new, promising career as a grizzled character actor). The destination is Los Angeles, where Max has a shot at an appearance on Jimmy Kimmel Live. What’s so mind-boggling about this film is that if not for the kidnapping/restraining order/Amber Alert part, this same story would be touching and affecting.

The film is one of the better depictions of teen autism in recent memory, and Ezra’s story is the most compelling part of the film. Its most emotionally resonant moment is when Ezra physically connects with someone else. Fitzgerald, the actor who plays Ezra, is himself on the spectrum, and he’s fantastic, while Cannavale and De Niro both do substantial work. Ezra was directed by actor Tony Goldwyn—who 25 years ago directed the underseen gem A Walk on the Moon—and written by Tony Spiridakis. It’s based in part on Spiridakis’ own experiences with divorce and parenting an autistic son.

But a lot doesn’t add up. Ezra’s reason for running into the street from his mother’s house, the inciting incident of the entire plot, is a ridiculous misunderstanding. Max isn’t an exceptionally talented comic; like many fictional comedians, he spends most of his time on stage monologuing about whatever’s happening in the movie’s plot rather than telling jokes. And shouldn’t he know he’ll be arrested as soon as he shows up at Kimmel’s studio? (The film also conveys the unintended message that the Amber Alert system, as currently constructed, is highly ineffective.)

That’s not enough, though, to make up for the film’s cavalier attitude towards interstate child kidnapping. And that’s even before De Niro gets one of those cathartic speeches that long-distant fathers in movies often give, except this one amounts to, “I apologize for everything I ever did wrong to you—and by the way, you were right to kidnap your son.”

The conclusion is laughable, grafting on a happy and mostly consequence-free ending that isn’t deserved.

https://www.splicetoday.com/moving-pictures/ezra-s-even

from another source:

Perhaps the most well-meaning family drama of the year, “Ezra” does everything quote-unquote right. The film’s call sheet is crammed with the biggest names: Robert de Niro, Whoopi Goldberg, Rose Byrne, Bobby Cannavale, Rainn Wilson, Tony Goldwyn and Vera Farmiga. Boiled down, the story is about redemption, sacrifice and forgiveness. There’s a talented child actor who is not only well-positioned in the role, but utterly screen-stealing, even when sparring with Hollywood legends.

All parts do not equal a whole, however, and “Ezra” is missing that magic pixie dust that pushes a film over the edge from merely cohesive to impactful. In addition to starring, Goldwyn directs this little indie-film-that-could, though the film truly hinges on newcomer William A. Fitzgerald, a neurodivergent teen who nails the title role. Ezra is a child with autism who is having a particularly hard time at school and at home. He’s sensitive to many things, and his frequent outbursts have his divorced parents and school administrators concerned.

Cannavale stars as his father Max, an aspiring stand-up comedian who attempts to make his way up in the industry while also living with his aging father (de Niro). His ex-wife Jenna (Byrne) is frustrated by his blasé attitude towards Ezra’s increasing instability. Their relationship, while cordial, was an obvious casualty of Max’s chaotic career path and his own personality defects. Cannavale and Byrne are together in real life, which does give their on-screen relationship an interesting layer.

When he overhears a threatening, albeit sarcastic, comment made by Jenna’s lawyer boyfriend (Goldwyn), Ezra flees the house, and, on his way to his grandfather’s home, is hit by a car. Concerned for his mental and physical well-being, doctors recommend heavy drugs to control the child, a course of action Max is greatly against.

Eager to protect his son, Max kidnaps Ezra and they embark on a father-son bonding road trip to Los Angeles, where Max is also up for a big break: performing on Jimmy Kimmel. There are pit stops along the way, like his old friend Nick (Wilson), a fellow comedian, and Grace (Farmiga), a childhood friend living in rural Michigan who helps the traveling duo with a replacement car. When an Amber Alert goes out for Ezra, the end of their little cross-country road trip turns ominous. Max, attempting to do the right thing by his son, in turn breeds more havoc and destruction, but also maybe a bit more understanding and grace.

The film harbors a weak thesis for an important story. It seeks to portray and represent the struggles, as well as the beauty, of children like Ezra. Unfortunately, the nuances of living with autism or living with someone diagnosed with autism are overshadowed by an over-wrought leading character. Max, whose journey the audience is forced to endure more so than any other of the film’s characters, is an utterly unlikeable, pitiable, self-centered, faux-altruistic specimen whose incessant war cries of self-sacrifice for his son ring hollow and distract from the many other important aspects of the film’s message.

The story itself is admirable, but its telling is undercooked. “When in doubt, take a road trip” seems to be the motto by which "Ezra," like many other family dramas before it, operates. Unlike previous successful road trip dramedies (“Little Miss Sunshine” naturally comes to mind), and despite the fact that one of the film’s leads is meant to be a comedian by trade, there is very little comedic relief to offer viewers who will be begging for levity by the time the destination is reached.

Max isn’t the only frustrating character. Byrne’s role is a one-trick pony, a hyperbole of the worried mother trope, jumping over fire to find her son, even if he is in the safe hands of his own father. The witch hunt set upon Max is another sore point in the script, a seemingly over-blown development that only further distances viewers from the film’s reality. De Niro, a legend, is also underutilized, and Wilson and Farmiga are hardly more than AI-generated sidekicks. Worst of all, the film’s murky moral is buried by its lackluster plot which hits its stops on the road trip without true purpose or intention.


https://seattlerefined.com/lifestyle/review-ezra-de-niro-goldberg-rose-byrne-bobby-cannavle-rainn-wilson-tony-goldwyn-autism-film-movie-theater

Oriana:

I have only one thing to add. This is supposed to be a movie about an autistic child, but it’s the father, Max, who emerges as the real problem child — irrational, belligerent, getting into physical fights (he even assaults a physician), straying off the highway into the woods and abandoning his car, and more. The real protagonist in this movie is the father, not the son, and while the autistic child has some surprising charm, Max has none.

“You can go now,” is one of the phrases used by Ezra, who is surprisingly verbal and high-functioning. It can be taken as advice to a potential viewer: Skip this movie. "You can go now" — to see another movie or do "something entirely different."

Now, I can imagine a different movie where the grandfather (De Niro) is for some reason raising an autistic grandchild (these days a lot of grandparents are doing the actual child raising). The grandfather manages to buy a farm, and this proves wonderfully therapeutic for both of them. The boy loves the horses; De Niro finds companionship and perhaps a late-life romance among the farming community. Oh, there'd have to be some sadness too . . .

Oh, one more thing. Do you remember Rain Man, another road trip movie involving an autistic idiot savant (played by Dustin Hoffman), kidnapped by his younger brother (Tom Cruise)? Now that was a masterpiece. Perhaps that alone dooms other movies built on the premise of a road trip with an autistic person to fall short of that kind of cinematic brilliance.

What is the message of Ezra? It's safest to say that it's the same as the message of Rain Man: autistics do best in schools or institutions designed for their needs, staffed with trained specialists. And since they respond well to animals, a countryside setting might also be helpful. But then a lot of children would thrive if given a great school in the countryside — especially without a trouble-making parent trying to interfere.

*
MOUNT RORAIMA

In the northeast of South America there lies a sprawling geological formation known as the Guiana Shield, an expanse of 1.7-billion-year-old rock. Long ago, it was a high plateau of sedimentary stone, but most of it has since been eroded away.

What remains of this ancient highland are immense blocks of sandstone, flattened at the top. The local Pemon people call them tepuis, meaning “houses of the gods”, and there are hundreds of them.

Most tepuis are scattered across Venezuela, but they extend into neighboring Guyana and Brazil too. Outside of the Andes, these tabletop mountains are among the tallest landforms in South America.

Among them is Mount Roraima, the inspiration for Arthur Conan Doyle’s The Lost World.


Towering high above the forests and savannahs, surrounded by sheer cliffs on all sides, tepuis are just as remote and inaccessible as islands in the sea; they are effectively islands in the sky. And, just like marine islands, they have a unique array of wildlife which evolved in a vacuum, cut off from the “mainland”.

Erosion has created many other spectacular features on tepuis. There are innumerable caverns beneath the tabletops, most of them unexplored. As they are formed of quartzite, an incredibly hard rock, they must have taken hundreds of millions of years to form, and may be the oldest caves on Earth.

Some of the chambers are massive and one — Abismo Guy Collet — is the deepest cave in all of South America, at a depth of 671 meters. It’s also the world’s deepest quartzite cave.

Inside the caverns, there are subterranean lakes and rivers, bizarre rock formations, ancient microbial growths, and wildlife such as blind fish, scorpions, spiders and giant cave crickets.

On the bare quartzite pavements and in the stone forests, there isn’t much real estate for plant life, but a handful of hardy species eke out an existence in the cracks.

Some plant species here are only found on one particular tepui, and about a third of the total flora is endemic to the Guiana Shield. Bromeliads, close relatives of the pineapple, are especially common.


Bromeliads in the Guiana Shield

The reddish color of their leaves is due to anthocyanin pigments, which help protect them against extremes of weather. The red-brown hue of the vegetation on some tepuis can even be seen from space.

The tepui parrotlet, which commutes between mountaintops in flocks of several hundred:

There is doubtless much left to discover here; many tepuis have long been near-impossible to access even by helicopter, but they're beginning to reveal their secrets.

~ Gary Meaney, Quora

*
THE PROMINENT REASON FOR THE DEFEAT OF THE OTTOMAN EMPIRE IN WWI

Islam.

There are dozens of answers to this question and not one places the blame squarely on Islam. The primary reasons cited by Quorans appear to be corruption, poor geopolitical decisions, inferior technology, unlucky warfare outcomes, and imperial meddling.

These are valid reasons, but they didn't come out of a vacuum. The regressive, inhibiting malaise of Islam, set against a world throwing itself into mechanization, modernity, and the Industrial Revolution, was a root cause.

Dismissing the clear role of Islam here smacks of wokery/political correctness.

Islam comparatively impoverished the Ottoman Empire throughout the 18th/19th centuries by:

1) Inheritance laws. Guided by the Quran, these emphasized egalitarian distribution, hindering the concentration of capital necessary for business development.

2) Individual/Company conflation. When a company owner died, his company was dissolved with him. Assets were liberally willed (see Inheritance laws), so it became a logistical nightmare to re-form a company. Long-established companies became rare.

3) Polygamy. This diluted inheritance among numerous heirs, further complicating the continuity of family businesses.

4) Prohibition of interest-bearing loans. This impeded the growth of a robust banking system, obstructing entrepreneurship.

5) Collaboration with Western companies. Islamic law placed restrictions on business practices.

6) Religious authorities/Imams. Progress was impeded by mercurial bans on innovation. Famous examples were the Ottoman Empire’s prohibition of the printing press (for over 200 years) and of bills of exchange (which allowed European merchants to travel without carrying cash).

7) Contracts. Islamic judges barely recognized their existence, preferring oral testimony. Judicial decisions therefore rested on memory, were open to deceit, and dissuaded practically all foreign investment.

8) Testimony. A woman's testimony was worth half that of a man. Given widespread oral testimony (see Contracts), this further reduced the integrity of the judiciary.

9) Focus upon Islamic oral teachings rather than scientific reading and thinking. By 1914, it’s estimated that only 5-10% of the empire’s inhabitants could read.

10) Widespread resistance to reform. Most inhabitants believed Mohammed was the final prophet, Islam the final faith, and that there could be no argument with the Quran. Progress became nonexistent.

11) Blasphemy. Islamic law provided for severe punishment for blasphemy. This seriously curtailed freedom of speech, dissent and collegiate collaboration.

12) Lack of uniformity. Islamic law varies significantly across its various schools of thought, leading to inconsistencies, duplication of effort and personal disillusionment.

  ~ David Haigh, Quora

Afghan women in a mosque

Jeff Dee:
The court system was intentionally biased. It was essentially impossible for a woman to successfully sue a man, or for a non-Muslim to sue a Muslim.

This meant that people would preferentially contract with Jews and Christians, because if things went wrong they could recover in court.

David Haigh:
The Ottoman Empire was so weakened by hundreds of years of technological lethargy, wastefulness, and misdirection that it was decisively beaten in WWI.

Douglas Wat:
Polygamy created a large class of frustrated men who caused societal trouble.

Matthew Hofmeister:
If the answer is as simple as Islam, you'd have to explain why China was also behind at this time and why Russia was more corrupt and performed comparatively poorly in WWI as well. You could even ask why Austria-Hungary performed so poorly. The answer isn't Islam for those nations.

I think it is less that Islam (or Buddhism, Hinduism, or Shinto) held these societies back. Rather, some countries in Western Europe took huge strides forward over a couple of centuries. Anyone not in that group was at a huge disadvantage.

*
TEN BIG THINGS THAT ARE LIKELY TO HAPPEN DURING THE NEXT TEN YEARS

~
Less than 7 percent of the world will be under the World Bank’s International Poverty Line, now $2.15 a day


Progress against global poverty has been extremely rapid in recent decades. From 1990 to 2019, the extreme poverty rate, as measured by the World Bank, fell from 38 percent to 8.9 percent globally.

The decline was especially fast in East Asia (in particular China) and South Asia (in particular India). In East Asia, the rate fell from 65.4 percent in 1990, the highest of any world region, to a measly 1.2 percent in 2019. The economic disruptions of the pandemic were a major setback across the globe, but poverty reduction seems to finally be getting back on track, at least in South Asia.

The bad news is that today most poor people live in sub-Saharan Africa, and while poverty there fell in recent decades, it fell more slowly than in Asia. Weak, coup-prone states, persistent infectious disease burdens, and recurrent war and violence have made it harder for African countries to build large export-based industries like China’s, or robust service sectors like India’s.

The most recent projections I’ve seen by World Bank researchers suggest that the world’s extreme poverty rate will fall to 6.8 percent by 2030. At that pace of progress, the rate should still be over 6 percent by 2034.

The final steps toward eliminating extreme poverty will be the hardest, and progress will likely be slower than it’s been in the past couple of decades. It’s important also to remember that the $2.15 per day poverty line used by the World Bank is very low. The US poverty line is closer to $30 per day; by that standard, the vast majority of the world will still live in poverty in 2034. But I have faith we’ll still make progress.

A level 4 autonomous vehicle will be for sale in the US to ordinary customers

The autonomy metric developed by SAE (the organization formerly known as the Society of Automotive Engineers) measures on a 0 to 5 scale how capable cars are at self-driving. Zero means the driving is totally manual; 1 means that one element, like the speed of the car, is automated, as in the adaptive cruise control systems common in new cars as of 2024; 2 means that multiple elements, like both speed and steering, are controlled by computer, as in Tesla Autopilot (which, to be clear, is not actually an autopilot).

Level 4 is where things start to get interesting. It signifies a car that can self-drive all the time, provided it’s only used in a certain type of environment (say, only in a city, or only on paved roads). This is where the technology goes from a convenience to a game-changer, and where scenarios like pushing a button to have your car find a parking spot by itself, or commuting to work while typing at a laptop and not even looking through the front windshield, start to become possible.

We already have level 4 vehicles. Sort of. On a trial basis. As of 2023, Waymo, the self-driving division of Alphabet (formerly Google) and by far the industry leader, was operating level 4 taxis in San Francisco and Phoenix. It has expanded to Los Angeles this year and is opening in Austin soon.

Until very recently, Waymo has limited itself to surface streets, and its vehicles are pointedly not for sale. They’re for rent, and even then in a small handful of places. I would very much like to use a self-driving car on my periodic 500-mile road trips from DC to New Hampshire, but alas, we’re still a ways off.

I’m guessing that by 2034, we’ll be there. Waymo and other companies are gathering massive amounts of video, radar, and other data they can use to train cars to operate in diverse environments. Meanwhile the economic advantages of self-driving vehicles — in terms of labor costs, parking costs, idleness, and more — are enormous.

The fact that Waymo can run vehicles reliably in several cities — albeit cities that don’t have much experience with snow and similar adverse weather — is a very encouraging sign. There’s a gap between that and hopping into my 2035 model year Honda Civic, pulling up an address on my phone, and having the Civic chauffeur me there like a sultan. But the gap is narrowing, fast.

World life expectancy will exceed 75

Here, I am mirroring the UN’s World Population Prospects report, which estimates that in 2034, global life expectancy at birth will reach 75.2 years. That number will mask considerable inequality, with North America at 81.7 and Africa at 65.6.

But as with global extreme poverty, this projection also relies on decades of past progress. When UN population data began in 1950, the world had a life expectancy of 46.5 years. Over the next thirty years, it rose to over 60. In 2021, in the midst of a global pandemic pushing the number down, it was 71.

As the world gets better at conquering infectious diseases like tuberculosis, HIV/AIDS, and malaria, and as countries grow richer and their residents better able to afford medical treatment and healthier living conditions, lifespans are growing.

As with poverty, progress in life expectancy is becoming more gradual. In the 1960s alone, global lifespans shot up by over 8 years. We’ve picked enough of the low-hanging fruit that we’re not going to see progress that fast again. But we still have a ways to go in preventing deaths from easily treatable diseases, which means that global life expectancies have not stopped their rise yet.

US adult obesity rates will decline 5 percentage points

In its 2017–2018 survey, the CDC’s authoritative National Health and Nutrition Examination Survey found that the US adult obesity rate was 42.4 percent. (Obesity in the survey means having a body mass index — a metric that has come in for its share of criticism — of 30 or more.) While the agency could not finish its scheduled 2019–2020 survey because of Covid-19, it took the data it had collected by March of 2020 and combined it with the previous cycle to update its estimates.

They found that the obesity rate among US adults over 20 had fallen to 41.9 percent as of that terrible spring — a time before the market for Ozempic and a promising new class of anti-obesity medications had begun to grow exponentially.

There is still plenty of uncertainty about what the long-term health effects of these drugs will be. The future holds too many unknowns: How many people get prescriptions? Will patients stick with the drugs? Will they be able to afford them?

But Wegovy et al. are about to become even more commonplace than they already are, now that Medicare has decided to cover them for heart conditions, which about four in 10 Medicare patients have. Millions of prescriptions will be written.

New drugs are also in the works: Ozempic was a breakthrough because it was “just” a weekly injection (and very effective). Pill forms of semaglutide are now being tested for weight loss. Even if some people do stop taking their medicine, one large new study (not yet peer-reviewed) suggests many of them may be able to keep the weight off.

There have been momentary dips in obesity before, while the long-term trend has stayed stubbornly upward. In 2000, the adult obesity rate was 30.5 percent. Now, it’s north of 40 percent.

But extrapolate the improvement from 2017–2018 to 2019–2020 and the United States could potentially make up a lot of that ground over the next decade. And this time, we have the most effective anti-obesity medications ever developed.

The US will slaughter 11.5 billion or more land animals per year (70 percent)

In 2022, the US hit a grim milestone: It was the first year the country had slaughtered more than 10 billion land animals for food. I predict that in 2033 it will slaughter 11.5 billion or more land animals.

It’s not so much my prediction — earlier this year, the US Department of Agriculture published its meat, dairy, and egg production projections through 2033. The agency didn’t project the number of animals that’ll be raised for food each year but rather the amount of meat, eggs, and dairy that will be produced by weight, which I converted into the number of animals that’ll be slaughtered.
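The article doesn't show that conversion, but the general idea is straightforward: divide each category's projected production weight by a typical yield per animal. Here is a minimal sketch in Python; the production volumes and per-animal yields below are illustrative placeholders, not the USDA's projections or the author's actual figures.

```python
# Hypothetical weight-to-headcount conversion.
# All numbers below are illustrative placeholders, NOT USDA projections.
projected_production_lbs = {
    "chickens": 50_000_000_000,  # placeholder poultry production by weight
    "hogs":     28_000_000_000,  # placeholder pork production by weight
    "cattle":   27_000_000_000,  # placeholder beef production by weight
}

avg_yield_per_animal_lbs = {
    "chickens": 4.5,   # placeholder yield per bird
    "hogs":     215,   # placeholder yield per hog
    "cattle":   880,   # placeholder yield per head of cattle
}

# Estimated number of animals = total weight / weight per animal
animals = {
    species: weight / avg_yield_per_animal_lbs[species]
    for species, weight in projected_production_lbs.items()
}

for species, count in animals.items():
    print(f"{species}: ~{count:,.0f} animals")
```

With placeholder numbers like these, poultry dominates the headcount by roughly two orders of magnitude, which is why the projected total is driven almost entirely by chicken demand.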

I’m putting my confidence level at 70 percent because there are a number of unpredictable domestic and international factors that could influence the final number: economic recessions, zoonotic diseases (like the ongoing bird flu outbreaks), consumer trends, the cost of farming inputs (like fertilizer and animal feed), and agricultural and trade policy.

But the most important factor, by far, will be demand for poultry, as chickens comprise over 90 percent of the total number of animals raised for food in the US. And that’s expected to continue to rise well into the next decade — and half-century.

The USDA also projects that US per capita consumption of red and white meat combined will rise from 226.8 pounds in 2022 to 235.4 pounds in 2033 — a near 4 percent increase.

Given the enormous ecological and climate impact of meat and dairy production, this is all headed in the opposite direction of what climate scientists say we must do: drastically reduce livestock numbers and transition to a plant-rich food system.

If the US were to align its agricultural policy with its climate policy, we’d move in that direction instead. I’m not a betting man, but if I were — given the power of the meat lobby — I don’t expect that to happen by 2034.

Implantable brain-computer interfaces will be used for human augmentation, not just medical need

Over the next decade, cutting-edge technology will open the floodgates for human augmentation, which could fundamentally change what it means to be human. From CRISPR to biohacking, there are a growing number of methods people might want to use to enhance themselves. 

But I want to make a prediction specifically about brain-computer interfaces (BCIs): By 2034, someone will have an implantable BCI not to help with paralysis or disease, but simply because they want to be a smarter or faster version of themselves.

So far, the FDA has greenlighted brain chips for people with medical conditions like paralysis, allowing them to type or text with just their thoughts. But it’s no secret that some of the makers of these chips — like Elon Musk’s Neuralink — want to go further. Musk has said that the ultimate goal is “to achieve a symbiosis with artificial intelligence.” And Neuralink is explicit about its dual mission: to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”

BCIs exist on a spectrum of invasiveness, with Neuralink at the most extreme end (the device is implanted directly into the brain) and wearables on the other end (they use EEG sensors and other tech that sits on the scalp or skin, so there’s no need for surgery; some are already on the consumer market).

Between these two ends of the spectrum, there are BCIs that are less invasive than Neuralink’s but are nevertheless implantable — they may go in the skull, but not in the brain itself. This type, not a Neuralink device, will probably be used for enhancement purposes first, and I will consider my prediction right if this type is in use by 2034.

I know even that may seem extreme. And I wouldn’t hazard such a guess in the realm of, say, CRISPR, because when it comes to changing our genetics the scientific community has been appropriately cautious in calling for moratoria. But if the ruling mantra in biomedicine is “first, do no harm,” the ruling mantra in tech is “move fast and break things.” And with companies like Meta and Apple also exploring brain-reading technology, appetite is growing: The global BCI market is expected to top $8 billion within the next 8 years. So I think this is a solid possibility.

A nuclear weapon will be used (20 percent probability)

My Future Perfect colleague Dylan Matthews, who is the best forecaster I know, wrote recently that one key to accurately predicting future events is to first establish a “base rate.” A base rate is the rate at which some event has been known to happen in the past, which is a useful starting point for estimating how likely it is to happen in the future.

The base rate for nuclear weapon use in war in a given year is, thankfully, very low. (If it weren’t, I likely wouldn’t be here writing this.) Nuclear weapons have been available to at least one country since the US successfully tested the first atomic bomb at Trinity Site in New Mexico on July 16, 1945. In all the time since, an atomic weapon has been used in war twice — in 1945, by the US at Hiroshima and Nagasaki.

That’s one year out of 79, which equates to a base rate of 1.2 percent. Extend that over the next decade, and the base rate grows to a little more than 11 percent. And even that’s likely putting it high — for the last 78 out of those 79 years, nuclear weapons have been used in war precisely zero times, which should make us more confident that we’ll get through the next decade nuclear-free.
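For readers who want to reproduce the arithmetic, here is a minimal sketch in Python using only the figures quoted above; the small differences from the article's percentages come from rounding the annual rate before extending it over a decade.

```python
# Reproducing the base-rate arithmetic described above.
years_total = 79        # years since the first nuclear test in 1945
years_with_use = 1      # only 1945 saw nuclear weapons used in war

annual_rate = years_with_use / years_total
# ~0.013, which the article rounds down to 1.2 percent

# Chance of at least one use in ten years, treating each year as independent:
decade_rate = 1 - (1 - annual_rate) ** 10
# ~0.12, in the same ballpark as the article's "little more than 11 percent"

print(f"annual base rate:   {annual_rate:.1%}")
print(f"ten-year base rate: {decade_rate:.1%}")
```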

So why do I think there’s a 1 in 5 chance we’ll see a nuclear weapon deployed in war over the next decade? It’s based on my reading of trends in the future of war, arms control, and international relations — all of which are moving in increasingly dangerous directions.

War between states, which had basically ceased over the past several decades, is back once more. Russia’s invasion of Ukraine in 2022 made nuclear weapons a live issue again, in part because Vladimir Putin was not shy about making vague atomic threats, but also because nuclear weapons ultimately defined the contours of the conflict.

There’s a limit to how much NATO can defend Ukraine precisely because Putin controls the single biggest nuclear arsenal in the world. And Putin himself is constrained — though how much, he purposefully keeps ambiguous — by NATO’s own nuclear weapons, and its stated policy of treating an attack on one member state as an attack on all. So far the balance has held, but that’s a risky place to be.

And even as the war in Ukraine raises the atomic temperature, the arms control treaties that have successfully reduced the nuclear threat since the end of the Cold War are unraveling one by one. The New START treaty, which constrains the deployed strategic nuclear arsenals of both the US and Russia, is currently set to expire in 2026.


The number of people forcibly displaced around the world according to the UN will exceed 200 million by 2034 (70 percent)

In May 2022, the UN refugee agency reported that the number of people forcibly displaced from their homes globally had passed 100 million for the first time in history. Since then, the number has only grown, reaching 114 million people as of early 2024, including those displaced within their countries and those forced to flee across international borders.

The causes are manifold, with war as a major driver. More than seven in 10 international refugees come from just five countries — Syria, Ukraine, Venezuela, Afghanistan, and South Sudan — most of which are embroiled in some kind of serious conflict. For countries like Cuba, severe economic disruption can force people to flee in search of a better life, or just survival.

Environmental factors — including climate-driven droughts, famines, and wildfires — are driving others to leave their homes in places like southern Africa. And human rights abuses, such as those perpetrated against the Rohingya minority in Myanmar, are another factor.

 

Predicting that the number of displaced people will reach 200 million between now and 2034 means assuming that the total will rise at least 75 percent. I think that’s a more than reasonable bet. For one thing, while global population growth has been slowing, it’s still increasing at a little below 1 percent per year, which means the world will likely have hundreds of millions more people by 2034. More importantly, the bulk of that growth will be in places like sub-Saharan Africa — a region that is already highly vulnerable to climate disasters and political disruption. That means more people will likely be in harm’s way, even as that harm continues to grow.
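As a quick check on that 75 percent figure, here is the back-of-the-envelope arithmetic in Python, using the 114 million count cited above:

```python
current_displaced = 114_000_000    # UNHCR count cited above (early 2024)
predicted_displaced = 200_000_000  # the prediction for 2034

required_growth = predicted_displaced / current_displaced - 1
print(f"required growth: {required_growth:.0%}")  # prints "required growth: 75%"
```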

Nor do I expect the rich nations that displaced people are trying to reach to suddenly become more welcoming — even if, precisely because fertility rates are falling so rapidly, it would likely be in their long-term economic interest to do so. The very fact that President Joe Biden is considering stringent border policies that sound like something out of the Trump playbook underscores just how toxic migration has become as a political issue. I don’t see that changing.

Lastly, another driving factor is actually a sign of global success. It sounds paradoxical, but it’s true — economic growth in poor nations can increase outward migration. As even the poorest people in the world become richer than they used to be, more people will have the means to escape their countries.

The last decade has seen a startling increase in displacement and migration. I’m willing to bet that the next decade will see even more.

Global energy emissions will peak

Finally, some actual good news.

With the exception of 2020, when pandemic lockdowns froze economic activity, energy-related greenhouse gas emissions have mostly gone in one direction: up. Last year, global energy emissions reached a record high of 37.4 billion metric tons of CO2.

But the peak is in sight, and it may be coming sooner than you think. The International Energy Agency (IEA) has predicted that energy emissions will top out in 2025, a forecast echoed by the energy intelligence company Rystad. Behind those estimates are predictions that global coal consumption — which has been at a record level, driven by demand from developing countries — will peak in the next few years, as will demand for oil.    

Together those two fuels contribute more than 50 percent of global energy emissions. While demand for natural gas continues to grow, as lower-carbon gas substitutes for coal, it’s still a net win for emissions. Add that to the rapid increase in solar and wind energy, along with renewed growth in nuclear energy, and there’s a more than reasonable case that the multi-decade era of growing energy emissions could finally be coming to an end.

Of course, nothing is guaranteed. As artificial intelligence ramps up, the demand for electricity to power all those generative searches and data centers could skyrocket. Already, after years of relatively flat growth, US electricity demand is projected to spike, due in large part to AI demand. And there are other potential obstacles; should the EV revolution falter, oil demand could continue to rise for years into the future. The renewable energy revolution in the US will require an equally revolutionary shift in how the country permits energy projects — and that’s far from guaranteed.

Nonetheless, I’m willing to bet that energy emissions will peak, and soon. That doesn’t mean we’ve won the war against climate change — not by a long shot. Emissions will have to peak soon and then drop, rapidly, to avert some of the more dangerous scenarios around climate change. Still, for as long as I’ve been alive, the indicators around climate change have only gone in one direction: worse. Don’t underestimate the psychological benefit of finally turning one around.

The US car fleet will be 10 percent EV

The global car market is on the cusp of its greatest revolution since, perhaps, the Model T first brought automobility to the American middle class. To decarbonize the transportation sector, the world’s second-highest greenhouse gas emitter, governments are pushing to speed up the availability and mass adoption of electric cars.

Last year, according to preliminary data, about 18 percent of new cars sold around the world were electric, up from 14 percent in 2022 and less than 5 percent in 2020. The world’s highest electric car adoption rates are in China, where about a third of new cars are electric, and in Western Europe, but US sales have been rising steeply, too, hitting 10 percent of new cars in 2023, more than four times the share just two years prior.

EVs’ year-over-year share of new car sales gives us a sense of how quickly they’re going mainstream, but it’s the overall makeup of cars on US roads that matters for carbon emissions. At the end of 2022, EVs made up a more sobering 1.2 percent of the national car fleet.

As new federal and state fuel economy rules aiming to sharply reduce vehicle emissions take effect, that number will climb. One leading estimate projected that 10 percent of cars on American roads will be electric by 2030. To reach that, we’d need to add more than 20 million new EVs over the next six years. I don’t think it’ll happen that quickly, but I do think the share will hit 10 percent within a decade.

With interest rates elevated until who knows when, Americans are holding on to old cars for longer than ever — and the average age of cars on the road has been steadily increasing for decades, a trend that’s unlikely to go away. There have also been recent signs, as Vox’s Umair Irfan has reported, that Americans’ enthusiasm for EVs is flagging. 2023 US electric car sales disappointed the expectations of some forecasters, as consumers report being wary of the high price tag, lack of convenient charging stations, and reliability issues, particularly in cold weather. This year, following concerns from the auto industry, the EPA’s new car emissions standards were downgraded from what the agency proposed last year.

The hurdles — like equipping power grids to handle tens of millions of massive EV batteries — are considerable. But mass adoption will depend on carmakers’ and policy leaders’ will to make electric cars as cheap and convenient as their fossil-burning counterparts.

https://www.vox.com/future-perfect/352255/future-perfect-vox-predictions-2020s-nuclear-war-ozempic-electric-vehicles

*


THERE IS NO RECESSION



The US isn't in a recession, yet a majority of Americans in a Harris poll for the Guardian said there is one.



Media coverage is one possible explanation for why Americans feel the economy isn't doing so great.

Business Insider looked at data to see what's actually going on with the US economy.

Hey, America, we totally understand if you're not feeling so great about the economy.
But if you think we're in a recession, here's some good news: We're not in one, and there likely isn't one coming, based on economic data and what experts who talked to Business Insider are seeing.



A Harris poll for the Guardian found 56% of Americans believe the US is in a recession. Plus, it found a majority think we have a shrinking economy. Two reasons people may feel the economy isn't doing so well — despite the US not having been in an official recession since the two-month one in early 2020 — are media coverage and how people view economic trends.

David Kelly, chief global strategist at J.P. Morgan Asset Management; Eugenio Alemán, Raymond James' chief economist; and Gregory Daco, EY's chief economist, told Business Insider the US isn't in a recession.

"Americans' negative attitude towards the economy is largely due to incessantly negative media coverage of economic and social issues amplified by an even more negative social media feed," Kelly told Business Insider in a statement.



Of course, not everything is perfect, and that could sour people's views. Daco said that when you consider cost fatigue, inflation's cumulative effect, the largely frozen and unaffordable housing market, and also "the reduced amount of churn in the labor market and this perception that there are fewer opportunities out there in terms of jobs, then that leads to more pessimism about the implied state of the economy.”



"And I think that's really what we're seeing in terms of this particular survey — is that there is this difference between how people perceive consumer spending trends, inflationary trends, employment trends, and how they are from a data perspective," Daco said, adding "that misperception is exacerbated by the fact that we have different sources of intelligence, different media sources that may bias the underlying take as to how the economy is behaving.”






Unemployment rates in the US have been low



The unemployment rate did climb from 3.8% in March to 3.9% in April, but that's still low.


In the Great Recession, the US unemployment rate skyrocketed from 5.0% in December 2007 to 9.5% in June 2009. It took years for the job market to fully recover after that recession, while unemployment plummeted after the brief but deep Covid recession in 2020.



CPI data shows US inflation is stubborn but has been under 4%

Inflation is still elevated and stubborn, but the year-over-year change in the Consumer Price Index has cooled from the high 2021 and 2022 rates. Alemán said while inflation is comparatively low, "the surge in inflation since 2021 has pushed Americans to try to figure out what to buy and what not to buy — something that we were not used to doing before.”


"Probably the cost of searching for a better price has put a lot of stress into Americans' lives that they did not have before," Alemán said.

There isn't a US recession now or one coming soon either

If you're worried about a recession coming soon, you may feel better knowing that experts don't think so. Alemán said Raymond James doesn't foresee one but expects a slowdown in economic activity. Looking at the next 12 months, Daco said recession odds are relatively low. Kelly said the US isn't "even close" to a recession.



"Indeed, the so-called 'misery index', the sum of the inflation rate and the unemployment rate is currently 7.3%," Kelly said. "This is better, that is lower, than it has been more than 75% of the time over the past 60 years.”

There are still some data points and trends Americans may be concerned about. Sales for existing homes and new homes dropped recently. While mortgage rates are back below 7%, they're still elevated. Layoffs are happening at some major companies, inflation is still not back to the Fed's 2% target, and it looks like interest rates are still going to be high for a while.

"

The longer we have very, very high interest rates as we have today, that will increase the probability that something will break and that we might face a recession in the future," Alemán said.

So hooray for no recession and likely no recession anytime soon. However, just because we aren't in a recession doesn't mean the economy is perfect.



https://www.businessinsider.com/what-is-going-on-with-economy-recession-unemployment-inflation-gdp-2024-5?utm_source=pocket-newtab-en-us



*


HOW THE USSR JUSTIFIED INVADING FINLAND IN 1939



In 1939, Soviet propaganda was saying the same things about Finland that it says about Ukraine today to justify the attack: the Soviets proclaimed that a “junta under the control of overseas masters” was in power in Finland, and that they weren’t invaders but “liberators.”

The Soviet communists loathed seeing how happily this former part of the Russian Empire could live without Russia, and they decided to destroy the country, bomb it from planes, and crush it with tanks.



Finland before the Winter War



For a long time, regiments of the Red Army were concentrating around the border with Finland, engaging in all types of vile provocations.

At the same time, the Kremlin was blaming “the Finnish military under the leadership of the White Finnish butcher Mannerheim and his overseas masters” for the altercations.


The Soviet plan to take over Finland was touted as “help to the brotherly Finnish nation.” For that purpose, the fictitious “People's Republic of Finland” was created in the Kremlin, as opposition to the “Helsinki Junta.”



That “brotherly help” was to send Soviet tanks all the way to Helsinki — naturally, to “liberate long-suffering Finnish people” from the horrible regime that was oppressing them.



Apparently, the Soviet propagandists believed that the Finnish people loathed their government just as much as the Soviet people loathed theirs; that’s why they prepared such propaganda.


In reality, Finns rose up not against the “White Finnish junta”, but against the Soviet invaders.

In the very first days of the war, the Soviets began bombing Helsinki — the Red Army pilots were doing it during the day, at lunchtime, when there were many people in the streets.



Later, when footage of burning Helsinki spread all over the world, the Soviets began to lie that they bombed “only military targets,” but in fact, most of the bombs fell in the city center — around the central train station and major streets.



Stalin's tactics of “not a step back” with barrier troops behind the storm regiments were first used in the Finnish war (the task of “barrier troops” was to shoot anyone trying to retreat).

Red Army soldiers frozen in a trench: the soldiers were simply left to freeze in the forest.





Red Army soldiers captured by Finns



Contrary to Soviet propaganda, the Finns treated Soviet prisoners better than their own state did: Finns gave the captives hot food and kept them warm. In the USSR, all those who had been imprisoned by the enemy were sent to the Gulag for 10 years on their return.



The Finns fought heroically for their country.



One of the notable operations was the battle on the Raate Road, in which the Finns defeated the 44th division of the Soviets.

 

The soldiers of the 44th Division scattered through the forest: most of the 17,000 personnel lost in the operation weren’t killed by Finnish bullets, but simply froze to death.



Finnish war: dead soldiers



A considerable part of the dead soldiers were Ukrainians and Belarusians, whom the Soviets mobilized to fight against Finns — although the Finns did nothing wrong to Ukrainians and Belarusians.



The Soviets lost so many soldiers and machinery that they couldn’t continue the war. The Red Army wasn't able to crush the Finnish resistance as planned. Stalin asked for negotiations and signed a “peace deal” with the Finns.



Britain and France had considered providing the Finns with military assistance, but they took too long to actually deliver any. 

Still, one of the reasons why Stalin offered the peace deal to Finland was his worry about facing a war against France and Britain. Stalin planned to use the 1940 Peace Treaty (which gutted Finnish defensive lines) as a temporary pause, to be followed by a renewed attack once the threat of Western intervention was over.



The cost to Finland was the loss of 11% of its territory — in fact, Stalin demanded more land than the Red Army managed to occupy. 

410,000 Finns living on the territories being transferred to the USSR (12% of the population of Finland at the time) had to leave their houses and evacuate to the regions that Finland managed to keep, because they didn’t wish to live under Soviet rule.



Almost the whole population of the ceded territories chose to relocate. Stalin didn’t object to the people leaving, but demanded that the buildings and machinery be left behind intact.


The war lasted 105 days — 3.5 months. The “peace deal” was signed in March 1940.



Just three months later, in June 1940, Stalin invaded and annexed the three Baltic countries — Estonia, Latvia, and Lithuania. For the next 50 years, the Baltics had to live under Soviet occupation, undergo extensive Russification, and send their men to the Soviet army — including during the USSR’s invasion of Afghanistan.



Meanwhile, the Finns have built one of the richest and happiest countries on Earth. ~ Elena Gold, Quora



Christopher Carlin:


A Soviet officer said later, “We had conquered just enough Finnish territory to bury our dead”.



Markku Hänninen:


The Soviets didn’t actually require Finns to move from the ceded regions, but all Finns (only about 100 stayed) evacuated, because nobody wanted to be a Soviet citizen.



Ville Hurmalainen:


Yes. It was common knowledge what happened to those Finnish people who migrated to Soviet Karelia, aka The Socialist Paradise. Almost everyone was shot during Stalin's terror.



Simo Henrik:


Yes, Stalin wished Finns to stay on as a legitimate “Finnish Soviet Republic,” but only about 1 in 100,000 stayed, because they were too old and didn’t want to die away from their home. He didn’t find enough Finns (as they had mostly been purged in the Karelian woods) to man his “Finnish Liberation Army,” and hence that old sarcastic Soviet saying: “Minsk ‘Finns’ (Minskiye Finny) trample Finnish mines (Finskiye miny).”



Vlad P:


The more Russified a country was, the worse the outcome for that country. The occupied territories were vassal states, with most of the wealth going back to Moscow. That’s why, for Ukraine to have any future, it must not fall to Russia.



Ralph Wortley:


Anyone who trusts Russia is a fool.



Simon Eliasson:

The Finnish turnaround after the war (the country was very, very broken but still managed to become an exemplary democratic state) is interesting and inspiring, to say the least.



Kimmo Saarinen:


Stalin wanted to take over the whole of Finland.

Just after the attack, the Soviet Union set up the so-called Terijoki Government to represent Finland and refused to negotiate with the official government of Finland. But when the war did not go well, and because the workers of Finland did not move to support the Terijoki Government (on the contrary, they stayed loyal to the official Finnish government), it was dissolved.



Roy Haikarainen:


It’s Raate Road, which was linked to the Battle of Suomussalmi, in which the Soviet 163rd Division was destroyed. The 44th was sent to reinforce the 163rd, but was itself wiped out on Raate Road.



Peter Fabianski:


If the Soviets had not invaded Finland, they would have had 1 million more soldiers to fight the Germans, and a shorter front.



John Monteith:


The Finns also had a lovely tactic of posing dead Russian soldiers in grotesque positions to frighten their brotherly comrades in the morning.



Nixieman1:


Hitler was supplying Finland with weapons, and wanted Finland to send him their Jews. They said come and get them and we will shoot you too.



Danilo Oxford:


My grandfather fought in this war. If you changed around some names it sounds exactly like what the Russians say about Ukraine and its people. The Russians haven’t learned from their own mistakes in history. Russians were never popular in Finland afterwards, up to the present time.


*
RUSSIA RECRUITING SOLDIERS IN AFRICA


Fresh units of the Russian army shipped to Ukraine look like this. Vladimir Putin is trying to delay the next wave of mobilization, hoping that either the Ukrainians or the West will give up.

Putin’s recruiters are working overtime promising Russian citizenship and salaries of over $2000 per month to young males from poor Asian and African countries, paying recruiters high premiums for selling out their countrymen.

Fighters from Cuba, Nepal, India, Syria, Mali, Sudan, and other countries are being delivered to the front lines in Ukraine and sent into “meat attacks.”

With the ongoing war, the next wave of mobilization in Russia is inevitable, but Putin is trying hard to postpone it.

Obviously, a new wave of mobilization in major cities will expose the actual numbers of Russian casualties. Casualties in rural Russia are somewhat hidden due to the vastness of Russian territory, but in cities like Moscow and St. Petersburg the losses are visible to everyone.

Putin has already gathered and sent into meat assaults the least privileged and most gullible Russians, tempted by “big money,” who made the mistake of volunteering for his “special military operation.” The remaining men in Russia are better informed, and this is why Putin’s recruitment drive is now struggling.

The new wave of the mobilized, who would be more aware of the situation than those who obediently showed up at enlistment offices in September 2022, could rebel and turn their weapons against those who send them into meat attacks.

Another issue for Putin is ideological: the Kremlin still has an ace up its sleeve. The fact that Russia still has the ability to mobilize gives Russians a glimpse of hope that not all is lost.

Once Putin announces a new mobilization and it doesn't change anything on the battlefield, reality will sink in and Russians will have no excuse that "we haven't even started yet."

This is why Vladimir Putin is doing everything possible to delay another mass mobilization.
That’s why there are constant raids on migrants, and that’s why Putin pays premiums to foreign recruiters. The Ministry of Defense has increased salaries for fighters. More African and Nepalese mercenaries, who don’t speak any Russian, are appearing at the front. Newly arrested criminals are offered the chance to go to war instead of awaiting trial. But these resources are finite, especially considering how the Russian generals use them.

In May 2024, deserters and soldiers who refused to return to the war after rehabilitation were detained and forcibly sent to the front — instead of being prosecuted in due order.

Soldiers with injuries were also not discharged but shipped back to the trenches.

The order to send them to the front came from the Kremlin.

This indicates that the Kremlin is running out of human resources and will soon have no option to continue the war other than to begin mass mobilization. ~ Elena Gold, Quora

*
“CRUEL OPTIMISM”: MINIMUM WAGE AND THE GOOD LIFE


In early May, executives from the fast casual restaurant Chipotle Mexican Grill announced that the company would be raising its average hourly wage to $15 by the end of June. A few weeks later, Chipotle also announced that its menu prices would be increasing by about four percent to help offset those higher wages (as well as the increasing costs of ingredients). This means that instead of paying, say, $8.00 for a burrito, hungry customers will now be expected to pay $8.32 for the same amount of food.

While you might think that such a negligible increase would hardly be worth arguing about, opponents of a minimum wage hike jumped on this story as an example of the supposed economic threat posed by changing federal labor policies. During recent debates in Congress, for example, those resistant to the American Rescue Plan’s original provision to raise the federal minimum wage frequently argued that doing so could disadvantage consumers by causing prices to rise. Furthermore, Chipotle’s news exacerbated additional complaints about the potential consequences of the Economic Impact Payments authorized in light of the coronavirus pandemic: allegedly, Chipotle must raise their wages so as to entice “lazy” workers away from $300/week unemployment checks.

Nevertheless, despite the cost of burritos rising by a quarter or two, the majority of folks in the United States (just over six out of ten) support raising the federal minimum wage to $15 per hour. As many as 80% think the wage is too low in general, with more than half of surveyed Republicans (the political party most frequently in opposition to raising the minimum wage) agreeing. Multiple states have already implemented higher local minimum wages.

Why, then, do politicians, pundits, and other people continue to spread the rhetoric that minimum wage increases are unpopular and financially risky for average burrito-eaters?

Here’s where I think a little philosophy might help. Often, we are attracted to things (like burritos) because we recognize that they can satisfy a desire for something we presently lack (such as sustenance); by attaining the object of our desire, we can likewise satisfy our needs. 

Lauren Berlant, the philosopher and cultural critic who recently died of cancer on June 28th, calls this kind of attraction “optimism” because it is typically what drives us to move through the world beyond our own personal spaces in the hopes that our desires will be fulfilled.

But, importantly, optimistic experiences in this sense are not always positive or uplifting. 

Berlant’s work focuses on cases where the things we desire actively harm us, but that we nevertheless continue to pursue; calling such phenomena cases of “cruel optimism,” they explain how “optimism is cruel when the object/scene that ignites a sense of possibility actually makes it impossible to attain the expansive transformation for which a person or a people risks striving.” Furthermore, cruel optimism can come about when an attraction does give us one kind of pleasure at the expense of other, more holistic (and fundamental) forms of flourishing.

A key example Berlant gives of “cruel optimism” is the fallacy of the “good life” as something that can be achieved if only one works hard enough; as they explain, “people are trained to think that what they’re doing ought to matter, that they ought to matter, and that if they show up to life in a certain way, they’ll be appreciated for the ways they show up in life, that life will have loyalty to them.” Berlant argues that, as a simple matter of fact, this characterization of “the good life” fails to represent the real world; despite what the American Dream might offer, promises of “upward mobility” or hopes to “lift oneself up by one’s own bootstraps” through hard work and faithfulness have routinely failed to manifest (and are becoming ever more rare).

Nevertheless, emotional (or otherwise affective) appeals to stories about the “good life” can offer a kind of optimistic hope for individuals facing a bleak reality — because this hope is ultimately unattainable, it’s a cruel optimism.

Importantly, the process Berlant describes is a paradigmatically natural one — there need not be any individual puppetmaster pulling the strings (secretly or blatantly) to motivate people’s commitment to a given case of cruel optimism. However, such a cultural foundation is apt for abuse by unvirtuous agents or movements interested in selfishly profiting off of the unrealistic hopes of others.

We might think of propaganda, then, as a sort of speech act designed to sustain a narrative of cruel optimism. 

The case of Chipotle arises at the center of several overlapping objects of desire: for some, the neoliberal hope of economic self-sufficiency is threatened by governmental regulations on market prices of commodities like wage labor, as well as by federal mechanisms supporting the unemployed — with the minimum wage and pandemic relief measures both (at least seemingly) relating to this story, it is unsurprising that those optimistic about the promise of neoliberalism interpreted Chipotle as a bellwether for greater problems.

Furthermore, consumer price increases, however slight, threaten to damage hopes of achieving one’s own prosperity and wealth. The fact that these hopes are ultimately rather unlikely means that they are cases of cruel optimism; the fact that politicians and news outlets are nevertheless perpetuating them (or at least framing the information in a manner that elides broader conversations about wealth inequity and fair pay) means that those stories could count as cases of propaganda.

And, notably, this is especially true when news outlets are simply repeating information from company press releases, rather than inquiring further about their broader context: for example, rather than raising consumer prices, Chipotle could have instead saved hundreds of millions of dollars in recent months by foregoing executive bonuses and stock buybacks. (It is also worth noting that the states that elected to prematurely freeze pandemic-related unemployment funding, ostensibly to provoke workers to re-enter the labor market, have not seen the hoped-for increase in workforce participation — that is to say, present data suggests that something other than $300/week unemployment checks has contributed to unemployment rates.)

So, in short, plenty of consumers are bound to cruel optimisms about “the good life,” so plenty of executives or other elites can leverage this hope for their own selfish ends. The recent outcry over a burrito restaurant is just one form of how these strings are pulled. ~

https://www.prindleinstitute.org/2021/07/cruel-optimism-minimum-wage-and-the-good-life/

*
THE CONCEPT OF “CRUEL OPTIMISM”

The persistence of the American Dream, Lauren Berlant suggests, amounts to a cruel optimism, a condition “when something you desire is actually an obstacle to your own flourishing.” We are accustomed to longing for things that we know are bad for us, like cigarettes or cake. A key example Berlant gives of “cruel optimism” is the fallacy of the “good life” as something that can be achieved if only one works hard enough. ~ Lauren Berlant

*
BULGARIA BANS THE BURQA

Bulgaria has approved a law banning the wearing of burqas in public places, institutions, schools and government offices.

Those who do not comply with the law are cut off from social aid and fined €850. ~ Salka, Quora

Oriana:

The burqa is now banned in 16 countries, including France and Austria, but not the UK or the US.

*
LEONIA

Can I tell them: there is no Hell,
When they can learn on earth what Hell is?

In the confessional I listen to Leonie.
She fears damnation, which she thinks would be just.
If you don’t get your due in this lifetime,
She says, you get it in the next.

There goes Leonia, flames erupting
From sulfur lakes behind the gates of Hell.

~ Czeslaw Milosz, Father Severinus, Section 8

*
PEOPLE’S LAST WORDS OFTEN TIED TO FOUR PHRASES

Everyone’s life is different — yet most people still utter one of four common phrases on their deathbeds, according to Pulitzer Prize-winning author and oncologist Siddhartha Mukherjee.
Each of the phrases offers an important lesson for leading a fulfilling and successful life, Mukherjee said during a commencement speech at the University of Pennsylvania last week. 

“Every person that I’ve met in this moment of transition wanted to make four offerings,” he added.

The phrases are:

I want to tell you that I love you.
I want to tell you that I forgive you.
Would you tell me that you love me?
Would you give me your forgiveness?

People who know they’re dying often express some variation of one of those four themes — indicating that they waited until very late to show their appreciation for others or right their interpersonal wrongs, said Mukherjee, author of the award-winning 2011 nonfiction book “The Emperor of All Maladies: A Biography of Cancer.”

Instead, they harbored grudges, lived with unresolved guilt or spent years being too afraid to be vulnerable, Mukherjee explained. The ensuing remorse, stress, poor mental health and even hormonal and immune imbalances can stunt your personal and professional growth, neurobehavioral scientist J. Kim Penberthy wrote in a 2022 University of Virginia blog post.

“Love and forgiveness, death and transition. Waiting [to express yourself] merely delays the inevitable,” said Mukherjee, adding that young people should “take this seriously. You’re living in a world where love and forgiveness have become meaningless, outdated platitudes. ... They’re words people have learned to laugh at.”

Just make sure you actually mean words like “love” and “forgiveness” when you use them, said Mukherjee.

“I dare you to use these words,” he said. “But not as empty clichés. Imbue them with real meaning. Do it your way, whatever your way is.”

https://www.cnbc.com/2024/05/31/phrases-that-are-often-peoples-last-words-says-doctor-what-we-can-learn.html

The angel of the Rue de Turbigo, Paris

*
THE “METHUSELAH GENE” AMONG ASHKENAZI JEWS

Intrigued by an Ashkenazi Jewish population in New York whose members often lived healthy lives past age 90 or even 100, Dr Nir Barzilai has devoted much of his career to studying such advanced agers. He shares his findings in “Age Later: Health Span, Life Span, and the New Science of Longevity,” published earlier this summer.

A Haifa native, Barzilai is the director of the Institute for Aging Research at New York’s Albert Einstein College of Medicine. He has gleaned insight from a population sample that now totals almost 3,000, including about 750 centenarians and their children.

Consider the four Kahn siblings, whom Barzilai got to know during their golden years. All lived past the century mark, and two — Irving Kahn and Helen (Kahn) Reichert — lived to be 109. Age did not stop Irving Kahn from working as an investor — or from taking a taxi to his office at age 108. Similarly, age could not prevent Helen Reichert from enjoying a cigarette — and neither could the doctors whom she outlived.

In a phone interview with The Times of Israel, Barzilai said that while he agrees that death remains a certainty of life, he is convinced that “aging, the way it is, shouldn’t happen.”

“I want to tell a story about why I think that,” Barzilai said. Correcting himself, he added, “not ‘I think,’ but the science behind it.”

“People age at different rates,” he explained. “[Some] look 10 years younger, some 10 years older. There is a biological age. If you’re aging relatively quickly, you start accumulating diseases in the decade after [age] 60.”

These include the so-called “Big Four” of disease: diabetes, heart disease, cancer and Alzheimer’s. But not so for the Ashkenazi super-agers in his population sample.

“The first thing we learned is the most important,” Barzilai said. “It’s not that they get sick. Everybody gets sick.”

Yet, he said, the centenarians he studies live healthy lives 20 to 30 years longer than others born around the same time period.

Dr. Nir Barzilai poses for a photo during a TEDx Beacon Street conference at Boston University Medical School in Boston, January 7, 2020.

“[At the end] of their life, they die rapidly, quickly, without disease [for their] last five to eight years like us,” Barzilai said, noting that when diseases finally do come, they last only “for a few weeks.” He calls this “a huge longevity dividend” that could potentially yield knowledge of how to “slow aging” and make “a lot of the need for hospice and medical care for disease go away.”

“[It] is not the diseases themselves, it is the aging which makes you susceptible — in this case, to viral infection,” as well as “the ability of the body to sustain [itself] through a difficult disease, to survive a difficult disease,” Barzilai said.

Comparing the coronavirus situation to a war, Barzilai said, “You want to fight the virus… to do immunization. But in wars, you also defend the population and the soldiers too. We have to fortify the older adults. That’s what my book is about.”

Who are advanced agers?

Before COVID-19, Barzilai was speaking quite a bit about advanced agers — including at a TEDx Beacon Street talk at Boston University Medical School in January attended by The Times of Israel. He subsequently was interviewed in a Fox News podcast by former speaker of the House Newt Gingrich. Throughout, he was anticipating the release of his first book for a mass audience — “intelligent laypeople,” he said.

“The science is very, very simple, and whenever I felt there was a need for more science to be explained, there’s a chart or something so you can read what it means,” Barzilai said, adding that other features include “stories on centenarians, stories on biotech, drug development, about the future.”

There are also glimpses into Barzilai’s own life. A Technion graduate who completed his residency at Hadassah Medical Center, Barzilai rose to the position of chief instructor of medics in the IDF. His army service included a deployment to a refugee camp in war-torn Cambodia.

Subsequently, he studied medicine in the United States, at Yale and Cornell universities, before joining the faculty at Albert Einstein College of Medicine, where he holds the Ingeborg and Ira Leon Rennert Chair in Aging Research.

Barzilai’s own family includes his super-ager uncle Irving, a 98-year-old Holocaust survivor of multiple concentration camps. After World War II, Irving moved to Czechoslovakia, only to flee during the 1968 Prague Spring uprising. More recently, following a move to Houston, he lost his house to a hurricane four years ago but rebuilt it.

Throughout, Barzilai said, his uncle approaches life with the same philosophy: “He asks himself what’s next, bring it on, nothing’s going to kill me.”

The Kahn siblings showed similar resiliency. Irving Kahn was described as the “Oldest Active Wall Street Investor” in his 2015 New York Times obituary. Eventually managing $1 billion in assets, he took a taxi to work as recently as the year before his death.

Each day, he read two financial newspapers, which Barzilai sees as reflecting an important issue as people age.

“The point is to activate the mind,” Barzilai said. “Maybe you can have an opportunity to do it because you have the genes to do it — not because you do it to keep your mind.” However, he said, “I think both are true to some extent.”

Barzilai characterizes the centenarians he has interacted with as generally “positive thinking, grateful, thankful, extroverts.”

Yet he said it is a mistake to assume these qualities are lifelong. He notes that significant changes may occur with age, from losing one’s spouse to moving into assisted living. And, he said, science has disproved the belief that personality remains constant after age 70.

Healthy habits only get you so far

Some centenarians follow habits that seem counterintuitive. Kahn’s sister Helen (Kahn) Reichert had a penchant for cigarettes, chocolate and Budweiser, as described in her 2011 NPR obituary — which noted that she outlived multiple physicians who advised her not to smoke.

Barzilai noted a misperception that centenarians owe their longevity to healthy habits.

“The answer is no,” he said. “They are very similar to the bad habits of the population.”

However, he added, centenarians have “genes protecting them against the bad habits.”

For the general population, Barzilai said, exercising, eating right, and avoiding alcohol and tobacco are not ends unto themselves. Rather, he said, most people need to adopt a healthy lifestyle because “we are unlikely to have the genes that protect life against all these bad habits.”

Asked about the role of genetics in advanced aging, Barzilai said that the genetics is “strong” in centenarians but added that this wasn’t the only factor in living long.

Barzilai cites an example from his family history. His father and grandfather both suffered heart attacks at age 68. His grandfather’s was fatal, whereas his father survived, underwent triple bypass surgery and lived to age 84.

“You can see the genetics are similar,” Barzilai said. “My father had an interaction with the environment that was not available to my grandfather.”


“It’s very complicated to give numbers about how much is genetic,” he said. “It’s almost impossible to give a number. The point here is that if we can find all the genetics, we could protect against the environment, no matter what the number is.”

And, Barzilai said, “we are finding the longevity gene.”

He said that in studying advanced agers, he is finding “what in their genetics allows them to get to be 100,” with two findings already translated into drugs.

“It’s an impact we have on drug development from these Jewish centenarians,” he said.

In recent years, Barzilai has become intrigued by a drug called metformin, which he is examining for possible protective effects against age-related disease. It is currently used to treat Type 2 diabetes. More recently, it has been tried against COVID-19: during the pandemic, a paper published in China reported lower mortality among diabetic COVID-19 patients taking metformin than among those not taking it.

Barzilai cited past human studies in which participants using metformin had “less of the major diseases of aging,” with results being detected through either “clinical or observational” means.

“For me, metformin is a tool to show the FDA that aging can be prevented,” Barzilai said. “By targeting aging, we can prevent age-related disease.”

According to Barzilai, metformin can help realize the maximal lifespan for the human species, which he said is 115 years. [Oriana: Note that Jeanne Calment lived to be 122]

“If we can reproduce metformin for aging, and if pharma develops more and better drugs and combinations of drugs, we can start making better progress,” Barzilai said.

Looking long-term, he is optimistic about what lies ahead for aging research.

“Now biotech is involved in this,” he said. “It’s why pharmaceuticals are going to be interested, why the future looks better. We can extend our health span and enjoy a better end of life, a better quality of life.”

https://www.timesofisrael.com/only-as-old-as-your-genes-ashkenazi-super-agers-could-hold-key-to-long-life/

Oriana:
Metformin, a drug that lowers blood sugar, has been known as a “life-extension drug” for over twenty years now. It is available only by prescription. Fortunately, there is a supplement that works even better than metformin: BERBERINE.

In addition to lowering blood sugar (although this may not happen for every berberine taker), berberine also helps produce an excellent blood lipid profile — without the side effects that plague statins.

Another drug known to extend healthy lifespan is rapamycin, an immunosuppressant. The supplement QUERCETIN has a similar action and is available online. Food sources include onions, berries, green tea, and red wine.

Ashwagandha and ginseng are also being studied as potential anti-aging agents.

SUPERAGERS: ARE THERE LIFESTYLE DIFFERENCES?

In the largest observational study to date on “SuperAgers” — people in their 80s who have brains as sharp as those 30 years younger — researchers in Spain found key differences in lifestyle that may contribute to these older adults’ razor-sharp minds.

SuperAgers in the study had more gray matter in parts of the brain related to movement, and they scored higher on agility, balance and mobility tests than typical older adults — even though the physical activity levels of the two groups were the same.

“Though superagers report similar activity levels to typical older people, it’s possible they do more physically demanding activities like gardening or stair climbing,” said senior author Bryan Strange, director of the Laboratory for Clinical Neuroscience at the Technical University of Madrid, in a statement.

“From lower blood pressure and obesity levels to increased blood flow to the brain, there are many direct and indirect benefits of being physically active that may contribute to improved cognitive abilities in old age.”

The study, published Thursday in The Lancet Healthy Longevity journal, followed 64 SuperAgers and 55 cognitively normal older adults who were part of the Vallecas Project, a long-term research project on Alzheimer’s in Madrid.

In a battery of tests, the Spanish SuperAgers scored lower than typical older adults in levels of depression and anxiety, the study found. Mental health issues such as depression are known risk factors for developing dementia.

SuperAgers also told researchers they had been more active in midlife, had been happy with the amount of sleep they got, and were independent in their daily living. Poor sleep is a key risk factor for cognitive decline.

“This study adds to what we already know — SuperAging isn’t just the ability to perform well on a cognitive test,” said Angela Roberts, an assistant joint professor of communication and computer science at Western University in London, Ontario, in an email. She was not involved in the study.

“It is associated with slower and less pronounced brain atrophy in regions critical for memory and language and possibly slower age-related declines in walking and mobility,” said Roberts, also a principal investigator of the Northwestern SuperAging Research Program, a clinical trial led by the Mesulam Center for Cognitive Neurology and Alzheimer’s Disease at Northwestern University’s Feinberg School of Medicine in Chicago.

The study is good news for people in their 30s and 40s who may want to improve their health by incorporating more exercise, stress reduction and other healthy habits, said Dr. Jo Robertson, national screening and trials coordinator for the Australian Dementia Network at the University of Melbourne, in an email.

“The take-home point here for people in midlife is that they need to be enthusiastically modifying lifestyle factors known to have an impact — increasing physical fitness, reducing cardiovascular risk, optimizing mental health and getting appropriate care for any mood disorders — to improve their long-term brain health,” said Robertson, who also was not involved in the study.

What is a SuperAger?

Most people’s brains shrink as they grow older. In SuperAgers, however, studies have shown that the cortex, responsible for thinking, decision-making and memory, remains much thicker and shrinks more slowly, resembling that of people in their 50s and 60s.

To be a SuperAger, a term the Northwestern SuperAging program coined, a person must be over 80 and undergo extensive cognitive testing. Applicants are accepted into the study only if their memory is as good as or better than that of cognitively normal people in their 50s and 60s; only 10% of people who apply qualify.

“SuperAgers are required to have outstanding episodic memory — the ability to recall everyday events and past personal experiences — but then SuperAgers just need to have at least average performance on the other cognitive tests,” cognitive neuroscientist Emily Rogalski, a professor of psychiatry and behavioral sciences at the Feinberg School, told CNN in an earlier interview.

“It’s important to point out when we compare the SuperAgers to the average agers, they have similar levels of IQ, so the differences we’re seeing are not just due to intelligence,” said Rogalski, who developed the SuperAging project.

SuperAgers share similar traits, say experts who study them. They tend to be positive. They challenge their brain every day, reading or learning something new. Many continue to work into their 80s.

SuperAgers are also social butterflies, surrounded by family and friends, and can often be found volunteering in the community. And as the current study found, they also stay active physically.

More gray matter in certain parts of the brain

All participants in the Spanish research underwent brain scans, blood tests and other lifestyle and cognitive assessments when they entered the study and were reexamined annually for four years.

Brain scans showed SuperAgers had greater gray matter volume than typical older adults in areas of the brain responsible for cognitive functioning, spatial memory and overall memory. In addition, some of the most impressive changes in gray matter volume were in areas of the brain connected to motor activity or movement as well as memory.

Interestingly, the study found SuperAgers were just as likely as normally aging adults to carry APOE gene variants, including APOE4, a known risk factor for Alzheimer’s disease. However, that finding is not new, experts say.

“SuperAgers perform better on a range of measures — memory, physical condition, speed and mental health — regardless of the levels of blood biomarkers of Alzheimer’s disease,” said Robertson, who is a senior clinical neuropsychologist at Melbourne Health in Australia.

Earlier research has found a genetic predisposition for the ability to keep the mind sharp well into old age. Examination of donated brains of SuperAgers has found bigger, healthier cells in the entorhinal cortex, one of the first areas of the brain affected by Alzheimer’s.

Brains of SuperAgers also had many more von Economo neurons, a rare type of brain cell thought to allow rapid communication across the brain. So far, the corkscrew-like von Economo neuron has only been found in humans, great apes, elephants, whales, dolphins and songbirds.

“The story here is not simply that (SuperAgers) are at lower risk of developing dementia,” Roberts said. Instead, she said, they may have added protective factors — genetic or lifestyle or even a positive emotional outlook — that help protect them in the face of these risk factors.

“This is a key reason to study SuperAgers — as they may help us to uncover protective mechanisms that act on known dementia risk factors … to lessen or minimize the risk of age-related cognitive decline and brain changes,” Roberts said.

https://www.cnn.com/2023/07/13/health/superager-movement-mental-health-wellness/index.html

Oriana:

I was especially struck by the emphasis on the SuperAgers’ better walking ability.

*
ending on beauty:

I CAME HERE TONIGHT

Astronaut of lichen
I swim to her

to write of sticks and glass
the candle of her spine

~ Sutton Breiding


