Saturday, August 24, 2024

REDEEMING DOCTOR WATSON; ‘ALLAHU AKBAR’ AND ISLAMIC TERRORISM; MARX STILL RELEVANT; SPACE EXPLORATION NEEDS PURPOSE; THE YUPPIE ERA; CHANGE IN VOICE AND SPEECH PREDICTS ALZHEIMER’S AND PARKINSON’S; FORESTS STILL ABSORB A LOT OF CO2; BENEFITS OF FERMENTATION

Artflow AI; 20240724_#20. My thanks to Violeta Kelertas

*

INTO THE CLOUD

What do you have with you
now my small traveler
suddenly on the way
and all at once so far

on legs that never were
up to the life that you
led them and breathing with
the shortness breath comes to

my endless company
when you could come to me
you would stay close to me
until the day was done

o closest to my breath
if you are able to
please wait a while longer
on that side of the cloud

~ W.S. Merwin, from The Shadow of Sirius (2009)

*
REDEEMING DOCTOR WATSON

It’s not easy playing second fiddle. Think about this for a moment: is there a character in all of Western literature more misunderstood, more defamed than Doctor Watson, the steadfast sidekick of detective Sherlock Holmes?

So often, in twentieth-century film and television adaptations, Dr. John Watson is represented as a blithering idiot—often old, always naive, and perpetually astonished. He exists in a constant state of amazement, at most providing a contrast that makes Holmes seem even smarter.

This is strange, because, as he is written in Arthur Conan Doyle’s stories, Watson could not be more different from this scurrilous remaking. Holmes and Watson meet in 1881, in a laboratory where Holmes is conducting research. Watson, a surgeon, has just returned to London from a stint in Afghanistan as an army doctor. He’s looking for lodgings, and an old friend directs him to Holmes, who is in the same situation. When they meet, Watson finds Holmes fascinating.

Holmes finds Watson suitable. As both a doctor and a war veteran, Watson is in the unique position to appreciate Holmes’s scientific detective work, as well as offer meaningful assistance as needed. In fact, he appreciates it so well that he begins to profile Holmes, which attracts more attention to Holmes’s business. And he’s young; according to an estimate by Sherlockian scholar William S. Baring-Gould (which has been corroborated by other scholars, including Leslie S. Klinger), Watson is probably only about twenty-nine years old.

There’s a cartoon by the artist Kate Beaton, from her strip “Hark! A Vagrant,” that perfectly captures his predicament. In it, Dr. Watson-from-the-Sherlock-Holmes-stories, a young and capable professional, finds out that Holmes is recasting him in their adventures as a portly, easily impressed idiot who goes around yelling catchphrases like “I say!” and “By Jove!”

It is, really, a shocking switch, and it doesn’t fully happen until 1939, when Nigel Bruce played Watson alongside Basil Rathbone’s Holmes in the film adaptation of The Hound of the Baskervilles, as well as in The Adventures of Sherlock Holmes, released a few months later.

He played Watson fifteen times on film through 1946, and also on the radio numerous times through the 1950s. His Watson is incurably daft. This persona fosters a strange relationship between the two men, in which Holmes is more bemusedly parental towards his assistant than anything else. “For years, Sherlockians have wrung their hands and bemoaned the fate of poor Watson on screen in films and television and told anyone who would listen that this wasn’t the real Watson,” Klinger told the Los Angeles Times in 2012. He added,

“The thing that Sherlockians say is ‘why would a genius like Sherlock Holmes want to hang around with a fool like Nigel Bruce’s Watson?’ It doesn’t make any sense… The way we have always thought of him is as an intelligent person, young and certainly not older than Holmes, someone who is stalwart and courageous, a little bit physical. He’s someone who can mostly give back as good as he gets from Holmes.”

Thankfully, the madness has stopped, in large part—from Jude Law’s age-appropriate Watson in the Guy Ritchie films, to Lucy Liu’s capable professional in Elementary, to Martin Freeman’s adrenaline-junkie BFF in the BBC Sherlock series, the Watsons doctored up by recent adaptations have restored the character’s common sense and scientific expertise, as well as his genuine concern for Holmes’s dangerous habits. (This latter quality runs deep, all the way back to one of the first Holmes adaptations, William Gillette’s off-the-rails stage melodrama from 1899.)

But I want to dwell on the original Watson, the textual Watson, for a little bit. More than simply “not being an imbecile,” the literary Watson offers the detective far more than common-sense support. It must not be forgotten that Watson is the writer of the Holmes stories and, as such, their architect. He’s responsible for representing their relationship, and even Holmes’s entire persona. Watson might, actually, be the greater mastermind of the two.

See, in the books, Watson is easily astonished by Holmes, but I think this is entirely purposeful. Take this example, from the 1891 story “A Scandal in Bohemia.” In one scene, Holmes emerges from his bedroom, in the flat he shares with Watson, wearing an elaborate disguise. According to Watson, Holmes had left the room dressed as himself, but “returned in a few minutes in the character of an amiable and simple-minded Nonconformist clergyman.” Watson notes the striking, convincing elements of Holmes’s costume: “His broad black hat, his baggy trousers, his white tie,” but then begins to describe subtle aspects of his new appearance that really seem to seal the deal: “his sympathetic smile,” and “general look of peering and benevolent curiosity.”

After all, Holmes is not simply wearing the outfit of a clergyman, but looks to be an “amiable and simple-minded” one—believable as a distinct person not merely because of his habit, but because of his expressions. Marveling, Watson remarks that Holmes’s changes to his own presentation were “such as Mr. John Hare” (a popular actor and the then-manager of London’s Garrick Theatre, known for playing well-meaning geezers) “alone could have equaled.” He adds, “It was not merely that Holmes changed his costume. His expression, his manner, his very soul seemed to vary with every fresh part that he assumed.”

This is quite the endorsement! Here, in this third installment ever written about Holmes, Watson clues us in that performance is a key part of Holmes’s detective methodology. In the Holmes canon, which totals fifty-six stories and four novels, Sherlock Holmes is recorded as being thoroughly disguised seventeen times. I think about this a lot because my own doctoral dissertation is about “performance and the Victorian detective,” and, you guessed it, one of the chapters is about Holmes. (Is the conceit of this article taken from a talk I gave at Vanderbilt University’s annual “Dickens Universe Winter Conference”? YOU BET.)

But I kept returning to Watson’s representations of Holmes’s transformations, intrigued by how Watson remains stunned every single time he witnesses one. In a slightly later story, “The Man with the Twisted Lip,” when Watson journeys into an opium den alone to look for another acquaintance, he hears a voice whisper to him, “Walk past me, and then look back at me.” He is confused, but then it makes sense.

“The words fell quite distinctly upon my ear. I glanced down. They could only have come from the old man at my side, and yet he sat now as absorbed as ever, very thin, very wrinkled, bent with age, an opium pipe dangling down from between his knees, as though it had dropped in sheer lassitude from his fingers. I took two steps forward and looked back. It took all my self-control to prevent me from breaking out into a cry of astonishment.

He had turned his back so that none could see him but I. His form had filled out, his wrinkles were gone, the dull eyes had regained their fire, and there, sitting by the fire and grinning at my surprise, was none other than Sherlock Holmes. He made a slight motion to me to approach him, and instantly, as he turned his face half round to the company once more, subsided into a doddering, loose-lipped senility.”

Holmes’s physical transformations seem to occur almost as if by magic; it is impossible, even for the most skilled actor, to completely lose one’s identity inside a performance through physical maneuverings alone, yet Watson swears that Holmes does.

We have to take Watson at his word, and this is tricky, when you think about it; he is the spectating character as well as the narrator. What Watson insists Holmes is able to do is almost physically impossible and has never been done before. Good acting, which Watson affirms time and time again Holmes is capable of, never fully erases the presence of the actor, unless it is done outside an expected, established performance space, without forewarning of the actor’s presence. (Then it doesn’t seem to be “acting” so much as another kind of more general “performance.”) After all, when one attends the theater to see a show, one is not dumbfounded by performers looking and acting differently than they do in life.

Watson knows how this works—in “A Scandal in Bohemia,” when Holmes dresses up as the “simple-minded clergyman,” he compares Holmes to the actor John Hare, a man with an established repertoire. He doesn’t compare him to the characters Hare has played, but to the real man himself. The job of an actor, as we understand it, is to be able to make the audience forget that they are watching the person they know, but the true identity of the performer is never eradicated completely.

Yet Watson is entirely fooled and startled by Holmes’s disguises every time—which makes very little sense in the theoretical terms just described; indeed, John Hare, known for effectively playing old men throughout his career, would nonetheless have been identifiable on a stage as the man playing the geezer if there had been even the remote possibility that he would be acting in the theater that night. But it also makes no sense because Watson should be familiar with his friend’s tendencies. One would think that, after so much time spent as the roommate and friend of a man so prone to disguises, whenever a stranger strode into their apartment or chatted excessively with Watson on the street, Watson might at least wonder whether the interloper could be his disguise-prone friend.

Holmes has established that his performance space is limitless, something Watson can never seem to fully grasp. But the question is, is Watson an audience, or is he (after all, the writer of these stories for a broad reading public) complicit in the performance? See, if Holmes is acting, then who is to say that Watson isn’t acting, too?

Watson’s constant, repeated shock at Holmes’s unveiling of his identity is so salient that it is likely the reason why subsequent adaptations turn him into an easily dumbfounded fool, seemingly existing to marvel at his roommate’s abilities. Most adaptations, though, seem to forget that Watson is the one in control. I prefer to speculate that this ultimately reveals just how clever a man he is, participating in an odd-couple double act in which his alter ego is fooled all the time. What if his astonishment is fake? After all, Watson IS a doctor; he can’t be that dumb. It’s elementary. (The 1988 Holmes-Watson comedy Without a Clue suggests something along these lines.)

Reading the Sherlock Holmes stories with Holmes as the performer and Watson as his clever hype-man, swearing to his audience that he has never seen feats more impressive than those Holmes will pull off today, relocates the “genius” so often attributed to the subject of his stories. The true genius of the Holmes-Watson gambit lies in their very collaboration, a scheme of presentation so effective that it gets “Watson’s” stories published in The Strand and, within the worlds of the stories as well as outside them, makes Holmes extraordinarily famous.

The thing is, Holmes is much less impressive without Watson there to tell you just how impressive he is. If Holmes is so impressive, it’s because Watson has set him up to be. Rather than a sidekick, Watson is more like Holmes’s manager. Holmes is the talent, but Watson is the strategist. And you can’t put on a show without that.

https://getpocket.com/explore/item/redemption-for-doctor-watson?utm_source=pocket-newtab-en-us

*

HOW WE MAKE DECISIONS

~ “We say that we “decide” to get married, to have children, to live in particular cities or embark on particular careers, and in a sense this is true. But how do we actually make those choices? One of the paradoxes of life is that our big decisions are often less calculated than our small ones are. We agonize over what to stream on Netflix, then let TV shows persuade us to move to New York; buying a new laptop may involve weeks of Internet research, but the deliberations behind a life-changing breakup could consist of a few bottles of wine. We’re hardly more advanced than the ancient Persians, who, Herodotus says, made big decisions by discussing them twice: once while drunk, once while sober.

Steven Johnson hopes to reform us. He examines a number of complex decisions with far-reaching consequences—such as the choice, made by President Barack Obama and his advisers, to green-light the raid on Osama bin Laden’s presumed compound, in Abbottabad, Pakistan—and then shows how the people in charge drew upon insights from “decision science,” a research field at the intersection of behavioral economics, psychology, and management. He thinks that we should apply such techniques to our own lives.

I’ve never had to decide whether to launch a covert raid on a suspected terrorist compound, but I’ve made my share of big decisions. This past summer, my wife and I had a baby boy. His existence suggests that, at some point, I decided to become a father. Did I, though? I never practiced any prudential algebra; rather than drawing up lists of pros and cons and concluding, on balance, that having kids was a good idea, I gradually and unintentionally transitioned from not particularly wanting children to wanting them, and from wanting them to joining my wife in having them. If I made a decision, it wasn’t a very decisive one.

In “War and Peace,” Tolstoy writes that, while an armchair general may imagine himself “analyzing some campaign on a map” and then issuing orders, a real general never finds himself at “the beginning of some event”; instead, he is perpetually situated in the middle of a series of events, each a link in an endless chain of causation. “Can it be that I allowed Napoleon to get as far as Moscow?” Tolstoy’s General Kutuzov wonders. “When was it decided? Was it yesterday, when I sent Platov the order to retreat, or was it the evening before, when I dozed off and told Bennigsen to give the orders? Or still earlier?” Unlike the capture of Moscow by Napoleon, the birth of my son was a joyous occasion. Still, like Kutuzov, I’m at a loss to explain it: it’s a momentous choice, but I can’t pinpoint the making of it in space or time.

For Tolstoy, the tendency of big decisions to make themselves was one of the great mysteries of existence. It suggested that the stories we tell about our lives are inadequate to their real complexity. Johnson means to offer a way out of the Tolstoyan conundrum. He wants to make us writers, rather than readers, of our own stories. Doing so requires engaging with one of life’s fundamental questions: Are we in charge of the ways we change?

Ideally, we’d be omniscient and clearheaded. In reality, we make decisions in imperfect conditions that prevent us from thinking things through. This, Johnson explains, is the problem of “bounded rationality.” Choices are constrained by earlier choices; facts go undiscovered, ignored, or misunderstood; decision-makers are compromised by groupthink and by their own fallible minds. The most complex decisions harbor “conflicting objectives” and “undiscovered options,” requiring us to predict future possibilities that can be grasped, confusingly, only at “varied levels of uncertainty.”

And life’s truly consequential choices, Johnson says, “can’t be understood on a single scale.” Suppose you’re offered two jobs: one at Partners in Health, which brings medical care to the world’s neediest people, and the other at Goldman Sachs. You must consider which option would be most appealing today, later this year, and decades from now; which would be preferable emotionally, financially, and morally; and which is better for you, your family, and society. From this multidimensional matrix, a decision must emerge.

Johnson’s book is part of a long tradition. For centuries, philosophers have tried to understand how we make decisions and, by extension, what makes any given decision sound or unsound, rational or irrational. “Decision theory,” the destination on which they’ve converged, has tended to hold that sound decisions flow from values. Faced with a choice—should we major in economics or in art history?—we first ask ourselves what we value, then seek to maximize that value.

From this perspective, a decision is essentially a value-maximizing equation. If you’re going out and can’t decide whether to take an umbrella, you could come to a decision by following a formula that assigns weights to the probability of rain, the pleasure you’ll feel in strolling unencumbered, and the displeasure you’ll feel if you get wet. Most decisions are more complex than this, but the promise of decision theory is that there’s a formula for everything, from launching a raid in Abbottabad to digging an oil well in the North Sea. Plug in your values, and the right choice pops out.

In recent decades, some philosophers have grown dissatisfied with decision theory. They point out that it becomes less useful when we’re unsure what we care about, or when we anticipate that what we care about might shift. In a 2006 article called “Big Decisions: Opting, Converting, Drifting,” the late Israeli philosopher Edna Ullmann-Margalit asked us to imagine being one of “the early socialist Zionist pioneers” who, at the turn of the twentieth century, dreamed of moving from Europe to Palestine and becoming “the New Jews of their ideals.” Such a change, she observed, “alters one’s life project and inner core”; one might speak of an “Old Person” who existed beforehand, browsing bookshops in Budapest, and a “New Person” who exists afterward, working a field in the desert. The point of such a move isn’t to maximize one’s values. It’s to reconfigure them, rewriting the equations by which one is currently living one’s life.

Ullmann-Margalit doubted that such transformative choices could be evaluated as sound or unsound, rational or irrational. She tells the story of a man who “hesitated to have children because he did not want to become the ‘boring type’ ” that parents tend to become. “Finally, he did decide to have a child and, with time, he did adopt the boring characteristics of his parent friends—but he was happy!” Whose values were maximized—Old Person’s or New Person’s? Because no value-maximizing formula could capture such a choice, Ullmann-Margalit suggested that, rather than describing this man as having “decided” to have children, we say that he “opted” to have them—“opting” (in her usage) being what we do when we shift our values instead of maximizing them.

The nature of “opting situations,” she thought, explains why people “are in fact more casual and cavalier in the way they handle their big decisions than in the way they handle their ordinary decisions.” Yet it’s our unexplored options that haunt us. A decision-maker who buys a Subaru doesn’t dwell on the Toyota that might have been: the Toyota doesn’t represent a version of herself with different values. An opter, however, broods over “the person one did not marry, the country one did not emigrate to, the career one did not pursue,” seeing, in the “shadow presence” implied by the rejected option, “a yardstick” by which she might evaluate “the worth, success or meaning” of her actual life.

Before having children, you may enjoy clubbing, skydiving, and LSD; you might find fulfillment in careerism, travel, cooking, or CrossFit; you may simply relish your freedom to do what you want. Having children will deprive you of these joys. And yet, as a parent, you may not miss them. You may actually prefer changing diapers, wrangling onesies, and watching “Frozen.” These activities may sound like torture to the childless version of yourself, but the parental version may find them illuminated by love, and so redeemed. You may end up becoming a different person—a parent.

The problem is that you can’t really know, in advance, what “being a parent” is like. For the philosopher L. A. Paul, there’s something thrilling about this quandary. Why should today’s values determine tomorrow’s? In her 2014 book, “Transformative Experience,” Paul suggests that living “authentically” requires occasionally leaving your old self behind “to create and discover a new self.” Part of being alive is awaiting the “revelation” of “who you’ll become.”

When we’re aspiring, inarticulateness isn’t a sign of unreasonableness or incapacity. In fact, the opposite may be true. “Everyone goes to college ‘to become educated,’ ” the philosopher Agnes Callard observes, “but until I am educated I do not really know what an education is or why it is important.” If we couldn’t aspire to changes that we struggle to describe, we’d be trapped within the ideas that we already have. Our inability to explain our reasons is a measure of how far we wish to travel. It’s only after an aspirant has reached her destination, Callard writes, that “she will say, ‘This was why.’” ~

https://www.newyorker.com/magazine/2019/01/21/the-art-of-decision-making?utm_source=facebook&utm_brand=tny&utm_social-type=owned&utm_medium=social&mbid=social_facebook&fbclid=IwAR2MBNUiEdIqKMGG4HIYHkj2HYwXsykdU-5itvr129cSjYbXHWR3Cyc4tbU

Oriana:

No, we don't choose how and when we change. I think we rarely choose anything in the sense of making a rational, calculated decision. Sure, we can say, "I CHOSE to quit drinking." But an acute observer, or we ourselves when we drop the ego, can identify a myriad of external factors that had to happen just right to give us that flash of resolve that rewires the circuits of our brain and basically makes drinking no longer possible. Once we have a certain laser-like insight ("I no longer need to drink" -- and your arm stops in mid-air and can't reach the bottle) -- once we see a self-destructive habit with blazing clarity, old behavior simply becomes impossible.

It's not only that we are the slaves of our passions rather than rational beings (I see a blazing insight more in the category of "passion"), though I bow to Shakespeare's genius when he said that; it's more a matter of circumstances, or even sheer chance. We may try to fool ourselves into thinking that we've considered the pluses and minuses, and thought long and hard. But deep inside, after we've lived for a while, we know that there's no denying that the heart indeed has its reasons and it wants what it wants. But that may change in a fraction of a second. I'm thinking not of heartbeats, but of the speed of neural events. In terms of COMPLEX processing, the human brain is still far ahead of computers. And each person is a unique conglomeration -- a once-in-eternity event in the universe.

Alternately, we may give up on any conscious "choosing" because the complexity and uncertainty involved in trying to untangle the costs and benefits may be simply too great. Too many "unknown unknowns" make a mockery of our proud intellect.

Now, sometimes we don't have that emotional clarity that makes being a slave of passion so appealing, and for a while the agony is hellish. But the moment we do choose, we'll find the supposed good reasons for that choice, even though our thinking may be quite mistaken. For me it was a relief to realize that I was a slave of both passions and circumstances, mostly the latter (it actually takes great courage to be a "slave of passions," it seems to me; I don't see myself as having that kind of courage, but I have experienced acting from sheer desperation).

Another thing I learned is that choice is not necessarily all that important. We are versatile and resourceful; we'll manage somehow. So go ahead and toss that coin. 

Mary:

In thinking of how we make decisions, it is important to realize there is no “ground zero.” We are always in the middle of things, just like Tolstoy's General, each step taken with thousands of determinants behind it, surrounding it, and potentially developing from it. We remember the past and imagine the future, and each point on the journey has the potential to be transformative. History, circumstance and values are all determinants. It would be foolish to think changes occur, or can occur, as a result of a rational conscious process. And often our future self could be not only unexpected but unrecognizable to our present self. Decisions can be transformative in unimagined ways, and we rarely know where the first step will eventually take us.

I think it is fruitless to regret the “unlived life,” the one we imagine we might have had if we had chosen differently. That kind of regret assumes the shape of what might have been without any real evidence. We cannot really know what the results would have been, and there is a tendency to assume that life might have been better than the one you have — again, with no real evidence. You are already a different person than the one who could have made a different choice in the past. Regret assumes you would have had a rosier, better life, but that is no more sure than that it would have been ordinary and dull, or full of grief and disappointment.

The forces that determine decisions are both personal and social: life history, values, taste, environment, education, temperament, family, class, opportunity, genetics. It is difficult to understand exactly where and why decisions are made. It is always in the flux of things, and hard to see that first step of a significant change — or its refusal.

Oriana:

The temptation to live in regret was very strong for me due to my having left Poland. It didn’t take me long to realize that it’s not just the country I left, but the self I might have become had I stayed. It’s difficult to be an immigrant, and definitely not desirable unless the situation in your homeland is truly unbearable. Of course my life in Poland was no idyll, but it had its beauties, whether chestnut trees in bloom in Warsaw and Krakow or small local cemeteries. It had pleasures I did not realize were bliss, such as speaking one’s native tongue, not having an accent, not being constantly asked, “Where are you from?”

I would have studied English at the University of Warsaw, likely with German as my additional language. But right here the absurdity of that imaginary life comes to light. I would have chosen German on the basis of having become a poet in the US, with the desire to study Goethe and Rilke in the original — the kind of poetry that had the power to put me into a trance. But — would I have become a poet had I stayed in Poland? Would I have instead chosen French, and read every single novel by Balzac?

Believe it or not, there was a period in my life when that was an important question for me: would I have become a poet had I stayed in Poland? And a Polish woman poet replied, “Yes, you would have become a poet, but of course as a completely different person.”

And of course there is no way of imagining myself as a completely different person. What would have remained the same was my intensity and my voracious intellect. Otherwise the self I would have become remains unknown, and it’s pointless to speculate how my life would have developed.

Pointless, but hard to resist entirely . . . 

For the sake of my sanity, I have to believe that I did the right thing in leaving Poland for America. That belief isn't firm; there were times in my life when it broke down completely, and I thought I'd made a terrible mistake. But considering that there is no way to know, to split oneself into two different selves, ideally in a fascinating dialogue with each other, I might as well try to imagine what it would be like to be an astronaut. Ah, at last something I can clearly reject as ridiculous.

*
THE NEW DEMOCRATIC PARTY AND FDR-TYPE DEMOCRATS

~ Vice President Kamala Harris’s choice of Minnesota governor Tim Walz to be her running mate seems to cement the emergence of a new Democratic Party.

When he took office in January 2021, President Joe Biden was clear that he intended to launch a new era in America, overturning the neoliberalism of the previous forty years and replacing it with a proven system in which the government would work to protect the ability of ordinary Americans to prosper.

Neoliberalism relied on markets to shape society, and its supporters promised it would be so much more efficient than government regulation that it would create a booming economy that would help everyone. Instead, the slashing of government regulation and social safety systems had enabled the rise of wealthy oligarchs in the U.S. and around the globe. Those oligarchs, in turn, dominated poor populations, whose members looked at the concentration of wealth and power in the hands of a few people and gave up on democracy.

Biden recognized that defending democracy in the United States, and thus abroad, required defending economic fairness. He reached back to the precedent set by Democratic president Franklin Delano Roosevelt in 1933 and followed by presidents of both parties from then until Ronald Reagan took office in 1981. Biden’s speeches often come back to a promise to help the parents who “have lain awake at night staring at the ceiling, wondering how they will make rent, send their kids to college, retire, or pay for medication.” He vowed “to finally rebuild a strong middle class and grow our economy from the middle out and bottom up, giving hardworking families across the country a little more breathing room.”

Like his predecessors, he set out to invest in ordinary Americans. Under his administration, Democrats passed landmark legislation like the American Rescue Plan that rebuilt the economy after the devastating effects of the coronavirus pandemic; the Bipartisan Infrastructure Law that is rebuilding our roads, bridges, ports, and airports, as well as investing in rural broadband; the CHIPS and Science Act that rebuilt American manufacturing at the same time it invested in scientific research; and the Inflation Reduction Act, which, among other things, invested in addressing climate change. Under his direction, the government worked to stop or break up monopolies and to protect the rights of workers and consumers.

Like the policies of that earlier era, his economic policies were based on the idea that making sure ordinary people made decent wages and were protected from predatory employers and industrialists would create a powerful engine for the economy. The system had worked in the past, and it sure worked during the Biden administration, which saw the United States economy grow faster in the wake of the pandemic than that of any other developed economy. Under Biden, the economy added almost 16 million jobs, wages rose faster than inflation, and workers saw record low unemployment rates.

While Biden worked hard to make his administration reflect the demographics of the nation, tapping more women than men as advisors and nominating more Black women and racial minorities to federal judicial positions than any previous president, it was Vice President Kamala Harris who emphasized the right of all Americans to be treated equally before the law.

She was the first member of the administration to travel to Tennessee in support of the Tennessee Three after the Republican-dominated state legislature expelled two Black Democratic lawmakers for protesting in favor of gun safety legislation and failed by a single vote to expel their white colleague. She has highlighted the vital work historically Black colleges and universities have done for their students and for the United States. And she has criss-crossed the country to support women’s rights, especially the right to reproductive healthcare, in the two years since the Supreme Court, packed with religious extremists by Trump, overturned the 1973 Roe v. Wade decision.

To the forming Democratic coalition, Harris brought an emphasis on equal rights before the law that drew from the civil rights movements that stretched throughout our history and flowered after 1950. Harris has told the story of how her parents, Dr. Shyamala Gopalan, who hailed from India, and Donald J. Harris, from Jamaica, met as graduate students at the University of California, Berkeley and bonded over a shared interest in civil rights. “My parents marched and shouted in the Civil Rights Movement of the 1960s,” Harris wrote in 2020. “It’s because of them and the folks who also took to the streets to fight for justice that I am where I am.”

To these traditionally Democratic mindsets, Governor Walz brings something quite different: midwestern Progressivism. Walz is a leader in the Minnesota Democratic-Farmer-Labor Party, which formed after World War II, but the reform impulse in the Midwest reaches all the way back to the years immediately after the Civil War and in its origins is associated with the Republican, rather than the Democratic, Party. While Biden’s approach to government focuses on economic justice and Harris’s focuses on individual rights, Walz focuses on the government’s responsibility to protect communities from extremists. That stance sweeps in economic fairness and individual rights but extends beyond them to recall an older vision of the nature of government itself.

The Republican Party’s roots were in the Midwest, where ordinary people were determined to stop wealthy southern oligarchs from taking over control of the United States government. That determination continued after the war when people in the Midwest were horrified to see industrial leaders step into the place that wealthy enslavers had held before the war. Their opposition was based not in economics alone, but rather in their larger worldview. And because they were Republicans by heritage, they constructed their opposition to the rise of industrial oligarchs as a more expansive vision of democracy.

In the early 1870s the Granger movement, based in an organization originally formed by Oliver H. Kelley of Minnesota and other officials in the Department of Agriculture to combat the isolation of farm life, began to organize farmers against the railroad monopolies that were sucking farmers’ profits. The Grangers called for the government to work for communities rather than the railroad barons, demanding business regulation. In the 1870s, Minnesota, Iowa, Wisconsin, and Illinois passed the so-called Granger Laws, which regulated railroads and grain elevator operators. (When such a measure was proposed in California, railroad baron Leland Stanford called it “pure communism” and hired former Republican congressman Roscoe Conkling to fight it by arguing that corporations were “persons” under the Fourteenth Amendment.)

Robert La Follette grew up on a farm near Madison, Wisconsin, during the early days of the Grangers and absorbed their concern that rich men were taking over the nation and undermining democracy. One of his mentors warned: “Money is taking the field as an organized power. Which shall rule—wealth or man; which shall lead—money or intellect; who shall fill public stations—educated and patriotic free men, or the feudal serfs of corporate capital?”

In the wake of the Civil War, La Follette could not embrace the Democrats. Instead, he and people like him brought this approach to government to a Republican Party that at the time was dominated by industrialists. Wisconsin voters sent La Follette to Congress in 1884 when he was just 29, and when party bosses dumped him in 1890, he turned directly to the people, demanding they take the state back from the party machine. They elected him governor in 1900.

As governor, La Follette advanced what became known as the “Wisconsin Idea,” adopted and advanced by Republican President Theodore Roosevelt. As Roosevelt noted in a book explaining the system, Wisconsin was “literally a laboratory for wise experimental legislation aiming to secure the social and political betterment of the people as a whole.” La Follette called on professors from the University of Wisconsin, state legislators, and state officials to craft measures to meet the needs of the state’s people. “All through the Union we need to learn the Wisconsin lesson,” Roosevelt wrote.

In the late twentieth century, the Republican Party moved far away from Roosevelt when it embraced neoliberalism. As it did so, Republicans ditched the Wisconsin Idea: Wisconsin governor Scott Walker tried to do so explicitly by changing the mission of the University of Wisconsin system from a “search for truth” meant to “improve the human condition” to a demand that the university “meet the state’s workforce needs.”

While Republicans abandoned the party’s foundational principles, Democratic governors have been governing on them. Now vice-presidential nominee Walz demonstrates that those community principles are joining the Democrats’ commitment to economic fairness and civil rights to create a new, national program for democracy.

It certainly seems like the birth of a new era in American history. At a Harris-Walz rally in Arizona on Friday, Mayor John Giles of Mesa, Arizona, who describes himself as a lifelong Republican, said: “I do not recognize my party. The Republican Party has been taken over by extremists that are committed to forcing people in the center of the political spectrum out of the party. I have something to say to those of us who are in the political middle: You don’t owe a damn thing to that political party…. [Y]ou don’t owe anything to a party that is out of touch and is hell-bent on taking our country backward. And by all means, you owe no displaced loyalty to a candidate that is morally and ethically bankrupt…. [I]n the spirit of the great Senator John McCain, please join me in putting country over party and stopping Donald Trump, and protecting the rule of law, protecting our Constitution, and protecting the democracy of this great country. That is why I’m standing with Vice President Harris and Governor Walz.”

Vice President Harris put it differently. Speaking to a United Auto Workers local in Wayne, Michigan, on Thursday, she explained what she and Walz have in common.

“A whole lot,” she said. “You know, we grew up the same way. We grew up in a community of people, you know—I mean, he grew up…in Nebraska; me, Oakland, California—seemingly worlds apart. But the same people raised us: good people; hard-working people; people who had pride in their hard work; you know, people who had pride in knowing that we were a community of people who looked out for each other—you know, raised by a community of folks who understood that the true measure of the strength of a leader is not based on who you beat down. It’s based on who you lift up.”

~ Heather Cox Richardson, 8-11-24, posted on Facebook by Violeta Kelertas

*
EDUCATION GAP AND VOTING BEHAVIOR

A recent piece published by Inside Higher Ed made the case that “the recent midterm elections highlighted the growing educational divide between voters” such that college-educated citizens are generally voting for Democrats, while those without a degree cast more ballots for Republican candidates. As evidence of this divide and increasing political polarization in the country, the piece cited analysis from the Washington Post claiming that “52 percent of voters with a bachelor’s degree cast their ballots for Democrats; 42 percent of those with a high school degree or less voted for Democrats.”

Analyses of this sort are misleading and generally unhelpful since they lack nuance and fail to capture the real political diversity present in the electorate and on most campuses today. That being said, my own analysis of 2022 election exit polls of close to 20,000 voters, looking at House of Representatives voting in aggregate, confirms that there is a seeming educational divide in terms of formal degree attainment: 54 percent of college graduates reported having voted for Democratic candidates, compared with 43 percent of voters without a degree.

But first impressions may be incomplete, and it is critical to go a bit deeper here. If we look at the results broken down by gender and educational attainment, for instance, the data demonstrate that having a college degree by no means guarantees a vote for a Democratic candidate. Men in 2022, for instance, voted quite differently from women and generally supported Republican candidates over Democratic office seekers.

Fifty percent of men who hold college degrees voted for the GOP compared to 48 percent who cast ballots for Democrats. Sixty-one percent of men without a degree voted for Republicans and only a small minority—37 percent—voted for Democratic candidates. Formal education matters, but having a college degree is far from perfectly correlated with support for the Democratic Party and its candidates.

Women voters look a bit different. While 60 percent of college graduate women supported Democratic candidates in 2022, 50 percent of women without a college degree supported the Republican candidates and 49 percent voted for the Democratic candidates.

Given these numbers, Inside Higher Ed’s proclamation about educational levels is overblown and simply incorrect.

Moreover, many statements about the role of education do not hold up when race and ethnicity are considered. Fifty-two percent of college-educated white men and 72 percent of white men without a college degree cast ballots for the GOP. This education gap does not show itself among men of color: 59 percent of those with a college degree and 62 percent of those without one cast a ballot for the Democratic candidate.

Non-college-educated white women overwhelmingly supported the GOP—61 percent for Republicans compared to 37 percent for Democrats. 

College-educated white women are a reliable Democratic voting bloc with 56 percent voting for Democrats compared to 42 percent voting for Republicans. Note that 56 percent is not a huge chasm. It's terribly disappointing, but so it goes. Acceptance, acceptance, acceptance.

Women of color were appreciably more likely to support Democratic candidates than their male counterparts; 76 percent of degree-holding and 74 percent of non-degree-holding women of color supported the Democratic House candidate in 2022.

The article also failed to note that today’s college and university students are centrists and are not monolithically Democrats at all. Gen Z is shaping up to be a practical, swing generation that engages politically and socially and is not aligned to the left or right. As opposed to the Silent Generation—President Biden’s and Nancy Pelosi’s generation—which has seen a decline in independent identification and a rise in Republican partisan support in 2022, UT-Austin’s Future of Politics Survey has found real centrism among Gen Z students today. Thirty-one percent of sampled students report that they are Republican and another 33 percent are Democrats—essentially an even split.

The remainder—37 percent—are either unaffiliated or independent, and research has shown that, as opposed to being dogmatic and ideological, students today thrive in a world of differences, seem to genuinely welcome a diversity of ideas, seek empathetic leadership, and are interested in serving their communities. Thus, the purported education gap is simply not present on campus today, and students do not appear to be heading down a polarized path.

Americans with higher formal educational levels generally take more left-of-center views and vote for Democratic candidates in aggregate. But this tells only a partial story; when race is introduced along with gender, education ceases to line up so neatly. It is critical to understand that students on campus may face polarized choices and vote more to the left than to the right, but that does not mean that they are overwhelmingly left-of-center. Gen Z college and university students are balanced as a whole and not the predictable voting bloc we thought they were.

https://www.aei.org/politics-and-public-opinion/understanding-the-education-gap-in-voting-demands-nuance/


*
THE TROUBLE WITH BILLIONAIRES by Linda McQuaig and Neil Brooks

In this searing and entertaining indictment of the super-rich, Linda McQuaig and Neil Brooks challenge the idea that today’s cavernous income inequality is the result of merit, and reveal how the global economic system has been hijacked by the wealthiest, with disastrous consequences for us all.

The high taxes and strong social programs of the 1950s, ‘60s and ‘70s gave us sky-high economic growth and rising equality. In recent years, however, we’ve been constantly told that taxes and government spending are bad. McQuaig and Brooks systematically debunk these claims. As their research shows, not only do lower taxes correlate with worse societal outcomes – from health to the environment – they also fail to produce economic prosperity.

This is an outstanding book that clarifies the problems faced by humanity that are generated by the totally avaricious 1% of the population at the top of the money pile in the United States. This 1% has amassed 24% of the national income.

Linda McQuaig and Neil Brooks simplify the presentation of the egregious inequalities generated by these disparities in income so that their unfairness becomes grossly obvious. What is happening is that the benefits of whatever economic growth is occurring are now accumulating in the offshore accounts of the 1%, while the rest of the population is economically stagnant or sliding out of the middle class and into the poverty zones.

The authors also present very convincing, surprising evidence that these inequalities in the distribution of wealth are highly detrimental to those societies dominated by the [unregulated] capitalist system.

There is a growing body of evidence showing that extreme inequality imposes a number of very negative consequences on society. It increases the incidence of a wide range of health and social problems, including crime, stress, mental illness, heart disease, diabetes, stroke, infant mortality, and reduced longevity.

It's no accident that the United States claims the most billionaires but also suffers from among the highest rates of infant mortality and crime, the shortest life expectancy, and the lowest rates of social mobility and electoral political participation in the developed world. There is also extensive evidence that the emergence of an extremely wealthy elite seriously impairs the functioning of democracy.

These are not new problems. In the early 1900s the very rich engineered income tax laws that gave them the enormous advantage of a top rate of only 24%, which enabled them to accumulate enormous wealth and gave them inordinate power and influence over the government. Toward the end of the Depression and after WWII, Roosevelt (that "traitor to his class") and later Truman raised the top income tax rate to 79%.

... As this more egalitarian ideal became the established norm in the postwar years, successive governments, even Republican ones, followed suit. Under the Eisenhower administration, the top marginal rate rose to a striking 91 percent...the overall result was a more egalitarian society, as the wage increases of working people and heavier taxation of the rich led to greater equality in income distribution. The egalitarian reality also contributed to a new ethos of equality, fairness, and public empowerment.

This was reflected in support for government, which was called upon to defend and promote the public interest. No longer regarded as simply an instrument for protecting the interests of a small wealthy class with which it had been so closely allied, government was now seen as an institution with a duty to represent the interests of the population at large. Having proved itself capable and effective in defending the population in fighting the war and pulling the country out of the Depression, government came to enjoy respect as a central and beneficial force in society.

The very notion that there was such a thing as a public interest, and that government had an obligation to serve it, was part of a profound change in attitudes. Among other things, the new mood removed the well-to-do from their protected bubble at the top of society and brought them more into the mainstream. No longer giants who strode unchallenged across the economic skyscape, the wealthy were pushed closer to the ground. They were now subject to economic as well as social constraints, facing greater regulation in their business affairs, heavier taxation of their incomes, and public disdain for any behavior that seemed excessively self-interested or greedy. Under the new social contract, everyone was expected to contribute to the community... (p. 54-55)

All of this was eroded over the later part of the 20th century and is increasingly problematic today. This is readily evidenced in the various economic bubbles that are generated, which profit the 1% enormously both as the inflation occurs and as they burst. The total focus of the 1% on their profits, combined with their total disregard for any consideration for the welfare of the 99% is becoming increasingly evident to anyone who takes the time to research what is happening. The subprime banking scams are prime examples.

Wall Street traders routinely boasted about "ripping the face off" clients, an expression that meant making profits by selling derivative deals so complicated the buyers couldn't possibly understand them. Such indifference to clients, let alone other members of the public, promoted an ethos in which greed and an obsessive focus on self-interest were considered normal and acceptable, even laudable and beneficial. It was this deadly combination, a political agenda controlled by the rich, reinforced by a culture celebrating greed and saluting billionaires, that encouraged thousands of apparently normal people to take part in the subprime mortgage scam, either as participants preying on the vulnerable or as political authorities failing to stop the brazenly predatory behavior. (p. 68-9)

[This] leads to a lack of good investment opportunities in the real economy, driving capital toward the financial sector and concentrating wealth and power in the hands of financiers. This elite uses its clout to both create a social ethos that condones greed and to directly shape the political agenda to facilitate the amassing of great fortunes. A crucial element in this political agenda is the freeing up of financial markets for lucrative speculative activities. While these speculative activities are clearly orchestrated by the financial elite, segments of the broader public are drawn in, and bear most of the risks and the ultimate costs of a financial collapse.

By contrast, when income is more widely dispersed, as in the early postwar era, there is strong consumer demand for goods and services, attracting capital into the real economy. Political power is also more widely held. Middle-class citizens and organized labor aren't inclined to use their political clout to press for freer financial markets, but rather to protect and enhance their own incomes and buying power. This creates a political agenda and a social ethos that has a restraining effect on financial markets. 

The authors make an excellent case for the value of social capital, inherent in a population where the 99% has more resources and can therefore bring greater investments of "social capital" that make for a healthy economy.

The authors present strikingly convincing evidence that in countries where the good of all is considered and protected, everyone, including the 1%, enjoys better health and greater happiness.

...What is not so obvious is that extreme levels of inequality in society have an effect similar to poverty. This has become clear in a growing body of research...Countries with higher levels of inequality have higher levels of social problems — at all levels of income. 

Typically, the incidence of such problems is highest at the bottom end, but it continues through all income levels, becoming gradually weaker with each step up the financial ladder. Simply living in an unequal society puts one at greater risk of experiencing a wide range of health problems and social dysfunction. ~ Daniel Benor, MD, Amazon

~ The 5th and 6th chapters move on to demolishing the myths of the self-made billionaires, firmly situating their wealth accumulation in its social, economic, and political context and making the point that they are getting off, relatively speaking, scot-free with regard to returning something (tax) to the societies upon which their business empires have been built. Chapter 7 considers the question of motivation, and undermines the conventional wisdom that large sums of money are required to incentivize excellence, whether in sport, business, or public service.

Chapter 8: "John Maynard Keynes and the Defeat of Austerity" takes a look at the period of destructive interwar austerity and how Keynesian economics developed and provided a solid foundation for capitalism's post-war golden age, which included a significant lessening of inequality and some of the highest (if not the highest) rates of economic growth amongst mature industrial economies (and not incidentally it was a period of relatively high growth for many developing countries). Chapter 9 focusses on the "Triumph of the Welfare State" in the early post war period (1945-1970's) as well as the forces (such as the perversely rich funders of the Institute of Economic Affairs in the UK, and more globally, the Mont Pelerin society) who staged the long fight back for the "liberal" economic ideas that took root in the 1970's and beyond. The book ends by looking at a number of policies that would reduce the grotesque inequalities that have arisen over the last thirty or so years.

McQuaig and Brooks have written an excellent book that achieves a number of worthwhile aims, not least the undermining of the low-tax, low-government-spending nonsense propagated by the current coalition government on the grounds of its self-serving claim to economic efficiency. It also holds its own with other books about our current social-political-economic situation, such as Nicholas Shaxson's Treasure Islands: Tax Havens and the Men Who Stole the World, Pickett & Wilkinson's The Spirit Level: Why Equality Is Better for Everyone, and Richard Brooks's The Great Tax Robbery, in providing the general reader with an accessible, interesting angle on the major issues facing us as a society that the mainstream media has almost completely avoided. ~ S. Wood, Amazon

*
RUSSIANS FINALLY REALIZE RUSSIA IS AT WAR

A Ukrainian soldier in the Kursk region talked to a local resident. She said that she had been living in constant fear since the war began. The soldier asked her when the war began and she immediately answered that the war began on August 6.

So for this resident of the Kursk region, the war did not begin in February 2022, and certainly did not begin in 2014. For her, the war began only when it affected her personally.

For most residents of Russia, the war did not begin at all. There is some kind of military operation, somewhere far away, on TV.

Many Russians even now do not know what is happening in the Kursk region. They heard how a week ago Gerasimov reported to Putin that the Ukrainian invasion was repelled, they believed the TV again and switched to their daily affairs. They are sure that the glorious Russian army continues to liberate Ukrainians from the Nazis.

They were told on TV that the Poles and the Brits were fighting against them in the Kursk region. The fact is that for the Russians the Ukrainians are second-class people, an inferior race; the Ukrainians are not capable of resisting the Russians, much less launching an offensive on Russian territory. For the Russians, it is simply insulting to retreat before the Ukrainians, but retreating before NATO forces is another matter.

There is also a truly disgusting subset of Russians who spread lies about Ukraine. For example, on Quora one user who claims to live in Ulan-Ude (Russia) constantly writes about alleged crimes committed by Ukrainian soldiers. She provides many colorful details, but no evidence. And she has quite a lot of readers.

Ulan-Ude is the capital of Buryatia. Buryats have proven themselves to be the most cruel sadists in Ukraine. Their crimes are documented. At the same time, this resident of Ulan-Ude talks about the fictitious crimes of Ukrainians, instead of repenting for the crimes of her fellow countrymen.

In other words, for the overwhelming majority of Russians, there is no war, their army is an army of liberators that fights against NATO and the whole world.

And I am not inclined to think that most Russians secretly know the truth but are afraid to speak out against the official point of view. I believe that most Russians really believe everything their government tells them. Their ability to think and analyze has atrophied. It is probably impossible to cure them.

Russian POWs surrender

I guess those Russian POWs have the most accurate vision of the situation among Russians. But the problem is that the rest of the Russians will not listen to their opinion, because that opinion contradicts the TV. ~ Alex Lyashenko, Quora

Alex:
Totally true. Russian parents don't believe their own children, even when they are firsthand witnesses, if what they say contradicts Russian TV. Russia is an Orwellian society.

Jacobs Trowbel:
Russians can feel the war in the form of shortages, inflation, conscriptions, military recruitments and funerals of dead soldiers.

Alex Lyashenko:
But I do not see any kind of shortages in Moscow and Petersburg. And these are the only two cities in Russia which Putin cares about. To be more exact he cares so that the population of these two cities would not rise up against him. He doesn't give a damn about the other cities and the rest of the population.

William Vaughan:
Eventually Russia will abandon its nuclear arsenal and focus on being a global trade partner like the rest of Europe has… or it will use its nuclear arsenal and cease to exist. The path they are on is not sustainable.

*
RUSSIA IS A LOST COUNTRY

Oil depot fire near Rostov

Russia is a lost country. No one is safe in the Kremlin or among the Russian military leaders now. The coward bastard Putin fears for his life. He is said to have asked the agent Aleksey Dyumin to lead the Russian operations (Putin's post-invasion revenge).

They were warned, but still the border invasion came as a shock, and now the super agent must help Putin exact revenge.

The Ukrainian forces lurked on the border with Russia's Kursk region for several weeks before commanders were given the go-ahead to begin the operation.

After the invasion on August 6, it was revealed that the Russian military leadership had already received a report in mid-July that a Ukrainian offensive into Russia was expected. But they dismissed the threats and urged everyone "not to panic", and President Putin wants to know who let him down.

The information about Aleksey Dyumin comes from a Russian member of parliament who claims that the former bodyguard attended a crisis meeting with Putin, military leaders, the intelligence services and senior political figures on August 12.

At the meeting, the situation in the regions where Ukrainian troops entered and conquered communities several miles into the country was reviewed.

At the meeting, Putin is said to have given the agent the task of leading the operation against the "terrorist acts in Kursk”.

According to sources from within the Kremlin, Dyumin was given full freedom of action to direct the operation, ISW states.

One of the tasks is to coordinate the various federal forces and intelligence services involved in the fight against the Ukrainian invaders.

Many military analysts in Russia loyal to the war effort, however, suspect that the main purpose of the bodyguard's involvement is something else, writes ISW.

It is believed that this indicates that Putin's closest and most loyal circle wants to have full control over the situation in Kursk.

The security services on the ground have failed to stabilize the situation — despite the fact that, in principle, immediately after the border invasion, they stated that they had beaten back the Ukrainian troops.

This claim has been refuted by a number of posts and videos on social media where Ukrainian soldiers have filmed themselves on location in Russian villages raising their own flag and posing outside public buildings.

Now, according to these analysts, Putin wants his implacably loyal bodyguard to find out "why and how he was deceived about the real situation."

It is believed that Dyumin's report could mean the end of the careers of a number of senior officers responsible for the fiasco.

The Ukrainian offensive is a setback both for the Russian intelligence services and the preparedness of the military.

"Troops have been spotted and intelligence indicates preparations for an attack," said the report delivered to the command after Ukrainian units gathered at the border on the Ukrainian side. But according to Andrei Gurulyov, a member of parliament in the Russian Duma and former army officer, the response was lukewarm.

From the top came the order not to panic and that those higher up in the hierarchy knew better, Gurulyov said on television after the Ukrainian operation began.

A Russian warplane has reportedly been shot down over Kursk.

On Wednesday, Ukrainian President Volodymyr Zelenskyy stated that Ukraine was continuing its offensive deeper into Russia. Ukraine also claims to have carried out a major drone attack against four Russian air bases and to have shot down a Russian fighter plane.

~ Now all of us in Ukraine must act as united and effective as we did in the first weeks and months of the war, when Ukraine took the initiative and changed the situation in our favor, says Zelenskyy, according to The Guardian.

Now we have done exactly the same thing. We have proven once again that we Ukrainians are capable of achieving our goals at every opportunity – capable of defending our interests and our independence ~

*
KURSK CHANGED THE NARRATIVE OF THE WAR

The main change is that the people of Russia are experiencing firsthand what it is like to fight a war in your own country.

Before this, the war only impacted Ukrainians.

Ukraine’s incursion into Russia has laid bare the apparent complacency and incompetence of Russian officials in charge of the border. Many local people accuse the government of downplaying the Ukrainian attack or misinforming them of the danger. ~ Brent Cooper

Dimitri Zolochev:
Russians are fully aware by now that their army, from top to bottom, just can't perform up to the standards they believed in before 2022. There was a government propaganda aura that the Russian Army was a powerful, highly skilled combat juggernaut that would cut through its enemies like a knife through butter.

Now Russians clearly see that their army has been a shameful and incompetent force with no drive or morale to fight and beat the UAF. Russians see the wholesale slaughter of their men. They see their tanks and IFVs blown up like Roman candle fireworks. They see the cemeteries bursting everywhere with war-dead burials. They see a porous air defense system, with ongoing and common explosions at military bases and installations inside Russia. They see oil and gas installations being blown up.

Although Russians are under severe threat of felony arrest if they speak out against the war, when they talk with family and friends most are disillusioned and skeptical that this war was ever winnable or worth it.

Just this week, various Russian state TV propagandists finally stated that the UAF are great fighters and tacticians and “deserve” credit for their combat skills and dedication. These propagandists blasted the Russian commanders for their terrible tactics and inability to fight smartly and effectively, and demanded punishment for all the commanders who have let Russia down.

Scott Lawson:
In no way are Russian scum suffering anything like what the people of Ukraine have. They are not being randomly slaughtered by the UAF the way the barbarian Russian army slaughtered innocent civilians.

Michael Coburn:
I feel that a growing refusal to fight from the army may start a domino effect leading to an actual coup and a march on Moscow. After all, the men fighting for a little man’s ego are likely to find non-suicide a compelling motive. The people are smart enough to wait until a winner emerges before they throw their weight behind it.

Mikkel Lyktholt:
The mass capitulation on the front line in Kursk Oblast is a good sign, however, and hopefully it's contagious, so that the war stops because Russians simply refuse to fight any longer; but that is still very hopeful thinking. The foreign fighters fighting for Russia are also a mixed bunch: the Cuban fighters started executing their Russian officers a few months ago, and the Kadyrovites seem to do barely anything besides pose for social media. I am struggling to think where Russia would even find capable fighters to continue the war in any meaningful way. At the moment they seem hellbent on using depleted forces and conscripts, at least in Kursk. At any rate, it really doesn't look good for Russia, and hopefully that will be the spark that lights the fire. If it doesn't, then I'm afraid nothing will, and stopping Russia would mean escalation by Ukraine's allies, which I wouldn't be against personally, but it is undeniably a risky route to take.

*
“ALLAHU AKBAR” AND ISLAMIC TERROR

Allahu Akbar. You hear it everywhere these days.

Special agent Scott Wickland said that he heard cries of “Allahu Akbar” before the Benghazi attack. And then the guards ran for their guns.

In Nice, France, the Islamic terrorist who killed 86 people and wounded over 400 by running them over with a truck shouted, “Allahu Akbar.” In New York, the Islamic terrorist who tried to imitate him also shouted, “Allahu Akbar.” The 9/11 hijackers had the same message: “Allahu Akbar.”

“Allahu Akbar” has been present at virtually every major recent Islamic terror attack in the West.

But according to the New York Times, “Allahu Akbar” is an “innocent” and “innocuous” expression. According to one of the Times’ sources, “You see a really beautiful woman” and “you go, ‘Allahu Akbar.’”

If all those shouts of “Allahu Akbar” in Paris, London, and New York are caused by Muslim terrorists encountering attractive women, then their reaction of choice to an attractive woman is a killing spree.

“Allahu Akbar” is not “innocent” or “innocuous.” It’s at the core of what makes Islam violent.

To understand the violent history of “Allahu Akbar,” let’s climb into a time machine and go back to the year 628, to a place that will one day be known as Saudi Arabia. It’s hot out here in the desert. Temperatures from spring to fall routinely cross the hundred-degree mark and keep going.

We’re in Khaybar, a desert oasis maintained by the Jews. If the 109-degree heat has got you down, you stop by the oasis, have a cool drink of water and some dates, and then keep going. Out here, trade runs through the desert, and the oasis is a gas station. If you want to choke off major trade routes, you go after an oasis. And that is what a certain cult leader was doing, a man whose followers today still terrorize the world’s travel routes with airline hijackings, piracy against ships, and train and bus bombs.

Muslims call what happened next the “Battle of Khaybar.” Like most Muslim battles, it was a treacherous ambush and a massacre. And it helps explain why there are no Jews in Saudi Arabia today. Nor do Muslims regret this act of ethnic cleansing. Instead they celebrate it. Muslims still threaten Jews by chanting, “Khaybar, Khaybar ya Yahud”: “Remember Khaybar, Jews; Mohammed’s army will return.”

And “Allahu Akbar?”

That’s what Mohammed shouted as he realized that his surprise attack had been successful.

“Allahu-Akbar! Khaybar is destroyed.” He boasted that any nation attacked by Muslims would suffer a similar fate. And then he “had their warriors killed, their offspring and women taken as captives.”

Mohammed also picked up his own sex slave. “Safiya was amongst the captives. She first came in the share of Dahya Alkali but later on she belonged to the Prophet.” Safiya’s husband had been murdered. Like their ISIS successors, the Prophet of Islam’s band of killers and rapists took the women as slaves.

That’s where “Allahu Akbar” originated. And that’s why Muslims still shout it at terrorist attacks.

Allahu Akbar does not mean “God is Great.” It means “Allah is Greater”. What was Allah greater than at Khaybar? Allah was greater than the religion of the Jews because Mohammed was able to defeat them.

In Islam, a religious war is also a religious test. Muslim victories demonstrate the supremacy of Allah.

Despite the incessant claims that Muslims, Jews and Christians all worship the same god, the Koran tells Muslims something very different. “And the Jews say: Ezra is the son of Allah, and the Christians say: The Messiah is the son of Allah. These are the words of their mouths; they imitate the saying of those who disbelieved before; may Allah destroy them; how they are turned away!” (Koran 9:30)

The preceding verse commands Muslims to “Fight those who do not believe in Allah” and “who do not consider unlawful what Allah and His Messenger have made unlawful and who do not adopt the religion of truth from those who were given the Scripture until they pay jizya and submit.”

Those who “were given the Scripture” are Christians and Jews.

Jews and Christians had “taken Rabbis and monks to be their lords besides Allah”. The Christians had taken “Messiah, the son of Mary” when they had been commanded to “worship only one Allah.” (Koran 9:31)

Jews and Christians were “Kuffar” and “Mushrikeen.” They had taken “partners” in addition to Allah. Christians and Jews seek to “extinguish the light of Allah” (Koran 9:32). Allah had sent Mohammed to make Islam supreme over all other religions (Koran 9:33). Jews and Christians obstruct the “Way of Allah” (Koran 9:34). Muslims are encouraged to make Jihad against non-Muslims (Koran 9:38). Those who refuse to carry out Jihad will be punished by Allah (Koran 9:39).

When Muslims defeat Christians or Jews, they prove that Allah is superior to Jewish and Christian beliefs. And that the teachings of Islam are superior to the teachings of their religious enemies.

“Allahu Akbar” originated with Mohammed’s attack on the Jews of Khaybar. When Muslim terrorists shout it today, they are declaring that they are about to prove Allah’s superiority by killing non-Muslims.

“Allahu Akbar” isn’t merely associated with terrorist attacks. It’s the reason for those attacks.
Muslims kill non-Muslims to prove that “Allahu Akbar”: that Allah is greater than the religions of their victims.

“Allahu Akbar” is the motive for Islamic terrorism.

A typical excuse is that Muslims will use “Allahu Akbar” to celebrate a good event. What this excuse misses, though, is that Islam is a supremacist religion. Muslims believe that the good event they are celebrating is due to their being the only ones who truly worship “Allah.” That’s a common religious belief, and they are entitled to it. But the problem is that this claim of supremacy rests heavily on Jihad.

The Islamic mission is to make Islam supreme over all other religions (Koran 9:33). If Muslims aren’t striving to defeat other religions, then “Allahu Akbar” rings hollow. Islam does not primarily offer an internal religious experience that transforms the believer, but an external collective experience that transforms the world. Jihad, the acts of terror we see on the news, are that religious experience.

“Allahu Akbar” is the supremacist core of Islam. Mohammed offered a religious experience that merged desert banditry and conquest, whose sacraments were the murder of the enemies of Islam and the rapes of their wives and daughters. The horrifying Islamic rituals of ethnic cleansing, rape and torture demonstrated that “Allahu Akbar”. That Allah was greater than the dead men and raped women.

The Yazidi girls who were sold as sex slaves to ISIS fighters, as the Prophet Mohammed had done, describe their Islamic captors intimidating them by shouting, “Allahu Akbar”, and recall the Islamic rapist of a 12-year-old girl saying that it brought him “closer to Allah”, of a 15-year-old girl calling it a “prayer to Allah” and of the rapist of another 12-year-old girl describing her abuse as “pleasing to Allah.”

The official ISIS publication praised Allah for enabling its Jihadists to capture non-Muslim women.

“I write this while the letters drip of pride. Yes, O religions of kufr (non-Muslims) altogether, we have indeed raided and captured the kāfirah women, and drove them like sheep by the edge of the sword. I and those with me at home prostrated to Allah in gratitude on the day the first slave-girl entered our home.”

How can raping children be a prayer to Allah? Because “Allahu Akbar.” Being able to rape non-Muslim girls is a matter of “pride.” It shows that Allah, the god Muslims worship, is superior to the religion of their victims.

When ISIS Jihadists rape children or when an ISIS Muslim sympathizer runs over people in New York, Berlin or Nice, it’s a prayer of praise to Allah. And the prayer is, “Allahu Akbar.”

The more non-Muslims are killed, abused and enslaved, the more the truth of Islam and the supremacy of Allah are proven with the screams of the wounded, the dying and the families of the dead.

This is Islam. This is what it was in 628. That’s what it is today.


“Allahu Akbar” is a mandate to kill non-Muslims. A Muslim terrorist taking a gun, a knife or a truck and attacking non-Muslims is living out, “Allahu Akbar”. He’s showing that Islam is superior.

“Allahu Akbar” isn’t something he happens to say while killing you. It’s why he’s killing you. ~ Jerome Enriquez, Quora

Neil Jamieson:
The fact that, in Islam, rape, gang rape, child rape, sex slavery, slavery, abuse, torture, and murder are all actions that bring one closer to Allah, that these are all acts of praise to Allah, that these acts are Allah’s sacraments and the highest devotion to Allah, makes me question the divinity of Allah. Allah is just some gangster entity with supernatural powers, to be avoided and resisted. Allah has the morality and ethics of a barbarian thug, as does the person Allah claims is the “most perfect human ever,” Mohammed.

*
ARISTOTLE AND THE MOON SHOT

In 2009, journalist Tom Wolfe, author of the space-age classic The Right Stuff, wrote an opinion piece for the New York Times entitled “One Giant Leap to Nowhere.” Commenting on the Space Shuttle program, Wolfe recapped the first four decades of the space race and quipped, “NASA never understood the need for a philosopher corps.” According to Wolfe, NASA would never recover its lost vitality and sense of purpose because it had no philosophy of space exploration.

I increasingly suspect he was on to something. For space exploration—whether robotic or human, expeditionary or remote, commercial or government—to pursue its full potential, contribute to the general welfare of the United States, and provide benefits for all humanity, there must be a deep, rigorous engagement with the concept from everyone and for everyone. In other words, to best explore space, society needs to have a communal conversation on exploration’s value, impact, and meaning.

We can learn from the past. In 1969, Apollo 11 accomplished exactly what President Kennedy called for in his 1962 speech at Rice University, when he challenged NASA to send a human into the heavens to walk upon the surface of another world and return to tell the tale. “We choose to go to the moon in this decade and do the other things,” he said, “not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.”

But when the first human landed on the moon, to great fanfare, that success created a paradox: going to the moon eliminated the reason for going to the moon. Three of the nine missions planned after Apollo 11 were canceled. Indeed, in the five decades since Apollo, no earthling has ventured beyond low Earth orbit.

The lack of a consistent, enduring approach to contemplating human activity in space has, I would argue, cast a pall on NASA’s deep space human exploration ambitions. The 2003 Columbia disaster prompted decision makers to reassess NASA’s human spaceflight aims, leading to the Bush administration’s decision to resume human expeditions beyond low Earth orbit.

Since then, the agency has enjoyed relatively persistent, if modest, political support for an open-ended campaign of human deep space exploration. However, that support has manifested itself in different ways across, and even within, four administrations. Most recently, the Artemis program—formally launched by President Trump in 2017—set an ambitious goal to return humans to the moon in 2024. But that moon landing has already been delayed until at least 2026. And, tellingly, the Artemis Base Camp, initially proposed as an integral part of returning to the moon, has been caught in the budget squeeze. Work may be delayed well into the 2030s.

This hazy mandate to send humans to the moon and then Mars—without identifying a specific purpose for such an endeavor—leaves NASA with the substantial practical challenge of trying to sort out the complex ambitions, myriad options, and limited budgets of human expeditions into deep space. Still, predictable delays and budgetary shortfalls present an opportunity for NASA to revisit its reasons for sending humans to walk, once again, on the soil of alien worlds. If NASA’s planning is to ever really get ahead of its immediate mission ambitions and develop a sense of strategic coherency, now is the time to make that happen.

Telic goals vs. an infinite universe

As a space policy professional and, more recently, a student of the history and philosophy of space exploration, I see the gap between the end of the Apollo program and the beginning of Artemis less as a matter of technology or budget and more as a matter of telos, or purpose.

Aristotle identifies telos as a “final cause,” the end state toward which something’s existence ultimately leads. President Kennedy’s speech at Rice established a clear teleological foundation for Apollo—both in the explicit challenge of putting an astronaut on the moon and returning him safely to Earth before 1970, and in the implicit goal of beating the Soviets.

Aristotle’s telos forces consideration of an end, or, as he put it, “that for the sake of which everything is done.” Open-ended activities, though—like exploring the universe—can be described as atelic: they have no specific endpoint. Even if a country is the first to reach the moon, there is no point at which any nation can declare exploration of the universe complete. The former is a telic activity, the latter atelic. Apollo was launched on a firmly telic basis but lacked a sufficiently strong rationale to keep going.

Telic activities have a particular modern appeal; they lend themselves to bold proclamations, a multitude of program management tools, and regular progress reports. Concrete goals work for space exploration because they fuel a sense of direction and progress, and, most importantly, narrative. Narratives have a beginning, middle, and end. We start at the beginning; the telic goal defines the end. All that remains is the middle part: figuring out ways that available means can achieve those ends.

Space exploration needs signposts and metrics to feed narratives of technological advancement, forward progress, and futurity. NASA excels at all of this. The catch, as NASA discovered, is that a telic goal can be completed, exhausting the mandate that set everything in motion and bringing the narrative to a close.

Atelic efforts, lacking discrete, concrete ends, are different. Without clear goals against which progress can be measured, atelic efforts are essentially everlasting. They emphasize process, not destination. If the atelic pursuit involves doing something that’s intrinsically good, it can resemble a virtuous activity. And, where telic goals invite debate about the particular, pragmatic value of reaching an end goal, the atelic emphasis on enduring value changes the character of that debate.

Thus, applying an atelic approach to space exploration could give voice to the transcendent character of the endeavor, liberating the constraining concept of mission value from the strictures of cost, engineering, and schedule—or even complete agreement on ultimate objectives. Atelic rationales could make room for the same kind of thinking that put a golden phonograph record, The Sounds of Earth, on each Voyager spacecraft, destined to drift forever across the interstellar night.

Within the space community, decision makers are constantly grappling with questions of worth and value. Is spending money on space exploration worth it? In what way? To whom? With limited space exploration resources, should a country work toward specific, concrete goals, or broader, more enduring ones?

In space exploration, pragmatic, telic objectives are sometimes at odds with atelic, virtuous pursuits. Should astronauts investigating rock formations on the moon be focused on finding commercially viable mineral deposits, or should they be looking to learn more about how the moon was formed? Another atelic defense of space exploration might posit that sending people out into the cosmos to experience life beyond this world is good in itself. Also atelic: Elon Musk’s statement that he is working “to extend the light of consciousness to the stars.”

Encouraging activities on other worlds could have multiple indirect benefits without any practical tangible impact.

Since its creation in 1958, NASA has periodically tried to grapple with deeper questions around the value and meaning of space exploration. By law, the agency’s goals are superordinate to the conduct of science. Title 51 of the US Code—which incorporates the original National Aeronautics and Space Act that created NASA—lists NASA’s purpose, authorities, and responsibilities.

NASA exists to contribute to the “general welfare of the United States” by conducting aerospace and space activities that will meet both scientific and non-scientific objectives, such as economic competitiveness and international cooperation. Founding documents emphasize peaceful scientific activity led by a civilian agency for the betterment of all humanity. At NASA, science has a seat at the table, perhaps even a preeminent one—but not the only one.

Title 51 doesn’t provide clear guidance on how NASA is to reconcile its different prerogatives, so the agency needs to find new ways to think about its endeavors that move beyond familiar quantitative measures like cost and schedule—especially for long-term planning. What is really needed are answers to the fundamental questions of purpose and telos posed by both the Apollo and Artemis programs: Why should humans aspire to tread upon the face of a heavenly body in the first place? If the objectives are telic, then at what ends should those efforts be aimed? If the purposes are atelic, what are they?

The 1965 volume The Railroad and the Space Program, edited by historian Bruce Mazlish, is one of NASA’s most significant early forays into pursuing these deeper questions. A similar attempt to understand space exploration through a larger conceptual frame has driven other efforts at the agency, including the recent report on the Artemis program’s ethical, legal, and societal impacts.

The challenge for the future is understanding how human passions and inclinations can inform and engage space exploration without succumbing to the “terrestrial privilege” of “armchair astronaut” commentary that is often long on wild speculation but short on concrete understanding of the engineering, budgetary, and political challenges facing NASA. How can we, in exploring space, discover and create value and meaning? How can we yoke space exploration to our finest impulses in a truly self-sustaining and beneficial way?

TO BUILD A MOON BASE OR NOT?

Artemis provides a good opportunity to think about how a deeper engagement on space values, impact, and meaning might unfold. For example, in current plans, the goal is to build a base at the lunar south pole and use robots to carry out surface exploration elsewhere on the moon.

However, there has been some quiet speculation that NASA might be better served by indefinitely delaying (or canceling) a permanent base in favor of conducting human-led scientific investigations at multiple locations around the moon.

Another option is a mobile base—a robotic lunar RV, stuffed with lab equipment and living facilities—that could be telerobotically driven anywhere on the moon to greet astronauts wherever they land. In all of these options, there is a question of whether NASA should turn its attention from establishing a permanent outpost toward a more science-focused approach with human-led sorties.

Of course, the concept of telos is just one of many tools in the philosophical toolkit. 

Considering the Artemis effort from a broader philosophical standpoint can reveal widely divergent visions of what space exploration should be—and perhaps offer guidance in the choices ahead. For example, insider discussions about a permanent base versus a more peripatetic approach point to larger questions that are as philosophical as they are practical: Why go to the expense and danger of sending humans into space at all, rather than working with robots? Is there an inherent value to human presence in space? And if so, what is it? Is the scientific benefit commensurate with the added cost and risk? Are the benefits of human presence enhanced by continuous permanent residence?

In the case of building a base at the lunar south pole, many pragmatic, telic arguments are available—not least of which is the simple political value of having a discrete objective and creating a concrete psychological anchor for subsequent lunar activity. In my opinion, although base building provides an attractive telic goal with some hints of future pragmatic value, it ultimately does not present a strong enough atelic argument on its own and risks recreating the “goal attained” trap of Apollo.

But, chosen carefully, some telic objectives could mature into enduring atelic efforts. For example, a goal-oriented presence could potentially be framed under an overarching atelic framework of expanding knowledge or advancing exploration. Alternately, a series of atelic activities can transform, after a few unexpected breakthroughs or discoveries, into a post hoc telic narrative, as if the goals had been clear all along. Or perhaps an atelic argument will surface on its own. Maintaining a persistent presence on the moon would create more open-ended opportunities, such as permitting NASA and its partners to more substantively weigh in on the values and standards to which humans should adhere as they reach out into the cosmos.

Norms offer a particularly interesting way to contemplate how telic and atelic aims consolidate support for space exploration. NASA will not be alone on the moon; several nations are joining the effort while a rival China has announced its own plans. Many of the norms that the international community has embraced explicitly set aside older, more familiar frameworks (such as sovereignty) that might otherwise guide our behavior. For instance, the UN Outer Space Treaty states that nations cannot claim sovereignty “by means of use or occupation or other means.”

What that means is still undefined. Can I build a structure right in the middle of someone else’s landing zone? And then are they obligated to land elsewhere, or do I have to move? Either way, it looks like appropriation by occupation. Or, if two countries have their eye on the same location, who prevails? Does it matter if one is pursuing commercial use and the other scientific? Lunar surface activities could kick up a sizable amount of dust that could interfere with other operations, but who sets those limits? Outer space is an alien environment that will expose and defy our unspoken assumptions and priorities. Philosophy gives us ways to frame and discuss them.

Fusing the telic strengths of base building and the atelic strengths of science itself could also be productive. The general pursuit of universal knowledge and truth, frequently associated with scientific investigation, can be described as valuable in its own right. Building bases that can sustain a longer, more resilient pursuit of scientific knowledge could be a more enduring approach than pursuing either path—pure base or pure science—alone.

For Artemis to succeed where Apollo failed—providing for its successors—decision makers must think carefully about value and meaning in all areas of the mission. One of the ongoing discussions within NASA is about what building, operating, and owning a surface lunar habitat might entail. Commercial space advocates have argued that the private sector can provide exploration infrastructure more cost-effectively than the government—a practical advantage.

But in the case of an Artemis base camp, turning to the private sector for a lunar surface habitat would present political and symbolic liabilities to the mission—an atelic threat. Artemis is sending Americans to live on the moon on behalf of their country and their world; ethical considerations (or even political logic) mean that they should be sent for virtuous reasons, rather than in pursuit of profit. Sure, a commercial habitat might (in theory) be more cost-effective, but at what cost? And will those savings be worth jeopardizing the symbolic impact of Artemis?

If it is to survive, Artemis cannot afford to appear as a way to turn scientific expeditions into expensive time shares in some rocket baron’s celestial hotel. In any lunar base, ownership will feed into symbolic logic and rationale. As a base grows beyond the initial habitat and the symbolic requirements of NASA ownership are satisfied, a diversity of participants—including commercial ventures and international partners—becomes a way to broaden the sense of ownership and demonstrate the virtue of diverse approaches to transforming the moon into a human world.

Clearly, when making these sorts of decisions around building a lunar base, NASA must make choices that escape the bounds of quantitative, engineering, or cost-benefit analyses. Although it is one of the world’s preeminent engineering organizations, NASA is not institutionally well equipped by culture, precedent, or inclination to incorporate considerations that fall beyond the telic utilitarian and practical aspects of completing a mission. Yet, NASA’s core constituency is the American public, and to better serve that public, the agency needs a way to engage questions of values and visions and offer more straightforward and durable narratives of space exploration.

PHILOSOPHY FOR A CLEARER PUBLIC PURPOSE

NASA needs to embrace philosophy so that it can better explain what it is doing and why to the public and itself. This is particularly important because, as a federal agency, NASA derives its overall purpose and direction from the public through elected officials. But even when Congress and the White House set the overall agenda (and budget!), the agency still needs an internal logic guiding its decisions.

Throughout NASA’s research and exploration portfolio, a wide range of societal impacts, ethical considerations, and inspirational elements come into play. There are decisions to be made between favoring human or robotic expeditions that require understanding their differences and harmonizing them. And what should the agency’s position be, for instance, on developing technologies that will ultimately be used by the private sector? Absent clearer, systematic thinking about such issues, NASA is compelling its scientists and engineers to act as philosophers on the spot whenever they favor a robotic or human mission, authorize a commercial contract, or make myriad seemingly routine decisions.

Although this ad hoc approach may seem like an organic way to deal with the problem of purpose, it is a missed opportunity. Leaving all decisions about societal values to engineering program managers on a case-by-case basis means NASA doesn’t develop the ability to think more systematically about values, vision, and norms. And these are the core ingredients in shaping the guiding logic and narrative needed for a coherent strategy of space exploration.

Without a real way to consider what it does, NASA falls back on institutional interests and bureaucratic inertia. In other words, failing to deliberately engage philosophical debates about values and visions means that any exogenous NASA vision could become erratic, meaningless, or even subject to intellectual fads. The agency risks foundering as administrations and mandates change over time. It could get caught in the kind of pointless ideological food fights that would rob it of its broad, bipartisan appeal. Without a stronger sense of self, NASA risks getting dragged into someone else’s ideological fantasy and souring the public on space exploration.

Instead, NASA should cultivate a strong self-awareness about vision and mission.

And that self-awareness should be broad. One of the most persistent difficulties with thinking about space exploration is the immense amount of terrestrial bias that humans automatically bring to the table. Our cultures, norms, and institutions are grounded in the geographical and biological reality of where we live. Simply porting over terrestrial solutions means bringing along terrestrial assumptions, a potentially fatal mistake in the hostile and unforgiving domain of space exploration.

Terrestrial bias pops up in many small design decisions on spacecraft, including the occasional inclusion of drawers, which can jam without the aid of gravity to keep their contents in place. A broader philosophical framework can help explorers create a culture appropriate to the reality of living and working beyond Earth.

Another example of the benefits of freeing our decision making of terrestrial bias in favor of new ways of understanding the meaning and value of presence can be seen in the discussion around in situ propellants. Historically, plans to explore places like Mars assumed that astronauts would need to carry all the propellant for a return trip with them. By contrast, in situ resource utilization (ISRU) calls for sending robotic equipment in advance of a human landing to process carbon dioxide from the Martian atmosphere and manufacture the necessary propellant on site. The ISRU approach shows the importance of finding different ways to think about the value of Mars itself—reimagining it as a site of both scientific and industrial production—through rigorous philosophical engagement with space exploration.

In his closing paragraphs, Tom Wolfe argues that what NASA needs is the power of clarity and vision. What NASA needs to succeed and endure is purpose, a sense of objective, and a guiding logic to animate its strategic thinking. Congress and the White House can give NASA its goals and the resources to reach them. But first, NASA must be able to provide better ways to address the deep questions of space exploration: Why? To what end? And for what purpose? The smallest step on the moon—or anywhere in the heavens—starts with a giant, collective leap of the mind.

https://issues.org/aristotle-philosophy-space-exploration-faith/

Mary:

I don't think there is a clear sense of purpose in our space exploration programs...just as the plans seem to be unformed and undecided--manned or robotic? permanent or roving? scientific or exploitative? international or national? cooperative or combative? I live close to Canaveral and there's lots of excitement with the rocket launches...crowds gather, folks follow them in the news. They seem to be launching frequently, I'm used to feeling the Boom when they go up...but there's no real sense of exactly the aim and purpose of it all...especially the "private" launches without satellites. Space tourism for the rich??? Costly, frivolous, polluting, wasteful.

*
THE YUPPIE ERA

~ In 1979, an article by Blake Fleetwood in the Times Magazine reported a surprising phenomenon: young people were moving to big cities like New York, Philadelphia, and Baltimore. This was news because America’s metropolises, New York especially, had been given up for dead, gutted by white flight, a deteriorating economic base, and financial mismanagement. In the nineteen-seventies, New York had lost eight hundred thousand people, ten per cent of its population. Yet the evidence suggested, Fleetwood wrote, “that the New York of the 80’s and 90’s will no longer be a magnet for the poor and the homeless, but a city primarily for the ambitious and educated—an urban elite.” It was an uncannily accurate call.

https://www.newyorker.com/magazine/2024/07/29/triumph-of-the-yuppies-tom-mcgrath-book-review

Oriana:
Alas, after this enticing first paragraph, I hit a paywall. I remember when the term “Yuppie” was used a lot — and then it practically disappeared. I decided to check other sources.

~ Yuppie is a slang term denoting the market segment of young urban professionals. A yuppie is often characterized by youth, affluence, and business success. They are often preppy in appearance and like to show off their success by their style and possessions.

The term yuppie originated in the 1980s and is used to refer to young urban professionals who are successful in business and considerably affluent.

It is difficult to identify modern yuppies because modern society has doled out wealth to various groups of people rather than a specific set of people with similar characteristics.

Coined in the 1980s, the term yuppie was used as a derogatory title for young business people who were considered arrogant, undeservedly wealthy, and obnoxious. Yuppies were often associated with wearing high fashion clothing, driving BMWs, and gloating about their successes. The term has become less of a stereotype and now promotes the image of an affluent professional.

Yuppies tend to be educated with high-paying jobs, and they live in or near large cities. Some typical industries associated with yuppies include finance, tech, academia, and many areas in the arts, especially those associated with liberal thinking and style.

History of the term “yuppie”

There is some debate over who first coined the term yuppie, but many attribute this to Joseph Epstein, writer and former editor of The American Scholar. Others credit journalist Dan Rottenberg with coining the term in 1980 in an article titled "About That Urban Renaissance..." for Chicago magazine.

Rottenberg describes the gentrification of Chicago's downtown by upwardly mobile young professionals rebelling against suburbia. "The Yuppies seek neither comfort nor security, but stimulation, and they can find that only in the densest sections of the city," he wrote.

ORIGINS OF THE WORD

Linguistically, the term was an evolution, starting from the word "hippie," which 20 years earlier was a label attached to someone considered "hip" to the current culture. That word morphed into "yippie"—counterculture advocates associated with the Youth International Party.

At nearly the same time, a parody of the American "country-club/prep school culture," The Preppy Handbook, made The New York Times bestseller list. "Yuppie" was a mash-up of all of these moments among young adults in America, each a reflection of its time.

The term yuppie continued to spread throughout the 1980s as it appeared in more newspaper and magazine articles.

After the 1987 stock market crash, the term yuppie became less political and gained more of the social implications it has today. Although its usage declined in the 1990s, it has since come back into the United States lexicon. It has been used and cited in articles, songs, movies, and other pop culture media. To name a few, the term has appeared in the novel and film Fight Club, the movie American Psycho, the satirical blog "Stuff White People Like" and the Tom Petty song "Yer So Bad.”

The term yuppie isn't confined only to the United States—other countries, such as China, Russia, and Mexico, have their variations of yuppies that generally also carry the hallmark connotation of young, higher-class professionals. The term tends to spread and thrive in prospering economies.

Modern Yuppies

In the 21st century, the term takes on new meaning while retaining the basic tenets of the original yuppies. For example, given the internet and the growing reliance on electronic communication, the term yuppie could refer to a Silicon Valley tech worker who doesn't necessarily have the same social polish as the original yuppie, but still works for a prestigious company and makes a lot of money.

This can make it harder to define yuppies since it might not be obvious at first glance that these people have glamorous careers. Perhaps, as a result, the term yuppie isn't used as widely as it was in the 1980s and early 1990s.

A 2015 article in The New York Times made the case that the all-encompassing definition of yuppies had fragmented. Micro-yuppies abounded. These yuppies profess allegiance to lifestyles, such as nature-based, or professional communities, such as technology executives, or even online communities, such as gaming. 

Hipsters, who mock the consumption culture fostered by modern society, have in some ways replaced the yuppies. The irony is that, through their own consumer choices, they participate in that culture just as actively.

https://www.investopedia.com/terms/y/yuppie.asp

*
FORESTS STILL ABSORB A LOT OF EARTH’S CARBON DIOXIDE

Each year, burning fossil fuels puffs tens of billions of metric tons of planet-warming carbon dioxide into the atmosphere. And for decades, the Earth’s forests, along with its oceans and soil, have sucked roughly a third back in, creating a vacuum known as the land carbon sink. But as deforestation and wildfires ravage the world’s forests, scientists have begun to worry that this crucial balancing act may be in jeopardy.

A study published recently in the journal Nature found that, despite plenty of turmoil, the world’s forests have continued to absorb a steady amount of carbon for the last three decades.

“It appears to be stable, but it actually maybe masks the issue,” said Yude Pan, a senior research scientist at the U.S. Forest Service and the lead author of the study, which included 16 coauthors from around the world.

As the Earth’s forests have undergone dramatic changes, with some releasing more carbon than they absorb, Pan warns that better forest management is needed. “I really hope that this study will let people realize how much carbon is lost from deforestation,” Pan said. “We must protect this carbon sink.”

Roughly 10 million hectares of forest — an area equivalent to the size of Portugal — are razed every year, and ever-intensifying wildfires almost double that damage. The planet has lost so many trees that experts have warned forests may soon reach a tipping point, in which this crucial carbon vault would emit more planet-warming gases than it absorbs. Some studies have suggested that the Amazon rainforest, often called the lungs of the world, is already there.

Using data reaching back to 1990, the researchers analyzed hand measurements of tree species, size, and mass from 95 percent of the globe’s forests to calculate the amount of carbon being tucked away over three decades. For each biome studied — temperate, boreal, and tropical forests — the researchers considered how long-term changes in the landscape altered the region’s emissions-sucking power.

In the boreal forest, the world’s largest land biome, which stretches across the top of the Northern Hemisphere, the researchers found a dire situation. Over the study period, these cold-loving tree species have lost 36 percent of their carbon-sinking capacity as logging, wildfires, pests, and drought devastated the land.

Some regions are faring worse than others:
In Canada, wildfires have turned boreal forests into a source of carbon emissions. In the forests of Asian Russia, similar conditions caused the region to lose 42 percent of its sinking strength.

It’s the clear consequence of decades of worsening fires. A study published in Nature in June looked at 21 years of satellite records and was the first to confirm that the frequency and magnitude of extreme wildfires has more than doubled worldwide. The change is especially drastic in boreal forests, where these wildfires have become over 600 percent more common per year.

“I was just shocked by the magnitude,” said Calum Cunningham, a postdoctoral researcher at the University of Tasmania and lead author of the wildfire study.


An overview of the dense canopy, alongside an area of deforestation, as seen in the Amazon rainforest in 2008 near Manaus, Brazil.

Down near the equator, where tropical forests make up over half of the world’s tree cover area, the global carbon sequestration study found a complicated, three-part equation. Agricultural deforestation has caused a 31 percent loss of the old forest’s carbon-sinking strength. But new plant life has reclaimed large swaths of abandoned farmland, and the carbon-sucking power of these younger forests has made up for the losses from logging. Although persistent deforestation continues to create more emissions, the study found that when adding up these gains and losses, tropical forests are almost carbon-neutral.

So how has the globe managed to keep up the overall balancing act? The answer lies in temperate forests, where the carbon sink has increased by 30 percent. The study found that decades of reforestation efforts, largely by nationwide programs in China, are finally paying off. But the trend might not last. In China, urbanization and logging have begun to cut into tree cover. In the United States and Europe, wildfires, droughts, and pests have caused the temperate forest carbon sink to drop by 10 percent and 12 percent, respectively. 

Forest management efforts, along with the rate of emissions, will determine how this all plays out. A paper in Nature last year found “striking uncertainty” in the continued potential of carbon storage in U.S. forests, highlighting the need for conservation and restoration efforts.

Chao Wu, a postdoctoral researcher at the University of Utah who led that 2023 study, said that mitigating emissions should be the biggest priority for solving the climate crisis. “But the other important part is nature-based climate solutions, and the forest will be a very important part of that,” Wu said.

Richard Houghton, a senior scientist at the nonprofit Woodwell Climate Research Center who contributed to the latest sequestration study, says it’s “luck, in a sense” that the global forest carbon sink has remained stable.

For it to stay that way, Houghton and Pan said that increased restoration efforts and reduced logging are needed in all biomes, and especially in tropical forests, where 95 percent of deforestation occurs. “We don’t have enough preservation,” Houghton said, adding that protecting forests has added biodiversity and ecosystem health benefits. “There’s always more reasons to do a better job.”

https://grist.org/science/forests-global-carbon-sink-study/


*
CHANGE IN VOICE OR SPEECH MAY SIGNAL PARKINSON’S OR ALZHEIMER’S, AS WELL AS DEPRESSION

For Alzheimer’s and Parkinson’s, “one of the first changes that’s notable is voice,” usually appearing before a formal diagnosis, says Anais Rameau, MD, an assistant professor of laryngology at Weill Cornell Medical College and a member of the NIH voice project.

Parkinson’s may soften the voice or make it sound monotone, while Alzheimer’s disease may change the content of speech, leading to an uptick in “umm’s” and a preference for pronouns over nouns.

With Parkinson’s, vocal changes can occur decades before movement is affected. If doctors could detect the disease at this stage, before tremor emerged, they might be able to flag patients for early intervention, says Max Little, PhD, project director for the Parkinson’s Voice Initiative. “That is the ‘holy grail’ for finding an eventual cure.” 

Again, the smartphone shows potential. In a 2022 Australian study, an AI-powered app was able to identify people with Parkinson’s based on brief voice recordings, although the sample size was small. On a larger scale, the Parkinson’s Voice Initiative collected some 17,000 samples from people across the world. “The aim was to remotely detect those with the condition using a telephone call,” says Little. It did so with about 65% accuracy. “While this is not accurate enough for clinical use, it shows the potential of the idea,” he says. 

Rudzicz worked on the team behind Winterlight, an iPad app that analyzes 550 features of speech to detect dementia and Alzheimer’s (as well as mental illness). “We deployed it in long-term care facilities,” he says, identifying patients who need further review of their mental skills. Stroke is another area of interest, since slurred speech is a highly subjective measure, says Anderson. AI technology could provide a more objective evaluation. 

MOOD AND VOICE

No established biomarkers exist for diagnosing depression. Yet if you’re feeling down, there’s a good chance your friends can tell – even over the phone.

“We carry a lot of our mood in our voice,” says Powell. Bipolar disorder can also alter voice, making it louder and faster during manic periods, then slower and quieter during depressive bouts. The catatonic stage of schizophrenia often comes with “a very monotone, robotic voice,” says Anderson. “These are all something an algorithm can measure.” 

Apps are already being used – often in research settings – to monitor voices during phone calls, analyzing rate, rhythm, volume, and pitch, to predict mood changes. For example, the PRIORI project at the University of Michigan is working on this kind of call analysis to predict mood episodes in people with bipolar disorder.

The content of speech may also offer clues. In a UCLA study, published in the journal PLOS One, people with mental illnesses answered computer-programmed questions (like “How have you been over the past few days?”) over the phone. An app analyzed their word choices, paying attention to how they changed over time. The researchers found that AI analysis of mood aligned well with doctors’ assessments and that some people in the study actually felt more comfortable talking to a computer.

RESPIRATORY DISORDERS

Beyond talking, respiratory sounds like gasping or coughing may point to specific conditions. “Emphysema cough is different, COPD cough is different,” says Bensoussan. Researchers are trying to find out if COVID-19 has a distinct cough. 

Breathing sounds can also serve as signposts. “There are different sounds when we can’t breathe,” says Bensoussan. One is called stridor, a high-pitched wheezing often resulting from a blocked airway. “I see tons of people [with stridor] misdiagnosed for years – they’ve been told they have asthma, but they don’t,” says Bensoussan. AI analysis of these sounds could help doctors more quickly identify respiratory disorders. 

PEDIATRIC VOICE AND SPEECH DISORDERS

Babies who are later diagnosed with autism cry differently as early as 6 months of age, which means an app like ChatterBaby could help flag children for early intervention, says Anderson. Autism is linked to several other diagnoses, such as epilepsy and sleep disorders. So analyzing an infant’s cry could prompt pediatricians to screen for a range of conditions. 

ChatterBaby has been “incredibly accurate” in identifying when babies are in pain, says Anderson, because pain increases muscle tension, resulting in a louder, more energetic cry. 

The next goal: “We’re collecting voices from babies around the world,” she says, and then tracking those children for 7 years, looking to see if early vocal signs could predict developmental disorders. Vocal samples from young children could serve a similar purpose.

EVEN ATHEROSCLEROSIS

Eventually, AI technology may pick up disease-related voice changes that we can’t even hear. In a new Mayo Clinic study, certain vocal features detectable by AI – but not by the human ear – were linked to a three-fold increase in the likelihood of having plaque buildup in the arteries.

“Voice is a huge spectrum of vibrations,” explains study author Amir Lerman, MD. “We hear a very narrow range.” 

The researchers aren't sure why heart disease alters voice, but the autonomic nervous system may play a role, since it regulates the voice box as well as blood pressure and heart rate. 

Lerman says other conditions, like diseases of the nerves and gut, may similarly alter the voice. Beyond patient screening, this discovery could help doctors adjust medication doses remotely, in line with these inaudible vocal signals. 

“Hopefully, in the next few years, this is going to come to practice,” says Lerman. 

Still, in the face of that hope, privacy concerns remain. Voice is an identifier that's protected by the federal Health Insurance Portability and Accountability Act, which requires privacy of personal health information. That is a major reason why no large voice databases exist yet, says Bensoussan. (This makes collecting samples from children especially challenging.)  

Perhaps more concerning is the potential for diagnosing disease based on voice alone. “You could use that tool on anyone, including officials like the president,” says Rameau. 

But the primary hurdle is the ethical sourcing of data to ensure a diversity of vocal samples. For the Voice as a Biomarker project, the researchers will establish voice quotas for different races and ethnicities, ensuring algorithms can accurately analyze a range of accents. Data from people with speech impediments will also be gathered.

Despite these challenges, researchers are optimistic. “Vocal analysis is going to be a great equalizer and improve health outcomes,” predicts Anderson. “I’m really happy that we are beginning to understand the strength of the voice.” 

https://www.webmd.com/alzheimers/news/20221207/how-your-voice-could-reveal-hidden-disease

Mary:

The articles on voice analysis are fascinating...could be a rich source of information with real implications for health diagnosis and treatment.

Oriana:

Speech therapy is a very useful tool, and it can be utilized to help patients in the early stage of Alzheimer's, for instance. I wonder in how many cases it remains unused because the thinking is, what's the point, the disease is incurable anyway . . . Yet there is no putting a price on one extra year in which the patient can speak loud and clear.

*
RATES OF VASECTOMY VS TUBAL LIGATION

When couples decide they don’t want any more biological children – or any children at all – the topic of contraceptive surgery tends to come up, especially for heterosexual couples. But the decision can be weighty. Is it the best option? Is a permanent solution really what they want? And which person should undergo the surgery?

A vasectomy takes just about 20 minutes to do in a clinic, requires only local anesthesia, and is minimally invasive and quicker to heal. It typically costs about $1,000 in the US without insurance. By contrast, the procedures available to those with a uterus are more complicated.  

Tubal ligation and bilateral salpingectomy, the removal of both fallopian tubes – which is now more commonly recommended – require general anesthesia, involve multiple incisions and often laparoscopy, and have lengthier healing times. They also generally cost more than a vasectomy – some candidates report out-of-pocket quotes in the $15,000–$30,000 range.

Yet surveys show that there are fewer men getting vasectomies than women undergoing sterilization procedures. In the most recent comprehensive data, the National Center for Health Statistics reported that 27% of contraceptive-using women relied on female sterilization as their mode of birth control from 2017 to 2019, a bigger proportion than those using the pill (21%). But just 8% relied on male sterilization for birth control.

Vasectomy rates have been in decline in the US for some time, though there has been a recent bump in consultations since the rollback of Roe v Wade, especially in younger and childless men.

There are no definitive answers when it comes to this disparity. One possible factor is that women tend to be more in contact with the healthcare system than men, especially men of minority groups, says Kari Braaten, an assistant professor of obstetrics and gynecology at Harvard Medical School. “Vasectomy users are much more likely to be white and high income than any other group,” Braaten says.

The NCHS report also notes that sterilization was over three times more likely in women without a high school diploma or GED compared to women with a bachelor’s degree or higher. “Security in your contraception is highly needed among those who may have both a less stable relationship and less stable access to the healthcare system,” Braaten says.

But there is little research on the motivations behind these procedures. In one study from 2019 focused on low-income women with male partners, many participants said that their partners didn’t want to get vasectomies because they wanted options if, for example, their wives died or in case of divorce. Some participants’ partners felt like a vasectomy would compromise their manhood, or were scared of lingering pain or lowered libido and testosterone.

Pain is a real but relatively rare complication, and “there are no changes in terms of testosterone production or libido” after a vasectomy, says Leon Telis, a urologist and the director of the men’s health program at Mount Sinai Hospital.

https://www.theguardian.com/wellness/article/2024/aug/20/surgical-sterilization-how-to-decide


*
ANY AMOUNT OF ALCOHOL IS BAD FOR US

To say yes to that glass of wine or beer, or just get a juice? That’s the question many people face when they’re at after-work drinks, relaxing on a Friday night, or at the supermarket thinking about what to pick up for the weekend. I’m not here to opine on the philosophy of drinking, and how much you should drink is a question only you can answer. But it’s worth highlighting the updated advice from key health authorities on alcohol. Perhaps it will swing you one way or the other.

It’s well-known that binge-drinking is harmful, but what about light to moderate drinking? In January 2023, the World Health Organization came out with a strong statement: there is no safe level of drinking for health. The agency highlighted that alcohol causes at least seven types of cancer, including breast cancer, and that ethanol (alcohol) directly causes cancer when our cells break it down.

Reviewing the current evidence, the WHO notes that no studies have shown any beneficial effects of drinking that would outweigh the harm it does to the body. A key WHO official noted that the only thing we can say for sure is that “the more you drink, the more harmful it is – or, in other words, the less you drink, the safer it is”. It makes little difference to your body, or your risk of cancer, whether you pay £5 or £500 for a bottle of wine. Alcohol is harmful in whatever form it comes in.

Countries have started adopting this position in their national guidance. For example, in 2023, Canada introduced new national recommendations saying that abstinence is the only risk-free approach, and noting that two drinks (approximately four units) a week is low-risk. This was a change from 2011 when the guidance allowed a maximum of 10 drinks (about 20 units) and 15 drinks (about 30 units) for women and men respectively. The NHS has adopted the language of “no completely safe level of drinking”, with the guidance not to drink more than 14 units, or about six glasses of wine/pints of beer a week.

What about red wine? Wasn’t this supposed to be good for us? Two decades back, studies emerged hinting that red wine could benefit the heart, especially as part of a Mediterranean diet. However, some of these studies didn’t control for the fact that red wine drinkers were more likely to be educated, wealthy, physically active, eat vegetables, and have health insurance. In 2006, a new analysis that controlled for these health-affecting variables found no benefit from drinking red wine. Since then, increasing evidence has shown that even one glass of wine a day increases the risk of high blood pressure and heart problems.

The alcohol industry has been savvy here and funded studies that – surprise, surprise – show the benefits of moderate drinking. This is a lesson in why you should always look at who funds a study, and whether there’s a conflict of interest. The muddying of studies by commercial interests (a tactic also famously used by the tobacco industry) led to claims, such as one from economist Emily Oster, that having one drink a day during pregnancy is safe. This has been debunked: fetal brain imaging in 2022 showed that even one alcoholic drink a week during pregnancy harms the baby’s developing brain.

To summarize, there’s widespread consensus that alcohol poisons our bodies. This isn’t a moral judgment: it is what large-scale epidemiological studies have shown. This should inform government policies such as health warnings on alcohol labels, bans on multi-buy promotions, restrictions on marketing and advertising, and greater awareness of the health risks of drinking. Yet, we have to be careful not to descend into puritanism. We live in a democracy where people have the freedom to drink and make choices about their health.

And I’ll admit that even though I work in public health, I continue to have a drink from time to time. Each day, we humans make decisions over the risks we take, and those of us who work in public health have to remember that not everyone is concerned only with living longer; feeling satisfied in how we live each day is also important. We eat that doughnut or bag of crisps, even though we know it’s not great for us, just as we drive long distances on motorways knowing there’s always the risk of a fatal traffic accident. And with alcohol, for many people there’s happiness in sharing a bottle of wine or grabbing a few pints with friends.

There’s no moral judgment in how people choose to live their life and the choices they make. But, yes, drinking carries a health risk, and it’s worth us, and governments, finally acknowledging this fact, even if we’d prefer not to think about it.

https://www.theguardian.com/commentisfree/article/2024/aug/20/red-wine-drinking-alcohol-health-risks

Oriana:

I remember when I was swayed by the arguments that there were definite cardiovascular benefits of alcohol, especially for men (young women are protected against heart disease by their hormones, to some degree; “premenopausal women don’t get heart attacks,” I was told by a doctor who shrugged off my complaints about chest pains; he wasn’t the only one).
Everyone agrees that there is a difference between raising a toast at a party and regularly drinking to the point of getting drunk. The latter is self-destructive. 

But is drinking like smoking? We now know that even second-hand smoke is destructive; could it be the same with even very small quantities of alcohol? Studies have yielded mixed results. It’s possible that “social drinking” is mostly on its way out — just as smoking, once ubiquitous, has entirely lost any aura of glamor and is now seen as purely destructive. It’s not quite in the same category as using drugs, but it may be getting there. 

Speaking of smoking, alcoholics are also typically heavy smokers, and the increased risk of taking up smoking, drinking, and drugs seems to run in families, pointing to a likely genetic component. The awareness that alcoholism, and susceptibility to substance abuse in general, probably has a genetic basis should make it easier for us not to be judgmental, but also not to yield to any peer pressure to drink. 

As for research, I wonder what view will prevail ten years from now. Meanwhile, listen to your body. This is particularly crucial for women, who have a less efficient system for detoxifying alcohol than men do (this pertains mainly to liver enzymes), and thus are more harmed by even small amounts of the substance that is known to be neurotoxic and cardiotoxic.

*


A SURPRISING EARLY SIGN OF DEMENTIA RISK: BAD DREAMS

~ We spend a third of our lives asleep. And a quarter of our time asleep is spent dreaming. So, for the average person alive in 2022, with a life expectancy of around 73, that clocks in at just over six years of dreaming.



Yet, given the central role that dreaming plays in our lives, we still know so little about why we dream, how the brain creates dreams, and importantly, what the significance of our dreams might be for our health – especially the health of our brains.



My 2022 study, published in The Lancet's eClinicalMedicine journal, showed that our dreams can reveal a surprising amount of information about our brain health.



More specifically, it showed that having frequent bad dreams and nightmares (bad dreams that make you wake up) during middle or older age, may be linked with an increased risk of developing dementia.



In the study, I analyzed data from three large US studies of health and aging. These included over 600 people aged between 35 and 64, and 2,600 people aged 79 and older.



All the participants were dementia-free at the start of the study and were followed for an average of nine years for the middle-aged group and five years for the older participants.



At the beginning of the study (2002-12), the participants completed a range of questionnaires, including one which asked about how often they experienced bad dreams and nightmares.



I analyzed the data to find out whether participants with a higher frequency of nightmares at the beginning of the study were more likely to go on to experience cognitive decline (a fast decline in memory and thinking skills over time) and be diagnosed with dementia.



Weekly nightmares



I found that middle-aged participants who experienced nightmares every week were four times more likely to experience cognitive decline (a precursor to dementia) over the following decade, while the older participants were twice as likely to be diagnosed with dementia.

Interestingly, the connection between nightmares and future dementia was much stronger for men than for women.



For example, older men who had nightmares every week were five times more likely to develop dementia compared with older men reporting no bad dreams.



In women, however, the increase in risk was only 41 percent. I found a very similar pattern in the middle-aged group.

Overall, these results suggest frequent nightmares may be one of the earliest signs of dementia, which can precede the development of memory and thinking problems by several years or even decades – especially in men.



Alternatively, it is also possible that having regular bad dreams and nightmares might even be a cause of dementia.



Given the nature of this study, it is not possible to be certain which of these theories is correct (though I suspect it is the former). However, regardless of which theory turns out to be true, the major implication of the study remains the same: having regular bad dreams and nightmares during middle and older age may be linked to an increased risk of developing dementia later in life.



The good news is that recurring nightmares are treatable. And the first-line medical treatment for nightmares has already been shown to decrease the build-up of abnormal proteins linked to Alzheimer's disease.



There have also been case reports showing improvements in memory and thinking skills after treating nightmares.



These findings suggest that treating nightmares might help to slow cognitive decline and to prevent dementia from developing in some people. This will be an important avenue to explore in future research.



The next steps for my research include investigating whether nightmares in young people might also be linked to increased dementia risk. This could help to determine whether nightmares cause dementia, or whether they are simply an early sign in some people.



I also plan to investigate whether other dream characteristics, such as how often we remember our dreams and how vivid they are, might also help to determine how likely people are to develop dementia in the future.



The research might not only help to shed light on the relationship between dementia and dreaming, and provide new opportunities for earlier diagnoses – and possibly earlier interventions – but it may also shed new light on the nature and function of the mysterious phenomenon that we call dreaming.

~ Abidemi Otaiku, NIHR Academic Clinical Fellow in Neurology, University of Birmingham

https://www.sciencealert.com/an-early-warning-sign-of-dementia-risk-may-be-keeping-you-up-at-night-says-study

Oriana:

Yes, there are drugs that can prevent nightmares, but those drugs, as usual, come with side effects. A more beneficial approach, it seems to me, is "rescripting." In effect, you try to "revise" the dream in a benevolent direction.

I used to have recurrent nightmares about being in a concentration camp. In one of those dreams, I was already in line at the entrance to the gas chamber. In other recurrent dreams of that sequence, I was trying to escape while being shot at, and would wake up with my heart pounding.

The amazing thing, however, is that my brain took action to "neutralize" or "resolve" the nightmares without any conscious effort on my part to dream my way out of the bad dreams. The guards grew more and more lax and the threat to my life seemed less and less. Finally, it got so that I noticed that the gate, with a beautiful birch forest (Birkenau?) behind it, was left open, and the guard, as if forgetting to be menacing, didn't even aim his rifle at me. One sunlit dream afternoon, I dared to simply walk out through the open gate while the guard looked the other way. I left the barracks and the wire fence behind, and simply walked ahead on the sandy, sunlit road between the lovely trees. There was no pursuit, no threat. I was not only free, but also safe. I didn't run; I walked at leisure, taking in the beauty of the forest. Even while still in the dream, I knew that I was leaving the place forever. That proved correct. I still miss that lush northern forest from the dreams that started as nightmares and ended as the friendly woods of my childhood summers.

*

SPEED OF SPEECH AND DEMENTIA RISK



Can you pass me the whatchamacallit? It's right over there next to the thingamajig.

Many of us will experience "lethologica", or difficulty finding words, in everyday life. And it usually becomes more prominent with age.



Frequent difficulty finding the right word can signal changes in the brain consistent with the early ("preclinical") stages of Alzheimer's disease – before more obvious symptoms emerge.

However, a recent study from the University of Toronto suggests that it's the speed of speech, rather than the difficulty in finding words, that is a more accurate indicator of brain health in older adults.



The researchers asked 125 healthy adults, aged 18 to 90, to describe a scene in detail. Recordings of these descriptions were subsequently analyzed by artificial intelligence (AI) software to extract features such as speed of talking, duration of pauses between words, and the variety of words used.



Participants also completed a standard set of tests that measure concentration, thinking speed, and the ability to plan and carry out tasks. Age-related decline in these "executive" abilities was closely linked to the pace of a person's everyday speech, suggesting a broader decline than just difficulty in finding the right word.



A novel aspect of this study was the use of a "picture-word interference task," cleverly designed to separate the two steps of naming an object: finding the right word and instructing the mouth on how to say it out loud.



During this task, participants were shown pictures of everyday objects (such as a broom) while being played an audio clip of a word that is either related in meaning (such as "mop" – which makes it harder to think of the picture's name) or which sounds similar (such as "groom" – which can make it easier).



Interestingly, the study found that the natural speech speed of older adults was related to their quickness in naming pictures. This highlights that a general slowdown in processing might underlie broader cognitive and linguistic changes with age, rather than a specific challenge in memory retrieval for words.



While the findings from this study are interesting, finding words in response to picture-based cues may not reflect the complexity of vocabulary in unconstrained everyday conversation.

Verbal fluency tasks, which require participants to generate as many words as possible from a given category (for example, animals or fruits) or starting with a specific letter within a time limit, may be used with picture-naming to better capture the "tip-of-the-tongue" phenomenon.


The tip-of-the-tongue phenomenon refers to the temporary inability to retrieve a word from memory, despite partial recall and the feeling that the word is known.



These tasks are considered a better test of everyday conversations than the picture-word interference task because they involve the active retrieval and production of words from one's vocabulary, similar to the processes involved in natural speech.



While verbal fluency performance does not significantly decline with normal aging (as shown in a 2022 study), poor performance on these tasks can indicate neurodegenerative diseases such as Alzheimer’s.



The tests are useful because they account for the typical changes in word retrieval ability as people get older, allowing doctors to identify impairments beyond what is expected from normal aging and potentially detect neurodegenerative conditions.



The verbal fluency test engages various brain regions involved in language, memory, and executive functioning, and hence can offer insights into which regions of the brain are affected by cognitive decline.



The authors of the University of Toronto study could have investigated participants' subjective experiences of word-finding difficulties alongside objective measures like speech pauses. This would provide a more comprehensive understanding of the cognitive processes involved.



Personal reports of the "feeling" of struggling to retrieve words could offer valuable insights complementing the behavioral data, potentially leading to more powerful tools for quantifying and detecting early cognitive decline.



Opening doors



Nevertheless, this study has opened exciting doors for future research, showing that it's not just what we say but how fast we say it that can reveal cognitive changes.



By harnessing natural language processing technologies (a type of AI), which use computational techniques to analyze and understand human language data, this work advances previous studies that noticed subtle changes in the spoken and written language of public figures like Ronald Reagan and Iris Murdoch in the years before their dementia diagnoses.



While those opportunistic reports were based on looking back after a dementia diagnosis, this study provides a more systematic, data-driven and forward-looking approach.



Rapid advancements in natural language processing will allow for the automatic detection of language changes, such as a slowed speech rate.



This study underscores that such tools could aid in identifying people at risk before more severe symptoms become apparent.



https://www.sciencealert.com/scientists-identify-a-speech-trait-that-foreshadows-cognitive-decline

*
MARX RELEVANT AGAIN

~ “The new modes of production, communication, and distribution had also created enormous wealth. But there was a problem. The wealth was not equally distributed. Ten per cent of the population possessed virtually all of the property; the other ninety per cent owned nothing. As cities and towns industrialized, as wealth became more concentrated, and as the rich got richer, the middle class began sinking to the level of the working class.

Soon, in fact, there would be just two types of people in the world: the people who owned property and the people who sold their labor to them. As ideologies disappeared which had once made inequality appear natural and ordained, it was inevitable that workers everywhere would see the system for what it was, and would rise up and overthrow it. The writer who made this prediction was, of course, Karl Marx, and the pamphlet was “The Communist Manifesto.” He is not wrong yet.

Marx was also what Michel Foucault called the founder of a discourse. An enormous body of thought is named after him. Marx saw that modern free-market economies, left to their own devices, produce gross inequalities, and he transformed a mode of analysis that goes all the way back to Socrates—turning concepts that we think we understand and take for granted inside out—into a resource for grasping the social and economic conditions of our own lives.

Apart from his loyal and lifelong collaborator, Friedrich Engels, almost no one would have guessed, in 1883, the year Marx died, at the age of sixty-four, how influential he would become. Eleven people showed up for the funeral. For most of his career, Marx was a star in a tiny constellation of radical exiles and failed revolutionaries (and the censors and police spies who monitored them) but almost unknown outside it. The books he is famous for today were not exactly best-sellers. “The Communist Manifesto” vanished almost as soon as it was published and remained largely out of print for twenty-four years; “Capital” was widely ignored when the first volume came out, in 1867. After four years, it had sold a thousand copies, and it was not translated into English until 1886.

One reason for Marx’s relative obscurity is that only toward the end of his life did movements to improve conditions for workers begin making gains in Europe and the United States. To the extent that those movements were reformist rather than revolutionary, they were not Marxist (although Marx did, in later years, speculate about the possibility of a peaceful transition to communism). With the growth of the labor movement came excitement about socialist thought and, with that, an interest in Marx.

Still, as Alan Ryan writes in his characteristically lucid and concise introduction to Marx’s political thought, “Karl Marx: Revolutionary and Utopian” (Liveright), if Vladimir Lenin had not arrived in Petrograd in 1917 and taken charge of the Russian Revolution, Marx would probably be known today as “a not very important nineteenth-century philosopher, sociologist, economist, and political theorist.” The Russian Revolution made the world take Marx’s criticism of capitalism seriously. After 1917, communism was no longer a utopian fantasy.

Engels, who was two years younger, had the same politics as Marx. Soon after they met, Engels wrote his classic study “The Condition of the Working Class in England,” which ends by predicting a communist revolution. Engels’s father was a German industrialist in the textile business, an owner of factories in Barmen and Bremen and in Manchester, England, and although he disapproved of his son’s politics and the company he kept, he gave him a position at the Manchester factory. Engels hated the work, but he was good at it, as he was at most things. He went fox hunting with the gentry he despised, and made fun of Marx’s attempts to ride a horse. Engels eventually became a partner, and the income helped him keep Marx alive.

It’s true that Marx was highly doctrinaire, something that did not wear well with his compatriots in the nineteenth century, and that certainly does not wear well today, after the experience of the regimes conceived in his name. It therefore sounds perverse to say that Marx’s philosophy was dedicated to human freedom. But it was. Marx was an Enlightenment thinker: he wanted a world that is rational and transparent, and in which human beings have been liberated from the control of external forces.

This was the essence of Marx’s Hegelianism. Hegel argued that history was the progress of humanity toward true freedom, by which he meant self-mastery and self-understanding, seeing the world without illusions—illusions that we ourselves have created. The Young Hegelians’ controversial example of this was the Christian God. (This is what Feuerbach wrote about.) We created God, and then pretended that God created us. We hypostatized our own concept and turned it into something “out there” whose commandments (which we made up) we struggle to understand and obey. We are supplicants to our own fiction.

Concepts like God are not errors. History is rational: we make the world the way we do for a reason. We invented God because God solved certain problems for us. But, once a concept begins impeding our progress toward self-mastery, it must be criticized and transcended, left behind. Otherwise, like the members of the Islamic State today, we become the tools of our Tool.

What makes it hard to discard the tools we have objectified is the persistence of the ideologies that justify them, and which make what is only a human invention seem like “the way things are.” Undoing ideologies is the task of philosophy. Marx was a philosopher. The subtitle of “Capital” is “Critique of Political Economy.” The uncompleted book was intended to be a criticism of the economic concepts that make social relations in a free-market economy seem natural and inevitable, in the same way that concepts like the great chain of being and the divine right of kings once made the social relations of feudalism seem natural and inevitable.

Marx thought that industrial capitalism, too, was created for a good reason: to increase economic output—something that “The Communist Manifesto” celebrates. The cost, however, is a system in which one class of human beings, the property owners (in Marxian terms, the bourgeoisie), exploits another class, the workers (the proletariat).

Capitalists don’t do this because they are greedy or cruel (though one could describe their behavior that way, as Marx almost invariably did). They do it because competition demands it. That’s how the system operates. Industrial capitalism is a Frankenstein’s monster that threatens its own creators, a system that we constructed for our own purposes and is now controlling us.

Marx was a humanist. He believed that we are beings who transform the world around us in order to produce objects for the benefit of all. That is our essence as a species. A system that transforms this activity into “labor” that is bought and used to aggrandize others is an obstacle to the full realization of our humanity. Capitalism is fated to self-destruct, just as all previous economic systems have self-destructed. The working-class revolution will lead to the final stage of history: communism, which, Marx wrote, “is the solution to the riddle of history and knows itself as this solution.”

(. . . ) To us, [specialization] seems an obviously efficient way to organize work, from automobile assembly lines to “knowledge production” in universities. But Marx considered the division of labor one of the evils of modern life. (So did Hegel.) It makes workers cogs in a machine and deprives them of any connection with the product of their labor. “Man’s own deed becomes an alien power opposed to him, which enslaves him instead of being controlled by him,” as Marx put it. In a communist society, he wrote, “nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes.” It will be possible “to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner . . . without ever becoming hunter, fisherman, herdsman, or critic.”

This often-quoted passage sounds fanciful, but it is at the heart of Marx’s thought. Human beings are naturally creative and sociable. A system that treats them as mechanical monads is inhumane. But the question is, How would a society without a division of labor produce sufficient goods to survive? Nobody will want to rear the cattle (or clean the barn); everyone will want to be the critic. (Believe me.) As Marx conceded, capitalism, for all its evils, had created abundance. He seems to have imagined that, somehow, all the features of the capitalist mode of production could be thrown aside and abundance would magically persist.

. . . Piketty says that for thirty years after 1945 a high rate of growth in the advanced economies was accompanied by a rise in incomes that benefitted all classes. Severe wealth inequality came to seem a thing of the past (which is why, in 1980, people could quite reasonably call Marx’s predictions mistaken). It now appears that those thirty years were an anomaly. The Depression and the two world wars had effectively wiped out the owners of wealth, but the thirty years after 1945 rebooted the economic order.

“The very high level of private wealth that has been attained since the nineteen-eighties and nineteen-nineties in the wealthy countries of Europe and in Japan,” Piketty says, “directly reflects the Marxian logic.” Marx was correct that there is nothing naturally egalitarian about modern economies left to themselves. As Piketty puts it, “There is no natural, spontaneous process to prevent destabilizing, inegalitarian forces from prevailing permanently.”

The tendency of the system to increase inequality was certainly true in Marx’s own century. By 1900, the richest one per cent of the population in Britain and France owned more than fifty per cent of those nations’ wealth; the top ten per cent owned ninety per cent. We are approaching those levels again today. In the United States, according to the Federal Reserve, the top ten per cent of the population owns seventy-two per cent of the wealth, and the bottom fifty per cent has two per cent. About ten per cent of the national income goes to the top two hundred and forty-seven thousand adults (one-thousandth of the adult population).


This is not a problem restricted to the rich nations. Global wealth is also unequally distributed, and by the same ratios or worse. Piketty does not predict a worldwide working-class revolution; he does remark that this level of inequality is “unsustainable.” He can foresee a time when most of the planet is owned by billionaires.

Marx was also not wrong about the tendency of workers’ wages to stagnate as income for the owners of capital rises. For the first sixty years of the nineteenth century—the period during which he began writing “Capital”—workers’ wages in Britain and France were stuck at close to subsistence levels. 

It can be difficult now to appreciate the degree of immiseration in the nineteenth-century industrial economy. In one period in 1862, the average workweek in a Manchester factory was eighty-four hours.

It appears that wage stagnation is back. After 1945, wages rose as national incomes rose, but the income of the lowest earners peaked in 1969, when the minimum hourly wage in the United States was $1.60. That is the equivalent of $10.49 today, when the national minimum wage is $7.25. And, as wages for service-sector jobs decline in earning power, the hours in the workweek increase, because people are forced to take more than one job.

The rhetoric of our time, the time of Bernie Sanders and Donald Trump, Brexit, and popular unrest in Europe, appears to have a Marxist cast. Sanders’s proposals to reduce inequality are straight out of Piketty: tax wealth and give more people access to knowledge. Trump, since he admires authoritarian personalities, might be pleased to know that Marx supported free trade on a “the worse things get” theory: by driving wages lower, free trade increases the impoverishment of the working class and leads more quickly to the revolution. In the terms used everywhere today, on the left, on the right, and in the press: the system is “rigged” to reward “the élites.” Marx called them “the ruling class.”

How useful is Marx for understanding this bubble of ferment in the advanced economies? I think we don’t yet know very well the precise demographic profile of Brexit voters and Trump and Sanders supporters—whether they are people who have been materially damaged by free trade and immigration or people who are hostile to the status quo for other reasons. That they are basically all the former may turn out to be a consoling belief of the better-off, who can more easily understand why people who have suffered economic damage would be angry than why people who have nothing to complain about financially might simply want to blow the whole thing up.

Still, in the political confusion, we may feel that we are seeing something that has not been seen in countries like Britain and the United States since before 1945: people debating what Marx would call the real nature of social relations. The political earth is being somewhat scorched. And, as politics continues to shed its traditional restraints, ugly as it is to watch, we may get a clearer understanding of what those relations are.

They may not be entirely economic. A main theme of Stedman Jones’s book is that Marx and Engels, in their obsession with class, ignored the power of other forms of identity. One of these is nationalism. For Marx and Engels, the working-class movement was international. But today we seem to be seeing, among the voters for Brexit, for example, a reversion to nationalism and, in the United States, what looks like a surge of nativism.

Stedman Jones also argues that Marx and Engels failed to appreciate the extent to which the goal of working-class agitation in nineteenth-century Britain was not ownership of the means of production but political inclusion, being allowed to vote. When that was achieved, unrest subsided.

Money matters to people, but status matters more, and precisely because status is something you cannot buy. Status is related to identity as much as it is to income. It is also, unfortunately, a zero-sum game. The struggles over status are socially divisive, and they can resemble class warfare.

Ryan, in his book on Marx, makes an observation that Marx himself might have made. “The modern republic,” he says, “attempts to impose political equality on an economic inequality it has no way of alleviating.” This is a relatively recent problem, because the rise of modern capitalism coincided with the rise of modern democracies, making wealth inequality inconsistent with political equality. But the unequal distribution of social resources is not new. One of the most striking points Piketty makes is that, as he puts it, “in all known societies in all times, the least wealthy half of the population has owned virtually nothing,” and the top ten per cent has owned “most of what there is to own.”

Inequality has been with us for a long time. Industrial capitalism didn’t reverse it in the nineteenth century, and finance capitalism is not reversing it in the twenty-first. The only thing that can reverse it is political action aimed at changing systems that seem to many people to be simply the way things have to be. We invented our social arrangements; we can alter them when they are working against us. There are no gods out there to strike us dead if we do.


http://www.newyorker.com/magazine/2016/10/10/karl-marx-yesterday-and-today?mbid=social_facebook_aud_dev_kwjunsub-karl-marx-yesterday-and-today&kwp_0=252969&kwp_4=961667&kwp_1=461749

*
DINOSAUR-KILLING ASTEROID WAS LIKELY A GIANT MUDBALL

The asteroid hits the Yucatán, an artist's interpretation

A study found the chemical identity of the asteroid that collided with what’s now the Yucatán Peninsula in Chicxulub, Mexico, 66 million years ago, triggering events that led to the demise of most dinosaurs.

Sixty-six million years ago, the story of life on Earth took a dramatic turn when an asteroid collided with what’s now the Yucatán Peninsula in Chicxulub, Mexico. The aftereffects of the collision resulted in the extinction of an estimated 75% of animal species, including most dinosaurs except for birds. But practically nothing of the asteroid itself remains.

In a new study published in the journal Science, researchers pieced together the chemical identity of the asteroid that fueled the planet’s fifth mass extinction event. The dino killer was a rare clay-rich mudball containing materials from the dawn of the solar system, the findings suggest.

While the Chicxulub asteroid landed tens of millions of years ago, learning about this ancient space rock is important because it’s “part of a bigger picture of understanding the dynamic nature of our Solar System,” said study coauthor Dr. Steven Goderis, a research professor of chemistry at Vrije Universiteit Brussel.

Laying out a theory of non-avian dinosaur extinction

Scientists hypothesized in 1980 that a collision with a giant space rock led to the death of the dinosaurs. Back then, the researchers didn’t find the asteroid itself; instead, they found a thin layer of the metal iridium in rocks around the world from 66 million years ago. Iridium is rare within the Earth’s crust but abundant in some asteroids and meteorites.

Some members of the wider scientific community were skeptical of the hypothesis. However, in 1991, scientists found that the Chicxulub crater was the right age to have been formed by a massive asteroid strike coinciding with the demise of the dinosaurs. Over the years, researchers have gathered more and more evidence that the asteroid strike was indeed the impetus for the cataclysmic extinction event.

The asteroid was huge — likely between 6 and 9 miles (9.7 and 14.5 kilometers) in diameter. But its colossal size is why it largely disappeared. The rock, roughly the size of Mount Everest, hurtled toward Earth, traveling 15.5 miles per second (25 kilometers per second), according to NASA.

The Cretaceous-Paleogene boundary layer is seen in Stevns Klint, Denmark. The study authors investigated the red clay layer with the highest ruthenium concentrations, which indicated the arrival of vaporized carbonaceous asteroid material dispersed from the Chicxulub impact area.

“Basically, all this kinetic energy is converted into heat,” Goderis said. “When the thing hits the target, it will more than explode; it will be vaporized.” The impact created a cloud of dust composed of the asteroid itself and the rock it landed on. The dust spread worldwide, blotting out sunlight and lowering temperatures for years, resulting in mass extinction.

As for the asteroid, “there’s nothing left except for this chemical trace that is deposited all around the globe,” Goderis said. “This forms this tiny clay layer you can recognize everywhere in the world, and it’s basically the same instant in time, 66 million years ago.”

Dinosaur-killer asteroid chemical makeup revealed

Asteroids (and the smaller meteoroids that break off of them) come in three major varieties, each with their own chemical and mineral makeup: metallic, stony and chondritic. In the new study, Goderis and his colleagues, including the study’s lead author, Dr. Mario Fischer-Gödde of the University of Cologne in Germany, examined the chemical composition of the thin clay layer to unlock the asteroid’s secrets.

[A chondrite is a stony meteorite that has not been modified by either melting or differentiation of the parent body. Chondrites formed when various types of dust and small grains in the early Solar System accreted into primitive asteroids. Wikipedia]

The researchers sampled 66 million-year-old rocks from Denmark, Italy and Spain and isolated the parts containing the metal ruthenium. (Like iridium, ruthenium is more abundant in space rocks than in Earth’s crust.) The team also analyzed ruthenium from other asteroid impact sites and meteorites. The chemical makeup of the ruthenium from 66 million years ago matched the chemical makeup of the ruthenium present in a certain kind of chondritic meteorite, the scientists found.

“We noticed that there’s a perfect overlap with carbonaceous chondrite signatures,” Goderis said. Therefore, the asteroid that killed the dinosaurs was probably a carbonaceous chondrite, an ancient space rock that often contains water, clay and organic (carbon-bearing) compounds.

While carbonaceous chondrites make up the majority of rocks in space, only about 5% of the meteorites that fall to Earth belong to this category. “There is quite some diversity in carbonaceous chondrites, and some of them can smell,” Goderis said. But in the inferno, when the Chicxulub impactor landed, Goderis said, “you probably wouldn’t have had the time for a good sniff.”

What the findings mean for the future

Impacts of the scale of Chicxulub happen only every 100 million to 500 million years. But because there is still an outside chance of Earth crossing paths with another asteroid or giant meteorite, Goderis said that it’s good to know “the physical and chemical properties of these objects, to think about how to protect ourselves” from a collision with a large space rock.

carbonaceous asteroid 

Carbonaceous chondrites often contain water, clay and carbon-bearing compounds and make up the majority of rocks in space, but only about 5% of the meteorites that fall to Earth belong to this category.

Goderis cited the 2022 DART mission, or the Double Asteroid Redirection Test, in which NASA sent a spacecraft to intentionally knock an asteroid off its course. Knowing how different types of asteroids interact with the physical forces around them would be critical for an effective planetary defense operation.

“The carbonaceous chondrite will react completely differently from an ordinary chondrite — it’s much more porous, it’s much more light and it will absorb much more of an impact if you send an object towards it. So, we need to learn about this to have a corresponding response,” Goderis said.

https://www.cnn.com/2024/08/16/science/chicxulub-asteroid-impact-dinosaur-extinction/index.html

*
MODERN GRAPES EXIST BECAUSE THE DINOSAURS DIED OUT

A fossil image (left) and CT scan (right) show Lithouva, the earliest fossil grape from the Western Hemisphere found in Colombia, dated to 60 million years ago.

Grapes have been intertwined with the story of humanity for millennia, providing the basis for wines produced by our ancestors thousands of years ago — but that may not have been the case if dinosaurs hadn’t disappeared from the planet, according to new research.

When an asteroid struck Earth 66 million years ago, it wiped out the massive, lumbering animals and set the stage for other creatures and plants to thrive in the aftermath.

Now, the discovery of fossilized grape seeds in Colombia, Panama and Peru that range from 19 million to 60 million years old is shedding light on how these humble fruits gained a foothold in Earth’s dense forests and eventually established a global presence. One of the newly discovered seeds is the oldest example of plants from the grape family to be found in the Western Hemisphere, according to a study on the specimens published Monday in the journal Nature Plants.

“These are the oldest grapes ever found in this part of the world, and they’re a few million years younger than the oldest ones ever found on the other side of the planet,” said lead study author Fabiany Herrera, an assistant curator of paleobotany at the Field Museum in Chicago’s Negaunee Integrative Research Center, in a statement. “This discovery is important because it shows that after the extinction of the dinosaurs, grapes really started to spread across the world.”

Much like the soft tissues of animals, actual fruits don’t preserve well in the fossil record. But seeds, which are more likely to fossilize, can help scientists understand what plants were present at different stages in Earth’s history as they reconstruct the tree of life and establish origin stories.

The oldest grape seed fossils found so far were unearthed in India and date back 66 million years, to about the time of the dinosaurs’ demise.

“We always think about the animals, the dinosaurs, because they were the biggest things to be affected, but the extinction event had a huge impact on plants too,” Herrera said. “The forest reset itself in a way that changed the composition of the plants.”

A difficult search

Herrera’s PhD advisor, Steven Manchester, who is also a senior author of the new study, had published a paper about the grape fossils found in India. It inspired Herrera to wonder whether grape seed fossils might also be found in places like South America, where none had ever turned up.

“Grapes have an extensive fossil record that starts about 50 million years ago, so I wanted to discover one in South America, but it was like looking for a needle in a haystack,” Herrera said. “I’ve been looking for the oldest grape in the Western Hemisphere since I was an undergrad student.”

Herrera and study coauthor Mónica Carvalho, assistant curator at the University of Michigan’s Museum of Paleontology, were doing fieldwork in the Colombian Andes in 2022 when Carvalho spotted a fossil. It turned out to be a 60 million-year-old grape seed fossil trapped in rock, among the oldest in the world and the first to be found in South America.

“She looked at me and said, ‘Fabiany, a grape!’ And then I looked at it, I was like, ‘Oh my God.’ It was so exciting,” Herrera said.

Although the fossil was tiny, its shape, size and other features helped the duo identify it as a grape seed. And once they were back in the lab, the researchers carried out CT scans to study its internal structure and confirm their findings.

They named the newly discovered species Lithouva susmanii, or “Susman’s stone grape,” in honor of Arthur T. Susman, who has been a supporter of South American paleobotany at the Field Museum.

“This new species is also important because it supports a South American origin of the group in which the common grape vine Vitis evolved,” said study coauthor Gregory Stull of the National Museum of Natural History.

The rocks had been deposited in ancient lakes, rivers and coastal settings, Herrera said.

“To look for such tiny seeds, I split every piece of rock available in the field,” he said, adding that the difficult search “is the fun part of my job as a paleobotanist.”

Encouraged by their find, the team conducted more fieldwork across South and Central America and found nine new species of fossil grape seeds trapped within sedimentary rocks. And by tracing the lineage of the ancient seeds to their modern grape counterparts, the team realized something had enabled the plants to thrive and spread.

How ancient forests changed

When the dinosaurs went extinct, their absence changed the entire structure of forests, the team hypothesized.

“Large animals, such as dinosaurs, are known to alter their surrounding ecosystems. We think that if there were large dinosaurs roaming through the forest, they were likely knocking down trees, effectively maintaining forests more open than they are today,” Carvalho said.

After the dinosaurs disappeared, tropical forests became overgrown, and layers of trees created an understory and canopy. These dense forests made it difficult for plants to receive light, forcing them to compete with one another for resources. Climbing plants had an advantage, which they used to reach the canopy, the researchers said.

“In the fossil record, we start to see more plants that use vines to climb up trees, like grapes, around this time,” Herrera said.

Meanwhile, as a diverse set of birds and mammals began to populate Earth after the disappearance of the dinosaurs, they likely also helped spread grape seeds.

The resiliency of plants

Studying the seeds tells a story about how grapes spread, adapted and went extinct over millions of years, showcasing their resilience: they survived in other parts of the world even as they disappeared from Central and South America.

Several fossils are related to modern grapes, and others are distant relatives of grapes native to the Western Hemisphere. For example, some of the fossil species can be traced to grapes that are only found in Asia and Africa today, but it’s unclear why the grapes went extinct in Central and South America, Herrera said.

“The new fossil species tell us a tumultuous and complex history,” he said. “We usually think of the diverse and modern rainforests as a ‘museum’ model, where all species accumulate over time. However, our study shows that extinction has been a major force in the evolution of the rainforests. Now we need to identify what caused those extinctions during the last 60 million years.”

Herrera wants to search for other examples of fossil plants, like sunflowers, orchids and pineapples, to see if they existed in ancient tropical forests.

Studying the origins and adaptations of plants in the past is helping scientists understand how they may fare during the climate crisis.

“I only hope that most living plant seeds adapt quickly to the current climate crisis. The fossil record of seeds is telling us that plants are resilient but can also completely disappear from an entire continent,” Herrera said.

https://www.cnn.com/2024/07/02/science/fossil-grape-seeds-dinosaurs-scn

*
FERMENTATION: JAPAN’S SOLUTION TO FOOD WASTE

Food scraps are shredded before being sterilized and fed into huge fermentation tanks

Even as a boy, Koichi Takahashi knew he wanted to save the planet. He dreamed of building a future society that sustainably coexisted with nature through a loop of recycling and regeneration. Takahashi knew he couldn't remake the entire world himself, but as he got older, he realized that he could focus his energy on reforming one small corner of it.

Pig farms were the unlikely target that Takahashi settled on for his life's work. Specifically, he founded a company, the Japan Food Ecology Center, that creates a win-win solution by turning leftover human food into high-quality pig feed. "I wanted to build a model project for the circular economy," he says. "Instead of relying on imports for feed, we can make effective use of local food waste."

Japan’s food waste comes with steep environmental and economic costs. Compared with many countries, consumers in Japan pay higher prices for food because so much of it is imported. They also pay taxes to cover the majority of the 800bn yen (£4.2bn/$5.4bn) the country spends each year on waste incineration. Food makes up about 40% of the rubbish that Japan incinerates, and incineration produces significant air pollution and greenhouse gas emissions.

As the world's fifth-largest emitter of greenhouse gases, Japan has set goals of cutting emissions by 46% by 2030 and becoming fully carbon neutral by 2050. Tackling food waste will have to be a part of those efforts, Takahashi says.

Ancient science, new solutions

Takahashi's vision for creating a sustainable food loop was sparked in 1998 when the Japanese government launched a project promoting ways to convert otherwise wasted resources into livestock feed. The price for imported grain was rising at the time, and there was "a sense of crisis in Japan", Takahashi recalls. People felt the livestock industry would collapse if a solution was not found.

The food is sorted by hand when it arrives at the processing facility – and it receives around 40 tons per day

Takahashi, who was then a practicing veterinarian, sensed an opportunity to put his farm animal expertise to use – and to simultaneously fulfill his desire to help the planet. As he learned more, though, he found there were no quick fixes: simply sending raw food waste to farms was impractical. The content of food waste varies widely; food’s high water content promotes spoilage; and drying the food out to get around the water problem would require nearly as much energy as incinerating it.

To come up with a solution, Takahashi turned to a natural art that Japan has been perfecting for millennia: fermentation. "I realized that we already had the technology to create a product that could last long," he says.

Japan's long history of fermentation

Japan has a special relationship with fermentation. There is some archaeological evidence to suggest people began fermenting berries there during the early Jomon period, around 5,000 years ago. Today, fermented foods, beverages and seasonings enjoy a central place in Japanese food culture. Japan even has its own accredited fermentation sommeliers.

And the country is also a leader in fermentation science – a field of study that spans innovations ranging from biofuel development to antibiotic discovery.

Partly, Japan's ability to excel at fermentation science has stemmed from its researchers' distinct approach to microbes, says Victoria Lee, a historian at Ohio University. Japanese microbiologists "went in a whole different direction" compared to their colleagues in North American and Europe, where the focus was on pathogens and germs, Lee says. Rather than view microbes as enemies, in Japan, a tradition emerged of "seeing them as powerful partners”.

Fermentation has been a subject of scientific study in Japan since the 19th Century, when the government began investing heavily in building new industries based on ancient practices like sake brewing and soy sauce-making. By the 20th Century, the country’s focus on fermentation led to a distinctly Japanese approach in which microbes were seen “as living workers”, says Lee, author of The Arts of the Microbial World: Fermentation Science in Twentieth-Century Japan. “To scientists, more than a food process, fermentation became a way to transform society and solve various environmental and resource problems.”

Like the fermentation researchers who came before him, Takahashi was looking for a way, as Lee puts it, "to take what might otherwise simply be waste and transform it into something useful, in the process creating new industries.”

Working with researchers from government, universities and national institutes, Takahashi used his veterinary knowledge to craft a lactic acid-fermented, liquified feed product for pigs. The team had to engage in lengthy troubleshooting. "When we fed the early test feed to the pigs, they grew slower and their meat was too fatty," Takahashi recalls. Over the course of "a series of failures", they eventually got the nutritional content right. They also found a way to extend the shelf life of the "ecofeed", as they called their product, by lowering the pH to 4.0, a level at which most pathogenic bacteria cannot survive.

The resulting product – pale and watery – tastes like sour yogurt, Takahashi says, and can sit on a shelf, unrefrigerated, for up to 10 days. The product is also a boon for the climate: compared to the equivalent amount of feed imported from abroad, Takahashi says, the ecofeed manufacturing process generates 70% less greenhouse gas emissions.

As the science fell into place, Takahashi began lobbying the government and various other stakeholders to allow him to move forward with bringing the center's recycling loop system to market. It took years, but "we have now built a relationship of trust" with various government bodies that oversee waste and environmental issues, he says. Government officials, in fact, "frequently come to us for advice".

Profitable and sustainable

Most waste treatment plants smell like, well, waste. Visitors to the Japan Food Ecology Center are often surprised by the lack of pungency. If anything, the air is reminiscent of a smoothie shop.

The center is located in Sagamihara, a city in Kanagawa prefecture that's about two hours by train from Tokyo. Sagamihara itself is unremarkable, but every year some 1,500 visitors from around Japan – from elementary students to retirees – visit the center to learn firsthand about food recycling.

The facility processes about 40 tons of food waste per day, which arrives by truck from several hundred supermarkets, department stores and mass manufacturers. Some of these businesses are motivated by a desire to green their operations, but all are incentivized by the lower fees that the center charges to accept their waste, compared to incinerators. The foods they send vary by day, but whey, a byproduct of butter- and cheese-making, is an ever-present staple, as are scraps from mass production of things like gyoza and sushi. Any manufacturing process results in an unavoidable 3% to 5% product loss, Takahashi says, so a factory that makes 50 tons of food per day will generate at least 1.5 tons of waste.
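The product-loss arithmetic Takahashi cites can be checked directly. A minimal sketch (the 3% to 5% range and the 50-ton example come from the article; the function name is mine):

```python
def daily_waste_tons(production_tons: float, loss_rate: float) -> float:
    """Tons of waste implied by a given fractional product-loss rate."""
    return production_tons * loss_rate

# A factory making 50 tons of food per day, at the unavoidable 3% floor:
print(daily_waste_tons(50, 0.03))  # 1.5 tons of waste per day
# ...and at the 5% upper end of the range:
print(daily_waste_tons(50, 0.05))  # 2.5 tons of waste per day
```

Scaled up across Japan's food manufacturers, that floor is what makes a steady 40-ton daily intake at a single facility plausible.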

Large amounts of food waste are also produced by manufacturers that are contracted to supply Japan's 55,657 convenience stores – many of which are open 24 hours, 365 days a year – with a constant stream of products. Perishable food for convenience stores "must be delivered as soon as it's ordered, so factories that make boxed lunches and rice balls are required to produce extra, even if they lose some", Takahashi says. "Because of the importance placed on preventing lost sales opportunities, large amounts of food waste have become routine.”

Fermenting fish bones

Makoto Kanauchi, a fermentation scientist at Miyagi University in Sendai, uses science to refine and expand upon age-old fermentation techniques. “Cultivating good microbes to drive out bad ones is the origin of fermented food,” he says. Kanauchi is constantly experimenting with new fermented food types, including a soy sauce made from puffer fish bones. He has also made a silky, spreadable soymilk-based cheese and a fermented, fish sauce-like seasoning made with plankton.

On a recent morning, an assortment of 140- and 500-liter containers filled with gyoza skins, rice, cabbage, pineapple rinds, bananas, noodles and sandwich buns awaited processing. Batches of ecofeed are calibrated based on caloric and nutritional content, so various materials are mixed with intent rather than at random. To prevent contamination, all of the food is passed through a metal detector and inspected by hand by workers at a conveyor belt. Chopping and crushing comes next, resulting in a liquid product (on average, 80% of food is water), followed by sterilisation to reduce pathogenic bacteria. Finally, the liquid is fed into one of several huge tanks where fermentation occurs, driven by lactic acid bacteria.

The resulting ecofeed costs farmers about half the price of conventional feed, and farms can also tailor their personal formula according to their needs – requesting more lysine or other amino acids, for example, to increase fat or muscle mass in their pigs.

According to Dan Kawakami, a farmer at Azumino Eco Farm in Nagano that has been supplied by the center since 2006, the quality of pork from animals raised on ecofeed is better. Using a sustainable feed source "also differentiates our product from competitors", he says, "and it's beneficial in terms of cost”.

Eco-pork from farms like Azumino is carried by an ever-growing list of dozens of restaurants, supermarkets and department stores around the country, and generates more than 350m yen (£1.8m/$2.3m) in annual sales, says Takahashi. "It's becoming popular as a meat that's both delicious and sustainable," he says.

Last year, Takahashi also got into biogas – a form of renewable energy produced from methane fermentation. The biogas operation expands the types of food that the plant can accept, since the pigs cannot have anything with too high a fat, salt or oil content. In towering 1,500-ton vats – the bubbling contents of which resemble the Bog of Eternal Stench from the movie Labyrinth – scraps are mixed with water and heated to produce fermentation. An electric conversion generator turns the resulting methane into electricity, which Takahashi sells back to the grid.

The plant’s generator currently produces 528kW of electrical output – over the course of a day, about as much electricity as 1,000 households use. The solid byproduct of the process – a powdery black substance that smells like a savory seasoning – is dried using the generator’s surplus heat and is sold as a nutrient-rich agricultural fertilizer. As Takahashi notes, "nothing is wasted”.
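Read as a continuous output, the 528kW figure squares with the 1,000-household comparison. A rough sanity check (the roughly 12 kWh-per-day household consumption figure is my assumption, not from the article):

```python
power_kw = 528                      # generator's continuous electrical output
energy_kwh_per_day = power_kw * 24  # energy delivered over one day
households = 1000

per_household = energy_kwh_per_day / households
print(energy_kwh_per_day)  # 12672 kWh generated per day
print(per_household)       # about 12.7 kWh per household per day
```

That per-household figure is in line with typical Japanese residential electricity use, which supports the article's comparison.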

Importantly, the center is making a profit off the 35,000 tons of food waste it processes each year. "We're subverting the conventional notion that environmental efforts don't pay, or that recycling is just too expensive," Takahashi says. Because his goal is "to change society", he adds, he did not take out any patents on the technology, allowing others to replicate his method. His center has served as a blueprint for other facilities in Japan that use his fermentation method. Together, they produce more than one million tons of ecofeed a year and prove that "an ecological, sustainable effort can be profitable", Takahashi says.

Takahashi also regularly hosts students, scholars and industry executives from around the world who come to learn about fermentation and food waste. Coming full circle, he likes to end the tours he takes them on with the pork itself, so people can judge its quality for themselves. It's served tonkatsu-style – deep fried, and with sides of rice and salad that are produced by the same farm. As for the pork, it's shockingly tender, with just the right ratio of fat to juicy muscle. The same meal is served three times a week to the center's staff, Takahashi adds, "to motivate them by tasting how good the pigs fed by their own products are”.

https://www.bbc.com/future/article/20240816-the-japanese-farms-recycling-waste-food

*
BENEFITS OF FERMENTED FOODS

fermented foods

Pickles and sauerkraut might not be the first examples that jump to mind when you think of health foods. But a growing body of research shows that a diet that includes a regular intake of fermented foods can bring benefits.

Fermented foods are preserved using an age-old process that not only boosts the food's shelf life and nutritional value but can give your body a dose of healthful probiotics — live microorganisms crucial to good digestion.

The digestive tract is teeming with some 100 trillion bacteria and other microorganisms, says Dr. David S. Ludwig, a professor of nutrition at the Harvard T.H. Chan School of Public Health. Research today is revealing the importance of a diverse and healthy intestinal microbiome (the microbial community in the gut) because it plays a role in fine-tuning the immune system and warding off damaging inflammation inside the body, which may lead to conditions ranging from obesity and diabetes to neurodegenerative diseases. "It's a very exciting, dynamic area of research," says Dr. Ludwig.

Future research will likely yield more clues about how the microbiome contributes to overall health. This may eventually enable scientists to pinpoint microorganisms that could target specific diseases or help people lose weight. Until that day comes, fermented foods are useful because they help provide a spectrum of probiotics to foster a vigorous microbiome in your digestive tract that can keep bad actors at bay, says Dr. Ludwig.

A time-tested preservation method

While research into the health benefits of fermented foods is relatively new, the process of fermentation has long been used to help foods last longer and keep them from spoiling. "Most societies throughout the world and throughout time have included fermented foods as part of their diet," says Dr. Ludwig. In colder, northern climates, fermenting foods allowed people to have vegetables throughout the long winter months when they otherwise wouldn't be available.

One of the earliest forms of food preservation, fermentation can extend the usability of a food for months. "For example, if you put cabbage on the shelf for a few weeks, it'll spoil," says Dr. Ludwig. "But if you ferment it into sauerkraut, it will last for months." It's the same concept with fermented dairy foods and proteins. "Think about how long milk lasts compared with cheese," he says.

In addition to helping food last longer, fermentation also enhances the taste of foods, giving them added complexity. Plus, the fermentation process works other forms of magic on foods, changing them and adding nutrients. For example, by eating fermented vegetables, vegetarians can get vitamin B12, which otherwise isn't present in plant foods, says Dr. Ludwig.

A changing microbiome

But one of the biggest benefits of fermented foods comes from probiotics. Recent research suggests that the type of gut bacteria in the bodies of Americans is changing. One possible reason is that the microbiomes in our bodies are not regularly replenished the way they were in past generations. That's because of changes in the American diet — particularly the rise in processed foods — and because of better hygiene, which cuts down on the number of microbes people are exposed to naturally through dirt and other contaminants, according to Dr. Ludwig. In addition, antibiotics are used widely and can kill off beneficial organisms along with the bad ones.

Changes to the population of gut microbes may create an imbalance between beneficial and harmful gut bacteria, leading to health problems. When the digestive tract has an unhealthy mix of organisms, it can actually lead to a weakening of the walls of the intestines, which start to leak their contents into the bloodstream — a condition referred to, not surprisingly, as leaky gut syndrome, according to Dr. Ludwig. Chronic exposure to these substances leaking out from the intestines has been linked to a host of health problems, ranging from asthma and eczema to schizophrenia and Alzheimer's disease, he says.


Fermented foods can bolster the gut microbiome, creating a healthier mix of microbes and strengthening the walls of the intestines to keep them from leaking.

Growing a healthy microbiome

If people eat probiotics (like those found in fermented foods) from early childhood, that can help train the immune system to tolerate — and cooperate with — a diverse, beneficial microbiome, says Dr. Ludwig. After the first few months and years of life, a person's microbe population is relatively stable, but adults who eat fermented foods regularly can still reap benefits.

Adding fermented foods to the diet is relatively easy, says Dr. Ludwig. You can find naturally fermented foods at natural-food stores and many supermarkets. And fermentation is also easy and safe to do at home by following some simple instructions.

But keep in mind that not all fermented foods are created equal. For instance, although cheese is fermented, it's not known to bring the same health benefits as yogurt. The difference is live microbes, says Dr. Ludwig. Yogurt has them; cheese typically doesn't.

Live cultures are found not only in yogurt and a yogurt-like drink called kefir, but also in Korean pickled vegetables (called kimchi), sauerkraut, and some pickles. The jars of pickles you can buy off the shelf at the supermarket are sometimes pickled using vinegar and not the natural fermentation process using live organisms, which means they don't contain probiotics. 

To ensure the fermented foods you choose do contain probiotics, look for the words "naturally fermented" on the label, and when you open the jar look for telltale bubbles in the liquid, which signal that live organisms are inside the jar, says Dr. Ludwig.

Yogurt might be the easiest fermented food for Americans to add to their diets, because they're already familiar with it. "But I encourage people to extend their range a little bit," says Dr. Ludwig. In addition to eating raw and cooked vegetables, add pickled vegetables as a side with dinner or as a topping for a salad. Or toss a little sauerkraut into a sandwich or wrap. Another option is fermented soybeans, which are found in natto, tempeh, and miso.

If you're really adventurous, you can also try fermented fish, which are commonly eaten in some Northern and Asian cultures, but may be something of an acquired taste, says Dr. Ludwig.

https://www.health.harvard.edu/staying-healthy/fermented-foods-can-add-depth-to-your-diet

*
AUTISM AND MOTHER’S AUTOIMMUNE DISEASES

About one in ten women who have a child with autism have immune molecules in their bloodstream that react with proteins in the brain, according to a study published in Molecular Psychiatry.

Several research groups have found these immune molecules, called antibodies, in mothers of children with autism, and have shown that prenatal exposure to the antibodies alters social behavior in mice and monkeys.

The new study, which includes more than 2,700 mothers of children with autism, is the largest survey yet on the prevalence of these anti-brain antibodies.

“It’s a very large sample size,” says study leader Betty Diamond, head of the Center for Autoimmune and Musculoskeletal Disorders at The Feinstein Institute for Medical Research in Long Island, New York. The scale gives a clearer impression of the prevalence of these antibodies, she says.

Antibodies help the body’s immune system recognize and fight off disease-causing microorganisms such as bacteria and viruses, but sometimes the body mistakenly produces antibodies to its own proteins. In some people, this results in autoimmune diseases such as rheumatoid arthritis and lupus, in which the body attacks its own tissues.

Researchers say anti-brain antibodies do not harm the brains of the women who produce them because of the blood-brain barrier, a filter that prevents most molecules from entering the brain. But the immature blood-brain barrier of a developing fetus may let them through, allowing them to damage the brain and perhaps cause autism.

Diamond’s team also found that women who have autism-linked antibodies are more likely to have other markers of autoimmunity compared with those who don’t carry these antibodies. Studies have shown that women with an autoimmune disease also have an increased risk of having a child with autism.

“This ties together the epidemiological finding that women who have autoimmune disease are more likely to have kids with autism, with the idea that there are actually antibodies against fetal brain in their serum,” says Paul Patterson, professor of biology at the California Institute of Technology, who was not involved in the work.

https://www.thetransmitter.org/spectrum/large-study-links-autism-to-autoimmune-disease-in-mothers/

*
MOTHER’S LUPUS AND CHILD’S AUTISM LINK

At the turn of the 21st century, the prevalence of autism spectrum disorder among American children was roughly 1 in 150. That’s according to data collected by the Autism and Developmental Disabilities Monitoring Network of the U.S. Centers for Disease Control and Prevention. A decade later, in 2010, the prevalence had risen to 1 in 68 children. By 2020, it had climbed again—to 1 in 36 children. 

“The prevalence of Autism Spectrum Disorder (ASD) has increased dramatically in recent decades, supporting the claim of an autism epidemic,” wrote the authors of a 2020 study in the journal Brain Sciences.

The precise cause and extent of that epidemic are contested. Some researchers have observed that the diagnostic criteria for ASD have evolved during that time—stretching and broadening to include a wider array of conditions. And so part of the rise in diagnoses, they argue, is likely attributable to those broadening definitions and a deeper understanding of autism.

Still, the increasing prevalence of ASD diagnoses has spurred greater scientific interest in the underlying causes of the disorder. That work has revealed a possible connection between ASD and autoimmune conditions, including systemic lupus erythematosus (SLE).

“For quite a while, there’s been a link between maternal autoimmune diseases and risk for having a child with autism,” says Paul Ashwood, a professor of medical microbiology and immunology at the University of California, Davis and the MIND Institute, which focuses on autism and other neurodevelopmental conditions. He mentions work based on nationwide data collected over a period of many years from mothers and their offspring in Denmark. That research found that prenatal exposure to a number of different maternal autoimmune diseases, including both lupus and rheumatoid arthritis, was associated with an increased risk for an eventual autism diagnosis.

Since then, more research has firmed up the apparent association, and also found evidence of a broader connection between a pregnant woman’s immune system and the risk of an offspring with autism. “What we’ve been looking at a lot more recently is how anything that generates a maternal immune response could be linked to autism risk,” Ashwood says.

Antibodies and the developing brain

In response to a threat, such as a virus or other pathogen, the immune system produces protein antibodies that are intended to neutralize or eliminate the danger. But among people with autoimmune conditions such as systemic lupus erythematosus, the immune system produces antibodies that attack the body’s own healthy proteins or tissues. These are called autoantibodies.

In a 2015 study in the journal Arthritis and Rheumatology, a group of Canadian researchers found that children born to women with systemic lupus erythematosus were nearly twice as likely to develop autism as children of women who did not have SLE. Furthermore, the children of mothers with SLE tended to be diagnosed with autism at a younger age than those of mothers without SLE.

“In-utero exposures to maternal antibodies and cytokines [proteins that regulate the growth of immune system cells] are important risk factors for ASD,” the authors of that study wrote. Women with SLE “display high levels of autoantibodies and cytokines,” which have been shown in animal models to alter fetal brain development and induce behavioral anomalies in offspring, they added.

“Maternal antibodies, including autoantibodies, start crossing the placenta barrier around day 100 of gestation, and we know that this can affect the developing fetus,” says Judy Van de Water, professor of medicine and associate director of biological sciences at the University of California, Davis and the MIND Institute. “One of the things we’re looking at is how these autoantibodies or other aspects of the mother’s immune response could affect neurodevelopment.”

Some research has already found that maternal autoantibodies related to SLE may lead to the development of heart conditions and also blood and liver abnormalities in a developing fetus. Van de Water and her colleagues are examining whether and how other autoantibodies may similarly affect fetal brain development. “Several of the proteins that these autoantibodies target are really highly expressed in the developing brain, and not the mature brain,” she says. This may create unique exposure risks for a developing fetus.



The immune-autism link



Apart from lupus, several other maternal autoimmune disorders, including rheumatoid arthritis, have been tied to an increased risk for having children with autism. The same is true of immune-related conditions such as asthma and allergies. Van de Water and other researchers are now taking a broad look at how a pregnant woman’s immune system activity may affect the fetal brain. 

“Anything that impacts maternal immune homeostasis or the balance of the immune response in the mother could impact neurodevelopment in the child,” she says. “So we’re looking at different immune system responses—what the response is, how intense the response is, the makeup of inflammatory markers—and their relationships to autism.”



An autoimmune condition like lupus is one source of a heightened maternal immune response, but Van de Water says that, under the right conditions, just about anything that triggers an immune reaction could potentially affect neurodevelopment in ways that contribute to autism. 

“We’re looking at a lot of different maternal immune activations or perturbations—whether from an existing condition or illness, or something that happens during pregnancy,” she says. 


In particular, experts highlight the role that inflammatory cytokines may play in autism risk. 

“The way to think about cytokines in the fetal environment is that they can potentially act in a dose response manner—just as too much is bad, then too little is also bad, but there is this goldilocks level that you need to have for appropriate growth,” Ashwood says. “If there’s some kind of immune condition or inflammatory response that leads to the constant production and release of these cytokines, those could cross the placental barrier and affect fetal development.”



In the brain, for example, the presence of cytokines “could affect neuron growth, neuron proliferation, the connection of neurons to other neurons, synapse formation, neuronal migration, and all sorts of processes that are necessary to build an interconnected network as the brain grows,” he explains. 

“Having those systems slightly off-kilter can potentially affect the trajectory of neurodevelopment.”

Lupus and other autoimmune disorders are one potential source of cytokine imbalance. But Van de Water says that obesity is another inflammation-related condition—and a far more common one than lupus—that could produce the sort of immune activity that contributes to autism.



“Obesity has a major inflammatory component attached to it,” she says. “We just published a paper looking at this, and it turns out that the biggest maternal risk factor for autism was not any autoimmune disease, but asthma and allergies coupled with obesity. You put these two together with obesity and the risk was significantly greater.”

Another potential connection between a mother’s immune activity and her offspring’s autism risk is the microbiome—the community of bacteria that inhabit the gut. Some research has found that the metabolites produced by a mother’s gut bacteria can affect the neurodevelopment of the fetus. 

Furthermore, there’s evidence that infections, metabolic stress (such as obesity), and other immune-related events can lead to maternal microbiome imbalances that, potentially, could raise her offspring’s risk for autism. 

On top of this, there’s evidence that people with autism share some distinct microbiome characteristics, and that gut-related symptoms—diarrhea, constipation, and abdominal pain in particular—are common comorbidities among people with autism. 

“There’s a lot of interest right now in the microbiome—how it’s formed, the way it nourishes the body, and how it shapes the activity of the immune system,” Ashwood says. There’s also been much recent interest in the so-called “gut-brain connection,” and science has established that the gut’s microbiota influence brain connectivity and functioning.



It’s not certain yet, but it’s possible that maternal autoimmune disorders and other immune-related perturbations could directly or indirectly affect the microbiome of the fetus in ways that contribute to the development of autism.



A multifaceted disease



While there are several plausible mechanisms that could tie autoimmune disorders to autism, experts say this is likely only one small part of the autism equation. “It’s worth remembering that autoimmunity in the general populace is pretty low,” Ashwood says. Also, research on the link between maternal lupus and autism has found that while the risks are elevated, women with the autoimmune condition were still at low overall risk for having a child with autism.



Apart from maternal immune conditions, there’s growing evidence of the role that genetics play in a person’s risk for autism. “More than 100 genes are known to confer risk, and 1,000 or more may ultimately be identified,” wrote David Amaral, a distinguished professor at the University of California, Davis and the MIND Institute, in a 2017 paper on the causes of autism. 

He goes on to explain that, most likely, a mix of genetic and environmental factors contribute to the development of autism. “It seems clear at this point,” he writes, “that when all is said and done, we will find that autism has multiple causes that occur in diverse combinations.”

Van de Water likewise emphasizes this point. Autism spectrum disorder is a diverse and multifaceted condition, and its underlying causes are likely equally complex. 

Lupus and other immune-related conditions may be a piece of the puzzle, but they’re just one of many. “Anyone who tells you they know the cause of autism doesn’t know autism very well. There are many layers to it,” Van de Water says. “There seems to be a relationship between the mother’s immune activity and autism, but we don’t have all the answers yet.”

https://time.com/7003909/link-between-lupus-autism-children/?utm_source=pocket-newtab-en-us

ending on beauty:

How can I keep my soul in me, so that
it doesn't touch your soul? How can I raise
it high enough, past you, to other things?
I would like to shelter it, among remote
lost objects, in some dark and silent place
that doesn't resonate when your depths resound.
Yet everything that touches us, me and you,
takes us together like a violin's bow,
which draws *one* voice out of two separate strings.
Upon what instrument are we two spanned?
And what musician holds us in his hand?
Oh sweetest song.
 
~ Rainer Maria Rilke 