Saturday, July 27, 2024

WHY RUSSIA RESENTS THE WEST; LOOK YOUNGER, LIVE LONGER; THE GILDED AGE: EXTRAORDINARY WEALTH AND UNIMAGINABLE POVERTY; HOW ‘LATE’ CAME TO MEAN ‘DEAD’; AI WEIWEI’S FOREVER BICYCLES; STALIN’S GREAT PURGE; WHY MORE AND MORE WOMEN STAY SINGLE; CHOMSKY’S HYPOCRISY; WHY YOU SHOULD NOT EAT BURNT FOOD

*
HURRICANE RIDGE

The glaciers tongue me, cliffs of ice,
pools of polar green.
Across eternal snow, a deer
steps out on the trail.

His antlers hold the flame-blue sky,
his crown of shining branches.
He stares at me without fear,
then climbs straight up,

barely nudges the slippery scree.
How could I know it would be
neither a lover nor a holy sage,
but a deer in a tundra of clouds –

this messenger making me feel
one day I’ll walk forever –
when thirsty, eating snow,
when tired, leaning on the wind.

Before I die, I want one wish
to be granted to me:
to hike again along the crest
here on Hurricane Ridge,

and let a deer like that once more
step out before me on the path,
look at me calmly, and walk on.
Let the wind wave a branch.

~ Oriana

Mary:

The opening poem is extraordinary — an epiphany the reader feels with the same held breath as you when that deer steps out carrying light in his antlers: a life completely other and completely whole, a miracle to see, that, like a blessing, will be forever remembered.

*
Without freedom of speech, there is no modern world, only a barbaric world. ~ Ai Weiwei

I really love this powerful statement. Well, that's Putin's Russia, with people who said something Putin didn't like falling out of windows, getting poisoned, etc. This goes back centuries. Most of Europe has moved forward, but not Russia.

Ai Weiwei is a Chinese artist, documentarian, and activist.

Ai's father was the Chinese poet Ai Qing, who was denounced during the Anti-Rightist Movement. In 1958, the family was sent to a labor camp in Beidahuang, Heilongjiang, when Ai was one year old. They were subsequently exiled to Shihezi, Xinjiang in 1961, where they lived for 16 years. Upon Mao Zedong's death and the end of the Cultural Revolution, the family returned to Beijing in 1976.

Ai has been called China's most famous artist. He has created works that focus on human rights abuses using video, photography, wallpaper, and porcelain. (Wikipedia)

After much harassment by the Chinese authorities, including being jailed for 81 days, Ai left China in 2015. Ai moved to Berlin where he maintained a large studio in a former brewery. Now he resides chiefly in Portugal.


Forever by Ai Weiwei, Mexico City

This work is part of the Bicycle Series. Note the multiplied wheels and frames.

Denis Meir:

I saw an earlier version of this piece (Forever). It was just three or four bicycle frames welded together. I saw a photo of it in a gallery in NY in the early '90s. The late '80s and early '90s were the golden age for conceptual art.

The simpler version had an immediate impact on me, because in Beijing lots of people still rode around on bikes. After late-night parties small groups of people would ride homewards together. There were separate bike lanes on all the major streets. I think it was the last generation during which the city was equally convenient for cyclists and motorists.

Oriana:

Yes, it all started with about a dozen bicycles.


Then came more bicycles.

And now for something different. Here is Ai’s “Lighted Cube.”


*

HOW DID “LATE” COME TO MEAN “DEAD”?


Usually, if someone’s late, they were supposed to be somewhere but haven’t yet arrived. So it seems a little counterintuitive that late can also describe someone who died—in other words, someone who was here but has since left. But it’s less counterintuitive if you consider the whole breadth of ways we use late.

The Old Meaning of Late

When the word was first adopted into Old English from Germanic tongues, people invoked it in many of the adverbial senses still common today: “after the proper, right, or expected time,” “at or until a time far into the day or night,” and “relatively near the end of a period of time, season, event, etc.,” per the Oxford English Dictionary.

Back then (and for centuries thereafter), you could also use late as a synonym for recently. “He had a fever late, and in the fit / He cursed thee and thine, both house and land,” John Keats wrote in “The Eve of St. Agnes.” These days, you can’t really deploy late in this way and expect your meaning to be understood. Still, similar senses relating to recent time remain current in late’s derivatives—as in lately and, to a lesser extent, of late.

Late as a form of “deceased” seems to have emerged from this subset of meanings. In the early 1400s, people started using the word to describe something that existed or was true recently but now is no longer the case. A late bishop could be someone who recently got promoted to cardinal. A late tenant could be someone who recently vacated the boardinghouse. The same word could also apply to someone who died recently, and it was around this time that this usage of late became common.

How Late Is Used Today

Dictionaries often still reflect the recency part of the definition. To Merriam-Webster, late is “living comparatively recently: now deceased.” The OED says it’s used for a person “that was alive not long ago, but is not now; recently deceased.” In everyday conversation, though, people rarely take recency into consideration when using the term. This may be due to the ambiguity and relativity of “recent” as a length of time.

In other words, how recently does your grandmother need to have died in order to qualify for the term late? There’s no concrete answer.

Taking the recency out of late also makes it a graceful euphemism for death at any distance. My dead husband, for example, could come off as shockingly morbid in the middle of an otherwise innocuous conversation; the clinical remove of my deceased husband might misrepresent your feelings; and my dearly departed husband is overly sentimental for certain situations. My late husband is a subtle workaround for those issues, even if the phrase’s original meaning is lost on most people.

https://www.mentalfloss.com/posts/why-late-means-dead

*
STALIN’S GREAT PURGE

At least 1.1 million people were executed by Stalin’s regime during the Great Purge of 1937–1938.

The purpose of the purge was to destroy any possible resistance to the totalitarian Soviet regime ruled by Joseph Stalin.

These are photos taken by the NKVD, the Soviet secret police, of innocent people they were about to execute.

Dmitri Chaikovskoy, shot 1939

Nikolai Tiajelkov, shot 1937

Anna Bitter, shot 1937

Barbara Budkiewicz, shot 1937

Marfa Ryanzantzeva, shot 1937

All the people in these photos were shot in basements after confessing under torture to various heinous crimes — spying for Japan, China, or France, plotting to assassinate Stalin, and so on.

All the people in these photos were “rehabilitated” after Stalin’s death (declared innocent).

The families of those shot didn’t know their loved ones were dead. For years they kept sending them parcels of food, cigarettes, and warm clothing, coming to the prison department and standing in queues for hours to hand the parcels in.

It was only after Stalin’s death that they found out the sentence of “10 years without correspondence” meant the person had been shot on the same day.

In addition to the 1.1 million people who were shot, 4.5 million Soviet citizens were sentenced to hard labor and sent to remote GULAG camps for terms of at least 10 years. Many of them died before they got there — falling sick in cattle cars or during the long marches to the “camps”; many others died later in the camps of disease, starvation, freezing cold, and exhaustion, or took their own lives.

Women were often raped to death on arrival at a prison camp, after guards gave a queue of hardened male prisoners access to them in exchange for a pack of cigarettes.

Arrival at the camp

Labor at the camp

Meanwhile, the rest of the Soviet citizens pretended that nothing was happening, that life was great and getting better with every year.

The black NKVD cars arrived at night and quickly took away whole families. By morning the apartment stood empty, as if no one had ever lived there. And the neighbors didn’t ask questions, because they didn’t want the black car to come for them the next night.

That’s the country that Vladimir Putin wants to restore. ~ Elena Gold, Quora

RCG:
I swear Putin is becoming more and more like Josef Stalin each day. He is cruel, cold, and uncaring about the welfare of the Russian nation.

Barry Walsch
“Burnt by the Sun” is a disturbing movie about one of Stalin’s victims. I still suffer nightmares after watching it some twenty years ago.

Elena Gold:
I didn’t mention even 1/100 of the horrors of the USSR regime. Alexander Solzhenitsyn’s book “The Gulag Archipelago” describes them in great detail. Solzhenitsyn was awarded the Nobel Prize in Literature in 1970.

Oriana:
I don’t remember whether this comes from The Gulag Archipelago or One Day in the Life of Ivan Denisovich, but it has remained in my mind. I quote from memory: “Don’t try to defend your life. Think of yourself as already dead. Only then can you preserve your human dignity.”

Mary:

The photographs of victims of Stalin's purge make a powerful statement, each face both witness and victim. They personalize the mass slaughter with their own individuality, and the effect is immediate and powerful, in the same way as the list of names on the Vietnam memorial, or the iron Shoes on the Danube memorial to those shot there during the Holocaust. Their power is in their particularity... This face, this name, not simply a number but a human being, with a life and memory and history of their own. No longer an abstraction, they demand recognition... and remembrance... they refuse to be forgotten.

Oriana:

To us those photographs reflect how precious each life is. But today's Russia would rather forget. Russia has never really been de-Stalinized the way Germany has been quite successfully de-Nazified. There has been no clear acknowledgment of the crimes of communism. The mass murder started with Lenin and escalated under Stalin, but Russia would prefer to forget, just as we can already predict that it won't ever be willing to come to terms with the obvious: Putin too is a killer.

*
WHY RUSSIA RESENTS THE WEST (repost)

Jealousy.

Russia is a very rich country in terms of natural resources, yet its people have a standard of living far below that of the West.

Russia is the largest country in the world in terms of land mass, but has little influence.

Russia projects itself as powerful from a military standpoint, yet cannot defeat smaller countries like Ukraine.

Russia is backward in its outlook but has no desire to change that. ~ Brent Cooper, Quora

Huib Minderhut:
Poverty in Russia is so deep that even redistributing the wealth of the oligarchs would hardly make a dent. Unfortunately. A hundred years of theft, greed, and mismanagement have turned a rich country into an economic wasteland.

Alex Sadovsky:
Russia is a culturally medieval Asian country with the self-image of a Victorian European empire. A split identity disorder. Perhaps this lies at the root of its inability to industrialize and effectively use its vast resources.

*
A FAMOUS RUSSIAN WOMAN ECONOMIST FALLS OUT OF A WINDOW

In Moscow, famous economist Valentina Bondarenko fell out of a window of her residential building.

Bondarenko worked on long-term socio-economic forecasting and modeling of the Russian economy. She probably failed to predict a future for Russia that was bright enough.

Recently, Russian women have also begun falling out of high-floor windows; earlier, it was only men who were careless enough to upset Russia’s destiny-makers.

Of course, many female journalists were killed during the early years of Putin’s rule — the ones who didn’t understand that the era of press freedom was over and that criticizing the men in power was mortally dangerous. But at the time, the methods mostly involved guns and metal rods: victims were shot in the back of the head or beaten to death. There could always be a cover story about a robbery gone wrong.

In the 2020s, defenestration (being thrown out of a window) became the FSB’s trademark: a way to show their hand without showing their hand.

The plebs must be reminded from time to time that all of them are mere mortals — and destiny-makers are always near.  ~ Elena Gold, Quora

Sean Walker:
Why don't they just get lower floor apartments if they're gonna say something controversial…seems like it would be obvious now

Tom R:
Was the analysis itself the cause of the defenestration, or the publicizing of it? One would think Putin is intelligent enough to want the unvarnished data at least in secret.

Elena Gold:
Maybe that was the way to ensure the secrets stay secrets.

Urmas Alas:
‘Recently, Russian women have also begun falling out of high-floor windows …’
DEI at work.

Nathan Reynolds:
Should have attended a conference out of the country, then sought asylum. Must have noticed her colleagues thinning out.

Elena Gold:
You can’t leave Russia if you have been privy to state secrets, and this includes state statistics. Now Duma members have to get approval from the FSB as well.

Martin Rullis:
Just recently heard her talk about how messed up things are and thought to myself: “This will not sit well with the little man”. Guess it didn’t.

Also, way to improve your economy by murdering your top economists.

*
A little throat singing goes a long way, especially if it's in praise of Genghis Khan.


(To listen to throat singing, please click on the link, not on the image) 

https://www.youtube.com/watch?v=6WlI24rv__g&feature=youtu.be

*
PALESTINIAN ATTITUDES TOWARD ISRAEL, HAMAS, AND THE U.S.


Since the beginning of the war between Israel and Hamas in Gaza last October, Khalil Shikaki has conducted three polls, each involving between 1,200 and 1,500 Palestinians. His pollsters interviewed between 480 and 750 Palestinians in Gaza and around 760 people in the West Bank.

In the June 12 poll, 40% of Palestinians in both the West Bank and Gaza said they would prefer Hamas to govern them, followed by Fatah (20%), the Palestinian National Liberation Movement in control of the West Bank and led by Mahmoud Abbas. Eight percent chose others. Support for Hamas increased by 6% over the preceding three months.

Shikaki explains this significant support for Hamas despite the suffering caused by the war: “The support for Hamas comes from various sources, but the most important one is because Palestinians share Hamas' values. They will support Hamas for that, even if Hamas makes wrong moves here or there.”

He explained that those values comprise three main elements: a high level of religious observance, no separation of faith and state, and primacy of religious identity over national and ethnic identity. He said about one-third of people polled in Gaza share those values, and slightly fewer in the West Bank.

The second source of support, he says, "is the belief that Hamas stands for resistance, armed resistance to Israeli occupation, at a time when the majority of the Palestinians believe that the only way to end the Israeli occupation and allow the Palestinians to be free, independent and sovereign is the use of force.”

Here are some excerpts from Shikaki's interview with NPR, which took place in late June.

What is the current attitude of Palestinians toward Americans and the Biden administration?

Shikaki: Extremely negative, because right now the lens that people are using to judge the administration is how it performs regarding the war in Gaza. The war in Gaza, in the eyes of the Palestinians, is nothing short of genocide, and the U.S. is supplying Israel with the arms to conduct a genocide. So the U.S. is essentially evil, and satisfaction with the U.S. role was basically zero. It increased slightly, I think, to 3% in our current survey. The reason for that, in fact, has been the floating [aid] pier in the northern part of Gaza, which hasn’t really been doing very well. Nonetheless, one-third of the Palestinians had a positive view of U.S. efforts in facilitating the delivery of humanitarian services.

What are Palestinian attitudes toward the pro-Palestinian protests on U.S. campuses?

We've not asked directly about that, but we've asked two things that are related that showed that the Palestinian public looks very positively at it. And in fact very optimistically about that. The first one is what was the most important outcome of October the 7th, and the second was, what did October the 7th trigger in terms of the Palestinian interest?

Eighty percent of the public in the current survey basically say that the Palestinian issue, and resolving the Palestinian-Israeli conflict, is now becoming central to global interests. And so this is very positive. It’s not just about the pro-Palestinian demonstrations; it’s about everything that is happening and the focus on trying to find a solution.

According to your last surveys, about 90% or so of Palestinians do not believe that Hamas committed atrocities such as killing women and children or raping during the attack on Israel last Oct. 7. How do you explain that?

When we look at who thinks that and why they think the way they do, we basically found two groups. There are those who have seen no evidence of that. They have not seen videos, for example, that show atrocities committed by Hamas. That's the largest majority of those who deny that Hamas did commit atrocities. But then there are those who actually saw the videos — and here we do find almost half of them believe that Hamas did commit atrocities.

However, we still have the other half of those who've seen the videos who don't believe Hamas committed atrocities, and that's because they think it's all fabrication. This is war, and [they believe] Israel is using its propaganda machine to depict Hamas in a very negative color. And that is part of this.

Are you worried that the results of your polls might be used by Israeli politicians to convince Israelis that they should have no mercy toward Palestinians in Gaza because they support Hamas?

Yes, I worry very much about that. Not because this is what we're finding, but because you will find people who will misuse the data to justify whatever they are doing …. First of all, the statement that the majority of the Palestinians support Hamas is totally wrong. The majority of the Palestinians oppose Hamas, not support Hamas. The support for Hamas among the Palestinians in Gaza and in the West Bank is 40% or less. That's the amount of support, so 60% or so of the Palestinians do not support Hamas.

The second lie that some people spread is that the Palestinian support for October 7th is a support for massacre and atrocities that were committed in October. Our findings show the exact opposite. Those who think atrocities were committed on October the 7th do not support October the 7th and do not support Hamas.

So the idea that the majority or vast majority supports Hamas or that it's the vast majority that supports atrocities committed by Hamas are two fabrications, lies. Our findings definitely show that in fact, it's the exact opposite of these two statements.

In 30 years of polling, since the Oslo Accords, what would you say is the biggest change that occurred in Palestinian society?

The three main issues that we have explored with the Palestinian public since Oslo are those that relate to state-building and the extent to which this was moving in the right direction. That is to create an authority that is efficient, free of corruption, perhaps democratic or at least with good governance. So this was one item and the expectations at that time were highly optimistic.

And looking at where we are today, we can see that there has been a sea change with the overwhelming majority of the Palestinians today believing that the entire process of state-building has been a total failure. The Palestinian national elite that was in charge of doing it has essentially failed miserably to deliver what they promised the Palestinians.

The second major change has to do with the support for the two-state solution. And here too, the picture is very dramatic. In 1993, all the way I would say up until the last 10 years, a majority continued to support the two-state solution. That majority at the time was 80% and it continued to decline gradually. It's mainly due to the growing perception that this two-state solution was no longer feasible. Israeli settlements' expansion has essentially made it impossible.

The third and most dramatic change has to do with the support for violence. I would say until the early 2000s, the support for violence was the view of the minority, 20%, maybe 25%. So the public was totally opposed to it. But most importantly, the public at that time was not only opposed to violence, but was very supportive of diplomacy and negotiations. You could easily find 70 to 80% of the Palestinians supporting diplomacy and negotiations. And you can easily find that those who believe that did not support violence at all. So there was absolutely zero overlap in terms of support for diplomacy and armed struggle or violence.

The change here has been dramatic today. The majority of Palestinians believe that violence or armed struggle is the most effective step for ending the Israeli occupation. Starting in 2015, we began to see a rise, but not a majority. The formation of the current Israeli government under Netanyahu early in 2023 made the difference. Even before Oct. 7, a majority of Palestinians in the West Bank was already supporting violence in a manner that we have not seen since 2005.

According to your last polls, the most popular leader for the Palestinian people comes from neither Hamas nor the Palestinian Authority: it is Marwan Barghouti, who has been in an Israeli prison for more than 20 years for murdering Israeli citizens. He is a well-known supporter of the two-state solution. Can you explain this support for Barghouti?

Barghouti was popular before Oct. 7. His popularity increased significantly after Oct. 7. There is absolutely no doubt about that. In the last 20 years, Barghouti has been the most popular Palestinian leader. Now, why is he popular? Most Palestinians, perhaps unfairly, think that [Palestinian Authority leader Mahmoud] Abbas is a sellout, that he would accept conditions unacceptable to the Palestinians in search of a two-state solution. They don’t think Barghouti would do that. ... They think Abbas is here to survive and to stay in power, and that he has no values, unlike Hamas.

The second reason for this popularity is that they think Barghouti stands for resistance. This is where Barghouti and Hamas are seen as one. They both stand for resistance. So, even though Barghouti supports a two-state solution, and wants to end the conflict and end the Israeli occupation in a peace treaty with Israel, living side by side in peace, security, and cooperation… at least the public perception of him is that this is not going to come without violence.

How do you keep your pollsters safe doing their job in Gaza during this war?

It is risky to exist in Gaza, but we do our best to ensure their safety by preventing them from entering areas of combat, such as the northern part of Gaza, for example. We don't allow our data collectors to cross there in order to avoid a situation in which they risk their lives. And so, there is risk, always risk, but the risk that our data collectors are taking in Gaza today is the same risk that other Gazans are taking by staying in safe areas. And so our data collectors do exactly that: stay in safe areas, interview people who live in those safe areas. And so far we have not lost any of our data collectors — killed or injured.

https://www.npr.org/2024/07/26/g-s1-12949/khalil-shikaki-palestinian-polling-israel-gaza-hamas


*
STEEP INCREASE IN AMERICANS NOT HAVING CHILDREN

The proportion of adults in the United States younger than 50 years old who do not have children is growing — leaping from 37% in 2018 to 47% in 2023, according to a new Pew Research Center survey published Thursday.

The new Pew study comes as comments resurfaced from Ohio Sen. JD Vance, the Republican candidate for vice president, who told former Fox News host Tucker Carlson in 2021 that the country was being run by “a bunch of childless cat ladies who are miserable at their own lives and the choices that they’ve made and so they want to make the rest of the country miserable, too.”

But with more people not having children, Pew researchers wanted to investigate whether the unhappy childless adult characterization is actually true.

“We wanted to learn more about the reasons adults don’t have children, their experiences, how it impacts their relationships,” said Rachel Minkin, a report coauthor and Pew research associate.

The latest poll surveyed more than 3,300 adults who do not have children and say they are not likely to have them. While researchers did find that those surveyed reported some difficulties and pressure, they also found that people without children reported ways in which their experiences were full and connected.

“We see majorities …  saying having a fulfilling life doesn’t have much to do with whether someone does or doesn’t have children,” Minkin said.

There were many reasons why people said they didn’t have kids, including financial concerns, infertility, or that it just didn’t happen, according to the research.

For people younger than 50, the top reason reported for not having children was that they don’t want to.

“It is completely normal and valid to not want to have children,” said licensed psychologist Dr. Linda Baggett, owner of Well Woman Psychology in Manhattan Beach, California, in an email. “I think current generations are feeling more empowered to be open about and act on this preference, whereas in past generations people may have been more likely to have children anyway due to societal expectations, economic/labor factors, and religious beliefs.

“It is a myth that everyone, especially women, want to have children,” said Baggett, who was not involved in the research.

In her practice, psychotherapist Carissa Strohecker Hannum sees a lot of people saying that they feel hesitant to bring children into the world when they are so concerned about the state of it. Other clients have had such bad experiences with their relationships with their own parents that they are worried about repeating the pattern.

People tell her, “Before I consider raising a child in this world, I really want to work on my mental health. And I really want to make sure that I’d be bringing a child into a different sort of emotional environment.”

When Hayden meets someone new or runs into someone she hasn’t seen since high school, the question often comes up: How old are your kids?

And when she answers that she doesn’t have any, Hayden said she often sees an expression flash across their face that communicates that she is less than or incomplete.

Many of the cons people reported in the Pew survey related to not having children come from the outside.

Among employed respondents older than 50, 33% said they are expected to take on extra work because they don’t have children, and 32% said they are left out of coworkers’ conversations about their kids, the data showed.

Women were especially likely to say that they felt pressure from society to have children, Minkin said.

“I hear this a lot, and it is unfortunate,” Baggett said.

She recommends setting a kind but firm boundary with loved ones about how they talk about the decision not to have kids, and reminding them that it isn’t up for discussion.

“It’s OK to validate that the other person may be disappointed, but that doesn’t mean your feelings and decisions are up for debate,” she added.

Living a full life

For people who choose not to have children, there is a lot of potential for happiness and fulfillment.

“Embrace and own this decision,” Baggett said. “You have to do what’s best for you and honestly, it serves no one, especially the child, to bring an unwanted child into the world.”

People who responded to the Pew survey said that not having children gave them more resources to advance in their career and pursue their hobbies and passions.

Hayden said she has found her passion using the resources she and her husband have built to support their community members who might not have the things they need — whether that be educational support, properly fitting shoes or a present to bring to a friend’s birthday party.

“I’m just so lucky that I have found a passion,” she said. “At the end of the day, I don’t feel like your life has to be your children.”

https://www.cnn.com/2024/07/26/health/childless-adults-pew-research-wellness/index.html

Oriana:

Women may also be influenced by the belief that child-free people look younger than people with children. On the other hand, it’s not as simple as children = stress = faster aging. Children also induce positive emotions, which are important for health. Of course children also induce negative emotions, so perhaps it evens out. In any case, "it's complicated." 

As one mother said to me, "It's such a huge decision that it can't be made rationally. Life makes it for you." Then she added, "I just can't wait for those grandbabies!" 

Grown children failing to give their parents "those grandbabies" is another huge area of intergenerational conflict, and at least for now I'm not going to touch that subject.

*
LOOK YOUNGER, LIVE LONGER?

Are your friends and family already jealous because you look younger than your years? Well, prepare to make them even greener with envy. People whose faces belie their real age also live longer, enjoy better health and are less likely to get dementia, according to a study published in the British Medical Journal.

The research was conducted among 1,826 twins in Denmark aged 70 or older.

"Our study shows that in a group of people aged over 70, perceived age is a strong indicator of mortality after adjustment for chronological age. We anticipate that the effect might be even more pronounced in middle age," conclude the authors, led by Professor Kaare Christensen, an expert on aging at the University of Southern Denmark.

The researchers reached their conclusions after getting independent assessors to estimate the age of the subjects by looking at photographs of their faces in 2001, then seeing which of the twins had died by the time a follow-up was done in 2008.

Factors such as smoking, exposure to sunshine, depression and low socio-economic status are known to contribute to aging, while being married, high social status, lack of depression and a low body mass index (BMI) all help preserve a more youthful appearance.

Genetic factors explain the difference in survival and perceived age, and influence skin appearance and the risk of heart attack, say the authors. They pinpointed the length of someone's leucocyte telomeres, molecular biomarkers of aging which reveal how capable cells are of replicating, as being key to the process.

Shorter telomere length is associated with a range of diseases linked to ageing. Subjects in the study with longer telomeres were likely to have fewer health problems, live longer, and retain full cognitive function for longer, the study found.

It was undertaken to see if doctors are right to draw negative conclusions about a patient's health prospects because they look older than they are. The belief is well-founded, it seems.

"When assessing health, physicians traditionally compare perceived and chronological age, and for adult patients the expression 'looking old for your age' is an indicator of poor health. Our study indicates that this practice, which has existed for decades if not centuries, is actually a useful clinical approach", say Christensen and the others.

https://www.theguardian.com/uk/2009/dec/14/look-younger-live-longer-study-shows

*
FIVE FLAWS OF SUPPLY-SIDE ECONOMICS (repost)

Few topics divide economists quite like the supply-side one. For every expert who swears that this economic approach works, another one vehemently disputes it. Like other theories, supply-side economics isn’t flawless and does have some holes. Here are five key reasons why the theory has been disproven.

Tax Cuts Don’t Create More Jobs

If companies are taxed less, they’ll use their excess savings to employ more staff, supply-siders argue. The problem is that there isn’t a lot of evidence to back that up. From 1982 to 1989, when the United States was governed by Reagan and taxes were cut substantially, the labor force didn’t grow any more than previously. A similar thing happened on George W. Bush’s watch. In 2001 and 2003, Congress passed two generous tax cuts for the wealthy, and the slowest job growth in half a century followed.

There are a number of reasons why this may happen. When individuals or businesses receive tax cuts, they may not necessarily use all the extra money to create jobs. The effects of tax cuts on job creation may not be immediate. The impact of tax cuts can vary by industry. In any case, supply-side economics may have flaws in terms of tangible, short-term job creation.

Supply-Side Policies Weakened Investment

Data supporting the popular opinion that lower taxes on the rich spur more investment is also hard to come by. In fact, the Center for American Progress, citing figures from the U.S. Bureau of Economic Analysis, said that average annual growth in nonresidential fixed investment was significantly higher in the non-supply-side 1990s than in the Reagan and Bush decades. Ironically, in the 1990s, the tax rate for higher earners was raised.

Tax Cuts Don’t Spur Stronger Economic Growth

All of the above serves as a reminder that supply-side economics doesn’t always achieve what its advocates say it does and is by no means a guarantee of economic growth. Often, supply-siders point to the 1980s as evidence that these policies engineer economic turnarounds. However, as economist Nouriel Roubini points out, the pickup in growth exhibited from 1983 to 1989 came after a severe recession and was nothing out of the ordinary.

More evidence that traditional supply-side policies don’t lift economies was discovered in Kansas. In 2012 and 2013, lawmakers there cut the top rate of the state’s income tax by almost 30% and the tax rate on certain business profits to zero in a desperate bid to energize the local economy. That experiment lasted about five years and didn’t go well, with Kansas’ economy underperforming most neighboring states and the rest of the nation during that period.

Important: The economic benefits of deregulation also aren’t as clear-cut as supply-side advocates let on. While it is true that some regulations can be unnecessary and onerous, the majority are essential standards that underpin the economy and protect consumers.

Tax Cuts Don’t Pay for Themselves

A key selling point of supply-side economics is that tax cuts actually increase overall tax revenue by boosting employment and the incomes of the population and, therefore, don’t leave the country in more debt. This view has gained political currency but isn’t backed by much concrete evidence.

In fact, data shows that budget deficits exploded during Reagan’s era of tax cuts. According to the New York University Stern School of Business, the public debt-to-gross domestic product (GDP) ratio rose to 50.6% in 1992 from 26.1% in 1979.

The National Bureau of Economic Research (NBER) similarly shot down talk of tax cuts paying for themselves. Based on its estimates, for each dollar of income tax cuts, only 17 cents will be recovered from greater spending; in other words, a $100 billion income tax cut would still add roughly $83 billion to the deficit.

What Do Economists Think of Supply-Side Economics?

Opinions are mixed. Some economists strongly believe that putting more money into the pockets of businesses is the best way to ensure economic growth. Others strongly dispute this theory, arguing that wealth doesn’t trickle down and that the only outcome is the rich getting richer.

What Are the Disadvantages of Supply-Side Policies?

The most obvious disadvantages are the time it can take for these policies to work, the fact that they can be very costly to implement, and the backlash that they receive from left-wing thinkers. Telling the population that helping the rich will benefit everyone is a hard sell, particularly as there is no concrete evidence to support this.

Are There Any Examples of Supply-Side Policies Working?

While there are plenty of holes in supply-side economics, it isn’t completely flawed, although its success can be hard to measure. It takes a long time to reap the benefits of these policies, and any good that comes from them may be attributed to something else. A lot also depends on where you stand politically. Some people credit the likes of Ronald Reagan and Margaret Thatcher with salvaging the economy in the 1980s. Others believe their supply-side policies ruined everything and spurred inequality.

The Bottom Line

Supply-side economics, which posits that everyone prospers when companies have more money at their disposal, has reshaped how most of the world’s major economies operate. The thing is, not all economists agree with the trickle-down theory. Abundant evidence has been presented to support the view that supply-side economics doesn’t deliver as advertised. According to those findings, this economic model does not create more jobs, lift the economy, or result in similar overall tax revenues. ~

https://www.investopedia.com/supply-side-economics-6755346

*
THE GILDED AGE: AN ERA OF WEALTH AND INEQUALITY

The Gilded Age, which roughly spanned the late 1870s to the early 1900s, was a time of rapid industrialization, economic growth, and prosperity for the wealthy. It was also a time of exploitation and extreme poverty for the working class.

Reconstruction preceded the Gilded Age, when factories built as part of the North’s Civil War effort were converted to domestic manufacturing. Agriculture, which had once dominated the economy, was replaced by industry. Ultimately, the Gilded Age was supplanted by early 20th-century progressivism after populism failed.

The term “gilded age” was coined by Mark Twain and Charles Dudley Warner in a book titled The Gilded Age: A Tale of Today. Published in 1873, the book satirized the thin “gilding” of economic well-being that overlaid the widespread poverty, corruption, and labor exploitation that characterized the period.

A societal shift from agriculture to industry resulted in a movement to the cities for some and westward migration for others.

The rise of organized labor, investigative journalism, and progressive ideologies began to spell the end of the Gilded Age and its rigid class structure.

The Gilded Age marked the beginning of industrialization in America—a time of innovation, transportation growth, and full employment. It was also a time of economic devastation and dangerous working conditions for labor.

As the United States began to shift from agriculture to industry as a means of economic growth, people began to move from farms to urban areas. Railroads expanded, industry began to mechanize, communication improved, and corruption became widespread.

Railroads expanded dramatically in the U.S. in the 1870s. From 1871 to 1900, 170,000 miles of track were laid in the United States, most of it for constructing the transcontinental railway system. It began with the passage of the Pacific Railway Act in 1862, which authorized the first of five transcontinental railroads.

Mechanization of Industry

The late 19th century saw an unprecedented expansion of industry and production, much of it by machines. Machines replaced skilled workers, reducing labor costs and the ultimate selling price of goods and services. Instead of skilled workers seeing a product through from start to finish, jobs were often limited to one task repeated endlessly. The pace of work increased, with many laborers forced to work longer hours.

Technological advancements, including the phonograph and the telephone, came into existence during the Gilded Age. So did mass-circulation newspapers and magazines. Professional entertainers quickly adopted these new forms of communication, making listening and reading the news new leisure activities.

Monopolies and Robber Barons


During the Gilded Age, many businessmen became wealthy by gaining control of entire industries. Controlling an entire sector of the economy is known as having a monopoly. The most prominent figures with monopolies were J.P. Morgan (banking), John D. Rockefeller (oil), Cornelius Vanderbilt (railroads), and Andrew Carnegie (steel).

Because of the way they exploited workers with low wages, long hours, and dangerous working conditions, these wealthy tycoons were often referred to as robber barons, a pejorative term used to describe the accumulation of wealth through that exploitation.

Gilded Age Homes

Homes during the Gilded Age reflected the lifestyle and wealth of the homeowner. While the wealthy built magnificent mansions with stately names like Vanderbilt Mansion, Peacock Point, and Castle Rock, many of the less fortunate lived in tenement buildings in cities, where they flocked for jobs, or in the West, in claim shanties—small shacks built to fulfill Homestead Act regulations.

The Gilded Age saw rapid growth in the economic disparities between workers and business owners. The wealthy lived lavishly, while the working class endured low wages and horrid conditions.

Real Wage Increases

The technological changes brought about by industrialization are thought to be largely responsible for the fact that real wages of unskilled labor grew 1.43% per year during the Gilded Age vs. 0.56% per year during the Progressive Era and just 0.44% per year from 1990 to 2005.

By those measures and comparisons, the Gilded Age would seem to be a success. In 1880, for example, the average earnings of an American worker were $347 per year. That grew to $445 in 1890, an increase of more than 28%.

Abject Poverty

“While the rich wore diamonds, many wore rags.” This summarizes the income and lifestyle disparity that characterized the Gilded Age. In 1890, 11 million of the nation’s 12 million families (92%) lived below the poverty line. Tenements teemed with an unlikely combination of rural families and immigrants who came into urban areas, took low-paying jobs, and lived in abject poverty.

Though wages rose during the Gilded Age, they were deficient initially. As noted above, in 1880, the average wages of an American worker were $347 per year ($10,399 today, as of this writing) but had risen to $445 by 1890 ($14,949 in today’s dollars). Given today’s federal poverty level (FPL), which is $30,000 for a family of four, most Gilded Age Americans were exceedingly poor despite the impressive wage growth of the time.

The annual income of an American worker in 1890, at the height of the Gilded Age, was $445. Adjusted for inflation, that's just under $15,000 in today's dollars.
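For readers who want to check the arithmetic, here is a minimal sketch in Python. The inflation multipliers are back-calculated from the article's own figures ($347 to $10,399, and $445 to $14,949), so they are illustrative assumptions rather than official BLS deflators:

WAGE_1880 = 347   # average annual earnings in 1880, per the article
WAGE_1890 = 445   # average annual earnings in 1890, per the article

# Nominal growth over the decade: (445 - 347) / 347, about 28.2%
growth = (WAGE_1890 - WAGE_1880) / WAGE_1880
print(f"Nominal wage growth, 1880-1890: {growth:.1%}")

# Dollars-of-today multipliers implied by the article's own figures
# (assumptions for illustration, not official deflators)
MULT_1880 = 10_399 / WAGE_1880   # roughly 30x
MULT_1890 = 14_949 / WAGE_1890   # roughly 33.6x

print(f"$347 in 1880 is about ${WAGE_1880 * MULT_1880:,.0f} today")
print(f"$445 in 1890 is about ${WAGE_1890 * MULT_1890:,.0f} today")

# Both adjusted incomes sit far below the article's cited federal
# poverty level of $30,000 for a family of four
FPL_FAMILY_OF_FOUR = 30_000
print(WAGE_1890 * MULT_1890 < FPL_FAMILY_OF_FOUR)   # True

Running it reproduces the 28% growth figure and the near-$15,000 adjusted income quoted above.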

Labor Unions

The rise of labor unions was neither sudden nor without struggle. Business owners used intimidation and violence to suppress workers, even though they had a right to organize. By 1866, there were nearly 200,000 workers in local unions across the United States. William Sylvis took advantage of these numbers to establish the first nationwide labor organization, named the National Labor Union (NLU).

Unfortunately, Sylvis and the NLU tried to represent too many constituencies, causing the group to disband following the Panic of 1873 when it couldn’t serve all those competing groups. The NLU was replaced by the Knights of Labor, started by Uriah Stephens in 1869. Stephens admitted all wage earners, including women and Black people.

The Knights of Labor lost members and eventually dissolved for two reasons. First, Stephens, an old-style industrial capitalist, refused to adjust to the changing needs of workers. Second, a bomb thrown into a crowd at a rally in Chicago’s Haymarket Square on May 4, 1886, was blamed on the union, driving even more members away.

By December 1886, labor leader Samuel Gompers took advantage of the vacuum left by the demise of the Knights and created a new union based on the simple premise that American workers wanted just two things: higher wages and better working conditions. Thus was born the American Federation of Labor (AFL).

Corruption and Scandals — Muckrakers

Another product of the Gilded Age was investigative journalism. Reporters who exposed corruption among politicians and the wealthy class were known as muckrakers for their ability to dig through the “muck” of the Gilded Age to uncover scandal and thievery.

Notable muckrakers included Jacob Riis, who in 1890 exposed the horrors of New York City slum life. In 1902, Lincoln Steffens brought city corruption to light with a magazine article titled “Tweed Days in St. Louis.” Ida Tarbell put her energy into exposing the antics of John D. Rockefeller; her reporting led to the breakup of Standard Oil Co. In 1906, Upton Sinclair wrote The Jungle to expose conditions in the meatpacking industry. This led to the passage of the Meat Inspection Act and the Pure Food and Drug Act.

Immigration

Many immigrants came to North America during the Gilded Age, with 11.7 million of them landing in the United States. Of those, 10.6 million came from Europe, making up 90% of the immigrant population. Immigrants made it possible for the U.S. economy to grow since they were willing to take jobs that native-born Americans wouldn’t.

While factory owners welcomed these newcomers, who were willing to accept low wages and dangerous working conditions, not all Americans did. So-called nativists lobbied to restrict certain immigrant populations, and in 1882, the Chinese Exclusion Act passed Congress. But millions came despite the obstacles. The Statue of Liberty beckoned, and the “huddled masses” responded. The children of immigrants began to assimilate, despite their parents’ objections. Another hallmark of the Gilded Age was born, as America became a true melting pot.

Women in the Workforce

Industrialization created jobs outside the home for women. By 1900, one in seven women were employed. The typical female worker was young, urban, single, and either an immigrant or the daughter of immigrants. Her work was temporary—just until she married. The job she was most likely to hold was that of a domestic servant.

The Gilded Age also saw an increase in college-educated women. Colleges, including Bryn Mawr, Radcliffe, and Mount Holyoke, opened their doors to women in the post-bellum years. This did not happen without some incredible chauvinism. Scientists of the era warned that women’s brains were too small to handle college work without compromising their reproductive systems. Many, it turned out, took that risk. The predominant fields held by female college graduates were nursing and teaching.

The Black Experience

As Reconstruction ended on a state-by-state basis, Black people could migrate away from plantations and into cities in search of economic opportunity, or move west or south in search of land that they could work for themselves. From 1870 to 1900, the South’s Black population went from 4.4 million to 7.9 million. People found jobs in Alabama, Arkansas, Georgia, Kansas, Louisiana, Mississippi, South Carolina, and Texas, working on railroads and in mines, lumber, factories, and farms.

For some, however, sharecropping replaced slavery, keeping Black workers tied to the land without ownership.

For a small set of others, this period led to the foundation of what’s known as the Black elite or “the colored aristocracy,” as was noted by Willard B. Gatewood in Aristocrats of Color: The Black Elite, 1880–1920.

Economic Impact and Legacy

The Gilded Age saw the transformation of the American economy from agrarian to industrial. It saw the development of a national transportation and communication network. Women began to enter the workforce as never before. Millions of immigrants took root in a new land. Enterprising industrialists became titans and wealthy beyond measure.

Production and per capita income rose sharply, albeit with great disparity among wealth classes. Earlier legislation, like the Homestead Act, motivated the movement westward of millions of people to lay claim to land that would give them a new start and a chance at the American dream. As America became more prosperous, some of its citizens fell victim to greed, corruption, and political vice. This combination of extraordinary wealth and unimaginable poverty was the ultimate juxtaposition of capitalism and government intervention. The debate continues today.

Are There Gilded Age Mansions Left?

You can still see and even visit some of the most opulent examples of Gilded Age domicile excess today. In New York City, for example, you can drive past the Vanderbilts’ Plant House, the Carnegie Mansion, the Morgan House, and others, if you know where to look.

The Worst Scandal of the Gilded Age

The Gilded Age gave birth to enough scandals to create competition for the worst of the lot, but many historians agree that the transcontinental railroad scandal was the cream of the crop, so to speak.

The federal government, in deciding to underwrite a transcontinental railroad, created an opportunity for corruption that it did not anticipate. As builder of the railroad, the Union Pacific company engaged in price fixing and bribery that affected members of the Ulysses S. Grant presidential administration. The corruption was uncovered by investigators, bringing the scheme to an end.

The Bottom Line

The Gilded Age was critical to the growth of the United States by introducing industrialization and technological advances. It was also a time of political turmoil, greed, and extreme income inequality. The U.S. became the most economically powerful country in the world due to the era. It was a time of unprecedented progress and unimaginable poverty.

The wealth gap between the Rockefellers, Carnegies, Morgans, and Vanderbilts and the rest of the country was palpable. With wealth came greed. With innovation came corruption. Muckrakers, the first investigative journalists, helped uncover the graft, and unions helped labor even the playing field. Ultimately, this “best and worst” of times became another important chapter in the American saga.

https://www.investopedia.com/gilded-age-7692919

Mary:

I grew up in a city that bears the signature of the gilded age at every turn. Not only the great houses of those robber barons, some now broken into apartments still wearing some of the features of that golden age, like a huge Tiffany window, or architectural excesses like turrets and balconies, but their names remain everywhere, on streets, parks, bridges, schools, libraries and museums. Carnegie was king here — Steeltown was his town, both with its mills and millworkers, and with the results of his charities; a great library system, world class museum, and university excelling in technical and fine arts.

When I think about it, Carnegie's hand shaped my own life in many ways. The libraries, museums and music halls were essential to my education and imagination. I attended free art classes in the museum as a child, graduated from high school in one of the Carnegie music halls, was an avid patron of the libraries both central and branch, and  earned my BA at Carnegie-Mellon University studying art and literature. By the time I was an adult the steel industry that had been the backbone of the city was in decline. I met and married one of the last workers at one of the last mills still open — who was laid off as that mill shut down, and who felt that was a personal tragedy. Carnegie left his mark on that city and its people for generations past and to come.
 
*
A STRANGE FACTOID ABOUT THE DIVORCE RATE AMONG NAVY SEALS

The divorce rate among U.S. Navy SEALs is over 90 percent.

https://www.wf-lawyers.com/divorce-statistics-and-facts/

*
WHY AN INCREASING NUMBER OF WOMEN CHOOSE TO STAY SINGLE

I remember the moment my sister told me she was having a baby. I was spending the evening with a group of friends and, halfway through, Kate said she needed a word. We ducked into a bedroom, where she looked at me so solemnly that I ransacked my brain for anything I could possibly have done wrong in the past half-hour.

The seriousness of her announcement made me giggle out loud. I had a flashback to the pair of us as kids, when a secret meeting like this meant we’d broken something in the house and were working out how to present the news to our parents. Plus, the thought of my little sister being a mum was innately funny. Not that Kate wasn’t ready for the role – she was in her mid-30s and keen to get on with it. I just couldn’t see myself as anyone’s aunt.

My own path to such “conventional” adulthood stalled somewhere in my 30s, not through choice or any dramatic event, but through an invisible winnowing of opportunities. I was – am – still single. I didn’t – don’t – regret my own lack of children. But becoming an aunt brought with it a phantom modifier, one that echoed across my empty flat, even though no one had spoken it out loud.

Spinster.


There are many reasons we no longer use that term: its misogynist undertones of sour desiccation, or bumbling hopelessness, to start with. The label went out of official usage in 2005, when the government dropped it from the marriage register thanks to the Civil Partnership Act; in an age when becoming a wife is no longer necessary or definitive, it seems almost redundant.

But it hasn’t gone. Nor has it been replaced by anything better. So what else are we formerly-known-as-spinsters supposed to call ourselves: free women? Rather insulting to everyone else, I imagine. Lifelong singles? Sounds like a packet of cheese slices that’ll last for ever in the back of your fridge.

It’s important we find an identity, because our number is swelling. Office for National Statistics figures show that the number of never-married women not living in a couple is rising in every age range under 70. In the decade and a half between 2002 and 2018, the figure for those aged 40 to 70 rose by half a million. The percentage of never-married singletons in their 40s doubled.

And it’s not just a western phenomenon. In South Korea, the rather pathetic figure of the “old miss” has become the single-and-affluent “gold miss”. In Japan, unmarried women over the age of 25 are known as “Christmas cake” (yes, it’s because they were past their sell-by date). Shosh Shlam’s 2019 documentary on China’s sheng nu explores these “Leftover Women” and the social anxiety they cause as traditional marriage models are upended.

Singleness is no longer to be sneered at. Never marrying or taking a long-term partner is a valid choice. For a brief spurt, it even appeared that the single-positivity movement was the latest Hollywood cause, with A-listers such as Rashida Jones, Mindy Kaling and Chelsea Handler going proudly on the record about how they had come to embrace their single lives. Jones and Kaling have since found love; Handler announced on her chatshow last year that she’d changed her mind and really wanted a relationship.

And when Emma Watson (also not single) announced to Vogue she was “self-partnered” I found myself suppressing a gag reflex. Give it another 10 years, I wanted to say. Then tell me how empowering it is going to parties/dinner/bed alone.

But there I go, living down to the spinster stereotype of envy and bitterness. How is it possible that, despite being raised by a feminist mother and enjoying a life rich with friendships and meaningful employment, I still feel the stigma of that word? Or fear that, even in middle age, I haven’t achieved the status of a true adult woman?

Perhaps I should blame the books I’ve read. Through a formative literary diet of Jane Austen, Charles Dickens and PG Wodehouse, I grew up alternately pitying and laughing at spinsters, their petty vendettas and outsize jealousies born out of their need for significance in a world that found no use for them. They were figures of fun and frustration, not women I was ever expected to relate to. After all, like many spinsters-to-be, I never considered myself on that track. I’d find a partner eventually – even Bridget Jones managed it. Doesn’t everyone?

No they don’t. I assumed that my own situation was a temporary aberration, one that required no sense of emergency or active response. My social calendar was full, my work constantly introduced me to new people. Mother Nature would, surely, pick up the slack.

*
But now my little sister was having a baby, and I was single and approaching a big birthday. The odds were increasingly against me – even if the notorious statistic that you’re more likely to be killed by a terrorist than you are to find a husband after the age of 40 has, in recent years, been debunked. The fact that the average age at marriage (in heterosexual couples) has never been later – 31.5 for women in the UK, 33.4 for men – offers little comfort, because the singles market is at its most crowded between the ages of 35 and 47, and in that market women outnumber men.

One of the cruelest tricks spinsterhood can play is to leave you feeling like an outlier and a freak – yet my status is far from unique as the statistics show. I see that in my own close friendship group – almost a dozen of us are never-married in our late 30s and early 40s, and none through choice.

There’s no avoiding that our romantic opportunities have dwindled as the pool of age-appropriate men has emptied. Annually, we manage a smattering of dates between us. Most of us have grown weary of online dating, which requires you to treat it as an all-consuming hobby or part-time job. We’re tired of Tinder, bored of Bumble – I’ve even been ejected by eHarmony, which, last time I logged on, told me it couldn’t find me a single match.

In our 20s, my friends and I used to revel in gossip and talk endlessly about the guys we were interested in; now, the subject is sensitively avoided, even within the sisterhood. The only people who do tend to ask whether we’re seeing anyone are complete strangers, because relationship status is still considered a key component of small talk, a vital piece of the information trade, essential in categorizing someone’s identity.

My friend Alex has a range of responses to the question “And do you have another half?” depending on which she thinks the other person can take. Her nuclear option, “No, I’m a whole person,” is deployed only in the most desperate of circumstances.

As we age, the distance between our shared life experiences and viewpoints has only been widening. Professor Sasha Roseneil, author of The Tenacity of The Couple-Norm, published in November by UCL Press, says: “All sorts of processes of liberalization have gone on in relationships, in the law and in policy.” Her research focused on men and women between the ages of 30 and 55, the period in mid-life “when you’re expected to be settled down in a couple and having kids.”

“But what our interviewees told us was that there remains at the heart of intimate life this powerful norm of the couple,” says Roseneil. “And people struggle with that. Many of them long to be part of a couple – there was a lot of feeling of cultural pressure, but there was also a sense of that norm being internalized. Single people felt a bit of a failure, that something had gone wrong, and that they were missing out.”

Being a spinster can be isolating – it’s easy to become convinced that no one else is quite as hopeless a case as you. It leaves us, the perennially unattached, asking ourselves big questions that we can’t – daren’t – articulate to others. Are we missing out on the greatest emotions a human can have? Shall we slide into selfishness, loneliness, or insignificance? Who will be there for us when we grow old? And is a life without intimate physical companionship one half-loved, and half-lived?

Within the framework of the current feminist narrative, there’s a strong sense that the answer to each of the above should be no – or the questions shouldn’t be asked at all. “We interviewed a lot of people around Europe and that’s a very real early 21st-century experience for women,” says Roseneil. “And people are conflicted – that’s the mental essence of being human. They can simultaneously have contradictory feelings: on the one hand it’s totally fine to be single and I can have a nice life, on the other hand – what am I missing out on and is there something wrong with me?”

As modern, single women, we are not supposed to feel that we’re missing out. And so we feel obliged to hide any feelings of shame or inadequacy or longing.

I know I don’t want to take my many privileges for granted and I suspect that many single women in a similar position to me dread being thought of as whiny or desperate. And so we don’t talk about the subject, and we try not to acknowledge that spinsters still exist. Perhaps that’s the reason that, instead of finding my inspiration from modern have-it-all heroines, I prefer to look back and learn from the spinsters who came before.

Western society has always struggled with the issue of what to do with unmarried women. Take the religious mania for persecuting so-called witches in the middle ages. Communities fixated on single women – their era’s “other” – not only because they were suspicious of their alternative lifestyles, but because of the collective guilt over their inability to cater or care for them.

When single women weren’t assumed to be witches, they were often taken to be prostitutes – to such an extent that the two terms were interchangeable, including in court documents.

And yet the original spinsters were a not-unrespectable class of tradespeople. The term came into existence in the mid-1300s to describe those who spun thread and yarn, a low-income job that was one of the few available to lower-status, unmarried women. Most still lived in the family home, where their financial contributions were no doubt greatly appreciated. The term bore no stigma and was used almost as a surname, like Smith or Mason or Taylor.

Spinsterhood was accompanied by unusual legal and economic freedoms. The feudal law of coverture invested men with absolute power over their wives, and the “femme sole”, or unmarried woman, was the only category of female legally entitled to own and sell possessions, sign contracts, represent herself in court, or retain wages. It wasn’t until the late 18th century that people began to despise the spinster, and that was largely thanks to the poets, playwrights and other trendsetters of the time, who turned her into one of the most pitiable creatures in literature and, by extension, society.

They trolled never-married women with hideous caricatures of stupidity, meanness and monstrosity (none quite tops the vitriol-filled Satyr Upon Old Maids, an anonymously written 1713 pamphlet decrying these “nasty, rank, rammy, filthy sluts”). And as the policy of Empire forged ahead, women who couldn’t, or wouldn’t, procreate were written off as useless, or selfish, or both. When an 1851 census revealed that one byproduct of the Napoleonic Wars and colonization was a generation of “surplus” women counting in their millions, some suggested taxing their finances, while others called for them to be forcibly emigrated.

And yet it was ultimately the Victorians who, with their indefatigable sense of purpose and powers of association, rescued the spinster, championing in her the rebel spirit that fanned feats of political and social reform. Out of impoverished necessity, never-married women pioneered the way to the first female professions, from governess to nursing, and expanding to typing, journalism, academia and law. They became philanthropists and agitators, educators and explorers; some rejected sexual norms while others became quiet allies of the homosexual community.

What I love about these women is their spirit of urgency – they weren’t waiting for anything. Of all the anxious experiences of spinsterhood, one of the most debilitating is the sense of a life on hold, incomplete. As Roseneil argues in her book, membership of grown-up society is marked by coupling. “There’s something symbolic about transitioning into a permanent relationship that says you are an adult.”

For those of us who haven’t, and may never, make that step, we can be left with the strong impression – not just from society, but from within ourselves – that we’re immature or underdeveloped. Consider another wave of “superfluous women”, between the world wars, whose marriage prospects were shattered by the loss of an entire generation of young men. Popular history recast them as dilettantes and flappers: the spinster’s contribution to national life once again belittled and mocked.

No wonder modern spinsters feel conflicted about where we stand, and whether we’re all we should be. When Professor Paul Dolan, a behavioral scientist at LSE, published research claiming that single women without children were happier than married ones, he was taken aback by the response. “I had lots of emails from single women saying thank you,” says Dolan, “because now people might start believing them when they say they’re actually doing all right. But more interesting was the reactions from people who didn’t want to believe it.

“I’d underestimated how strongly people felt: there was something really insulting about choosing not to get married and have kids. It’s all right to try and fail – but you’d better try. So with these competing narratives, you would be challenged internally as a single woman, where your experiences are different to what they’re expected to be.”

Whether a spinster is happy with her state depends, of course, not just on her personality, her circumstances, and her mood at the moment you ask her, but on an ambivalent definition of contentment. We struggle to remember that, says Dolan, because our human psychology doesn’t deal well with nuance. “Almost everything you experience is a bit good and a bit bad. But with marriage and singleness it’s not voiced the same way. You’ve ticked off this box and got married so you must be happy. The divorce rates show that’s categorically untrue.”

It is time, surely, to change the rules, and the conversation. As the population of never-married women expands, we should be honest about what it meant, and means, to be one. We should celebrate our identity and the life experience that has given it to us. We should reclaim our history and stop being defined by others. Why not start by taking back that dread word, spinster?

https://getpocket.com/explore/item/why-are-increasing-numbers-of-women-choosing-to-be-single?utm_source=pocket_collection_story


*
CHOMSKY’S HYPOCRISY

Chomsky is the most prominent intellectual remnant of the New Left of the 1960s. In many ways he epitomized the New Left and its hatred of “Amerika,” a country he believed, through its policies both at home and abroad, had descended into fascism. In his most famous book of the Sixties, American Power and the New Mandarins, Chomsky said what America needed was “a kind of denazification.”

Of all the major powers in the Sixties, according to Chomsky, America was the most reprehensible. Its principles of liberal democracy were a sham. Its democracy was a “four-year dictatorship” and its economic commitment to free markets was merely a disguise for corporate power. Its foreign policy was positively evil. “By any objective standard,” he wrote at the time, “the United States has become the most aggressive power in the world, the greatest threat to peace, to national self-determination, and to international cooperation.”

As an anti-war activist, Chomsky participated in some of the most publicized demonstrations, including the attempt, famously celebrated in Norman Mailer’s Armies of the Night, to form a human chain around the Pentagon. Chomsky described the event as “tens of thousands of young people surrounding what they believe to be—I must add that I agree—the most hideous institution on this earth.”

This kind of anti-Americanism was common on the left at the time but there were two things that made Chomsky stand out from the crowd. He was a scholar with a remarkable reputation and he was in tune with the anti-authoritarianism of the student-based New Left.

At the time, the traditional left was still dominated by an older generation of Marxists, who were either supporters of the Communist Party or else Trotskyists opposed to Joseph Stalin and his heirs but who still endorsed Lenin and Bolshevism. Either way, the emerging generation of radical students saw both groups as compromised by their support for the Russian Revolution and the repressive regimes it had bequeathed to Eastern Europe.

Chomsky was not himself a member of the student generation—in 1968 he was a forty-year-old tenured professor—but his lack of party membership or any other formal political commitment absolved him of any connection to the Old Left. Instead, his adherence to anarchism, or what he called “libertarian socialism,” did much to shape the outlook of the New Left.

American Power and the New Mandarins approvingly quotes the nineteenth-century anarchist Mikhail Bakunin predicting that the version of socialism supported by Karl Marx would end up transferring state power not to the workers but to the elitist cadres of the Communist Party itself.

Despite his anti-Bolshevism, Chomsky remained a supporter of socialist revolution. He urged that “a true social revolution” would transform the masses so they could take power into their own hands and run institutions themselves. His favorite real-life political model was the short-lived anarchist enclave formed in Barcelona in 1936–1937 during the Spanish Civil War.

The Sixties demand for “student power” was a consequence of this brand of political thought. It allowed the New Left to persuade itself that it had invented a more pristine form of radicalism, untainted by the totalitarianism of the communist world.

For all his in-principle disdain of communism, however, when it came to the real world of international politics Chomsky turned out to endorse a fairly orthodox band of socialist revolutionaries. They included the architects of communism in Cuba, Fidel Castro and Che Guevara, as well as Mao Tse-tung and the founders of the Chinese communist state. Chomsky told a forum in New York in December 1967 that in China “one finds many things that are really quite admirable.” He believed the Chinese had gone some way to empowering the masses along lines endorsed by his own libertarian socialist principles:

"China is an important example of a new society in which very interesting and positive things happened at the local level, in which a good deal of the collectivization and communization was really based on mass participation and took place after a level of understanding had been reached in the peasantry that led to this next step."

When he provided this endorsement of what he called Mao Tse-tung’s “relatively livable” and “just society,” Chomsky was probably unaware he was speaking only five years after the end of the great Chinese famine of 1958–1962, the worst in human history. He did not know, because the full story did not come out for another two decades, that the very collectivization he endorsed was the principal cause of this famine, one of the greatest human catastrophes ever, with a total death toll of thirty million people.

Nonetheless, if he was as genuinely aloof from totalitarianism as his political principles proclaimed, the track record of communism in the USSR—which was by then widely known to have faked its statistics of agricultural and industrial output in the 1930s when its own population was also suffering crop failures and famine—should have left this anarchist a little more skeptical about the claims of the Russians’ counterparts in China.

In fact, Chomsky was well aware of the degree of violence that communist regimes had routinely directed at the people of their own countries. At the 1967 New York forum he acknowledged both “the mass slaughter of landlords in China” and “the slaughter of landlords in North Vietnam” that had taken place once the communists came to power. His main objective, however, was to provide a rationalization for this violence, especially that of the National Liberation Front then trying to take control of South Vietnam. Chomsky revealed he was no pacifist.

It was not only Chomsky who was sucked into supporting the maelstrom of violence that characterized the communist takeovers in South-East Asia. Almost the whole of the 1960s New Left followed. They opposed the American side and turned Ho Chi Minh and the Vietcong into romantic heroes.

When the Khmer Rouge took over Cambodia in 1975 both Chomsky and the New Left welcomed it. And when news emerged of the extraordinary event that immediately followed, the complete evacuation of the capital Phnom Penh accompanied by reports of widespread killings, Chomsky offered a rationalization similar to those he had provided for the terror in China and Vietnam: there might have been some violence, but this was understandable under conditions of regime change and social revolution.

Although information was hard to come by, Chomsky suggested in an article in 1977 that post-war Cambodia was probably similar to France after liberation at the end of World War II when thousands of enemy collaborators were massacred within a few months. This was to be expected, he said, and was a small price to pay for the positive outcomes of the new government of Pol Pot. Chomsky cited a book by two American left-wing authors, Gareth Porter and George Hildebrand, who had “presented a carefully documented study of the destructive American impact on Cambodia and the success of the Cambodian revolutionaries in overcoming it, giving a very favorable picture of their programs and policies.”

“Refugees are frightened and defenseless, at the mercy of alien forces. They naturally tend to report what they believe their interlocutors wish to hear. While these reports must be considered seriously, care and caution are necessary. Specifically, refugees questioned by Westerners or Thais have a vested interest in reporting atrocities on the part of Cambodian revolutionaries, an obvious fact that no serious reporter will fail to take into account.”

In 1980, Chomsky expanded this critique into the book After the Cataclysm, co-authored with his long-time collaborator Edward S. Herman. Ostensibly about Vietnam, Laos, and Cambodia, the great majority of its content was a defense of the position Chomsky took on the Pol Pot regime. By this time, Chomsky was well aware that something terrible had happened: “The record of atrocities in Cambodia is substantial and often gruesome,” he wrote. “There can be little doubt that the war was followed by an outbreak of violence, massacre and repression.” He mocked the suggestion, however, that the death toll might have reached more than a million and attacked Senator George McGovern’s call for military intervention to halt what McGovern called “a clear case of genocide.”

The historian Ben Kiernan later went on to write The Pol Pot Regime: Race, Power and Genocide under the Khmer Rouge 1975–79, a book now widely regarded as the definitive analysis of one of the most appalling episodes in recorded history. In the evacuation of Phnom Penh in 1975, tens of thousands of people died. Almost the entire middle class was deliberately targeted and killed, including civil servants, teachers, intellectuals, and artists. No fewer than 68,000 Buddhist monks out of a total of 70,000 were executed. Fifty percent of urban Chinese were murdered.

Kiernan argues for a total death toll between April 1975 and January 1979, when the Vietnamese invasion put an end to the regime, of 1.67 million out of 7.89 million, or 21 percent of the entire population. This is proportionally the greatest mass killing ever inflicted by a government on its own population in modern times, probably in all history.

Chomsky was this regime’s most prestigious and most persistent Western apologist. Even as late as 1988, when they were forced to admit in their book Manufacturing Consent that Pol Pot had committed genocide against his own people, Chomsky and Herman still insisted they had been right to reject the journalists and authors who had initially reported the story. The evidence that became available after the Vietnamese invasion of 1979, they maintained, did not retrospectively justify the reports they had criticized in 1977.

They were still adamant that the United States, which they claimed had started it all, bore the brunt of the blame. In short, Chomsky still refused to admit how wrong he had been over Cambodia.

*

Chomsky has persisted with this pattern of behavior right to this day. In his response to September 11, he claimed that no matter how appalling the terrorists’ actions, the United States had done worse. He supported his case with arguments and evidence just as empirically selective and morally duplicitous as those he used to defend Pol Pot. On September 12, 2001, Chomsky wrote:

“The terrorist attacks were major atrocities. In scale they may not reach the level of many others, for example, Clinton’s bombing of the Sudan with no credible pretext, destroying half its pharmaceutical supplies and killing unknown numbers of people.”

This Sudanese incident was an American missile attack on the Al-Shifa pharmaceutical factory in Khartoum, where the CIA suspected Iraqi scientists were manufacturing the nerve agent VX for use in chemical weapons contracted by the Saddam Hussein regime. The missile was fired at night so that no workers would be there and the loss of innocent life would be minimized. The factory was located in an industrial area, and the only apparent casualty at the time was the caretaker.

The idea that tens of thousands of Sudanese would have died within three months from a shortage of pharmaceuticals is implausible enough in itself. That this could have happened without any of the aid organizations noticing or complaining is simply unbelievable.

Hence Chomsky’s rationalization for the September 11 attacks is every bit as deceitful as his apology for Pol Pot and his misreading of the Cambodian genocide.

“It is the responsibility of intellectuals to speak the truth and to expose lies,” Chomsky wrote in a famous article in The New York Review of Books in February 1967. This was not only a well-put and memorable statement but was also a good indication of his principal target. Most of his adult life has been spent in the critique of other intellectuals who, he claims, have not fulfilled their duty.  

Yet at the very time he was making this critique, Chomsky himself was playing at social engineering on an even grander scale. As he indicated in his support in 1967 for the “collectivization and communization” of Chinese and Vietnamese agriculture, with its attendant terror and mass slaughter, he had sought the calculated reorganization of traditional societies. By his advocacy of revolutionary change throughout Asia, he was seeking to play a role in the reorganization of the international order as well.

Although his politics made him famous, Chomsky has made no substantial contribution to political theory. Almost all his political books are collections of short essays, interviews, speeches, and newspaper opinion pieces about current events. The one attempt he made at a more thoroughgoing analysis was the work he produced in 1988 with Edward S. Herman, Manufacturing Consent: The Political Economy of the Mass Media. This book, however, must have been a disappointment to his followers.

The book offers a crude analysis that would have been at home in an old Marxist pamphlet from the 1930s. Apart from the introduction, most of the book is simply a re-hash of the authors’ previously published work criticizing media coverage of events in central America (El Salvador, Guatemala, and Nicaragua) and in southeast Asia (Vietnam, Laos, and Cambodia), plus one chapter on reporting of the 1981 KGB-Bulgarian plot to kill the Pope.

Chomsky has declared himself a libertarian and anarchist but has defended some of the most authoritarian and murderous regimes in human history. His political philosophy is purportedly based on empowering the oppressed and toiling masses, but he has contempt for ordinary people, whom he regards as ignorant dupes of the privileged and the powerful. He has defined the responsibility of the intellectual as the pursuit of truth and the exposure of lies, but has supported the regimes he admires by suppressing the truth and perpetrating falsehoods. He has endorsed universal moral principles but has applied them only to Western liberal democracies, while continuing to rationalize the crimes of his own political favorites.

He is a mandarin who denounces mandarins. When caught out making culpably irresponsible misjudgments, as he was over Cambodia and Sudan, he has never admitted he was wrong.

Today, Chomsky’s hypocrisy stands as the most revealing measure of the sorry depths to which the left-wing political activism he has done so much to propagate has now sunk.

https://newcriterion.com/article/the-hypocrisy-of-noam-chomsky/

Opinion in the United States with regard to Israel has shifted. Israel used to be the darling of the liberal American Jewish community. Now, the main support for Israel is the far-right evangelical community that has become politicized in the last 20 or 30 years as very strong supporters of Israel, mostly for extreme anti-Semitic reasons. Meanwhile, liberals, liberal Democrats, have drifted away. Look at the last poll: among Democrats, there’s more sympathy for Palestinians than for Israel. It’s particularly true among younger people, including younger Jews. ~ Noam Chomsky

It’s almost inconceivable that Israel will ever agree to destroy itself and become a Jewish minority population in a Palestinian-dominated state, which is what the demography indicates. And there’s no international support for it.  . . . Israel decided in the 1970s, it made a fateful decision to choose expansion over security. Well, that meant that Israel was dependent for its security and support by the United States. That’s the bargain. If you choose expansion over security, you depend on a powerful state. If the US changes its policy, Israel has difficult choices to make. ~ Noam Chomsky

Israel has been the leading issue of my life since early childhood. I started talking publicly about the criminal nature of Israel’s actions in 1969 – it should have been much earlier. I could also see the acts of repression and … insulting the non-Ashkenazi Moroccan Jewish population. All of those things should have been talked about. I didn’t become involved until after the ’67 war and Israel initiated its policies of settlement and development in the occupied territories, which expanded and led to the current situation. I was much too mild in my criticism and much too late. ~ Noam Chomsky

https://www.aljazeera.com/features/2023/4/9/qa-noam-chomsky-on-palestine-israel-and-the-state-of-the-world


*
COUNTRIES THAT HAVE BANNED THE BURQA:

France, Belgium, Chad, Sri Lanka, Cameroon, Germany, Bulgaria, Netherlands, Denmark, Italy, China, Niger, Austria, Switzerland, Republic of Congo, Russia, Spain, Norway.

The report says there are only six countries within the European Union (EU)—Croatia, Cyprus, Greece, Poland, Portugal, and Romania— that haven't banned Islamic headscarves or face veils in some form or discussed a proposal to do so.

Certain countries have partial bans, meaning the burqa is illegal in some municipalities but not in others.

*
WHEN A CHURCH CLOSES ITS DOORS

Pastor Ryan Burge

They gathered one last time on Sunday — the handful of mostly elderly members of First Baptist Church in Mt. Vernon, Illinois.

The members, joined by well-wishers, said the Lord’s Prayer, recited the Apostles’ Creed and heard a biblical passage typically used at funerals, “To everything there is a season ... a time to be born, and a time to die.” They sang classic hymns — “Amazing Grace,” “It Is Well With My Soul” and, poignantly, “God Be With You Till We Meet Again.”

Afterward, members voted unanimously to close the church, a century and a half after it was created by hardscrabble farmers in this southern Illinois community of about 14,000 people.

Many U.S. churches close their doors each year, typically with little attention. But this closure has a poignant twist.

First Baptist’s pastor, Ryan Burge, spends much of his time as a researcher documenting the dramatic decline in religious affiliation in recent decades. His recent book, “The Nones,” talks about the estimated 30% of American adults who identify with no religious tradition.

He uses his research in part to help other pastors seeking to reach their communities, and he’s often invited to fly around the country and speak to audiences much larger than his weekly congregation.

But it’s no academic abstraction. Burge has witnessed the reality of his research every Sunday morning in the increasingly empty pews of the spacious sanctuary, which was built for hundreds in the peak churchgoing years of the mid-20th century.

“It’s this odd thing, where I’ve become somewhat of an expert on church growth, and yet my church is dying,” said Burge, a political science professor at Eastern Illinois University. “A lot of what I do is trying to figure out how much I am to blame for what’s happened around me.”

Burge, 42, started leading the congregation in 2006, when “there were about 50 people on a good Sunday,” he recalled. In the years since, he’s earned his doctorate and begun working as a professor. He’s gained a wide online and print readership, in part by converting dense statistical tables into easy-to-comprehend graphics on religious trends.

All this time, he’s continued to pastor the small church.

“I’m willing to admit that I’m not as good as I could be or should be” as a pastor, he said. “But I’m also not willing to admit that it’s 100% my fault. If you look at the macro level trends happening in modern American religion, it’s hard to grow a church in America today, regardless of what your denomination is. And a lot of places have way more headwinds than tailwinds.”

The church’s American Baptist denomination is part of a cluster of so-called mainline denominations — Episcopal, Methodist, Presbyterian, Lutheran and others that were once central in their communities but have been dramatically shrinking in numbers. The nation’s largest evangelical denomination, the Southern Baptist Convention, has also been losing members.

While there’s no annual census of U.S. church closures, about 4,500 Protestant churches closed in 2019, according to the Southern Baptist-affiliated Lifeway Research.

Scholars say churches dwindle for various reasons — scandal, conflict, mobility, indifference, lower birth rates, members shifting to a church they like better. To be sure, most Americans remain religious, and some larger churches are thriving while many smaller ones dwindle. Some surveys suggest that the long rise of the “nones” has slowed or paused.

But the nonreligious are far more common today than a generation ago, in the U.S. and many other nations.


Roughly a dozen people attend pastor Ryan Burge’s Sunday service at First Baptist Church in Mt. Vernon, Ill., Sept. 10, 2023

“If Billy Graham would have been born in 1975 instead of 1918, I don’t think he would have been as successful, because he hit his peak right as the baby boom was taking off and America was really hungry for religion,” Burge said.

Things are particularly challenging where communities are shrinking, such as the Rust Belt and rural areas.

Burge hopes his research, and his personal experience, can offer some consolation to other pastors in similar circumstances.

“This is not all your fault,” he said. “You know, in the 1950s, you could be a terrible pastor and probably grow a church because there just was so much growth happening all across America. Now it doesn’t look like that anymore.”

Gail Farnham, 80, has seen that trajectory of church life first-hand.

Her family began attending First Baptist Church when she was 5. Her parents quickly got involved as volunteers and “never looked back,” she recalled. Like many American families in the ‘50s, they joined during the booming rise in church involvement. First Baptist peaked at about 670 members by mid-century, leading to the construction of a large new sanctuary and a suite of Sunday School classrooms.

Farnham went on to raise her own children in the church, and as the congregation’s moderator, she continued to hold a top leadership role.

First Baptist has had its share of schisms and controversies in the past, but it largely followed the typical arc of many Protestant churches, thriving in the 1950s and only gradually losing sustainability. The Sunday before its final service, eight worshipers attended. This Sunday’s attendance of about 40 was swelled by former members and others, gathering for the momentous final service.

The remaining members, primarily older, found a new mission in recent years despite the uncertain future. They joined a program to provide bag lunches for needy schoolchildren. At one point they were providing 300 meals per week.

The closure is “bittersweet,” Farnham said.

“It’s something we’ve seen coming,” she said. “It’s not a surprise. We’re thankful we’ve been able to serve and meet a need in the community. We turned from being a church saying, ‘Oh me, oh my, what are we going to do?’ to being a church that said, ‘We’re going to serve as long as we can with the best we can.’”

Now everyone, Burge included, will be looking for a new church. “I have been preaching every Sunday since August of 2005 and I need to be a member of a church for a while, not up front,” he said.

https://apnews.com/article/illinois-christian-baptist-church-closure-religious-trends-c6b9e938eb228c8018fb911b02a791f0?utm_source=pocket-newtab-en-us


*
SHOULD YOU AVOID EATING BURNT FOOD?

Acrylamide, or acrylic amide, is a chemical compound produced when carbohydrate-rich foods react with amino acids, primarily asparagine, during cooking, baking, or roasting.

In 2002, scientists at the University of Stockholm discovered that it might actually be wise to scrape the burnt bits off your toast. They found that a substance called acrylamide forms when we apply heat over 120°C (248°F) to certain foods – including potato, bread, biscuits, cereal, and coffee – and their sugar content reacts with the amino acid asparagine.

This process is called the Maillard reaction, and it causes food to brown and gives it that distinctive flavor. Scientists have found that acrylamide is carcinogenic in animals, but only in doses much higher than those found in human food.

According to the European Food Safety Authority, acrylamide could also increase the risk of cancer in humans, especially children. But researchers looking into the effects on humans have not yet been able to come to a definite conclusion.

Scientists are sure, however, that acrylamide is neurotoxic to humans, which means it can affect the nervous system. The exact mechanism is still not fully understood, but among the theories are that acrylamide attacks structural proteins within nerve cells or inhibits the anti-inflammatory systems that protect nerve cells from damage.

The toxic effects of acrylamide have been shown to be cumulative, which means that consuming a small amount of acrylamide over a long period of time could increase the risk of it affecting organs in the longer term.

More specifically, evidence from animal studies suggests that long-term exposure to dietary acrylamide could also increase the risk of neurodegenerative disease, such as dementia, and may be associated with neurodevelopmental disorders in children, says Federica Laguzzi, assistant professor of cardiovascular and nutritional epidemiology at the Institute of Environmental Medicine at Karolinska Institutet in Sweden.

"Acrylamide passes through all tissue, including the placenta, because it has a low molecular weight and is soluble in water," says Laguzzi, who has found a link between higher acrylamide intake in pregnant women and the lower birth weight, head circumference and length of their newborn babies.  

The potential mechanism behind acrylamide's role in increasing the risk of cancer in humans isn't yet known. Leo Schouten, an associate professor of epidemiology at Maastricht University in the Netherlands, has a theory why it might happen.

After the 2002 discovery of the presence of acrylamide in our food by Swedish researchers, the Dutch Food Authority contacted investigators of the Netherlands Cohort Study on Diet and Cancer, including Schouten, to investigate whether dietary acrylamide was a risk for humans. Schouten and colleagues tried to capture an estimate of how much acrylamide people were consuming based on a questionnaire. 

They discovered that the variation between people with low and high exposure in an elderly Dutch population could be explained mainly by one product popular in the Netherlands called ontbijtkoek, roughly translated as "breakfast cake", which was extremely high in acrylamide due to the use of baking soda in its production.

They investigated the link between non-smokers' acrylamide intake (as cigarette smoke also contains the substance) and all cancers, and found a higher risk of endometrial and ovarian cancers in women with high exposure to acrylamide. In further studies, they have also found a slight link between acrylamide intake and kidney cancer.

However, these findings are yet to be confirmed by any other researchers. The closest is a US population study, which published findings in 2012 suggesting an increased risk of ovarian and endometrial cancer among non-smoking post-menopausal women who consumed high amounts of acrylamide. Of course, there could be other reasons for this – people who eat high levels of acrylamide might also follow other lifestyle choices that put them at a higher risk.

Other studies haven't found an association, or saw weaker associations. But it's unclear whether the association Schouten and his team found was incorrect, or if other studies weren't able to measure acrylamide intake accurately.

The mechanism behind acrylamide's potential cancer-causing effect could be related to hormones, Schouten says, because certain hormones have been associated with an increased risk of cancer, especially female genital cancers like endometrial and ovarian cancer.

"Acrylamide may affect estrogen or progesterone, which would explain the female cancers, but this hasn't been proven," says Schouten.

Laboratory studies involving rats have also found links between acrylamide intake and cancer in mammary glands, thyroid gland, testes and the uterus, which also suggest a hormonal pathway, but this does not automatically mean a similar risk to humans.

In 2010, the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives suggested that more long-term studies are needed to further understand the link between acrylamide and cancer. It did, however, support efforts to reduce acrylamide levels in food.

"It's well established that acrylamide is genotoxic and can cause cancer in animals, but the association between acrylamide and cancer in humans is still unclear," says Laguzzi. "Most epidemiological studies are performed with acrylamide intake measured through dietary questionnaires that rely on people's reporting, which can bias the results.”

While Schouten believes he was able to accurately measure acrylamide in people's diets, not everyone agreed, including many toxicologists. Another way to measure acrylamide intake is by measuring biomarkers in urine and blood, but this hasn't found anything concrete, either, Schouten says.

It's important to do more research where acrylamide is measured with biomarkers, especially through blood, as this shows acrylamide intake over a longer period of time than urine, says Laguzzi.

Acrylamide has been measured through biomarkers in US studies. One study from 2022, using data spanning a decade, shows a link between acrylamide intake and deaths from cancer, but it couldn't conclude which cancers.

One reason there may not be much conclusive evidence that the levels of acrylamide in our diets can increase the risk of cancer is because we could have protective measures that limit the increased risks associated with our overdone chips.

Laguzzi has found no link between non-gynaecological cancer risk and acrylamide intake in her research summarizing the population evidence of this association. She says this could be because either humans have good reparative mechanisms to help prevent both potential carcinogenic and neurotoxic effects, or because these studies were performed using inaccurate measures of dietary acrylamide exposure.

"Also, we don't just eat acrylamide on its own. It's in food, where there could also be other components, like antioxidants, that can help prevent the toxic mechanisms," she says.
Despite the absence of solid research showing the risks to humans of eating acrylamide, the food industry is taking measures to reduce it in our foods.

"The EU is in the process of setting maximum allowable levels for acrylamide in food, and that could have serious repercussions for the food supply chain," says Nigel Halford, whose research is helping farmers to reduce the potential for acrylamide formation in products made from wheat.

While acrylamide isn't found in plants, asparagine, which is the substance that turns into acrylamide when heated, is.

"Acrylamide affects quite a wide range of foods that come from cereal grains, so it's quite big deal for the food industry," he says.

Wheat grain accumulates much more asparagine than necessary, and it seems to accumulate more when it doesn't get all the nutrients it needs, Halford says, particularly sulphur. Halford is trying to stop this process genetically, using the gene-editing technique Crispr.

At the other end of the supply chain, many producers have been urged to reduce the acrylamide content of their products where possible, especially in baby food.

This has been quite successful, says Schouten, who is pleased that the Dutch breakfast cake ontbijtkoek now contains around 20% of the acrylamide it used to, thanks to changes in how it's produced.

There are also ways to reduce acrylamide at home when cooking, says Saleh. She advises that, when making chips, for example, soaking cut potatoes in hot water for 10 minutes can reduce their acrylamide formation by almost 90%.


Scientific interest in the health risks of acrylamide has grown again in recent years, says Laguzzi. It will be a long process, but within a few years any link between acrylamide intake and cancer risk will hopefully be clearer, she says. In the meantime, that habit of scraping the burnt bits off your toast might not be such a bad idea.

https://getpocket.com/explore/item/should-you-avoid-eating-burnt-food?utm_source=pocket-newtab-en-us

Oriana:
Instant coffee is another source of acrylamide — though fried potatoes contain much more. Reducing carbohydrates in your diet and cooking at lower temperatures will reduce acrylamide. Acrylamide is typically not found in meat, dairy, or seafood.

To lower your exposure to acrylamide, avoid cereals, oatmeal, potato chips, french fries, roasted nuts, peanut butter, cakes and other flour-based food — and, alas, instant coffee (which is my favorite morning beverage). Fresh roasted coffee contains half the acrylamide found in instant coffee. Surprisingly, prunes and olives also contain acrylamide.

The good news is that our bodies have ways to detoxify low doses of acrylamide.

*


It's thought that people began cooking as early as a million years ago.

*
THE BENEFITS OF GLUCOSAMINE — NOT JUST FOR THE JOINTS

Glucosamine is a compound that’s naturally produced by your body. Most commonly, it exists in your cartilage and helps create the proteins and fats that repair your cartilage when it’s damaged.

Glucosamine isn’t commonly found in foods, but it is often sold as a supplement in drops, capsules, or topical forms. Supplements can be made from the shells of shellfish or be produced artificially. Taking glucosamine supplements may offer health benefits, particularly for joint pain such as arthritis.

Glucosamine supplements can provide some important health benefits. Early trials suggest that glucosamine may have some antioxidant effects that can improve your eye health — which is particularly helpful for people with conditions like glaucoma. [Oriana: Note, however, that many sources state that glucosamine increases the risk of glaucoma]

In addition, glucosamine can provide benefits like:

Reduced Joint Pain
Studies around the world have shown that glucosamine appears to reduce joint pain, especially among people with osteoarthritis. In fact, the scientific support for glucosamine is so strong that the compound is available as a medicinal substance — not just as a supplement — in the U.K. and other areas of Europe.

Supplements of combined glucosamine and chondroitin — a related compound also found in cartilage — have been shown to be as effective as osteoarthritis medications such as celecoxib (Celebrex). For people who do not react well to non-steroidal anti-inflammatory drugs like celecoxib, glucosamine supplements may be a safe and effective way to reduce symptoms of arthritis.

May Reduce Inflammation
Glucosamine and chondroitin supplements may also help reduce chronic inflammation, which is linked to a number of potential health problems like heart disease, diabetes, and arthritis. This may be part of the reason why glucosamine reduces arthritis pain. Studies have linked regular consumption of glucosamine supplements with lower levels of inflammation. Regularly taking the supplements can help lower your risk of a number of chronic conditions.

May Improve Bone Health
While more research needs to be done, early studies suggest that glucosamine supplements may help prevent the progression of osteoporosis post-menopause. Glucosamine appears to help reduce the weakening of bones by supporting healthy bone growth. This makes the supplements especially helpful for people who are at risk of developing osteoporosis as they age.

https://www.webmd.com/diet/health-benefits-glucosamine

*
Is glucosamine more effective with chondroitin?
 
A 2018 combined analysis of 29 studies in people with knee osteoarthritis (6,120 total participants) showed that global pain was significantly reduced by glucosamine or chondroitin taken separately but not by the combination of the two.
 
Oriana:
 
My personal experience with glucosamine was memorable, and has affected my attitude toward supplements in general. I had severe post-traumatic arthritis. A doctor showed me the damage on X-rays and said that when I was no longer able to walk the length of a block, my insurance would pay for knee replacement.
 
I turned to glucosamine instead, using the recommended dose. Nothing happened. Then I increased the dose. Still nothing. In my desperation, I increased the dose once more -- and then finally, finally . . . . experienced more pain relief than I dared hope for. 
 
Note that I'm talking about a megadose that I personally calibrated. Eventually I did have to have knee replacement, but my surgery didn't restore pain-free walking. So I still take glucosamine -- 3-4 capsules at a time.

*

GLUCOSAMINE SUPPLEMENTS COULD BENEFIT THE HEART

A large 2019 study with data from the UK Biobank reveals that regular use of glucosamine supplements could reduce the risk of getting cardiovascular disease (CVD) and cardiovascular events. The study appeared this week in the latest issue of The BMJ and was titled, “Association of habitual glucosamine use with risk of cardiovascular disease: prospective study in UK Biobank”.

The researchers from Tulane University, Harvard University, and Harbin Medical University in China noted that habitual or regular use of these supplements could prevent CVD events such as coronary heart disease and stroke. Professor Lu Qi at Tulane University in New Orleans and his team gathered data from the UK Biobank, which holds records on over half a million British participants.

For this study they analyzed 466,039 participants without CVD. These participants completed a questionnaire in which they reported their use of glucosamine supplements. The participants were then followed up over time, with hospital records and death certificate information revealing the number of CVD events and deaths that took place among them. Tracking lasted an average of seven years, during which CVD events such as those due to coronary heart disease and stroke were recorded.

Results revealed that 19.3 percent of the participants had been using glucosamine supplements since the start of the study. On analysis, the researchers noted that use of these supplements was associated with a 15 percent lower risk of all CVD events, and a 9 to 22 percent lower risk of coronary heart disease, stroke and death due to CVD, compared with non-users. During the follow-up there were 10,204 CVD events, 3,060 deaths due to CVD, 5,745 coronary heart disease events and 3,263 strokes.

“I am a bit surprised but not very much, because previous studies from humans or animals have shown that glucosamine may have protective effects against inflammation, which is a risk factor for cardiovascular disease,” Qi said.

Before making conclusive statements, the team also meticulously took into consideration other risk factors such as age, gender, weight, body mass index (BMI), lifestyle choices, diet, ethnicity, and other medication and supplement use; these were all adjusted for before the benefits of glucosamine were ascertained. For example, they found that glucosamine use was associated with a 37 percent lower risk of coronary heart disease among smokers, 18 percent among former smokers, and 12 percent among those who had never smoked.

Authors explain that glucosamine use reduced levels of C-reactive protein (CRP). CRP is normally associated with inflammation and is raised among those with various conditions including CVD. It is also raised among smokers. 

Glucosamine use may also mimic a low-carbohydrate diet and thus help prevent heart disease; a low-carbohydrate diet has been associated with a lower risk of developing CVD.

Senior author, Lu Qi, in a statement said, “This is good news for people who take these supplements.” He warned however, “we really need additional data from population studies and further, more solid data from clinical trials.”

The authors of the study agree that they did not have clear information regarding the dose of the glucosamine supplements taken and they also did not have data on the side effects of such use. They however conclude, “...habitual use of glucosamine supplements to relieve osteoarthritis pain might also be related to lower risks of CVD events. Further clinical trials are warranted to test this hypothesis.”

Glucosamine is a “non-vitamin, non-mineral supplement”. It is sold only on prescription in European countries but is available over the counter in the United States and Australia, where around one in five adults consume glucosamine supplements regularly. According to a 2008 CDC report, it is the second most common nutritional supplement (that is not a vitamin or a mineral) in the United States, after omega-3 supplements.

Glucosamine supplements should not be taken by people who are allergic to shellfish and by women who are pregnant or breast feeding. It can affect blood clotting in people who are taking blood thinners such as Warfarin. The supplement is also not to be taken along with certain anti-cancer drugs as it may interfere with their efficacy.

https://www.news-medical.net/news/20190514/Glucosamine-supplements-could-benefit-the-heart.aspx

*
ending on beauty:

"To see a World in a grain of sand
And a Heaven in a wild flower,
Hold Infinity in the palm of your hand
And Eternity in an hour.”

~ William Blake, from "Auguries of Innocence"

Grain of sand, magnified

