Saturday, December 25, 2021

THE UNFUNNY LUCILLE BALL IN “BEING THE RICARDOS”; JOAN DIDION’S CONTROLLED ANXIETY; EFFORTLESS WAY TO IMPROVE YOUR MEMORY; DOSTOYEVSKY'S PROBLEM WITH HOLBEIN’S DEAD CHRIST; HOW ANESTHESIA CHANGED CHILDBIRTH

COLD FIRES

My last Christmas Eve in Warsaw —
the gray, uncertain day
dying into the early dark.
We wait for the first star, then light
the twelve skinny candles on the tree
and break the wishing wafer.  

Holding a jagged shard of a wish,
mother intones: “Health and success,
fulfillment of all dreams.”
Kissing on both cheeks,
we break the wafer each with each.
So begins Wigilia,
the supper of Christmas Eve.

The number of the dishes
has to be odd: spicy red borscht
with uszka, “little ears” —
pierogi with cabbage and wild mushrooms
soaked back to dark flesh
from the pungent wreaths;
fish — the humble carp;
potatoes, a compote from dried fruit,
and poppy-seed cake.
Father counts: “If it doesn’t
come out right, we can always
include tea.”

He drops a pierog
on the starched tablecloth.
I stifle laughter as he picks it up
solemnly like a communion host.
On the fragrant, flammable tree,
angel-hair trembles in silver drafts.

Then we turn off the electric lights.
Now only the candles glow
in a heavenly hush,
though I no longer believe in heaven.
Father sets a match
to the “cold fires.” Icy starbursts hiss
over the staggered pyramid of gifts:

slippers and scarves, a warm skirt,
socks and more socks,
a book I will not finish.
We no longer sing carols,
mother playing the piano —
the piano sold by then,
a TV set in its place.  

Later, unusual for a Christmas Eve,
we go for a walk. The streets
are empty; a few passers-by
like grainy figures in an old movie.
It begins to snow.

I never saw such tenderness —
snowflakes like moths of light  
soothing the bare branches,
glimmering across
hazy halos of street lamps.
Each weightless as a wish,
snowflakes kiss our cheeks.
They settle on the benches and railings,
on the square roofs of kiosks —

on the peaceful,
finally forgiven city.

~ Oriana
THE GHOST OF CHRISTMAS PAST

Oriana:

For me Christmas had always been about the special food on Christmas Eve (which in Polish is not called “Christmas Eve” but “vigil” [Wigilia] — see my poem “Cold Fires” on this page), the evergreen tree (which in Polish is not called a “Christmas tree”) and its scent (ah, the scent! — this holiday was very much about the scent of an evergreen), the gifts, the tree lights and ornaments, the wishes, the nice clothes, the carols, the family warmth and coziness. Even back when I did go to church, that was not the important part of the holiday. If food was a “ten,” then church was a “one.”

So when I stopped going to church on Christmas or any other day, Christmas went on as before — defined by the special supper and gift-giving.

Nor did I miss anything. I left the nativity story as I left children's books — which I also didn’t miss, except maybe Winnie the Pooh.

God was the all-seeing, all-powerful tyrant to be feared, and the sweetness of the nativity creche did not obscure that. Not for me. I did like the animals though. They were the best part. They lent the most comfort and a momentary forgetting that here was a terrorist religion based on threats of hellfire. It was marvelous to drop that part and keep on enjoying the celebration.

So Christmas was basically secular from the start. But that doesn’t mean that I object to the crèche displays. True, at first it was a bit of a shock to discover that it was all a myth — not Mary’s virginity, since that part was obvious, but Bethlehem, the census, the fact that the “slaughter of the innocents” never took place but was invented to echo the slaughter of the first-born Egyptian infants, just as the flight into Egypt was invented so that there could be a return from Egypt.

Or the heavy possibility that Jesus never even existed — at least not as presented in the Gospels. But even before I fully digested the made-up nature of it all, I was able to enjoy the stories as stories — the way I loved Greek myths, even with the cruelty inherent in many of them. “The Greeks really had great imagination,” I used to think — never mind the fusion from other traditions, never mind any scholarly examination.

And if someone says “Merry Christmas” to me, I say it back, knowing that for neither of us is it about religion.

Mary:

The history of how the American Christmas developed is both interesting and surprising. It makes sense that the holiday gained significance and importance through the time of the Civil War, when so many families were separated for so long, suffered so many losses, and the fabric of society itself was so threatened and tested. Home, peace, and family were more precious and important than ever, and making the observance of Christmas a national holiday could contribute to recovery from the divisions and privations of war.

The secular celebration was already more important to unity than the religious observance. The transition from the old ways of observing Christmas, more dedicated to spiritual and religious matters, to the current form was like the Americanization of old-world traditions. Immigrants would observe the old traditions: the Wigilia, the feast of the seven fishes, Little Christmas, the blessing of food in the church, the holy wafer at the feast. But as the generations followed, more and more of the old observances disappeared, and the most persistent, those that remained the longest, were the traditions around celebratory foods, often reserved only for this season.

The crazy, and ugly, commercialization of Christmas is an unfortunate modern American development. Not gift giving in itself, but the amping up of an old tradition into the kind of pressurized madness to spend and spend and spend, the obscene scenarios on "Black Friday," the frantic greed and grab, the fact that retailers depend on Christmas buying for success... all this not only takes over but actually displaces many of the pleasures of the celebration. People go into debt, spend what they can’t afford, and it doesn't make anyone happier.

I always felt the heart of the season was a celebration of light. In religion, for Christians, the birth of Christ as the Light of the World. I love the lights of Christmas, the lighted trees, the lights strung up everywhere, shining in the dark. And the songs, the music. Those are the ways I like to celebrate...family, food, light, music. Gifts are always small and incidental, not the main point.

Oriana:

Yes, the “war on Christmas” has already been won — by the merchants. Not that the churches made any valiant effort to save Christmas from Mammon. 

I stopped celebrating after my mother died. But even long before then, Christmas wasn't real Christmas anymore. At least you and I have memories. 


**
“I'm dreaming of a wet Christmas” — this year, the dream came true in California. There could be no better Christmas gift than rain.

But speaking of the Ghost of Christmas Past, thirty years ago the red hammer-and-sickle flag was lowered from the Kremlin flagpole and the Soviet Union ceased to exist. Talk about a Christmas gift to the world! Putin called it “the greatest geopolitical catastrophe of the century.”

This YouTube video is a must-see (and a must-hear — you'll hear some of the Soviet anthem, which has a massive grandeur):

https://www.youtube.com/watch?v=xak4CaM-Nvg



*
WE HAVE LOST JOAN DIDION

“We tell ourselves stories in order to live...We look for the sermon in the suicide, for the social or moral lesson in the murder of five. We interpret what we see, select the most workable of the multiple choices. We live entirely, especially if we are writers, by the imposition of a narrative line upon disparate images, by the ‘ideas’ with which we have learned to freeze the shifting phantasmagoria which is our actual experience.”

~ Joan Didion, The White Album

*
BALANCED ON THE VERGE OF A NERVOUS BREAKDOWN AND A NOBEL PRIZE

~ Joan Didion was a mood, the static in the air before an earthquake, the sense that your footing is crumbling beneath you. She started out as a John Wayne–worshipping Goldwater Republican and ended up criticizing U.S. foreign policy and debunking the case against the Central Park Five in the New York Review of Books. Her signature style, honed and hardened by disillusionment and a profound mistrust of sentimentality in all its forms, was both a repudiation of her roots and a veiled embrace of its frontier toughness. Descended from the survivors of the Donner Party, she prided herself on facing brutal truths, even when one of those truths is that the mythos that gave your ancestors the will to carry on was just another sham.

She produced a number of much-quoted lines, from “We tell ourselves stories in order to live” (not the bromide about the life-giving power of fiction it’s typically taken to be) to “Writers are always selling somebody out” (which is quintessential Didion). One of those lines, “Style is character,” from an interview with the Paris Review, seemed to capture her own ethos best. Her first real job was in New York, at Vogue, where, she always maintained, she learned to write with the utmost of compression and economy.

By the end of her career, Didion was as much an icon for her persona as for her writings, hired by Celine to model sunglasses in magazine ads and adored by young women writers for her chic sangfroid as well as her prose. Photos taken of Didion in Los Angeles in 1968 in a long dress, cigarette in hand, leaning against her yellow Corvette, captured an idea about female artistry, simultaneously fragile and impenetrable, balanced on the verge of a nervous breakdown and a Nobel Prize, that, while clearly no fun—photos of Didion smiling are vanishingly rare—nevertheless felt enviably cool.

In that same Paris Review interview, Didion explained that despite writing from her early childhood, at first she wanted to be an actress, like the main character of her best-known novel, Play It as It Lays. “It’s the same impulse,” she said. “It’s make-believe. It’s performance.” The great movie stars of Didion’s era exerted a particular kind of charisma that she, too, mastered. It’s the art of offering your whole self up to the camera or the page and yet also holding something back. Didion’s restraint and discipline as a writer served as more than just the container for the various species of chaos she wrote about. It also gestured toward the unsaid, the part of herself she kept forever in reserve.

Few actors can finesse this paradox, and even fewer writers. Didion herself didn’t always pull it off smoothly. After The Year of Magical Thinking, she published 2011’s Blue Nights, an account of her daughter’s death that, according to her biographer, Tracy Daugherty, attracted criticism for shying away from a full account of the cause of that death. That omission pointed, perhaps, toward where her true vulnerabilities lay.

Such slips were rare, however, and even in her confessional mode, Didion always felt elusive. That was what made us want to follow her across the assortment of blasted heaths she visited, surveyed, and dissected with the chilliest aplomb. There was the precision of her prose and then there was the mystery of the woman who chipped it out of ice, so pure and translucent it seemed like a natural phenomenon. Like all the best mysteries, it will never be solved. ~

https://slate.com/culture/2021/12/joan-didion-death-obituary-magical-thinking.html?fbclid=IwAR2oXV3NznWVuUyZNxYYGc4pMKkk_uvQSciUZ1hwV_zthDg76GfRCdUC8d4

Oriana:

Didion’s adopted daughter, Quintana, died of acute pancreatitis, a disease most often caused by alcoholism. But I don’t think a mother should be condemned for not discussing her daughter’s alcoholism, especially while she’s still mourning both the loss of her husband and her daughter’s untimely death at only 39. Of course a mother would try to paint a sympathetic portrait of her child rather than hang the “alcoholic” label on her. Didion wanted the readers to appreciate other facets of Quintana’s personality. That is excusable in a mother.

On the other hand, Didion helped kindle the awareness that adopted children may be traumatized by the knowledge of having been adopted (i.e., given away by their genetic parents; is “abandonment” too strong a term?) — and that the match with adoptive parents is not always successful.

And we shouldn’t forget that Quintana’s death was a finale to a life marred by rapid mood changes, depression, and even suicidal despair:

“I had seen her wishing for death as she lay on the floor of her sitting room in Brentwood Park, the sitting room from which she had been able to look into the pink magnolia. Let me just be in the ground, she had kept sobbing. Let me just be in the ground and go to sleep.” ~ Joan Didion, Blue Nights

Joan Didion with Quintana Roo, 1966

*
JOAN DIDION’S CONTROLLED ANXIETY

~ She was famous for her detached, sometimes elegiac tone, but returned to alienation and isolation throughout her career, whether she was exploring her own grief after the death of her husband John Gregory Dunne in the Pulitzer-winning The Year of Magical Thinking, the emptiness of Hollywood life in the novel Play It As It Lays, or expats caught up in Central American politics in her novel A Book of Common Prayer.

Didion was born in Sacramento in 1934 and spent her early childhood free from the restrictions of school, with her father’s job in the army air corps taking the family countrywide. A “nervous” child with a tendency to headaches, Didion nonetheless began her path early, starting her first notebook when she was five. While her father was stationed in Colorado Springs, she took to walking around the psychiatric hospital that backed on to their home garden, recording conversations that she’d later work into stories. In a 2003 interview with the Guardian, she recalled an incident when she was 10: while writing a story about a woman who killed herself by walking into the ocean, she “wanted to know what it would feel like, so I could describe it” and almost drowned on a California beach. She never told her parents. (“I think the adults were playing cards.”)

From reading Ernest Hemingway and Henry James, she learned to dedicate time to crafting a perfect sentence; she taught herself to use a typewriter by copying out the former’s stories. “Writing is the only way I’ve found that I can be aggressive,” she once said. “I’m totally in control of this tiny, tiny world.” ~

https://www.irishtimes.com/culture/books/joan-didion-american-journalist-and-author-dies-at-age-87-1.4763144

Oriana:

I admired Didion’s laser-like intelligence. Yes, she did provide clarity. The Year of Magical Thinking is extraordinary.

She lived a writer’s life to the fullest, leaving behind much admirable prose — admirable not only because of its spare style but also for its cultural insights.

Didion died this past Thursday, December 23, aged 87. How quickly we now learn of our losses. But because I watched my father die of Parkinson's, the disease Didion also suffered from, I am glad she was released.



*

BEING THE RICARDOS: DON’T EXPECT COMEDY; IT’S A VERY TENSE, SAD MOVIE

Oriana:

I think the average viewer goes to this movie, a Christmas-time release, expecting a comedy — or at least a good deal of humor. Instead, we get slapped in the face with the story of an unhappy marriage. The last word on the screen is “divorce.”

And basically all the important characters in the movie are unhappy. Now, that would be much easier to watch if we had more clips from the real “I Love Lucy” show. Those would introduce nostalgic laughter and explain why the show achieved such stunning popularity.

As is, the movie is a heavy drama, sometimes sinking under its weight. There are several excellent scenes, but overall, there is no magic.

Though Kidman is competent, she is miscast as Lucy. She is all anorexic angles against Lucy’s comfortable curves. Javier Bardem, though, even if he too is unlike the real-life Desi Arnaz, has enough charisma to make us understand why women would throw themselves at him. A womanizing charmer/"Latin lover," he makes that stereotype come to life. He also delivers a pleasant surprise once we understand that he really does love Lucy and is protective of her. 

But funny? No, he isn't funny in this movie. This is a movie in which no one is happy or funny.

*
“A FILMED WIKIPEDIA PAGE”

~ With Sorkin the writer also serving as director, Being the Ricardos was doomed from the start. There are moments when the movie pops and the filmmaker seems in sync with his cast, his cast seems in sync with one another, and the intended sparks fly. But they’re fleeting. Sorkin stalls the film’s urgency with endless flashbacks and flash-forwards, with characters frequently restating (and overstating) ideas and emotions we’ve just seen dramatized. And when he comes up emotionally short, he resorts to a hoary, obvious score (by the usually dependable Daniel Pemberton). The whole thing is strangely lifeless as a result, a museum piece, a carefully curated display of old-timey television with nothing much at stake.

Being the Ricardos takes as its hook a short-lived scandal: In 1952, star Lucille Ball was investigated by the House Un-American Activities Committee for nebulous ties to the Communist Party in her youth, a bit of gossip leaked by the notorious columnist Walter Winchell. This upended Ball’s life for a week in the midst of I Love Lucy’s domination, threatening to bring the show, as well as the careers of its stars, Ball (Nicole Kidman) and husband, business partner, and co-star Desi Arnaz (Javier Bardem), to a hasty conclusion. “It was a scary time,” explain the actors playing the show’s writers Jess Oppenheimer (Tony Hale), Bob Carroll Jr. (Jake Lacy), and Madelyn Pugh (Alia Shawkat), whose recollections illuminate that stressful week. It’s the kind of dual (warring, perhaps) storytelling device that Sorkin loves, a chance to spin several plates at once.

The trouble is he isn’t a graceful enough director to execute such narrative acrobatics. Much of the couple’s backstory — Ball’s frustrated attempts at movie stardom, the fiery attraction between her and Arnaz, the logistics of careers that initially kept them apart — is dramatized capably, firmly rooted in old Hollywood history while invested in the complex politics of navigating show business as a headstrong woman. But Sorkin stuffs in complications that occurred elsewhere in the I Love Lucy timeline, including gossip rags running stories of Desi’s infidelity and the battle to work Lucy’s pregnancy into the show, turning the movie’s scope into what she calls “a compound fracture of a week.” The story could have succinctly captured Lucy and Desi’s lives and relationship via the earth-shattering events of this confined period, but the copious cuts forward and backward in time keep undermining that potential. Being the Ricardos turns into a filmed Wikipedia page, too flighty and shallow to give us any real emotional insight or to add to I Love Lucy’s well-known lore.

It can be dry as a Wikipedia page, too. This is a film about one of the funniest people of the 20th century. Yet presented with Ball’s unique flair for pratfalls and punch lines, Sorkin dwells instead on a laser-focused Kidman thinking her bits of business through.

That said, the longtime television pro knows what bickering over business at the table read and power plays in rehearsals look and sound like, and he nails the rivalries and running jokes that become part of the work environment. Of particular note is the subplot concerning Nina Arianda’s Vivian Vance, who considered herself more of a pretty ingenue than a frumpy sidekick. Arianda and Kidman flesh out the prickly dynamic between Vance and Ball and Vance’s ongoing distaste with her place on the show. It’s a fascinating footnote, sympathetically portrayed.

The sturdy cast of supporting players and character actors get their arms around Sorkin’s stylized dialogue with ease. His customary rat-tat-tat rhythms don’t feel too contemporary here, indebted as they are to the screwball comedies of an earlier era. J.K. Simmons proves the picture’s MVP, envisioning his William Frawley as a mixture of merciless insult comic and seen-it-all showbiz cynic. Clark Gregg (as CBS exec Howard Wenke), Alia Shawkat, Jake Lacy, and Tony Hale all make the most of their limited screen time.

The central performers have more trouble. Both Kidman and Bardem are a good decade too old for their roles. Hair aside, Kidman just doesn’t look much like Ball (and the attempts to make her look like Lucy with the help of prosthetics just underscore that point), and she can’t do slapstick. It can be downright eerie to watch as Kidman, stone-faced, attempts classic sequences like the beloved stomping of the grapes and falls flat. She just slogs through it, seemingly embarrassed by the endeavor. ~

https://www.vulture.com/article/movie-review-aaron-sorkins-being-the-ricardos.html


Oriana:

I agree that the scenes between “Lucy” and “Ethel” were the best in the movie, and, for me, saved it from being trivial and forgettable. “Ethel” complains that the show makes her look fat and ugly, puts her in ugly dresses, and gives her an old man as her husband.

There is also another good proto-feminist scene where “Lucy” complains about the show’s presenting her as dumb, and the scriptwriter explains that yes, for the sake of comedy the show infantilizes the main character. She mustn’t show how smart she actually is.

With more such scenes, we would have an excellent, uncannily relevant movie. I'm not saying that the movie is bad — but it could be a lot better.

And I have to admit that despite its flaws, the movie has a haunting quality. If you crave holiday cheer, avoid it — but otherwise, I wouldn’t discourage anyone from seeing it.

*
I LOVE LUCY, BUT NOT HERE

~ Being the Ricardos ostensibly takes place during the production of one episode, from the Monday table read of the script to the Friday taping of the show in front of a live audience, but it’s almost incidental to the many, many other crises and conflicts that Sorkin piles on in his script, which is filled with so many flashbacks and flash forwards that one soon becomes as dizzy as the Lucy Ricardo character whom Ball portrayed so brilliantly for six seasons in the 1950s.

In a way, the movie’s title is wildly inaccurate, because the film resolutely shows us Ball and Arnaz as mostly anything but the Ricardos. That’s not a slam on the film because this is a deliberate choice Sorkin makes, and the lives and careers of both these pioneers are clearly much more rich and interesting than the comedic archetypes they portray. But even with terrific performances from the main cast–Nicole Kidman as Ball, Javier Bardem as Arnaz, J.K. Simmons and Nina Arianda as co-stars William Frawley and Vivian Vance–it’s hard to get a sense of what story Sorkin is trying to tell.

The ultimate problem with Being the Ricardos is that despite the hard work of its cast and Sorkin’s good intentions–his dialogue is as sharp as always and he never met a social issue he didn’t want to righteously incorporate into whatever script he’s working on–there are too many things happening at once for the viewer to get a real grip on who these people are, even if they’re among the most famous faces in pop culture history. (It also doesn’t help that Sorkin and cinematographer Jeff Cronenweth shoot most of the movie in dark offices or shadowy soundstages.) You may still love Lucy after watching this–as you should–but it’s hard to find a lot to love in Being the Ricardos. ~

https://www.denofgeek.com/movies/being-the-ricardos-review-i-love-lucy/

Oriana:

I agree that it’s hard to get a sense of the main story here. For me the strongest theme is Lucy’s shattered dream of a happy home. The word “home” has a terrific importance to her, but not to her husband, who’d rather fulfill the macho stereotype of the Latin-lover womanizer — and who is in no way just a supporting character. If the movie focused on “shattered dreams,” it would at least be coherent.

Let me finish with an excerpt of a particularly negative review in The Irish Times:

~ Beneath the snappy dialogue, there’s little spark or insight. Kidman simultaneously evokes Rosalind Russell and Marie Curie as Ball. It’s a fine performance but any resemblance to characters living or dead – including Lucille Ball – is purely coincidental.

Sorkin has said that he’s not a particular fan of I Love Lucy’s brand of slapstick and Being the Ricardos goes out of its snooty way to avoid anything as vulgar as Lucille Ball’s comedy, save for a very brief glimpse of the famous grape-stomping scene. The film’s obsession with process means we’re never getting to drink the wine. ~

https://www.irishtimes.com/culture/film/being-the-ricardos-a-sticky-situation-but-not-much-comedy-1.4752275

Deborah:

Nicole Kidman is a terrible Lucille Ball, just awful in my opinion. Kidman is not a funny actress, and it really shows in this movie. She has that snow-maiden, ready-for-my-close-up seriousness that just does not equal the full-body performances and comedy of Lucille Ball.

Oriana:

I agree. She’s good in the feminist scenes, but she’s not a believable Lucy.


Nicole Kidman and Javier Bardem as Lucille Ball and Desi Arnaz

*
LINCOLN’S LAST CHRISTMAS

~ The character of American Christmas changed as a result of the Civil War

President Lincoln's final Christmas was a historic moment. The telegram he received from General William Tecumseh Sherman signaled that the end of the Civil War was near. But as Lincoln's personal Christmas story reveals, those conflict-filled years also helped shape a uniquely American Christmas.

Sherman’s telegram to the president, who had been elected to a second term only a month before, read “I beg to present you, as a Christmas gift, the city of Savannah, with 150 heavy guns and plenty of ammunition, and also about 25,000 bales of cotton.”

“Washington celebrated with a 300-gun salute,” writes the Wisconsin State Journal. This victory signaled that the end of the long, bloody war that shaped Lincoln’s presidency and the country was likely near. Lincoln wrote back: “Many, many thanks for your Christmas gift—the capture of Savannah. Please make my grateful acknowledgements to your whole army—officers and men.”

Although it separated many from their families, permanently or temporarily, the Civil War also helped to shape Americans’ experience of Christmas, which wasn’t a big holiday before the 1850s. “Like many other such ‘inventions of tradition,’ the creation of an American Christmas was a response to social and personal needs that arose at a particular point in history, in this case a time of sectional conflict and civil war,” writes Penne Restad for History Today.

By the time of the war, Christmas had gone from being a peripheral holiday celebrated differently all across the country, if it was celebrated at all, to having a uniquely American flavor.


“The Civil War intensified Christmas’s appeal,” Restad writes. “Its celebration of family matched the yearnings of soldiers and those they left behind. Its message of peace and goodwill spoke to the most immediate prayers of all Americans.”

This was true in the White House, too. “Lincoln never really sent out a Christmas message for the simple reason that Christmas did not become a national holiday until 1870, five years after his death,” writes Max Benavidez for Huffington Post. “Until then Christmas was a normal workday, although people did often have special Christmas dinners with turkey, fruitcake and other treats.”

During the war, Lincoln made Christmas-related efforts–such as having cartoonist Thomas Nast draw an influential illustration of Santa Claus handing out Christmas gifts to Union troops, Benavidez writes. But Christmas itself wasn't the big production it would become: In fact, the White House didn't even have a Christmas tree until 1889. But during the last Christmas of the war–and the last Christmas of Lincoln's life–we do know something about how he kept the holiday.

On December 25, the Lincolns hosted a Christmas reception for the cabinet, writes the White House Historical Association. They also had some unexpected guests for that evening’s Christmas dinner, the historical association writes. Tad Lincoln, the president’s rambunctious young son who had already helped inspire the tradition of a presidential turkey pardon, invited several newsboys — child newspaper sellers who worked outdoors in the chilly Washington winter — to the Christmas dinner. “Although the unexpected guests were a surprise to the White House cook, the president welcomed them and allowed them to stay for dinner,” writes the historical association. The meal must have been a memorable one, for the newsboys at least. ~

https://www.smithsonianmag.com/smart-news/president-lincolns-last-christmas-180967617/?utm_source=facebook.com&utm_medium=socialmedia&fbclid=IwAR1f7WqhmoBsgerYjBSbQqVYOyxPjXviTjztMnnKQ-rfvO25tD4uss8az3w

Lincoln Christmas card from the 1920s

JOSEPH MILOSCH: THE REAL WAR ON CHRISTMAS

When I hear conservatives say there is a war on Christmas, I take issue with their grammar. There is no war on Christmas, but there was a war. Wall Street waged it from 1920 until its victory in 1965. The Christians lost the crusade. When they are informed about their loss, they deny it. Christmas used to begin four weeks before the first day of Christmas or Christmas Day. In theory, it would continue for twelve days of spiritual reflections on the meaning of Christ’s life.

The Christmas season ended on the day the Magi arrived at the manger. The three wise men’s arrival is important to Christians because it symbolizes the gentiles’ inclusion in God’s plans. For many European immigrants, the Feast of the Epiphany was the day they opened their gifts. In 1953, I was in the first grade, and out of thirty students, fifteen received their presents on Old Christmas. By the time I was a senior in high school, in 1965, my classmates did not celebrate Little Christmas.

Why did the old tradition fade away? In 1920 an advertising agent prophesied a sales increase if the country celebrated only one day of Christmas. He chose Christmas Day because it was the first day of the Christmas season. Furthermore, he figured that starting sales a few weeks ahead of the holiday increased the pressure to buy and give gifts. This year more stores started their Christmas sales a week before Halloween, when advertising agencies asked consumers to “get a jump on Black Friday.” Some stayed open on Thanksgiving.

On the other hand, Christians believe the Christmas season is a time of rejoicing and self-reflection. For Catholics, along with some other denominations, it begins with Advent. This four-week period begins four Sundays before Christmas and ends on Christmas Day. Advent is the time to prepare spiritually for Christmas Day and carry the preparation through the Twelve Days of Christmas. The composer of “The Twelve Days of Christmas” penned the song as a guide to spiritual resistance against oppression by the dominant religion, the Anglican Church.

Based on a memory-forfeit game, the song originated after the deposition of the last Catholic king of England. At that time, acknowledging your Catholic faith was punishable by death. Catholics used the song to communicate their beliefs. Today, those ideas are interdenominational. During the Christmas season, the faithful decipher the metaphors and employ them in prayer. On the first day, the devoted used the first tenet, and during the second day, the faithful employed the first two items.

This sequence continues until Little Christmas; then, the prayers include all twelve beliefs. This technique is similar to one found in gospel songs written by African-American slaves. They used lyrics as a means to resist oppression. The partridge in a pear tree refers to Christ crucified on the Cross. Thus, the birth of Christ leads to his death. It is an act performed for our salvation and gives Christmas its meaning.


The Two Turtle Doves stand for God’s everlasting friendship and love, which God shows by allowing the execution of his son. The Three French Hens stand for faith, hope, and charity. The Four Calling Birds are either the Four Gospels or their authors: Matthew, Mark, Luke, and John. The Five Golden Rings are the first five books of the Old Testament. The Six Geese a-laying are the days of Creation, and the Seven Swans a-swimming are the Holy Spirit’s blessings: prophecy, service, teaching, encouragement, giving, mercy, and leadership.

The Eight Maids a-milking are the Beatitudes. The Nine Ladies dancing are the nine fruits of the Holy Spirit: love, joy, peace, patience, kindness, goodness, faithfulness, modesty, and self-control. The Ten Lords a-leaping are the Ten Commandments. The Eleven Pipers are the eleven loyal apostles, and the Twelve Drummers drumming are the twelve points in the Apostles’ Creed.

Today, most Christians are not aware of the meaning of the lyrics, though they accept the beliefs expressed in the song.

Instead of practicing prayer and meditation to prepare for Christmas, we shop for gifts, food, and alcohol. Although many charity organizations and churches initiate food drives for the poor, it seems Christians act contrary to the spirit of Christmas. When we think of Christ’s birthday, we think about stores, which hire more security guards to protect themselves against shoplifters. During the holiday season, police departments increase their overtime to handle the escalation in theft, vandalism, and drunkenness.

Hospitals expand emergency room staff to deal with the surge in shootings and stabbings. TV commercials show us how to deal with family arguments, drunkenness, and abuse. Another commercial depicts people stuffing themselves with the Christmas meal before asking for food donations and gifts for the poor. These commercials seem to encourage us to force the poor to share in our day of gluttony — as if we enjoy rubbing the faces of the unfortunate in the poverty in which they live.

We give gifts to underprivileged children that their parents can’t afford. Then, we ignore their needs for the rest of the year. On Christmas Eve, the news posts all the sobriety checkpoints so we can avoid them. The TV is full of sporting events that depict an abundance of presents, drinks and food. While Christians unwrap their gifts, eat, and drink, the news uses the stores’ profits during the Christmas season to judge whether Christmas was successful.

On December twenty-fifth, the actions of Christians indicate less a commemoration of Jesus’s birth than a celebration of greed, gluttony, and drunkenness in the name of Christ. As they prepare for bed, they continue to complain about the war on Christmas. Then they consume Tums to relieve the gas and heartburn building up from eating and drinking to the extreme. Imagine what kind of country the United States would be if we practiced the meditative prayer depicted in “The Twelve Days of Christmas.”

Oriana:

“The partridge in a pear tree” is Christ on the Cross? This astonished me and I rushed to google the song. Sure enough: the partridge in a pear tree is symbolic of Christ upon the Cross. In the song, He is symbolically presented as a mother partridge because she would feign injury to decoy a predator away from her nest. She was even willing to die for her young.

The tree is the symbol of redemption.

https://news.hamlethub.com/coscob/life/214-the-first-day-of-christmas-revealed

But there are also dissenting voices, pointing to secular meanings such as fertility. The “calling birds” were originally “colly birds,” meaning blackbirds (though in this instance spared being baked in a pie). In keeping with the bird motif, the five gold rings might refer to gold-ringed pheasants. The song is secular, loosely speaking a humorous courtship song, according to https://www.vox.com/21796404/12-days-of-christmas-explained

Though the song always struck me as a loony-tunes kind of annoying folklore, I can go along with the Catholic explanation. There are always magical numbers ascribed to sins and virtues alike, so . . . whatever works. And I agree that any spiritual practice would be preferable to the commercial orgy that modern Christmas has become.

Mary:

The Twelve Days of Christmas as a religious song caught me by surprise too! It seems like a bit of medieval iconography, which went in for that sort of thing; the partridge in the pear tree is not so far from the icon of the pelican feeding its young with blood from its own breast, another Christ symbol. And we no longer remember all those numbered lists of virtues and attributes well enough to see them in those milkmaids and leaping lords.

Oriana:

I can't help but see The Twelve Days of Christmas as too humorous to be a serious reminder of Catholic teachings. Christ as a partridge does not sit well with me. Sooner the pelican, indeed, though the story of its feeding its young with its own blood is of course a myth.

It's been a while since I've heard the song. Maybe it's past its peak of popularity. I hope so, since, let's face it, it's so tedious and annoying. As Christmas carols go, I wish this one a happy oblivion.

*
THE EASIEST WAY TO IMPROVE YOUR MEMORY

~ When trying to memorize new material, it’s easy to assume that the more work you put in, the better you will perform. Yet taking the occasional down time – to do literally nothing – may be exactly what you need. Just dim the lights, sit back, and enjoy 10-15 minutes of quiet contemplation, and you’ll find that your memory of the facts you have just learnt is far better than if you had attempted to use that moment more productively.

Although it’s already well known that we should pace our studies, new research suggests that we should aim for “minimal interference” during these breaks – deliberately avoiding any activity that could tamper with the delicate task of memory formation. So no running errands, checking your emails, or surfing the web on your smartphone. You really need to give your brain the chance for a complete recharge with no distractions.

An excuse to do nothing may seem like a perfect mnemonic technique for the lazy student, but this discovery may also offer some relief for people with amnesia and some forms of dementia, suggesting new ways to release a latent, previously unrecognized, capacity to learn and remember.

The remarkable memory-boosting benefits of undisturbed rest were first documented in 1900 by the German psychologist Georg Elias Muller and his student Alfons Pilzecker. In one of their many experiments on memory consolidation, Muller and Pilzecker first asked their participants to learn a list of meaningless syllables. Following a short study period, half the group were immediately given a second list to learn – while the rest were given a six-minute break before continuing.

When tested one-and-a-half-hours later, the two groups showed strikingly different patterns of recall. The participants given the break remembered nearly 50% of their list, compared to an average of 28% for the group who had been given no time to recharge their mental batteries. The finding suggested that our memory for new information is especially fragile just after it has first been encoded, making it more susceptible to interference from new information.

Although a handful of other psychologists occasionally returned to the finding, it was only in the early 2000s that the broader implications of it started to become known, with a pioneering study by Sergio Della Sala at the University of Edinburgh and Nelson Cowan at the University of Missouri.

The team was interested in discovering whether reduced interference might improve the memories of people who had suffered a neurological injury, such as a stroke. Using a similar set-up to Muller and Pilzecker’s original study, they presented their participants with lists of 15 words and tested them 10 minutes later. In some trials, the participants remained busy with some standard cognitive tests; in others, they were asked to lie in a darkened room and avoid falling asleep.

The impact of the small intervention was more profound than anyone might have believed. Although the two most severely amnesic patients showed no benefit, the others tripled the number of words they could remember – from 14% to 49%, placing them almost within the range of healthy people with no neurological damage.

The next results were even more impressive. The participants were asked to listen to some stories and answer questions an hour later. Without the chance to rest, they could recall just 7% of the facts in the story; with the rest, this jumped to 79% – an astronomical 11-fold increase in the information they retained.

The researchers also found a similar, though less pronounced, benefit for healthy participants in each case, boosting recall between 10 and 30%.

Della Sala and Cowan’s former student, Michaela Dewar at Heriot-Watt University, has now led several follow-up studies, replicating the finding in many different contexts. In healthy participants, they have found that these short periods of rest can also improve our spatial memories, for instance – helping participants to recall the location of different landmarks in a virtual reality environment. 

Crucially, this advantage lingers a week after the original learning task, and it seems to benefit young and old people alike. And besides the stroke survivors, they have also found similar benefits for people in the earlier, milder stages of Alzheimer’s disease.

In each case, the researchers simply asked the participants to sit in a dim, quiet room, without their mobile phones or similar distractions. “We don’t give them any specific instructions with regards to what they should or shouldn’t do while resting,” Dewar says. “But questionnaires completed at the end of our experiments suggest that most people simply let their minds wander.”

Even then, we should be careful not to exert ourselves too hard as we daydream. In one study, for instance, participants were asked to imagine a past or future event during their break, which appeared to reduce their later recall of the newly learnt material. So it may be safest to avoid any concerted mental effort during our down time.

The exact mechanism is still unknown, though some clues come from a growing understanding of memory formation. It is now well accepted that once memories are initially encoded, they pass through a period of consolidation that cements them in long-term storage. This was once thought to happen primarily during sleep, with heightened communication between the hippocampus – where memories are first formed – and the cortex, a process that may build and strengthen the new neural connections that are necessary for later recall.

This heightened nocturnal activity may be the reason that we often learn things better just before bed. But in line with Dewar’s work, a 2010 study by Lila Davachi at New York University found that it was not limited to sleep, and similar neural activity occurs during periods of wakeful rest, too. In the study, participants were first asked to memorize pairs of pictures – matching a face to an object or scene – and then allowed to lie back and let their minds wander for a short period. Sure enough, she found increased communication between the hippocampus and areas of the visual cortex during their rest. Crucially, people who showed a greater increase in connectivity between these areas were the ones who remembered more of the task, she says.

Perhaps the brain takes any potential down time to cement what it has recently learnt – and reducing extra stimulation at this time may ease that process. It would seem that neurological damage may render the brain especially vulnerable to that interference after learning a new memory, which is why the period of rest proved to be particularly potent for stroke survivors and people with Alzheimer’s disease.

Other psychologists are excited about the research. “The effect is quite consistent across studies now in a range of experiments and memory tasks,” says Aidan Horner at the University of York. “It’s fascinating.” Horner agrees that it could potentially offer new ways to help individuals with impairments to function.

Practically speaking, he points out that it may be difficult to schedule enough periods of rest to increase their overall daily recall. But he thinks it could still be valuable to help a patient learn important new information – such as learning the name and face of a new caretaker. “Perhaps a short period of wakeful rest after that would increase the chances that they would remember that person, and therefore feel more comfortable with them later on.” Dewar tells me that she is aware of one patient who seems to have benefitted from using a short rest to learn the name of their grandchild, though she emphasizes that it is only anecdotal evidence.

Thomas Baguley at Nottingham Trent University in the UK is also cautiously optimistic. He points out that some Alzheimer’s patients are already advised to engage in mindfulness techniques to alleviate stress and improve overall well-being. “Some [of these] interventions may also promote wakeful rest and it is worth exploring whether they work in part because of reducing interference,” he says, though it may be difficult to implement in people with severe dementia, he says.
 
Beyond the clinical benefits for these patients, Baguley and Horner both agree that scheduling regular periods of rest, without distraction, could help us all hold onto new material a little more firmly. After all, for many students, the 10-30% improvements recorded in these studies could mark the difference between a grade or two. “I can imagine you could embed these 10-15 minute breaks within a revision period,” says Horner, “and that might be a useful way of making small improvements to your ability to remember later on.”

In the age of information overload, it’s worth remembering that our smartphones aren’t the only thing that needs a regular recharge. Our minds clearly do too.

https://www.bbc.com/future/article/20180208-an-effortless-way-to-strengthen-your-memory

Oriana:

This is excellent information. Now if it only became easier for those of us who love to work hard (all right, go ahead and call us workaholics) to take do-nothing breaks.

Mnemosyne, goddess of memory and the mother of the muses

*
DOSTOYEVSKY’S PROBLEM WITH HOLBEIN’S REALISTIC PAINTING OF DEAD CHRIST: HUMAN, ALL TOO HUMAN?

~ Much of The Idiot was written while Dostoevsky and his wife were living in Florence, just a stone’s throw away from the Pitti Palace, where the writer often went to see and to admire the paintings that adorned its walls, singling out Raphael’s Madonna della Seggiola (Madonna of the Chair) for special mention. It is very probably no coincidence that visual images play a prominent role in The Idiot. 

Palazzo Pitti, the Saturn Room

Early on in the narrative, Prince Myshkin, the eponymous ‘idiot’, sees a photograph of the beautiful Anastasia Phillipovna that makes an extraordinary impression on him and generates a fascination that will end with her death and his madness. But insofar as Anastasia Phillipovna is the epitome of human beauty in the world of the novel, this photograph can also serve as a visual aide to the saying attributed to the Prince, that ‘beauty will save the world’.

Later, he is confronted with an image of a very different kind — Hans Holbein’s 1520-22 painting of the dead Christ, shown with unflinching realism and reportedly using the body of a suicide as model. It is a Christ stripped of the beauty that bourgeois taste regarded as an essential attribute of his humanity and, in its unambiguous mortality, devoid also of divinity.

On first seeing it, Myshkin comments that a man could lose his faith looking at such a picture and, later, the despairing young nihilist, Hippolit, declares that just this picture reveals Christ’s powerlessness in face of the impersonal forces of nature and the necessity of death that awaits every living being. It is, Hippolit suggests, an image that renders faith in resurrection impossible.

These two images can be seen as establishing the visual parameters for a complex interplay of the themes of beauty, death, and divinity that run through the novel as a whole and that go a long way to structuring the conceptual — and religious — drama at its heart. This drama is also, crucially, at the center of then contemporary European debate about Christ and about the representation of Christ. But these are not the only images that contribute to Dostoevsky’s take on that debate. It has been suggested that the opening description of Myshkin is modeled on the canonical icon of Christ in Orthodox tradition and much of the novel’s theological force has to do with the eclipse of Myshkin’s icon-like identity in the encounter with a modern Russia in the grip of a capitalist revolution.

On his arrival in St Petersburg from the West, Myshkin goes to call on his distant relative, Mme Epanchina (it is in her husband’s office that he sees the photograph of Anastasia Phillipovna). Mme Epanchina’s oldest daughter Adelaida is a keen amateur landscape painter and over breakfast Myshkin rather inappropriately suggests that the face of a man in the moment of being guillotined might make a suitable subject for her painting. And, finally, there is an imaginary painting of Christ that Anastasia Phillipovna ‘paints’ in one of her letters to Aglaia Epanchina, to whom, by this point, Myshkin has become engaged. It is this ‘painting’ that is the main focus of this paper, in part because it has been under-discussed in secondary literature in comparison with Anastasia’s photograph and Holbein’s dead Christ but also because it makes an important contribution to the debate about Christ and about how to represent Christ that, as we have seen, is central to the religious questions at issue in the novel.

This picture is, of course, painted in words and not an actual painting, but Anastasia Phillipovna describes it as if it were a painting and she clearly wants Aglaia too to see it that way. Dostoevsky thus invites readers too to imagine it as a picture they might see in a gallery. Anastasia Phillipovna depicts Christ as sitting, alone, accompanied only by a child, on whose head he ‘unconsciously’ rests his hand, while ‘looking into the distance at the horizon; thought, great as the world, dwells in His eyes. His face is sorrowful’. The child looks up at him, the sun is setting.

On first reading, the portrait might not seem very unusual. It is the kind of portrait that we have become rather used to. Christ sitting with children is a subject familiar from innumerable popular Christian books and devotional pictures. Yet in the 1860s images of Christ sitting and of Christ sitting with children were both equally innovative, having relatively few precedents in earlier iconography.

Unsurprisingly, the theme becomes much more common with the rise of romanticism and a new, more positive evaluation of childhood and the idea that children had a special affinity with the divine, with notable examples from Benjamin West, William Blake, and Charles Lock Eastlake and, by the mid-Victorian era, it had become a widespread and popular topic. Unlike in earlier representations, Christ is now to be seen alone with varying numbers of children, unaccompanied by a crowd of mothers and disciples. This tendency becomes especially prominent in illustrated Bible stories specifically for children—another phenomenon of the nineteenth century.

In a sense, the reason for the new prominence of these themes is not hard to fathom. It reflects a turn to the human Jesus and a new emphasis on the role of feeling in religious life. The Christ of ecclesiastical tradition was Savior by virtue of the ontological power of the hypostatic union, uniting divine and human in the very person of his being. It is this identity as both divine-and-human that makes it possible for his innocent suffering on the cross to be salvific rather than merely tragic. In the wake of romanticism, however, his qualification as Savior has to do with his uniquely intense God-consciousness and his unrestricted empathy with other human beings, an empathy that extends even to their suffering and sin.

Ernest Renan describes Jesus’s religion as ‘a religion without priests and without external practices, resting entirely on the feelings of the heart, on the imitation of God, on the immediate rapport of [human] consciousness with the heavenly Father’. Renan’s characteristically 19th century bourgeois assumptions led him to see women as being especially susceptible to ‘the feelings of the heart’ and it was therefore no surprise that ‘women received him eagerly’. ‘[W]omen and children adored him’ and

the nascent religion was thus in many respects a movement of women and children … He missed no occasion for repeating that children are sacred beings, that the Kingdom of God belongs to children, that we must become as children in order to enter it, that we must receive it as a child, and that the Father conceals his secrets from the wise and reveals them to the little one. He almost conflates the idea of discipleship with that of being a child … It was in effect childhood, in its divine spontaneity, in its naïve bursts of joy, that would take possession of the earth.

How do these themes resonate with the action and personalities of The Idiot? Clearly, Anastasia Phillipovna’s ‘portrait’ of Christ lives in the atmosphere of Renan’s and similar humanist-sentimental lives of Jesus. But what does this mean for the novel’s possible contribution to the religious understanding of Christ as a whole?

The historical Lives of Jesus movement had its scandals, but the parallel moves in art also provoked bemusement and sometimes hostility, as in the case of other new developments in nineteenth century art. The novelty of this new view of Christ can be seen in reactions to Ivan Kramskoy’s painting of Christ in the wilderness. Tolstoy would later say of it that it was ‘the best Christ I know’, but many of the reactions to it were far more negative. For many this was a Christ devoid of divinity, a manifestation of historicist positivism in art. ‘Whoever he is,’ Ivan Goncharov continues, ‘he is without history, without any gifts to offer, without a gospel … [Christ] in his worldly, wretched aspect, on foot in a corner of the desert, amongst the bare stones of Palestine … where, it seems, even these stones are weeping!’

What kenotic Christology refers to as Christ’s state of exinanition (i.e., his human state of weakness and vulnerability, emptied of his divine attributes), is thus becoming a theme of contemporary art at the time of Dostoevsky’s composition of The Idiot, paralleling to some extent the development of the historical portrayal of the life of Jesus. Both in historiography and art the same question then arises, namely, how, if Jesus is portrayed as fully human, can his divinity be rescued from the manifestation of what is visibly all-too human?

The sitting Christ, absorbed in brooding thoughts and given over to melancholy, seems to be a Christ who is in the process of becoming all-too human. In this regard, it is noteworthy that where Renan’s Christ goes alone to look out over Jerusalem at sunrise, Anastasia Phillipovna’s Christ is pictured at sunset. Her [verbal] portrait reveals the shadow side of Renan’s optimism. For her, the light is fading, as natural light always must.

But what does this tell us about Dostoevsky’s novel? Firstly, it underlines the contemporaneity of Dostoevsky’s visual vocabulary deployed in the novel. Not only is this one of the first novels in which a photograph (the photographic portrait of Anastasia Phillipovna) plays a major role, but the image of the sitting Christ, accompanied only by a child, reflects contemporary developments in religious art that are also further connected with contemporary historiography. Holbein’s dead Christ is, of course, a picture from an earlier age. However, on the one hand, it offers a ne plus ultra of the humanizing approach to Jesus and, on the other hand, it is a theme we also find in Manet’s Dead Christ with Angels, exhibited alongside his better-known Olympia in the same year that The Idiot was published. Like Kramskoy’s painting, this was seen by many critics as sacrilegious and an affront to faith by virtue of the elimination of all elements of beauty and conventional sacrality. Visually, as well as in literary terms, Dostoevsky is entirely in synchronization with the decisive movements of the visual culture of his time.


Manet: Dead Christ with Angels

In fact, commenting in 1873 on Nicholas GĂ©’s Mystic Night (which, as we have seen, also attracted Goncharov’s attention), Dostoevsky showed himself to be alert to the risks of a one-sidedly humanizing and sentimental approach to Christ in art. In a review published in The Citizen he writes:

Look attentively: this is an ordinary quarrel among most ordinary men. Here Christ is sitting, but is it really Christ? This may be a very kind young man, quite grieved by the altercation with Judas, who is standing right there and putting on his garb, ready to go and make his denunciation, but it is not the Christ we know. The Master is surrounded by His friends who hasten to comfort Him, but the question is: where are the succeeding eighteen centuries of Christianity, and what have these to do with the matter? How is it conceivable that out of the commonplace dispute of such ordinary men who had come together for supper, as this is portrayed by Mr. Gué [sic], something so colossal could have emerged?


Secondly, setting Nastasya’s portrait of Christ in an art-historical context may not directly solve the question as to whether Myshkin is to be regarded as some kind of Christ-figure (and, if so, what kind) but it does illuminate how Anastasia Phillipovna sees him. We know that she reads much and is given to speculative ideas, and it is therefore not at all surprising that her vision of Christ and of Myshkin as Christ is a vision taken from contemporary, humanist, Western sources, a sentimental Christ whose power to save is, at best, fragile. In this way, whether or not we are to read Dostoevsky’s portrayal of Myshkin as a Christ-figure, he is a Christ-figure, albeit a very particular kind of Christ-figure, for her. In any case, to the extent that Anastasia Phillipovna places her own hope of salvation in such a Christ-figure, it is doomed to fail.

There is one further iconographical intertext that is worth exploring in connection with Anastasia Phillipovna’s ‘portrait’, although it is not expressly mentioned. What is mentioned—Evgeny Pavlovitch mentions it—is that Myshkin’s relation to her might be seen as mirroring the gospel story of the woman taken in adultery and protected by Christ from being stoned to death (an association reinforced by the story Myshkin himself tells of the outcast Marie he had rescued from ostracism in the little Swiss village where he had lived before the start of the novel’s action).

This would imply that her deepest hope is, through Myshkin, to sit at the feet of Christ, listening to his word, taking the better part. But if this is her hope, then it is well-hidden, screened not only by the substitution of Aglaia for herself but also by the sentimental positivism of nineteenth century historicism that comes to expression in her portrait of a melancholy Christ contemplating the light of a setting sun. In this way, it may not only be her psychological injuries that make her incapable of accepting the forgiveness that Myshkin offers, it may also be her—and the age’s—misconception of Christ that gets in the way. 

This, clearly, makes the issue less individual and less psychological. Arguably, it also makes it more tragic in a classical sense. This is because her fate is that of a whole world of values that, in this historical moment, is descending into the impending darkness. Yet—even if Dostoevsky himself does not say this—there might remain a chance, however slender, that this ‘human all-too human’ Christ retains a memory of another light and, with that memory, the hope that the values of this present age are not the sole values by which we and the world are to be judged.

https://www.eurozine.com/human-all-too-human/#footnote-5

Mary:

On Holbein's dead Christ and beauty, death and divinity... Don't we always make our gods beautiful?? The nature of Christ as savior is that he is both divine and human, at once and always. To depict him as an empty, fully human corpse is to deny him as god, and without divinity he cannot function as savior. The same problem is there with the romanticized Christ figure: again fully, or should I say only, human.

And even the best, most saintly human, cannot be Savior, because he is not also and always at the same time divine.

I think I'm getting a theological headache!

Oriana:

Can somebody who's "only" human be a savior? It depends on the definition of a savior. Jonas Salk saved us from living in terror of polio. In my class there was a girl who'd been crippled by polio. True, it's a partial, physical salvation, but imagine . . .

Dostoyevsky stated that if he had to choose between truth (roughly synonymous with science) and Christ, he’d choose Christ. However, I suspect that once we are fully convinced that something is true, we can’t reject it merely because a different belief would be more emotionally comforting: say, that if we go to church every Sunday, we’ll be rewarded with heaven’s eternal bliss. As one nun in the movie “The Innocents” says, “Faith is one minute of belief and twenty-four hours of doubt.” And as one literature professor said, “Dostoyevsky believed in God on Monday, but by Wednesday again he didn’t believe.” Such doubt-filled faith can no longer be a source of comfort.

But as for the beauty of religious art, that’s the one truly redeeming factor. Gothic cathedrals may be monuments to intellectual error, but at least we can truly enjoy the rose windows, the riot of arches — whereas Holbein’s Dead Christ is hideous (though I admire Holbein’s courage in reaching for such realism). As the history of humanity goes, it’s just one of many ironies and oxymorons.

*


Gabriele Kuiznate. I chose this image simply because beauty is its own excuse. And let us hope it will indeed save the world.

*
“I understood at a very early age that in nature I felt everything I should feel in church but never did. Walking in the woods, I felt in touch with the universe and with the spirit of the universe.” ~ Alice Walker



*
HOW ANESTHESIA CHANGED THE NATURE OF CHILDBIRTH

~ On December 27, 1845, a physician named Crawford W. Long gave his wife ether as an anesthetic during childbirth. This is the earliest use of ether in childbirth on record – but Long, who didn’t publish his results until the 1850s, spent his lifetime fighting to be recognized. Whatever it may have meant for his career, this event marked the beginning of a new era in childbirth – one in which pain relief was possible.

When Long did this, he had already used ether on a friend, writes anesthesiologist Almiro dos Reis JĂșnior, to remove infected cysts from his neck. Long had experience with the substance from so-called “ether parties” where young people would knock each other out for fun. However, the public was skeptical of knocking people unconscious during surgery, so Long stopped using ether in his clinic. “But Long still believed in the importance of anesthesia and administered ether to his wife during the birth of his second child in 1845 and other subsequent deliveries, thus undoubtedly becoming the pioneer of obstetric analgesia,” writes dos Reis JĂșnior.

Later in his life, Long tried to get credit for pioneering surgical anesthesia, a contentious claim that historians didn't recognize until recently. But he didn’t seek credit for obstetric anesthesia, writes historian Roger K. Thomas, even though “his use of ether with his wife predates by slightly more than a year that of the Scottish physician, James Y. Simpson, who is credited with the first obstetrical use of anesthesia.”

Simpson studied and taught at the University of Edinburgh, the first university in the world to have such a focus on gynecology and obstetrics, writes P.M. Dunn in the British Medical Journal. On January 19, 1847, he used ether in a difficult delivery. “He immediately became an enthusiastic supporter and publicist of its use, vigorously countering the arguments of those who suggested God had ordained that women should suffer during childbirth,” Dunn writes.

After some experimentation, Simpson concluded that chloroform was better than ether for use in childbirth. The first time he used chloroform to assist in a birth, the grateful parents christened their daughter Anesthesia.

The idea of anesthesia in childbirth caught on pretty quickly after this. In 1847, Fanny Longfellow, who was married to one of America’s most prominent poets, used ether during her delivery. Then in 1853, writes author William Camann, Queen Victoria used chloroform to relieve labor pain during the birth of Prince Leopold, ending any moral opposition to pain relief during childbirth.

The idea of pain relief during surgery was unprecedented when surgeons started experimenting with it in the 1840s. For women, who routinely underwent agony to bear a child, the idea of birth without pain represented a new freedom. Following these innovations, writes Dunn, “women lobbied to assure pain relief during labor and sought greater control over delivery.” ~

https://www.smithsonianmag.com/smart-news/it-didnt-take-very-long-anesthesia-change-childbirth-180967636/

Queen Victoria and the royal family

*
MOVING TOWARD A UNIVERSAL FLU VACCINE

~ Scientists at Scripps Research, University of Chicago and Icahn School of Medicine at Mount Sinai have identified a new Achilles' heel of influenza virus, making progress in the quest for a universal flu vaccine. Antibodies against a long-ignored section of the virus, which the team dubbed the anchor, have the potential to recognize a broad variety of flu strains, even as the virus mutates from year to year, they reported Dec. 23, 2021 in the journal Nature.

“By identifying sites of vulnerability to antibodies that are shared by large numbers of variant influenza strains we can design vaccines that are less affected by viral mutations,” says study co-senior author Patrick Wilson, MD. “The anchor antibodies we describe bind to such a site. The antibodies themselves can also be developed as drugs with broad therapeutic applications.”

Ideally, a universal influenza vaccine will lead to antibodies against multiple sections of the virus -- such as both the HA anchor and the stalk -- to increase protection against evolving viruses. ~

https://www.sciencedaily.com/releases/2021/12/211223113049.htm

*

ending on beauty:

The night ocean
shatters its black mirrors.
Waves shed their starry skins,
the foam hisses yess, yess.
What do astronomers know?
a handful of moons,
a Milky Way of facts:
every atom inside us
was once inside a star.

Star maps are within us,
private constellations
of House, Tree, Cat,
waiting to become
once again an alphabet of light —

on the blackboard of night sky
spelling out
This is your face
beyond your face,
the galaxies that spiral
through the lines of your palm

~ Oriana, Star Maps


 


 


Saturday, December 18, 2021

IS DUNE FASCIST? DUNE AND ISLAMIC FUTURISM; SIMPSONS’ LIFESTYLE NO LONGER ATTAINABLE; CRIMES OF FASHION; LETTUCE IS AN ANTI-DEPRESSANT; HOW TO PRAY TO A DEAD GOD

 *
GRIPPER

Mother and I pried open the bronze,
divided his ashes between us.
I took my portion upstairs,
quickly closed the door.    

I felt queasy. I had never seen
human ashes before —
would they still look human,
with sharp pieces of offended bone?

But my father’s ashes
looked just like ashes:
gray, speckled with white.
Then I glimpsed something

round and hard: a metal button.
On the disk, legible still,
the word Gripper twice —
two serpents made of letters,

smudged but not charred —
not returning dust to dust.
Soon I found more Grippers,
from his cremated hospital gown.

With half the ashes, half
the Grippers were later laid
in the family crypt
in my father’s hometown —

blessed by the priest,
consigned to everlasting mercy.
As if a sprinkling of holy water
could extinguish such persistent flame.

During the service, in my mind
I heard my father ask: “Is that
what is left of me? Buttons?
That’s the treasure you found?”

“Not worth a button”
was his favorite saying.
Laughter was his grip on life.
Only days before the end,

he said, with his widest grin,
“When you’re lying
in the coffin, you should suddenly
sit up and say Hah-hah! ”

That was of course too ambitious.  
From eternity I have only
these buttons. Still able
to grip. Not giving up.

~ Oriana

~ I think of the past less often than I had feared. The past is an immense album whose images are blurred, elusive — protean in their inconstancy and therefore embarrassing. Memory consoles with its balancing of gains and losses, because not all is on the debit side; the passage of years bestows a sense of architectonics, and the purity of arch, the crystalline contour can compensate for the fading of warm colors. It also teaches futility, because we know now that the distance between the word and the world, contrary to all our previous expectations, remains unbridgeable. ~ Czeslaw Milosz, The Land of Ulro

In Blake's mythology, "Ulro" stands for the land of suffering.


Funny, in this case the blurry image fits the blurry nature of memory, as Milosz asserts (I think memory is like dreams: most of the time blurred and fragmentary, but now and then frightfully vivid).

Oriana:

I rarely look at old photographs. My “album of memory” is my poems. This is obvious when it comes to personal narratives. But all poems, even mythological, remain in part autobiographical for me. I remember their context in my life. I remember the impulse that inspired them, my thinking and attitudes toward various challenges of life in the past. Having the poems has pretty much resolved my fear of remembering the past (on the whole, I don't think of my past with pleasure).

The unbridgeable gap? In absolute terms, I have to agree. Words can only render a slice of reality, and a poem is at its best when it focuses on the narrow slice: a specific event and detail that then leads to a larger perspective. When a poem tries to cover too much, it’s almost bound to fail. But describe just one thing, and you birth an immensity.

I think that “Gripper” succeeds by focusing on one event, finding the metal buttons in my father’s ashes. And I’m reminded of one of Una’s poems, “Geode.”

GEODE

I bought a rock
at a souvenir shop in the desert,
not guaranteed to be a geode
but the seller hinted
at crystal inside.

It looked ordinary, small,
warm in my hand from its stay
at the window. I sensed
movement as if it leaned
like a living thing toward light.

Often I have picked up a hammer
tempted to smash my rock to see
if a violet excitement exists
inside. Something always
stays my hand.

Instead it rests on the shelf
next to the Book of Luminous Things.
When I die it will be tossed
in the trash to continue
its journey, the promise still intact.

~ Una Hynum

I foolishly tried to make Una end the poem with the geode resting next to Milosz's anthology, The Book of Luminous Things, “the promise still intact.” She wisely and courageously stood by what she saw as the truth — and even so, the poem ends with “the promise still intact.”



*
IS DUNE FASCIST?

~ Popular SF narratives like Dune play a central role in white nationalist propaganda. The alt-right now regularly denounces or promotes science fiction films as part of its recruiting strategy: fascist Twitter popularized the “white genocide” hashtag during a boycott campaign against inclusive casting in Star Wars: The Force Awakens. But Villeneuve’s film seemed to provoke greater outrage than normal because Herbert’s book is such a key text for the alt-right.

Dune was initially received as a countercultural parable warning against ecological devastation and autocratic rule, but geek fascists see the novel as a blueprint for the future. Dune is set thousands of years from now in an interstellar neofeudal society that forestalled the rise of dangerous artificial intelligences by banning computers and replacing them with human beings conditioned with parapsychological disciplines that allow them to perform at the same level as thinking machines [Oriana: Paul himself is a Mentat, a kind of human computer who can put aside emotion in favor of logic]. Spaceships navigate through space using the superhuman abilities of psychics whose powers are derived from a mind-enhancing drug known as melange ["spice"], a substance found only on the desert planet of Arrakis [Iraq?].

The narrative follows the rise of Paul Atreides, a prince who reconquers Arrakis, controls the spice, and eventually becomes the messianic emperor of the Known Universe. Dune was first published in serial form in John W. Campbell’s Analog Science Fiction and Fact and, like many protagonists in Campbell-edited stories, Paul is a mutant ĂŒbermensch whose potential sets him apart from everyone else. He turns out to be the product of a eugenics program that imbues him with immense precognitive abilities that allow him to bend the galaxy to his will. Paul’s army also turns out to be selected for greatness: the harsh desert environment of Arrakis culls the weak, evolving a race of battle-hardened warriors.

In the fascist reading of the novel, space colonization has scattered the human species, but what Herbert calls a “race consciousness” moves them to unite under Paul, who sweeps away all opposition in a jihad that kills 60,000,000,000. For the alt-right, Paul stands as the ideal of a sovereign ruler who violently overthrows a decadent regime to bring together “Europid” peoples into a single imperium or ethnostate.

Herbert’s worlds represent impossible attempts to square the circle of fusing the destructive dynamism of capitalist modernization with the stable order prized by traditionalism. Beyond a shared affinity for space-age aristocrats, Faye [the French far-right theorist Guillaume Faye] and Herbert see the sovereign as one who is capable of disciplined foresight. Drawing on the Austrian School economist Hans-Hermann Hoppe, many thinkers on the alt-right believe that only men from genetically superior populations are capable of delaying gratification and working toward long-term goals. The alt-right asserts that white men hold an exclusive claim over the future. According to these white nationalists, science fiction is in their blood.

The Bene Gesserit sisterhood who bred Paul’s bloodline for prescience submit him to a kind of deadly marshmallow test to determine if he is fully human. One threatens to kill him with a poisoned needle (the gom jabbar) if he removes his hand from a device that produces the sensation of burning pain. Restraining his immediate impulses is only the first step toward using his precognitive abilities to choose between all the possible timelines. As in fascist doctrine, Paul’s ability to envision the future is a biogenetic trait possessed only by the worthy few.

Even the alt-right’s favorite novel does not seem to support their misreadings. Herbert’s book is often deeply conservative, but by the fascists’ own admission it presents a syncretic vision of the future in which cultures and populations have clearly intermingled over time. Paul’s army of desert guerillas, the Fremen, clearly owe something to Arabic and Islamic cultures, and Paul’s own genealogy defies the fascist demand for racial purity. The alt-right has tried to wrest Islamophobic and antisemitic messages from the book, but they are stymied by its refusal to map existing ethnic categories onto the characters.

Fascists seek to tame class struggle and humanize capitalism by grounding it in a shared racial destiny, but they only end up enacting a program that leads to a more barbarous form of inhumanity.

Distorted as these fascist readings of Dune may be, Herbert’s novel will remain a persistent feature of alt-right culture as long as they fight to conquer the future. ~

https://www.lareviewofbooks.org/article/race-consciousness-fascism-and-frank-herberts-dune/

from Haaretz:

WHAT THE FAR RIGHT LIKES ABOUT “DUNE”; ISLAMIC FUTURISM

~ So what does the radical right like about “Dune”? The answer is clear. It may depict a futuristic world, but it’s one governed according to a feudal order, with houses of nobility battling one another. At the same time, the world portrayed in the film is also capitalist: Its economy is based entirely on the production and manufacture of “spice,” a kind of psychedelic version of petroleum, produced in distant desert realms.

In addition, the mythology created by Herbert has racial foundations: The imperial Bene Gesserit sisterhood seeks to produce the Messiah through planned racial crossbreeding of rival dynasties. All that is enough to turn Paul Atreides, “Dune’s” leading character, played in the new film by Timothee Chalamet, into a hero of the real-life fascist right wing.

An article in the extreme right-wing Daily Stormer describes Atreides as “the leader of the religious and nationalist rebellion against an intergalactic empire.” It even compared him to Hungarian Prime Minister Viktor Orban in his battle against the European Union.

So is “Dune,” the book and the film, really a fascist or reactionary work? There are those who would argue that the question itself is irrelevant. Since it’s a blockbuster and a product of mass consumption, a movie like this purportedly belongs to the field of entertainment, and there’s no point in looking for deep political meaning in it.

As a result, the vast majority of moviegoers will be satisfied watching the space battles and marvel at the sandworms bursting forth from the dunes. But even if relating to a film as entertainment and nothing more is appropriate when it comes to “Star Wars” – “Dune” is something else.

This epic of huge dimensions is based on one of the most serious and complex science-fiction works ever written. It’s not a superficial story about spaceships and swords, but rather a rich, multilayered work in which Herbert developed the theology, ecology, technology and economy of the universe that he created. And even more than that, in the 21st century, we cannot discount the political importance of science fiction.

To a great extent, the post-modern mythologies of fantasy and science fiction currently play the role that national epics played in the late 19th and early 20th centuries. Works such as “Lord of the Rings” and “Dune” can be considered the contemporary counterparts of the German national epic “Song of the Nibelungs,” which inspired Richard Wagner’s opera cycle.

From that standpoint, one can view production of the new film version of “Dune” as another expression of the conservative fantasies of our time, which go well beyond right-wing American extremist circles. People of all ages are easily swept up in works depicting kings and barons and royal dynasties, or in celebrating the racial differences among elves, dwarfs and humans.

But Herbert’s work contains a fundamental element that sets it apart from most other fantasy and science-fiction works: Islam.

Anyone unfamiliar with Herbert’s books will be surprised to come across Arabic, Persian, Turkish and even Hebrew words in the movie. The protagonist Paul Atreides may not be fundamentally different from King Arthur, or from fantasy heroes like Harry Potter or Frodo Baggins (the protagonist of “Lord of the Rings”), but unlike them he bears the title “Mahdi.” This is a concept that originated in the Koran, as a name for the Messiah, especially in Shi’ite Islam. Atreides is also referred to as “Muad’dib,” “Usul” or “Lisan al-Gaib” – titles given here to someone destined to lead a galactic jihad. And if that is not enough – he is also called “Kwisatz Haderech” (the Hebrew term for “the leap forward”), a concept with origins in the Babylonian Talmud.

Herbert cast the future world of “Dune” in the form of a kind of Middle Eastern, Islamic mythology. With the mass-culture reception of this latest rendering of the work, it is conceivable that “Dune” fans will begin to learn Arabic and Persian in order to trace the theological roots of the work. But this space jihadist fantasy also has limitations. The Arab and Islamic characteristics in the work are mostly associated with the Fremen, the desert dwellers on Dune, who are characterized as a kind of rather ignorant and primitive Bedouin.

In his book “Orientalism,” the Palestinian-born intellectual Edward Said criticized the stereotypical representations of the Middle East that are accepted in European and American culture. Indeed, “Dune” is perhaps the most “Orientalist” work in the science-fiction genre. The way in which Arab and Islamic culture are represented is saturated with clichĂ©s. The transliteration of Arabic-language words is incorrect. Moreover: As is common in the realm of “white” fantasy, the Fremen Bedouin expect a white savior to lead them to jihad. Herbert seems to have been influenced in this regard by the figure of T.E. Lawrence (of Arabia), the British Orientalist military officer who led some of the battles of the Arab Revolt during World War I.

Muslim futurism

But even if Herbert represented Islam stereotypically, he deserves credit for at least representing it. Would it have been better for the story of “Dune” – like so many other fictional works – to be set in a Nordic world, with gleaming blond heroes? Dune’s techno-orientalism expresses at least curiosity and fascination with the Islamic world, which is far from self-evident. This curiosity has a context: The books were written in the 1960s, before the era of the “war on terror.” Since the Islamic Revolution in Iran, the rise of Al-Qaida, and the Islamic State, it is hard to imagine that a popular work would be centered around what is referred to as a galactic jihad.

In an article published online in Al-Jazeera last year, Islamic scholar Ali Karjoo-Ravary noted that “Dune” granted Islam a central place in its futuristic world, in an extraordinary way. This future world does not resemble a California-based IT company transplanted to a different world, but a different, non-Western culture with Islamic characteristics.

In recent years, an interesting movement of Muslim futurism has emerged – works of science fiction written by Muslims from around the world, imagining a futuristic Muslim world. It is easier to imagine such a world in the present era, where Middle Eastern urban centers like Dubai and Doha are among the most futuristic cities in the world; moreover, the UAE has sent a probe to Mars. The distant future will not necessarily look like a Facebook board meeting, nor like a gathering of the Knights of the Round Table – but rather like an Islamic caliphate. Only Muslims will save civilization. ~

https://www.haaretz.com/israel-news/dune-may-be-fascist-but-its-focus-on-islam-is-groundbreaking-1.10357745

A reader’s comment:

Just because Neo-Nazis like something, does not make it 'fascist.' That word is so completely overused that it has become meaningless. Current white supremacists (or any supremacist) are not attracted to the centralized autocratic government so much as they are to the racial hierarchy that places them on top.

Another comment:

Denis Villeneuve's Dune suffers from being an umpteenth iteration of a Star Wars-type arc story and its dubious futuristic-medieval construct of space combatants duking it out with knives, but its "orientalist" slant can provide a saving grace.

Dune and Star Wars share the same arc story of the Chosen One reluctant at first to embrace his mission against a galactic oppressor, etc., and Star Wars had more sequels, prequels, and whatnot than I care to remember.

Another:

I think it's a bias of the audience (and the article writer) to assume that Paul is white. He's not described like that in the book at all, but he's *cast* like that in movie adaptations. You could have a majority cast with people of color, that wouldn't matter or change the themes Frank Herbert was writing about at length in his novels. Again, this is a projection of an already biased audience. That's not Herbert's fault, and he would be tearing his hair out knowing that neo-Nazi trash is lauding his work.

Oriana:

Paul is the heir of the House of Atreides, which makes the educated viewer think back all the way to the Trojan war and the House of Atreus (Agamemnon). Does that make Paul Greek, especially given that thousands of years have passed since then? That’s overthinking it. He’s rather a stereotype of the lone hero. Can any literary work entirely escape from convention and be “original”?  

The dunes near Florence, Oregon, thought to have inspired Herbert's work

*
HOW ISLAMIC IS DUNE?

“Dune” is a multilayered allegory for subjects including T.E. Lawrence’s Bedouin exploits (which Herbert critiqued, following Suleiman Mousa’s “T.E. Lawrence: An Arab View”), Caucasian Muslim resistance to Russian imperialism, OPEC, and Indigenous struggles in the United States and Latin America. It is also thoroughly Muslim, exploring how Islam will develop 20,000 years into the future. While drawing on other religions, Herbert saw Islam as “a very strong element” of “Dune’s” entire universe, much as algebra or tabula rasa pervades our own — from Koranic aphorisms spoken by the Bene Gesserit missionary order to the Moorishness of a warrior-poet character (played in the movie by Josh Brolin) to the Shiism of the universe’s bible.

Rather than building and improving upon the novel’s audacious — and yes, Orientalist — engagement with these cultures and experiences, Villeneuve waters down the novel’s specificity. Trying to avoid Herbert’s apparent insensitivity, the filmmakers actively subdued most elements of Islam, the Middle East and North Africa (MENA). The new movie treats religion, ecology, capitalism and colonialism as broad abstractions, stripped of particularity.

Screenwriter Jon Spaihts claimed the book’s influences are “exotic” costumery, which “doesn’t work today,” when, in his words, “Islam is a part of our world.” This flies in the face of Herbert’s explicit aim to counter what he saw as a bias “not to study Islam, not to recognize how much it has contributed to our culture.” The film’s approach backfires: In justifying the film’s exclusion of Muslim and MENA creatives, it truly relegates “Dune’s” Muslimness to exotic aesthetics. The resulting film is both more Orientalist than the novel and less daring.

Take, for example, the languages in “Dune.” To create the Fremen language — often identified simply as Chakobsa, the Caucasian hunting language — Herbert mixed in “colloquial Arabic,” since he reasoned it “would be likely to survive for centuries in a desert environment,” wrote his son, Brian, in “Dreamer of Dune.” Herbert employed what he called an “elision process,” modifying “Arabic roots” to show how “languages change.” Herbert used Arabic throughout his universe, within and beyond the Fremen culture.

The film, however, dispenses with all that. Its Fremen language seems to be a futuristic take on Chakobsa, erasing Herbert’s elided Arabic. The film employs only the minimum Arabic necessary to tell the story, such as “Shai-Hulud” (the planet’s giant sandworms) and “Mahdi” (Paul’s messianic title, also a major figure in Islamic eschatology). The Arabic and Persian that does appear is pronounced poorly. When the Fremen speak English, their accents are a hodgepodge. Maybe people pronounce words differently in the future — but why do the Fremen sound like a 21st-century, Americanized caricature of a generic foreign accent? 

The film’s conlanger, David Peterson, wrote that “Dune” was set so far in the future that “it would be completely (and I mean COMPLETELY) impossible” for so much “recognizable Arabic” to survive. Unaware of Herbert’s inspiration, he also claimed “there’s nothing of the Caucasus in Dune.” For some unexplained reason, the movie’s characters do speak modern English and Mandarin (a fact widely advertised).

Similarly, the film employs “holy war,” not “jihad” — an attempt to avoid the conventional association of jihad with Islamic terrorism. In the book, Herbert’s jihad (which he sometimes calls “crusade”) is a positive description of anti-colonial resistance — but it also describes the colonial violence of the Atreides and the Bene Gesserit. The novel disrupts conventional understandings of the word “jihad”: If popular audiences see jihad as terrorism, then the imperialists, too, are terrorists.

The cinematic “Dune” skirts the novel’s subversive ideas, more black-and-white than its literary parent. Where Herbert challenged fixed, Orientalist categories such as “East” and “West,” the film opts for binaries: It codes obliquely Christian whiteness as imperialist and non-whiteness as anti-imperialist. The obvious Ottoman inspiration behind the Padishah Emperor’s Janissary-like military force, the Sardaukar, is absent. Instead, the imperial troops (who speak what is perhaps meant to be modified Turkish or Mongolian) are depicted with Christian imagery, bloodletting crucified victims. Meanwhile, the Bene Gesserit wear headscarves that look European Christian (with the exception of a beaded Orientalist veil).

The film dilutes Herbert’s anti-imperialist vision in other ways, too. One of the novel’s essential scenes involves a banquet where stakeholders debate the ecological treatment of Fremen. It was the only scene Herbert requested (unsuccessfully) for the David Lynch adaptation. Disgusted with McCarthyism’s bipartisanship, Herbert wrote the scene to expose corruption across political aisles: Liberals, too, are colonizers. One of them is the “Imperial Ecologist,” a half-Fremen named Liet Kynes who “goes native,” reforms the Fremen, and controls their environment. Herbert considered his death the “turning point” of the book: Swallowed by a sand formation even he cannot control, the ecologist realizes his hubris as the archetypal “Western man.”

There is no banquet in the movie; Kynes, played by a Black woman, dies in an act of triumphant defiance. The casting choice presented an incredible opportunity to explore how even subjugated people can participate in the oppression of others — a core theme of Herbert’s saga. Instead, the movie both inverts and reduces the ecologist’s character, simplifying Herbert’s critique of empire and cultural appropriation. It rests on an implicit premise: All dark-skinned people necessarily fit into an anti-colonial narrative, and racial identity easily deflects a character’s relationship to empire. The novel didn’t rely on such easy binaries: It interrogated the layered, particular ways that race, religion and empire can relate to each other.

Kynes’s depiction reflects the film’s broader worldview. It paints the Fremen as generic people of color, who are also generically spiritual. It sprinkles Brown and Black faces throughout the rest of the cast, with sparse attention to cultural or religious detail. The film does accentuate the novel’s critique of Paul as White savior, opening with the question, posed by a Fremen: “Who will our next oppressors be?” But the film fails to connect its abstract critique of messianism to anything resembling the novel’s deep cultural roots. It wants its audience to love the Atreides family and the ecologist — those banquet liberals — while keeping the Muslimness of “Dune” to a low whine.

Hans Zimmer’s score heightens the film’s cultural aimlessness. The music is vaguely religious, with primitive drums. (One hears the influence of Zimmer’s collaborator Edie Boddicker, who also worked with him on “The Lion King.”) The vocals sound like the “Lord of the Rings” hymns. The only distinctly Arab notes, during Paul’s education about Dune, are of “Aladdin” fare. These musical choices are particularly disappointing, given Villeneuve’s previous work. Over a decade ago, he made “Incendies,” a “Dune”-inspired movie that carefully explored MENA politics. Using Radiohead instead of “authentic” Arab music, Villeneuve aimed to interrogate the “westerner’s point of view” as an “impostor’s.” Imagine if Paul, Herbert’s impostor-savior, walked the desert to Zimmer’s cover of Pink Floyd?

This all feels like a missed opportunity. The film could have hired Muslim and MENA talent to lean into these influences, elevating the good and improving the bad. These artists could have developed Fremen custom further (which Herbert sometimes depicts as stereotypically rigid). What if they crafted language, dress and music, modifying traditional songs or prayers, improving Herbert’s “elisions” — or advanced this universe’s pervasive Islamic theology and eschatology?

On the planet Dune, it takes risk and creativity to cross the desert without attracting a worm’s notice: Fremen alter their regular gait to avoid being engulfed. Herbert was unafraid to explore the rich sands of Islamic and MENA histories, even if he made missteps. He put in the work.  

But the film usurps the ideas that shaped the novel. Seeking to save Muslim and MENA peoples from taking offense, Villeneuve — as Paul does to the Fremen — colonizes and appropriates their experiences. He becomes the White savior of “Dune.” Where Herbert danced unconventionally, the filmmakers avoid the desert entirely. But is it so hard to walk without rhythm?

https://www.washingtonpost.com/outlook/2021/10/28/dune-muslim-influences-erased/

[MENA= Middle East and North Africa]

 

from Aljazeera:

A quick look at Frank Herbert’s appendix to Dune, “the Religion of Dune”, reveals that of the “ten ancient teachings”, half are overtly Islamic. And outside of the religious realm, he filled the terminology of Dune’s universe with words related to Islamic sovereignty. The Emperors are called “Padishahs”, from Persian; their audience chamber is called the “selamlik”, Turkish for the Ottoman court’s reception hall; and their troops have titles with Turco-Persian or Arabic roots, such as “Sardaukar”, “caid”, and “bashar”. Herbert’s future is one where “Islam” is not a separate unchanging element belonging to the past, but a part of the future universe at every level. The world of Dune cannot be separated from its language, and as reactions on Twitter have shown, the absence of that language in the movie’s promotional material is a disappointment. Even jihad, a complex, foundational principle of Herbert’s universe, is flattened – and Christianised – to crusade.

To be sure, Herbert himself defines jihad using the term “crusade”, twice in the narrative as a synonym for jihad and once in the glossary as part of his definition of jihad, perhaps reaching for a simple conceptual parallel that may have been familiar to his readership. But while he clearly subsumed crusade under jihad, much of his readership did the reverse.

One can understand why. Even before the War on Terror, jihad was what the bad guys do. Yet as Herbert understood, the term is a complicated one in the Muslim tradition; at root, it means to struggle or exert oneself. It can take many forms: internally against one’s own evil, externally against oppression, or even intellectually in the search for beneficial knowledge. And in the 14 centuries of Islam’s history, like any aspect of human history, the term jihad has been used and abused. Having studied Frank Herbert’s notes and papers in the archives of California State University, Fullerton, I have found that Herbert’s understanding of Islam, jihad, and humanity’s future is much more complex than that of his interpreters. His use of jihad grapples with this complicated tradition, both as a power to fight against the odds (whether against sentient AI or against the Empire itself), but also something that defies any attempt at control.

Herbert’s nuanced understanding of jihad shows in his narrative. He did not aim to present jihad as simply a “bad” or “good” thing. Instead, he uses it to show how the messianic impulse, together with the apocalyptic violence that sometimes accompanies it, changes the world in uncontrollable and unpredictable ways. And, of course, writing in the 1950s and 1960s, the jihad of Frank Herbert’s imagination was not the same as ours, but drawn from the Sufi-led jihads against French, Russian, and English imperialism in the 19th and mid-20th centuries. The narrative exhibits this influence of Sufism and its reading of jihad, where, unlike in a crusade, a leader’s spiritual transformation determined the legitimacy of his war.

In Dune, Paul must drink the “water of life”, to enter (to quote Dune) the “alam al-mithal, the world of similitudes, the metaphysical realm where all physical limitations are removed,” and unlock a part of his consciousness to become the Mahdi, the messianic figure who will guide the jihad. The language of every aspect of this process is the technical language of Sufism.

Perhaps the trailer’s use of “crusade” is just an issue of marketing. Perhaps the film will embrace the characteristically Islam-inspired language and aesthetics of Frank Herbert’s universe. But if we trace the reception of “the strong Muslim flavor” in Dune, to echo an editor on one of Herbert’s early drafts, we are confronted with Islam’s unfavorable place in America’s popular imagination. In fact, many desire to interpret Dune through the past, hungering for a historic parallel to these future events because, in their minds, Islam belongs to the past. Yet who exists in the future tells us who matters in our present. NK Jemisin, the three-time Hugo award-winning author, writes: “The myth that Star Trek planted in my mind: people like me exist in the future, but there are only a few of us. Something’s obviously going to kill off a few billion people of color and the majority of women in the next few centuries.”

Jemisin alerts us to the question: “Who gets to be a part of the future?”

Unlike many of his, or our, contemporaries, Herbert was willing to imagine a world that was not based on Western, Christian mythology. This was not just his own niche interest. Even in the middle of the 20th century, it was obvious that the future would be colored by Islam based on demographics alone. This is clearer today as the global Muslim population nears a quarter of humanity.

While this sounds like an alt-right nightmare/fantasy, Herbert did not think of Islam as the “borg”, an alien hive mind that allows for no dissent. Herbert’s Islam was the great, capacious, and often contradictory discourse recently expounded by Shahab Ahmed in his monumental book, What is Islam? Herbert understood that religions do not act. People act. Their religions change like their languages, slowly over time in response to the new challenges of time and place. Tens of thousands of years into the future, Herbert’s whole universe is full of future Islams, similar but different from the Islams of present and past.

Herbert countered a one-dimensional reading of Islam because he disavowed absolutes. In an essay titled “Science Fiction and a World in Crisis,” he identified the belief in absolutes as a “characteristic of the West” that negatively influenced its approach to crisis. He wrote that it led the “Western tradition” to face problems “with the concept of absolute control”. This desire for absolute control is what leads to the hero-worship (or “messiah-building”) that defines our contemporary world. It is this impulse that he sought to tear down in Dune.

In another essay, “Men on Other Planets,” Herbert cautions against reproducing clichĂ©s, reminding writers to question their underlying assumptions about time, society, and religion. He encourages them to be subversive, because science fiction “permits you to go beyond those cultural norms that are prohibited by your society and enforced by conscious (and unconscious) literary censorship in the prestigious arenas of publication”.

https://www.aljazeera.com/opinions/2020/10/11/paul-atreides-led-a-jihad-not-a-crusade-heres-why-that-matters


One of the absurdities of Dune: if Fremen are a relatively simple tribe, based on the Bedouin [who at least had camels; Fremen have no vehicles of any sort], how come they are wearing the technologically advanced, moisture-conserving stillsuits?

 
*
STALIN'S PERSISTENT POPULARITY IN RUSSIA (March 2019)

~ “Burn in hell, executioner of the people and murderer of women and children!” shouted Yevgeny Suchkov, before snapping a red carnation and hurling it at a granite bust of Stalin. Police and Kremlin security officers reacted instantly, seizing him in a neck lock and dragging him away.

Stalin’s reputation has soared in Russia since Vladimir Putin, a former KGB officer, came to power in 2000. Busts and portraits of the Soviet dictator, once taboo, have reappeared across the country in recent years.

The decommunization movement, of which Suchkov is a member, by contrast wants to see reminders of the Soviet era removed from Russia’s streets and the state archives relating to Stalin’s campaign of political terror opened to the public.

While all the attention was on Suchkov, his fellow activist, Olga Savchenko, stepped up to Stalin’s grave and calmly said: “Shame on the executioner.” She was also detained.

Suchkov, 21, said he had felt obliged to stage his protest because standing by and doing nothing in the face of an “homage to evil” would make him an accessory.

“And I have no intention of becoming an accessory to the evil that was Stalin and Stalinism,” he told the Guardian after his release from police custody. Savchenko, 25, said her great-grandfather was executed by the Soviet secret police in 1937 at the height of Stalin’s purges.

Both activists were ordered to pay a small fine, although they said police had not specified in their report exactly what offense they had been charged with.

Almost three decades on from the collapse of the communist system in Russia, thousands of metro stations, streets and squares across the country continue to bear the name of Soviet leaders and officials, while almost every town or city has a statue of Vladimir Lenin. Opinion polls indicate around 25% of Russians believe Stalin’s campaign of political terror, estimated to have killed some 20 million people, was “historically justified”.

Critics accused the decommunization movement of inflaming dangerous social tensions. “We must clamp down hard on this or tomorrow blood will flow across the whole country,” said Alexander Yushchenko, a Communist Party MP.

Knowledge about the Stalin era is patchy among young Russians. There is nothing in the official school curriculum about Stalinist terror, and children can go through their entire school years without hearing anything about the topic. Unsurprisingly, almost half of all Russians aged 18-24 know nothing at all about Stalin’s purges, according to an opinion poll published last year.

Ukraine implemented a decommunization program after protests overthrew the country’s pro-Moscow president in 2014. Hundreds of Soviet-era statues and monuments have since been toppled in the parts of the country under government control.

“There is no other method of overcoming the Soviet legacy,” said Alexander Polozun, a 20-year-old decommunization activist. ~

https://www.theguardian.com/world/2019/mar/08/homage-to-evil-russians-activists-detained-over-stalin-protest


Oriana:

Even though this is not a new article, on December 18 (Stalin’s birthday) I thought again of how Stalin, a mass murderer, is still adored in Russia.

I remember the term “de-Stalinization” from my childhood, but then Poland was not the Soviet Union. The average Pole hated Stalin. Not so in Russia.

Though there are still neo-Nazis in Germany and elsewhere, there are no statues of Hitler anywhere in the country. It is illegal to display the swastika, so German Nazis use the Confederate flag instead.

I know I’ve used this image before, perhaps more than once. To me it’s iconic: a reminder that if indoctrination is intense enough and long enough, it persists.

*
LIFE OF THE SIMPSONS NO LONGER ATTAINABLE

~ The most famous dysfunctional family of 1990s television enjoyed, by today’s standards, an almost dreamily secure existence that now seems out of reach for all too many Americans. I refer, of course, to the Simpsons. Homer, a high-school graduate whose union job at the nuclear-power plant required little technical skill, supported a family of five. A home, a car, food, regular doctor’s appointments, and enough left over for plenty of beer at the local bar were all attainable on a single working-class salary. Bart might have had to find $1,000 for the family to go to England, but he didn’t have to worry that his parents would lose their home.

This lifestyle was not fantastical in the slightest—nothing, for example, like the ridiculously large Manhattan apartments in Friends. On the contrary, the Simpsons used to be quite ordinary—they were a lot like my Michigan working-class family in the 1990s.

The 1996 episode “Much Apu About Nothing” shows Homer’s paycheck. He grosses $479.60 per week, making his annual income about $25,000. My parents’ paychecks in the mid-’90s were similar. So were their educational backgrounds. My father had a two-year degree from the local community college, which he paid for while working nights; my mother had no education beyond high school. Until my parents’ divorce, we were a family of three living primarily on my mother’s salary as a physician’s receptionist, a working-class job like Homer’s.

By 1990—the year my father turned 36 and my mother 34—they were divorced. And significantly, they were both homeowners—an enormous feat for two newly single people.

Neither place was particularly fancy. I’d estimate that the combined square footage of both roughly equaled that of the Simpsons’ home. Their houses were their only source of debt; my parents have never carried a credit-card balance. Within 10 years, they had both paid off their mortgages.

Neither of my parents had much wiggle room in the budget. I remember Christmases that, in hindsight, looked a lot like the one portrayed in the first episode of The Simpsons, which aired in December 1989: handmade decorations, burned-out light bulbs, and only a handful of gifts. My parents had no Christmas bonus or savings, so the best gifts usually came from people outside our immediate family.

Most of my friends and classmates lived the way we did—that is, the way the Simpsons did. Some families had more secure budgets, with room for annual family vacations to Disney World. Others lived closer to the edge, with fathers taking second jobs as mall Santas or plow-truck drivers to bridge financial gaps. But we all believed that the ends could meet, with just an average amount of hustle.

Over the years, Homer and his wife, Marge, also face their share of struggles. In the first episode, Homer becomes a mall Santa to bring in some extra cash after he learns that he won’t receive a Christmas bonus and the family spends all its Christmas savings to get Bart’s new tattoo removed. They also occasionally get a peek into a different kind of life. In Season 2, Homer buys the hair-restoration product “Dimoxinil.” His full head of hair gets him promoted to the executive level, but he is demoted after Bart accidentally spills the tonic on the floor and Homer loses all of his new hair. Marge finds a vintage Chanel suit at a discount store, and wearing it grants her entrĂ©e into the upper echelons of society.

The Simpsons started its 32nd season this past fall. Homer is still the family’s breadwinner. Although he’s had many jobs throughout the show’s run—he was even briefly a roadie for the Rolling Stones—he’s back at the power plant. Marge is still a stay-at-home parent, taking point on raising Bart, Lisa, and Maggie and maintaining the family’s suburban home. But their life no longer resembles reality for many American middle-class families.

Adjusted for inflation, Homer’s 1996 income of $25,000 would be roughly $42,000 today, about 60 percent of the 2019 median U.S. income. But salary aside, the world for someone like Homer Simpson is far less secure. Union membership, which protects wages and benefits for millions of workers in positions like Homer’s, dropped from 14.5 percent in 1996 to 10.3 percent today. With that decline came the loss of income security and many guaranteed benefits, including health insurance and pension plans. In 1993’s episode “Last Exit to Springfield,” Lisa needs braces at the same time that Homer’s dental plan evaporates. Unable to afford Lisa’s orthodontia without that insurance, Homer leads a strike. Mr. Burns, the boss, eventually capitulates to the union’s demand for dental coverage, resulting in shiny new braces for Lisa and one fewer financial headache for her parents. What would Homer have done today without the support of his union?

The purchasing power of Homer’s paycheck, moreover, has shrunk dramatically. The median house costs 2.4 times what it did in the mid-’90s. Health-care expenses for one person are three times what they were 25 years ago. The median tuition for a four-year college is 1.8 times what it was then. In today’s world, Marge would have to get a job too. But even then, they would struggle. Inflation and stagnant wages have led to a rise in two-income households, but to an erosion of economic stability for the people who occupy them.

Last year, my gross income was about $42,000—the amount Homer would be making today. It was the second-highest-earning year of my career. I wanted to buy a home, but no bank was willing to finance a mortgage, especially since I had less than $5,000 to make a down payment. However, my father offered me a zero-down, no-interest contract. Without him, I would not have been able to buy the house. (In one episode, Homer's dad helps him with a down payment on his home.)

I finally paid off my medical debt. But after taking into account all of my expenses, my adjusted gross income was only $19. And with the capitalized interest on my student loans adding thousands to the balance, my net worth is still negative.

I don’t have Bart, Lisa, and Maggie to feed or clothe or buy Christmas presents for. I’m not sure how I’d make it if I did.

Someone I follow on Twitter, Erika Chappell, recently encapsulated my feelings about The Simpsons in a tweet: “That a show which was originally about a dysfunctional mess of a family barely clinging to middle class life in the aftermath of the Reagan administration has now become aspirational is frankly the most on the nose manifestations [sic] of capitalist American decline I can think of.”

For many, a life of constant economic uncertainty—in which some of us are one emergency away from losing everything, no matter how much we work—is normal. Second jobs are no longer for extra cash; they are for survival. It wasn’t always this way. When The Simpsons first aired, few would have predicted that Americans would eventually find the family’s life out of reach. But for too many of us now, it is. ~

https://www.theatlantic.com/ideas/archive/2020/12/life-simpsons-no-longer-attainable/617499/

Mary:

In my family in the 1950s my father was the wage earner, my mother was at home, and there were, by 1963, seven children. We had nothing fancy, but everything we needed. No one went hungry or without clothes or shoes, and sacrifices were made to pay tuition for us at Catholic schools. Parish schools, not private schools; the tuition was reasonable. When the first three of us went to high school we continued at Catholic school, but we worked to pay our tuition.

This all gradually changed, to the point that my mother did go out to work, even as we were older and moving into our adult lives. Five of us went to college, but all on combinations of scholarships, grants, loans and work-study; our parents didn't, and couldn't have, paid those bills.

Everything is vastly different now: less secure, more expensive, and many things much less available. Few jobs are secure enough to last most of a lifetime, and few families can afford so many kids or get them educated so well. Families that could live on one worker's wages are rare; in most, both partners work outside the home... and even with two salaries, expenses are such that most families carry quite a burden of debt. The only debt my parents carried was their mortgage; they never used credit. When I graduated from university I owed $2,000. This is laughable now, as people are graduating owing hundreds of thousands of dollars — debts so huge they have to postpone many things, like buying a home and starting a family. So many things are simply out of reach.

Oriana:

One of the things that most impressed me about the US soon after my arrival was the ability of a factory worker to support his entire family on his salary. In Milwaukee I had a simple but filling hamburger lunch at the home of one such worker. He belonged to a union, of course. I forget now how many kids there were, two or maybe three. The house was modest but the family owned it; the husband had a car, while his wife drove an obviously cheaper one, perhaps bought used. Still, given where I was coming from, this seemed like a fairy tale. And to think that officially it was the Soviet Union that advertised itself as “the workers’ paradise.”

The first three decades after the end of WWII are regarded as a time of unprecedented prosperity in the US (and in the West in general). It seemed as if the standard of living would just keep on rising. Instead, it was the cost of living that began to soar, and union busting became standard practice. One bit of hope I see is that unions seem to be coming back. But then, most manufacturing is now done abroad, and those good union wages were tied mainly to manufacturing, once the dominant kind of employment.

*
CRIMES OF FASHION: CLOTHES USED TO BE VERY EXPENSIVE

~ Could something as mundane as a shirt ever be the motive for murder? What if clothing were more expensive than rent or a mortgage? In 1636 a maidservant, Joan Burs, went out to buy mercury. A toxic heavy metal, mercury damages the nervous system and can produce the sensation of insects crawling beneath the skin. Burs baked the poison into a milk posset (which often contained spices and alcohol that might have masked the bitter taste), planning to kill her mistress. She believed that if the lady of the house were dead, she herself might get better clothing.

The simplest kind of coat cost £1, which was 20 days’ labor for a skilled tradesman. Clothes were sometimes mentioned first in a will, since they could be more expensive than a house. Even the well-off, such as Samuel Pepys, remade and refashioned existing garments as much as they could rather than buying new.

It is no wonder, therefore, that there was a thriving black market for second-hand clothing of dubious provenance; much of the clothing worn by the middling and working classes essentially ‘fell off a cart’. The web of how such things were acquired could become extremely complex, as tinkers hawked both new and second-hand wares, and items were passed on or exchanged – not to mention the markets that thrived on the clothing trade. 

To supply the country’s insatiable demand for new clothes, thieves might strip drunk people on their way home from a night out, force doors, or even tear down walls. In urban areas in 17th-century England stolen clothes accounted for more prosecutions than any other crime. It was rare for anyone to commit (or attempt to commit) murder over an item of clothing, but the motivations for stealing were broad. Often, they were crimes of opportunity: freshly washed linen hung out to dry on hedges, waiting to be snatched by any passer-by.

Some thefts, however, were more complicated, involving acting and the tricks of the con-artist’s trade. One cold winter’s night (since it was the Little Ice Age, every winter’s night was cold), a teenage boy was sent on a simple errand. All he had to do was take some clothes – valued at about £4, no small sum – and deliver them to a gentleman across the city. As he passed along Watling Street, a woman stopped him and demanded his name, his mother’s name, where he lived and what his errand was. He answered her questions and continued on his journey. Meanwhile, the woman passed all this information on to her partner-in-crime, who set off after the boy, hailing him by name and speaking of his mother. She asked him to buy a shoulder of mutton for her while she waited with the clothes. The boy did so, but returned to find no woman and no clothes. Such operations would have been immensely profitable and difficult to trace, as the stolen goods would have been sold on to the second-hand clothes dealers who supplied the whole country.

No member of society was safe from the theft of clothes. Perhaps the best-loved, and certainly one of the best-known, celebrities of the Elizabethan period (as well as being Elizabeth I’s personal jester) was the clown Richard Tarlton, known for his witty comebacks and cheeky persona. One night, while Tarlton was downstairs at an inn, wearing only his shirt and nightgown, drinking with some musician friends, a thief crept into his room and stole all his clothes. The story traveled around London to great hilarity and the clown was publicly mocked when he next performed onstage. However, Tarlton had the somewhat macabre last laugh, responding to the crowd with one of the impromptu verses that made him famous. He declared,

When that the theefe shall Pine and lacke,
Then shall I have cloathes to my backe:
And I, together with my fellowes,
May see them ride to Tiborne Gallowes.

Those caught stealing clothes were frequently hanged at Tyburn, known as ‘Tyburn tree’. (Executions were supposed to deter thieving.) Spending their last night at Newgate prison, they would be paraded through the streets in a horse and cart before a boisterous crowd, all jostling for the best view of the condemned and hanging on the thief’s last words. Ironically, the events were prime sites for pickpockets.

While clothing could be the motive for theft or murder because it was so difficult to come by, an accurate description by a witness of the perpetrator’s clothing could secure a conviction. For example, after Francis Terry stole wheat from a barn in 1626 he left a distinctive footprint that made identifying him easy. The print showed three indentations mapped to three nails on the sole of Terry’s right boot.

After other crimes, witnesses recalled a man in a red coat, wearing a hat with a hole in it, or dressed in grey clothes. Since many people only had one or two outfits, this was seen as positive proof and helped secure a conviction. Finally, in close communities where word of mouth was paramount, any change in clothing could arouse suspicion. Mary Watts gave the game away after allegedly stealing a silver bowl and some clothing, since she bought herself new clothes with the profits, to the shock of the community around her.

People in the 16th and 17th centuries had a relationship with clothing that is difficult to comprehend in an age of fast fashion, where clothes change with the seasons and any change in identity is instantly worn on the body. But, for early modern people, fashion was just as connected to identity. Most could not afford to change their clothes often, but their outfits became part of how they were seen and how they saw themselves. A change of clothing could provoke anger, hilarity, or even thoughts of murder.

https://www.historytoday.com/archive/history-matters/crimes-fashion?utm_source=pocket-newtab

Mary:
 
It is hard for us to imagine how hard clothes were to come by in the past, how very few most people had, and how tempting they were to thieves as valuable objects. Old houses, both old workers' row houses and old Victorian city houses like the one my parents eventually bought, had cupboards for clothing. These cupboards were, however, very shallow, and could only accommodate a few garments.
 
That was because, as a rule, people had only a few. I grew up this way through grade school and high school. We wore uniforms every day, had some clothes for after-school play or work, and something for Sunday or special occasions. We didn't need deep closets.

There were advantages to this. You didn't have to spend time deciding what to wear, for one, and it made everyone more or less equal: the rich kids wore the same uniform as the poorer kids. No one was shamed for their clothes, for the lack of finery or the latest trend. Now clothes are generally cheap and part of our throwaway culture. They are poorly made and don't last long. Haute couture, and good, well-made clothes in general, are as far from the ordinary person as the fineries of the upper class were in the days of nobles and peasants.

I think my upbringing stays with me in this. I usually have one purse I use every day until it wears out, and one or two dress ones for special occasions. Three or four pairs of shoes. When I see the quantities of these things some women have, I am somewhat taken aback, thinking what a bother it must be to always be choosing and changing -- especially shifting things from purse to purse all the time. I don't spend much time thinking about any of it.

Maybe my experience of these things is unusual, but I don't think it's far from most people's. The world of our parents is not the world we have now, and many of my generation, and of those after, have found themselves with a standard of living lower than what their parents could aspire to.

Oriana:

When I came to this country, I was startled by the low price of clothes relative to food, for instance. Clothes were expensive in Poland, and having just two pairs of slacks, say, was not regarded as poverty. Having lots and lots of clothes wasn't even imaginable. 

I remember a guest scientist from Poland who stayed at my mother's house for a while. One day my mother pointed out that there was a sale at Sears, two pairs of women's pants for ten dollars (or whatever it was). The Polish woman said, "But I already have two pairs: one gray, and one black." 

My mother and I used to chuckle over this, somehow forgetting that not so long ago, in Poland, we ourselves wouldn't have understood why anyone needed more than two pairs of anything -- and that there is little point in accumulating clothes that are generally cheap and don't last.

This is a culture of excess. The voices of protest are few but becoming louder, it seems to me. There is, for instance, the Buy Nothing movement, and consumerism is increasingly condemned as a threat to the environment. Recently I came across two articles that suggested that adults stop buying Christmas gifts for other adults -- gifts should be just for children. It makes sense. What a relief it would be! 

Of course we don’t want a return of the world in which most people had only one outfit, with one coat for winter if they were lucky. Oh yes, let me not forget “Sunday clothes.” There must have been some people who didn’t go to church precisely because they couldn’t afford Sunday clothes. And the sight of children wearing rags didn’t seem to offend anyone. 

No, we certainly don’t want that world, but perhaps we need to imagine one in which clothing is less abundant but of good quality.

A public washing ground

*
HOW TO PRAY TO A DEAD GOD

~ On an evening in 1851, a mutton-chopped 28-year-old English poet and critic looked out at the English Channel with his new bride. Walking along the white chalk cliffs of Dover, jagged and streaked black with flint as if the coast had just been ripped from the Continent, he would recall that:

The sea is calm to-night.
The tide is full, the moon lies fair
Upon the straits; on the French coast, the light
Gleams, and is gone; the cliffs of England stand,
Glimmering and vast, out in the tranquil bay.

Matthew Arnold’s poem ‘Dover Beach’ then turns in a more forlorn direction. While listening to pebbles thrown upon Kent’s rocky strand, brought in and out with the night tides, the cadence brings an ‘eternal note of sadness in’. That sound, he thinks, is a metaphor for the receding of religious belief, as

The Sea of Faith
Was once, too, at the full, and round earth’s shore …
But now I only hear
Its melancholy, long, withdrawing roar,
Retreating, to the breath
Of the night-wind, down the vast edges drear
And naked shingles of the world.


Eight years before Charles Darwin’s On the Origin of Species (1859) and three decades before Friedrich Nietzsche’s Thus Spoke Zarathustra (1883-5) – with its thunderclap pronouncement that ‘God is dead’ – Arnold already heard religion’s retreat. 

Darwin’s theory was only one of many challenges to traditional faith, including the radical philosophies of the previous century, the discoveries of geology, and the Higher Criticism of German scholars who proved that scripture was composed by multiple, fallible people over several centuries. While in previous eras a full-throated scepticism concerning religion was an impossibility, even among freethinkers, by the 19th century it suddenly became intellectually possible to countenance agnosticism or atheism. The tide going out in Arnold’s ‘sea of faith’ was a paradigm shift in human consciousness.

What ‘Dover Beach’ expresses is a cultural narrative of disenchantment. Depending on which historian you think authoritative, disenchantment could begin with the 19th-century industrial revolution, the 18th-century Enlightenment, the 17th-century scientific revolution, the 16th-century Reformation, or even when medieval Scholastic philosophers embraced nominalism, which denied that words had any connection to ultimate reality. Regardless, there is broad consensus on the course of the narrative. At one point in Western history, people at all stations of society could access the sacred, which permeated all aspects of life, giving both purpose and meaning. During this premodern age, existence was charged with significance. 

At some point, the gates to this Eden were sutured shut. The condition of modernity is defined by the irrevocable loss of easy access to transcendence. The German sociologist Max Weber wrote in his essay ‘Science as a Vocation’ (1917) that the ‘ultimate and most sublime values have retreated from public life either into the transcendental realm of mystic life or into the brotherliness of direct and personal human relations,’ the result of this retraction being that the ‘fate of our times is characterized by rationalization and intellectualization and, above all, by the “disenchantment of the world”.’

A cognoscente of the splendors of modern technology and of the wonders of scientific research, Arnold still felt the loss of the transcendent, the numinous, and the sacred. Writing in his book God and the Bible (1875), Arnold admitted that the ‘personages of the Christian heaven and their conversations are no more matter of fact than the personages of the Greek Olympus’ and yet he mourned for faith’s ‘long, withdrawing roar’.

Some associated the demise of the supernatural with the elimination of superstition and all oppressive religious hierarchies, while others couldn’t help but mourn the loss of transcendence, of life endowed with mystery and holiness. Regardless of whether modernity was welcomed or not, this was our condition now. Even those who embraced orthodoxy, to the extremes of fundamentalism, were still working within the template set by disenchantment, as thoroughly modern as the rest of us. Thomas Hardy, another English poet, imagined a surreal funeral for God in a 1912 lyric, with his narrator grieving that

. . . toward our myth’s oblivion,
Darkling, and languid-lipped, we creep and grope
Sadlier than those who wept in Babylon,
Whose Zion was a still abiding hope.

How people are to grapple with disenchantment remains the great religious question of modernity. ‘And who or what shall fill his place?’ Hardy asks. How do you pray to a dead God?

The question was a central one not just in the 19th century, but among philosophers in the subsequent century, though not everyone was equally concerned. When it came to where, or how, to whom, or even why somebody should direct their prayers, Thomas Huxley didn’t see an issue.

A stout, pugnacious, bulldog of a man, the zoologist and anatomist didn’t become famous until 1860, when he appeared to debate Darwinism with the unctuous Anglican Bishop of Winchester, Samuel Wilberforce, at the University of Oxford. Huxley was the ever-modern man of science and a recipient of a number of prestigious awards – the Royal Medal, the Wollaston Medal, the Clarke Medal, the Copley Medal, and the Linnean Medal – all garnered in recognition of his contributions to science. By contrast, Wilberforce was the decorated High Church cleric, bishop of Oxford and dean of Westminster. The former represented rationalism, empiricism and progress; the latter the supernatural, traditionalism and the archaic. 

Unfortunately for Wilberforce, Huxley was on the side of demonstrable data. In a room of dark wood and taxidermied animals, before an audience of a thousand, Wilberforce asked Huxley which side of the esteemed biologist’s family a gorilla was on – his grandmother’s or his grandfather’s? Huxley reportedly responded that he ‘would rather be the offspring of two apes than be a man and afraid to face the truth.’ The debate was a rout.

Of course, evolution had implications for any literal account of creation, but critics like Wilberforce really feared the moral implications of Huxley’s views. Huxley had a rejoinder. Writing in his study Evolution and Ethics (1893), he held that ‘Astronomy, Physics, Chemistry, have all had to pass through similar phases, before they reached the stage at which their influence became an important factor in human affairs’ and so too would ethics ‘submit to the same ordeal’.

Rather than relying on ossified commandments, Huxley believed that reason ‘will work as great a revolution in the sphere of practice’. Such a belief in progress was common among the 19th-century intelligentsia, the doctrine that scientific knowledge would improve not just humanity’s material circumstances but their moral ones as well. 

What, then, of transcendence? Inheritors of a classic, English education, both Huxley and Wilberforce (not to mention Arnold) were familiar with that couplet of the poet Alexander Pope, rhapsodizing Isaac Newton in 1730: ‘Nature, and Nature’s laws lay hid in night. / God said, Let Newton be! and all was light!’ For some, the answer to what shall fill God’s place was obvious: science.

The glories of natural science were manifold. Darwin comprehended the ways in which moths and monkeys alike were subject to the law of adaptation. From Newton onward, physicists could predict the parabola of a planet or a cricket ball with equal precision, and the revolution of Antoine Lavoisier transformed the alchemy of the Middle Ages into rigorous chemistry. By the 19th century, empirical science had led to attendant technological wonders; the thermodynamics of James Clerk Maxwell and Lord Kelvin gave us the steam engine, while the electrodynamics of Michael Faraday would forever (literally) illuminate the world. Meanwhile, advances in medicine from experimentalists such as Louis Pasteur ensured a rise in life expectancy.

Yet some were still troubled by disenchantment. Those like Arnold had neither the optimism of Huxley nor the grandiosity of Pope. Many despaired at the reduction of the Universe to a cold mechanization – even when they assented to the accuracy of those theories. Huxley might see ingenuity in the connection of joint to ligament, the way that skin and fur cover bone, but somebody else might simply see meat and murder. Even Darwin would write that the ‘view now held by most physicists, namely, that the Sun with all the planets will in time grow too cold for life … is an intolerable thought.’ Such an impasse was a difficulty for those convinced by science but unable to find meaning in its theories. For many, purpose wasn’t an attribute of the physical world, but rather something that humanity could construct.

Art was the way out of the impasse. Our prayers weren’t to be oriented towards science, but rather towards art and poetry. In Literature and Dogma (1873), Arnold wrote that the ‘word “God” is … by no means a term of science or exact knowledge, but a term of poetry and eloquence … a literary term, in short.’ Since the Romantics, intellectuals affirmed that in artistic creation enchantment could be resurrected. Liberal Christians, who affirmed contemporary science, didn’t abandon liturgy, rituals and scripture, but rather reinterpreted them as culturally contingent. 

In Germany, the Reformed theologian Friedrich Schleiermacher rejected both Enlightenment rationalism and orthodox Christianity, positing that an aesthetic sense defined faith, while still concluding in a 1799 address that ‘belief in God, and in personal immortality, are not necessarily a part of religion.’  

Like Arnold, Schleiermacher saw ‘God’ as an allegorical device for introspection, understanding worship as being ‘pure contemplation of the Universe’. Such a position was influential throughout the 19th century, particularly among American Transcendentalists such as Henry Ward Beecher and Ralph Waldo Emerson.

Lyman Stewart, the Pennsylvania tycoon and co-founder of the Union Oil Company of California, had a different solution to the so-called problem of the ‘death of God’. Between 1910 and 1915, Stewart convened conservative Protestant ministers across denominations, including Presbyterians, Baptists and Methodists, to compile a 12-volume set of books of 90 essays entitled The Fundamentals: A Testimony to the Truth, writing in 1907 that his intent was to send ‘some kind of warning and testimony to the English-speaking ministers, theological teachers, and students, and English-speaking missionaries of the world … which would put them on their guard and bring them into right lines again.’

Considering miracles of scripture, the inerrancy of the Bible, and the relationship of Christianity to contemporary culture, the set was intended to be a ‘new statement of the fundamentals of Christianity’. Targets included not just liberal Christianity, Darwinism and secular Bible scholarship, but also socialism, feminism and spiritualism. Writing about the ‘natural view of the Scriptures’, which is to say a secular interpretation, the contributor Franklin Johnson oddly echoed Arnold’s oceanic metaphor, writing that liberalism is a ‘sea that has been rising higher for three-quarters of a century … It is already a cataract, uprooting, destroying, and slaying.’

Like many radicals, Stewart’s ministers – such as Louis Meyer, James Orr and C I Scofield – saw themselves as returning to first principles, hence their ultimate designation as being ‘fundamentalists’. But they were as firmly of modernity as Arnold, Huxley or Schleiermacher.

Despite their revanchism, the fundamentalists posited theological positions that would have been nonsensical before the Reformation, and their own anxious jousting with secularism – especially their valorization of rational argumentation – served only to belie their project.

Praying towards science, art or an idol – all responses to disenchantment, but not honest ones. Looking with a clear eye, Nietzsche formulated an exact diagnosis. In The Gay Science (1882), he wrote:

God is dead. God remains dead. And we have killed him … What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us?


Nietzsche is sometimes misinterpreted as a triumphalist atheist. Though he denied the existence of a personal creator, he wasn’t in the mold of bourgeois secularists such as Huxley, since the German philosopher understood the terrifying implications of disenchantment. There are metaphysical and ethical ramifications to the death of God, and if Nietzsche’s prescription remains suspect – ‘Must we ourselves not become gods simply to appear worthy of it?’ – his appraisal of our spiritual predicament is foundational. Morning star of 20th-century existentialism, Nietzsche shared an honest acceptance of the absurdity of reality, asking how it is that we’re able to keep living after God is dead.

Another forerunner of existentialism was the Russian novelist Fyodor Dostoevsky, who had a different solution. The Brothers Karamazov (1879) enacts a debate about faith far more nuanced than the bloviating between Huxley and Wilberforce. Two brothers – Ivan and Alyosha – discuss belief; the former is a materialist who rejects God, and the latter is an Orthodox novice.

Monotheistic theology has always wrestled with the question of how an omnibenevolent and omnipotent God could allow for evil. Theodicy has proffered solutions, but all have ultimately proven unsatisfying. To imagine a God who either isn’t all good or isn’t all powerful is to not imagine God at all; to rationalize the suffering of the innocent is ethically monstrous. And so, as Ivan tells his brother, God himself is ‘not worth the tears of that one tortured child’. Finally, Alyosha kisses his brother and departs. Such an enigmatic action is neither condescension nor concession, even though the monk agrees with all of Ivan’s reasoning. Rather, it’s an embrace of the absurd, what the Danish philosopher SĂžren Kierkegaard would call a ‘leap of faith’. It is a commitment to pray even though you know that God is dead.

ShĆ«saku Endƍ, in his novel Silence (1966), about the 17th-century persecution of Japanese Christians, asks: ‘Lord, why are you silent? Why are you always silent?’ Following the barbarity of the Holocaust and Hiroshima, all subsequent authentic theology has been an attempt to answer Endƍ. With Nietzsche’s predicted wars, people confronted the new gods of progress and rationality, as the technocratic impulse made possible industrial slaughter. If disenchantment marked the anxieties of Romantics and Victorians, then the 20th-century dreams of a more fair, wise, just and rational world were dissipated by the smoke at Auschwitz and Nagasaki. Huxley’s fantasy was spectacularly disproven in the catastrophic splitting of the atom.

These matters were not ignored in seminaries, for as the journalist John T Elson wrote in Time magazine in 1966: ‘Even within Christianity … a small band of radical theologians has seriously argued that the churches must accept the fact of God’s death, and get along without him.’ That article was in one of Time’s most controversial – and bestselling – issues. Elson popularized an evocative movement that approached the death of God seriously, and asked how enchantment was possible during our age of meaninglessness. Thinkers who were profiled included Gabriel Vahanian, William Hamilton, Paul van Buren and Thomas J J Altizer, all of whom believed that ‘God is indeed absolutely dead, but [propose] to carry on and write a theology … without God.’ Working at progressive Protestant seminaries, the death of God movement, to varying degrees, promulgated a ‘Christian atheism’.

Radical theology is able to take religion seriously – and to challenge religion. Vahanian, a French Armenian Presbyterian who taught at Syracuse University in New York, hewed towards a more traditional vision, nonetheless writing in Wait Without Idols (1964) that ‘God is not necessary; that is to say, he cannot be taken for granted. He cannot be used merely as a hypothesis, whether epistemological, scientific, or existential, unless we should draw the degrading conclusion that “God is reasons”.’ 

Altizer, who worked at the Methodist seminary of Emory University in Atlanta, had a different approach, writing in The Gospel of Christian Atheism (1966) that ‘Every man today who is open to experience knows that God is absent, but only the Christian knows that God is dead, that the death of God is a final and irrevocable event and that God’s death has actualized in our history a new and liberated humanity.’ What unified disparate approaches is a claim from the German Lutheran Paul Tillich, who in his Systematic Theology, Volume 1 (1951) would skirt paradox when he provocatively claimed that ‘God does not exist. He is being-itself beyond essence and existence. Therefore, to argue that God exists is to deny him.’

What does any of this mean practically? Radical theology is unsparing; none of it comes easily. It demands an intensity, focus and seriousness, and more importantly a strange faith. It has unleashed a range of reactions in the contemporary era, ranging from an embrace of the cultural life of faith absent any supernatural claims, to a rigorous course of mysticism and contemplation that moves beyond traditional belief. For some, like Vahanian, it meant a critical awareness that the rituals of religion must enter into a ‘post-Christian’ moment, whereby the lack of meaning would be matched by a countercultural embrace of Jesus as a moral guide. Others embraced an aesthetic model and a literary interpretation of religion, an approach known as ‘theopoetics’.

Altizer meanwhile understood the death of God as a transformative revolutionary incident, interpreting the ruptures caused by secularism as a way to reorient our perspective on divinity.

In Beyond God the Father: Toward a Philosophy of Women’s Liberation (1973), the philosopher Mary Daly at Boston College deconstructed the traditional – and oppressive – masculine symbols of divinity, calling for an ‘ontological, spiritual revolution’ that would point ‘beyond the idolatries of sexist society’ and spark ‘creative action in and toward transcendence’. 

Daly’s use of such a venerable, even scriptural, word as ‘idolatries’ highlights how radical theology has drawn from tradition, finding energy in antecedents that go back millennia. Rabbi Richard Rubenstein, in his writing on the Holocaust, borrowed from the mysticism of Kabbalah to imagine a silent God. ‘The best interests of theology lie not in God in the highest,’ writes John Caputo in The Folly of God: A Theology of the Unconditional (2015), but in something ‘deeper than God, and for that very same reason, deep within us, we and God always being intertwined.’

Challenges to uncomplicated faith – or uncomplicated lack of faith – have always been within religion. It is a dialectic at the heart of spiritual experience. Perhaps the greatest scandal of disenchantment is that the answer of how to pray to a dead God precedes God’s death. Within Christianity there is a tradition known as ‘apophatic theology’, often associated with Greek Orthodoxy. 

Apophatic theology emphasizes that God – the divine, the sacred, the transcendent, the noumenal – can’t be expressed in language. God is not something – God is the very ground of being. Those who practiced apophatic theology – 2nd-century Clement of Alexandria, 4th-century Gregory of Nyssa, and 6th-century Pseudo-Dionysius the Areopagite – promulgated a method that has come to be known as the via negativa. According to this approach, nothing positive can be said about God that is true, not even that He exists. ‘We do not know what God is,’ the 9th-century Irish theologian John Scotus Eriugena wrote. ‘God Himself does not know what He is because He is not anything. Literally God is not’.

How these apophatic theologians approached the transcendent in the centuries before Nietzsche’s infamous theocide was to understand that God is found not in descriptions, dogmas, creeds, theologies or anything else. Even belief in God tells us nothing about God, this abyss, this void, this being beyond all comprehension. Far from being simple atheists, the apophatic theologians had God at the forefront of their thoughts, in a place closer than their hearts even if unutterable. This is the answer of how to pray to a ‘dead God’: by understanding that neither the word ‘dead’ nor ‘God’ means anything at all.

Eleven centuries before Arnold heard the roar of faith’s tide and Nietzsche declared that God was dead, the Hindu sage Adi Shankara recounted a parable in his commentary to the Brahma Sutras, a text that was already a millennium old. Shankara writes that the great teacher Bhadva was asked by a student what Brahma – the ground of all Being – actually was. According to Shankara, Bhadva was silent. Thinking that perhaps he had not been heard, the student asked again, but still Bhadva was quiet. Again, the student repeated his question – ‘What is God?’ – and, again, Bhadva would not answer. Finally, exasperated, the young man demanded to know why Bhadva would not respond to the question. ‘I am teaching you,’ Bhadva replied. ~

https://aeon.co/essays/how-to-fulfil-the-need-for-transcendence-after-the-death-of-god?utm_source=Aeon+Newsletter&utm_medium=email&utm_campaign=december_drive_2021&utm_content=newsletter_banner

Oriana: THAT WHICH IS THE HIGHEST

As you can imagine, this provoked many lengthy comments . . . words, words, words. I agree that at least some individuals have a deep need for enchantment, and I count myself among them. But enchantment does not require supernaturalism. The feeling of awe can be inspired by beauty, including both the beauty of a scientific discovery and a poetic masterpiece such as Dover Beach.

So yes, it all depends on the definition of “God.” And if “god” can be ten thousand different things, perhaps it would be logical to stop using the term.

That said, I like Ayn Rand’s definition: “That which is the highest.” Rand’s moral philosophy appalls me, but I admit that this definition of the divine makes sense to me. Again, though, why use the term “god” when we could be more precise and specify that we mean “that which is the highest” (to be followed by a specific individual answer: to a scientist, it might be science)? It’s more words, but the clarity is worth it.

*
Another issue is that prayer is one possible response to danger. People want a big and mighty protector “up there.” If not god, then at least their mother in heaven, praying that they drive without an accident. Hence also the rosary dangling from the rear-view mirror, and similar “amulets” — much as our remote ancestors tried to protect themselves in shamanic religions.

Life is fragile, and it can turn terrible. In the end, no one is spared suffering, aging, and mortality. As Stephen Dunn put it in his Ars Poetica,

Maybe from the beginning
the issue was how to live
in a world so extravagant
it had a sky,
in bodies so breakable
we had to pray.

I can hardly bear the thought of how difficult life used to be in past centuries — and how helpless people felt. The more fear, the more religiosity. But technology and medicine kept advancing, so that today people feel a lot more secure. People keep attributing the decline of religion to science, but it’s really technology that has decreased our helplessness (as Milosz pointed out in one of his essays). Less fear, more empowering technology, and effective medicine — that’s the recipe for less religiosity.

To be sure, we face new apocalyptic dangers, such as climate catastrophe. But at least in the developed countries, people realize that the way to deal with it is not by praying, but by switching to clean energy.  When effective secular solutions exist, or at least can be developed, people don’t generally turn to prayer instead.

How to pray to the dead god? Those who have an emotional need for it may turn to the Universe as a kind of responsive and all-accepting deity. Whatever works.


Thomas J J Altizer. His 2006 memoir is called Living the Death of God.

*

HOW COVID CAN AFFECT THE BRAIN

~ Months after a bout with COVID-19, many people are still struggling with memory problems, mental fog and mood changes. One reason is that the disease can cause long-term harm to the brain.

"A lot of people are suffering," says Jennifer Frontera, a neurology professor at the NYU Grossman School of Medicine.

Frontera led a study that found that more than 13% of hospitalized COVID-19 patients had developed a new neurological disorder soon after being infected. A follow-up study found that six months later, about half of the patients in that group who survived were still experiencing cognitive problems.

The current catalog of COVID-related threats to the brain includes bleeding, blood clots, inflammation, oxygen deprivation, and disruption of the protective blood-brain barrier. And there's new evidence in monkeys that the virus may also directly infect and kill certain brain cells.

Studies of brain tissue suggest that COVID-related changes tend to be subtle, rather than dramatic, says Geidy Serrano, director of the laboratory of neuropathology at Banner Sun Health Research Institute. Even so, she says: "Anything that affects the brain, any minor insult, could be significant in cognition."

Some of the latest insights into how COVID-19 affects the brain have come from a team of scientists at the California National Primate Research Center at UC Davis.

When COVID-19 arrived in the U.S. in early 2020, the team set out to understand how the SARS-CoV-2 virus was infecting the animals' lungs and body tissues, says John Morrison, a neurology professor who directs the research center.

But Morrison suspected the virus might also be infecting an organ that hadn't yet received much attention.

"Early on I said, 'let's take the brains,'" he says. "So we have this collection of brains from these various experiments and we've just started to look at them."

One early result of that research has generated a lot of interest among scientists.

"It's very clear in our monkey model that neurons are infected," says Morrison, who presented some of the research at the Society for Neuroscience meeting in November.

The monkey brains offer an opportunity to learn more because they come from a close relative of humans, are easier to study, and scientists know precisely how and when each animal brain was infected.

The monkey model isn't perfect, though. For example, COVID-19 tends to produce milder symptoms in these animals than in people.

Even so, Morrison says scientists are likely to find infected human neurons if they look closely enough.

"We're looking at individual neurons at very high resolution," he says, "so we can see evidence of infection."

The infection was especially widespread in older monkeys with diabetes, he says, suggesting that the animals share some important COVID-19 risk factors with people.

In the monkeys, the infection appeared to start with neurons connected to the nose. But Morrison says within a week, the virus had spread to other areas in the brain. 

"This is where you get into some of the neurologic symptoms that we see in humans," he says, symptoms cognitive impairment, brain fog, memory issues, and changes in mood. "I suspect that the virus is in the regions that mediate those behaviors."

That hasn't been confirmed in people. But other researchers have found evidence that the virus can infect human brain cells.

A draft of a study of brains from 20 people who died of COVID-19 found that four contained genetic material indicating infection in at least one of 16 areas studied.

And, similar to monkeys, the virus seemed to have entered through the nose, says Serrano, the study's lead author.

"There's a nerve that is located right on top of your nose that is called the olfactory bulb," she says. That provides a potential route for virus to get from the respiratory system to the brain, she says.

Serrano says the virus appears able to infect and kill nerve cells in the olfactory bulb, which may explain why many COVID patients lose their sense of smell — and some never regain it.

In other brain areas, though, the team found less evidence of infection.

That could mean that the virus is acting in other ways to injure these areas of the brain.

For example, studies show that the virus can infect the cells that line blood vessels, including those that travel through the brain. So when the immune system goes after these infected cells, it could inadvertently kill nearby neurons and cause neurological problems, Serrano says.

COVID-19 can also damage the brain by causing blood clots or bleeding that result in a stroke. It can damage the protective cells that create what's known as the blood-brain barrier, allowing entry to harmful substances, including viruses. And the disease can impair a person's lungs so severely that their brain is no longer getting enough oxygen.

These indirect effects appear to be a much bigger problem than any direct infection of neurons, Frontera says.

"People have seen the virus inside of brain tissue," she says. "However, the viral particles in the brain tissue are not next to where there is injury or damage," she says.

Frontera suspects that's because the virus is a "bystander" that doesn't have much effect on brain cells. But other scientists say the virus may be cleared from brain areas after it has caused lasting damage.

Researchers agree that, regardless of the mechanism, COVID-19 presents a serious threat to the brain.

Frontera was part of a team that studied levels of toxic substances associated with Alzheimer's and other brain diseases in older COVID patients who were hospitalized.

"The levels were really high, higher than what we see in patients that have Alzheimer's disease," Frontera says, "indicating a very severe level of brain injury that's happening at that time." 


It's not clear how long the levels remain high, Frontera says. But she, like many researchers, is concerned that COVID-19 may be causing brain injuries that increase the risk of developing Alzheimer's later in life.

Even COVID-19 patients who experience severe neurological problems tend to improve over time, Frontera says, citing unpublished research that measured mental function six and 12 months after a hospital stay.

"Patients did have improvement in their cognitive scores, which is really encouraging," she says. 

But half of the patients in one study still weren't back to normal after a year. So scientists need to "speed up our processes to offer some kind of therapeutics for these people," Frontera says. 

Also, it's probably important to "treat that person early in the disease rather than when the disease has advanced so much that it has created damage that cannot be reversed," Serrano says. 

All of the researchers mentioned that the best way to prevent COVID-related brain damage is to get vaccinated.

https://www.npr.org/sections/health-shots/2021/12/16/1064594686/how-covid-threatens-the-brain


*

LETTUCE IS AN ANTI-DEPRESSANT: FOOD THAT HELPS THE BRAIN

~ The full list of foods with purported mental-health benefits is expansive, but vegetables, organ meats (like liver), fruits, and seafood took the top four categories.

No single food has magical powers, however. “We want to shift [the conversation away] from singular foods and diets and into talking about food categories,” says Ramsey. His study, for example, found that spinach, Swiss chard, kale, and lettuce contain the highest antidepressant nutrients per serving, but that it didn’t really matter which leafy green you ate—what matters is that leafy greens are a regular part of your food intake.

“As a clinical psychiatrist, it’s intriguing to think about food interventions and how they could shift an entire organism,” says Ramsey. “What happens if I get someone using food for a more diverse microbiome, lower overall inflammation, and more connection to a sense of self-care? Those are all great things for someone struggling with mental and brain health.”

These findings could have a big impact. Worldwide, 4 percent of men and 7 percent of women suffer from depression, and the disorder can affect all facets of life, including productivity and athletic performance. Nutrition is just one piece of the mental-health puzzle, but it has researchers excited. “I really am a big fan of responsibly using medications and effective talk therapy to treat depression,” says Ramsey. “But [focusing on] diet allows us to empower patients to think about their mental health as tied to nutrition.” ~

https://getpocket.com/explore/item/how-your-diet-affects-your-mental-health?utm_source=pocket-newtab

Oriana:

Sure, everyone's heard that "fish is brain food." But what about spinach and other leafy greens, including, yes, lettuce? I find that the very act of preparing healthy food is mood-enhancing. It means that you have value, that you are worth the time and expense it takes to provide the best food. Likewise, you are too precious to be consuming junk food.


*

ending on beauty:

If the universe is—this is the latest—
bouncing between inflation
and shrinkage, as if on a trillion-year
pendulum, why wouldn’t

an infant’s sobbing, on the exhale,
have a prosody
as on the inhale have the chemistry
of tears and seas

~ Ange Mlinko, “This Is the Latest”