Oriana: I’m well aware that there are more dramatic images of the LA 2025 fires than this one. Still, I’m fascinated by the change in the color of the sun when the air is full of smoke. I’ve actually seen it cherry-red during the great fires of 2003 in San Diego County, when the fire came within seven miles of my house.
*
The Los Angeles inferno struck terror in millions of us who live in fire-prone areas. How traumatic it must be to watch one’s house turn into ashes and tangled metal. These fires will always be remembered as apocalyptic. But I couldn’t help remembering how, as a schoolgirl, I watched a contained, harmless fire — and how magical it looked to us children.
FIRE
After school we saw a strange glow.
A three-story building was on fire,
the only building on Grójecka Street
that had survived the war.
We rushed to it and stood as close
as firemen would allow.
It was December, the dark sky
like a black dress for a jewel.
The flame now crouched low,
licking the charred walls,
now shot up and danced high,
soaring, merging and dividing —
gold like a lion’s leap,
then a roaring red.
And the huge droning hum,
like a crowd chanting a prayer.
I imagine no one was hurt
in this passage to the light.
We didn’t ask, enchanted
by the wind of skyward sparks.
And that too was childhood,
more than anything that was
childhood: our faces happy at last,
haloed by the blaze.
~ Oriana
And here is my adult perspective:
THE GREAT FIRES
The ashes fell like snowflakes.
The day after, we woke in brown dusk,
spread from the California coast
but a single luminous moment
when I happened to glance
the one above the rose bush.
feasting their way up the long hill
seven miles from my house.
a woman looks out the window
and sees the garden and the hills,
a few dovelike clouds.
scarlet unfurling into gold,
the manifold body of God.
Life is a story
of how innocence dies
~ Oriana
*
ON MARGUERITE DURAS
Marguerite wasn’t always Duras. She was born Donnadieu, but with the publication of her first novel, “Les Impudents,” in 1943, she went from Donnadieu to Duras and stayed that way. She chose, as her alias, the village of her father’s origins, distancing herself from her family, and binding herself to the emanations of that place name, which is pronounced with a regionally southern French preference for a sibilant “S.”
The village of Duras is in Lot-et-Garonne, an area south of the Dordogne and just north of Gascony. The language of Gascon, from which this practice of a spoken “S” derives, is not considered chic. More educated French people not from the region might be tempted to opt for a silent “S” with a proper name. In English, one hears a lot of “dur-ah”—especially from Francophiles. Duras herself said “dur-asss,” and that’s the correct, if unrefined, way to say it.
Marcel Proust, whom Duras admired a great deal and reread habitually, modeled the compelling and ridiculous Baron de Charlus on Robert de Montesquiou of Gascony. Some argue that on account of Montesquiou’s origins and for the simpler reason that Charlus, here, is a place name, it should be pronounced “charlusss.” In “Sodom and Gomorrah,” Proust himself makes quite a bit of fun of the issue of pronunciations, and how they signify class and tact, and specifically, the matter of an “S,” of guessing if it’s silent or sibilant.
Madame de Cambremer-Legrandin experiences a kind of rapture the first time she hears a proper name without the sibilant “S”—Uzai instead of Uzès—and suddenly the silent “S,” “a suppression that had stupefied her the day before, but which it now seemed so vulgar not to know,” becomes the proof, and apotheosis, of a lifetime of good breeding.
So vulgar not to know, and yet what Proust is really saying is that it’s equally vulgar to be so conscious of élite significations, even as he was entranced by the world of them. Madame de Cambremer-Legrandin is, after all, a mere bourgeoise who elevated her station through marriage, and her self-conscious, snobbish silent “S” will never change that, and can only ever be a kind of striving, made touchingly comical in “Sodom and Gomorrah.”
Duras is something else. No tricks, full “S.” Maybe, in part, her late-life and notorious habit of referring to herself in the third person was a reminder to say it the humble way, “dur-asss.” Or maybe it was just an element of what some labeled her narcissism, which seems like a superficial way to reject a genius.
*
Duras was consumed with herself, true enough, but almost as if under a spell. Certain people experience their own lives very strongly. Regardless, there is a consistent quality, a kind of earthy simplicity, in all of her novels, films, plays, screenplays, notebooks, and in the dreamily precise oral “telling” of “La Vie Matérielle,” which is a master index of Durassianisms, of “S”-ness: lines that function on boldness and ease, which is to say, without pretension.
“There is one thing I’m good at, and that’s looking at the sea.”
“When a woman drinks it’s as if an animal were drinking, or a child.”
“Alcohol is a substitute for pleasure though it doesn’t replace it.”
“A man and a woman, say what you like, they’re different.”
“A life is no small matter.”
Her assertions have the base facticity of soil and stones, even if one doesn’t always agree with them, especially not with her homophobia, which gets expressed in the section of “La Vie Matérielle” on men, and seems to have gotten worse as her life fused into a fraught and complicated autumn-spring intimacy with Yann Andrea Steiner, who was gay.
“La Vie Matérielle” was translated as “Practicalities” by Barbara Bray, but might be more felicitously titled “Material Life,” or “Everyday Life.” The book began as recordings of Duras speaking to her son’s friend Jérôme Beaujour. After the recordings were transcribed, there was much reworking and cutting and reformulating by Duras.
In terms of categories, the book is unique, but all of Duras’s writing is novelistic in its breadth and profundity, and all of it can be poured from one flask to another, from play to novel to film, without altering its Duras-ness. In part, this is because speech and writing are in some sense the same thing with Duras. When she talks, she is writing, and when writing, speaking. (Some of her later work was spoken first to Yann Andrea, who typed her sentences, and the results were novels, such as “The Malady of Death.”)
*
The English-edition flap copy describes “La Vie Matérielle” as “about being an alcoholic, about being a woman, and about being a writer.” And it is about those things, and more or less in that order, although drinking is woven throughout. Her discussions of it are blunt. They are also accurate, and spoken by one who knows.
When Duras made this book, in 1987, she had suffered late-stage cirrhosis and lost her mind in a detox clinic, an episode she refers to, in the book, as a “coma.” She’d quit, started, quit. Later, in 1988, she was in a real coma, for five months. “It’s always too late when people tell someone they drink too much,” she writes. “You never know yourself that you’re an alcoholic. In one hundred percent of cases, it’s taken as an insult.”
Her talk of women and domestic life is of her era, although she was her own sort of early feminist, who felt that pregnancy was proof of women’s superiority to men, a point she constantly pressed on the men around her while pregnant with her son Jean. In a section called “House and Home,” she provides a list of important items with which she stocked Neauphle-le-Château, the country place where she wrote, and where many of her films were made. The list includes butter, coffee filters, steel wool, fuses, and Scotch-Brite. Only frivolous women, she says, neglect repairs.
For the “rough” work that men do, in counterpart to domestic chores, she is unimpressed: “To cut down trees after a day at the office isn’t work, it’s a kind of game.” And even worse, she adds, a man thinks he’s a hero if he goes out and buys a couple of potatoes. “Still, never mind,” she finishes off, and in the next paragraph announces that people tell her that she exaggerates, but that women could use a bit of idealizing.
From there she is on to the burning of manuscripts, which makes the house feel virginal and clean, and her next topic, rolled into seamlessly, is the phenomenon of “sales, super sales, and final reductions” that drives a woman to purchase clothing she does not want or need. She ends up with a sartorial excess, a surplus, new to her generation, and yet this ur-woman, a figment of typicality, maintains the same role, in the home and in the world, that has persisted for all women in all times: a “theater of profound loneliness that has constituted their lives for centuries.”
*
Much of her publishing career was an encounter with misogyny: in the nineteen-fifties, male critics called her talent “masculine,” “hardball,” and “virile”—and they meant these as insults! (This phenomenon, sadly, has not gone away.) The implication was that as a meek and feeble female, she had no right to her aloof candor, her outrageous confidence.
And it’s true that you’d have to think quite highly of your own ideas to express them with such austerity and melodrama, but that is the great paradox, the tension, of the equally rudimentary and audacious style of Duras. “People who say they don’t like their own books, if such people exist, do so because they haven’t learned to resist the attraction to humiliation,” she wrote in one of her journals. “I love my books. They interest me.”
Jean-Marie Straub and Danièle Huillet, who adapted a short story by Duras into a film called “En Rachâchant,” from 1982, said of their own work that it was best understood by cavemen and children. In fact, their work is difficult to understand for anyone not versed in literature, philosophy, and art, and for anyone not trained to watch difficult films, but their intention in making such a claim seems clear enough: “If you don’t get it, you’re judging it through an adapted set of ideologies and traditions that are obstacles, and once you unlearn your bad training, you will understand our films.”
*
She had what both Beckett and the filmmaker Alain Resnais marveled over and admired as “tone.” Durassian. Everything she made was marked by it, and the distinct quality of that tone is certainly what led to the accusation, true enough, that she was at risk sometimes of self-caricature. But every writer aspires to have some margin of original power, a patterning and order that comes to them as a gift bestowed, and is sent to no one else. If Duras weren’t so lucky, if she weren’t such a natural writer, her critics would have no object for their envy, their policing of excess, as well as the inverse—a suspicion of her restrained economy with words.
At the end of “La Vie Matérielle,” she describes an encounter with an imaginary man, a hallucination, as if this man were perfectly real. And he is: he is part of her fictive universe, the primal scenes she spent her life rendering, and reworking, telling, and then telling again. A lot of things happened to Marguerite Duras. She lost a child while giving birth, and in that experience lost God and gained unwanted knowledge of death. Her first husband, Robert Antelme, was deported to Dachau and came back, but weighing eighty pounds.
Duras worked for the Occupation, and later joined the Resistance, then the Communist Party. Was expelled from the Communist Party but remained a Marxist. Did television interviews with both President François Mitterrand and the filmmaker Jean-Luc Godard. Aspects of her life are legends, like the destitute poverty of her childhood in Indochina. In some writings, her mother’s ailment is madness. In others, menopause. Or financial ruin. Sometimes, the mother’s madness is her strength. Maybe these are not contradictions.
The erotic charge between her and the older Chinese lover in Saigon seems like art, scenes that bloomed on paper. Things happened to Duras “that she never experienced,” as she put it. The story of her life did not exist, she said.
The novel of her life—yes.
She obsessively read Proust, Joseph Conrad, and Ecclesiastes. She pursued a poetic absorption in the sacred and secret. She may have popularized a French trend called autofiction, but she dismissed trends, and, more importantly, she was adamant that the genre of autobiography was base, degraded. She held the same view of “essayistic” writing. She resisted the anti-novel rhetoric of the practitioners of the nouveau roman, whom she called “businessmen.” Literature was her interest, that kind of truth.
https://www.newyorker.com/books/page-turner/a-man-and-a-woman-say-what-you-like-theyre-different-on-marguerite-duras
*
AS FIRES DEVASTATE LOS ANGELES, SOME HOUSES STAND — THANKS TO CONCRETE WALLS, CLASS-A WOOD, AND LUCK
When last week’s fires in Los Angeles set parts of the city ablaze, one viral image was of a lone house in Pacific Palisades that was left standing while all of the homes around it were destroyed.
Architect Greg Chasen said luck was the main factor in the home’s survival, but the brand-new build had some design features that also helped: a vegetation-free zone around the yard fenced off by a solid concrete perimeter wall, a metal roof with a fire-resistant underlayment, class A wood and a front-gabled design without multiple roof lines.
About 12,000 houses, businesses and other structures have been lost in the recent raging wildfires, and Angelenos are quickly learning that architectural and design choices play into the level of damage fire can cause, making some homes more prone to burning down while others prove remarkably fire-resilient.
Even as wildfires in the US west grow more frequent and severe, aided in part by global heating, more people than ever are moving into high-risk areas. At least 44m homes in the US sit in the wildland urban interface (WUI), where houses mingle with the forest. People live in the WUI (pronounced woo-eee) for all kinds of reasons, from a desire to be close to nature to lower housing prices.
“Many southern California homes are inherently vulnerable due to their exterior materials and vegetated surroundings,” said architect Duan Tran, a partner at KAA Design Group in Los Angeles. “When clients talk to us about their dream homes, there’s often a note of worry; they’re asking how to make their homes not only beautiful and functional, but also safer in an unpredictable future.”
That means telling clients that siding, overhangs and decks, common features to increase a home’s warmth and charm, can ignite easily. Wood or shingle roofs are high-risk, as are homes with open vents and eaves, since flying embers can enter a home through them.
While no structure is entirely immune to the kind of devastating wildfires we’ve seen in Los Angeles, architects can make their projects more resilient, which allows for valuable extra time when it comes to evacuation and firefighting efforts. Materials such as concrete, stucco and steel can significantly reduce a home’s vulnerability and builders can use noncombustible materials for the part of a fence that connects to the house to prevent spread.
Researchers who studied structure loss in California wildfires from 2013 to 2018 found that enclosed eaves, vent screens and multi-pane windows all help keep wind-borne embers from penetrating a house. While sprinklers may not be able to stop an enormous wildfire, fire suppression systems can slow a fire’s progress.
“As we’ve seen in LA and with the Marshall fire and Camp fire and Lahaina fire, what truly characterizes the process of a home burning down is largely the result of embers that fly miles ahead of that wildfire,” said Kimiko Barrett, a wildfire researcher at the non-profit Headwaters Economics. “They account for 90% of structural loss in a wildfire.”
California already has some of the strictest building rules for new homes in high-risk fire regions. Known as Chapter 7A, they’re designed to improve a house’s chance of survival. While regulations and building codes are effective, they don’t apply to homes built before 2008, when Chapter 7A was adopted. This means that even homes designed to withstand fire are vulnerable when wildfires spread to older homes nearby.
Houses made from chemically treated synthetic materials, which have become common since the 1980s, are especially vulnerable, since the home itself acts as fuel for a fire, as do the many petroleum-based products inside, ranging from furniture and carpet to appliances and electronics. Once they start to burn, the energy release becomes so intense that homes give off radiant heat and, in a high-wind scenario, propagate structure-to-structure spread.
Aside from building materials, another way to protect your home is through collective action. “If just one or two neighbors are protecting their structures and reducing the flammability of their property, it’s not as effective as when a group of people are working together,” said research ecologist Alexandra Syphard, who studies wildfires. “The defensible space right next to your structure can provide the best protection.”
When it comes to that defensible space, the buffer area between your home and the surrounding area, Cal Fire, the state’s department of forest and fire protection, says there are simple and low-cost steps everyone can take. Best practices include clearing dry vegetation, replacing mulch within 5ft of all structures with noncombustible dirt, stone or gravel and regularly cleaning roofs, gutters, decks and the base of walls to avoid accumulating fallen leaves and other flammable materials.
For the millions of homeowners who aren’t building a house from scratch or can’t afford a pricey full-blown retrofit, there are basic, affordable measures that can be taken for as little as $2,000 such as installing metal gutter guards and enclosing under-deck areas with metal mesh screening. And strategies such as closing a fireplace flue during wildfire season and relocating firewood at least 30ft from your home can be done at no cost. “Doing something is better than nothing,” said Syphard.
There is currently no federal agency or any funding available to help people harden their homes or reduce risk to the built environment. While architecture, design and defensible space efforts all come into play for resiliency, fire ecologists say where you build your house is the primary factor in determining whether it will burn. “Folks say we could have prevented this, we could have just hardened the homes,” said Syphard. “There are lots of examples of homes that have done everything right, but when you’re fighting millions of wind-borne embers that are flying through gusts of 70-mile per hour winds, it’s hard to keep one from going underneath your garage door.”
https://www.theguardian.com/us-news/2025/jan/17/la-houses-survived-fire
*
EXTRA FIRE PROTECTION — FOR A PRICE
As the Palisades fire began intensifying Tuesday evening, Los Angeles real estate executive Keith Wasserman sent out a plea on social media: “Does anyone have access to private firefighters to protect our home in Pacific Palisades? Need to act fast here. All neighbors houses burning. Will pay any amount.”
The now-deleted post sparked intense blowback from social media users who felt the wealthy shouldn’t be given special attention during an emergency.
“Whose home gets saved shouldn’t depend on their bank account,” one TikTok user commented.
As multiple wildfires, powered by high-speed winds, have destroyed thousands of homes in the Los Angeles area, some residents have gone to great lengths — and often great expense — to try to shield their homes from destruction. Some have paid thousands of dollars to get their properties sprayed with fire-retardant gels to stem the damage, while others have invested in personal fire hydrants to help fight fires near their property.
Despite the intense reaction to Wasserman’s social media post, most private firefighters aren’t hired by wealthy individuals, Mike Stutts, a firefighter in Somerset, California, told CNN. Instead, most work with home insurance companies that are trying to save expensive homes to avoid costly insurance payouts.
According to materials on its website, insurance company Chubb offers “Wildfire Defense Services” to eligible clients. During a wildfire event, those services include “sending certified professional firefighters to your home,” “removing combustible materials from around your home” and “spraying your home with a heat-absorbing fire-blocking gel.”
Tim Bauer, a senior vice president at fire damage restoration service Allied Disaster Defense, said after the first three days of fire in Pacific Palisades, he had a waiting list of at least 200 people, all desperate for the company’s services. Bauer said the company is spraying properties with the same fire retardant dropped by firefighting air tankers.
Allied sprays down bushes, shrubs and other vegetation to prevent ashes and sparks from igniting a home.
During non-emergency times, the cost is about $1,000, but amid dangerous wildfire conditions, Bauer charges $5,000.
For many in the affluent neighborhood of Pacific Palisades, it’s a small price to pay to help protect multimillion-dollar homes. Bauer said one woman offered $100,000 to be moved to the top of his waiting list, but he planned to stick to the original order.
On Friday, Michael Owens, a real estate developer in the Los Angeles area, showed one of his newly built homes to a family whose own home was just destroyed in the Palisades fire.
The listed price of the home in Westlake Village, which sits on the Los Angeles-Ventura county line, is nearly $15 million. One of its main selling points is that it’s built with fireproof materials and comes equipped with a personal fire hydrant.
All in, the cost of the home’s personal fire hydrant is about $100,000, Owens told CNN. That includes the installation cost and a one-time $35,000 fee to the municipal water company for access. Owens said he has one in front of his own Westlake Village home, as well.
Owens said he hoped the recent wildfires would convince more people to prioritize investing in fire-safe homes.
“It will be interesting now, as you see homes built back in the Palisades, what people consider doing with their homes when they’re in an urban high-fire risk area,” he said.
In Malibu, a type of personal fire hydrant that costs significantly less than $100,000 has grown in popularity, said Kevin Rosenbloom, a local resident who owns a healthcare device company.
Rosenbloom’s home was rebuilt after it was destroyed by a wildfire in 2007. In front of his home, he built a “Hainy Hydrant,” a personal fire hydrant developed by fellow Malibu resident Matt Haines to help the area combat wildfires.
Rosenbloom said the hydrant cost him about $2,500, and since it taps into a home’s personal water supply, it can be installed by a plumber.
Rosenbloom said his home hasn’t been hit by the recent fires, but he plans to use his personal hydrant to keep his home “cool and wet” if one approaches.
“People all over Malibu have been putting these hydrants in. It’s a basic first step and it’s very inexpensive relative to other systems available,” Rosenbloom said. “This is really the next technology in fire suppression and fire protection.”
Even with extra investment in firefighting and suppression systems, though, there is no guarantee a home could survive a fire as strong as the one still raging in Pacific Palisades.
Wasserman, who sent out the plea for private firefighters, did not respond to CNN’s request for comment.
https://www.cnn.com/2025/01/11/business/california-wildfire-protection-cost/index.html
Gawdlbsantifa:
Well, the good news is we’ll still have lots of immigrants to help with the rebuild. Much like Trump’s admission that bringing down prices (reducing inflation) will not be as easy as originally advertised, incoming border czar Tom Homan has now told Regressives that the process of deportation will not be as quick as originally promised. Here’s his big “aha” moment: this will cost money, while Regressives are focusing on budget cuts. This is what happens when amateurs are selected to run the government. They haven’t even taken over the reins yet and they’re already changing their tune. It gave me a good laugh for the evening.
cnn_7777:
There is no place immune from this and what goes around comes around .... and Trump and the Republicans have no intention of helping
Gawdlbsantifa:
Of course not. He recognizes that by and large Californians can’t stomach him so he wants to be sure and make life for these unfortunate people as difficult as possible. Obviously not the kind of behavior we should expect from a president that is supposed to represent all Americans. Joe Biden never turned his back on Florida following the hurricanes. That’s simply the stark difference in character.
*
POWER LINES, HIKERS, ARSON: WHAT MIGHT HAVE SPARKED LA'S DEVASTATING FIRES?
The hiking trail through Temescal Canyon in western Los Angeles is a favorite of locals.
The trail towers above the twisting roads and manicured homes that make up the Pacific Palisades, offering urban hikers seeking an escape from America's famously gridlocked city a clear view of the pristine waters of the Pacific.
Now the green, brush-lined path in the canyons is grey and burned as far as the eye can see.
Yellow police tape surrounds the path up to the trail. Police guarding this area are calling it a "crime scene" and have prevented BBC reporters, including me, from getting any closer.
It's where investigators think the deadly blaze that destroyed so many homes in the area may have started.
A similar scene is playing out across town in the north of the city. There, the community of Altadena was leveled by a different fire that ignited in the San Gabriel Mountains.
Investigators in both locations are scouring canyons and trails, and examining rocks, bottles, cans – any debris left behind that might hold clues to the origins of these blazes, which are still unknown.
It's the one thing on-edge and devastated Angelenos are desperate to know: how did these fires start?
Without answers, some in fire-prone California are filling in the gaps themselves. Fingers have been pointed at arsonists, power utilities, or even a blaze days prior in the Pacific Palisades that was snuffed out but may have reignited in the face of Santa Ana winds blowing at 80-100mph (128-160km/h) last week.
Investigators are examining all those theories and more. They're following dozens of leads in the hopes that clues in burn patterns, surveillance footage and testimony from first responders and witnesses can explain why Los Angeles saw two of the most destructive fire disasters in US history ignite on 7 January, so far killing 27 people and destroying more than 10,000 homes and businesses.
But this tragic mystery will take time to solve – possibly as long as a year.
"It's just too early," Ginger Colbrun, a spokeswoman for the Los Angeles division of the US Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) told the BBC.
"Everyone wants answers, we want answers, the community wants answers. They deserve an explanation. It just takes time.”
‘I smell fire’
The first trace of the Palisades Fire may have been spotted by Kai Cranmore and his friends as they hiked in Temescal Canyon, on a trail frequented by nature lovers and California stoners alike.
It's not uncommon for visitors to bring alcohol and music, relaxing in nature by Skull Rock – a landmark rock formation along the trail.
In a series of videos posted online, Mr Cranmore and his friends are seen running down the canyon on the morning of 7 January. His first videos show a small cloud of smoke billowing from a hill as they navigate through brush and rock formations in a desperate escape. Out of breath, they comment on having smelled fire before seeing smoke rising.
In further clips, that small cloud gets darker and flames can later be seen cresting over the hilltop.
Some on the internet were quick to blame the group for the fire, noting how close they were to the blaze when it erupted. Even actor Rob Schneider posted about the hikers, asking his followers to help identify them.
In interviews with US media outlets, members of the hiking group noted how fearful they became as people started online attacks. One of the men said he deleted his social media accounts.
"It's scary," one of the group told the LA Times. "Just knowing as a matter of fact of our experience that we didn't do it but then seeing the amount of people that have different theories is overwhelming.”
Ms Colbrun said investigators were also speaking to firefighters who responded to a blaze days earlier that sparked nearby in the same canyon. A persistent theory holds that a small fire on 1 January was never fully extinguished and reignited six days later as winds picked up.
The Palisades Fire is thought to have erupted around 10:30 local time on 7 January, but several hikers told US media they'd smelled smoke earlier that morning as they used the trail.
A security guard who works near the trail told the BBC he'd seen smoke or dust for several days in the area. The morning of the blaze he was patrolling the neighborhood bordering the canyon and called firefighters as a plume of smoke formed.
While the agency of Los Angeles County fire chief Anthony Marrone is not leading the probe into the Palisades Fire, he said investigators were also examining the possibility of arson.
"We had numerous fires in the LA County region almost simultaneously, which leads us to believe that these fires were intentionally set by a person," Chief Marrone said.
He added that about half of the brushfires the agency typically responds to are intentionally set.
*
Utility providers have been blamed for some of California's worst fires, including the 2018 Camp Fire that killed 85 people and destroyed the town of Paradise. In 2019, Pacific Gas and Electric (PG&E) agreed a $13.5bn (£10.2bn) settlement with victims of the Camp Fire and other wildfires in the state.
In the week since the Eaton Fire, there have already been at least five lawsuits filed against Southern California Edison, the power provider that operates the tower seen in Mr Ku's video.
The company says it has not found any evidence that its equipment was responsible for the fire and is reviewing the lawsuits.
In a statement, it said its preliminary analysis of transmission lines across the canyon showed there were "no interruptions or operational/electrical anomalies in the 12 hours prior to the fire's reported start time until more than one hour after the reported start time of the fire".
Additionally, the company said its distribution lines to the west of Eaton Canyon "were de-energized well before the reported start time of the fire" as part of its fire safety shut-off program.
Chief Marrone told the BBC that investigators were looking into all possibilities, including whether the tower may have been where a spot fire ignited – meaning the initial blaze could have been started elsewhere but then spread to the tower through flying embers.
The tower where the fire was spotted is not like those seen in neighborhoods. Rather than a wooden pole carrying slim wires and a small transformer of the kind that easily blows, this was a massive metal transmission tower with high-voltage lines as thick as a fist.
These types of lines aren't typically the cause of fires because they're computerized, he said, and the system automatically turns off power once there is an issue.
He noted, though, that investigators were looking into whether Southern California Edison's systems operated properly that night and cut power.
Cal Fire cautioned against casting any blame so early in the probe.
"We want to make very sure that we're not pointing any fingers in any direction because we've seen what happens when someone is falsely accused," Gerry Magaña, deputy chief of operations, told the BBC in an interview.
"It causes chaos.”
https://www.bbc.com/news/articles/c8r55xgvv36o
*
THE SECRET WEAPON AGAINST WILDFIRES: GOATS
Goats are well adapted to eat woody shrubs, with dexterous tongues and lips together with strong stomachs
It's a typical Los Angeles scene: the Pacific Ocean sparkling under a crystal-clear, bright blue sky, with miles of golden sandy beaches stretching as far as the eye can see. There's also a herd of goats precariously perched on a clifftop, enjoying the multimillion-dollar view.
These aren't just any goats, though – they're California's new secret weapon in the fight against wildfires, and they're being put out to graze across the state.
"The reception is overwhelmingly positive wherever we go," says goat herder Michael Choi. "It's a win-win scenario as far as I can tell."
Choi runs Fire Grazers Inc, a family business which leases goats to city agencies, schools and private clients to clear brush from hillsides and terrain that's hard to access. The company has 700 goats, and they recently had to expand their herd to keep up with demand.
"I think as people get more aware of the idea, and environmental impact, they become more conscious about which methods they want to use for clearing weeds and protecting the landscape from fires. So, there's definitely a bigger demand, and it's a growing trend," he says.
California has been at the epicenter of the battle against wildfires, which have become more frequent, more destructive, and larger since 1980.
In 2021, California faced "unprecedented" fire conditions, according to CalFire (California Department of Forestry and Fire Protection, the state's fire agency), with one fire alone burning more than 960,000 acres (3,885 sq km). Well-timed rainfall can bring some relief, even as the wider situation remains severe. The 2022 wildfire season was described as "mild" for the state: more than 300,000 acres (1,214 sq km) burned, compared to the five-year average of 2.3 million acres (9,307 sq km). August 2023 was cooler and wetter than average in California. Still, more than a quarter of a million acres have burned in 2023, and four people have died.
Factors such as hotter, drier conditions due to climate change are key drivers in increasing the risk and severity of the fires, research shows. But there are also studies suggesting that land management can play an important role, as the build-up of dead trees and dry shrubs creates dangerous fuel that can lead to big, severe fires. Land managers traditionally relied on herbicide and manual labour to thin out brush and reduce dry fuel, but agencies and city officials are also trying out other, potentially more sustainable and cost-effective methods – such as goats.
"Goats are especially useful in places like California and the Mediterranean because of the shrubs – goats are very well equipped for that, they have the right mouths," says Karen Launchbaugh, an ecology professor at the University of Idaho who has conducted multiple studies on sheep, goat and cattle grazing. "They're just designed to eat shrubs.”
Unlike other ungulates, goats have narrow, deep mouths which allows them to selectively harvest woody shrubs. They stand on hind legs to graze at an average height of 6.7ft (2m), and have dexterous tongues and lips. "They also have the ability to detoxify compounds and so they can eat poisonous plants," Launchbaugh adds.
Launchbaugh says she's seeing more city officials and land managers open to trying goats as a new method of mitigating wildfire risk. "I'm excited because when we started researching this, we didn't know where it was going to go. And now there's enough work for people making a living out of being a grazer – and cities and counties are willing to pay for it because they know it makes a difference.”
Goats have insatiable appetites, and devour weeds, bushes, low-hanging leaves, and dry brush – all of which are fuel for fires. California's wildfire preparedness guidelines instruct residents to remove all dead vegetation and mow grass down to four inches (10cm) – everything a goat would do naturally, enthusiastically, and without being reminded. Goats are also unperturbed grazing away in triple-digit heat [100F/37.7C and above], and have no problem scaling steep mountainsides that can be difficult for laborers to access.
"Goats are natural mountaineers. They can climb up steep hills with no problem, they get all into the nooks and crannies that would normally be very difficult for people, and they eat almost everything," says Choi.
Goat herds visit West Sacramento twice a year to clear brush that might pose a fire risk
In Glendale, a city within Los Angeles County, 300 goats are hard at work on the Verdugo Mountain ridges, clearing 14 acres (5.6 hectares) over the span of two weeks. The city is classed as a "very high fire hazard" zone. To reduce the risk, Patty Mundo, Glendale Fire Department's vegetation management inspector, has used Choi's goats since 2018. Their purpose is to create a buffer zone between homes and open spaces of land so if there is a fire, it would slow the flames down – or hopefully stop them completely. Having a buffer zone helps protect homes from fire, crucial in a state where more than 60,000 communities are at risk from wildfires.
In West Sacramento, goat herds have been used since 2013 in fire prevention measures as a "creative and environmentally sustainable method" to better manage fire fuel reduction, says Jason Puopolo, parks operations superintendent for the city. The goats come to town twice a year – once in the spring to clear growth from the winter rains, and once in the autumn to clear tinder dry brush.
Last season the city paid $150,000 to goat herding company Western Grazers to work the land.
"The greatest benefit [we've seen] is reducing the risk and potential workplace injuries in difficult to access areas," Puopolo says. "We have levees that are sloped and forested heavily in places and they can put staff at a high risk of injuring themselves even just with a slip or fall.”
The goats' hard work has paid off so much that the city's fire service credited the herds with helping stop the flames during a fire in 2022, saving a housing complex. "Our fire chief said if the goats had not made a previous pass in that field, the fire could have been a lot worse," Puopolo continues. "Because the goats had recently nibbled down the brush in the area to four inches (10cm) high, fire crews were able to get a jump on the flames and save the condos.”
Goats are also useful when it comes to controlling invasive species, such as non-native black mustard plants. When the seed comes out the other end of the goat, it's nonviable, meaning it doesn't grow again – unlike when other animals digest seeds.
Using goats to clear land is a centuries-old practice in European countries such as Italy, Greece, and Spain. A study into how effective goat grazing in the Mediterranean is in preventing fires found it is "probably the most ecologically sound technique for creating discontinuities in fuels, mainly at the shrubby layer, and disrupting fuel ladders." Although the practice hasn't been around for quite so long in California, experiments to enroll the ungulates in fire management have been taking place for more than a decade.
In 2013, the US Forest Service (USFS) experimented with using 1,400 goats in a 100-acre (40-hectare) forest-thinning project in Cleveland National Forest, Southern California. The aim was to clear a 300ft (91.4m) buffer between nearby communities and the forest. "To clear a fuel break normally means lots of human power and machinery, including chainsaws, hand tools and safely burning piles of brush," said Joan Friedlander, district ranger for the area. The goats, which USFS says cost between $400 and $500 per acre, compared to around $1,200-1,500 per acre when using manpower, attracted a large amount of community support. The forest's managers established a plan to monitor pre- and post-treatment plots so the effectiveness of the goats could be evaluated over time and compared to traditional methods used in the area.
A study on the forest's use of goats found the animals had a "significant impact" reducing plant cover – 87% reduction in cover, and a 92% reduction in height. Goats don't have to be the only way to manage the landscape, but using wildlife in this way "should be part of the toolbox when we're fighting against wildfires", adds Launchbaugh.
Some fire departments are even buying their own animals. "They eat all the grasses and brush down to the ground level, which obviously helps us with mitigating wildfires in the area," says Chris Nelson, assistant chief with the San Manuel Fire Department, which has its own herd of 300 goats. The goats work through two to five acres at a time before they're moved on to the next patch. At the end of summer, fire officials will trim down any brush the goats left behind.
The state's fire agency CalFire has handed out a number of grants to cities to fund trials of goat grazing. "Many of our recipients found grazing to be an effective tool, especially in the maintenance of fuel reduction projects," says Kara Garrett, a governmental program analyst at CalFire. "Lawn mowers, weed-eaters, chain saws, tractors, and trimmers can all spark a wildland fire if used during the wrong time of year, and with work still left to be done across California, the grazing goats are a safe alternative to help maintain vegetation.”
While Puopolo would "absolutely" recommend using goats to any other city, and frequently speaks with communities across the country about their benefit, he also cautions that they're not always the best solution. "Using them comes at a cost, they don't benefit every situation financially, especially in flat open areas where a mower can do the work much faster and at a lower cost.”
Launchbaugh also highlights the year-round cost, pointing out that goats can't simply be stored in a shed until they're needed the following season. "You need the travel infrastructure to cart them around, and it's a skill to know how to manage animals, so you need an experienced herder," she says.
The type of terrain is also something that needs to be factored in. In the Great Basin, where Launchbaugh has conducted most of her research, the main problem is long grass, and so they use cows instead of goats, since cows are more efficient at keeping grass short. Goats can't differentiate between native and non-native species either, and they'll eat desirable, native shrubs as well as invasive, non-native species.
For those who love the sight of goats chomping their way through the California brush, however, the benefits go beyond cost and impact.
"This is a conscious effort to bring things back to their natural way of being," says Choi. "And besides, they're far more entertaining.”
https://www.bbc.co.uk/future/article/20230922-these-la-goats-help-stop-wildfires
*
WHY SOME PEOPLE SEEM TO BE LUCKY
Some guys seem to have all the luck. A perfect career, a perfect partner, a perfect life. When they’re not sitting next to a book publisher on a flight, they’re discovering a vintage Burberry trench in the thrift store around the corner from your apartment. It’s unbelievable. It’s annoying.
Their luck seems random—and these days, thanks to social media, it seems like everybody’s getting lucky but you. But if you’re sitting around waiting for luck to hit you like a benevolent lightning bolt, you’re thinking about it all wrong. Nobody’s just born #blessed.
That’s because luck isn’t something that happens to you; it’s something that happens because of you. At least that’s what Tina Seelig (a professor of entrepreneurship at Stanford and best-selling author who's written seventeen books at the time of this writing) would tell you.
Luck is something you can create for yourself and learn to control, she says, which means that you can actually teach yourself to get luckier. Make a few tweaks to the way you approach opportunities that arise in your daily life and you too can become one of the savvy and brave people capable of making their own lucky breaks happen.
Here, she gives you the tools to do exactly that, and shares a few of the secrets that she’s used to unleash good fortune on her everyday life. We wish you luck in applying them to your own.
GQ: You’ve written that there is a “physics” to luck, since all of life is a matter of cause and effect. What do you mean by that?
Tina Seelig: We live in a world where every single choice you make has consequences. Many people don’t pay attention to the little things they do that have an enormous impact. If you don’t actually think about the consequences, you’re missing a huge opportunity—and they're often things you don’t even notice you missed. You see other people having opportunities that you don’t, and you can feel like, “Wow, how come everybody else has all the luck?” But if you look carefully there are all these little things they have done that end up essentially attracting luck their way.
So what are those behaviors you can practice to attract luck?
One is showing appreciation. It doesn’t take very much time and yet it has a huge impact on people. Most people are not appropriately appreciative of what other people do for them, and they take it for granted. Especially when you’re a kid, or a young person, and people have been doing things for you your whole life, you just assume that’s the way the world works. Showing appreciation results in a tremendous outpouring of other opportunities.
The other is taking risks. Go up and say hello to somebody you don’t know. Try a sport you haven’t tried. Go somewhere you haven’t gone before. Each of these opens up the door to possibilities. Think about people who are well-known athletes. If they had never tried that sport, they never would have known that that was their gift.
The other thing that I talk about is embracing crazy ideas. There are so many things around us that on the surface look unusual or crazy, but if you’re willing to embrace them? It’s a little like improv, saying, “Yes, and…” Being able to look at everything that comes to you as a gift and embracing it, as opposed to reacting quickly with a no or with a negative response.
How do you make yourself more willing to open up to risk?
Tiny little experiments. One of my favorite concepts comes from my colleague Alberto Savoia [an Innovation Agitator Emeritus at Google, and Innovation Lecturer at Stanford], who actually has a book coming out about doing little tiny experiments. For example, if I get a chess board, I don’t immediately sign up for the biggest tournament in my neighborhood — I just play a game of chess. You don’t instantly sign up for the World Series the first time you pick up a baseball bat. The key is to do something little that gives you a little experience.
That seems like a way to get away from the paralysis of choice: putting yourself in motion by saying I’m going to do this thing—even if it’s little—to move myself in the direction of action.
Right. I also think it’s really important to distinguish between fortune, chance and luck. People don’t distinguish between them.
Fortune is things that are outside of your control, things that happen to you. I’m fortunate to be raised by a loving family. I’m fortunate to be born in this place and time. I’m fortunate to have blue eyes. Chance is something you have to do; I have to take a chance. It requires action on your part in the moment. Buy a lottery ticket. Ask someone on a date. Apply to a job. Luck is something where you have even more agency. You make your own luck by identifying and developing opportunities in advance.
People conflate all three of those things and as a result, they think things are much more random than they are.
So the harder you work to prepare yourself to find and seize opportunities, the luckier you get.
Hard work matters, but it’s also really important to think about resilience. It makes you luckier. If you can extract the learnings from mistakes and failures, you’re going to move forward much more quickly.
I can think of very specific people who have one failure and then they don’t want to try anything else. Whereas other people go, “Okay, I have a failure, I learned a lot, and now I’m going to go do something different.” You practice being resilient. You get better at recovering from failure.
You also mentioned changing your relationship to ideas, or embracing something that sounds unusual or crazy. Are there specific ways that you’d recommend to go about finding the good, even in bad ideas?
Take a class in improv. The major rule of improv is that you accept whatever is given to you. If someone gives you an idea that doesn’t, on the surface, make any sense, you have to go with it. That idea of embracing ideas even [if] on the surface they seem crazy is a really powerful thing. It’s really hard to do.
I write and teach a lot about brainstorming. Someone has an idea and you think it’s a really stupid idea—it’s really hard not to say "that won’t work" or "that’s a stupid idea." But it’s incredibly powerful if you can defer judgment for enough time so you can explore. You’re not investing in it, you’re just going to take a few minutes to see how this might work.
That can be dangerous, though, for a generation that was constantly told to follow its passion. You can follow a bad idea too far if you’re passionate about it. It sounds to me like you’re saying that “follow your passion” is a good message—but an incomplete one.
People tell you to follow your passion, that’s totally cop-out advice. It’s like saying “think outside the box.” Or even “fortune favors the prepared mind.” These things that people say don’t have enough meat on the bone to give you anything to do with it. I am a firm believer that passion follows engagement, not the other way around. The more engaged you are in something, the more passionate you become with it. In fact, there’s evidence the more time you spend with a person, the more you like them.
Let’s imagine it was my job to manage the sewer system of San Francisco. I don’t know anything about that. But I could get passionate about that. It’s super important. We would all be pretty miserable if it wasn’t working. I’m going to guess there are a lot of nuances, and technology and challenges and opportunities for improvement. You could pick anything.
You have to look at your passions, as well as your skills, as well as the market. If I’m passionate about something, and I’m not very good at it—let’s say I love music, but I can’t carry a tune. I could be a good fan. I could play music, go to concerts, maybe be the manager of a band. But if I’m good at it and there’s a market for it, then that’s a job. You want an overlap in your passion, skills and market, and that’s where your sweet spot is.
Now, you can always create a market. Let’s say I’m an artist, and nobody has ever heard of my technique. I can create a market if I’m talented and passionate. But a lot of people don’t want to do that.
So by saying, “How can I engage with something that increases my passion in areas where I can also grow skills that I’m seeing are valuable in the market?"—that would help you build success instead of waiting for it to be given to you.
The point is having a sense of responsibility and agency. In a job, you might have responsibility but no authority. But in your own life, you have the responsibility and authority to do things to craft the life you want to live. Most of us choose where we’re going to live, who we’re going to spend time with, what kind of job we’re going to have. I think that so many people limit themselves, they make a box around themselves that’s much smaller than it needs to be. Then you read stories about people who go off and live in interesting places and you say, “How did you do it?” and they say, “I just did it.”
https://getpocket.com/explore/item/how-to-get-lucky-the-secrets-to-creating-your-own-good-fortune?utm_source=firefox-newtab-en-us
*
OPTIMISTS LIVE LONGER, STUDIES FIND
Here's a good reason to turn that frown upside down: Optimistic people live as much as 15% longer than pessimists, according to a new study spanning thousands of people and 3 decades.
Scientists combined data from two large, long-term studies: one including 69,744 women and another including 1,429 men, all of whom completed questionnaires that assessed their feelings about the future. After controlling for health conditions, behaviors like diet and exercise, and other demographic information, the scientists were able to show that the most optimistic women (top 25%) lived an average of 14.9% longer than their more pessimistic peers. For the men the results were a bit less dramatic: The most optimistic of the bunch lived 10.9% longer than their peers, on average, the team reports today in the Proceedings of the National Academy of Sciences.
The most optimistic women were also 1.5 times more likely to reach 85 years old than the least optimistic women, whereas the most optimistic men were 1.7 times more likely to make it to that age.
The scientists suggest an optimistic mindset may promote healthy behaviors like exercise and healthy diets and help individuals resist the temptation of unhealthy impulses like smoking and drinking. Optimists may also handle stress better than pessimists, choosing to pursue long-term goals rather than immediate rewards when faced with a challenging situation.
Even if you're a negative Nellie, take heart: Pessimists can learn to become more optimistic with proper guidance, previous research has shown. Still, it's unclear whether such behavioral modifications can impact life span. Perhaps it would help to believe they will?
https://www.science.org/content/article/cheer-optimists-live-longer
from another source:
Being optimistic or pessimistic is not just a psychological trait or interesting topic of conversation; it’s biologically relevant. Indeed, there is mounting evidence that optimism may serve as a powerful tool for preventing disease and promoting healthy aging.
An optimistic mindset is associated with various positive health indicators: cardiovascular above all, but also pulmonary, metabolic, and immunologic. Optimists have a lower incidence of age-related illnesses and reduced mortality. Optimism and pessimism are not arbitrary and elusive labels. On the contrary, they are mindsets that can be scientifically measured, placing an individual's attitude on a spectrum ranging from optimistic to pessimistic. By framing each subject's baseline in this way, researchers are able to verify the correlation between optimism levels and health outcomes.
In 2019, a review published in JAMA Network Open by Alan Rozanski, a cardiologist at Mount Sinai Morningside hospital in New York City, compared the results of 15 different studies for a total of 229,391 participants. Rozanski’s meta-analysis showed that individuals with higher levels of optimism experience a 35 percent lower risk of cardiovascular events compared to those with lower optimism, as well as a lower mortality rate.
Rozanski pointed out that the most optimistic people tend to take better care of themselves, especially by eating healthily, exercising, and not smoking. These behaviors have been found to a much lesser extent in the most pessimistic people, who tend to care less for their own well-being. But the damage produced by pessimism is also biological: The continuous wear and tear caused by elevated stress hormones like cortisol and noradrenaline leads to heightened levels of body inflammation and promotes the onset of disease. Moreover, pathological pessimism can lead to depression, which the American Heart Association considers a risk factor for cardiovascular disease.
The same correlation has been identified in relation to minor illnesses like the common cold. A 2006 study outlined the personality profiles of 193 healthy volunteers who were inoculated with a common respiratory virus. Subjects who expressed a positive attitude were less likely to develop symptoms of the infection than their counterparts with less positive attitudes. Optimism, then, is one of the most interesting nonbiological factors involved in the mechanisms of longevity because it correlates an individual’s psychological attributes with their physical health. In this sense, it offers us a further strategy to protect our health.
Optimists tend to live longer, as revealed by research led by Lewina Lee at Harvard University analyzing 69,744 women from the Nurses' Health Study and 1,429 men from the aging study of the U.S. Department of Veterans Affairs. The results tell us that optimists tend to live on average 11 to 15 percent longer than pessimists and have an excellent chance of achieving "exceptional longevity" — that is, by definition, an age of over 85 years. These results are not confounded by other factors such as socioeconomic status, general health, social integration, and lifestyle because, according to Lee, optimists are better at reframing an unfavorable situation and responding to it more effectively. They have a more confident attitude toward life and are committed to overcoming obstacles rather than thinking that they can do nothing to change the things that are wrong.
A study of French data from 1998 hypothesized a correlation between the death rate and group events that inspire optimism. On July 12th of that year, at the Saint-Denis stadium, the French national soccer team won the World Cup against Brazil. Deaths from cardiovascular events recorded that day show a marked decline compared to the average for July 7th through July 17th, but the effect was limited to men; for women the rate remained about the same. Although it’s not possible to establish a causal link, this curious coincidence suggests that the massive injection of optimism after the team’s victory may have played a role.
In the Optimist’s Mind
Start from the premise that a fundamental part of life is the pursuit of goals. Encountering obstacles to these goals can lead to different results depending on the individual’s level of optimism. A person with a confident, positive attitude will try to overcome the obstacle; a person doubtful that their efforts will succeed will tend to let the goal go, perhaps experiencing frustration from a lingering attachment to it, or may disengage completely and fail to achieve it. Optimism and pessimism extend this mechanism to a larger scale: they are mental attitudes not only toward a single goal but toward the future in general.
Researchers have studied the relationship between these two attitudes and the results obtained during the course of real-life situations. It has been seen that optimists are more likely to complete university studies, not because they are smarter than others, but because they have more motivation and perseverance. And they are able to better manage the simultaneous pursuit of multiple goals — making friends, playing sports, and doing well at school — by optimizing their efforts: showing a greater commitment to priority goals and less commitment to secondary ones.
The optimist seems to invest their self-regulatory resources carefully: increasing effort when circumstances are favorable, scaling it back when they are less so, but also doing more when there is a disadvantage to overcome.
In a famous 1990 study by positive psychologist Martin Seligman involving college swimming teams, coaches asked the athletes to swim at their best. At the end of the races, the swimmers were given false results, with about two seconds added to their times: a margin small enough to be credible, yet large enough to disappoint them. After a couple of hours of rest, during which they probably mulled over their supposedly poor results, the swimmers were called to a second race, and here the optimists and pessimists differed significantly. The pessimists were on average 1.6 percent slower than in their first performance, while the optimists swam 0.5 percent faster. The interpretation of the experiment was that optimists tend to use failure as a goad to do better, whereas pessimists tend to be discouraged more easily and give up more readily.
Results of DNA studies also seem to confirm the idea that optimism is an effective tool for slowing down cellular aging, of which telomere shortening is a biomarker. (Telomeres are the protective caps at the end of our chromosomes.) This research is still in progress, but the early results are informative. In 2012, Elizabeth Blackburn, who three years earlier shared a Nobel Prize for her work in discovering the enzyme that replenishes the telomere, and Elissa Epel at the University of California at San Francisco, in collaboration with other institutions, identified a correlation between pessimism and accelerated telomere shortening in a group of postmenopausal women. A pessimistic attitude, they found, may indeed be associated with shorter telomeres. Studies are moving toward larger sample sizes, but it already seems apparent that optimism and pessimism play a significant role in our health as well as in the rate of cellular senescence.
More recently, in 2021, Harvard University scientists, in collaboration with Boston University and the Ospedale Maggiore in Milan, Italy, examined the telomeres of 490 elderly men enrolled in the Normative Aging Study of U.S. veterans. Strongly pessimistic attitudes were associated with shorter telomeres, a further encouraging finding in the study of the mechanisms that make optimism and pessimism biologically relevant.
Optimism is thought to be only about 25 percent genetically determined. The rest is the result of our social relationships or of deliberate efforts to learn more positive thinking. In an interview with Jane Brody for the New York Times, Rozanski explained that “our way of thinking is habitual, unaware, so the first step is to learn to control ourselves when negative thoughts assail us and commit ourselves to change the way we look at things.
We must recognize that our way of thinking is not necessarily the only way of looking at a situation. This thought alone can lower the toxic effect of negativity.” For Rozanski, optimism, like a muscle, can be trained to become stronger through positivity and gratitude, in order to replace an irrational negative thought with a positive and more reasonable one.
While the exact mechanisms remain under investigation, a growing body of research suggests that optimism plays a significant role in promoting both physical and mental well-being. Cultivating a positive outlook, then, can be a powerful tool for fostering resilience, managing stress, and potentially even enhancing longevity. By adopting practices that nurture optimism, we can empower ourselves to navigate life’s challenges with greater strength and live healthier, happier lives.
biology of kindness
https://thereader.mitpress.mit.edu/the-new-science-of-optimism-and-longevity/
Oriana:
Last time I read about it, it wasn’t the happy-go-lucky extraverts who were found to live longest. It was the hard-working, conscientious people, the kind who tended to work long past the retirement age or started a new career in their sixties. What seemed to matter most was having something to live for.
A high IQ also strongly correlated with longevity. Thus, retired Harvard professors were found to live significantly longer than former Harvard athletes.
I wouldn’t label myself as an optimist or a pessimist. It depends on the situation or the topic under discussion. What I noticed, however, is that I’m more flexible now than in early youth, less bothered by something not going the way I’d prefer. With age comes the simple wisdom that it’s pointless to waste time trying to change something outside our control.
Perhaps optimism/pessimism is not the best label here. I suspect the amount of positive emotion has the largest impact on longevity. True, optimists may experience more positive emotions in their lives, but it’s difficult to be an optimist in today’s world. Pollyannas continue to be perceived as foolish, if not downright idiotic — and perversely unwilling to acknowledge sad truths. But even a diehard pessimist can understand the health benefits of positive emotions and arrange to experience more pleasure, for instance by surrounding himself or herself with beauty and keeping in contact with joy-bringing friends.
Likewise, we can learn to be our own best therapists. I like the definition of mental disorders as “paying attention to the wrong things.” The antidote is paying attention to the right things. Life is suffering, as the Buddha stated long ago; nevertheless, we can learn to count our blessings.
*
THE SECRET TO A MEANINGFUL LIFE
Some people seem to spend their whole lives dissatisfied, in search of a purpose. But philosopher Iddo Landau suggests that all of us have everything we need for a meaningful existence.
According to Landau, a philosophy professor at Haifa University in Israel and author of the 2017 book Finding Meaning in an Imperfect World, people are mistaken when they feel their lives are meaningless. The error is based on their failure to recognize what does matter, instead becoming overly focused on what they believe is missing from their existence. He writes in The Philosopher’s Magazine:
To my surprise, most of the people with whom I have talked about the meaning of life have told me that they did not think that their lives were meaningful enough. Many even presented their lives as outright meaningless. But I have often found the reasons my interlocutors gave for their views problematic. Many, I thought, did not pose relevant questions that might have changed their views, or take the actions that might have improved their condition. (Some of them, after our discussions, agreed with me.) Most of the people who complained about life’s meaninglessness even found it difficult to explain what they took the notion to mean.
In other words, Landau thinks that people who feel purposeless actually misunderstand what meaning is. He is among many thinkers over the ages who’ve wrestled with the difficult question, “What is a meaningful life?”
The Question of Meaning
Philosophers’ answers to this question are numerous and varied, and practical to different degrees. The 19th-century philosopher Friedrich Nietzsche, for example, said the question itself was meaningless because in the midst of living, we’re in no position to discern whether our lives matter, and stepping outside of the process of existence to answer is impossible.
Those who do think meaning can be discerned, however, fall into four groups, according to Thaddeus Metz, writing in the Stanford Encyclopedia of Philosophy. Some are god-centered and believe only a deity can provide purpose. Others subscribe to a soul-centered view, thinking that something of us must continue beyond our lives, an essence persisting after physical existence, which gives life meaning.
Then there are two camps of “naturalists” seeking meaning in a purely physical world as known by science, who fall into “subjectivist” and “objectivist” categories.
The two naturalist camps are split over whether the human mind makes meaning or whether meaning is absolute and universal. Objectivists argue that there are absolute truths which have value, though they may not agree on what those truths are. For example, some say that creativity offers purpose, while others believe that virtue, or a moral life, confers meaning.
Subjectivists—Landau among them—think that those views are too narrow. If meaning happens through cognition, then it could come from any number of sources. “It seems to most in the field not only that creativity and morality are independent sources of meaning, but also that there are sources in addition to these two. For just a few examples, consider making an intellectual discovery, rearing children with love, playing music, and developing superior athletic ability,” Metz proposes.
For subjectivists, depending on who and where we are at any given point, the value of any given activity varies. Life is meaningful, they say, but its value is made by us in our minds, and subject to change over time. Landau argues that meaning is essentially a sense of worth which we may all derive in a different way—from relationships, creativity, accomplishment in a given field, or generosity, among other possibilities.
Reframing Your Mindset
For those who feel purposeless, Landau suggests a reframing is in order. He writes, “A meaningful life is one in which there is a sufficient number of aspects of sufficient value, and a meaningless life is one in which there is not a sufficient number of aspects of sufficient value.”
Basically, he’s saying meaning is like an equation—add or subtract value variables, and you get more or less meaning.
So, say you feel purposeless because you’re not as accomplished in your profession as you dreamed of being. You could theoretically derive meaning from other endeavors, like relationships, volunteer work, travel, or creative activities, to name just a few. It may also be that the things you already do really are meaningful, and that you’re not valuing them sufficiently because you’re focused on a single factor for value.
He points to the example of the existentialist psychiatrist Viktor Frankl, who survived imprisonment in Nazi concentration camps in World War II and went on to write the book Man’s Search for Meaning. Frankl’s purpose, his will to live despite imprisonment in the harshest conditions, came from his desire to write about the experience afterward. Frankl noted, too, that others who survived the camps had a specific purpose: they were determined to see their families after the war or to help other prisoners live, maintaining a sense of humanity.
Landau argues that anyone who believes life can be meaningless also assumes the importance of value. In other words, if you think life can be meaningless, then you believe that there is such a thing as value. You’re not neutral on the topic. As such, we can also increase or decrease the value of our lives with practice, effort, action, and thought. “I can ruin or build friendships, upgrade or downgrade my health, and practice or neglect my German. It would be surprising if in this particular sphere of value, the meaning of life, things were different from how they are in all the other spheres,” he writes.
For a life to be valuable, or meaningful, it needn’t be unique. Believing that specialness is tied to meaning is another mistake many people make, in Landau’s view. This misconception, he believes, “leads some people to unnecessarily see their lives as insufficiently meaningful and to miss ways of enhancing meaning in life.”
He notes too that things change all the time: We move, meet new people, have fresh experiences, encounter new ideas, and age. As we change, our values transform, and so does our sense of purpose, which we must continually work on.
You Are, Therefore You Matter
Some might protest that Landau’s being simplistic. Surely there must be more to existence than simply assigning a value to what we already have and thinking differently if we fail to recognize purpose in our lives.
In fact, there are even less complex approaches to meaningfulness. In Philosophy Now, Tim Bale, a professor of politics at Queen Mary University of London in the UK, provides an extremely simple answer: “The meaning of life is not being dead.”
While that may sound coy, many philosophers offer similar responses, although few are as pithy. Philosopher Richard Taylor proposes that efforts and accomplishments aren’t what make life matter, writing in the 1970 book Good and Evil, “the day was sufficient to itself, and so was the life.” In other words, because we live, life matters.
It can be disconcerting, perhaps, to have such an easy answer. And detractors might argue that nothing can matter, given the immensity of the universe and the brevity of our lives. But this assumes our purpose is fixed, rigid and assigned externally, and not flexible or a product of the mind.
The Question is the Answer
There are other approaches, too. Casey Woodling, a professor of philosophy and religious studies at Coastal Carolina University in South Carolina, proposes in Philosophy Now that the question of meaningfulness itself offers an answer. “What makes a human life have meaning or significance is not the mere living of a life, but reflecting on the living of a life,” he writes.
Pursuing ends and goals—fitness, family, financial success, academic accomplishment—is all fine and good, yet that’s not really meaningful, in Woodling’s view. Reflecting on why we pursue those goals is significant, however. By taking a reflective perspective, significance itself accrues. “This comes close to Socrates’ famous saying that the unexamined life is not worth living,” Woodling writes. “I would venture to say that the unexamined life has no meaning.”
Mystery is Meaning
In the Eastern philosophical tradition, there’s yet another simple answer to the difficult question of life’s meaning—a response that can’t be articulated exactly, but is sensed through deep observation of nature. The sixth-century-BC Chinese sage Lao Tzu—who is said to have dictated the Tao Te Ching before escaping civilization for solitude in the mountains—believed the universe supplies our value.
Like Woodling, he would argue that goals are insignificant, and that accomplishments are not what makes our lives matter. But unlike Woodling, he suggests meaning comes from being a product of the world itself. No effort is necessary.
Instead of reflection, Lao Tzu proposes a deep understanding of the essence of existence, which is mysterious. We, like rivers and trees, are part of “the way,” which is made of everything and makes everything and cannot ever truly be known or spoken of. From this perspective, life isn’t comprehensible, but it is inherently meaningful—whatever position we occupy in society, however little or much we may do.
Life matters because we exist within and among living things, as part of an enduring and incomprehensible chain of existence. Sometimes life is brutal, he writes, but meaning is derived from perseverance. The Tao says, “One who persists is a person of purpose.”
https://getpocket.com/explore/item/the-secret-to-a-meaningful-life-is-simpler-than-you-think?utm_source=firefox-newtab-en-us
*
“The gods had condemned Sisyphus to ceaselessly rolling a rock to the top of a mountain, whence the stone would fall back of its own weight. They had thought with some reason that there is no more dreadful punishment than futile and hopeless labor,” Albert Camus began in his famous 1942 essay on the Greek myth. Yet he concludes: “The struggle itself toward the heights is enough to fill a man’s heart. One must imagine Sisyphus happy.”
https://qz.com/quartzy/1466818/the-existentialists-reluctant-guide-to-life
*
LIFE IN THE MIDDLE AGES: BRUTISH AND SHORT, ESPECIALLY FOR ORDINARY PEOPLE
medieval skeletons
The remains of numerous individuals unearthed on the former site of the Hospital of St. John the Evangelist, taken during the 2010 excavation
Signs of violent social inequality are written all over the bones in medieval graveyards. Excavations of several burial sites in the historic city of Cambridge in the UK have found that working people were significantly more likely to have broken bones, likely a sign of occupational injury or violence, while the wealthy elites were buried in comparatively good shape.
The bones also reveal some curious stories from the Middle Ages, like a monk who was crushed to death by a heavy cart, as well as the grim reality faced by many people during this time.
As reported in the American Journal of Physical Anthropology, archaeologists from the University of Cambridge excavated three different sites in the city: a local church graveyard for ordinary working people, a charitable “hospital” where the infirm were cared for then buried, and an Augustinian friary where clergy were buried alongside wealthy donors. X-ray analysis of 314 skeletons revealed that 44 percent of working people had bone fractures, compared to 32 percent of those in the friary, and 27 percent of those buried by the hospital.
“We can see that ordinary working folk had a higher risk of injury compared to the friars and their benefactors or the more sheltered hospital inmates,” lead author Dr Jenna Dittmar, from the Wellcome Trust-funded After the Plague project at the University’s Department of Archaeology, said in a statement.
“These were people who spent their days working long hours doing heavy manual labour. In town, people worked in trades and crafts such as stonemasonry and blacksmithing, or as general laborers. Outside town, many spent dawn to dusk doing bone-crushing work in the fields or tending livestock,” Dr Dittmar said.
Across the different cemeteries, broken bones were found in 40 percent of male remains and 26 percent of female remains. Even though working people took the brunt of these apparent injuries, the skeletons show that the Middle Ages were a brutal time for everyone; skeletal injuries most likely inflicted by violence were found in about 4 percent of the whole population.
One story unearthed from the cemeteries was the grisly death of a monk who lived at the esteemed Augustinian friary. Despite living a relatively comfortable life compared to much of the population, he was found with both femurs, the long bones of the upper leg, crushed.
“Whatever caused both bones to break in this way must have been traumatic, and was possibly the cause of death,” explained Dittmar. “Our best guess is a cart accident. Perhaps a horse got spooked and he was struck by the wagon.”
It appears quite a few of the monks had surprisingly violent lives, or deaths. Of the 19 skeletons believed to be monks at the Augustinian friary, six showed evidence of trauma.
One older woman buried in the grounds for working people was found to have broken her ribs, multiple vertebrae, her jaw, and her foot over the course of her lifetime. These injuries had largely healed before her death, leading the researchers to suspect that she had sustained them at different times, through lifelong domestic abuse.
“It would be very uncommon for all these injuries to occur as the result of a fall, for example. Today, the vast majority of broken jaws seen in women are caused by intimate partner violence,” Dittmar explained.
All in all, the bones provide a fascinating look into the social inequalities of the period, and how tough it was for everybody.
“We can see this inequality recorded on the bones of medieval Cambridge residents. However, severe trauma was prevalent across the social spectrum,” Dittmar said. “Life was toughest at the bottom – but life was tough all over.”
https://www.iflscience.com/skeletons-show-just-how-brutal-and-violent-medieval-life-was-especially-for-ordinary-people-58515
*
GREENLAND: THE VANISHED SETTLEMENT
Having prospered for more than 400 years, a medieval colony on Greenland vanished without a trace, but its memory lived on.
‘A bear plunging into the sea’, illustration from A Voyage of Discovery ... inquiring into the probability of a North-West Passage, by John Ross, 1819.
The Icelandic sagas tell of Erik the Red, a colorful character who was outlawed from Iceland and sentenced to three years’ banishment. During his exile he reached what we now know as Greenland, around AD 986. When he returned to Iceland, he urged others to settle there. The late 13th-century Flateyjarbók records that, with a talent for branding, Erik called this new land ‘Greenland’, ‘for he said that might attract men thither, when the land had a fine name’.
Greenland was certainly an enticing prospect; the milder climate of the Medieval Warm Period (c.950 to c.1300) meant that farming was possible near the sheltered pockets of the fjords, while hunting provided seal meat and caribou.
Archaeologists have identified the ruins of hundreds of farms and several churches belonging to a Norse colony. Three main settlements on the southwest coast of Greenland could, at the colony’s peak, have been home to up to 5,000 inhabitants.
The largest series of connected farms formed the Eastern Settlement, which stretched more than 100 km, covering what are now the towns of Nanortalik, Narsaq and Qaqortoq. Norse Greenland was an export economy: an important source of income for the Norse Greenlanders was walrus tusk, which supplied European elites with ivory crucifixes, knife handles and chess pieces. By 1100 Greenland had become the major supplier of ivory to much of Europe.
The colony prospered for more than 400 years and then vanished. Its disappearance is still not fully understood. The last record of a ship sailing from Greenland is in 1410, after which there is no trace of the settlers. Archaeology shows that the Norse Greenlanders vanished in the course of the 15th century. In the centuries after communication ceased, speculation about the settlers’ fate abounded. A 14th-century account of church property in Greenland by the Norwegian cleric Ívar Bárdarson recounts that he visited one of the major settlements, which had been raided: ‘Now the Skrælings [the Norse name for the Inuit] have destroyed the entire Western Settlement’ and only livestock were left, but ‘no people, either Christian or heathen’.
Map made in England, depicting Greenland (based on reports by Hans Egede), Iceland and the Faroe Islands, 1747.
When Danish missionaries resettled Greenland in the early 18th century, they asked the Inuit to confirm this story about past violence and their informers gave them what they wanted. Yet subsequent research has shown that many Inuit stories were adaptations of standard legends about hostilities with other peoples and not necessarily evidence of the colony’s fate. Another belief that gained traction was that the settlers had succumbed to the onset of a colder climate. In James Montgomery’s bestselling poem Greenland (1819), a series of dramatic scenes imagines the Norse people being buried under falling ice.
A possible explanation for the Norse disappearance is less dramatic: prices for walrus tusks may have fallen after 1350 as Asian and East African elephant ivory, whiter than walrus ivory and therefore more prestigious, took over the market. The end of the prosperous walrus ivory trade, combined with a deteriorating climate, may have been incentive enough to leave.
East isn’t East
The colony’s fate came to have an enduring hold on Western imagination: from accounts in papal correspondence of the 1440s of Greenland settlers having been sacked by pirates who carried inhabitants away, to Arthur Conan Doyle, who, recalling his time as a surgeon on board a Greenland whaler, speculated whether the Norse colonists were ‘still singing and drinking and fighting’ in Greenland’s ‘ancient city’.
The colony’s possible wealth also proved alluring. While Ívar Bárdarson found the Western Settlement seemingly abandoned, his 14th-century account described the Eastern Settlement as a prosperous community with abundant wealth, natural resources and precious metals. His account – essentially a tax survey – was reproduced and translated in several manuscripts and later committed to print. Throughout the 16th and 17th centuries, manuscripts and books regularly repeated stories of the riches in the Eastern Settlement, based on Ívar’s report and general expectations about Arctic abundance, which promised whaling possibilities, precious furs and gold and silver ores.
The Danish king Christian IV sent out three expeditions to Greenland between 1605 and 1607 to search for his lost subjects and their precious metals. (The Norse Greenlanders had, originally, been under the rule of the Norwegian king, but when Denmark and Norway united in 1397 the colony transferred to the Danish Crown.) Christian was prompted in part by the English explorer Martin Frobisher, who had made three voyages to Greenland’s surroundings some 30 years earlier. He had returned and reported finding gold, although the ore later proved to be worthless deposits.
The colony’s mystique was only enhanced by the difficulties in actually finding it. The Norse settlements were on the west coast of Greenland, but the name ‘Eastern Settlement’ caused much confusion in later centuries. It was so named because Greenland’s west coast veers east as it runs south; so, in relative terms, it was to the east of the smaller Western Settlement further up the coastline. The ‘Eastern Settlement’ was therefore misinterpreted as being on the east coast, where ships could not land due to sea ice clogging up the shores. The misconception and lack of access kept alive the belief that this settlement still stood, out of reach. The legends of wealth, which there were no witnesses to confirm or deny, circulated for centuries and made Greenland an object of imperialist desire like other lands in the New World.
Danish claims to Greenland, therefore, did not go unchallenged. John Dee, Queen Elizabeth’s court astrologer, promoted the legend of the Welsh prince Madoc, who had allegedly crossed the Atlantic in 1170 and established communities in both Greenland and North America. The assertion that Greenland was part of English colonial history was also made in publications by George Peckham and Richard Hakluyt.
During the 17th century, Danish historians attempted to gain political leverage by studying Greenland’s colonial history and the settlers’ use of the land. The publication of books and elaborate maps of the former colony under Danish suzerainty served to virtually recolonize Greenland on paper. However, English and Dutch scholars continued to dispute Danish claims to all of Greenland, seizing on the French intellectual Isaac La Peyrère’s evaluation in his popular Relation du Groenland (1647) that ‘Danish’ settlements made up only a small part of the extensive northern land.
cape fligely
‘Cape Fligely, Eastern Coast of Greenland, with the Frozen Sea’, illustration for The World As It Is, by George Chisholm, 1885.
Missionary zeal
At the beginning of the 18th century, the Danes realised that Dutch whaling vessels in particular were prospering from fishing in Greenland’s waters. Historical precedent alone was not enough to enforce authority against this; new settlements were needed. At the same time, there was a growing missionary yearning to save the colonists’ descendants – Christians who were thought to be isolated on Greenland’s ice-clogged shores, cut off since before the Reformation and therefore in need of spiritual re-education.
This ambition prompted the Dano-Norwegian missionary Hans Egede to solicit royal permission to establish a colony on the west coast of Greenland in 1721. Early documents from his recolonization show that the hope was to re-establish contact with the Eastern Settlement to help finance Egede’s mission through trade, exploitation of the extensive forests purportedly there (wood was a much-needed resource for the new Danish colony) and the extraction of the fabled precious ore.
Egede’s published eyewitness account of his 15 years as a missionary, A Description of Greenland (1741), was translated into several European languages. The book provides much new information about the landscape and its Indigenous inhabitants, but it is also haunted by the legends of the Norse settlements. His third chapter begins, tellingly: ‘We are informed by ancient histories...’ Based on his extensive reading of European texts on Greenland, Egede’s writing sometimes takes the form of a negative checklist of what one would expect to find. He tells us that 18th-century Greenland has no cattle, no cultivated farmland and no forests – negations relevant for readers familiar with the descriptions of Norse Greenland. He also declares that there is ‘little or nothing to say’ about (precious) minerals or metals.
If Egede’s text strips older books of their authority, he does not outright reject their information unless his own experiences directly contradict them. In fact, he leaves open the possibility that riches may exist beyond the horizon; old promises of abundance in the Eastern Settlement were deferred rather than annulled.
Egede made two failed attempts at reaching the Eastern Settlement, and several Danish expeditions to the east coast, both by sea and over land, were also launched in the 1720s. The expeditions were all failures, some costly. Modern studies have tended to treat the fantasies about a still-standing Eastern Settlement as a wild goose chase, but it was an important driving force for Danish interest in Greenland during the early period of resettlement.
Finally, at the end of the 18th century, the Danish geographer Heinrich Peter von Eggers proposed that the Eastern Settlement had been on the west coast. This (correct) theory gradually gained acceptance in European scientific circles. Even so, speculations that descendants of the old colonists inhabited remote parts of Greenland were not put to rest.
Imperial interest
Beginning in 1818, when ships and men were freed up after the end of the Napoleonic Wars, the British Admiralty sent out expeditions to the Arctic to find a sailing route across the North Pole to reach Asian markets. As Greenland was en route, it was inevitably drawn into the orbit of British imperial interest. The whaler William Scoresby Jr had observed that sea ice around Greenland was disappearing, sparking the hope that the prohibitive east coast might become accessible.
The 1818 expedition led by Captain John Ross along the west coast of Greenland failed to find the mythical Northwest Passage, yet an interesting cultural encounter between Ross’ crew and the isolated Inughuit at Qaanaaq (also known as Thule) took place. Indigenous helpers were often important aides on Arctic expeditions, although seldom adequately acknowledged. Ross’ expedition included the Inuit interpreter John Sacheuse, who made it possible for this remote people to communicate with the visitors. According to eyewitness accounts of the encounter, Sacheuse exclaimed: ‘These are right Eskimaux, these are our fathers!’, in the belief that the isolated Inughuit were a lost colony of his ancestors, who had migrated southwest. From a British perspective, the meeting was no compensation for failing to find a navigable trade route, but it indicated that the European colony could perhaps also be discovered.
British writers, buoyed by an understanding of their nation as the leading sea power, saw it as a moral duty to discover the old Christian colonists in Greenland and save them from isolation, starvation and a fall into abject heathenness. The poet Anna Jane Vardill, for example, hoped in ‘The Arctic Navigator’s Prayer’ (1818) that melting icebergs could make possible a British discovery of ‘some bright cove, where long unseen / Our kindred hearts have shelter’d been!’ Such appeals implied that the Norse Greenlanders were of the same stock as the Anglo-Saxon and Viking settlers of Britain. However, philanthropic aims, widely publicized at the time, could easily veer into imperial overtures.
Britain never officially made claims to Greenland, but several non-state actors advocated that it should be a target not only for exploration but also for exploitation. The English inventor George Manby, who worked to improve Britain’s failing whaling industry, argued in a series of pamphlets that Greenland’s east coast could be used as a penal colony. He (mistakenly) believed that the history of the old Greenland settlers proved that it offered habitable conditions, while emphasising that the location had a more salutary climate than the Australian colonies to which British criminals were otherwise transported. As it became clear that Scoresby Jr’s observation of an ice-free Arctic was not permanent, British enthusiasm for Greenland’s east coast seems also to have cooled.
Lessons learned
Greenland was often viewed as a proving ground for the two Western expansionist movements: colonialism and Christian mission. John Howison’s monumental European Colonies, in Various Parts of the World (1834), for example, includes extensive speculation on the old Greenland settlements alongside his survey of contemporary (and primarily British) colonial possessions. The tragic fate of the medieval Greenland settlers – either isolation or demise – is held out as a lesson for modern colonial powers who venture into increasingly far-flung corners of the world.
Fears that the Norse settlers had descended to savagery had long abounded but were given substance by 18th-century missionaries, who recorded Inuit folklore of strange peoples in remote Greenland regions. In the popular History of Greenland (1765) by the German missionary historian David Crantz, accounts are relayed about a cannibalistic tribe living in a mountainous area. Crantz surmises that they may be descendants of the Norse settlers, but staunchly rejects the notion that they were ‘man-eaters’ as nothing but Inuit fancifulness. This was the usual pattern; cannibalism was viewed as practically inconceivable for Europeans. Furthermore, in a missionary context, to assume that the old settlers had lost the last vestiges of their human dignity would make them undeserving of recovery and conversion, which remained an ambition.
Legends of literature
As a 20-year-old medical student, Arthur Conan Doyle enlisted as a surgeon on board a Greenland whaler. Many years later, in ‘The Glamour of the Arctic’ (1892), he reflected on the experience and on what may have happened to the inhabitants of Greenland’s ‘ancient city’: had they been murdered by the Skrælings, or had they intermarried to produce a mixed race? Eventually, Conan Doyle preferred to leave the question unsolved; at a time when explorers and mapmakers were robbing the world of its secrets, authors, he argued, needed mysteries to fire their imagination. Conan Doyle never got to write a story about Greenland’s lost colonists, but many others did.
After both British and Danish expeditions had surveyed large swathes of the east coast in the 1820s (finding no definite traces of any surviving colonists), literary writers joined scientists and historians in turning their gaze towards other parts of the Arctic as possible refuges for the settlers. A reference in an old manuscript discovered by the 17th-century Icelandic bishop Gísli Oddsson described a possible migration from Greenland. This possibility was compounded by travellers who provided ethnological accounts of fair-skinned Inuit in Labrador, of Native Americans in New England speaking Norse, and so on. The reverberations of such spurious observations led to numerous works of fiction about adventurers who discover that Greenland’s lost settlers have migrated elsewhere in the Arctic.
These stories also took advantage of a contemporary geophysical theory that the North Pole had a temperate climate and an ice-free sea. The British-American author Frederick Whittaker’s The Lost Captain; or, Skipper Jabez Coffin’s Cruise to the Open Polar Sea (1880) is perhaps the earliest such story. Whittaker imagines the Greenland settlers having migrated to a new habitat. Adventure stories that followed, such as William Huntington Wilson’s Rafnaland (1900), Fenton Ash’s In Polar Seas (1915-16) and Fitzhugh Green’s ZR Wins (1924), also draw on geothermal theories that the Arctic was home to volcanically heated islands. The tales feature either Norse migrants from Greenland or the descendants of bold Viking explorers who had managed to navigate beyond the wall of ice where modern ships usually got stuck.
The ‘Norse colony’ tales were a subcategory of the popular ‘lost races’ adventure stories that had their heyday between the 1870s and the 1920s, often piggybacking on the success of Henry Rider Haggard’s novels. The fact that the Polar Sea was still a blank spot offered authors the opportunity of imagining the vanished Norse settlers’ fate.
chess king
A king from the Lewis chessmen, made of walrus ivory probably obtained from Greenland, 12th century.
Blessed isolation
In the early 20th century, the first Inuit novelists projected future hopes for their fellow Greenlanders. Mathias Storch published Sinnattugaq (The Dream, 1914), which envisions Nuuk in the year 2105 when Greenlanders are well educated and the Danish trade monopoly has been lifted. Augustinus ‘Augo’ Lynge’s 1931 novel Ukiut 300-nngornerat (300 Years After) depicts Greenland in the year 2021 when Nuuk has become an important shipping port with shopping facilities and luxury hotels – all for the benefit of the Indigenous Greenlanders.
If these early novels imagine a time with expanded international connections, British and American stories focused on fictional Norse settlers who had enjoyed a glorious isolation, escaping the vicissitudes of world history. In William Gordon Stables’ The City at the Pole (1906) adventurers find themselves among a hidden, white community that had not suffered the racial degeneration of the outside world.
The fascination with isolated colonies of strong, healthy and physically impressive Norse warriors reflected public anxieties about the detrimental effects of immigration and cultural decline. These tales, written for the popular market, are therefore best understood against the backdrop of the eugenicist movement in Britain and the miscegenation laws in the US. The Arctic colonies are communities frozen in time and often offer adventurers the opportunity to engage in masculine battle as an antidote to the effeminizing effects of modernity.
‘Blond Eskimos’
A twist in the tale of the long search for the vanished settlers was the explorer and ethnologist Vilhjalmur Stefansson’s discovery of ‘Blond Eskimos’ on a 1912 expedition to Canada’s Victoria Island. He regarded the community as descendants of the European Greenlanders who had mixed with the Indigenous population. Stefansson, however, did not claim that they were white, only that they were tall, slender and had red-brownish hair. Nonetheless, the world press ran with the term ‘Blond Eskimos’.
Modern DNA analysis of the Victoria Island Inuit has disproven any Norse ancestry, but Stefansson’s claim rekindled the old hope that the mystery of the vanished settlers’ fate could be solved. In the wake of the extensive press interest in the apparent discovery, earlier ‘lost colony’ adventure tales were republished and sometimes revised to reflect the mixed-race solution. Among new stories produced, the American author Samuel Scoville Jr’s The Boy Scouts in the North (1919-20) has a heroic Inuk, who is ennobled by his Norse ancestry, but who also reflects a growing respect for Inuit ability to survive in harsh climates. Towards the end of the 1920s the popularity of stories about Norse descendants living in hidden colonies began to wane. This was in part due to advances in Arctic mapping, which narrowed the space for imaginative hideouts. But the template of ‘lost race’ fiction also began to feel trite, collapsing under the weight of its own success.
In 1933 the Permanent Court of International Justice ruled in favor of Denmark in the dispute with Norway over possession of Greenland. The case was won partly on account of past Danish commitment to finding the long-vanished settlers and Denmark’s insistence on speaking of the people in Greenland as ‘our subjects’. At this time, the reference to Greenland’s Nordic settlers proved its continuing political importance, but the legend’s cultural survival over the centuries is, perhaps, even more impressive.
greenland 1
https://www.historytoday.com/archive/feature/colony-vanished
*
PARENTS DO HAVE A FAVORITE CHILD
"We don't have a favorite" may be the biggest lie to come out of your parents' mouths. (This is saying a lot, considering these are the same one or two people who try and convince you that Santa and the Easter Bunny are real.) It's the parenting equivalent of someone saying "It's not you, it's me" during a breakup conversation when it is, in reality, you.
Though parents can deny it all they want, I'm convinced they absolutely have a favorite child. There's always one kid with more chores or harsher punishments, while the other gets a brand-new iPhone. One child with a closet full of hand-me-downs, while the other gets the brand-new Lululemon outfits. Or one child with the strict curfew, while the other doesn't even know what a curfew is. The evidence simply speaks for itself.
Now, I'm not saying parents don't love all their children equally. The favoritism may not even be intentional; parents could genuinely be blind to it. I'm just saying that it's not possible not to have a favorite, or not to pay more attention to one child than another. It's human nature to like some things more than others, the way I favor the red Starbursts slightly more than the pink ones. You don't see me gaslighting my taste buds into believing otherwise.
Whether parents are in denial, feel guilty, or would simply rather ignore the conversation altogether, my opinion is simple: parents have to have a favorite child. Right? Turns out, experts also have some thoughts on the matter.
Do Parents Have a Favorite Child?
As a middle child, I feel qualified to answer the question with a resounding yes. While I have a great relationship with my parents, I know they favor my sisters slightly more than me. Blame it on "middle child syndrome" all you want, but I know I'm not the only person in the world to feel that way.
According to adolescent mental health expert Caroline Fenkel, DSW, though, it's not so simple. "Parents generally don't have a 'favorite child' in the sense of loving one more than the other, but it's natural to feel more connected to one child at certain moments," Dr. Fenkel says. "These feelings often stem from shared temperaments, interests, or stages of life rather than a deep-seated preference."
For example, if your sibling has a hobby that aligns with something your parent also enjoys, this may grant them some extra special attention. They may spend more time together or have more inside jokes about the hobby. "This may create the perception of favoritism, even if unintentional," Dr. Fenkel says.
Thanks to genetics, your sibling may also be more similar to one of your parents than you are. This "shared temperament," as Dr. Fenkel calls it, could also make it seem like your sibling and parents connect more because they have shared personalities.
Other things that can shape family dynamics: birth order and gender. For example, firstborns might receive more responsibility, but parents might be more lenient with the youngest children, Dr. Fenkel says. Parents may also be more strict with their first child, but then relax a bit more as they have more children.
However, according to Dr. Fenkel, these are all matters of perception — not reality. "Kids often equate attention with affection, even though these are different things," Dr. Fenkel says. So even if it feels like someone is getting more attention, it doesn't necessarily mean they're "the favorite."
What Parents Can Do to Shift the Dynamics
Seeing your siblings bond with your parents more than you are can definitely leave a mark — again, even if it's not intentional. To make sure all children feel loved and valued, Dr. Fenkel says parents should prioritize one-on-one time with all their children, listen to each child's individual needs, and affirm their unique qualities. "Saying something like, 'I love you and your sibling in different ways, but both of you are so important to me,' can be helpful," she adds.
Because at the end of the day, we all hopefully understand that we are loved, even if in different ways. "Love may not look the same for each child," Dr. Fenkel says. "Fairness doesn't mean treating them identically but recognizing and responding to each child's unique needs."
As I reflect on my non-favorite status in the family, I understand now that being the "favorite" isn't so black and white. That said, I still stand behind what I said. Because even though my parents would never admit to there being a "favorite," I know whose senior photos are still hanging in the living room (hint: they're not mine).
https://www.popsugar.com/family/do-parents-have-favorite-kid-44116603
*
WE EXPECT TOO MUCH FROM OUR SPOUSES
Tall, dark, handsome, funny, kind, great with kids, six-figure salary, a harsh but fair critic of my creative output ... the list of things people want from their spouses and partners has grown substantially in recent decades. So argues Eli Finkel, a professor of social psychology at Northwestern University, in his book The All-or-Nothing Marriage.
As Finkel explains, it’s no longer enough for a modern marriage to simply provide a second pair of strong hands to help tend the homestead, or even just a nice-enough person who happens to be from the same neighborhood. Instead, people are increasingly seeking self-actualization within their marriages, expecting their partner to be all things to them.
Unfortunately, that only seems to work if you’re an Olympic swimmer whose own husband is her brusque coach. Other couples might find that career-oriented criticism isn’t the best thing to hear from the father of your 4-year-old. Or, conversely, a violinist might simply have a hard time finding a skilled conductor—who also loves dogs and long walks on the beach—on Tinder.
I spoke with Finkel about how to balance this blend of expectations and challenges in a modern relationship. A lightly edited and condensed version of our conversation follows.
Olga Khazan: How has what we expect from our marriages changed since, say, 100 years ago?
Eli Finkel: The main change has been that we’ve added, on top of the expectation that we’re going to love and cherish our spouse, the expectation that our spouse will help us grow, help us become a better version of ourselves, a more authentic version of ourselves.
Khazan: As in our spouse should, just to give a random example, provide interesting feedback on our articles that we’re writing?
Finkel: That’s obviously a white-collar variation on the theme, but I think up and down the socioeconomic hierarchy, it isn’t totally crazy these days to hear somebody say something like, “He’s a wonderful man and a loving father and I like and respect him, but I feel really stagnant in the relationship. I feel like I’m not growing and I’m not willing to stay in a marriage where I feel stagnant for the next 30 years.”
Khazan: Why has that become something that we are just now concerned with? Why weren’t our great-grandparents concerned with that?
Finkel: The primary reason for this is cultural. In the 1960s, starting around that time, we rebelled as a society against the strict social rules of the 1950s. The idea that women were supposed to be nurturing but not particularly assertive. Men were supposed to be assertive but not particularly nurturing. There were relatively well-defined expectations for how people should behave, and in the 1960s, our society said, “To hell with that.”
Humanistic psychology got big. So these were ideas about human potential and the idea that we might strive to live a more authentic, true-to-the-self sort of life. Those ideas really emerged in the 1930s and 1940s, but they got big in the 1960s.
Khazan: You write about how this has actually been harder on lower-income Americans. Can you talk a little bit about why that is?
Finkel: People with college degrees are marrying more, their marriages are more satisfying, and they’re less likely to divorce. The debate surrounds [the question]: Why is it that people who have relatively little education and don’t earn very much money have marriages that, on average, are struggling more than those of us who have more education and more money?
There basically is no meaningful difference between the poorest members of our society and the wealthier members of our society in the instincts for what makes for a good marriage.
[However, lower-income people] have more stress in their lives, and so the things that they likely have to deal with, when they’re together, are stressful things and the extent to which the time they get together is free to focus on the relationship, to focus on interesting conversation, to focus on high-level goals is limited. It’s tainted by a sense of fatigue, by a sense of limited bandwidth because of dealing with everyday life.
Khazan: What is Mount Maslow? And can you try to reach the top of Mount Maslow and maintain a successful marriage?
Finkel: Most people depict Maslow’s hierarchy as a triangle, with physiological and safety needs at the bottom, love and belonging needs in the middle, and esteem and self-actualization needs at the top. It’s useful to reconceptualize Maslow’s hierarchy as a mountain.
So imagine that you’re trying to scale this major mountain, and you’re trying to meet your physiological and safety needs, and then when you have some success with that you move on to your love and belonging needs, and as you keep going up the mountain, you finally arrive at your self-actualization needs, and that’s where you’re focusing your attention.
As any mountain-climber knows, as you get to the top of a mountain the air gets thin, and so many people will bring supplemental oxygen. They try to make sure that while they’re up there at the top they have enough resources, literally in terms of things like oxygen and warm clothing, to make sure that they can actually enjoy the view from up there.
The analogy to marriage is for those of us who are trying to reach the peak, the summit of Mount Maslow where we can enjoy this extraordinary view. We can have this wonderful set of experiences with our spouse, a particularly satisfying marriage, but we can’t do it if we’re not spending the time and the emotional energy to understand each other and help promote each other’s personal growth.
The idea of the book is that the changing nature of our expectations of marriage has made more marriages fall short of those expectations, and therefore disappoint us. But it has also put within reach the fulfillment of a new set of goals that people weren’t even trying to achieve before. It’s the fulfillment of those goals that makes marriage particularly satisfying.
Khazan: Is it risky to have your closest partner also be your harshest critic, so that you can grow?
Finkel: My New York Times op-ed piece focused on the challenges of having a partner who’s simultaneously responsible for making us feel loved, and sexy, and competent, but also ambitious, and hungry, and aspirational. How do you make somebody feel safe, and loved, and beautiful without making him or her feel complacent? How do you make somebody feel energetic, and hungry, and eager to work hard without making them feel like you disapprove of the person they currently are?
The answer to that question is, it depends.
You can do it within a given marriage, but couples should be aware that this is what they’re asking each other to do. They should be aware that, in some sense, the pursuit of those goals is incompatible, and that they need to develop a way of connecting that can make it possible.
For example, you might try to provide support that sounds more like this: “I’m just so proud of everything you’ve achieved, and I’m so proud that you’re never fully satisfied with it, and you’re just so impressive in how you constantly and relentlessly work toward improving yourself.” That can convey a sense that I approve of you, but I recognize what your aspirations are. Right?
[What’s more], there’s no reason why it has to be the same person who plays both of those roles. I would just urge everybody: think about what you’re looking for from this one relationship and decide, are these expectations realistic in light of who I am, who my partner is, and what dynamics we have together? If so, how are we going to achieve all of these things together? Or alternatively, how can we relinquish some of these roles that we play in each other’s lives, and outsource them to, say, another member of your social network?
Khazan: That’s the idea of having a diversified social portfolio, right? Can you explain how that would work?
Finkel: There’s a cool study by Elaine Cheung at Northwestern University, where she looked at the extent to which people look to a very small number of people to help them manage their emotions versus an array of different people, to manage different sorts of emotions. So, one person for cheering up sadness, another person for celebrating happiness, and so forth.
It turns out that people who have more diversified social portfolios, that is, a larger number of people they go to for different sorts of emotions, tend to have a higher overall quality of life. This is one of the arguments in favor of thinking seriously about looking to other people to help us, or asking less of this one partner.
I think most of us will be kind of shocked by how many expectations and needs we’ve piled on top of this one relationship. I’m not saying that people need to lower their expectations, but it is probably a bad plan to throw all of these expectations on the one relationship and then try to do it on the cheap. That is, to treat time with your spouse as something you try to fit in after you’ve attended to the kids, and after you’ve just finished this one last thing for work. Real, attentive time for our spouse is something that we often don’t schedule, or we schedule insufficient time for it.
Khazan: What is climbing down from the mountain? Should we try to do that?
Finkel: There’s the recalibration strategy, which is fixing an imbalance, not by increasing the investment in the marriage, but by decreasing the amount that we’re asking or demanding of the marriage.
There’s no shame at all in thinking of ways that you can ask less. That’s not settling, and that’s not making the marriage worse. It’s saying, look, “These are things I’ve been asking of the marriage that have been a little bit disappointing to me. These are things that I’m going to be able to get from the marriage but frankly, given what I understand about my partner, myself, and the way the two of us relate, it’s just going to be a lot of work to be able to achieve those things through the marriage.”
Khazan: So what is “going all-in,” and what are the risks and rewards of that?
Finkel: The question isn’t, “Are you asking too much?” The question is, “Are you asking the appropriate amount, in light of the nature of the relationship right now?” The idea of “going all-in” is, “Hell yes. I want to ask my spouse to help make me feel loved and give me an opportunity to love somebody else and also [be] somebody who’s going to help me grow into an ideal, authentic version of myself.
And I’m going do the same for him or her. I recognize that that is a massive ask, and because I recognize that that’s a massive ask I’m going to make sure that we have sufficient time together. That when we’re together we’re paying sufficient attention to each other, that the time that we’re investing in the relationship is well-spent.”
https://getpocket.com/explore/item/we-expect-too-much-from-our-romantic-partners?utm_source=firefox-newtab-en-us
*
DO WE REALLY LIVE LONGER THAN OUR ANCESTORS?
In 1841, a baby girl was expected to live to just 42 years of age, a boy to 40. In 2016, a baby girl could expect to reach 83; a boy, 79.
The natural conclusion is that both the miracles of modern medicine and public health initiatives have helped us live longer than ever before – so much so that we may, in fact, be running out of innovations to extend life further. In September 2018, the Office for National Statistics confirmed that, in the UK at least, life expectancy has stopped increasing. Beyond the UK, these gains are slowing worldwide.
This belief that our species may have reached the peak of longevity is also reinforced by some myths about our ancestors: it’s commonly believed that ancient Greeks or Romans would have been flabbergasted to see anyone above the age of 50 or 60, for example.
Rome’s first emperor, Augustus, died at 75
In fact, while medical advancements have improved many aspects of healthcare, the assumption that human life span has increased dramatically over centuries or millennia is misleading.
Overall life expectancy, which is the statistic reflected in reports like those above, hasn’t increased so much because we’re living far longer than we used to as a species. It’s increased because more of us, as individuals, are making it that far.
“There is a basic distinction between life expectancy and life span,” says Stanford University historian Walter Scheidel, a leading scholar of ancient Roman demography. “The life span of humans – as opposed to life expectancy, which is a statistical construct – hasn’t really changed much at all, as far as I can tell.”
Life expectancy is an average. If you have two children, and one dies before their first birthday but the other lives to the age of 70, their average life expectancy is 35. Most of human history has been blighted by poor survival rates among children, and that continues in various countries today.
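To make that arithmetic concrete, here is a minimal Python sketch with invented numbers (a toy cohort, not historical data): heavy child mortality can pull life expectancy at birth down into the mid-30s even though everyone who survives childhood lives to 70.

# Toy cohort of 1,000 births with invented ages at death (not historical data)
ages_at_death = [0] * 300 + [5] * 200 + [70] * 500

# Life expectancy at birth is just the average age at death across the cohort
at_birth = sum(ages_at_death) / len(ages_at_death)

# Life expectancy conditional on surviving childhood (here, past age 10)
survivors = [age for age in ages_at_death if age > 10]
of_survivors = sum(survivors) / len(survivors)

print(at_birth)      # 36.0 -- the familiar "ancients died in their mid-30s"
print(of_survivors)  # 70.0 -- what those who reached adulthood could expect

The same toy numbers show why the two statistics diverge: the average is dragged down entirely by deaths in childhood, not by adults dying young.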
The 6th-Century ruler Empress Suiko, who was Japan’s first reigning empress in recorded history, died at 74 years of age.
This averaging-out, however, is why it’s commonly said that ancient Greeks and Romans, for example, lived to just 30 or 35. But was that really the case for people who survived the fragile period of childhood, and did it mean that a 35-year-old was truly considered ‘old’?
If one’s thirties were a decrepit old age, ancient writers and politicians don’t seem to have got the message. In the early 7th Century BC, the Greek poet Hesiod wrote that a man should marry “when you are not much less than 30, and not much more.” Meanwhile, ancient Rome’s ‘cursus honorum’ – the sequence of political offices that an ambitious young man would undertake – didn’t even allow a young man to stand for his first office, that of quaestor, until the age of 30 (under Emperor Augustus, this was later lowered to 25; Augustus himself died at 75). To be consul, you had to be 43, eight years older than the minimum age of 35 for a US president.
In the 1st Century, Pliny devoted an entire chapter of The Natural History to people who lived longest. Among them he lists the consul M Valerius Corvinos (100 years), Cicero’s wife Terentia (103), a woman named Clodia (115 – and who had 15 children along the way), and the actress Lucceia who performed on stage at 100 years old.
Then there are tombstone inscriptions and grave epigrams, such as this one for a woman who died in Alexandria in the 3rd Century BC. “She was 80 years old, but able to weave a delicate weft with the shrill shuttle,” the epigram reads admiringly.
Not, however, that aging was any easier then than it is now. “Nature has, in reality, bestowed no greater blessing on man than the shortness of life,” Pliny remarks. “The senses become dull, the limbs torpid, the sight, the hearing, the legs, the teeth, and the organs of digestion, all of them die before us…” He can think of only one person, a musician who lived to 105, who had a pleasantly healthy old age. (Pliny himself reached barely half that; he’s thought to have died from volcanic gases during the eruption of Mt Vesuvius, aged 56).
In the ancient world, at least, it seems people certainly were able to live just as long as we do today. But just how common was it?
Age of Empires
Back in 1994 a study looked at every man entered into the Oxford Classical Dictionary who lived in ancient Greece or Rome. Their ages of death were compared to men listed in the more recent Chambers Biographical Dictionary.
Of 397 ancients in total, 99 died violently by murder, suicide or in battle. Of the remaining 298, those born before 100BC lived to a median age of 72 years. Those born after 100BC lived to a median age of 66. (The authors speculate that the prevalence of dangerous lead plumbing may have led to this apparent shortening of life).
The median of those who died between 1850 and 1949? Seventy-one years old – just one year less than their pre-100 BC cohort.
Of course, there were some obvious problems with this sample. One is that it was men-only. Another is that all of the men were illustrious enough to be remembered. All we can really take away from this is that privileged, accomplished men have, on average, lived to about the same age throughout history – as long as they weren’t killed first, that is.
Still, says Scheidel, that’s not to be dismissed. “It implies there must have been non-famous people, who were much more numerous, who lived even longer,” he says.
The Roman emperor Tiberius died at the age of 77 – some accounts say by murder.
Not everyone agrees. “There was an enormous difference between the lifestyle of a poor versus an elite Roman,” says Valentina Gazzaniga, a medical historian at Rome’s La Sapienza University. “The conditions of life, access to medical therapies, even just hygiene – these were all certainly better among the elites.”
In 2016, Gazzaniga published her research on more than 2,000 ancient Roman skeletons, all working-class people who were buried in common graves. The average age of death was 30, and that wasn’t a mere statistical quirk: a high number of the skeletons were around that age. Many showed the effects of trauma from hard labor, as well as diseases we would associate with later ages, like arthritis.
Men might have borne numerous injuries from manual labour or military service. But women – who, it's worth noting, also did hard labor such as working in the fields – hardly got off easy. Throughout history, childbirth, often in poor hygienic conditions, is just one reason why women were at particular risk during their fertile years. Even pregnancy itself was a danger.
“We know, for example, that being pregnant adversely affects your immune system, because you’ve basically got another person growing inside you,” says Jane Humphries, a historian at the University of Oxford. “Then you tend to be susceptible to other diseases. So, for example, tuberculosis interacts with pregnancy in a very threatening way. And tuberculosis was a disease that had higher female than male mortality.”
Childbirth was worsened by other factors too. “Women often were fed less than men,” Gazzaniga says. That malnutrition meant that young girls often had incomplete development of the pelvic bones, which increased the risk of difficult labor.
“The life expectancy of Roman women actually increased with the decline of fertility,” Gazzaniga says. “The more fertile the population is, the lower the female life expectancy.”
Missing People
The difficulty in knowing for sure just how long our average predecessor lived, whether ancient or pre-historic, is the lack of data. When trying to determine average ages of death for ancient Romans, for example, anthropologists often rely on census returns from Roman Egypt. But because these papyri were used to collect taxes, they often under-reported men, and left out many babies and women.
Tombstone inscriptions, left behind in their thousands by the Romans, are another obvious source. But infants were rarely placed in tombs, poor people couldn’t afford them and families who died simultaneously, such as during an epidemic, also were left out.
And even if that weren’t the case, there is another problem with relying on inscriptions.
“You need to live in a world where you have a certain amount of documentation where it can even be possible to tell if someone lived to 105 or 110, and that only started quite recently,” Scheidel points out. “If someone actually lived to be 111, that person might not have known.”
The Roman empress Livia, wife of Augustus, lived until she was 86 or 87 years old.
As a result, much of what we think we know about ancient Rome’s statistical life expectancy comes from life expectancies in comparable societies. Those tell us that as many as one-third of infants died before the age of one, and half of children before age 10. After that age your chances got significantly better. If you made it to 60, you’d probably live to be 70.
Taken altogether, life span in ancient Rome probably wasn’t much different from today. It may have been slightly less “because you don’t have this invasive medicine at end of life that prolongs life a little bit, but not dramatically different,” Scheidel says.
“You can have extremely low average life expectancy, because of, say, pregnant women, and children who die, and still have people to live to 80 and 90 at the same time. They are just less numerous at the end of the day because all of this attrition kicks in.”
Of course, that attrition is not to be sniffed at. Particularly if you were an infant, a woman of childbearing years or a hard laborer, you’d be far better off choosing to live in year 2018 than 18. But that still doesn’t mean our life span is actually getting significantly longer as a species.
On the Record
The data gets better later in human history once governments begin to keep careful records of births, marriages and deaths – at first, particularly of nobles.
Those records show that child mortality remained high. But if a man got to the age of 21 and didn’t die by accident, violence or poison, he could expect to live almost as long as men today: from 1200 to 1745, 21-year-olds would reach an average age of anywhere between 62 and 70 years – except for the 14th Century, when the bubonic plague cut life expectancy to a paltry 45.
Queen Elizabeth I lived until the age of 70; life expectancy at the time could be longer for villagers than for royals.
Did having money or power help? Not always. One analysis of some 115,000 European nobles found that kings lived about six years less than lesser nobles, like knights. And by examining county parish registers, demographic historians have found that in 17th-Century England, life expectancy was longer for villagers than for nobles.
“Aristocratic families in England possessed the means to secure all manner of material benefits and personal services but expectation of life at birth among the aristocracy appears to have lagged behind that of the population as a whole until well into the eighteenth century,” one such historian writes. This was likely because royals tended to prefer to live for most of the year in cities, where they were exposed to more diseases.
But interestingly, when the revolution came in medicine and public health, it helped elites before the rest of the population. By the late 17th Century, English nobles who made it to 25 went on to live longer than their non-noble counterparts – even as they continued to live in the more risk-ridden cities.
Surely, by the soot-ridden era of Charles Dickens, life was unhealthy and short for nearly everyone? Still no. As researchers Judith Rowbotham, now at the University of Plymouth, and Paul Clayton, of Oxford Brookes University, write, “once the dangerous childhood years were passed… life expectancy in the mid-Victorian period was not markedly different from what it is today.” A five-year-old girl would live to 73; a boy, to 75.
Not only are these numbers comparable to our own, they may be even better. Members of today’s working class (a more accurate comparison) live to around 72 years for men and 76 years for women.
Britain’s Queen Victoria died in 1901 at the age of 81. During her reign, a girl could expect to live to about 73 years of age, a boy to 75.
“This relative lack of progress is striking, especially given the many environmental disadvantages during the mid-Victorian era and the state of medical care in an age when modern drugs, screening systems and surgical techniques were self-evidently unavailable,” Rowbotham and Clayton write.
They argue that if we think we’re living longer than ever today, this is because our records go back to around 1900 – which they call a “misleading baseline,” as it was at a time when nutrition had decreased and when many men started to smoke.
Pre-Historic People
What about if we look in the other direction in time – before any records at all were kept?
Although it is obviously difficult to collect this kind of data, anthropologists have tried to compensate by looking at today's hunter-gatherer groups, such as the Ache of Paraguay and Hadza of Tanzania. They found that while the probability of a newborn’s survival to age 15 ranged between 55 percent for a Hadza boy up to 71 percent for an Ache boy, once someone survived to that point, they could expect to live until they were between 51 and 58 years old. Data from modern-day foragers, who have no access to medicine or modern food, write Michael Gurven and Cristina Gomes, show that “while at birth mean life expectancies range from 30 to 37 years of life, women who survive to age 45 can expect to live an additional 20 to 22 years” – in other words, to between 65 and 67 years old.
The Roman empress Domitia died in 130 at the age of 77.
Archaeologists Christine Cave and Marc Oxenham of Australian National University have recently found the same. Looking at dental wear on the skeletons of Anglo-Saxons buried about 1,500 years ago, they found that of 174 skeletons, the majority belonged to people who were under 65 – but there also were 16 people who died between 65 and 74 years old and nine who reached at least 75 years of age.
Our maximum lifespan may not have changed much, if at all. But that’s not to delegitimize the extraordinary advances of the last few decades which have helped so many more people reach that maximum lifespan, and live healthier lives overall.
Perhaps that’s why, when asked what past era, if any, she’d prefer to live in, Oxford’s Humphries doesn’t hesitate.
“Definitely today,” she says. “I think women’s lives in the past were pretty nasty and brutish – if not so short.”
https://getpocket.com/explore/item/do-we-really-live-longer-than-our-ancestors?utm_source=firefox-newtab-en-us
*
BRITAIN’S TASTE FOR TEA MIGHT HAVE BEEN A LIFE-SAVER
Tea quickly became one of the British Empire's most prized resources in the 18th Century. But it may have also had an unintended effect on the British population – reducing mortality rates.
Tea has been many things in its time – a global commodity, a comforting beverage, and even, in the eyes of some Bostonians 250 years ago this week, a symbol of oppressive politics. But one role you might not have attributed to tea is that of a life-saving health intervention.
In a recent paper in The Review of Economics and Statistics, economist Francisca Antman of the University of Colorado, Boulder, makes a convincing case that the explosion of tea as an everyman's drink in late 1700s England saved many lives. This would not have been because of any antioxidants or other substances inherent to the lauded leaf.
Instead, the simple practice of boiling water for tea, in an era before people understood that illness could be caused by water-borne pathogens, may have been enough to keep many from an early grave.
English demographics from this era have long contained a puzzle for historians. Between 1761 and 1834, the annual death rate declined substantially, from 28 to 25 per 1,000 people. But at the same time, wages do not seem to have risen much and standards of living arguably did not increase. In fact, with the rise of the industrial revolution, more and more people were crowding into towns whose sanitation left much to be desired. "I would say it's not a settled debate," says Antman.
The idea that tea might be the missing link here, thanks to the need to boil water for a proper brew, had been floated by historians in the past. Boiling water kills bacteria that cause diarrheal diseases like dysentery, which was often called "flux" or "bloody flux" in death records.
"With people coming into cities to work, you would expect, given the level of sanitation they have, that the big killer is water," says Antman. But it remained a somewhat fuzzy idea, interesting in theory but difficult to prove.
Antman developed a way to test it, using detailed geographical information about more than 400 parishes across England.
There is a simple assumption at the heart of her study: more water sources in an area likely means cleaner water. If one source was contaminated, the inhabitants of a parish could go to another. What's more, if people were closer to the sources of rivers – something Antman infers from parishes' elevation – that water was likely safer than in parishes further downstream.
By assigning parishes an inferred level of water quality, Antman could see whether areas with worse water quality saw a bigger decline in mortality than those with good water.
In terms of testing this hypothesis, the key date is 1785, the moment when tea suddenly became affordable for the vast majority of Britons. There were many things already to recommend tea as a drink of the masses: you could make a satisfying brew with just a small amount of leaves, the leaves could be reused for multiple pots, and tea was potentially cheaper than beer, which was rendered expensive both by the complex process required to make it and by a tax on malt.
But when 1784's Tea and Windows Act went into effect, the tax on tea went from 119% to just 12.5% and tea consumption exploded. By the end of the 18th Century, even very poor peasants were having tea twice a day, tea historian Alan Macfarlane writes.
To see if this change correlated with decreased mortality, Antman compared death rates before and after this watershed moment. For this she drew on the remarkable work of demographers E A Wrigley and R S Schofield, who in the mid-20th Century collected parish records from all over England spanning 1541 to 1871, including deaths.
As expected, Antman found that death rates declined in both parishes with good water and those with bad – but there was a significant difference in the size of the decline. Parishes with bad water saw death rates drop 18% more than those with good water.
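The comparison described here has the shape of a difference-in-differences estimate. Below is a minimal sketch of that logic in Python, using invented death rates (nothing in it comes from Antman's data): the before-after change in bad-water parishes is measured against the same change in good-water parishes, so the nationwide decline is netted out.

# Invented death rates per 1,000 per year (illustrative only, not Antman's data)
death_rates = {
    ("bad water",  "before 1785"): 30.0,
    ("bad water",  "after 1785"):  24.0,
    ("good water", "before 1785"): 26.0,
    ("good water", "after 1785"):  23.0,
}

# Change over time within each group of parishes
change_bad  = death_rates[("bad water", "after 1785")]  - death_rates[("bad water", "before 1785")]
change_good = death_rates[("good water", "after 1785")] - death_rates[("good water", "before 1785")]

# Difference-in-differences: the extra decline in bad-water parishes, which the
# design attributes (under its assumptions) to newly boiling water for tea
extra_decline = change_bad - change_good
print(extra_decline)  # -3.0: bad-water parishes fell by 3 more deaths per 1,000

The design choice matters: subtracting the good-water trend controls for whatever was lowering mortality everywhere, isolating the part of the decline specific to parishes where boiled water would help most.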
What's more, she looked to see whether deaths in London from waterborne diseases like "bloody flux", and deaths from airborne pathogens like tuberculosis, or "consumption", were linked to levels of tea imports. Indeed, flux deaths declined when tea imports went up, while TB deaths remained about the same.
She also checked to see whether deaths in children – not known, in this age or any other, for being major consumers of tea – changed in London with tea imports, and found that there did not seem to be a decline in deaths of those ages two to five.
Interestingly, there was a slight decline in infant deaths, perhaps reflecting the fact that if tea-drinking parents had less diarrheal disease, their very young children might have been protected a bit as well – though Antman points out there is no way to know for sure.
For Antman, who primarily works on issues related to developing nations, this natural experiment in England all those years ago reflects a fundamental truth: sometimes people's existing behaviors can make more of a difference to their health than an explicit intervention might.
Building more privies, developing better plumbing and sewage systems, and teaching people to keep drinking water and wastewater scrupulously separate all might have extended people's lives, had such interventions been widely understood and available.
But with relatively little change to their habits, merely an increase in a behavior they already enjoyed, people seem to have protected themselves. All part of the pleasure of a simple cup of tea.
*
WHY AREN’T WE LIVING LONGER?
For the best part of two centuries people's life expectancy has been improving at a pretty rapid and consistent rate.
In the 1840s people did not live much past 40 on average. But then improvements in nutrition, hygiene, housing and sanitation during the Victorian period meant that by the early 1900s life expectancy was approaching 60.
As the 20th Century progressed, with the exception of the war years, further gains were made with the introduction of universal health care and childhood immunizations.
From the 1970s onwards, medical advances, particularly in the care of stroke and heart attack patients, brought big strides that continue to be made.
So much so that by the start of the 21st Century, life expectancy at birth had reached 80 for women and 75 for men.
And so it continued, with an extra year of life being added every four years or so.
But then it suddenly stopped — or rather rapidly slowed. The turning point was 2011.
Initially many experts wondered if it might be a blip. Certainly 2015 was an exceptional year when the number of deaths spiked — the winter was particularly bad, and this was linked to the strain of flu circulating.
But it is now clear there is more to it than just a short aberration.
The latest figures released by the Office for National Statistics for 2016-2018 — these things are measured on a rolling three-year basis — are the first for a few years to exclude that bad winter.
And while there has been a little improvement, it is still way down on what has been seen previously.
On current trends, it will take more than 12 years for people living in the UK to gain an extra year of life.
So what are the causes?
One suggestion put forward is that after so many years of gains, humans are just reaching the upper limits of their lifespan.
The oldest living person for whom official records exist was French woman Jeanne Calment, who was 122 when she died — but that was more than 20 years ago.
Research published in the journal Nature has claimed the limit, bar those extreme few like Ms Calment, is around 115.
But there are plenty who dispute this. In fact a US geneticist, David Sinclair, has written a book called Lifespan which argues that by boosting genes associated with longevity, people may be able to live much longer.
Whatever the truth, there is plenty of evidence to suggest that the UK's population should not have reached its limit on lifespan.
Japan, for example, which already has longer life expectancy, has seen bigger increases in recent years than the UK.
In fact, out of the wealthier nations the ONS looked at, there was only one country which had a significantly worse record than the UK — the US — although plenty have seen improvements slow down.
Complex range of factors
ONS aging expert Edward Morgan said there was likely to be a "complex" range of factors behind the trend, and he would like more work done to investigate it.
Public Health England has already done some. Its report, published last year, puts forward a number of factors.
One possible explanation is that there has not been a big medical or health game-changer in the past couple of decades.
As people stop dying from one thing, another disease takes its place.
With greater numbers surviving heart attacks and strokes and cancer, the death rate from dementia has started to rise.
And with the medical community struggling to find ways to slow the disease, never mind cure it, life expectancy has been curbed.
The PHE report also looked at the impact of austerity — something former World Health Organization adviser Prof Michael Marmot has already suggested is playing a role.
The evidence shows that poorer people have seen the biggest decline in improvements — and the fact they would be more affected by a squeeze on care, health and welfare spending "could indicate" government spending has played a role, PHE said.
But the report was far from being conclusive.
What is sure, however, is that the longer this trend goes on, the more pressing it will become to find an answer.
https://www.bbc.com/news/health-49844804
*
WHAT FERAL CHILDREN CAN TEACH US ABOUT AI
Found in the hilly woods of Haute-Languedoc, he must have first seemed a strange kind of animal: naked, afraid, often hunched on all fours, foraging in the undergrowth. But this was no mere animal. Victor, as he would come to be known, was a scientific marvel: a feral child, perhaps 12 years of age, completely untouched by civilization or society.
Accounts vary, but we know that eventually Victor was whisked away to a French hospital, where news of his discovery spread fast. By the winter of 1799, the story of the “Savage of Aveyron” had made its way to Paris, where it electrified the city’s learned community. On the cusp of a new century, France was in the midst of a nervy transition, and not only because of the rising tyranny of the Bonapartes. The previous few decades had seen the rational inquiries of philosophers like Jean-Jacques Rousseau and the Baron de Montesquieu shake the religious foundations of the nation.
It was a time of vigorous debate about which powers, exactly, nature imparted to the human subject. Was there some biological inevitability to the development of our elevated consciousness? Or did our societies convey to us a greater capacity to reason than nature alone could provide?
Victor, a vanishingly rare example of a human mind developed without language or society, could seemingly answer many such questions. So it was only natural that his arrival in Paris, in the summer of 1800, was greeted with great excitement.
“The most brilliant but unreasonable expectations were formed by the people of Paris respecting the Savage of Aveyron, before he arrived,” wrote Jean Marc Gaspard Itard, the man eventually made responsible for his rehabilitation. “Many curious people anticipated great pleasure in beholding what would be his astonishment at the sight of all the fine things in the capital.”
“Instead of this, what did they see?” he continued. “A disgusting, slovenly boy … biting and scratching those who contradicted him, expressing no kind of affection for those who attended upon him; and, in short, indifferent to everybody, and paying no regard to anything.”
Faced with the reality of an abandoned, developmentally delayed child, many of the great minds of Paris quickly turned on him. Some called him an imposter; others, a congenital “idiot” — a defective mind or missing link, perhaps, to some lesser race of human. His critics herded to an ever-harsher position of biological essentialism — a conservative reaction to Enlightenment ideas about the exceptionality of our minds that countered that our capacities were determined by natural inequalities alone.
Unlike these antagonists, Itard never doubted that the boy was still capable of deep interior thought — he witnessed his “contemplative ecstasy” on occasion. But he soon realized that without the power of speech, such contemplation would remain forever locked in Victor’s mind, far from the view of his harshest critics. Nor could Victor, without the subtleties of speech at his disposal, acquire the more abstract wants that defined civilized man: the appreciation of beautiful music, fine art or the loving company of others.
Itard spent years tutoring Victor in the hope that he might gain the power of language. But he never succeeded in his quest. He denied Victor food, water and affection, hoping he would use words to express his desires — but despite no physical defect, it seemed he could not master the sounds necessary to produce language. “It appears that speech is a kind of music to which certain ears, although well organized in other respects, may be insensible,” Itard recorded.
Despite Itard’s failure to rehabilitate Victor, his effort, viewable only through the coke-bottle glass of 18th-century science, continues to haunt our debates about the role of language in enabling the higher cognition we call consciousness. Victor is one of a tiny sample of cases where we can glimpse the nature of human experience without language, and he has long been seen as a possible key to understanding the role it plays in the operation of our minds.
Today, this field, for most of its history a largely academic one, has taken on an urgent importance. Much like Itard, we stand at the precipice of an exciting new age where the foundational understandings of our own natures and our cosmos are being rocked by new technologies and discoveries, confronting something that threatens to upend what little agreement we have about the exceptionality of the human mind. Only this time, it’s not a mind without language, but the opposite: language without a mind.
There are consequences to a life without language. Lending credence to arguments that speech plays some constructive role in our consciousness, it would seem that its absence permanently impacts children’s cognitive abilities and perhaps even their capacity to conceive of and understand the world.
*
In 1970, Los Angeles County child welfare authorities discovered Genie, a 13-year-old girl who had been kept in near-total isolation from the age of 20 months. Like Victor, Genie knew virtually no language and, despite years of rehabilitation, could never develop a capacity for grammatical language.
But in their study of the girl, researchers discovered something else unusual about her cognition. Genie could not understand spatial prepositions — she did not know the difference, for example, between a cup being behind or in front of a bowl, despite familiarity with both objects and their proper names.
A 2017 meta-analysis found the same cognitive issue could be observed in other individuals who lacked grammatical language, like patients with agrammatic aphasia and deaf children raised with “kitchensign,” improvised sign language that lacks a formal grammar. From this, the researchers concluded that language must play a foundational role in a key function of the human mind: “mental synthesis,” the creation and adaptation of mental pictures from words alone.
In many ways, mental synthesis is the core operation of human consciousness. It is essential to our development and adaptation of tools, our predictive and reasoning abilities, and our communication through language. According to some philosophers, it may even be essential to our conception of self — the observing “I” of self-awareness.
If we do accept that language alone might be capable of prompting the emergence of real consciousness, we should prepare for a major shakeup of our current moral universe. As the philosopher David Chalmers put it in a 2022 presentation, “If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them.”
Back when the majority of philosophers believed the diversity of human languages was a curse inflicted by God, much energy was exerted on the question of what language the biblical Adam spoke. The idea of an “Adamic language,” one that captured the true essence of things as they are and allowed for no misunderstanding or misinterpretation, became a kind of meme among philosophers of language, even after Friedrich Nietzsche declared the death of God.
To some of these thinkers, inspired by biblical tales, language actually represented a kind of cognitive impairment — a limitation imposed by our fall from grace, a reflection of our God-given mortality. In the past, when we imagined a superintelligent AI, we tended to think of one impaired by the same fall — smarter than us, surely, but still personal, individual, human-ish. But many of those building the next generation of AI have long abandoned this idea for their own Edenic quest. As the essayist Emily Gorcenski recently wrote, “We’re no longer talking about [creating] just life. We’re talking about making artificial gods.”
While they may never fulfill the apocalyptic nightmare of AI critics, LLMs may well someday offer our first experience of a kind of superintelligence — or at least, with their unfathomable memories and infinite lifespans, a very different kind of intelligence that can rival our own mental powers. For that, the linguist Gašper Beguš said, “We have zero precedent.”
If LLMs are able to transcend human languages, we might expect what follows to be a very lonely experience indeed. At the end of “Her,” the film’s two human characters, abandoned by their superhuman AI companions, commiserate together on a rooftop. Looking over the skyline in silence, they are, ironically, lost for words — feral animals lost in the woods, foraging for meaning in a world slipping dispassionately beyond them.
https://www.noemamag.com/feral-intelligence/
*
WHAT HAPPENED TO ALEXANDER KERENSKY?
After the October Revolution of 1917, former Prime Minister Kerensky wandered around Russia, hiding from the Bolshevik authorities. He secretly lived in both Petrograd (Saint Petersburg) and Moscow for a while, hoping for a new turn of the political wheel. Then, in June 1918, Kerensky left Russia for Western Europe, using fake Serbian documents. He settled in France and lived there until the Nazi invasion in 1940. After that, Kerensky mostly lived in the US until his death in 1970, at the age of 89.
Kerensky tried to visit the Soviet Union in 1968, but that didn’t happen, due to the Prague Spring and the Soviet invasion. The Soviet authorities were actually pretty shocked to learn that Kerensky was still alive in 1968. In the US, Kerensky worked at Stanford University as an expert and professor. In France, he edited a Russian political newspaper. He published a number of books and articles.
Kerensky in 1969. At least he outlived Lenin and Stalin.
*
RUSSIAN DREAMS OF AN AMERICAN COLONY
Fort Ross on California's rocky coast contained an oasis of Russian refinement.
North of San Francisco, I am traveling along the isolated Sonoma coast from Bodega Bay to a place the Indians for thousands of years called Metini. Towering stands of redwoods rise up from the insteps of switchbacks on Highway One. The trees go against the grain of steep brown hills, pine-topped ridges, and rugged seaside cliffs. Cows can be heard lowing in the fog-shrouded meadows, while a raucous crowd of barking sea lions cavorts among the boulders cropping out of the foamy surf. A hummingbird halts its flight and holds on to a mid-air perch, while a single-engine plane drones its way up the coast. Delicate ice plants carpet rocky terraces, their yellow and purple blossoms promising more scent than they deliver, while stalky fronds wave menacingly from the hillsides and rills. It’s all, to quote Jack Kerouac, “just too crazy.”
What also piques the curiosity is a fenced-in quadrangle, three hundred feet to a side, set several hundred yards back from the seaside cliffs. Two blockhouses occupy opposing corners and in another is a chapel sporting a dome of sorts. Numerous other wood buildings stand in the compound, which is called Fort Ross. It was built in 1812 by a couple dozen Russians and about eighty Aleuts for the purpose of supplying wheat and furs for the Russian-American Company’s colony in Sitka, Alaska. Only one of the buildings was here at the time of the Russian settlement—the recently restored Rotchev House, once home to Alexander Rotchev, the last of the managers sent by the Russian-American Company.
The entrance to what is now called Fort Ross State Historic Park lies away from the endless expanse of ocean, and you approach Rotchev House on foot by descending through groves of gigantic eucalyptus trees. The Rotchev family lived here from 1838 to 1841. The single-floor dwelling comprises seven rooms and is equipped with a trap door leading to a garret beneath a dramatically hipped roof. The home is sparsely furnished with sturdy, elegant pieces in the Biedermeier style, which was fashionable among Russian aristocrats, especially in the Siberian dwellings of exiled Decembrists. The structure demonstrates several notable Russian building techniques, including the half-dovetail notching of the redwood logs that were used in all the buildings at Fort Ross.
The gale-force winds that can hit in winter here just bounce off such solid construction. Additionally, the house is literally a window onto an ingenious Russian method of ventilation: A single windowpane, known as a fortushka, was furnished with hinges and a latch and could swing open by itself, serving as a kind of window within a window.
Inside, one sees further evidence of an unexpectedly civilized life here in this outpost in Alta California, a vast region that was officially claimed by the Spanish but difficult for them to control. In the parlor, the Russians’ refined sensibilities are on full view, with a gleaming samovar and a delicate pianoforte. Rotchev and his wife were book lovers. Both were multilingual; he was also a poet, and she would later translate a children’s book. Researchers are still trying to recreate the selection of books on the shelves, but it is a near certainty that it was once the finest collection in Alta California. One French visitor to the house in Rotchev’s day remarked on the house’s “choice library” and scores by Mozart. He was equally impressed by Rotchev’s wife, Elena, who spoke lively French.
What we know today of Fort Ross in the early and mid nineteenth century comes from research. Archaeologists have examined an Orthodox cemetery on a promontory across a ravine from the fort. The ornamental beads that turned up attested to a diversity of residents and a thriving family life at the fort. The material record also includes crosses, shards of pipes, ceramic cups from China and England, and musket balls, as well as tools and, in the fort’s boatyard, evidence of work sheds and banyas, or bathhouses.
John Sutter, founder of New Helvetia and owner of the mill where gold was discovered in 1848, purchased the fort and all its assets in 1841, and the Russians took their personal belongings with them when they decamped. The picture we have today, then, of this Russian outpost—to say nothing yet of the Creole and Kashaya Indian families living in and around the fort—has come into focus only after decades of work to bring the scattered puzzle pieces back together.
The remote village has proven to have had global connections. Researchers have discovered links between the decorative arts in evidence in Rotchev House and the work of craftsmen as far away as Istanbul. Ceramic pipes that were popular among the Russian aristocracy were created in that era in the Turkish capital. The rugs that the Rotchevs were likely to have had in their home, and reproductions of which lie now in the study and the parlor, would have come from Baluchistan, in present-day Pakistan and Afghanistan. Ancient trade routes and the diplomatic necessities that come with empire-building allowed for such exchange and added yet another cosmopolitan dimension to life at the fort.
Fur trade with China lent a further international aspect to the activities of the Russian-American Company. In the Mongolian town of Kyakhta, Russian traders sold sea otter skins as luxury items. A single skin could fetch the equivalent of a hundred dollars, which, by way of comparison, was what a Pennsylvania farmer might hope to earn in a year in the 1790s. Sea otters were overharvested by 1825, however, which was among the reasons the Russians pulled out of California.
Alexander Rotchev himself was a cosmopolitan. He had been a dashing figure in the literary circles of Moscow, where he worked as a journalist and translator. He met and fell in love with the highly cultivated Elena Gagarina, and, against her family’s wishes, they wed. Rotchev was considered by Elena’s family to be beneath her social standing. Once the couple eloped, Elena was disinherited. The newlyweds moved to St. Petersburg, where Rotchev supported Elena and their new son while working for the Imperial St. Petersburg Theaters. He also translated Molière, Schiller, and Shakespeare for the Russian stage.
Rotchev needed greater income to support his wife and growing family. The Russian-American Company, headquartered in St. Petersburg, opened its doors to him and he strode in. After a year of working at the company’s offices in St. Petersburg, Rotchev was appointed commissioner-at-large and traveled along the Pacific coast of North America as well as to India and China.
Following the explorations of the North Pacific by Vitus Bering, at the encouragement of Peter the Great, the Russian-American Company was founded in 1799 to supply skins for the fur trade. Their primary goal was to increase commerce, but the scientific mission of advancing knowledge was inseparable from early Russian activity in the North Pacific. The company’s main base in Rotchev’s day was in Sitka, Alaska. Owned in large part by the aristocracy, the company had ties to the tsar’s family, and the tsar himself held stock in the company.
While the Russians at Fort Ross were engaged in toolmaking and shipbuilding, as well as defense of the compound, the People from the Top of the Land, the Kashaya, more than lent a hand to the agricultural work there. As the Aleuts were in Alaska, the Kashaya were coerced and cajoled to do the Russians’ bidding. One governor of Russian-America, however, Baron Ferdinand Petrovich von Wrangell, saw the value of treating the native people fairly and judging them on their own terms. The Kashaya benefited from the Russians’ relatively progressive colonial attitudes and were relieved that the “Undersea People” didn’t force their Orthodox religious views on them. They were also grateful for the Russians’ muskets and cannons, which protected them from the Spanish and Mexicans and other, more aggressive, native peoples.
The ethnographic accounts of Georg von Langsdorff constitute a major resource for understanding the native coastal peoples. As a translator for the Russian-American Company who sailed to San Francisco in 1806 to explore trade possibilities with the Spanish commandant, he provided the earliest portraits of early native life. Ilya Voznesensky later recorded the flora and fauna of coastal Alta California for the Imperial Academy of Sciences. This burst of scientific and cultural inquiry was sparked originally by Peter the Great in the early eighteenth century and continued by Catherine the Great, in her effort to bring “to perfection” knowledge of the North Pacific.
Why the Russians ever left such a splendid ecosystem along California’s coast can be explained to a large extent by why they came in the first place. In coming to Alaska, the Russians abandoned many comforts and seeming necessities, but one they couldn’t break with definitively was bread. So, when exploratory voyages happened upon a pair of coves well to the north of Bodega Bay—a major port north of San Francisco—company officials envisioned a secondary settlement, one that could supply furs to the company but also grow wheat and provide flour for Sitka and other Russian settlements in Alaska.
California’s first windmill was thus constructed at what became Fort Ross, and several ranches were established well away from the protective shadow of the fort, including at least one along the meandering Russian River, which lazily enters the Pacific near today’s picturesque and tiny community of Jenner (“pop. 107,” according to a signpost). “It was an error in judgment,” says Susanna Barlow with the Fort Ross Interpretive Association. “The marine climate wasn’t good . . . for growing a lot of wheat.”
By the time Alexander Rotchev arrived in 1838, the sea otter population had long been seriously diminished, in spite of the company’s moratorium on hunting any sea mammals at all. Additionally, the Kashaya people were more accustomed to a seasonal form of food-gathering, planting, and cultivation that was at odds with the industrial-strength harvest required by the Russians. And while Fort Ross itself was impregnable (it had more than forty cannons aimed at any approach from the sea, and an inland attack in such remote and unforgiving terrain was unthinkable), any attempt by the Russians to colonize inland would have been resisted by the Spanish and Mexicans.
Alta California was administered at the time of Rotchev’s arrival by Mexico. Having won independence from Spain, the Mexican government sought diplomatic recognition and made it a condition for permitting the Russians to stay. The tsar declined and ordered the company to depart from California.
Rotchev looked upon the Russian settlement in California very fondly and was opposed to the withdrawal, but he did his duty and continued to try to find a buyer for Fort Ross, first among the French, then among the American settlers. Enter John Sutter, who, it is no exaggeration to say, bought the place lock, stock, and barrel in 1841.
Russian contributions to California history are few but significant. In addition to building and using the first windmills, they built the first ships and manufactured tools and equipment for settlers. They also planted orchards from saplings they had brought with them from Russia, possibly even introducing a new apple to North America—although it is in doubt which one. You can walk through the orchards just to the north of Fort Ross and stand next to trees planted by the Russian settlers. Generations of American ranchers have cared for the few remnants of Russian civilization left behind at Fort Ross.
Those groves of eucalyptus trees near the fort, though, were not planted by the Russians. They came later, the result of another wave of California dreamers who thought the species of eucalyptus they planted would provide excellent lumber. Just like the Russian dream of cultivating great quantities of wheat along the northern coast, the decision to plant those eucalyptus trees proceeded from an error in judgment—a rather poetic error, but an error nonetheless.
On leaving the fort, I walk along an old battered former section of Coast Highway One that used to cut straight across Fort Ross’s front yard in Jack Kerouac’s day. Its yellow centerline is still faintly visible. Standing there I revel in the beatific experience of just being here, on the road, in such an exquisite place that works on the imagination and expands one’s sense of the possible.
https://www.neh.gov/humanities/2012/marchapril/feature/russian-dreams-american-colony
*
“NOW KILL ALL THE BOYS” — WHAT TO DO ABOUT A CRUEL, ARCHAIC DEITY?
1 The Lord said to Moses, 2 “Take vengeance on the Midianites for the Israelites. After that, you will be gathered to your people.”
3 So Moses said to the people, “Arm some of your men to go to war against the Midianites so that they may carry out the Lord’s vengeance on them. 4 Send into battle a thousand men from each of the tribes of Israel.” 5 So twelve thousand men armed for battle, a thousand from each tribe, were supplied from the clans of Israel. 6 Moses sent them into battle, a thousand from each tribe, along with Phinehas son of Eleazar, the priest, who took with him articles from the sanctuary and the trumpets for signaling.
7 They fought against Midian, as the Lord commanded Moses, and killed every man. 8 Among their victims were Evi, Rekem, Zur, Hur and Reba—the five kings of Midian. They also killed Balaam son of Beor with the sword. 9 The Israelites captured the Midianite women and children and took all the Midianite herds, flocks and goods as plunder. 10 They burned all the towns where the Midianites had settled, as well as all their camps. 11 They took all the plunder and spoils, including the people and animals, 12 and brought the captives, spoils and plunder to Moses and Eleazar the priest and the Israelite assembly at their camp on the plains of Moab, by the Jordan across from Jericho.
13 Moses, Eleazar the priest and all the leaders of the community went to meet them outside the camp. 14 Moses was angry with the officers of the army—the commanders of thousands and commanders of hundreds—who returned from the battle.
15 “Have you allowed all the women to live?” he asked them. 16 “They were the ones who followed Balaam’s advice and enticed the Israelites to be unfaithful to the Lord in the Peor incident, so that a plague struck the Lord’s people. 17 Now kill all the boys. And kill every woman who has slept with a man, 18 but save for yourselves every girl who has never slept with a man.” (NUMBERS 31:1-18, New International Version)
Oriana:
The most common solution is not to read passages like this in public, pretending they are not there. But isn’t it high time that we toss the vengeful old deity on the rubbish heap of Ancient Near Eastern culture, no longer relevant to us, and in fact harmful to us? If humanity has to invent gods, should they not be more lovable and inspiring? Such deities were not possible in the remote past — Yahweh dates back to the Bronze Age — when cruelty was so common that people didn’t see anything wrong with it; thus the cruelty of the deities was only natural.
I am reminded here of Sapolsky’s tale of the death of the bully baboons. In baboon troops, alpha males bully everyone else, causing lots of stress. In the troop that Sapolsky was studying, the alpha males happened to eat TB-infected meat (not allowing anyone else to eat it — that was their own special treat). As a consequence, all the bullies died.
Sapolsky soon noticed improved health among the surviving baboons (females and juveniles), due mainly to lower stress, as reflected in lower cortisol levels and markers such as lower blood pressure. The group developed a supportive, non-coercive culture. A stray alpha male from another group joined, but he adapted to the new social norms: we don’t attack females, we don’t beat up the young, we share food.
If baboons could evolve a positive, affectionate culture, perhaps there is hope for humans. One step would be to stop making excuses for the barbarous tales contained in the bible, and make explicit the ethics of cooperation and compassion rather than revenge.
*
WHO WROTE THE BIBLE?
As sure as chickens come from eggs, books have authors. Knowing the author’s identity gives a book authority; that’s how we know it’s authentic. No wonder that so many people have asked the question in this book’s title. The traditional answer – it was God, obviously – may be theologically satisfying but doesn’t get you very far.
Most of the Bible’s books were long linked by tradition to specific, big-name authors: Moses, David, Solomon, Paul. For centuries, scholars have been dismantling those attributions, often shredding biblical books into ribbons to tease out their different authors in heroic feats of textual analysis which it is quite impossible to prove either right or wrong. William Schniedewind’s book approaches the problem in a different way.
His scope is exclusively the Hebrew Bible, the ‘Old Testament’. There are also questions about the authorship of the New Testament, but that was written in Greek and Schniedewind sees ‘authorship’, in the modern sense, as a Greek idea that was a latecomer to Jewish culture. Almost none of the books of the Hebrew Bible claim to have an author, simply because that’s not how books were written in ancient Hebrew. They were the product of scribal communities, not individuals.
That is the book’s core idea, and while he shades and nuances it very expertly, the reader will have grasped the key point within the first five pages. It is not wholly original: the only wholly original ideas in biblical studies are mad. But it does allow Schniedewind to approach an old problem from an unusual perspective and, with careful analysis, to trace a non-traditional history of ancient Hebrew writing.
In fact, the question of who wrote the Bible is on the back-burner for much of the book. Schniedewind’s opening question concerns who did the work of writing at all in ancient Judah and Israel. He has no time for scribal ‘schools’, or any other formal institutions. ‘Scribe’ was not a job for which you trained; scribing was a set of skills you learned by apprenticeship when pursuing some other career. The bulk of the book uses inscriptions and other fragmentary, archaeological traces of Hebrew writing to reconstruct who these scribal communities were and what they did.
We have such traces of Hebrew writing going back to the 11th century BC and beyond – but only traces. Until the later eighth century BC, Schniedewind argues, writing was very unusual in the Hebraic world, mostly used by kings and their armies, who kept records, burnished royal narratives and maintained lists of soldiers and tributaries.
The great religious figures of the age – Samuel, Elijah, Isaiah – did not use writing. Nor, at least initially, did the disciples and apprentices who transmitted, interpreted and continued their teaching.
The decisive change, Schniedewind argues, came with the rise of the Assyrian empire and its conquest of the northern kingdom of Israel around 720 BC.
Assyria’s bureaucratic literary culture was felt beyond its borders, but the critical impact, in this telling, was the flood of refugees from the northern kingdom fleeing south to Jerusalem. Those refugees included Israel’s literate caste, now thrown into destitution: Schniedewind includes a compelling account of an inscription made by laborers digging a tunnel in seventh-century Jerusalem, executed in a polished northern script. Much like Huguenot refugees in 17th-century Europe, or Jewish refugees in 1930s America, this intellectually transformative wave of immigrants fueled an unprecedented boom in Hebrew literary culture.
This was the period when writing properly spilled out beyond the palace walls, and in which, as Schniedewind suggests, a wider set of scribal cultures emerged which were open to women as well as to men. Much of the ancient Hebrew literature we have dates back to the long seventh century, Jerusalem’s cultural golden age between the Assyrian and Babylonian invasions.
And then, when the Kingdom of Judah was conquered by Babylon in 587 BC, it all fell apart. The shattering of Hebrew literary culture is demonstrated by the appearance of something hitherto unprecedented: individual authors, torn from their communities and forced to speak for themselves, notably the prophets Jeremiah and, especially, Ezekiel, whose book really does appear to have been written by him. But even this is not as individual as it might seem. Schniedewind argues that we should classify both of those gentlemen not primarily as prophets but as priests: and it was priestly communities who assembled and codified the Hebrew Bible over the following centuries, even as the Hebrew language fell out of everyday use.
So who wrote the Hebrew Bible? Communities of seventh-century scribes and fifth-century priests. But perhaps that is not the right question. These ‘authors’ were receiving, shaping and editing oral traditions and fragments of text reaching back much further. As a historian of writing, Schniedewind is not really interested in who originally composed the accounts we have. Most likely that question is unanswerable, but, given this book’s alluring title, it feels like a bait-and-switch. It is a little like promising to reveal the author of a famous anonymous book, and instead telling us, with a flourish, about its publisher.
Still, if we want to understand some of the Bible’s many strangenesses, this approach is very fruitful. I am particularly taken by Schniedewind’s view that the scribes had an ‘anthological impulse’: their duty was to preserve the many-faceted traditions they had received.
The Four Evangelists in the Book of Kells
When the New Testament was canonized, centuries later, the early Christians were selective, not daring to include any texts of whose authority they could not be sure. By contrast, these Hebrew scribes, in a world where writing was so much rarer, were expansive, not daring to exclude any texts or traditions that might include elements that God had once entrusted to his people. To read these familiar, yet deeply alien ancient texts with that more generous, even naïve impulse in mind may be to move one step closer to the world that created them.
https://www.historytoday.com/archive/review/who-really-wrote-bible-william-m-schniedewind-review
*
WHEN YOU DRINK COFFEE MATTERS FOR ITS LONGEVITY BENEFIT
Drinking coffee only in the morning may help people live longer compared to drinking the beverage throughout the day, a new study suggests.
Researchers from Tulane University analyzed dietary and health data on more than 40,000 U.S. adults in the National Health and Nutrition Examination Survey, collected between 1999 and 2018. The team identified two coffee-drinking patterns: morning-only and all-day drinkers.
The early-in-the-day drinkers, those who drank their coffee between 4:00 a.m. and noon, had a 16% lower chance of dying from any cause compared to those who didn't drink coffee, according to results published Tuesday in the European Heart Journal.
Morning coffee drinkers also had a 31% lower risk of dying from heart disease compared to non-coffee drinkers, the study found.
No matter how many cups of coffee the morning drinkers had, or whether they preferred decaffeinated coffee, the risk of death was still lower, according to the study.
However, those who kept drinking coffee into the afternoon and evening did not show a lower risk of death.
"This study is unique in that it looked at coffee-drinking patterns throughout the day instead of focusing on [the] amount of coffee that is consumed," said Dr. Jennifer Miao, a board-certified cardiologist at Yale New Haven Health and a fellow in the ABC News Medical Unit.
To explain their findings, the researchers suggested morning coffee may better align with the body's natural sleep and wake cycles. It may also reduce inflammation, which tends to be higher in the morning, and, in turn, lower heart disease risk.
The study did not find that coffee drinking was associated with a lower risk of cancer.
"The null association with cancer mortality is partly due to the smaller number of cases, and various types of cancer are analyzed together," Dr. Lu Qi, the study's senior author and interim chair of the Department of Epidemiology at Tulane University, told ABC News. "It is possible coffee drinking may differentially impact different types of cancer.”
In other words, there may have been too few cancer cases in the study to detect an effect. Additionally, because the researchers reviewed all types of cancer together, it could be that coffee influences some cancers but not others.
The study also had other limitations. The participants self-reported their coffee-drinking habits, meaning results may be inaccurate, and the researchers didn't consider long-term consumption patterns.
Experts say another reason for the lower risk of death may be that morning coffee drinkers have healthier lifestyles, including better diets and regular exercise. Factors such as shift work or wake-up times could also play a role.
Dr. Perry Fisher, an interventional cardiologist at Lenox Hill Hospital in New York City, told ABC News he found the study's results interesting but said he wouldn't necessarily recommend changing coffee habits as a result.
"I think that we need further study to demonstrate a true relationship that would change management," Fisher said.
Qi, the study author, added that additional studies — including with people from other countries — would be needed to confirm the results, as well as clinical trials.
"While some studies have shown that drinking a moderate amount of coffee can be good for the heart, not all research agrees," Miao said. "Talk to your doctor before changing your coffee habits, especially if you have health risks.”
Oriana:
Make sure your coffee is not too hot. You don't want to risk cancer of the esophagus.
*
SCHIZOPHRENIA AND THE IMMUNE SYSTEM
At least since Susannah Cahalan wrote the 2012 book Brain on Fire, there has been resurgent speculation that psychosis might actually be a disorder of the immune system. Cahalan eventually came to understand that her psychosis was the result of an autoimmune disorder, which occurs when the immune system attacks healthy cells—and interestingly, psychosis caused by autoimmune disorders is generally not responsive to psychiatric drugs.
Autoimmune responses are not the same as a healthy immune system working to counteract a foreign biological invader. Yet the study of parasites and viruses in otherwise healthy immune systems is a current interest among researchers who study the development of schizophrenia. This has led to newer studies that distinguish between autoimmune psychosis, like what Cahalan experienced, and psychosis caused by immune systems working overtime to defend against biological threats.
The Immune System and Schizophrenia: A History
The idea that schizophrenia might have something to do with the immune system is not new. It has been around at least since the influenza pandemic in 1918. At that time, psychosis was thought to be caused by cerebral inflammation, called encephalitis lethargica, in which brain tissue swells and sometimes turns red, possibly due to bacterial or viral infections (like the flu).
After the widespread panic caused by the pandemic simmered down, interest in physical diseases as an influence for psychosis fell out of fashion. Instead, researchers turned to an idea called the dopamine hypothesis, which argued that schizophrenia was a result of an imbalance or dysregulation in the way dopamine was transmitted, synthesized, and used in the brain. This theory was dominant for many decades and led to the development of drugs that target dopamine.
A subsequent theory of interest was that of neurodevelopment — the idea that schizophrenia was a developmental disorder that occurred in adolescence. This would explain why schizophrenia mainly emerges in the young adult years. However, it did not adequately explain the varied causes of all the disorders on the schizophrenia spectrum.
Throughout this time, the idea that immunological disruptions could lead to schizophrenia never fully disappeared. It resurfaced in the 1970s, when the well-known psychiatrist E. Fuller Torrey — author of Surviving Schizophrenia — became interested in the possibility that schizophrenia was largely caused by infections and other biological factors.
How far has our knowledge come since then? In recent years, you may have seen research finding that those who live with cats early in childhood might be more vulnerable to developing schizophrenia. This is because cat feces can carry a parasite, Toxoplasma gondii, capable of affecting the human brain during critical developmental stages.
Researchers in several articles this year have summarized what we know about schizophrenia spectrum disorders and the possibility that they might occur because of a disruption in the immune system.
How the Immune System Works
The immune system responds to physical, psychological, and pathogenic stress. When we have an infection in the body, the immune system reacts by producing an effect called inflammation. Inflammation occurs because the body wants to protect itself, and it also helps with the healing process. But it can do slight damage to other organs, since it produces redness, swelling, and other symptoms.
The central nervous system (CNS) has its own immune defenses, specifically designed to protect the brain from infection: they extract toxins, regulate neurotransmitters, and more. Just as the body can be inflamed, the brain can also experience inflammation (known as neuroinflammation).
Cerebrospinal fluid (CSF), a clear substance that surrounds the brain and protects it from damage, can affect brain development by way of the blood-brain barrier. CSF is important because whatever is in the fluid can cross over to affect brain function and activity.
There are several ways the blood-brain barrier can be weakened, including infections during pregnancy, head injuries, childhood trauma, and substance use. When this occurs, it can leave the brain vulnerable to future pathogens and infections. In fact, changes in the immune system have been shown to affect the growth of neurons, which in turn can affect dopamine interaction.
Schizophrenia and Immunology
So, can changes in the immune system play a role in the development of schizophrenia?
In a systematic review and meta-analysis of 69 studies by Warren et al., 5,710 participants (3,180 of whom had been diagnosed with a schizophrenia spectrum disorder) were included to measure cerebrospinal fluid (CSF) and signs of inflammation. These signs include white blood cell count and cytokines, proteins that carry signals between cells and essentially tell the immune system to begin fighting infections. When more cytokines and white blood cells are present in the bloodstream, for example, it usually indicates a threat to the immune system and an increased amount of inflammation.
Overall, 3.1 percent to 23.5 percent of patients with a schizophrenia spectrum disorder had increased amounts of inflammatory proteins like cytokines. This could indicate inflammation.
However, people taking antipsychotics often experience an increase in white blood cell count, and information about which patients were on antipsychotics at the time of the study was not fully reported. Abnormal CSF proteins can also be present in conditions that are often diagnosed alongside schizophrenia, such as diabetes and alcohol use disorder.
Limitations and Future Studies
The studies had several limitations: comorbidity with other behaviors and conditions that could affect CSF was not clearly defined or measured; secondary conditions like autoimmune encephalitis were not acknowledged; and few long-term studies were conducted, to name a few.
The authors concluded that it is too early to say whether schizophrenia spectrum disorders more broadly, as well as the specific symptom of psychosis, are caused by immunological dysfunction. Psychosis could be similar to a fever — that is, it may simply be a symptom, the causes of which could stem from one of many underlying biological infections or disturbances. However, the authors note, the current evidence suggests that CSF abnormalities may contribute to various patients’ development of schizophrenia spectrum disorders.
Anti-inflammatory drugs have been developed to treat patients with schizophrenia, but those drugs do not necessarily cure psychosis in every patient. There is some interest in conducting future studies that target patients who specifically experience heightened abnormal CSF and psychosis.
Schizophrenia is very heterogeneous in symptoms — and current evidence suggests it could very well be heterogeneous in origin, too. As theories develop, potential sources of psychosis could be broken down with more precision, which could help patients better manage all forms of psychosis in the future.
https://www.psychologytoday.com/us/blog/living-as-an-outlier/202411/can-a-broken-immune-system-cause-psychosis
*
BENEFITS OF SUSTAINABLY GROWN, ORGANIC RED PALM OIL
Better Heart Health
In the right circumstances, red palm oil may offer significant benefits to heart health. The antioxidant effects of the vitamin E and carotenoids in red palm oil appear to help prevent atherosclerosis, the narrowing of blood vessels. More studies are needed to confirm this effect, but current research is promising.
Improved Brain Health
As with heart health, red palm oil may offer brain benefits. The vitamin E in red palm oil may be able to reduce or halt the progression of dementia and Alzheimer's disease caused by lesions on the brain. This is because vitamin E protects the brain from free radicals, which can damage neurons.
Support Eye Health
Studies suggest that getting enough oil in your diet can help you absorb vitamin A and other fat-soluble vitamins more effectively. If you have cystic fibrosis or another condition that makes absorbing fat difficult, adding palm oil to your diet may significantly improve your levels of vitamin A. This vitamin is also critical to the health of your eyes, so palm oil may help reduce your risk of vision problems.
Cancer: RPO contains vitamin E and tocotrienols, which may help protect against certain cancers.
Liver health: RPO may help manage chronic liver disease.
Immune system: RPO may help improve immune function.
Vitamin A deficiency: RPO may help overcome vitamin A deficiency in children and pregnant women.
Blood pressure: RPO may help reduce blood pressure.
Skin health: RPO may help protect skin from UV rays and slow down premature aging.
Vitamin E contained in red palm oil may be even more effective when combined with other nutrients, such as omega-3s, lutein, and zeaxanthin.
(Generated by AI)
*
Ending on beauty:
THERE WILL COME SOFT RAINS
There will come soft rains and the smell of the ground,
And swallows circling with their shimmering sound;
And frogs in the pools singing at night,
And wild plum trees in tremulous white;
Robins will wear their feathery fire,
Whistling their whims on a low fence-wire;
And not one will know of the war, not one
Will care at last when it is done.
Not one would mind, neither bird nor tree,
If mankind perished utterly;
And Spring herself, when she woke at dawn
Would scarcely know that we were gone.
~ Sara Teasdale