Saturday, March 23, 2024

GREAT GATSBY: MISUNDERSTOOD; CAN THERE BE TOO MUCH SOLITUDE? IMPERIALIST FIRE ANTS; SLOW-DOWN OF THE OCEAN CURRENTS AND GLOBAL WARMING; INCREASING SOLUBLE AMYLOID BETA: A NEW TREATMENT FOR ALZHEIMER'S; REWRITING GENESIS; BENEFITS OF GOAT MILK

Common starling (Sturnus vulgaris); photo: Mark Williams

*
OFTEN REBUKED, YET ALWAYS BACK RETURNING

Often rebuked, yet always back returning
    To those first feelings that were born with me,
And leaving busy chase of wealth and learning
    For idle dreams of things which cannot be:

To-day, I will seek not the shadowy region;
    Its unsustaining vastness waxes drear;
And visions rising, legion after legion,
    Bring the unreal world too strangely near.

I’ll walk, but not in old heroic traces,
    And not in paths of high morality,
And not among the half-distinguished faces,
    The clouded forms of long-past history.

I’ll walk where my own nature would be leading:
    It vexes me to choose another guide:
Where the gray flocks in ferny glens are feeding;
    Where the wild wind blows on the mountain side.

What have those lonely mountains worth revealing?
    More glory and more grief than I can tell:
The earth that wakes one human heart to feeling
    Can center both the worlds of Heaven and Hell.

~ Emily Brontë, Often Rebuked, Yet Always Back Returning


Yorkshire Moors

*
THE GREAT GATSBY AS THE WORLD’S MOST MISUNDERSTOOD NOVEL

The Great Gatsby is synonymous with parties, glitz and glamour – but this is just one of many misunderstandings about the book that date back to its first publication.

Robert Redford and Mia Farrow in the 1974 film of The Great Gatsby

Few characters in literature or indeed life embody an era quite so tenaciously as Jay Gatsby does the Jazz Age. Almost a century after he was written into being, F Scott Fitzgerald's doomed romantic has become shorthand for decadent flappers, champagne fountains and never-ending parties. Cut loose by pop culture from the text into which he was born, his name adorns everything from condominiums to hair wax and a limited-edition cologne (it contains notes of vetiver, pink pepper and Sicilian lime). It's now possible to lounge on a Gatsby sofa, check in at the Gatsby hotel, even chow down on a Gatsby sandwich – essentially a supersize, souped-up chip butty [a sandwich filled with chips, ie french fries, regarded as working-class food].

Incongruous though that last item sounds, naming anything after the man formerly known as James Gatz seems more than a touch problematic. After all, flamboyant host is just one part of his complicated identity. He's also a bootlegger, up to his neck in criminal enterprise, not to mention a delusional stalker whose showmanship comes to seem downright tacky. If he embodies the potential of the American Dream, then he also illustrates its limitations: here is a man, let's not forget, whose end is destined to be as pointless as it is violent.

Misunderstanding has been a part of The Great Gatsby's story from the very start. Grumbling to his friend Edmund Wilson shortly after publication in 1925, Fitzgerald declared that "of all the reviews, even the most enthusiastic, not one had the slightest idea what the book was about." Fellow writers like Edith Wharton admired it plenty, but as the critic Maureen Corrigan relates in her book So We Read On: How The Great Gatsby Came to Be and Why It Endures, popular reviewers read it as crime fiction, and were decidedly underwhelmed by it at that. Fitzgerald's Latest A Dud, ran a headline in the New York World. The novel achieved only so-so sales, and by the time of the author's death in 1940, copies of a very modest second print run had long since been remaindered.

Gatsby's luck began to change when it was selected as a giveaway by the US military. With World War Two drawing to a close, almost 155,000 copies were distributed in a special Armed Services Edition, creating a new readership overnight.  

As the 1950s dawned, the flourishing of the American Dream quickened the novel's topicality, and by the 1960s, it was enshrined as a set text. It's since become such a potent force in pop culture that even those who've never read it feel as if they have, helped along, of course, by Hollywood. It was in 1977, just a few short years after Robert Redford starred in the title role of an adaptation scripted by Francis Ford Coppola, that the word Gatsbyesque was first recorded.

Along with Baz Luhrmann's divisive 2013 movie extravaganza, the book has in the past decade alone spawned graphic novels, a musical, and an immersive theatrical experience. From now on, we're likely to be seeing even more such adaptations and homages because at the start of this year, the novel's copyright expired, enabling anyone to adapt it without permission from Fitzgerald's estate. Early calls for a Muppets adaptation may have come to nothing (never say never), but a big-budget TV miniseries was in the works as of 2021, and author Min Jin Lee and cultural critic Wesley Morris are both writing fresh introductions to new editions.

If this all leaves Fitzgerald purists twiddling their pearls like worry beads, it's quite possible that while some such projects may further perpetuate the myth that throwing a Gatsby-themed party could be anything other than sublimely clueless, others may yield fresh insights into a text whose very familiarity often leads us to skate over its complexities. Take, for instance, Michael Farris Smith's new novel, Nick. The title refers, of course, to Nick Carraway, the narrator of Gatsby, who here gets his own fully formed backstory. It's the tale of a Midwesterner who goes off to Europe to fight in World War One and comes back changed, as much by a whirlwind love affair in Paris as by trench warfare. There's room for an impulsive sojourn in the New Orleans underworld before he heads off to Long Island's West Egg.

An impossible dream?

Like many, Smith first encountered the novel in high school. "I just completely didn’t get it", he tells BBC Culture, from his home in Oxford, Mississippi. "They seemed like a lot of people complaining about things they really shouldn't be complaining about." It was only when he picked it up again while living abroad in his late twenties that he began to understand the novel's power. "It was a very surreal reading experience for me. It seemed like something on almost every page was speaking to me in a way I had not expected," he recalls.

Reaching the scene in which Carraway suddenly remembers it's his thirtieth birthday, Smith was filled with questions about what kind of a person Gatsby's narrator really was. "It seemed to me that there had been some real trauma that had made him so detached, even from his own self. The thought crossed my mind that it would be really interesting if someone were to write Nick's story," he says. In 2014, by then a published author in his forties, he sat down to do just that, telling neither his agent nor his editor. It was only when he delivered the manuscript 10 months later that he learned copyright law meant he'd have to wait until 2021 to publish it.

Smith points to a quote from one of Fitzgerald's contemporaries as having provided the key to understanding Carraway. “Ernest Hemingway says in [his memoir] A Moveable Feast that we didn't trust anyone who wasn't in the war, and to me that felt like a natural beginning for Nick.” Smith imagines Carraway, coping with PTSD and shell shock, returning home to a nation that he no longer recognizes. It's a far cry from the riotous razzmatazz of all that partying, yet Carraway is, Smith suggests, the reason Fitzgerald's novel remains read. “Maybe it's not the champagne and the dancing, maybe it is those feelings of wondering where we are, the sense that anything can crumble at any moment, that keep Gatsby meaningful from one generation to the next.”

William Cain, an expert in American literature and the Mary Jewett Gaiser Professor of English at Wellesley College, agrees that Nick is crucial to understanding the novel's richness.

"Fitzgerald gave some thought to structuring it in the third person but ultimately he chose Nick Carraway, a first-person narrator who would tell Gatsby's story, and who would be an intermediary between us and Gatsby. We have to respond to and understand Gatsby and, as we do so, remain aware that we're approaching him through Nick's very particular perspective, and through Nick's very ambivalent relationship to Gatsby, which is simultaneously full of praise and full of severe criticism, even at some moments contempt," he says.

Leonardo DiCaprio and Carey Mulligan starred in Baz Luhrmann's divisive 2013 film.

Like Smith, Cain first encountered the novel as a student. It was a different era – the 1960s – but even so, little attention was paid to Nick. Cain recalls instead talk of symbolism – the legendary green light, for example, and Gatsby's fabled automobile. It's a reminder that, in a way, the education system is as much to blame as pop culture for our limited readings of this seminal text. 

It may be a Great American Novel but, at fewer than 200 pages, its sublimely economical storytelling makes its study points very easy to access. Ironically, given that this is a novel of illusion and delusion, in which surfaces are crucial, we all too often overlook the texture of its prose. As Cain puts it, "I think when we consider The Great Gatsby, we need to think about it not just as a novel that is an occasion or a point of departure for us to talk about big American themes and questions, but we have to really enter into the richness of Fitzgerald's actual page-to-page writing. We have to come to Gatsby, yes, aware of its social and cultural significance, but also we need to return to it as a literary experience."

Cain re-reads the novel every two or three years but frequently finds himself thinking about it in between – last summer, for instance, when US President Biden, accepting the Democratic nomination at the DNC, spoke of the right to pursue dreams of a better future. The American Dream is, of course, another of Gatsby's Big Themes, and one that continues to be misunderstood. 

"Fitzgerald shows that that dream is very powerful, but that it is indeed a very hard one for most Americans to realize. It feeds them great hopes, great desires, and it's extraordinary, the efforts that so many of them make to fulfill those dreams and those desires, but that dream is beyond the reach of many, and many, they give up all too much to try to achieve that great success," Cain points out. Among the obstacles, Fitzgerald seems to suggest, are hard-and-fast class lines that no amount of money will enable Gatsby to cross. It's a view that resonates with a mood that Cain says he's been picking up on among his students – a certain "melancholy" for the American Dream, the feeling fanned by racial and economic inequalities that the pandemic has only deepened.

In certain other respects, the novel hasn't worn quite so well. While Fitzgerald showed where his allegiances lay by highlighting the brute ugliness of Tom Buchanan's white supremacist beliefs, he repeatedly describes African Americans as "bucks". The novel makes for frustrating reading from a feminist perspective, too: its female characters lack dimensionality and agency, and are seen instead through the prism of male desire. The path is now open to endless creative responses to those more dated and unpleasant aspects, and an ITV and A+E Studios TV mini-series announced in 2021 looks set to be one of the first. Written by Michael Hirst and with Fitzgerald's great-granddaughter Blake Hazard on board as a consulting producer, it has been described as a "reimagining" of the classic novel. "I have long dreamt of a more diverse, inclusive version of Gatsby that better reflects the America we live in, one that might allow us all to see ourselves in Scott's wildly romantic text," Hazard told The Hollywood Reporter.

To an impressive degree, however, the renewed attention brought by the change in law shows not just how relevant and seductive the text of Fitzgerald's novel remains, but how very alive it's always been. Pick it up at 27, and you'll find a different novel to the one you read as a teenager. Revisit it again at 45, and it'll feel like another book altogether. Copyright has never had any bearing on the impact of the words it governs.

Finally able to publish Nick, Smith returned once more to The Great Gatsby before turning in his last edit. "I think it will be a novel that's always evolving in my head, and always changing based on who I am," he says. "That's what great novels do."


https://getpocket.com/explore/item/the-world-s-most-misunderstood-novel?utm_source=pocket-newtab-en-us


*
APHRA BEHN (1640-1689): WOMAN, WRITER, PLAYWRIGHT, SPY

~ Any study of Aphra Behn is really a study of shifting disguises and political guesswork. She is remembered in history as the first woman to make a living by writing in English, all the way back in the seventeenth century. Few know that she became a writer while exploring her first intriguing career: Spy for the British crown.

Fittingly for a spy, Behn was secretive and her reputed garrulity among friends did not extend to anything autobiographical for future generations to rely on. Most of what we know of her is uncertain, gleaned from the literature she left us.

Her espionage career might have begun in 1659, when she was about nineteen years old. The death of Oliver Cromwell sent the bumbling Sealed Knot secret society into a flurry of activity on behalf of the Royalist cause. Her foster-brother Sir Thomas Colepeper and his half-brother Lord Strangford were caught up in covert activities. Behn would have been able to travel to France to liaise with Lord Strangford more easily than Colepeper, who was being watched. She also may have served as a living dropbox for letters exchanged between plotters.

But true to the nature of spies and covert plots, no solid evidence of this role survives. Her presence in it is only hinted at by her relationships with others involved.

A version of Behn’s life story says that she was one of the children of John Johnson, a gentleman appointed Lieutenant General of Surinam, a short-lived English colony in what is now Suriname, South America. In this tale, Johnson died during the transatlantic voyage to his appointment, which meant his widow and his children, including young Aphra, were temporarily stranded in South America. This tale is almost certainly false though. Crucially, there is no record of a Johnson destined for a high office in Surinam, nor any Johnsons among the recorded settlers of the colony.

However, that Behn did go to Surinam in the 1660s is not in question. The descriptions of the colony in her most famous novel, Oroonoko, are too detailed for her to have gleaned them only by reading other people’s reports. In the other stories she wrote, Behn didn’t trouble herself much with research. Stories she wrote set in France and Spain are no different from stories set in England. The setting often remains a mere suggestion, but Oroonoko is different: It gives the impression of the author writing down what she heard and saw around her, lending it a reality that she clearly wasn’t achieving through meticulous research.

It is how and why Behn ended up in Surinam that is up for debate. Her biographer, Janet Todd, argues that Behn went to Surinam as part of a spying mission for King Charles II. This would explain why, on her return to England, she had an audience with the king to “give him ‘An Account of his Affairs there,'” an incredibly unusual outcome for a young woman’s family trip to South America. Surinam was supposedly overrun with spies at the time, probably because a far-flung colonial outpost was a perfect place for any dissidents in Restoration Era England who had plans for seizing control of a colony or fomenting a revolution.

She may have been there to spy on any number of brewing conflicts. The governor of Surinam at the time, Lord Willoughby, was absent, leaving a power vacuum filled by various personalities. This was also a time of "gold, glory, and God" and various people were reporting back to King Charles II about the possibilities of any and all of these plans succeeding. Spain had already grown immensely rich from the treasure found in the Americas, but England hadn't struck gold yet. Many were taken in by promises of El Dorado, including Behn, who would find herself disappointed that Charles II was already tired of the empty promises of the mythical city.

Behn was profoundly impacted by her trip to Surinam. Though she did have a mission to complete, with few friends and more time on her hands than usual, she began writing.

Possibly she was already considering plays or translations as sources of income in case she could no longer engage in spy craft due to age, shifting political tides, or notoriety. She also had connections to the theater world back in England, and may have already been considering how to further insert herself in those circles. Interestingly, she does not seem to have been considering marriage as part of her future at all at this stage, though it would have been the thing to do for a woman her age in Restoration society.

She found much inspiration in the social mobility colonists found in the Americas, especially Virginia, where transported criminals found themselves impossibly rich from tobacco and beaver hunting. Behn hated this sort of class mobility and frequently lampooned it in her work for the rest of her life.

She also made time to meet the Indigenous population living near the English colony. Like many European colonists of her time, she found in the Surinamese a sort of pastoral innocence, and she carefully recorded her exchange of her garters for a set of feathers which she took with her back to England. Her work often reflected a paternalistic attitude toward any person of color.

An avid reader, especially of pastoral romances like those of the dramatist Gauthier de Costes, seigneur de la Calprenède, Behn tended to see her own surroundings through that lens. When she transposed the reality of Surinam back into fiction, the very real political dramas took on the pastoral coloring of the fiction she enjoyed.

While in Surinam, she probably wrote her play The Young King, a tragicomedy of heroic lovers in Arcadian pastoral settings. It was written with an eye toward pleasing Charles II–he was known to love Spanish-style drama. Though the play wasn’t staged for at least fifteen years after her return to England, she never seems to have edited it–it retains youthful criticisms of power and privilege that she refused to engage in again later in life.

While Behn was in Surinam, she almost certainly met William Scot, the exiled son of an executed republican. His father Thomas Scot had been a member of the House of Commons and instrumental in the trial and execution of Charles I. William also had political aspirations, and though he probably would have been safer somewhere further from English society, he was in Surinam, probably making deals and trying to find his way back into political influence.

Rumor had it at the time that Behn and Scot were having an affair. He was married with a child, though living apart from them. The affair was remarked upon by their contemporaries in letters home, in which Behn was referred to as Astrea, a name taken from the seventeenth-century French pastoral romance L’Astrée by Honoré d’Urfé. She later adopted it as her pen name; she may have already been using it as a pseudonym in Surinam.

The immensely long novel centers on a fictionalized pastoral idyll in France during the fifth century, where a young shepherdess and shepherd—Astrée and Celadon—fall in love. Celadon is a perfect lover, but Astrée is “a curious combination of vanity, caprice and virtue; of an imperious, suspicious and jealous nature, she is not at all the ideal creature of older pastorals.” It makes sense that Behn would choose such a codename. She would challenge many of the “superficial associations of such a name” throughout her life, and though a lot of her writing relied on pastoral imagery and tropes, she often challenged those as well.

Behn returned to England in 1665. She gave her report of English affairs in Surinam to Charles II. What she was up to for the next year is unclear, though we know that she kept up with the ongoing dramas of the colony through the pamphlets that were published about it.

Some of these intrigues found their way into Oroonoko, her novel published in 1688. The failed rebellion of the eponymous enslaved prince, Oroonoko, was prefigured by an unsuccessful assassination attempt against Surinam’s Governor Lord Willoughby. The would-be assassin, the troublesome Thomas Allin, was someone Behn may have reported on during her time there.

He died by suicide rather than be executed for his attempt; her protagonist was executed instead.


She must have made inroads in the political and theatrical spheres, because in 1666, Thomas Killigrew sent Behn to the Netherlands as a spy. Killigrew was the dramatist heading up the King’s Company troupe and secretly working in intelligence for the King. This was during the darkest point of the Second Anglo-Dutch War, so Behn probably only landed this precarious role because she had done good enough work in Surinam.

Her mission on behalf of the English king was to meet with her old flame William Scot, who now claimed to have information for the Royalists about a Dutch-sponsored uprising in England. She was to assess what information he had and whether it was worth anything. Killigrew knew of their romance in Surinam and was happy to exploit it for the king’s gain.

Behn was ill-equipped for this dangerous mission. Though adept at role-playing, she was somewhat naive. She was a bad judge of character and more easily fooled by false sincerity than someone undertaking a third espionage mission should have been. She was also quite talkative and would never be remembered as discreet.

Unsurprisingly, she was not successful. Scot was hard to deal with, and Behn didn’t have the resources to succeed even if he had been helpful. They both asked for too much from their spymasters and received nearly nothing. It was a plight shared by all Royalist agents taking risks for the Crown. Behn returned to London in May 1667, having had the good fortune of missing the catastrophic Great Fire of London the year before, but with little else to show for her trip.

Writing became an urgent necessity after her return from the Netherlands. Charles II was infamously stingy with payments to his spies, often simply not paying them at all. Behn’s stay in Antwerp had left her in immense debt, and she spent time in a London debtor’s prison before being released with a patron’s help. She had good handwriting, so she began copying manuscripts for fast money before looking toward the theater for her next adventure.

On September 20, 1670, Behn had her theatrical debut: her play, The Forc’d Marriage, was staged by the Duke’s Company. Like many of her works, it was a tragicomedy that ends in two noblemen marrying commoners against their parents’ directives, a scandalous concept at the time. She would return to the concept of escaping a bad marriage numerous times in her work; her biographer Janet Todd assumes the repeating theme was inspired by Behn’s own bad luck in love.

The women in Behn’s stories often live bleak lives, even when they’re removed to pastoral idylls. They are forced to manipulate and negotiate through places where men have all the power. Behn led a similar life working in the theater; at the time, calling her a whore would have been only a slightly worse insult than calling her a poetess.

Yet this female writer’s popularity was threatening to men. In an essay, Rutgers University professor Elin Diamond writes,

The conflict between (as she puts it) her “defenseless” woman’s body and her “masculine part” is staged in her insistence, in play after play, on the equation between female body and fetish, fetish and commodity….Like the actress, the woman dramatist is sexualized, circulated, denied a subject position in a theater hierarchy.

Similarly, female spies were—and continue to be—positioned as fetishes and commodities, constantly sexualized and reduced to a roadblock to be overcome by the dominant male spy. Behn’s experience in both worlds would have led her to navigate them as only she could–an independent agent set on getting what was hers.

(It seems I've lost the link to the text above, but a more literary appraisal of Aphra Behn can be found here: https://www.poetryfoundation.org/poets/aphra-behn)

*
CAN THERE BE TOO MUCH SOLITUDE?

With growing numbers of people living without partners and children, and working from home, more of us are spending time alone. But is this actually a problem?

What did I do on the weekend? On Saturday morning, I called my sister, then my parents. A man stopped by to pick up a rug he’d bought from me online, and we had a brief conversation about the merits of buying secondhand.

Then I went for a run with members of my gym, our chat dwindling with each kilometer.

I spent the rest of the weekend alone and – aside from pleasantries with customer service staff and delivery drivers – in almost unbroken silence. All told, my social interaction probably amounted to two hours out of a total 48.

Whether this idea strikes you as heavenly or nightmarish probably depends on your own relationship to “alone time”. For someone working long hours in a highly social job, or parenting young children, a day alone might register as a luxury. But if you spend most of your time by yourself, and not by choice, it may feel like a burden.

More of us are spending more time by ourselves, thanks to cultural trends like remote work and growing numbers of people choosing to stay single and live solo. Of 2,000 American adults surveyed by Newsweek last year, 42% reported being less sociable than they were in 2019. That’s certainly been the case for me – though not for the worse.

My life became smaller and in some ways quieter when, in 2021, I moved to a new city and started living by myself for the first time – but instead of feeling lonely, I’ve mostly been more productive and more content. In the new peace and quiet, I realized that I need much more time alone than I’d previously been allowing myself.

*
Now a new book is asking us to reconsider solitude. In Solitude: The Science and Power of Being Alone, authors Netta Weinstein, Heather Hansen and Thuy-vy T Nguyen argue that time spent by ourselves is not necessarily a threat to our wellbeing, nor an inherent good.

According to the authors, “alone time” and the extent to which it’s beneficial or detrimental is highly personal and not well understood by researchers.

“It’s something that society tends to frown upon. We tend to conflate the word ‘solitude’ with loneliness,” says Nguyen, an associate professor of psychology at Durham University and principal investigator of its Solitude Lab.

But they are different. Loneliness is the distress felt when one’s social needs are not being met; solitude is simply the state of being by oneself.

“You can be with other people and feel lonely,” says Nguyen. “Loneliness is more about the quality of our relationships: how connected you feel to people around you.”

For centuries the two have been used interchangeably, complicating analyses today. But while loneliness has been studied for decades, “the literature on solitude is just starting to catch up,” Nguyen says.

People speak of it as an experience best avoided, either unbearable or unsavory, or else as the escape of the privileged – think of tech billionaires going off-grid to “detox” solo.

But these are extreme, even pejorative representations: “There’s been no coverage of solitude as a very ordinary thing that we all experience,” says Nguyen.

As a state, it’s neither negative nor positive. “But some people struggle with that time, even if it’s just 15 minutes,” she adds.

Your own baseline may depend on what you’re accustomed to. You might be less comfortable with your own company if you never get any chance to practice.

*
While you might assume that introverts are more comfortable in their own company than extraverts, Nguyen’s study last year found no evidence of a link between introversion and a preference for solitude. Instead, a preference for solitude was negatively associated with neuroticism, suggesting that people who are better at regulating their emotions tend to spend more (and higher-quality) time alone.

The finding demonstrates the nuance that has been lacking in discussion of solitude, with past research often linking it to psychological problems. Nguyen’s research shows that our preference and tolerance not only varies between individuals, but also from day to day.

“The more we study solitude, the more I’m convinced that it has very much a regulatory capacity,” she says.

From a biological perspective, socializing is draining, even if we enjoy it; solitude “allows us opportunity for rest and recovery,” says Nguyen. There may also be psychological needs that are easier to satisfy in solitude, such as feelings of freedom and autonomy.

Solitude can seem unnatural in the context of our species’ sociable nature, but one study found that people who spend time alone tend to have higher-quality relationships. “In that sense, solitude fits perfectly into our framework of thinking of ourselves as social animals,” Nguyen says. We just don’t tend to see it that way.

Though it is slowly changing, a cultural stigma against solitude persists. We might even struggle to see time spent alone as equal to that spent in the company of others. “In my calendar, I put in events when I’m meeting other people; I don’t put in things that I do on my own,” says Nguyen.

I’ve found that one monastic weekend every month is enough for me to fully recharge. After three consecutive days alone, I start to go a bit loopy, my thoughts falling into well-worn grooves (about past mistakes, or future fears) that are rarely productive.

This is the balance I’ve struck now; it may not serve me in 30, 10 or even five years’ time. At Durham’s Solitude Lab, Nguyen is currently studying people’s transition to retirement, as well as first-time mothers: both examples of how changeable our experience of “alone time” can be.

New retirees tend to express trepidation about the sudden increase of solo time, and even concern about how to fill those hours, she says, while new mothers can report feeling alone despite never being apart from their baby.

Solitude can feel relatively unstructured, aimless and even empty – “almost like we have to create our own path” through it, Nguyen adds.

It’s true that too much time alone can focus our attention on how we feel our social connections to be lacking, in quantity or especially quality: a condition for loneliness. There is also the risk of rumination, contributing to the development of depression or anxiety.

If someone is struggling with their mental health, they shouldn’t soldier on alone, says Nguyen. But solitude itself – even when it’s a “chronic condition”, as might be said of people who, like me, live alone – isn’t necessarily deleterious to wellbeing.

“That, to me, is the biggest misunderstanding of the relationship between solitude and loneliness: loneliness is not something that just emerges, in and of itself – it’s usually symptomatic,” says Nguyen.

Those contributing factors might be physical health conditions that affect people’s ability to socialize; difficulties forming or maintaining relationships; and, for younger people, bullying or problems at home. There can also be structural challenges, such as the isolation often faced by immigrants and the decline in low-cost and accessible “third spaces” in which to pass time.

But too often, says Nguyen, talk of the reported loneliness “epidemic” neglects those broader factors in favor of focusing on individuals’ risk factors. “The focus is very much on the social interactions,” says Nguyen.

Efforts to bring down living costs and improve access to healthcare could be effective in tackling the problem by giving people more time and opportunity to foster connection. The US surgeon general, Vivek Murthy, has called for a shift in societal priorities, “to restructure our lives around people” instead of work and technology.

In the meantime, I wonder if the stigma against solitude is holding us back from making the most of it. The worst I ever feel about all the time I spend alone is when I think about others’ judgments, and what I ought to be doing with my weekends. Am I wasting the best years of my life, waiting for strangers to come and collect my furniture?

But Nguyen doesn’t think so. If it’s your own company that you’re craving, “allow yourself to have it”, she says. “Being away from other people doesn’t signal that there is something wrong with your social life … that’s actually nurturing of our solitude as well.”

My quiet weekend might have been nothing to write home about, but I began the week feeling rested, replenished and – what with the rug trade – richer, in more ways than one.

https://www.theguardian.com/wellness/2024/mar/19/how-much-alone-time-loneliness?utm_source=pocket-newtab-en-us

*
HOW TO FIND A BETTER STORY ABOUT YOURSELF

Your sense of who you are is deeply entwined in the stories you tell about yourself and your experiences. Storytelling is a big part of how we develop a view of our lives, says Jonathan Adler, a psychologist at Olin College of Engineering in Needham, Massachusetts. If you’ve ever struggled with low self-esteem, you’ll know just how important it can be to try to find a positive story to tell about yourself.

This was certainly true for me as a young girl. When I was old enough to understand, my mother occasionally talked about how, on her side, we were descended from Meriwether Lewis, of the 1804 Lewis and Clark Expedition to map the western territories of the United States. I recall her providing only one piece of evidence for this – her mother had named one of her sons Lewis after the explorer and spelled it the way his name was spelled, instead of the alternative (Louis). This was enough to convince me, and I clung to this story to help me cope with an otherwise bleak home life.

My parents suffered from what back then we called alcoholism, with all the turmoil that can entail. They would start drinking before dinner most nights, and when I would beg them not to, they’d berate me. If I was hoping my two brothers and I would become close to compensate for the chaos at home, I was out of luck. They were both pretty wild growing up. My older brother joined a motorcycle group as a teen and then enlisted in the Marines. His military career didn’t last long, however, and his life wasn’t much better afterward. My younger brother was a risktaker and always in trouble.

I had no self-esteem back then. But believing this story – that I was related to a famous explorer – lifted me up. There was one person I was related to whom I could be proud of (and by association, I felt I could be proud of myself).

With a little research, I’ve discovered I’m far from alone in finding solace in these kinds of family stories. Take Michael Harper, executive director of the Salvation Army in Portland, Maine, and a pastor for the organization. When Harper was a kid, he heard he was descended from John Winthrop, a governor of the Massachusetts Bay Colony during the 1600s. Harper’s parents divorced when he was eight, and he, his mother and his five siblings moved 30 times during his childhood while trying to manage on welfare. ‘It was one apartment after another. I could never establish roots and always felt like I didn’t belong,’ he says.

In his 50s, Harper was able to confirm through the Genealogy Roadshow, which aired on PBS from 2013 to 2016, that the family legend was true: he was descended from Winthrop, among other famous people. ‘It made a big difference,’ Harper recalls. Being able to trace that thread gave him ‘a great deal of satisfaction’.

‘When I was living in Boston,’ he tells me, ‘after I learned that I was descended from someone famous who landed there from England, I joked that, when I drove through the city, I felt like I owned the place. It made me feel very proud, and that thought has never left.’

Being related to a famous explorer was similarly my ace in the hole for gaining significance. Until it wasn’t. When I was in fifth grade and my class started studying the Lewis and Clark Expedition, I couldn’t wait to tell my teacher and classmates about my enviable lineage. However, I could tell by my teacher’s nonchalant nod and lackadaisical response that she didn’t believe me. I just wanted to get up and run out of the room.

Amy Morin, a licensed clinical social worker in Florida and host of the Mentally Stronger podcast, has practiced therapy for more than 20 years. She tells me that anyone whose self-worth is dependent on someone else (or on being related to someone else), as mine was, is going to be on ‘shaky ground’.

‘You might later discover you aren’t actually related at all, which could affect your self-esteem [even more]. Or, you might feel deflated when someone else isn’t as impressed as you think they should be,’ she says.

‘It’s important to develop self-worth based on who you are as a person, not who people in your lineage were. Everyone’s family has people who have struggled and people who have accomplished things,’ notes Morin.

Barbara Becker Holstein, a psychologist in private practice in New Jersey whose specialties include positive psychology and self-esteem, sees this as a problem of not recognizing, and probably needing to learn how to recognize, one’s own talents, capabilities, limitations and potential.

‘The more we understand and develop our own potential, the more we can see that who we may or may not be related to is interesting, even fascinating. But … critical to our own development – and what is most important – is to realize you are unique and have much to offer in life.’

I grew to understand that my parents’ absence from our lives was likely a big factor in my siblings’ and my development (or lack thereof). Living with four people whose lives had gone terribly wrong was so sad sometimes that I could hardly stand it, and I was embarrassed and ashamed when others found out, even though I knew intellectually that I was not a reflection of my family.

Thankfully I found a different story to tell about my family and about my life. This rewriting of my story has parallels with the process of ‘narrative therapy’ – which has to do with finding more positive ways to interpret and tell our life experiences.

Alongside his university role, Adler is also chief academic officer of the Health Story Collaborative, an organization in Boston that organizes events and gathers resources to help people find and tell their own healing narratives.

We’re born without words, let alone stories, Adler explains, ‘and storytelling is a skill we learn from other people. So it always comes from outside at the beginning. Indeed, even when we tell a story to ourselves, we then put that story out into the world and get feedback on it. In a way, then, our stories are always these negotiations with other people.’ Sometimes they unfold directly through dialogue with the people in our immediate lives, such as our family or friends – but, other times, they’re broader cultural stories.

‘In the particular phenomenon of [looking to a famous person for self-esteem], it seems that there’s some cultural storyline, or maybe in specific instances there’s an actual family storyline that is somehow resonating for the person,’ Adler says. He doesn’t necessarily think of storytelling as all positive or negative; it’s something we all do that can have positive and/or negative consequences for us.

Annie Brewster, a physician at Massachusetts General Hospital in Boston, is the founder and executive director of the Health Story Collaborative. She says: ‘It [can be] hard and painful to have to “rejigger” our stories and retell them, but it can also be health-promoting. Through the challenge, it can also cause us to see ourselves more fully.’ Ultimately, she concludes, ‘it’s a developmental and appropriate shift if we can have more of our internal story come authentically instead of trying to fit into another narrative from the outside, such as looking to link with a famous person.’

This resonates with the way Harper and I have both found good endings to the stories we have told ourselves about our lives. Despite his difficult start, not only does Harper look back on an impoverished childhood without bitterness, he praises his mother for ‘holding her brood of six together’. Also, in choosing a career that is the epitome of helping others, he’d likely be the first to say he gets back more than he gives.

Like him, I, too, have chosen a positive path – and told a different story – when I could have ended up quite differently. Watching my two brothers screw up gave me self-discipline and two more examples of how I didn’t want to be. Determined not to experience difficulties with addiction like them, I saw their lives as a lesson. I told myself I would make good decisions and not be waylaid. I began plotting a new narrative.

I was teaching in my mid-20s when, in a night class for an MBA degree, a classmate who worked at AT&T mentioned that there were openings for writers in her department. I interviewed, was hired as a contractor, and didn’t go back to classroom teaching that September. After a few years, I started getting up at 5am to write occasional essays and articles for magazines before leaving for work.

Then, one day around that time, I saw a business column in The New York Times and realized there was an opportunity. I called the paper and asked to speak to the column’s editor. He had me write a sample column that got published, and that was the start of a dream job. For 20 years, I freelanced for the business section and eventually I earned my own column. I wrote a book with an addiction specialist, Sober Siblings: How to Help Your Alcoholic Brother or Sister – and Not Lose Yourself (2008). Today I ghost-write for executives and write executive columns for several publications.

I still think of that incident in fifth grade. Years later, my cousin went on a road trip to research our family’s ancestry. I learned about a reunion of Lewis’s descendants to whom we were supposedly related, and I contacted the reunion organizers in the hope that my cousin and I could attend. I sent my cousin’s findings to a genealogist there to prove our lineage, with the outcome that I learned – definitively – that we were not related to the Lewis family after all. Today, I can laugh about our family legend being false. After all, I found a better story to tell. And if I could say something to that fifth-grader today, I would tell her: ‘You’re special just the way you are. And don’t let anyone make you feel otherwise.’

https://psyche.co/ideas/i-rebuilt-my-self-esteem-by-changing-the-story-of-who-i-am?utm_source=Psyche+Magazine&utm_campaign=0f2828e08a-EMAIL_CAMPAIGN_2024_03_22&utm_medium=email&utm_term=0_-a9a3bdf830-%5BLIST_EMAIL_ID%5D

*
WHY BABIES SMELL SWEET, BUT TEENAGERS STINK

Moms find the smell of babies irresistible but pull away from the stink of their teens, which could affect their parent-child bond, a new study says.

Some smells automatically bring a smile to our faces: Puppy breath. New car smell. The aroma of fresh-baked cookies.

For moms, it's the scent of their babies. Research shows most moms find the smell of their bundle of joy irresistible, while babies find their moms' odor unique -- one more way nature strengthens a bond that assures survival of the species. In fact, 90% of new moms can pick out their baby by smell within 10 minutes to an hour after birth.

But do we continue to like the smell of our children as they age?

Not so much, a new study finds, especially when those children are teenagers in or past puberty.

"This has something to do with the changed composition of the infantile sweat due to the increased release of sexual hormones," said professor Ilona Croy, who studies the sense of smell at the Dresden University of Technology in Germany.

THE SWEET SMELL OF BABIES

It's possible that all women are evolutionarily wired to respond favorably to "new baby smell."

A 2013 study found the reward centers of the brain lit up in a small group of new moms and women who had never given birth when they smelled pajamas that newborn infants had worn for two nights. The PJs had been frozen and were presented to the women some six weeks later. None of the women were related to the babies.

Of course, the bonding benefits are not available to parents who cannot smell, either due to a physical or psychological issue.

"We did a study where we could show that mothers who have — because of various mental disorders — problems bonding with their child, show an abnormal olfactory perception," said Croy.

Ordinarily "mothers prefer the odor of their children before the odor of others," she added. "Those mothers are neither able to identify their child, nor do they prefer it.”

The new study blindfolded 164 German mothers and asked them to smell body odor on clothing from their own child and four unfamiliar, sex-matched children. Clothing samples consisted of onesies that infants had slept in for one night, or cotton T-shirts slept in for one night by kids up to age 18.

Moms accurately picked out an unfamiliar child's developmental level from the smell 64% of the time; the success rate was even higher when the child was their own.

Mothers also scored higher when identifying odors in children who had not yet hit puberty, and found those much more pleasant — "sweet" was the most common response, said Croy, who supervised the study.

THE STINKY SMELL OF TEENS

Stronger, "high intensity" body odor samples were identified as coming from children in, or past, puberty.

"Body odor is perceived more intensively due to the developmental changes," explained lead author Laura Schäfer, a doctoral student in Croy's lab. "Pleasantness and intensity perception are often negatively related.”

In fact, moms got it wrong if an older child past puberty had a "pleasant" smell, classifying those odors as coming from a younger, pre-pubescent child.

Schäfer said the study may be the first to investigate whether parents can determine a child's developmental maturity by smell. Putting this in context with prior research, she said, the implications for parent-child bonding as children grow could be significant.

"Many parents report that their baby's odor smells pleasant, rewarding and adorable," Schäfer said. "This suggests infantile body odors can mediate affectionate love towards the child in the crucial periods of bonding.

"This seems to decline with increasing age," she added, which could be interpreted as a "mechanism for detachment, when the child becomes more independent and separates itself from parental care.”

So perhaps we are evolutionarily supposed to find our children stinky as they age, so we'll let go and allow them to become independent?

"Smelling can be an unconscious factor that can influence perception and thus also the relationship," Schäfer said, adding that parents shouldn't be "irritated if they do not find the smell of their own child in puberty very pleasant.”

"It is important to note, however, that the entire relationship between parents and child is, of course, always a complex interplay, where both several senses and, of course, contextual conditions play an essential role.”

https://www.cbsnews.com/sacramento/news/babies-smell-sweet-teens-stink-and-blindfolded-moms-can-tell-the-difference-study-says/


*
HOW TO ENHANCE MEMORY

As a Harvard-trained neuroscientist with more than 20 years of experience, when people ask how they can enhance their ability to remember, I like to share these strategies with them. Here are my most commonly used memory tricks.

SEE IT
When you create a mental image of what you’re trying to remember, you add more neural connections to it. You’re deepening the associations, making the formation of that memory more robust, so you’ll better remember later.

If you’re writing down something that you want to remember, write it in all caps, highlight it in pink marker or circle it. Add a chart or doodle a picture. Make what you’re trying to remember something you can easily see in your mind’s eye.

IMAGINE IT
People with the best memories have the best imaginations. To help make a memory unforgettable, use creative imagery. Go beyond the obvious and attach bizarre, surprising, vivid, funny, physically impossible and interactive elements to what you’re trying to remember, and it will stick.

For example, if I need to remember to pick up chocolate milk at the grocery store, I might imagine Dwayne “The Rock” Johnson milking a chocolate brown cow in my living room.

MAKE IT ABOUT YOU
I rarely endorse self-centeredness, but I make an exception when it comes to enhancing your memory. You are more likely to remember a detail about yourself or something that you did, than you are to retain a detail about someone else or something someone else did.

So make what you’re learning unique to you. Associate it with your personal history and opinions, and you’ll strengthen your memory.

LOOK FOR THE DRAMA
Experiences drenched in emotion or surprise tend to be remembered: successes, humiliations, failures, weddings, births, divorces, deaths.

Emotion and surprise activate your amygdala, which then sends a loud and clear message to your hippocampus: “Hey! What is going on right now is extremely important. Remember this!”

PRACTICE MAKES PERFECT
Repetition and rehearsal strengthen memories. Quizzing yourself enhances your memory for the material far better than simply rereading it.

Muscle memories become stronger and are more efficiently retrieved the more you rehearse a skill. Because these memories tell the body what to do, your body gets better at doing these physical tasks with practice.

USE PLENTY OF RETRIEVAL CUES

Cues are crucial for retrieving memories. The right cue can trigger the memory of something you haven’t thought of in decades. Cues can be anything associated with what you’re trying to remember — the time of day, a pillbox, concert tickets by the front door, a Taylor Swift song, the smell of Tide detergent.

Smells are especially powerful memory cues because your olfactory bulb, where smells are perceived (you smell in your brain, not your nose), sends strong neural inputs to the amygdala and the hippocampus, the parts of your brain that consolidate memories.

EXTERNALIZE YOUR MEMORY

People with the best memories for what they intend to do later use aids like lists, pillboxes, calendars, sticky notes, and other reminders.

You might be worried that this is somehow cheating or that you’ll worsen your memory’s capabilities if you rely too much on these external “crutches” instead of using your brain. Don't be: our brains aren’t designed to remember to do things later. Write them down.

Here are a few other helpful reminders:

Context matters. Memory retrieval is far easier and faster when the internal and external conditions match whatever they were when that memory was formed. Your learning circumstances matter, too. For example, if you like to drink a mocha Frappuccino while studying for a test, have another one when you take the exam to get your brain back into that mindset.

It helps to chill out. Chronic stress is nothing but bad news for our ability to remember. In addition to making you more vulnerable to a whole host of diseases, it impairs memory and shrinks your hippocampus.

While we can’t necessarily free ourselves from the stress in our lives, we can change how we react to it. Through yoga, meditation, exercise, and practices in mindfulness, gratitude and compassion, we can train our brains to become less reactive, put the brakes on the runaway stress response, and stay healthy in the face of chronic, toxic stress.

Get enough sleep. You need seven to nine hours of sleep to optimally consolidate the new memories you created today. If you don’t get enough sleep, you’ll go through the next day experiencing a form of amnesia. Some of your memories from yesterday might be fuzzy, inaccurate or even missing. 

Getting enough sleep is critical for locking whatever you have learned and experienced in your long-term memory, and it reduces your risk of developing Alzheimer’s.

https://www.cnbc.com/2024/03/19/harvard-trained-neuroscientist-top-tricks-i-use-to-remember-better-.html?utm_source=pocket-newtab-en-us


*
SUPERCOLONIES OF ANTS AND HUMAN PARALLELS

It is a familiar story: a small group of animals living in a wooded grassland begin, against all odds, to populate Earth. At first, they occupy a specific ecological place in the landscape, kept in check by other species. Then something changes. The animals find a way to travel to new places. They learn to cope with unpredictability. They adapt to new kinds of food and shelter. They are clever. And they are aggressive.

In the new places, the old limits are missing. As their population grows and their reach expands, the animals lay claim to more territories, reshaping the relationships in each new landscape by eliminating some species and nurturing others. Over time, they create the largest animal societies, in terms of numbers of individuals, that the planet has ever known. And at the borders of those societies, they fight the most destructive within-species conflicts, in terms of individual fatalities, that the planet has ever known.

This might sound like our story: the story of a hominin species, living in tropical Africa a few million years ago, becoming global. Instead, it is the story of a group of ant species, living in Central and South America a few hundred years ago, who spread across the planet by weaving themselves into European networks of exploration, trade, colonization and war. Some even stowed away on the 16th-century Spanish galleons that carried silver across the Pacific from Acapulco to Manila. During the past four centuries, these animals have globalized their societies alongside our own.

It is tempting to look for parallels with human empires. Perhaps it is impossible not to see rhymes between the natural and human worlds, and as a science journalist I’ve contributed more than my share. But just because words rhyme, it doesn’t mean their definitions align. Global ant societies are not simply echoes of human struggles for power. They are something new in the world, existing at a scale we can measure but struggle to grasp: there are roughly 200,000 times more ants on our planet than the 100bn stars in the Milky Way.

In late 2022, colonies of the most notorious South American export, the red fire ant, were unexpectedly found in Europe for the first time, alongside a river estuary close to the Sicilian city of Syracuse. People were shocked when a total of 88 colonies were eventually located, but the appearance of the red fire ant in Europe shouldn’t be a surprise. It was entirely predictable: another ant species from the fire ants’ native habitats in South America had already found its way to Europe.

What is surprising is how poorly we still understand global ant societies: there is a science-fiction epic going on under our feet, an alien geopolitics being negotiated by the 20 quadrillion ants living on Earth today. It might seem like a familiar story, but the more time I spend with it, the less familiar it seems, and the more I want to resist relying on human analogies. Its characters are strange; its scales hard to conceive. Can we tell the story of global ant societies without simply retelling our own story?

*
Some animal societies hold together because their members recognize and remember one another when they interact. Relying on memory and experience in this way – in effect, trusting only friends – limits the size of groups to their members’ capacity to sustain personal relationships with one another. Ants, however, operate differently – they form what the ecologist Mark Moffett calls “anonymous societies”, in which individuals from the same species or group can be expected to accept and cooperate with each other even when they have never met before. What these societies depend on, Moffett writes, are “shared cues recognized by all its members”.

Recognition looks very different for humans and insects. Human society relies on networks of reciprocity and reputation, underpinned by language and culture. Social insects – ants, wasps, bees and termites – rely on chemical badges of identity. In ants, this badge is a blend of waxy compounds that coat the body, keeping the exoskeleton watertight and clean. The chemicals in this waxy blend, and their relative strengths, are genetically determined and variable. This means that a newborn ant can quickly learn to distinguish between nestmates and outsiders as it becomes sensitive to its colony’s unique scent. Insects carrying the right scent are fed, groomed and defended; those with the wrong one are rejected or fought.


The most successful invasive ants, including the tropical fire ant and red fire ant, share this system of scent-based recognition. They also share social and reproductive traits. Individual nests can contain many queens (in contrast to species with one queen per nest) who mate inside their home burrows. In single-queen species, newborn queens leave the nest before mating, but in unicolonial species, mated queens will sometimes leave their nest on foot with a group of workers to set up a new nest nearby. Through this budding process, a network of allied and interconnected colonies begins to grow.

In their native ranges, these multi-nest colonies can grow to a few hundred meters across, limited by physical barriers or other ant colonies. This turns the landscape into a patchwork of separate groups, with each chemically distinct society fighting or avoiding the others at its borders. Species and colonies coexist, without any prevailing over the others. However, for the “anonymous societies” of unicolonial ants, as they’re known, transporting a small number of queens and workers to a new place can cause this relatively stable arrangement of groups to break down.

As new nests are created, colonies bud and spread without ever drawing boundaries because workers treat all others of their own kind as allies. What was once a patchwork of complex relationships becomes a simplified, and unified, social system. The relative genetic homogeneity of the small founder population, replicated across a growing network of nests, ensures that members of unicolonial species tolerate each other. Spared the cost of fighting one another, these ants can live in denser populations, spreading across the land as a plant might, and turning their energies to capturing food and competing with other species. Chemical badges keep unicolonial ant societies together, but also allow those societies to rapidly expand.

All five of the ants included in the International Union for the Conservation of Nature’s (IUCN) list of 100 of the world’s worst invasive alien species are unicolonial. Three of these species – the aforementioned red fire ant, the Argentine ant and the little fire ant – are originally from Central and South America, where they are found sharing the same landscapes. It is likely that the first two species, at least, began their global expansion centuries ago on ships out of Buenos Aires. Some of these ocean journeys might have lasted longer than a single worker ant’s lifetime.

Unicolonial ants are superb and unfussy scavengers that can hunt animal prey, eat fruit or nectar, and tend insects such as aphids for the sugary honeydew they excrete. They are also adapted to living in regularly disrupted environments, such as river deltas prone to flooding (the ants either get above the waterline, by climbing a tree, for example, or gather into living rafts and float until it subsides). For these ants, disturbance is a kind of environmental reset during which territories have to be reclaimed. Nests – simple, shallow burrows – are abandoned and remade at short notice. If you were looking to design a species to invade cities, suburbs, farmland and any wild environment affected by humans, it would probably look like a unicolonial ant: a social generalist from an unpredictable, intensely competitive environment.

When these ants show up in other places, they can make their presence felt in spectacular fashion. An early example comes from the 1850s, when the big-headed ant, another species now listed on the IUCN’s Top 100, found its way from Africa to Funchal, Madeira’s capital. “You eat it in your puddings, vegetables and soups, and wash your hands in a decoction of it,” complained one British visitor in 1851. When the red fire ant, probably the best-known unicolonial species, spread through the US farming communities surrounding the port of Mobile, Alabama, in the 1930s, it wreaked havoc in different ways. “Some farmers who have heavily infested land are unable to hire sufficient help, and are forced to abandon land to the ants,” was how the entomologist EO Wilson described the outcome of their arrival. Today, the red fire ant does billions of dollars of damage each year and inflicts its agonizing sting on millions of people. But the largest colonies, and most dramatic moments in the global spread of ant societies, belong to the Argentine ant.

Looking at the history of this species’ expansion in the late 19th and early 20th centuries, it can seem as if the spread of global trade was an Argentine ant plot for world domination. One outbreak appeared in Porto, after the 1894 Exhibition of the Islands and Colonies of Portugal. The insects had likely traveled from Madeira on produce and wares displayed at the exhibition – ornamental plants, which tend to travel with a clump of their home soil, are particularly good for transporting invasive species. In 1900, a Belfast resident, Mrs Corry, found a “dark army” of the same species crossing her kitchen floor and entering the larder, where they covered a leg of mutton so completely that “one could scarcely find room for a pin-point”.

In 1904, the US Bureau of Entomology dispatched a field agent, Edward Titus, to investigate a plague of Argentine ants in New Orleans. He heard reports of the ants crawling into babies’ mouths and nostrils in such numbers that they could be dislodged only by repeatedly dunking the infant in water. Other reports described the ants entering hospitals and “busily carrying away the sputum” from a tuberculosis patient. When the species arrived on the French Riviera a few years later, holiday villas were abandoned and a children’s hospital was evacuated.

In 1927, Italy’s king Vittorio Emanuele III and its prime minister Benito Mussolini signed a law setting out the measures to be taken against the Argentine ant, splitting the cost equally with invaded provinces. The state’s effectiveness, or lack of it, is shown in the novella The Argentine Ant (1952) by Italo Calvino, one of Italy’s great postwar writers. Calvino, whose parents were plant biologists, sets his tale in an unnamed seaside town much like the one where he grew up, in the north-western province of Liguria. The ant in his story has outlasted Mussolini and the monarchy, and saturates the town, burrowing underground (and into people’s heads). Some residents drench their houses and gardens with pesticides or build elaborate traps involving hammers covered in honey; others try to ignore or deny the problem. And then there is Signor Baudino, an employee of the Argentine Ant Control Corporation, who has spent 20 years putting out bowls of molasses laced with a weak dose of poison. The locals suspect him of feeding the ants to keep himself in a job.

In reality, people who found themselves living in the path of such ant plagues learned to stand the feet of their cupboards, beds and cots in dishes of paraffin. However, this was not a long-term solution: killing workers away from the nest achieves little when most, along with their queens, remain safe at home. Slower-acting insecticides (like Baudino’s poison), which workers take back to the nest and feed to queens, can be more effective. But because unicolonial workers can enter any number of nests in their network, each containing many queens, the chances of delivering a fatal dose get much slimmer.

In the early 20th century, an intensive period in the human war against ants, pest-control researchers advocated using broad-spectrum poisons, most of which are now banned for use as pesticides, to set up barriers or fumigate nests. Nowadays, targeted insecticides can be effective for clearing relatively small areas. This has proved useful in orchards and vineyards (where the ants’ protection of sap-sucking insects makes them a hazard to crops) and in places such as the Galápagos or Hawaii, where the ants threaten rare species. Large-scale eradications are a different matter, and few places have tried. New Zealand, the world leader in controlling invasive species, is the only country to have prevented the spread of the red fire ant, mostly by eradicating nests on goods arriving at airports and ports. The country is also home to a spaniel trained to sniff out Argentine ant nests and prevent the insects from reaching small islands important for seabirds.

Human inconvenience pales in comparison with the ants’ effects on other species. Exploring the countryside around New Orleans in 1904, Titus found the Argentine ant overwhelming the indigenous ant species, bearing away the corpses, eggs and larvae of the defeated to be eaten: “column after column of them arriving on the scene of battle”. Other entomologists at the time learned to recognize the disappearance of native ants as a sign of an invader’s arrival.

Unicolonial species are aggressive, quick to find food sources and tenacious in defending and exploiting them. Unlike many ant species, in which a worker who finds a new food source returns to the nest to recruit other foragers, the Argentine ant enlists other workers already outside the nest, thus recruiting foragers more quickly. However, the decisive advantage of unicolonial ant species lies in their sheer force of numbers, which is usually what decides ant conflicts. They often become the only ant species in invaded areas.

The effects of these invasions cascade through ecosystems. Sometimes, the damage is direct: on the Galápagos, fire ants prey on tortoise hatchlings and bird chicks, threatening their survival. In other cases, the damage falls on species that once relied on native ants. In California, the tiny Argentine ant (typically under 3mm long) has replaced the larger native species that once formed the diet of horned lizards, leaving the reptiles starving – it seems they do not recognize the much smaller invader as food. In the scrublands of the South African fynbos heathland, which has some of the most distinctive flora on Earth, many plants produce seeds bearing a fatty blob. Native ants “plant” the seeds by carrying them into their nests, where they eat the fat and discard the rest. Argentine ants – almost certainly imported to South Africa around 1900 along with horses shipped from Buenos Aires by the British empire to fight the Boer war – either ignore the seeds, leaving them to be eaten by mice, or strip the fat where it lies, leaving the seed on the ground. This makes it harder for endemic flora such as proteas to reproduce, tipping the balance towards invasive plants such as acacias and eucalypts.

In the past 150 years, the Argentine ant has spread to pretty much everywhere that has hot, dry summers and cool, wet winters. A single supercolony, possibly descended from as few as half a dozen queens, now stretches along 3,800 miles of coastline in southern Europe. Another runs most of the length of California. The species has arrived in South Africa, Australia, New Zealand and Japan, and even reached Easter Island in the Pacific and Saint Helena in the Atlantic. Its allegiances span oceans: workers from different continents, across millions of nests containing trillions of individuals, will accept each other as readily as if they had been born in the same nest. Workers of the world united, indeed. But not completely united.

Leafcutter ant
Expanding in parallel with the world-spanning supercolony are separate groups of the Argentine ant that bear different chemical badges – the legacy of other journeys from the homeland. Same species, different “smells”. In places where these distinct colonies come into contact, hostilities resume.

In Spain, one such colony holds a stretch of the coast of Catalonia. In Japan, four mutually hostile groups fight it out around the port city of Kobe. The best-studied conflict zone is in southern California, a little north of San Diego, where the Very Large Colony, as the state-spanning group is known, shares a border with a separate group called the Lake Hodges colony, with a territory measuring just 18 miles around. Monitoring this border for a six-month period between April and September 2004, a team of researchers estimated that 15 million ants died on a frontline a few centimeters wide and a few miles long. There were times when each group seemed to gain ground, but over longer periods stalemate was the rule. Those seeking to control ant populations believe that provoking similar conflicts might be a way to weaken invasive ants’ dominance. There are also hopes, for example, that artificial pheromones – chemical misinformation, in other words – might cause colony mates to turn on one another, although no products have yet come to market.

In the very long term, the fate of unicolonial societies is unclear. A survey of Madeira’s ants between 2014 and 2021 found, contrary to fears that invasive ants would wipe the island clean of other insects, very few big-headed ants and, remarkably, no Argentine ants. Invasive ants are prone to population crashes for reasons that aren’t understood, but may be related to genetic homogeneity: a single colony of Argentine ants in their homeland contains as much genetic diversity as the whole of California’s state-spanning supercolony. As with inbred species everywhere, this may make them prone to disease. Another potential issue is that the ants’ lack of discrimination about whom they help may also favour the evolution of free-riding “lazy workers” in colonies, who selfishly prosper by exploiting their nestmates’ efforts. Though it’s assumed that this uneven distribution of work may eventually lead to social breakdown, no examples have been found.

Unless natural selection turns against them, one of the most effective curbs on unicolonial ants is other unicolonial ants. In the south-eastern US, red fire ants seem to have prevented the Argentine ant forming a single vast supercolony as it has in California, instead returning the landscape to a patchwork of species. In southern Europe, however, the Argentine ant has had a century longer to establish itself, so, even if the fire ant does gain a European foothold, there’s no guarantee that the same dynamic will play out. In the southern US, red fire ants are themselves now being displaced by the tawny crazy ant, another South American species, which has immunity to fire ant venom.

*
It is remarkable how irresistible the language of human warfare and empire becomes when describing the global history of ant expansion. Most observers – scientists, journalists, others – seem not to have tried to resist it. Human efforts to control ants are regularly described as a war, as is competition between invaders and native ants, and it is easy to see why comparisons are made between the spread of unicolonial ant societies and human colonialism. People have been drawing links between insect and human societies for millennia. But what people see says more about them than about insects.

A beehive is organized along similar lines to an ant nest, but human views of bee society tend to be benign and utopian. When it comes to ants, the metaphors often polarize, either towards something like communism or something like fascism – one mid-20th-century US eugenicist even used the impact of the Argentine ant as an argument for immigration control. For the entomologist Neil Tsutsui, who studies unicolonial ants at the University of California, Berkeley, insects are like Rorschach tests. Some people see his research as evidence that we should all get along, while others see the case for racial purity.

In addition to conflating a natural “is” with a political “ought”, the temptations of ant anthropomorphism can also lead to a limited, and limiting, view of natural history. Surely the habit of Argentine ant workers of killing nine-tenths of their queens every spring – seemingly clearing out the old to make way for the new – is enough to deter parallels between ant societies and human politics?

The more I learn, the more I am struck by the ants’ strangeness, rather than their similarities with human society. There is another way to be a globalized society – one that is utterly unlike our own. I am not even sure we have the language to convey, for example, a colony’s ability to take bits of information from thousands of tiny brains and turn them into a distributed, constantly updated picture of their world. Even “smell” seems a feeble word to describe the ability of ants’ antennae to read chemicals on the air and on each other. How can we imagine a life where sight goes almost unused and scent forms the primary channel of information, where chemical signals show the way to food, or mobilize a response to threats, or distinguish queens from workers and the living from the dead?

As our world turns alien, trying to think like an alien will be a better route to finding the imagination and humility needed to keep up with the changes than looking for ways in which other species are like us. But trying to think like an ant, rather than thinking about how ants are like us, is not to say that I welcome our unicolonial insect underlords. Calamities follow in the wake of globalized ant societies.

Most troubling among these is the way that unicolonial species can overwhelm ecological diversity when they arrive somewhere new. Unicolonial ants can turn a patchwork of colonies created by different ant species into a landscape dominated by a single group. As a result, textured and complex ecological communities become simpler, less diverse and, crucially, less distinct from one another. This is not just a process; it is an era.

The current period in which a relatively small number of super-spreading animals and plants expands across Earth is sometimes called the Homogecene. It’s not a cheering word, presaging an environment that favors the most pestilential animals, plants and microbes. Unicolonial ants contribute to a more homogenous future, but they also speak to life’s ability to escape our grasp, regardless of how we might try to order and exploit the world. And there’s something hopeful about that, for the planet, if not for us.

The scale and spread of ant societies is a reminder that humans should not confuse impact with control. We may be able to change our environment, but we’re almost powerless when it comes to manipulating our world exactly how we want. The global society of ants reminds us that we cannot know how other species will respond to our reshaping of the world, only that they will.

If you want a parable of ants’ ability to mock human hubris, it’s hard to improve on the story of Biosphere 2. This giant terrarium in the Arizona desert, funded by a billionaire financier in the late 1980s, was intended as a grand experiment and model for long-distance space travel and colonization. It was designed to be a self-sustaining living system, inhabited by eight people, with no links to the outside world’s atmosphere, water or soil. Except that, soon after it began operations in 1991, the black crazy ant, a unicolonial species originally from south-east Asia, found a way in, reshaped the carefully engineered invertebrate community inside, and turned the place into a honeydew farm.

It is possible to be both a scourge and a marvel.

https://www.theguardian.com/environment/2024/mar/19/empire-of-the-ants-what-insect-supercolonies-can-teach-us?utm_source=pocket-newtab-en-us


*
THE COLLAPSE OF THE ATLANTIC MERIDIONAL OVERTURNING CIRCULATION WOULD BE A CLIMATE DISASTER

The collapse of the Atlantic Meridional Overturning Circulation (AMOC)—the engine that drives our oceans—could happen as soon as 2100, according to a study from Utrecht University published last month in the journal Science Advances. And while the world may not suffer a fatal flash freeze like in the 2004 action flick The Day After Tomorrow, an AMOC breakdown would probably reduce average air temperatures in the Northern Hemisphere by around 40 degrees Fahrenheit or more, spelling disaster for life on our planet.

But last year, a brother-sister science team in Denmark made their own prediction about the tipping point. Their model found that the AMOC could shut down as early as 2025 (though the duo says it’s more likely to happen toward the middle of the century). The controversial study, published in Nature Communications in June 2023, sparked a media frenzy and a sizable amount of backlash.

If correct, either of those studies’ findings would mean catastrophic consequences for our climate. But why do the two studies arrive at such drastically different timelines for the AMOC’s demise, and which conclusion is more credible? ~


https://www.popularmechanics.com/science/environment/a60212795/ocean-circulatory-system-shutdown/?source=nl&utm_source=nl_pop&utm_medium=email&date=032124&utm_campaign=nl34755063&GID=df404ca4c6a7ea7fd7c7fddd0178d54e5e4b11c249c39eb055ba3cb06818db2d&utm_term=TEST-%20NEW%20TEST%20-%20Sending%20List%20-%20AM%20180D%20Clicks%2C%20NON%20AM%2090D%20Opens%2C%20Both%20Subbed%20Last%2030D

*

OCEAN CIRCULATION

The ocean gyres move clockwise in the Northern Hemisphere and counterclockwise in the Southern Hemisphere. Ocean gyre circulation moves cold surface water from the poles to the equator, where the water is warmed before the gyres send it back toward the poles.

By moving heat from the equator toward the poles, ocean currents play an important role in controlling the climate. Ocean currents are also critically important to sea life. They carry nutrients and food to organisms that live permanently attached in one place, and carry reproductive cells and ocean life to new places.

https://education.nationalgeographic.org/resource/ocean-currents/

Rivers flow because of gravity. But what makes ocean currents flow?

Tides contribute to coastal currents that travel short distances. Major surface ocean currents in the open ocean, however, are set in motion by the wind, which drags on the surface of the water as it blows. The water starts flowing in the same direction as the wind.

But currents do not simply track the wind. Other things, including the shape of the coastline and the seafloor, and most importantly the rotation of the Earth, influence the path of surface currents.

In the Northern Hemisphere, for example, predictable winds called trade winds blow from east to west just above the equator. The winds pull surface water with them, creating currents. As these currents flow westward, the Coriolis effect—a force that results from the rotation of the Earth—deflects them. The currents then bend to the right, heading north. At about 30 degrees north latitude, a different set of winds, the westerlies, push the currents back to the east, producing a closed clockwise loop.


The same thing happens below the equator, in the Southern Hemisphere, except that here the Coriolis effect bends surface currents to the left, producing a counter-clockwise loop.

Large rotating currents that start near the equator are called subtropical gyres. There are five main gyres: the North and South Pacific Subtropical Gyres, the North and South Atlantic Subtropical Gyres, and the Indian Ocean Subtropical Gyre.

These surface currents play an important role in moderating climate by transferring heat from the equator towards the poles. Subtropical gyres are also responsible for concentrating plastic trash in certain areas of the ocean.

In contrast to wind-driven surface currents, deep-ocean currents are caused by differences in water density. The process that creates deep currents is called thermohaline circulation—“thermo” referring to temperature and “haline” to saltiness.

It all starts with surface currents carrying warm water north from the equator. The water cools as it moves into higher northern latitudes, and the more it cools, the denser it becomes.

In the North Atlantic Ocean, near Iceland, the water becomes so cold that sea ice starts to form. The salt naturally present in seawater does not become part of the ice, however. It is left behind in the ocean water that lies just under the ice, making that water extra salty and dense. The denser water sinks, and as it does, more ocean water moves in to fill the space it once occupied. This water also cools and sinks, keeping a deep current in motion.

This is the start of what scientists call the “global conveyor belt,” a system of connected deep and surface currents that moves water around the globe. These currents circulate around the globe in a thousand-year cycle.

https://education.nationalgeographic.org/resource/ocean-currents/

*
SLOWDOWN OF THE OCEAN CURRENTS

Visualization of ocean currents in the North Atlantic. The colors show sea surface temperature (orange and yellow are warmer, green and blue are colder).

As the ocean warms and land ice melts, ocean circulation — the movement of heat around the planet by currents — could be impacted. Research with NASA satellites and other data is currently underway to learn more.

Dynamic and powerful, the ocean plays a vital role in Earth’s climate. It helps regulate Earth’s temperature, absorbs carbon dioxide (CO2) from the atmosphere, and fuels the water cycle.
One of the most important functions of the ocean is to move heat around the planet via currents.

The Atlantic Ocean’s currents play an especially important role in our global climate. The movement of water north and south throughout the Atlantic might be weakening due to climate change, which could become a problem. To help understand why, let’s explore what drives large-scale ocean circulation.

Winds and Earth’s rotation create large-scale surface currents in the ocean. Warm, fast currents along the western edges of ocean basins move heat from the equator toward the North and South Poles.

One such current is the Gulf Stream, which travels along the eastern coast of North America as it carries warm waters from the tropics toward Europe. This warm water, and the heat it releases into the atmosphere, is the primary reason Europe experiences a more temperate climate than the northeastern U.S. and Canada. For example, compare the climates of New York City and Madrid, Spain, which are both about the same distance north of the equator.

Differences in density drive slow-moving ocean currents in the deep ocean. Density is an object’s mass (how much matter it has) per unit of volume (how much space it takes up). Both temperature and saltiness (salinity) affect the density of water. Cold water is denser than warm water, and salty water is denser than fresh water. Thus, deep currents are typically made of cold and salty water that sank from the surface.
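
To make that relationship concrete, here is a minimal Python sketch of a simplified linear equation of state for seawater; the reference values and coefficients are rough, illustrative approximations (real oceanographic work uses the full nonlinear TEOS-10 standard), but they capture the direction of the effect: colder and saltier water comes out denser.

# Rough linear approximation of seawater density; constants are illustrative, not precise.
RHO_0 = 1027.0            # reference density, kg/m^3
T_0, S_0 = 10.0, 35.0     # reference temperature (deg C) and salinity (practical salinity units)
ALPHA = 0.00017           # approximate thermal expansion coefficient, per deg C
BETA = 0.00076            # approximate haline contraction coefficient, per salinity unit

def seawater_density(temp_c, salinity):
    """Lower temperature and higher salinity both increase the result."""
    return RHO_0 * (1 - ALPHA * (temp_c - T_0) + BETA * (salinity - S_0))

print(seawater_density(25.0, 36.0))   # warm tropical surface water: roughly 1025 kg/m^3
print(seawater_density(2.0, 35.5))    # cold, salty North Atlantic water: roughly 1029 kg/m^3

The second value is larger, which is the story of deep-water formation in miniature: cooling and added salt make surface water heavy enough to sink.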

One location where surface water sinks into the deep ocean is in the North Atlantic. When water evaporates and gives up some heat to the air, the sea gets colder and a little saltier. Plus, when sea ice forms, it freezes the surface water leaving behind salt, which makes the remaining seawater saltier. Once this colder, saltier water becomes dense enough, it sinks to the deep ocean. 

Warmer, less dense water from the Gulf Stream rushes in to replace the water that sinks. This motion helps power a global “conveyor belt” of ocean currents – known as thermohaline circulation – that moves heat around Earth. Scientists measure the flow of Atlantic waters north and south, at the surface and in the deep, to assess the strength of this Atlantic Meridional Overturning Circulation (AMOC).

As the concentration of carbon dioxide rises in the atmosphere from human actions, global air and ocean temperatures heat up. Warmer water is less dense, and thus harder to sink. At the same time, Greenland’s ice sheet is melting due to warming air and ocean temperatures, and the melted ice is adding fresh water into the North Atlantic. This change reduces the water’s saltiness, making it less dense and harder to sink.

If enough water stops sinking, then the AMOC will weaken. Depending on how much the AMOC weakens, it can change regional weather patterns, such as rainfall, and affect where and how well crops can grow. According to the latest report from the Intergovernmental Panel on Climate Change (IPCC) — which includes research from hundreds of scientists — the AMOC is “very likely to weaken over the 21st century” due to climate change.

Scientists using temperature and sea level records have inferred the AMOC’s strength over the past century, and the evidence suggests that it might have already weakened. However, direct measurements over the past 30 years have not yet confirmed such a decline.

When and how much the AMOC will weaken is an area of ongoing research. Satellites such as the Gravity Recovery and Climate Experiment (GRACE), GRACE-FO, and ocean height-measuring altimeters can observe ocean features related to the AMOC — complementing measurements from ocean buoys and ships.

Current projections from the IPCC show that the AMOC is unlikely to stop, or collapse, before the year 2100. However, “if such a collapse were to occur," the IPCC says, "it would very likely cause abrupt shifts in regional weather patterns and the water cycle.” These could include “a southward shift in the tropical rain belt, weakening of the African and Asian monsoons, strengthening of Southern Hemisphere monsoons, and drying in Europe,” impacts that would greatly alter food production worldwide.

As more data are gathered and analyzed, scientists will be able to better predict current changes and impacts of those changes in the future. ~

https://science.nasa.gov/earth/earth-atmosphere/slowdown-of-the-motion-of-the-ocean/

*
HOW LIFE EMERGED

Researchers on a quest to understand the origins of life just learned a little lesson about photosynthesis from 1.75 billion years ago.


In a new study published in Nature, a team of researchers claims that microfossils found in the desert of northern Australia show the earliest known signs of photosynthesis. And that could mean a better understanding of how all of life could have begun.

These microfossils are remnants of a type of organism called cyanobacteria, which experts believe have been around for as long as 3.5 billion years (though the oldest confirmed fossil examples are from about 2 billion years ago). At some point in their evolution, some varieties of these organisms developed thylakoids—structures within cells in which photosynthesis occurs—which may have allowed them to contribute huge amounts of oxygen to Earth’s atmosphere through photosynthesis in what has become known as the Great Oxidation Event.

These new findings offer up the oldest evidence of photosynthesis found to date. The researchers claim that their discovery extends the fossil record by at least 1.2 billion years, and that these cells, the earliest known photosynthesizers, lived roughly 1.75 billion years ago.

“[This discovery] allows the unambiguous identification of early oxygenic photosynthesizers and a new redox proxy for probing early Earth ecosystems,” the authors wrote in the paper, “highlighting the importance of examining the ultrastructure of fossil cells to decipher their paleobiology and early evolution.”

These exciting fossils were discovered in ancient rocks—located in the McDermott Formation in northern Australia—and feature the pigment chlorophyll, which allows organisms to absorb sunlight during photosynthesis. The presence of chlorophyll was enough for researchers to determine that photosynthesis had occurred in these little compartments, which would mean that the process evolved much earlier than was previously demonstrable.

And that would likely help explain the Great Oxidation Event. Evidence in the fossil record shows us that there was a huge jump in atmospheric oxygen levels around 2.4 billion years ago. It was critical to the existence of life on Earth as we know it, and while scientists aren’t sure what caused it, one theory is that this is around the time that photosynthetic organisms evolved into being and began to exist in large numbers. By dating fossilized cells with the necessary components for photosynthesis to as close to that oxygen-flourishing event as possible, researchers are able to move one step closer to understanding the role of oxygen—and the cells helping create it—in the origins of life on Earth.

Of course, the next step is more research. Specifically, the team intends to examine fossil cells across the world to see just how well they match up with this new timeline.


“We predict,” the authors wrote, “that similar ultrastructural analyses of well-preserved microfossils might expand the geological record of oxygenic photosynthesizers and of early, weakly oxygenated ecosystems in which complex cells developed.”

Take a deep breath, and dive on into the science. ~

https://www.popularmechanics.com/science/a46317064/photosynthesis-origins-of-life-new-discovery/?utm_source=webstories&utm_medium=organic

*
RECESSION AND INCREASE IN LONGEVITY

There's a reason governments spend so many taxpayer dollars digging their economies out of recessions. Families lose their homes. Children go malnourished. New grads spend years struggling to get their careers back on track, forgoing marriage and kids and homeownership. But a growing body of research suggests that recessions are good for at least one thing: longevity. Puzzlingly, it appears that economic downturns actually extend people's lives.

The latest evidence comes from "Lives vs. Livelihoods," a new paper by four researchers led by the renowned health economist Amy Finkelstein. They found that during the Great Recession, from 2007 to 2009, age-adjusted mortality rates among Americans dropped 0.5% for every jump of 1 percentage point in an area's unemployment rate. The more joblessness, the longer people lived — especially adults over 64 and those without a college education.
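
As a back-of-the-envelope illustration of that 0.5%-per-point figure, here is a tiny Python sketch; the 5-point unemployment jump fed into it is a hypothetical input, not a number from the study.

MORTALITY_DROP_PER_POINT = 0.005   # 0.5% lower age-adjusted mortality per 1-point rise in unemployment (from the paper)

def implied_mortality_drop(unemployment_rise_points):
    """Fractional fall in mortality implied by a given rise in the local unemployment rate."""
    return MORTALITY_DROP_PER_POINT * unemployment_rise_points

print(f"{implied_mortality_drop(5.0):.1%}")   # a hypothetical 5-point jump implies roughly 2.5% lower mortality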

"These mortality reductions appear immediately," the economists concluded, "and they persist for at least 10 years." The effects were so large that the recession effectively provided 4% of all 55-year-olds with an extra year of life. And in states that saw big jumps in unemployment, people were more likely to report being in excellent health. Recessions, it would seem, help us stay fitter, and live longer.

The question, of course, is why. The economists ruled out a lot of possible explanations. Laid-off workers weren't using their free time to exercise more, or cutting back on smoking or drinking because money was tight. Infectious diseases like influenza and pneumonia kept right on spreading, even though fewer people were going to work and dining out. Retirees didn't seem to be getting better care, even though rising unemployment rates made it easier for nursing homes to staff up. So what could the explanation be? How does higher unemployment lead to longer life?

The answer was pollution. Counties that experienced the biggest job losses in the Great Recession, the economists found, also saw the largest declines in air pollution, as measured by levels of the fine particulate matter PM2.5. It makes sense: During recessions, fewer people drive to work. Factories and offices slow down, and people cut back on their own energy use to save money. All that reduced activity leads to cleaner air. That would explain why workers without a college degree enjoyed the biggest drops in mortality: People with low-wage jobs tend to live in neighborhoods with more environmental toxins. It would also explain why the recession reduced mortality from heart disease, suicide, and car crashes — causes of death all linked to the physical and mental effects of PM2.5. Overall, the economists found, cleaner air was responsible for more than a third of the decline in mortality during the Great Recession.

The new paper, along with other research into recessions, provides an important reminder that economic growth isn't — and shouldn't be — the only measure of our collective well-being. If recessions save lives, that comes with a corollary: Boom times cost lives. An economy firing on all cylinders creates more jobs — but it also generates all sorts of unseen but harmful side effects. "Our findings suggest important trade-offs between economic activity and mortality," the authors conclude. That's economist-speak for two very bad choices: Would you prefer wealth that kills you, or poverty that keeps you alive?

It's that dilemma that has given rise to what's known as the degrowth movement — the idea that the gross domestic product doesn't provide us with an accurate read on human progress. Sure, economic growth provides jobs. But it doesn't tell us anything about the health of our children or the safety of our neighborhoods or the sustainability of our planet. What's the point of having all this money, the degrowthers ask, if it's making us worse off?

I'm sympathetic to that line of reasoning — up to a point. But I don't think that actually shrinking the economy, as some degrowthers advocate, is a good idea. Lower growth inevitably leads to higher unemployment, and that's not a trade-off we should be willing to accept. I grew up in Japan, a country degrowthers often point to as a model for slower growth. It's true that Japan is politically stable, clean, and safe even though its economy has stalled for 30 years.

But there's something about long-term economic stagnation that saps a country's hope. Nothing changes — in politics, in culture, in society — even when everyone knows it's bad. Without realizing it, I had settled into this national inertia, the belief that nothing could be done. It was only in 2012, when I moved to San Francisco, that I started to feel real agency over the direction of my life. Everyone around me believed they could change the world, and the sense of optimism was contagious.

The degrowth movement presents us with a false choice. The solution to bad growth isn't less growth. It's better growth. With stronger regulation and smarter innovation, I'm confident we can find ways to create jobs without destroying the environment and shortening our lives. 

If the new research tells us anything, it's that we still have a long way to go in striking a healthy balance between economic growth and social welfare. We shouldn't have to choose between working and living. ~

https://www.businessinsider.com/recessions-mortality-degrowth-economy-gdp-unemployment-environment-pollution-2024-3?utm_source=pocket-newtab-en-us


*
DOES THE EARLY BIRD ALWAYS GET THE WORM?

Early risers get a lot of good press: They are supposedly more productive and possibly better problem solvers. But after a month of forcing myself out of bed at 5 a.m., I learned that getting up early isn’t always the best thing for you.

I’m a morning person, and most days I’m out of bed by 5:45 a.m. I usually have 15 minutes before the rest of my household starts to wake, and I use this time to enjoy a cup of tea as well as the stillness of the morning. I look forward to this time so much that I wondered, What would happen if I expanded the 15 minutes to an hour?

While it was a nice thought, getting up at 5 a.m. was harder than I expected. My alarm went off a mere 45 minutes earlier than normal, but I had to drag myself out of bed. With no plan other than tea and stillness, I quickly learned that an hour is too long. The second day I decided to meditate, a practice I’ve wanted to do but never seemed to have the time for. Unfortunately, I fell asleep in my chair. Eventually, I took out a piece of paper and did a brain dump of all the things I wanted to get done in January–at least I had a plan.

As the month went on, I used the time to get a head start on work, but by 9 p.m., I was exhausted and would head to bed. That meant I lost out on evening time with my husband and son.

Why was 5 a.m. so much harder than 5:45 a.m.?

Forty-five minutes can make a huge difference, says Damon Raskin, MD, a sleep expert affiliated with Concierge Choice Physicians in Pacific Palisades, California. “We get our deep restorative sleep in the early-morning waking hours when REM sleep occurs,” he says. “If you shorten that, you are going to feel unrefreshed, and you’re not going to have enough sleep.”

A Better Way to Get Up Early

Turns out that simply adjusting your alarm clock isn’t the best way to make a long-term change. Instead, understand that your brain is always looking for patterns, says Shawn Stevenson, author of Sleep Smarter: 21 Proven Tips to Sleep Your Way to a Better Body, Better Health and Bigger Success.

“Your body clock, or circadian rhythm, governs how your body is in sync with all of life, and when you make a shift in that, there will be residual fallout,” he says. “By waking up 45 minutes earlier, you proactively created at-home jet lag. If you keep pressing it for several days, your body will eventually sort itself out, but there is a more graceful way to do it.”

First, put away electronics at least an hour before bed; they affect the quality of your sleep. “When it comes to our health, most of us know that calories aren’t equal; 300 calories of broccoli aren’t the same for your body as 300 calories of Twinkies,” he says. “Sleep is similar, and unfortunately many today are getting Twinkie sleep, not cycling through proper brain activity because electronic devices suppress melatonin (the hormone that controls sleep cycles).”

Every hour you are exposed to blue light from a device, you suppress melatonin production for 30 minutes, says Stevenson. “You may be getting eight hours of sleep, but you will still wake up feeling exhausted,” he says.

Morning exercise will also help by regulating your cortisol levels, the hormone that gets you going in the morning, says Stevenson. “Normal cortisol rhythms spike in the morning and then gradually bottom out in the evening,” he says. “If you are changing your wake time, five minutes of exercise can help reset your rhythm. Do body-weight squats or walk around the block.”

Implementing a gradual wake time will also help. “Move your wake time up by 15 minutes and go through that for a couple of days to a week,” says Stevenson. “This is especially important if you want to establish a consistent sleep pattern.”

And not having a strong plan doesn’t help, says Stevenson. “If you don’t have a reason to get up, and your body wants to rest, forget about it,” he says. “You need something that will fill that space that is compelling.”

The Benefits of Getting Up Early

Being the proverbial “early bird” has its advantages, says Shanon Makekau, medical director of the Kaiser Permanente Sleep Lab in Hawaii.

“Morning people have been shown to be more proactive, which is linked to better job performance, career success, and higher wages, as well as more goal-oriented,” she says. “These people tend to be more in sync with the typical workday schedule, versus night owls who may still be waking up at around lunchtime.”

Early-morning hours also tend to be more productive because there are fewer distractions.

Jeremy Korst, CMO of the automated tax software provider Avalara and former general manager of the Windows 10 group at Microsoft, gets up between 3:30 and 4 a.m. for two reasons: clarity of thought during that part of the day and quiet time. He does strategic work from 4 a.m. to 6:30 a.m. that requires focus, then he works out and heads to the office.

“No one else is awake yet, and it’s quiet,” he says. “This isn’t a time for clearing my inbox; this is heads-down work time, during which I’m more productive than any other time of day. Without distraction and a bit of separation from the flurry of the prior workday, I can truly focus on important work.”

Getting up early makes Korst feel like he’s got a jumpstart on the day: “I’m in the office early, so I am already ahead of the day and the schedule a bit,” he says. “This helps as calendars are nearly always jammed–getting ahead of it is critical.”

What Happened When the 30 Days Were Over

Unfortunately, my experiment didn’t produce long-lasting results. When my month was over, I immediately returned to my normal 5:45 a.m., which felt like sleeping in. I even slept until 10 a.m. on weekend mornings–a very rare occurrence for me. I feel more productive now that I’m back to my normal routine.

“The jury is still out regarding whether or not simply shifting one’s wake time earlier is enough to garner all of the positive benefits of the early bird,” says Makekau. “It may be that one’s internal tendency toward productivity is inherent or, more importantly, is tied to the congruency between the internal sleep/wake clock and one’s external schedule. Night owls could be just as productive as long as they are allowed to work on a delayed schedule.”

https://getpocket.com/explore/item/what-happened-when-i-forced-myself-to-wake-up-at-5-a-m-every-day-for-a-month?utm_source=pocket-newtab-en-us


*
THE POWER OF REST

Our society may encourage hustle and grind as the only means to be productive, but if you really want to maintain your well-being and efficiency, stop and rest. Sleep deprivation reduces productivity and negatively impacts our health. Employees with fatigue cost employers $136.4 billion a year in health-related lost productive time.

Many of us have a habit of pushing through. But working through our lunches and not taking small breaks can actually be very counterproductive. Pushing past our limits causes our productivity to take a nosedive, coupled with additional stress and eventually burnout.

Have you ever thought of rest as being productive? For many, rest seems like an aspirational word. It sounds good and easy in theory but difficult to apply. However, rest is part of the process and is integral to our journey to success.

We have to be intentional with implementing rest into our daily lives, otherwise, it just won’t happen. Using a productivity hack such as the Pomodoro Technique can help you regain some level of control over how you manage your time. It helps you prioritize your breaks, minimize distractions (which use up additional mental energy), and optimize your brain so you can be more efficient and productive.

The technique can be broken down into five easy steps:
Pick a task
Set a 25-minute timer
Work on your task until time is up
Take a 5-minute break
Every 4 Pomodoros, take a 15-to-30-minute break

This technique allows you to break down complex tasks and cluster small tasks together. The key is that you have to stay focused on that one Pomodoro [the word means “tomato” in Italian]. Even the smallest distraction can interrupt your workflow, so turn off your phone and email notifications. If there is a disruption, try to see how you can avoid it in your next session.
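
As a rough illustration only, the five steps can be sketched as a bare-bones command-line timer in Python; the durations follow the list above (with 20 minutes chosen from the 15-to-30-minute range for the long break), and the task name is just a placeholder.

import time

WORK_MIN, SHORT_BREAK_MIN, LONG_BREAK_MIN = 25, 5, 20

def countdown(minutes, label):
    print(f"{label}: {minutes} minutes")
    time.sleep(minutes * 60)   # stand-in for a real timer or notification

def pomodoro(task, sessions=8):
    for n in range(1, sessions + 1):
        countdown(WORK_MIN, f"Work on '{task}' (Pomodoro {n})")   # pick a task, set 25 minutes, work until time is up
        if n % 4 == 0:
            countdown(LONG_BREAK_MIN, "Long break")               # every 4 Pomodoros, take the longer break
        else:
            countdown(SHORT_BREAK_MIN, "Short break")             # otherwise, take a 5-minute break

pomodoro("placeholder task")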

Taking a break allows you to come back to a task with a different perspective. It gives your brain a chance to tap into clarity, creativity, and insight. Here are some other ways you can detach from work so that you can better connect later.

Set and maintain boundaries

Working from home became the norm for many during COVID-19, and it has become challenging for some to create healthy boundaries around work. With the lines now blurred, the 9-to-5 parameters have changed, especially if you have young children at home.

Technology has allowed us to become easily accessible. The problem is, more often than not, people feel entitled to our time and expect an immediate response. The key thing to remember is to maintain boundaries around your time. Just because you are accessible does not make you available. Don’t feel rushed to reply to the email or text, even if you have the read receipt option on. Minimize stress and anxiety by practicing mindfulness, and enjoy whatever you were doing before that call, text, or email came through, especially if it arrives after work hours or on the weekend. Do this often enough and others will learn to adjust accordingly.

Take a nap

Want to decrease fatigue, enhance performance, and reduce mistakes? Try adding naps into your daily or weekly routine. Naps have restorative power. Even a 20-to-30-minute nap can effectively recharge your battery. Set an alarm and minimize distractions (similar to the Pomodoro Technique).

However, be mindful of when you take your nap. Notice when your brain naturally begins to tucker out and you are most sleepy. More than likely it’s around the same time every day. Knowing this, you can actually begin scheduling your naps into your workday.

Move away from your desk

Getting up to stretch, going outside, and giving your eyes a break can be energizing. This is essential, especially for many who are working from home full time. It helps to get the blood circulating, which is good for your physical and brain health. If you choose to go outside, it provides a nice change in scenery. Doing this activity can help to decrease eye strain and bouts of fatigue. If you really want to be more productive and be creative, take these necessary breaks. You might be surprised at the clarity and insight that comes from stepping away from work.

Operating on fumes can leave you feeling stuck, slow you down, increase your mistakes, and stifle innovation. Our brains are like machines, but we are not robots. Implementing rest periods, whether via the Pomodoro Technique or some other method, will help reduce fatigue and stress. This goes for parents with newborns, business executives, and seasoned truck drivers. When you make rest a priority, you will be pleasantly surprised by how much more you can accomplish by doing less.

https://www.fastcompany.com/90630426/how-doing-less-can-help-you-accomplish-more

*
THE DARK SIDE OF DINKS

If you've heard the acronym DINK lately, you might have this publication to blame.
In recent months, we've written about DINKs — Dual Income, No Kids — using their inflated net worths to retire early, travel the world, and buy boats.

For some, being a DINK is almost like a cheat code for achieving the American dream: It allows adults to sidestep the economic walls closing in on many millennials and Gen Zers struggling to afford housing, childcare, and healthcare. DINKs are in a better position to buy houses, go on vacations, and plan to retire early.

But it's not all romantic getaways and immaculate houses.

Even people who are happy to be childfree sometimes feel left behind or isolated in a culture that still deems parenthood the correct life path. But there's an even darker side to DINKs: The slice that forgoes kids not by choice but out of necessity. These are the Americans who would love to be parents but find that they can't swing it financially. They're more accurately described as childless rather than childfree.

It's difficult to parse out the exact number of Americans who might want kids but can't have them. We know that the childfree group — people who don't want kids — might be about 20% of the US adult population. But it's harder to track down the people who might otherwise have kids were the circumstances different.

A survey from NerdWallet and Harris Poll polling over 2,000 US adults in December found that 56% of non-parents don't plan to have kids. Of this group, 31% said that the "overall cost of raising a child is too high.”

"The big takeaway — and what really stood out to me — is just how big the cost of having kids looms for both current parents who are considering having more kids, and also people who don't have children yet," Kimberly Palmer, personal finance expert at NerdWallet, told BI.

In an economy as large as the US's, any story about the economy is probably true somewhere. It's why some industries can't fill open jobs while people in other professions struggle to find a new role, or why a country where jobs are abundant and wages on the rise also has a housing and childcare affordability crisis — and a dreary general economic outlook. And when it comes to DINKs, that duality is true as well — for some, it's a boon; for others, it's less of a choice than a need.

"We have over 150 million people working in the US economy," Kathryn Edwards, an economist at the RAND Corporation, previously told BI. "Whatever can be true is true for at least one person. Having that many workers means that you can have two true stories that are in absolute conflict, and it totally makes sense that they're both in our labor market.”

While there are signs that our society is coming to greater acceptance of childfree people, evidence points to our economy moving in the opposite direction. A Business Insider calculation earlier this year found that parents could spend $26,000 raising a kid in 2024. As birth rates are dropping, costs for housing, childcare, and medical care are rising. It's contributing to a whole population of DINKs who can't afford to shed the moniker.

Larry Bienz, 38, is a social worker and DINK in Chicago. He said he might be a parent in a different country, one with different priorities and infrastructure. But he's chosen not to be in this one.

In our society, Bienz said, "the first and foremost priority is that people are working in a job. Everything else comes after making sure that we are working on a job.”

He could imagine a life where he has kids, but the lifestyle it would require — both parents juggling jobs, housework, and childcare on little sleep — just doesn't seem sustainable.

Bienz already feels like he doesn't have enough time to invest in not just pleasure activities but also being civically engaged and part of his community. Layer onto that, as Bienz notes, a country with a stagnant minimum wage and without guaranteed paid leave or affordable healthcare, where parents rely on underpaid educators and day care workers. Meanwhile, in other countries, parents can have up to a year of paid parental leave guaranteed.

"Our system says, 'Oh, it's okay. You can get up to 12 weeks unpaid and you won't get fired from your job," Bienz said. "It's like, what a joke. 'Oh, I won't get fired for my job if I want to stay home with my baby for a while!’"

Bienz pointed to the example of the "welfare queen" — a concept "packed full of racism" — as an example of what it feels like the system won't allow: Someone using public resources to feed and house their family without working a job.

"The one thing we can't live with is a welfare queen," he said. "We're perfectly fine with, let's just say, a single mom working 60 hours a week paying for daycare.”

That'll be $30,000, please

Amelia and Kevin desperately want to be parents.

The couple — who are 37 and 43 respectively — have been trying to get pregnant for 18 months. They've bought a bigger house in a good school district in anticipation of the kids who would come. But there's no baby for them yet. And in a country where reproductive care grows scarcer by the moment, and health insurance offers only a piecemeal approach to affording treatment, they're having to think about how much, exactly, they can afford to spend on the act of having a child.

"We're the ideal situation. We're a happily married couple. We have good jobs, we're well educated," Amelia, whose last name is known to BI but withheld for privacy reasons, said. "What infertility feels like — it feels like every month you're attending a class and at the end of the month you go to take the final and the teacher comes up to you and goes, what are you doing? You're not in this class.”

The couple hasn't yet invested in medical interventions like intrauterine insemination (IUI) or in vitro fertilization (IVF), but they have already spent over $1,000 on treatments, therapy, and doctor's visits — and that's with insurance. More in-depth treatment, like IVF, likely wouldn't be covered, they said. And that's before considering any other potential risks, as reproductive rights and access to IVF become more imperiled in the US.

The American Society for Reproductive Medicine estimates that the average cost of an IVF cycle is $12,400, and other estimates have it coming in closer to $25,000. Meanwhile, adoption in the US can end up costing anywhere from $20,000 to $50,000, per the US Children's Bureau. Comparatively, the median household income in the US is $74,580, meaning a household trying to embark upon parenthood through nontraditional means might be putting over a third of their earnings toward it.
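
For what it's worth, the arithmetic behind that "over a third" figure is easy to check against the numbers cited above; the quick Python below just divides each cost estimate by the median household income.

MEDIAN_HOUSEHOLD_INCOME = 74_580   # US median household income cited above

for label, cost in [("IVF cycle, ASRM average", 12_400),
                    ("IVF cycle, higher estimate", 25_000),
                    ("Adoption, low end", 20_000),
                    ("Adoption, high end", 50_000)]:
    print(f"{label}: {cost / MEDIAN_HOUSEHOLD_INCOME:.0%} of median household income")

The higher IVF estimate works out to roughly a third (about 34%), and adoption at the high end approaches two-thirds of a year's median household income.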

As countries around the world bemoan falling population rates and some politicians in the US try to limit access to abortion, Amelia thinks the government could step in.

"Everyone's saying you shouldn't abort, you should put your kid up for adoption, you should adopt. I'm like, great, let us adopt," Amelia said. "Oh no, that'd be $30,000, please.”

Amelia's not alone: In the NerdWallet survey of people who were not parents, 11% of respondents said it was because the cost of infertility treatments was too high, and 10% similarly said that the cost of adoption is too high. Those high costs are also colliding with a perfect reproductive storm as treatments like IVF become potentially even riskier.

"In Michigan, we saw a big jump in the number of child-free people following the overturning of Roe v. Wade," Zachary Neal, a professor in Michigan State University's psychology department's social-personality program, told me. In anecdotal responses in a landmark study of childfree adults Neal conducted with fellow Michigan State psychology professor — and his wife — Jennifer Neal, men were even thinking about the decision's repercussions, saying that they would not want to expose their partner to potential medical risks. "Both men and women are thinking in this climate, it just became too risky to be a pregnant person.”

The baby elephant in the room

Priscilla Davies is a 41-year-old actor, writer, and content creator. As an elder millennial, she's seen the ups and downs of multiple "economic crises." Davies is single and childfree by choice — in part because of the ways that marriage places uneven gendered burdens upon women.

"The establishment calls out the issues — they call it out from the wrong angle — and they're like, 'Oh, millennials are killing families. They don't want to have children. They're so selfish. They're always coming at it from the wrong angle as opposed to calling out what the issue is. And it's basically an elephant in the room," Davies said. "We all know that this economic system does not work.”

Younger parents have told Business Insider that the idea of the caretaking village has been washed away by ever-higher costs, skyrocketing rents, grandparents still working, and the loss of safe third spaces for kids and parents alike to congregate. Without a village, parenthood feels even more untenable.

"We should be providing financial assistance to parents. Just point blank, period. You have a child, we as a social society are invested in our children, so let's help people raise their children and that community, unfortunately — because we live in this hyper-capitalistic environment — that community is going to have to come at this stage from money," Davies said.

The tale of two DINKs — those who have happily opted into the lifestyle and others who have been pushed in — might sound like a story of diverging paths. But, ultimately, they're two sides of the same coin: They want to have a choice.

For the DINKs who are happily living it up, that means the choice to exist peacefully and respectfully as a childfree adult. It means a world that respects their choices and can envision something beyond a traditional family structure that provides meaning, love, and satisfaction.
And for the DINKs-by-default, it means a path to parenthood, no matter their financial standing.

Right now, though, neither is reality. And that's leading childfree and childless people alike to experience isolation and difficult calculations. As for Amelia and Kevin, they said they're taking it one step at a time.

"It really comes down to do you have the money to have a child? And that's a very depressing situation to be in," Amelia said. "How much is a child worth it to you? Isn't a child worth $30,000? Isn't it worth it to you? I'm like, of course it's worth it, but I don't have $30,000.”

https://www.businessinsider.com/dinks-childfree-parents-choice-kids-childless-2024-3



*
MARILYNNE ROBINSON RE-READS GENESIS

~ Marilynne Robinson’s novels always leave me with a visceral impression of celestial light. Heavenly bulbs seem to switch on at climactic moments, showing a world as undimmed as it was at Creation. “I love the prairie! So often I have seen the dawn come and the light flood over the land and everything turn radiant at once,” writes John Ames, the narrator of Gilead, an elderly preacher approaching death as if returning to the birth of being. “And God saw the light, that it was good,” the Bible says, and Ames sees that it’s good, too: “that word ‘good’ so profoundly affirmed in my soul that I am amazed I should be allowed to witness such a thing.”

Robinson is one of the greatest living Christian novelists, by which I don’t just mean that she’s a Christian—though she is an active one—but that her great novels (five so far) and her versatile, morally stringent essays (four collections and a book of lectures, on subjects including Darwinism and the Puritans as well as her own childhood) reflect a deep knowledge and love of Christianity. Robinson, who has taught Bible classes and preached at her church in Iowa City, Iowa, is a learned lay theologian of the Calvinist variety. In many of her essays and particularly in Gilead, she makes us aware of a John Calvin who does not at all conform to his reputation as a dour ascetic.

This is the stuff of sermons—the kind I’d willingly sit through. But Robinson is also up to something that should interest her secular readers. She’s working out a poetics. In her deft hands, Genesis becomes a precursor to the novel—the domestic novel, as it happens, which is the kind she writes. Perhaps I’m making her sound self-glorifying. She’s not. She makes her case.

Robinson’s main claim is that Genesis invented a kind of realism—this-worldly, nonmythological—remarkably akin to our understanding of the term. This is outrageous, impossible to defend—if you’re a literary historian. But she’s not doing history. She’s writing an essay about biblical style and its implications. She wants us to see how radical scripture is compared with its sources.

For one thing, it’s human-centered. The Babylonian epics that the Bible recasts—the Enuma Elish, the Epic of Gilgamesh—tell the origin myths of a passel of quarrelsome gods. The Enuma Elish’s gods created people so that they would serve their Creators—build their temples, grow their food. “There is nothing exalted in this, no thought of enchanting these nameless drudges with the beauty of the world,” Robinson writes. In Genesis, by contrast, humankind is made in God’s image; all the sublimity of biblical Creation seems to be meant for its benefit. We move from gods indifferent to our well-being to a God obsessively focused on us.

Why that happens is not immediately clear. The protagonists of Genesis are unlikely candidates for God’s solicitude. One innovation of the Western novel is to shift the emphasis from great men and women to ordinary people in ordinary circumstances. But the biblical author is also interested in unexceptional folk. The founding fathers and mothers of Israel aren’t kings or warriors or, like Moses, a former prince who rescues an enslaved nation. The patriarchs raise sheep. Indeed, God seems to pick his covenantal partner, Abraham, at random. Why bind himself to a son of idolaters “drifting through the countryside, looking for grazing for his herds,” in Robinson’s words? Why not the next guy?

Apologists wave away that theological conundrum—the apparent contingency of election—by claiming that Abraham is unusually righteous, Kierkegaard’s exemplary “knight of faith.” But if Abraham is indeed thoroughly good, he’s the exception. Every other major character in Genesis has an unsavory side. God made a covenant with Noah, too, for instance, and although he is chosen to survive the flood because he is a righteous man, he isn’t afterward. He gets dead drunk, and his son Ham sees him naked in his tent. Ham tells his brothers; they enter the tent backwards, averting their eyes, and cover him with a blanket. Noah wakes up, feels humiliated, blames Ham, and lays a curse—not on Ham but on Ham’s son Canaan, who is condemned to be a slave to Ham’s brothers.

The Bible offers no excuse for Noah’s cruelty, or for many other misdeeds committed by its chosen people. “There is nothing for which the Hebrew writers are more remarkable than their willingness to record and to ponder the most painful passages in their history,” Robinson writes.

That history, with its providential arc, works itself out through family dramas of this kind, more than it does through cosmic events like the flood. At first, both share the stage: The glorious tale of Creation segues to Adam and Eve nervously fobbing off responsibility for eating the apple. Their son Cain commits fratricide, and his descendants bequeath lyres, pipes, and metallurgy to humankind. The genealogies culminate in Abraham, the first patriarch, whose household is made turbulent by rivalry among wives and among siblings.

Then the tone grows hushed. Everything in the background fades, leaving only God, Abraham, Sarah, their household, and their occasional journeys. “As soon as the terms are set for our existence on earth,” Robinson writes, “the gaze of the text falls on one small family, people who move through the world of need and sufficiency, birth and death, more or less as we all do.” Of course, unlike us, they speak with God, but that, Robinson adds, in a sneaky homiletic twist, is “a difference less absolute than we might expect.” Robinson thus redefines realism to encompass the encounter with the divine. Furthermore, if she can bring us to acknowledge that biblical characters are realistic, that they portray us, then we should probably admit that we may, like them, be God’s interlocutors, whether we know it or not.

The genius of Reading Genesis lies in its collapse of the space between the holy and the mundane, the metaphysical and the physical. God resides in commonplace things; his sublime purposes course through the small-bore tragedies of unremarkable people, to be revealed in the fullness of time. God is himself and the world is itself—we are not speaking of pantheism here—but they are also one. This is a very Christian mystery that Robinson’s ushering us into, and the proper response is awe at the hallowed world she shows us, at the loveliness—and shrewdness—of the idea of divine indwelling. She does a lot with it. For one thing, it allows her to dismiss scientific skepticism of religion as not only reductive but unimaginative. How can “sacredness in existence” be disproved? Sanctity is immanent, not quantifiable.

Above all, Robinson’s God-infused theory of reality is also a theology of realistic fiction — of her brand of realistic fiction, in which the physical may suddenly be revealed as numinous and the spirit inheres in the flesh. I want to be clear: at no point in this book does Robinson talk about herself, her books, or the novel as a form. That’s not the sort of thing she’d do. This is me reading her reading. I see Robinson in her depiction of the biblical author, who in turn sometimes seems to merge with God. What she has in common with both the writer or writers of the Bible and God, as she depicts them, is a deep tenderness toward the subjects of their concern. “The remarkable realism of the Bible,” she writes, “the voices it captures, the characterization it achieves, are products of an interest in the human that has no parallel in ancient literature.” Nor, I would add, in a great deal of modern literature. This boundless and merciful interest in the human is what distinguishes her.

Two characters seem to inspire the most pity and love in Robinson: the patriarch Jacob and her own creation, Jack Boughton. Both sin greatly and suffer greatly. As a young man, Jacob tricks his older brother, Esau, into selling him his birthright (the right to lead the family, and a double portion of the estate), and then straight-up cheats Esau out of their father’s blessing. A lifetime of exile and intermittent misfortune follows. Jacob matures into a more thoughtful, mostly penitent man, but his punishment does not end there. Ten of his 12 sons turn out to be worse than he ever was. At one point, they collude in slaughtering the men of a village and carrying off its women. Jacob commits the offense of favoring one son, Joseph, over the others, and in retribution, they throw the boy into a pit, from which he is kidnapped and sold into slavery in Egypt. The brothers present their father with Joseph’s bloodied coat, the implication being that he’d been killed by a wild beast. Jacob never recovers from the blow.

Jack, like Jacob, is born into a family rich in blessings. His father is a minister who truly tries to do right by him, and Jack’s seven siblings—good, kind people—love and worry about him. Nonetheless, as a child and young man, he commits senseless crimes—mostly petty thefts—seemingly “for the sheer meanness of it,” the Reverend John Ames says in Gilead. Then Jack impregnates a very young girl, which tests his all-forgiving father to his limits, and he leaves town, staying away for 20 years. In Jack, we learn of his bitter life as a vagrant, and in Home, he tries to go home, with mixed success. His presence makes his father anxious, and Jack can’t bear the feeling that everyone mistrusts him. Insofar as forgiveness is on offer, he seems unable to accept it. At one point in Gilead, he asks his father and Ames, “Are there people who are simply born evil, live evil lives, and then go to hell?”

The Bible, Robinson declares in the first line of Reading Genesis, is a theodicy, a meditation on the problem of evil. So are the stories of Jacob and Jack. Why do they do what they do? Were they predestined to hurt others? We know how Jacob’s story ends: Joseph becomes the most powerful man in Egypt after Pharaoh and is in a position to rescue his family from starvation. This is why you did what you did, Joseph tells his brothers: God sent me ahead of you to ensure your survival.

Robinson, however, is more interested in what happens afterward, when Joseph brings Jacob to meet Pharaoh. His father is curiously querulous. “The great man asks him,” she writes, “How old art thou? Jacob answers that he will not live as long as his fathers did.” Robinson comments:

He has grown very old in fewer years, enduring a life of poverty and sorrow. He is the third patriarch, the eponymous ancestor of the nation Israel, which at that time will not exist for centuries. He has received the great promises of the covenant, including possession of the land he will only return to as an embalmed corpse.

This is the patriarch at his most self-pitying. God’s pact is with Jacob’s children’s children more than it is with him; it doesn’t compensate for his sorrows. Jacob cannot reconcile the double perspective that may be the Bible’s greatest literary achievement: the view from heaven, “with an eye toward unrealized history,” as Robinson puts it, and the view from “a nearer proximity” of the human agent of that history. He has been told the future, but that hasn’t blunted his grief, hasn’t reached “the level of ‘innermost’ feeling.”

Jack, too, struggles with the meaning of his affliction, less certain of vindication than Jacob. In Home, he waits for a letter from his estranged wife, whom we sense he sees as his salvation. Robinson torques the suspense: Jack has earned our sympathy—more, to be honest, than Jacob has—and on Jack’s behalf we want answers to his questions. Will the evils he has inflicted, and his terrible loneliness, be shown to have a larger purpose? Will the ways of God be known to men—to this poor man?

We get answers, up to a point. It’s not clear that he does. Maybe he has missed his chance; maybe he’ll get another one. Not knowing breaks the heart, but knowing would be cheating. Besides, as Jacob comes to show, knowing doesn’t necessarily help. "The Lord stands back," Robinson writes in Reading Genesis; his "divine tact" lets his characters achieve their "full pathos and dignity." Robinson does the same. The Bible was not given to man to simplify complexity, she says, but to speak of it with "a respect and restraint that resists conclusion." Therein lies its beauty, and that of the literature it has inspired. The realism of Genesis, as she says, is "by itself a sort of miracle." ~

https://www.theatlantic.com/magazine/archive/2024/03/marilynne-robinson-reading-genesis-book/677179/

*
BART EHRMAN ON HEAVEN AND HELL

Terry Gross:
When we originally scheduled the interview, we didn't realize how weirdly timely it would be. Let's face it — the pandemic has made death a presence on a scale most of us aren't used to. Your beliefs about what happens after death or if anything happens might shape how you're dealing with your fears and anxieties. In the new book, "Heaven And Hell: A History Of The Afterlife," my guest Bart Ehrman writes about where the ideas of heaven and hell came from. He examines the Hebrew Bible, the New Testament, as well as writings from the Greek and Roman era.

Ehrman is a distinguished professor of religious studies at The University of North Carolina, Chapel Hill and is one of America's most widely read scholars of early Christianity and the New Testament. His books such as "Misquoting Jesus" and "How Jesus Became God" challenge a lot of beliefs and common wisdom. As for Ehrman's beliefs, as a child, he was an altar boy in the Episcopal Church. At age 15, he became a born-again fundamentalist evangelical Christian. After attending the Moody Bible Institute, he studied at Princeton Theological Seminary, which introduced him to texts and interpretations that led him to a more liberal form of Christianity. Eventually, he left the faith altogether.

GROSS: Is it fair to say you're an atheist now?

EHRMAN: That is fair to say (laughter). I actually consider myself both an atheist and an agnostic because I — you know, I don't really know if there's a superior being in the universe, but I don't believe there is. And so in terms of what I know, I'm an agnostic. But in terms of what I believe, I'm an atheist.

GROSS: In a time like this, do you wish you could still believe in a heaven that offers eternal life, in a place where you would be united with loved ones?

EHRMAN: Yeah, that would absolutely be good. It's not that I wish I believed it; I wish that it were true. And as I say in my book, as we'll probably get to, it may be true that we will live after we die. But if we do, it'll be something pleasant like that. It's not going to be something awful. So I — you know, it's not that I wish I believed it so much as I wish that it were true.

GROSS: So what do you believe about death now, about what happens after you die?

EHRMAN: Well, I — you know, I've read about death and thought about death and the afterlife for many, many years now and what — you know, what philosophers say and theologians say and biblical scholars say and, you know, what people generally say. And I still think that Socrates is the one who probably put it best. When he was on trial, on capital charges — so it was a death sentence awaiting him — he was talking with his companions about what death would be, and his view is that it's one of two things.

Either we live on and we see those we knew before and those we didn't know before, and we spend all of our time being with them, which for him was absolute paradise because Socrates liked nothing better than conversing with people, and so now he could converse with Homer and with all the greats of the Greek past. So that would be great. And if it's not that, he said it would be like a deep sleep. Everybody loves a deep, dreamless sleep. Nobody frets about it or gets upset by having it. And so that's the alternative. And so it's either a deep sleep, or it's a good outcome, and either way it's going to be fine. And that's exactly what I think.

GROSS: One of the theses of your book about the history of heaven and hell is that views of heaven and hell don't go back to the earliest stages of Christianity, and they're not in the Old Testament or in Jesus' teachings. They're not?

EHRMAN: (Laughter) I know, exactly. This is the big surprise of the book, and it's the one thing people probably wouldn't expect because, you know, when I was growing up, I just assumed. This is the view of Christianity. So this must be what Jesus taught. This is what the Old Testament taught. And in fact, it's not right.
Our view that you die and your soul goes to heaven or hell is not found anywhere in the Old Testament, and it's not what Jesus preached. I have to show that in my book, and I lay it out and explain why it's absolutely not the case that Jesus believed you died and your soul went to heaven or hell. Jesus had a completely different understanding that people today don't have.

GROSS: Are there things in the Hebrew Bible that still support the idea of heaven and hell as people came to understand it, things that you can extract from the Old Testament that might not literally mention heaven and hell but still support the vision that emerged of it?

EHRMAN: I think one of the hardest things for people to get their minds around is that ancient Israelites and then Jews and then Jesus himself and his followers have a very different understanding of the relationship between what we call body and soul. Our view is that we — you've got two things going on in the human person. So you have your body, your physical being, and you have your soul, this invisible part of you that lives on after death, that you can separate the two and they can exist — the soul can exist outside of the body. That is not a view that was held by ancient Israelites and then Jews, and it's not even taught in the Old Testament.

In the Old Testament, what we would call the soul is really more like what we would call the breath. When God creates Adam, he creates him out of earth, and then he breathes life into him. The life is in the breath. When the breath leaves the body, the body no longer lives, but the breath doesn't exist. We agree with this. I mean, when you die, you stop breathing. Your breath doesn't go anywhere. And that was the ancient understanding, the ancient Hebrew understanding of the soul, is that it didn't go anywhere because it was simply the thing that made the body alive.

And so in the Old Testament, there's no idea that your soul goes one place or another because the soul doesn't exist apart from the body. Existence is entirely bodily. And that was the view that Jesus then picked up.

GROSS: Are there specific passages in the Hebrew Bible that support the notion of an afterlife?

EHRMAN: Yeah, no, it's a good question. And people generally point to these passages in the Book of Psalms that talk about Sheol. It's a word that gets mistranslated into English. Sometimes Sheol is translated by the word hell, and it absolutely is not what people think of as hell. Sometimes Sheol is talked about by people today as a place that's kind of like the Greek Hades, a place where everybody goes after they die, and they aren't really physical beings down there; they're just kind of like souls, and they exist forever there, and there's nothing to do, and they do — they're all the same. And so Sheol is sometimes portrayed like that. The Bible does talk about this place Sheol, especially in poetry, especially in Psalms. And it's probably not a place that people go to, per se.

If you actually look at what the Psalms say about Sheol, they always equate it to the grave or to the pit. And so it appears that the ancient Israelites simply thought that when you died, your body got buried someplace. It got put in a grave, or it got put in a pit, and that's what they called Sheol, is the place that your remains are. But it's not a place where you continue to exist afterwards.

Just about the only place in the Hebrew Bible where you get an instance of somebody who has died who seems still to be alive afterwards is in this very strange and interesting passage in the book of 1 Samuel, where the king, Saul, is desperate for some advice from somebody who knows, and so he calls — he has a necromancer, a woman, this woman of Endor, who calls up his former adviser Samuel from the grave. And she holds a kind of seance. And Samuel comes up and is really upset that she's called him up from the grave, and he gets upset with Saul for doing this, and he predicts that Saul is going to die the next day in battle, which he does.

And so people often point to that as an instance that's — well, so people are alive after they're dead. And right, it kind of seems like that when you read it — when you just kind of simply read it. But if you actually read it carefully, it doesn't say that. What it says is that Samuel came up, but it doesn't say where he was, and it doesn't say if he was living at the time. It looks like what — before he was raised up, it looks like he was simply dead, and he was brought back to life temporarily, and he didn't appreciate that (laughter), and so he was upset.

GROSS: So you write that starting in the sixth century, Hebrew prophets began to proclaim, you know, that the nation had been destroyed and would be restored back to life by God. It would be the resurrection of the nation. But then toward the end of the Hebrew Bible era, some Jewish thinkers came to believe that the future resurrection would apply not just to the nation but to individuals. So how does that shift happen?

EHRMAN: Right. So this is a really important shift for understanding both the history of later Judaism and the history of later Christianity and the historical Jesus. About 200 years before Jesus was born, there was a shift in thinking in ancient Israel that became — it became a form of ideology, a kind of religious thought that scholars today call apocalypticism. It has to do with the apocalypse, the revelation of God. These people began to think that the reason there is suffering in the world is not what the prophets had said — that it's because people sin and God is punishing them; it's because there are forces of evil in the world that are aligned against God and his people and that are creating suffering. And so you get these demonic forces in the world that are creating misery for everyone.

But they — these apocalyptic thinkers came to think that God was soon going to destroy these forces of evil and get rid of them altogether, and the world would again return to a utopia. It'd be like paradise. It'd be like the Garden of Eden once more. The people who thought that maintained that this Garden of Eden would come not only to people who happened to be alive when it arrived; it was going to come to everybody. People who had been on the side of God throughout history would be personally raised from the dead and individually would be brought into this new era, this new kingdom that God would rule here on Earth.

GROSS: So this was all dependent on, like, the Messiah coming at the end of days, which some Jewish prophets predicted would be soon. When Jesus was alive, he thought the end of days would be soon. And of course, it kept not happening.

EHRMAN: Yeah.

GROSS: And you say that for the ancient Jews, the fact that the Messiah didn't come, that was a turning point in beliefs about what happens after death, too. There started to be a belief that reward and punishment would be right after death, as opposed to after the Messiah comes.

EHRMAN: Yeah. That became a view somewhat in Judaism, and it became a very pronounced view in Christianity. The — after Jesus. Jesus himself held to the apocalyptic view that I laid out. He taught — his main teaching is that the kingdom of God is coming. People today, when they read the phrase kingdom of God, they think he's talking about heaven, the place that your soul goes to when you die. But Jesus isn't talking about heaven because he doesn't believe — he's a Jew — he doesn't believe in the separation of soul and body.

He doesn't think the soul is going to live on in heaven. He thinks that there's going to be a resurrection of the dead at the end of time. God will destroy the forces of evil. He will raise the dead. And those who have been on God's side, especially those who follow Jesus' teachings, will enter the new kingdom here on Earth. They'll be physical. They'll be in bodies. And they will live here on Earth, and this is where the paradise will be. And so Jesus taught that the kingdom of God, this new physical place, was coming soon, and those who did not get into the kingdom were going to be annihilated.

What ends up happening is that, over time, this expectation that the kingdom was coming soon began to be questioned because it was supposed to come soon and it didn't come soon, and it's still not coming, and when is it going to come? And people started thinking, well, you know, surely I'm going to get rewarded, you know, not in some kingdom that's going to come in a few thousand years, but I'm going to get rewarded by God right away. And so they ended up shifting the thinking away from the idea that there'd be a kingdom here on Earth that was soon to come to thinking that the kingdom, in fact, is up with God above in heaven. And so they started thinking that it comes at death, and people started assuming then that, in fact, your soul would live on.

It's not an accident that that came into Christianity after the majority of people coming into the Christian church were raised in Greek circles rather than in Jewish circles because in Jewish circles, there is no separation of the soul and the body. The soul didn't exist separately. But in Greek circles, going way back to Plato and before him, that was absolutely the belief. The soul was immortal and would live forever in Greek thinking. And so these people who converted to Christianity were principally Greek thinkers; they thought there was a soul that lived forever. They developed the idea, then, that the soul lived forever with God when it's rewarded.

GROSS: So you were saying there really isn't an explicit description of heaven and hell in the Hebrew Bible or even in the New Testament, but that Paul is important in understanding the history of heaven and hell. Tell us about what Paul wrote.

EHRMAN: Paul is very important for understanding the history of heaven and hell, as he's important for understanding most things about early Christian thinking. Paul was not a follower of Jesus during his lifetime, during Jesus' lifetime. He wasn't one of the disciples. He converted several years after Jesus' death. He — Paul was Jewish. He was raised Jewish. He wasn't raised in Israel; he was from outside of Israel. He was a Greek-speaking Jew. But he was also, like Jesus, an apocalypticist who thought that at the end of the age, there would be a resurrection of the dead.

When he became convinced that Jesus was raised from the dead, he thought that the resurrection had started. And so he talked about living in the last days because he assumed that everybody else now was going to be raised to follow suit. And so Paul thought he would be alive when the end came. For Paul, Jesus was going to come back from heaven and bring in God's kingdom here on Earth, and people would be raised from the dead for glorious eternity. Paul, in his earliest letters, affirms that view of the imminent resurrection. It's going to come very soon. And he fully expected to be alive when it happened.

But then time dragged on, and a couple of decades passed, and it didn't arrive, and Paul started realizing that, in fact, he might die before it happens. And so in some of his later letters, he ponders the possibility of death, and he wonders, well, what happens to me, then? If I'm brought into the presence of Christ at the resurrection, and, you know, there's a gap between the time I die and — what happens to me during that gap? And he started thinking that, surely, he's going to be in Christ's presence during that time.

And so he came up with the idea that he would have a temporary residence up with Christ in God's realm, in heaven, until the end came. And so this is what the later Paul has to say, and this is the beginning of the Christian idea of heaven and hell, that you can exist — even though your physical remains are dead, you can exist in the presence of God in heaven. And once Paul started saying that, his followers really latched onto it because most of Paul's converts were from Greek circles. They were gentiles. They weren't Jews. And they had been raised with the idea that your soul lives on after death, and now they had a Christian model to put it on. They could say that, yes, your soul lives on, and so when you die, your soul will go up to God in heaven. And as time went on, that became the emphasis rather than the idea of the resurrection of the dead.

GROSS: How does hell come into it?

EHRMAN: Well, so since these people believed that the soul was immortal, that you can kill the body but you can't kill the soul, they thought, well, OK, so our soul will go to heaven to be with God, but then they realized, well, what about the people who are not on the side of God? Well, if we're being rewarded, they're going to be punished. And that's how you start getting the development of the idea of hell, that it's a place where souls go to be punished in — as the opposite of the people who go to heaven to be rewarded.

And in thinking this, as it turns out, the Christians are simply picking up on views that had been around among the Greeks since way back in the time of Plato. Plato also has ideas about souls living on, either to be rewarded or punished forever. And Christians now, who were mainly coming from Greek contexts, latched onto that idea with a Christian way of putting it.

GROSS: You’ve also studied the Gnostic Gospels, which were the recently discovered gospels that never became part of the canon. And these are more mystical texts. And the most famous of the Gnostic Gospels is Thomas. What was his vision of what happens after death?

EHRMAN: The various groups of Christians that people sometimes label gnostic would cover a wide range of views. There are lots of different religions that people have called gnostic. But one thing that most of them have in common is the idea that the body is not what matters. The body is not your friend, and God did not create the body. The body is a cosmic disaster. It's why we experience so much pain and suffering because we live in these material shells. And in most gnostic religions, the idea is to get out of the shell, to escape the shell. So they have very much a differentiation between soul and body. It comes — Gnosticism, in some ways, comes out of Greek thinking. So for them, there's no resurrection of the dead.

Gnostics disagreed with the Jewish idea that at the end of time, God would raise the dead physically. For Gnostics, the idea of being raised in your body was repulsive. You mean I've got to live in this thing forever? No. Real life is in the soul. And so they denied the idea of the resurrection of the body. And what is interesting is Gnostics then claimed that Jesus also denied it. And so when you read the Gnostic Gospels, you find Jesus denouncing the idea that there's a resurrection of the body or that life will be lived eternally in the body; it's strictly a matter of the soul.

And the other interesting thing is that what the Gnostics did, by reading their ideas into Jesus, is also what the Orthodox Christians did, by putting words on Jesus' lips that supported their ideas of heaven and hell. And so in our various Gospels, you have Jesus saying all sorts of things that are contradictory because different people are putting their own ideas onto his lips.

GROSS: So your new book is about the history of heaven and hell. Your forthcoming book that you're working on now is going to be called "Expecting Armageddon." So how does the Book of Revelation contribute to the vision of hell?

EHRMAN: Well, yeah. You know, a lot of people read Revelation as indicating that people who are opposed to God — sinners will be cast into the lake of fire forever, and they will be — yeah, they'll be floating in fire for eternity. And they get that from several passages in the Book of Revelation. I have to deal with this in my book, where I try to show that, in fact, the Book of Revelation does not describe eternal torment for sinners in the lake of fire. There are several beings that go into the lake of fire, but they are not human beings; they are the antichrist, the beast and the devil, and they are supernatural forces that are tormented forever.

The people, in the Book of Revelation, human beings who aren't on the side of God, are actually destroyed. They are wiped out. This is the view that is fairly consistent throughout the New Testament, starting with Jesus. Jesus believed that people would be destroyed when — at the end of time, they'd be annihilated. So their punishment is they would not get the kingdom of God. That also is the view of Paul, that people would be destroyed if — when Jesus returns. It's not that they're going to live on forever. And it's the view of Revelation. People do not live forever. If they aren't brought into the new Jerusalem, the city of God that descends from heaven, they will be destroyed.

GROSS: So a lot of the imagery of hell comes from the Book of Revelation. It's a very explicit, kind of gruesome book, and I wonder if you've thought about why it's so graphic.

EHRMAN: Yeah, I've thought a lot about it. As you said, this is going to be what my next book is on: how people have misinterpreted Revelation as a prediction about what's going to happen in our future. And the graphic imagery in the book has really contributed to all of these interpretations of Revelation. When earlier I was saying that Jesus was an apocalypticist who thought that the world was going to come to an end, that God was going to destroy the forces of evil to bring in a good kingdom, that is precisely what the author of the Book of Revelation thinks — and the book is a description of how it's going to happen.

The book is all about the terrible destruction that is going to take place on Earth when God destroys everything that is opposed to him, before bringing in a good kingdom. And so all of the imagery of death and destruction and disease and war in the Book of Revelation is used to show what terrible measures God has to take in order to destroy the forces of evil that are completely — have completely infiltrated the human world, before he brings in a new world. This, though, is not a book that describes what's going to happen to individuals when they die and go to heaven or hell; it's a description of the final judgment of God that somehow is going to be coming to Earth.

GROSS: You've talked about how belief in the end times led in a circuitous way to belief in heaven and hell. I've heard a lot of joking lately about how it's the end times. You know, California was on fire. We have climate change and extreme weather and earthquakes and volcanoes. And people are afraid that the planet itself is dying. We have, you know, plastics in the ocean, ice caps that are melting. And now we have the pandemic. I'm wondering if you're hearing that kind of thing, too.

EHRMAN: Yes. Yeah, of course. I mean, you know, a lot of people aren't joking. They take it very seriously. And it's — I want to say a couple of things about that. First is every generation from the time of Jesus till today has had Christians who insisted that the prophecies were coming true in their own day. There have always been people who actually picked a time when it's going to happen. And there are two things that you can say about every one of these people over history who've picked a time. One is they based their predictions on the Book of Revelation. And secondly, every one of them has been incontrovertibly wrong (laughter). So that should give one pause. The things that are happening now are absolutely dreadful as, of course, they were in 1916 to — 1914 to 1918 and as they have been at other times in history.

The book that I'm writing that I'm now calling "Expecting Armageddon" is all about that. It's about how people have misused the Book of Revelation to talk about how the end is coming and how it always seems like it's going to be coming in our own time. And everybody thinks this is as bad as it can be. And, you know, this time we may have it right. This kind of thinking, though, really came to prominence at the end of the 19th and into the 20th century and hit big prominence in 1945, when we actually had the means of destroying ourselves off the planet, which we still have, by the way. People aren't talking about nuclear weapons anymore, but they probably should be because that's another way this whole thing might end.

But now the talk is more about climate change, as it should be. We absolutely may do it to ourselves this time, but it won't be a prediction — a fulfillment of predictions of a prophecy; it'll be because of human stupidity and refusal to act in the face of crisis.

GROSS: So now we're faced with a pandemic. You could, I suppose, use the word plague, and the word, you know, plague is in the Hebrew Bible. What were the explanations in the Hebrew Bible for plagues?

EHRMAN: Yeah. The Old Testament has a fairly uniform and rather stark explanation for why there are plagues or epidemics or pandemics. In virtually every case, we're told that it's because God is punishing people. People have gone against his will, and so he is — so he's bringing this disaster of epidemic upon them. You get that in the story of Moses in the Book of Exodus. You get it everywhere in the writings of the prophets, in Amos and Isaiah, etc. This was the old view, that the reason God's people suffer is that they've done something wrong and he wants them to repent.

Eventually, Jewish thinkers began to reason that it didn't make much sense because there were times when they would be doing what God told them to do, or at least they'd be doing their level best to do what God told them to do, and they'd still be suffering these plagues. And that's when they developed the idea that, in fact, it's the forces of evil causing these disasters. These continue to be two of the common explanations today.

There are people today who are saying that the reason for the pandemic is, you know, one sin or another. It's because of, you know, those LGBTQ folk, you know, who are allowing promiscuous activity. God is punishing us. Or it's because of, you know, one social ill or another that God is punishing. And you have that group. And then you have the group who's saying that it's the devil doing it, that, in fact, it's the forces of evil. Satan is working his way, and it's because we're at the end of time, and he has to be released here at the end of time before God will intervene. You get both of those explanations. Most people probably don't subscribe to either one. Most people just say, well, look — you know, it's a pandemic, and we better pay attention to our scientists, which is, obviously, the more socially satisfying answer to the question.

GROSS: When you were 15 and became a fundamentalist evangelical Christian, what would you have believed about the pandemic?

EHRMAN: That's a really good question. I probably would have subscribed — I would have subscribed to either the view that God was upset and we needed to repent so that he would relent, or that the devil — it was the devil doing it, and we needed to pray to God for mercy and for him to intervene on our behalf.

GROSS: And compare that to now.

EHRMAN: Well, I think those views — I mean, I respect believers. I do not try to convert anybody. I don't try to trash anybody's views. I try to respect everybody's views. I think that sometimes those very highly religious views can be socially extremely dangerous because if you think that the cause is supernatural, then you don't have much motivation to find a natural solution. It is quite dangerous to refuse the findings of science because of your personal beliefs. And we all just hope that it doesn't lead to even further disaster.

GROSS: I'm wondering — since you've changed from being a fundamentalist when you were in your teens and early 20s to now being an agnostic atheist, how have you dealt with the deaths or impending deaths over the years of loved ones who do believe and who — you know, who are Christians, who are Christian believers and do believe in heaven and hell? Like, I'm sure you don't want to talk them out of their beliefs. But it's not what you believe.

EHRMAN: Yeah.

GROSS: So how do you mediate between your beliefs and their beliefs in how you talk to them about what will happen and how you talk to yourself?

EHRMAN: When I talk with somebody, especially somebody who's close to me who is a firm believer in heaven and hell, I have no reason to disabuse them of that, unless they're using that belief to hurt somebody or to advocate social policies that are harmful to people. My dear elderly mother is a very good Christian, and she believes that she will die and she will go to heaven and she will see her husband. And so I would be crazy to say, no, Mom, actually, yeah, you're not going to see him (laughter). Of course, I'm not going to — I mean, there's no reason to shatter somebody's beliefs, especially if they simply are providing them with hope.

My view is that we all believe very strange things, and most of the time we don't realize how strange they are. And so I don't — it's not that I think that I believe only rational things and everybody else is irrational. I have a different set of beliefs. But my firmest belief is that whatever we believe, it should not do harm in the world; it should do good in the world. And of course, belief in heaven and hell has done a lot of good; it's also done a lot of harm. It has terrified people. There are people who are terrified of dying because they're afraid — they are literally afraid that they will be tormented for trillions of years, just as the beginning. And I think that's a harmful belief.

And so I will never try to talk somebody out of a belief in heaven, but I certainly will try to talk people out of a belief in hell because it's simply wrong, and it's harmful. It does psychological damage. And when people raise their children on this stuff, it can scar them for life. And so I think that hell is something we need to fight against; heaven, I'm all for.

GROSS: Do you feel that believing in hell scarred you?

EHRMAN: I do in some ways. I don't think I'm scarred any longer, but I worked really, really hard at it. I was terrified of going to hell. And I think that, you know, psychologically, that was very bad. It made me a rather obnoxious fundamentalist Christian because I thought that everybody else was going to go to hell, and so I had to go out of my way to convert them all (laughter). So I wasn't always a pleasant person to be around because I was right and they were wrong, and since they were wrong, they were going to hell.

But the main thing is that I think that, in fact, it imposes emotional damage. People need to find life pleasant and hopeful, they need to be helpful to other people, they need to enjoy life. And if all you're looking forward to is what's going to happen after you die, you can't really fully enjoy life now because this is just a dress rehearsal. And so I don't try to talk people out of their view of heaven, but I think, actually, you're better off, you know, not living for what's going to happen after you die; you're better off living for what you can do now.

GROSS: You know, you write in your book that it's hard for you to conceive of God as being a sadist who would torture people for eternity in hell.

EHRMAN: Right. So the bottom line of the book is that the way you kind of trace the history of heaven and hell is that when people thought that everybody dies and it's the same for everybody forever, they thought, well, that's not fair. Surely, if there are gods in the world or God in the world, there has to be justice. So suffering now must be rewarded later, and wicked behavior now must be punished later. And so they came up with the idea of an afterlife with rewards and punishments.

But eventually, in Christianity, the idea was that since the soul is eternal, it's either rewarded eternally or it's punished eternally. But then people started thinking, well, wait — is that fair? So, OK, suppose I'm just a regular old sinner, and I die when I'm 40, and so maybe I had about 25, maybe even 30 years of not being the most perfect person on Earth. I'm going to be tortured for 30 trillion years for those 30 years? And those 30 trillion years is just the beginning? Is there really a God who's going to allow that, let alone cause it? I mean, I just — no (laughter).

And so I think — I cannot believe that you can actually say that God is just and merciful and loving — even if he believes in judgment, he is not going to torture you for 30 trillion years and then keep going. It just isn't going to happen.

GROSS: I'm wondering what you think about when you think about how the number of people who are contracting COVID-19 and the number of people who are dying keep growing as we get closer to Passover and Easter, which are very holy times in Judaism and Christianity.

EHRMAN: I think that — I'll speak from the Christian tradition, which I still cherish, and even though I am no longer a Christian, there are aspects of Christianity that I resonate with because they're so deeply ingrained in me. The Easter story is a story of hope that — in the Easter story, death is not the final word, that there's something that comes after death. There is hope in moments of complete despair. There can be life after death.

I don't take that literally anymore because I don't believe there is. I'm open to it, and I hope there is something after death, and if it is, it'll be good. But I personally think, probably, this life is all there is. But I take the Easter story as a metaphor that, even in the darkest hours when there looks to be no hope and it looks like it's simply the end of all things, there actually is a glimmer of hope and that something good can come out of something very bad. And so I really believe that, and I'll probably always believe it.

GROSS: Thank you. Bart Ehrman's new book is called "Heaven And Hell: A History Of The Afterlife." And if you're thinking, but what about this passage in the Old Testament or what about that passage in the New Testament, let me just say we only had time to touch on a few of the points in Bart Ehrman's book, so if you want to know more about what he has to say about the history of the afterlife, I refer you to his book.

https://www.npr.org/2020/03/31/824479587/heaven-and-hell-are-not-what-jesus-preached-religion-scholar-says

*
BENEFITS OF GOAT MILK

Goat milk is A2 milk

Most milk is A1; some milk is A2. What is the difference? It's all about the proteins. The casein protein in milk consists largely of β-casein. There are two versions of this: A1 β-casein and A2 β-casein. This is also where 'A1 milk' and 'A2 milk' come from. Research suggests that A2 β-casein is easier to digest than A1 β-casein. Cow's milk is almost always a combination of A1 and A2 milk. Goat milk, on the other hand, is always A2 milk.

Minimally processed

Goat milk is real dairy. It takes little to make healthy, long-lasting products. Take, for example, milk powder. The milk is spray-dried so you can store it longer and take it with you wherever you go. There is simply nothing more to it.

No artificial additives

No need to improve on what's already good. Goat milk products are based on natural goat's milk. 



Naturally rich in proteins

Goat milk contains carbohydrates, fats, vitamins, minerals and especially a lot of proteins. 

Your body breaks down these proteins to build its own proteins and to use them as a source of energy.

Full of slow proteins (casein)

Casein is by far the most important milk protein. Casein proteins are absorbed slowly by your body, especially while you sleep and your body is recovering. Casein helps your muscles regain their strength for a new day.

Small-scale family farms

Farms are not factories. Our goats live on small farms that are still real family businesses. The goats have as much space as possible to do… goat things.

Goats are fun

Goats always make you laugh. They are curious, mischievous and naughty. And they laugh a lot themselves. Just spend some time with them and try not to smile. Bet you can't.

https://goatfully.com/blogs/blogs/eight-reasons-to-go-for-goat

from another source:

The composition of goat milk protein is similar to that of breast milk, and its casein fraction is mostly composed of β-casein, followed by αs-casein (αs1- and αs2-casein) and κ-casein. Milk protein primarily contains two types of β-casein: A1 and A2. Interestingly, β-casein in goat milk exists mainly as the A2 type, which does not produce β-casomorphin-7 (BCM-7), a peptide generated during milk digestion that may be related to various disorders, such as gastrointestinal disturbances. It is known that αs1-casein forms hard curds in the stomach, which might cause digestion problems in infants, but its concentration in goat milk is markedly lower than that in other milks.

Along with the nutritional benefits of goat milk protein, goat milk has more medium-chain triglycerides and smaller fat globules than cow milk, resulting in better digestibility. These properties of goat milk can be exploited in functional foods for people with metabolic disorders as well as infants and the elderly.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5932946/

HEALTH BENEFITS OF GOAT MILK: WEBMD

The vitamins, minerals, and heart-healthy fats in goat cheese improve health in a number of ways. Copper, for example, helps produce red blood cells, which carry oxygen from the lungs to the other tissues of the body. Copper also aids in the absorption of iron and other nutrients.  
Goat cheese also contains riboflavin, also known as vitamin B2. Riboflavin plays an important role in many bodily processes, especially the production and functioning of new cells. 

Moreover, goat cheese has many other health benefits, including:

Weight Loss
The fatty acids in goat cheese are metabolized faster than those in cow's cheese, which means that the body feels full sooner. Researchers have found evidence that choosing goat cheese over cow's helps people feel less hungry and eat less overall, which is an important factor in weight loss.

Improved Digestion
While cow’s milk has both A2 and A1 beta casein proteins, goat cheese has only A2 beta casein. The difference means that goat cheese and goat milk are easier on the digestion.

Goat milk stands out due to its remarkable digestibility. The smaller fat globules in goat milk, in contrast to those in cow milk, form softer, smaller curds in the stomach. These smaller curds are swiftly broken down by stomach enzymes and more rapidly digested.

Gut Health
Goat cheese is full of beneficial probiotics, a healthy kind of bacteria. Probiotics colonize the intestines and compete with any unhealthy bacteria or pathogens that they find there. This can improve the effectiveness of your immune system and reduce your vulnerability to illness.

Bone Health
Goat cheese is high in calcium, an essential nutrient for your bones, teeth, and organs. A diet high in calcium can prevent the onset of osteoporosis and other bone disorders later in life.

Provides Selenium
Goat cheese is a good source of selenium, an essential trace mineral more often found in seafood. Selenium helps your body break down DNA-damaging peroxides, which can lower your risk of developing conditions like cancer, thyroid disease, and cardiovascular disease.

SHORT AND MEDIUM CHAIN FATTY ACID AND HYDRATION BENEFITS

Goat milk boasts significantly higher levels of short and medium-chain fatty acids than cow milk. Studies indicate these fatty acids are swiftly digested, offering rapid energy to the body. 

Additionally, goat milk is packed with hydration-boosting nutrients like carbohydrates, fat, protein, vitamins, and minerals, which the body takes longer to process. This extended absorption time allows the body to retain fluids, enhancing overall hydration. Goat milk also contains vital electrolytes such as sodium and potassium, crucial for efficient water absorption.  

GOATS ARE BETTER FOR THE ENVIRONMENT
Beyond their delicious milk, dairy goats play a vital role in sustainable farming. They require significantly less water per gallon of milk produced than most other livestock. Moreover, they emit nearly 20 times less methane per kilogram of body weight compared to dairy cows, contributing to a greener environment.  

Raising dairy goats embodies a labor of love, far removed from industrial-sized farming. With their adaptability and eco-friendly qualities, such as resilience to heat, dairy goats could play an important role in the battle against climate change. They exemplify a sustainable and compassionate approach to farming, ensuring both the well-being of the animals and the planet.


*
THE FIVE STAGES OF ALZHEIMER’S DISEASE

This first stage is called (1) preclinical Alzheimer’s disease, according to the Mayo Clinic. People in this stage don’t exhibit any outward symptoms of the condition, but they are undergoing brain changes that will induce signs of Alzheimer’s down the line. Although symptoms aren’t apparent at this point, experts are working on developing innovative brain imaging technology that might be able to pick up on signs of the condition at this stage.

After preclinical Alzheimer’s, which can last for years, a person develops what’s called (2) mild cognitive impairment due to Alzheimer’s disease. This involves confusion, trouble making decisions, and issues remembering things such as recent conversations or upcoming appointments, but not at a severe enough level for it to really affect a person’s job or relationships, the Mayo Clinic explains. (Of course, these symptoms aren’t always a sign of Alzheimer’s—we’ll discuss that a bit more down below.)

The following phase is (3) mild dementia due to Alzheimer’s disease. This is when symptoms become apparent enough that they often lead to an Alzheimer’s diagnosis, the Mayo Clinic notes. At this point, Alzheimer’s is affecting a person’s day-to-day life, with symptoms such as noticeable short-term memory loss, trouble with problem-solving, poor decision-making, mood changes, losing items, getting lost (even in familiar locations), and having a hard time expressing themselves. This can translate into asking the same question repeatedly because they forget the answer, struggling to handle what used to be manageable responsibilities (like tracking a budget), and becoming irritable or angry as their world begins to shift in confounding ways.

This eventually progresses into (4) moderate dementia due to Alzheimer’s disease, which is essentially an intensifying of symptoms. A person at this stage of Alzheimer’s tends to need more help getting through the day and avoiding dangerous situations, such as becoming lost (wandering in search of a familiar setting is common at this stage). This is also when long-term memory becomes more compromised, so a person with this level of Alzheimer’s may begin to forget who their loved ones are or confuse them with one another.

Lastly, during (5) severe dementia due to Alzheimer’s disease, a person may be unable to communicate coherently, even if they are physically able to speak. As they lose control over physical functions such as walking, holding their head up, and bladder and bowel activity, they may depend on others to care for them. People with this final stage of Alzheimer’s may also have difficulty swallowing. Sadly, this is often how death from Alzheimer’s can come about. Food or drinks can wind up in the lungs due to impaired swallowing, leading to pneumonia, or a person may become dehydrated or malnourished.

There’s no set amount of time it takes for every person with Alzheimer’s to advance through each of these stages, but the Mayo Clinic notes that people with the condition live eight to 10 years after diagnosis on average.

NORMAL FORGETFULNESS IS DIFFERENT FROM ALZHEIMER’S-RELATED MEMORY LOSS

It’s completely fine to occasionally forget where you put things, the names of people you don’t see that often, why you entered a room, and other minor details. Memory lapses can happen for all sorts of reasons, from a lack of sleep to normal cognitive changes as you grow older.

“Mild forgetfulness is a common complaint in people as they age,” Verna R. Porter, M.D., a neurologist and director of the Alzheimer’s Disease Program at Providence Saint John’s Health Center in Santa Monica, California, tells SELF. “The main difference between age-related memory loss and dementia (such as Alzheimer’s disease) is that in normal aging, the forgetfulness does not interfere with your ability to carry on with daily activities,” Dr. Porter says. “The memory lapses have little impact on your daily life.”

If you or a loved one is dealing with persistent memory loss and accompanying symptoms such as difficulty staying organized, confusion, and mood changes, that’s more of a cause for concern.

Estimates vary, but the National Institute on Aging (NIA) says that more than 5.5 million people in the United States have the disease. According to the Centers for Disease Control and Prevention, it was the sixth leading cause of death in the United States in 2017, killing 116,103 people.

Alzheimer’s disease damages and kills brain cells. This destruction is what affects a person’s cognitive, social, and physical abilities.

Researchers have also discovered two specific abnormalities in the brains of people with Alzheimer’s disease, the Mayo Clinic says. One is that they have plaques, or buildup of a protein called beta-amyloid, that may harm brain cells, including by impeding cell-to-cell communication. Another is tangles in the transportation system that brain cells rely on to move nutrients and other substances that are necessary for your brain to function properly.

Early-onset Alzheimer’s disease happens when a person develops the condition anywhere from their 30s to mid-60s, according to the NIA. People with this early-onset form make up less than 10 percent of the Alzheimer’s population. These cases are sometimes due to three specific gene mutations or other genetic factors. However, such genetic influences are involved in less than 5 percent of Alzheimer’s disease cases overall, according to the Mayo Clinic.

Late-onset Alzheimer’s (which is much more common and typically shows up in someone’s mid-60s) mainly arises due to age and brain changes. Genetics are sometimes involved, but much more rarely than in a person who starts exhibiting symptoms when they’re younger.

RISK FACTORS

Getting older is the biggest one. To be clear, Alzheimer’s isn’t just a regular part of aging that everyone should expect, but it’s much more common in people over 65. This is part of why women seem to be at a greater risk of developing Alzheimer’s disease—they simply tend to live longer.

Having a first-degree relative (like a dad or sister) with the disease also seems to raise your risk. This is due to that genetic component, which doctors are still investigating.

Another potential factor: past head trauma, like a concussion. “In general, head injuries can result in less brain [matter] because an accompanying brain injury can occur,” Amit Sachdev, M.D., an assistant professor and director of the Division of Neuromuscular Medicine at Michigan State University, tells SELF. “Less brain means less ability for the brain to age gracefully.”

There’s also a surprising potential overlap between the risk factors for heart disease and those for Alzheimer’s. For example, high blood pressure, high blood cholesterol, obesity, and poorly controlled type 2 diabetes can increase your risk of developing both conditions. This may be because of a health issue called vascular dementia, in which impaired blood vessels in the brain cause memory and cognitive difficulties.

In addition, Down syndrome is one of the strongest risk factors for one day developing Alzheimer’s, and symptoms tend to appear 10 to 20 years earlier than they do in the general population. The link may center on having an extra copy of chromosome 21, which is what brings about the characteristics of Down syndrome. This extra chromosome material contains the gene that produces the beta-amyloid plaques that can harm brain cells, the NIA explains.

The only current test to absolutely confirm Alzheimer’s involves a microscopic exam of a deceased person’s brain to look for those plaques and tangles, according to the Mayo Clinic. Although tests to confirm whether or not a living person has Alzheimer’s seem to be forthcoming, they’re not yet ready for widespread use.

Instead, doctors basically make an extremely educated guess. They do this with strategies like ordering blood tests to rule out other causes of memory loss, administering mental status tests to evaluate a person’s thinking and memory, ordering brain imaging such as an MRI or CT scan, and testing a person’s cerebrospinal fluid for biological markers that can point toward the possibility of Alzheimer’s.

HOW TO REDUCE YOUR RISK

Research has found a link between engaging in socially and mentally stimulating activities and a reduced risk of Alzheimer’s disease. It seems as though these types of activities strengthen your “cognitive reserve,” making it easier for your brain to compensate for age-related changes, according to the NIA.

Reducing your risk of heart disease may also help lower your risk of Alzheimer’s. “Things that promote a healthy body will promote a healthy brain,” Dr. Sachdev says. “In this case, healthier blood vessels are less likely to become damaged and more likely to support the brain.”

Lowering your risk of heart disease and Alzheimer’s means staying active and eating well, among other things.

“Exercise may slow existing cognitive deterioration by stabilizing older brain connections and [helping to] make new connections,” Dr. Porter says. Experts are also investigating if exercise can bolster the size of brain structures that are key for memory and learning. In any case, the American Heart Association recommends getting 150 minutes of moderate exercise every week or 75 minutes of vigorous movement (or a mix of moderate and vigorous workouts) each week.

The Mediterranean diet, which focuses on eating produce, healthy oils, and foods low in saturated fat, has also been linked with a lowered risk of developing heart disease and Alzheimer’s.

*
The U.S. Food and Drug Administration (FDA) has approved two types of medications to help manage the memory loss, confusion, and problems with thinking and reasoning of Alzheimer's disease, according to the NIA.

Cholinesterase inhibitors are reserved for mild to moderate Alzheimer’s. It seems as though they impede the breakdown of acetylcholine, a brain chemical implicated in memory and thinking, but these drugs may start to work less effectively as Alzheimer’s progresses and a person produces less acetylcholine.

When it comes to moderate to severe Alzheimer’s, doctors may use a drug called memantine, which appears to regulate glutamate, a neurotransmitter that can cause brain cell death in large amounts. Sometimes doctors prescribe both a cholinesterase inhibitor and memantine, since the two work in different ways.

Unfortunately, these drugs won’t fully stop the progression of the disease. But they may help slow the symptoms so that a person with Alzheimer’s can have a better quality of life for a longer period of time.

https://www.self.com/story/alzheimers-disease-facts

*
A NEW TEST FOR DIAGNOSING ALZHEIMER’S

A simple blood test to diagnose Alzheimer’s disease soon may replace more invasive and expensive screening methods such as spinal taps and brain scans.

A study by researchers at Washington University School of Medicine in St. Louis and Lund University in Sweden shows that a blood test can be as good at detecting molecular signs of Alzheimer’s disease in the brain as cerebrospinal fluid tests approved by the Food and Drug Administration (FDA) for Alzheimer’s diagnosis. The blood test, which was created by Washington University researchers, uses a highly sensitive technique to measure levels of Alzheimer’s proteins in the blood.

The research was published Feb. 21 in Nature Medicine.

The findings demonstrate that a blood test can diagnose Alzheimer’s disease pathology as accurately as cerebrospinal fluid tests and brain scans, even in patients with mild symptoms, and can be used to detect molecular signs of Alzheimer’s disease in the brain when symptoms haven’t yet emerged.

Identifying people with the disease has become vitally important, as the first Alzheimer’s treatments capable of slowing its progression recently became available to patients, and other promising drugs are in the pipeline. Such drugs might be more effective when started early, making prompt diagnosis critical.

“The accuracy of this blood test now enables us to diagnose the presence of Alzheimer’s disease pathology with a single blood sample,” said co-senior author Randall J. Bateman, MD, the Charles F. and Joanne Knight Distinguished Professor of Neurology at Washington University.  “This advance will increase accurate diagnoses for many patients.”

For many years, Alzheimer’s was diagnosed symptomatically, after people began showing signs of memory and thinking problems. But studies have shown that up to a third of people diagnosed with Alzheimer’s based on cognitive symptoms alone are misdiagnosed and that their symptoms are due to other causes. Consequently, before a patient is eligible to receive Alzheimer’s therapies, a diagnosis of cognitive impairment must be coupled with a positive test for amyloid plaques, which are unique to Alzheimer’s disease. Amyloid positron emission tomography (PET) brain scans, cerebrospinal fluid analyses and blood tests can all be used to confirm the presence of brain amyloid plaques, but only for people who already have cognitive symptoms. They are not used by doctors for people without symptoms.

“In the near future, this type of blood test will replace the need for costly and less accessible cerebrospinal fluid and PET imaging tests in specialist memory clinics,” said co-senior author Oskar Hansson, MD, PhD, a professor of neurology at Lund University. “Next, we need to determine if the Alzheimer’s blood test also works in primary care. This is currently being investigated in Sweden.”

Bateman and colleagues previously created the first approved blood test for amyloid in the brain. The test uses mass spectrometry to measure the ratio of two forms of amyloid in the blood, and it received a “Breakthrough Device” designation from the FDA in 2019. Bateman, co-first author Nicolas Barthélemy, PhD, an assistant professor of neurology at Washington University, and colleagues have since created a second blood test based on the effects of amyloid accumulation on a second brain protein: tau. The presence of amyloid in the brain changes the levels of various forms of tau protein in the brain and in the blood. 

Measuring the ratio of phosphorylated tau-217 (ptau-217) to unphosphorylated tau in the blood reliably reflects brain amyloid levels. A test combining the amyloid and tau blood measures is marketed by the Washington University startup C2N Diagnostics as PrecivityAD2.

In this study, a research team led by Bateman, Hansson, Barthélemy, co-first author Gemma Salvadó, PhD, a postdoctoral fellow at Lund University, and co-author Suzanne Schindler, MD, PhD, an associate professor of neurology at Washington University, compared the abilities of four tests to identify people with amyloid in their brains: the ptau-217 blood test and three FDA-approved cerebrospinal fluid tests. 

They evaluated the tests using blood and cerebrospinal fluid samples from volunteers in the Swedish BioFINDER-2 (Biomarkers For Identifying Neurodegenerative Disorders Early and Reliably) cohort (1,422 people), and Washington University’s Charles F. and Joanne Knight Alzheimer Disease Research Center (Knight ADRC) cohort (337 people). Both groups included people with very mild and mild cognitive symptoms, as well as healthy people for comparison. The tests’ accuracy rates were calculated by comparing their results to the gold standard: PET brain scans for amyloid and tau tangles.

The ptau-217 blood test was just as good as the FDA-approved cerebrospinal fluid tests at identifying people with amyloid buildup, with accuracy scores for all tests at 95% to 97%. In a secondary analysis, the researchers measured how well the tests determined the levels of tau tangles in the brain. In this, the ptau-217 blood test was superior to cerebrospinal fluid tests, with accuracy scores in the range of 95% to 98%.

As of now, Alzheimer’s therapies and diagnostic tests are only used clinically for people who already show signs of memory and thinking problems. But Alzheimer’s disease has a long presymptomatic phase of two decades or more during which amyloid builds up in the brain before neurodegeneration sets in and symptoms arise.

“We now have therapies that have clinical benefits, which is great, but they don’t reverse the loss of neurons in the brain,” Barthélemy said. “What we really want is to treat the disease before people start losing brain cells and showing symptoms.”

A subgroup analysis of healthy participants showed that the ptau-217 blood test accurately identified those who harbored amyloid plaques in their brains. The test was just as accurate at detecting the presence of amyloid plaques in people without symptoms as those with symptoms. Studies have shown that people with no cognitive problems but who are positive for amyloid are at high risk of developing cognitive impairments in the next few years. A major phase 3 clinical trial known as the AHEAD 3-45 Study was launched in 2020 to evaluate whether treating amyloid-positive people before symptoms arise can prevent cognitive decline. Washington University is a site for the trial. The blood test is being used in the AHEAD study to screen potential participants.

“Imagine a person who is 55 or 60 and has a family history of Alzheimer’s or some high-risk genetic variants,” Barthélemy said. “It would be really valuable to have an easy way to know whether they have amyloid pathology in their brains. If they do, they could come in, maybe once every two or three years, and get a therapy to clear the amyloid out and then never develop dementia at all. We’re still a few years away from such an approach, but I think that’s the future of Alzheimer’s care, and it depends on presymptomatic diagnosis and treatment.”

https://medicine.wustl.edu/news/alzheimers-blood-test-performs-as-well-as-fda-approved-spinal-fluid-tests/

*
BUT IS ALZHEIMER’S CAUSED BY THE BUILD-UP OF PLAQUE?

New research from the University of Cincinnati bolsters a hypothesis that Alzheimer’s disease is caused by a decline in levels of a specific protein, contrary to a prevailing theory that has been recently called into question.

UC researchers led by Alberto Espay, MD, and Andrea Sturchio, MD, in collaboration with the Karolinska Institute in Sweden, published the research on Oct. 4 in the Journal of Alzheimer’s Disease.

Questioning the dominant hypothesis

The research is focused on a protein called amyloid-beta. The protein normally carries out its functions in the brain in a form that is soluble, meaning dissolvable in water, but it sometimes hardens into clumps, known as amyloid plaques.

For more than 100 years, the conventional wisdom in Alzheimer’s research has held that the disease is caused by the buildup of amyloid plaques in the brain. But Espay and his colleagues hypothesized that plaques are simply a consequence of declining levels of soluble amyloid-beta in the brain. These levels decrease because the normal protein, under biological, metabolic or infectious stress, transforms into the abnormal amyloid plaques.

“The paradox is that so many of us accrue plaques in our brains as we age, and yet so few of us with plaques go on to develop dementia,” said Espay, professor of neurology in the UC College of Medicine, director and endowed chair of the James J. and Joan A. Gardner Family Center for Parkinson’s Disease and Movement Disorders at the UC Gardner Neuroscience Institute and a UC Health physician. “Yet the plaques remain the center of our attention as it relates to biomarker development and therapeutic strategies.”

Sturchio noted that many research studies and clinical trials over the years have aimed at reducing amyloid plaques in the brain, and some have succeeded in lessening plaques. But until the Sept. 27 announcement of a positive trial by Biogen and Eisai (with the drug lecanemab), none had slowed the progression of Alzheimer’s disease.

More important, and in support of their hypothesis, patients in some clinical trials that reduced levels of soluble amyloid-beta showed worse clinical outcomes.

“I think this is probably the best proof that reducing the level of the soluble form of the protein can be toxic,” said Sturchio, first author of the report and adjunct research instructor at UC’s College of Medicine. “When done, patients have gotten worse.”

Previous research from the team found that regardless of the buildup of plaques in the brain, people with high levels of soluble amyloid-beta were cognitively normal, while those with low levels of the protein were more likely to have cognitive impairment.

In the current study, the team analyzed the levels of amyloid-beta in a subset of patients with mutations that predict an overexpression of amyloid plaques in the brain, which is thought to make them more likely to develop Alzheimer’s disease.

“One of the strongest supports to the hypothesis of amyloid toxicity was based on these mutations,” Sturchio said. “We studied that population because it offers the most important data.”

Even in this group of patients thought to have the highest risk of Alzheimer’s disease, the researchers found results similar to those in the general population.

“What we found was that individuals already accumulating plaques in their brains who are able to generate high levels of soluble amyloid-beta have a lower risk of evolving into dementia over a three-year span,” Espay said.

The research found that with a baseline level of soluble amyloid-beta in the brain above 270 picograms per milliliter, people can remain cognitively normal regardless of the amount of amyloid plaques in their brains.

“It’s only too logical, if you are detached from the biases that we’ve created for too long, that a neurodegenerative process is caused by something we lose, amyloid-beta, rather than something we gain, amyloid plaques,” Espay said. “Degeneration is a process of loss, and what we lose turns out to be much more important.”

Next steps

Sturchio said the research is moving forward to study if increasing the levels of soluble amyloid-beta in the brain is a beneficial therapy for patients with Alzheimer’s. 

Espay said it will be important to ensure that the elevated levels of the protein introduced into the brain do not then turn into amyloid plaques, since it is the soluble version of the protein that is needed for normal function in the brain.

On a larger scale, the researchers said they believe a similar hypothesis of what causes neurodegeneration can be applied to other diseases including Parkinson’s and Creutzfeldt-Jakob disease, with research ongoing in these areas as well.

For example, in Parkinson’s disease, a normal soluble protein in the brain called alpha-synuclein can harden into a deposit called a Lewy body. The researchers hypothesize that Parkinson’s is not caused by Lewy bodies aggregating in the brain, but rather by a decrease in levels of normal, soluble alpha-synuclein.

“We’re advocating that what may be more meaningful across all degenerative diseases is the loss of normal proteins rather than the measurable fraction of abnormal proteins,” Espay said. “The net effect is a loss not a gain of proteins as the brain continues to shrink as these diseases progress.”

Espay said he envisions a future with two approaches to treating neurodegenerative diseases: rescue medicine and precision medicine.

Rescue medicine looks like the current work, studying if boosting levels of key proteins like soluble amyloid-beta leads to better outcomes. 

“Interestingly, lecanemab, the anti-amyloid drug recently reported as beneficial, does something that most other anti-amyloid treatments don’t do in addition to reducing amyloid: it increases the levels of the soluble amyloid-beta,” Espay said.

Alternatively, precision medicine entails going deeper to understand what is causing levels of soluble amyloid-beta to decrease in the first place, whether it is a virus, a toxin, a nanoparticle or a biological or genetic process. If the root cause is addressed, the levels of the protein wouldn’t need to be boosted because there would be no transformation from soluble, normal proteins to amyloid plaques.

Espay said precision medicine would take into account the fact that no two patients are alike, providing more personalized treatments. The researchers are making progress in precision medicine through the Cincinnati Cohort Biomarker Program, a project aiming to divide neurodegenerative diseases by biological subtypes in order to match therapies based on biomarkers to those most likely to benefit from them.

“The Cincinnati Cohort Biomarker Program is dedicated to working toward deploying the first success in precision medicine in this decade,” Espay said. “By recognizing biological, infectious and toxic subtypes of Parkinson’s and Alzheimer's, we will have specific treatments that can slow the progression of those affected.”

https://www.uc.edu/news/articles/2022/09/decreased-proteins-not-amyloid-plaques-tied-to-alzheimers.html

Brain with Alzheimer's (left) compared to a healthy brain

Oriana:

I have become more interested in the non-memory and non-verbal symptoms of dementia such as the gradual loss of motor control. The gait becomes slow and unsteady. There is profound stooping, beyond ordinary "bad posture." And you may remember that Trump had to hold a glass of water in both hands.

Speaking of hands, there is a tendency to hold them clenched into fists. Bladder and bowel incontinence are also part of the symptoms. The demented brain is less and less able to control the body.

Of course, Alzheimer's is only one type of dementia, though it happens to be the most common. But there is also frontotemporal dementia; one of its symptoms may be the development of insensitive, rude behavior in someone previously polite and mild-mannered.

*

ending on beauty:

FOR A FRIEND BORN UNDER GEMINI

We are all Gemini our twin
man-woman selves
kissing and fighting making up

Gemini means summer is near
its luxurious amber
of sinking into work we love

did I say sink
or sing

~  Oriana