
How This Italian Town Came to Be Known as the ‘City of Witches’

Centuries ago, it was said that Benevento was a gathering place for the occult. Today, superstitions still run deep.

Stepping off the train in the southern Italian city of Benevento is not a particularly haunting experience, in the sense that the air on a brisk October day yields nothing other than cloud cover and fog. That this is the so-called “city of witches,” the site where women from all over the country might have flown in the middle of the night to dance around a famous walnut tree and to learn, effectively, how to be a witch, is not immediately apparent.

Where the witchiness of Benevento, a city of over 55,000 with a Roman theater and Arch of Trajan from ancient times, may be most felt is in the traditions of its residents, many of whom still hold close these passed-down superstitions. Depending on whom you ask, a curse of the evil eye must still be warded off with a specific ritual involving oil and water and a traditional prayer. Leaving a broom at your door is a good way to ensure the local witches, known as the Janare, won’t sneak under the threshold—they’ll be too distracted counting the strands of straw. And if you wake to find that your horse’s mane has been braided, a Janara must have taken it for a late-night ride.

Even now, when Maria Scarinzi, an anthropologist and head of education programs at Janua, Benevento’s Museum of Witches, interviews older residents about their beliefs, she finds that they hesitate to share everything for fear of retribution. “They still believe that if you name the Janara, she will come to your house at night and she will harm you in some way,” Scarinzi says.
“They still believe that if I tell you that I know the formula for getting rid of maggots, you will think that I am a Janara and you’ll distance me from society.”

How Benevento became the city of witches

Some researchers argue that this southern Italian town, a little more than two hours by train from Rome, became known for its witches because of its unique political position. But to understand the root of the myth, we have to go back to 1428.

The hunting and persecution of so-called witches was a practice that began to take root in Italy in the late 1300s, supervised and carried out in many ways by the Catholic Church. By 1542, Pope Paul III had created the Congregation of the Holy Office of the Inquisition, which tasked the church with criminalizing those who would speak against the faith. It was an amorphous crime, because any misfortune to befall a person or town could be attributed to a witch—around 80 percent of the people charged with witchcraft in early-modern Europe were women. Academics estimate that 22,000 to 33,000 witchcraft trials took place in Italy, with very few of these ending in capital punishment. Witch hunting appeared to largely come to an end by the 18th century.

The first reference to Benevento as a place where witches gather dates to 1428. It comes from the transcriptions of the trial of Matteuccia di Francesco, a 40-year-old woman who was eventually sentenced to death and burned at the stake for witchcraft by the Franciscan Bernardino of Siena in the Umbrian town of Todi.
From Matteuccia, we receive the famous formula, or incantation, that has since become inextricably associated with Benevento: “Unguento, unguento / mandame a la noce de Benivento, / supra aqua et supra ad vento / et supra ad omne maltempo.” Translation: “Ointment, ointment / send me to the walnut tree of Benevento / over water and over wind / and over all bad weather.”

During the trial, Matteuccia confesses that she spreads a cream on herself and chants to be sent to the walnut tree of Benevento, which had demonic associations and was thought to be near the river Sabato. “From that moment on, the inquisitors try to make the witches confess that they went to Benevento, because it becomes a sort of indictment,” says Paola Caruso, who has published books on the folklore of Benevento. “If they went to Benevento, then that means they’re witches.” In fact, according to Caruso, after Matteuccia’s confession, nearly all the Italian witch trials of the 15th and 16th centuries reference, in some way, Benevento as a gathering place for witches. No records exist, however, of witch trials in Benevento itself, though this could be attributed to the World War II bombing of the city’s central cathedral that destroyed much of the ecclesiastical archives, Scarinzi says.

In 1640, local medical examiner Pietro Piperno penned his historical treatise on, among other things, the walnut tree of Benevento, explaining the origins of its supernatural powers. He claimed, according to Caruso, that it is not those from Benevento who participate in the late-night gathering of witches around the walnut tree—but people coming from elsewhere. In many ways, this only reinforced the link between Benevento and the witches.

Caruso’s research is built on the idea that Benevento became the “city of witches” because of its political isolation. Even when surrounded by the Romans, up until the third century B.C.E., the city once called Maleventum was ruled by the Samnites.
It was eventually subsumed into the Romans’ dominion, but after the fall of the Empire, by the sixth century C.E., the Lombards arrived, establishing Spoleto in Umbria and Benevento as their two southern duchies. What made Benevento unique is that, despite its association with the Lombards, it managed to remain in large part independent from centralized control until the late 11th century C.E., when it was taken over by the papacy and largely stayed under papal control until becoming part of Italy in 1860. The fact that it had retained some sense of governing autonomy for so long sowed insecurity in the political leaders of the time.

“We must imagine Benevento as a very rich city, a papal city, an obligatory halfway point—you had to pass through Benevento,” Scarinzi says. “We have to imagine it also as a kind of island in what was the Kingdom of Naples—difficult to conquer with all this wealth. So how can I discredit someone? It’s what we still do today: I speak ill of that person.”

The targets of this abuse were generally local women known as healers, “almost women of science,” Scarinzi says, or practitioners of what would today be called herbal medicine. These were women who knew the medicinal value of herbs like St. John’s wort, lavender and dandelion, gleaned from information passed down to them through generations. “The negativity around these women was linked to the fact that people were afraid,” Scarinzi says, “because they were women who had a power, which, in many cases, was medicine.”

The modern-day legacy

The Museum of Witches, located in the Palazzo Paolo V off the city’s pedestrian Corso Garibaldi, is a testament to how the customs survive in the daily lives of its residents. For a couple of decades, anthropologists have been interviewing people about the history and customs of the larger province of Benevento.
Part of this effort has been to talk with the elderly—mostly those 70 years and older—to preserve the superstitions and legends of the witches before they disappear. About 10 years ago, they had enough to open a museum. “Our goal was to recount the figure of the Benevento witch—that is, who the Janara is—for the people of Benevento,” Scarinzi says. “What is this magical world today, for older people, more than anything else, who continue to perform certain practices and certain rituals?”

The museum opens with a short video punctuated by the voices of residents describing how the legend of the Janare has seeped into their way of life. Artifacts show the roots of rituals. A pair of small coffee cups tells the story of how a woman could entice the man of her dreams by serving a drop of her menstrual blood in his coffee. “The belief was that, in the moment in which the woman made her proposal of love, the man had to, naturally, accept, otherwise he would die,” Scarinzi says. “It was the Janara who gave the blood and the object a power.” A woman proposing rather than a man went against the customs of the time, but therein lay the power of the witch: She could rewrite the social order.

A 19th-century prayer handwritten by a young child to protect her from any potential enemies is on display. At the time, children would have donned amulets and charms to ward off evil. Another display explains how laundry hung outside to dry should be taken in by dusk for fear that evil spirits might be present after the setting of the sun.

The oral histories the museum has collected shed more light on the behaviors that have grown out of these beliefs. Scarinzi learned that some local women have never been to a hairdresser, concerned that their hair would be kept and used against them in a spell.
Keeping the legend alive

Outside of the customs and superstitions ingrained in the culture of Benevento, there’s a capitalist reason why the legend has survived: the Liquore Strega, founded in 1860 by Giuseppe Alberti, who opened his bar in the center of Benevento. “He decided to name the product after the legend of the city where it was born,” says Kenia Palma, marketing manager for Strega Alberti, the company that produces the liquor. Strega means “witch” in Italian.

It didn’t take long for Strega to become a symbol of Benevento—the marketing of the yellow-colored liqueur, made in part with saffron, juniper and mint and bearing a slightly sweet yet smooth taste, was indelibly linked to the city and its witches. The label bears an illustration of witches dancing around a walnut tree. Today, its store is the first thing you see when descending from the train station. Palma notes that, on bottles of the liqueur, the location is even written as “near the train station,” because Benevento has long been considered an important junction that connected north and south.

The Alberti family worked to make Liquore Strega a symbol of Italy itself. In the 1920s, the brand enlisted well-known Futurist artist Fortunato Depero to create stylized advertisements. After the war, Guido Alberti helped to start the country’s famous literary prize, Premio Strega, named in the brand’s honor.

GOATReads: Politics

Care Work is Necessary for Anti-Imperialist Struggle

How can we think (and rethink and rethink) care laterally, in the register of the intramural, in a different relation than that of the violence of the state? —Christina Sharpe, In the Wake: On Blackness and Being

The Popular University for Gaza encampment at the University of Chicago was raided on May 7, 2024. Following a week of connection, building, and generativity, police officers invaded the physical and figurative structures we had built, ripping organizers out of tents and tearing down the protective walls surrounding them. Just hours afterward, we convened together to make sense of it all—to begin the processes of recovery and rebuilding. Through circles that met in-person and virtually in the weeks to come, my peers brought care to the forefront of our work, inviting rage, grief, hopelessness, longing, intimacy, and joy into the relationships forged during the encampment.

Our encampment existed in its physical form for only eight days, but the work of caring for one another has extended far past the date on which our physical structures were destroyed by the police. Care work that was seeded within the walls of our Popular University has sprung up from the ashes of the camp. This foundation of care continues to shape the way we organize against all forms of state violence. As we collectively pursued divestment from arms manufacturers that directly facilitate settler colonial violence in occupied Palestine, we also developed and deployed principled care tactics.

The history and politics of care work are inextricable from the tentacles of empire. For example, the gendered and racialized modes of commodified care that we often think of when we hear the word “caregiver” have roots in legacies of colonial and imperial exploitation. The long legacy of slavery, sharecropping, and domestic laboring in the United States is inextricably linked to the subjugation of Black American women.
Caregivers in home health-care settings, nursing homes, and childcare facilities are often precariously employed and underpaid immigrant women or women of color. In this model of caregiving, marginalized care workers tend to the bodies of those insulated from economic and social vulnerability. Simultaneously, most psychotherapists and medical doctors—those who are treated as care experts—are white, hailing from socioeconomic backgrounds that allow them to pay tens or hundreds of thousands of dollars for their training. The cost to access this kind of expertise makes it extremely difficult for most of us to afford comprehensive health care and psychotherapy in the US.

This is why a repositioning of care was necessary in the context of an anti-imperial encampment focused on demanding that the University of Chicago disclose investments in war, divest from genocide, and repair injustices perpetrated from Gaza to the South Side of Chicago. Far too often, care means that autonomy gets taken away from someone in need. The state’s violent role in stripping vulnerable people of their autonomy is often labeled “care” or “welfare.” The active repositioning of care as intramural at the Popular University for Gaza created opportunities for peers to care laterally for one another, rather than relying upon individuals positioned as experts or as care workers.

Rather than devaluing care or maintaining the power relations that keep it financially inaccessible, our care for one another within the walls of our Popular University created a structure of mutuality and sustainability. Repositioning care as a tool available to all of us was a means of decommodifying it, of taking it out of the imperialist context in which it is normally encountered. This repositioning allowed us to deploy care as a direct and oppositional response to violence targeted at Palestinians and those who act in solidarity with them.
Intramural care—care within the figurative walls of our popular university—generated new ways of relating to risk and relating to one another. As a tool for building political power, care allowed encampment members to negotiate risk together, to workshop and (re)develop strategies for sustainable organizing, and to inhabit a community with a shared commitment to mutual aid and protection.

Encampments are not a new organizing tactic, nor is thinking about how to provide collective, non-hierarchical, and lateral forms of care. As the Care Collective underscores in their book The Care Manifesto: The Politics of Interdependence, treaty camps (such as the one erected at Standing Rock in the fight against the Dakota Access Pipeline) also welcomed anyone who adhered to camp values, offering care that was “designed according to need, not profit”—food, education, health care, housing—for everyone in the community. Following this legacy, the Popular University for Gaza developed a robust care team.

At the Popular University for Gaza, our care team was made up of undergraduates, graduate students, neighbors, faculty, and staff. We contributed to encampment operations as medics, group facilitators, mental health workers, conflict mediators, and food and water distributors. As critical as collecting and distributing vital resources was to the encampment, our care work went beyond mere resource provision. As a social worker, I realized that developing a peer support guide might be a wise way to help campers develop strategies for maintaining the mental and emotional resources needed to keep the encampment running. This guide offered some individual-level tools for emotional regulation and grounding, and it also provided an outline for running processing/decompression circles. These circles were the first opportunity that many of us had to get to know individuals from outside of our programs, student organizations, or housing situations at the encampment.
Through this peer support guide, organizers who were not experienced in professionalized care provision were invited into a relational practice, a space of connection rather than a mystified series of expert techniques. These circles were also the first space where we were able to convene after our encampment was violently raided by the University of Chicago Police Department. I also developed a therapy referral network and launched a 24/7 on-call peer mental health line for organizers. To have built this infrastructure outside of the confines of health insurance or state surveillance feels exciting; when we think of care laterally, we realize our own capacity to provide for one another instead of risking increased involvement with a violent state apparatus.

Our collective approach to care, drawing from practices of peer support and mutual aid, was a risk-responsive approach grounded in interdependence. Among the mental health workers on our care team, it was our philosophy that anyone who wanted to run a decompression circle should be able to do so—rather than requiring a license or degree to offer support to peers at the encampment, the only requirement for running a decompression circle was to have first attended one as a participant. With this structure, mental health work was taken up by encampment community members with a wide range of ages, varied prior experiences with mental health care, and a vast array of academic and personal backgrounds. This diversity of experience among decompression group facilitators also allowed us to run affinity spaces for decompression, attending to the more specific needs that might arise among community members of color or disabled community members, for example.

Collective approaches to care at the Popular University for Gaza allowed us to use risk negotiation as a core organizing strategy. When determining our collective courses of action, conversations about arrest risk were central to our strategizing.
An arrest obviously has different consequences depending on things like one’s citizenship status, prior criminal-legal contact, and race. In the lead-up to any anticipated police presence at the camp, we spoke to one another frankly about who among us would be able to take on more risk, and we made safety plans for those of us who were unable to risk arrest. Lateral forms of care are also what allowed organizers to sustain over a week of encampment operations; by using a safety planning worksheet (from the Peer Support Guide) in mental health circles and conversations, organizers developed a sense of who they might reach out to in crises, in moments of overwhelm, and in moments of exhaustion. By prompting one another to concretely identify the human infrastructures of caregiving surrounding us, we were better equipped to tap in and rotate out as we struggled to meet the round-the-clock practical demands associated with maintaining and protecting our Popular University.

Finally, a lateral approach to caregiving allowed us to extend our community beyond the parameters of the institutionally defined University of Chicago. A community member without a student, staff, or faculty affiliation was present in the early days of the encampment, helping with construction and encampment operations. He was targeted by university police, and in response, encampment organizers mobilized jail support and a care package for him upon his return. This exchange of care—in the form of care for the encampment’s physical infrastructure and in the form of care for a community member—exemplifies a decommodified mutual aid relationship built upon respect and recognition rather than charity.

In writing about the role of care at the Popular University for Gaza, I do not want to lose sight of the way that these experiments were made possible because we were not under siege. We were not subject to snipers, airstrikes, famine, or water shortages.
We had the resources and safety that so many Palestinians have not had access to for months or years. As I foreground my own experiences of care, healing, and building power, I do so with the recognition that this encampment was a political formation designed to disrupt one university’s ongoing investment in genocide. But the care tactics at the Popular University for Gaza have created an infrastructure for ongoing disruption. Without care, our anti-imperial movements run the risk of replicating the dynamics of colonialism and empire, pushing feminized and non-white individuals into unrecognized care roles and reifying the expertise of the so-called “helping professionals” for those organizers financially stable enough to access them.

This experiment in caring laterally for one another is a mechanism to shift the dominant culture surrounding caregiving, turning care into a central organizing approach rather than an adjunctive resource for burnt-out organizers. Despite the successes of this experiment, there were also moments of failure: we failed to develop a robust set of norms for responding to harassment within the encampment, and we failed to sustain the level of security needed to protect ourselves from an armed police raid in the middle of the night. Although this was an imperfect experiment, it is one that connected me to people I would have otherwise never known, despite our overlapping involvement in the corporate entity known as the University of Chicago. It gave me a laboratory to think and rethink intramural care.

On the morning of May 7, hours after riot police destroyed the tents, art, library, mental health space, medic area, and prayer spaces of our Popular University, I defended my dissertation proposal.
After watching videos of my friends and loved ones getting dragged and shoved outside of the walls we had built within the academy—walls designed to foster different ways of relating to one another here—I stepped back into the walls of the University of Chicago. As a developing scholar of the welfare state, I began my proposal hearing with an additional citation, recognizing the learning that I had done in the prior several weeks as uniquely valuable to the work I hope to do in and outside of the academy. I study care provision within and around the walls of jails and prisons, but my approach to my work is forever changed by the relations of care that I witnessed and participated in during the encampment.

After my proposal defense, I walked through the quad, noticing that new patches of grass had been rolled out almost immediately in the hours after police officers invaded and destroyed the Popular University. I paused to take a photo of one patch of yellowed grass in the rectangular shape of a tent; despite their best efforts, a trace of the encampment persisted past the violent attempts to remove it all.

As a means of disrupting the university’s predominant ways of being and relating—the various forms of intellectualization, silencing, and disengagement that I associate with neoliberalized higher education—the encampment’s ethic of care was not extinguished by the raid. It persists in the form of spray bottles and goggles that are distributed at actions this autumn, in the debriefing circles and therapy referrals that have continued for months since the encampment was torn down. As a means of protest and critique, this evidence of care has persisted despite the university’s attempts to exterminate it. This evidence of care demonstrates the ongoing commitment of my fellow encampment organizers: to continue using care as a tactic in our work against state violence, against empire.

GOATReads: History

How the Industrial Revolution Fueled the Growth of Cities

The rise of mills and factories drew an influx of people to cities—and placed new demand on urban infrastructures.

The period of rapid technological advancement in the United States known as the Industrial Revolution may have taken place during parts of the 18th and 19th centuries, but its impact resonated for decades and influenced everything from food and clothing to travel and housing—particularly in cities. While U.S. cities like Boston, Philadelphia, New York City and Baltimore certainly existed prior to the start of the Industrial Revolution, newly established mills, factories and other sites of mass production fueled their growth, as people flooded urban areas to take advantage of job opportunities.

But that’s only part of the story. As the populations of cities continued to increase, these municipalities were faced with the challenge of how to handle the influx of people. Problems like the availability of housing, overcrowding and the spread of infectious disease had to be addressed as quickly as possible, or the newly industrialized cities risked losing their citizens and the factories that employed them. Here’s what happened.

Origins of the Industrial Revolution

The Industrial Revolution began in England in the mid-1700s, a few decades after the first steam-powered engines in the country were produced. The textile industry was the first to benefit from the emerging technology, like Richard Arkwright’s “water frame” (patented in 1769), James Hargreaves’ “spinning jenny” (patented in 1770) and Edmund Cartwright’s power loom (patented in 1786). Factories capable of mass-producing cotton fabric sprang up around the country. It didn’t take long for British industrialists to take advantage of the opportunities for manufacturing in the fledgling United States, and in 1793, Englishman Samuel Slater opened a textile mill in Pawtucket, Rhode Island.
Using technology developed in England, as well as new additions, like Eli Whitney’s cotton gin (patented in 1794), the industrialization of America continued.

Urbanization Begins in the United States

What is referred to as the American (or Second) Industrial Revolution started in the second half of the 19th century, as the country was rebuilding following the Civil War, its bloodiest conflict to date. At the same time, waves of immigrants from Europe started arriving in America in search of jobs—a large proportion of which were in factories in industrial cities.

“After the Civil War, the United States gradually transformed from a largely rural agrarian society to one dominated by cities where large factories replaced small shop production,” says Alan Singer, a historian at Hofstra University in Hempstead, New York, and the author of New York's Grand Emancipation Jubilee. “Cities grew because industrial factories required large workforces and workers and their families needed places to live near their jobs. Factories and cities attracted millions of immigrants looking for work and a better life in the United States.”

But the domination of cities didn’t happen overnight, according to Daniel Hammel, professor in the University of Toledo’s Department of Geography and Planning, and associate dean of the College of Arts and Letters. “Even during the Industrial Revolution, most Americans lived in the countryside,” he explains. “We were essentially a rural nation until about 1920.” Indeed, the 1920 U.S. Census was the first in which more than 50 percent of the population lived in urban areas. Even then, Hammel says, “we're not talking about massive cities; we're talking about small settlements, in many cases of 2,500 or 3,000 people.”

The 1870s also saw a rapid expansion of the country’s railroad system.
Prior to that period, in order for a city to be a manufacturing center, it had to be located somewhere with access to water, like an East Coast port (New York City or Boston), one of the Great Lakes (Buffalo or Cleveland), a canal (Albany or Akron) or a river (Cincinnati or Pittsburgh). But thanks to the continued growth of the railroad, places without developed water access, like Scranton, Indianapolis and Dayton, had the means to ship and receive supplies and goods.

The Industrialization of Agriculture

One of the byproducts of the Industrial Revolution was a shift in American farming methods, and, in turn, the amount of labor needed to work the land. “At one point, you needed a large family to be able to farm your land,” Hammel explains. “But with industrialization—particularly in the early 20th century—agricultural production became more mechanized, and we didn't need as much labor in rural areas.” That prompted (or in some cases, allowed) young adults who were no longer required on the family farm to seek opportunities in urban factories.

The industrialization of agriculture also affected African American tenant farmers living in the southern states, Hammel says. “All of a sudden, landowners didn't need as many people working on their land anymore, so they moved [the tenant farmers] off of it,” he notes. “And that was, in essence, the beginning of the Great Migration. From then through the World War II era, African Americans moved in huge numbers out of the Mississippi Delta, in particular, to the Midwestern cities.” Some of the most common urban destinations included Chicago, Milwaukee, Detroit, Cleveland, Kansas City, Pittsburgh and New York.
More People, More Problems

The Industrial Revolution caused towns to turn into cities, and existing cities to swell, both in terms of population—with new arrivals from Europe and rural areas of the United States—as well as their geographic footprint, now that they were home to factories and other buildings required in manufacturing. And while job opportunities were the main draw for most newly minted urbanites, that left them with the problem of having to find somewhere to live. For many, this meant moving into cramped, dark tenement buildings: some of which were already considered old, while others (particularly in Chicago) were hastily thrown together and of exceptionally low quality, Hammel notes.

But at the same time, Hammel stresses that population density itself isn’t a problem. “There were very wealthy, very healthy people living in extremely high density,” he explains. “But if you don't have much money, the density combined with the lack of light and lack of airflow in some of these tenements was a major issue.” Specifically, as Singer points out, it was a public health issue. “Rapid, unregulated urbanization meant overcrowding, substandard housing for working people, inadequate infrastructure (including water and sewage systems) and the spread of epidemic diseases like tuberculosis,” he notes.

Gradually, as there was wider understanding of how people got sick, cities created public health departments dedicated to reducing preventable illnesses and deaths through improved sanitation, hygiene, infrastructure, housing, food and water quality and workplace safety. Though many of these areas still remain works-in-progress, these societal advancements originally grew out of necessity, when the Industrial Revolution fueled the growth of American cities.

GOATReads: Psychology

Is narcissism really on the rise among younger generations?

A fresh investigation of vast numbers of young people from around the world has thrown up some surprising results.

In Metamorphoses, the ancient Roman poet Ovid describes the Thespian hunter Narcissus, who was so captivated by his own beautiful reflection in a pool that he remained rooted to the spot until he died. This warning against vanity and excessive self-love made him the namesake of a personality trait – narcissism. Two thousand years later, there are growing and persistent concerns in the West that, like Narcissus, the younger generations in society are becoming increasingly self-absorbed, vain and entitled. Commentators raising the alarm point to the many parents spoiling their offspring, dressing their children in shirts that say ‘Princess’, ‘Champion’ or something similar – and encouraging them to be overly confident. They highlight the countless vain posts on Instagram and other platforms where young people often appear self-centred. But is there really any truth to the idea that narcissism is on the rise, or is it just a popular myth?

Of course, commentary on the grandiosity of young people is hardly a recent phenomenon. In the 4th century BCE, Aristotle observed how the youth ‘have exalted notions, because they have not yet been humbled by life or learnt its necessary limitations; moreover, their hopeful disposition makes them think themselves equal to great things …’ Yet, at least on the surface, there are some intuitive grounds for thinking that something different might be going on in our era – and that narcissism might really be increasing. For instance, it seems natural to assume that interacting on social networking sites – especially from a young age – might lead to an inflated ego. Young people have grown up able to share their personal experiences with the world, alongside plenty of opportunities to embellish their everyday lives and paint a perfect picture of themselves that does not match their lived reality.
Creating such ideal self-images could arguably involve the danger of becoming narcissistic. Perhaps unsurprisingly, the term narcissism is thrown around a lot on social media platforms, with all kinds of people accusing others of being self-absorbed and egotistical. Under the hashtag #NarcTok, videos identifying supposedly narcissistic behaviours and how to cope with them are trending on TikTok – and a brief scan of the site suggests most of them are filmed by and for young people as they navigate relationships with their peers. What’s more, the idea that narcissism is on the rise was lent scholarly credence in 2008 by an influential study that reported grandiose narcissism had risen significantly among college students in the United States from 1982 to 2006. Grandiose narcissism is the brash, attention-seeking form of the trait that’s distinct from so-called vulnerable narcissism, which is associated with being thin-skinned and insecure. These researchers assembled studies that used the Narcissistic Personality Inventory (NPI) to measure grandiose narcissism. The NPI presents participants with a series of statement pairs, each comprising a narcissistic and a non-narcissistic statement (such as whether they expect others to do things for them versus they prefer to do things for others), and it asks them to indicate the statement that they identify with more. Choosing the narcissistic statement more often than the non-narcissistic alternative can be interpreted as an indicator of narcissistic tendencies. Based on the 2008 findings, some psychologists went so far as to declare a narcissism epidemic. The media impact of this research was enormous, spawning several popular science books that lamented the ever-increasing challenges of dealing with entitled youths.
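The forced-choice scoring the NPI uses amounts to a simple tally of how often the narcissistic option is picked. A minimal Python sketch (the item pairs below are invented stand-ins for illustration, not the actual inventory):

```python
# Minimal sketch of NPI-style forced-choice scoring.
# The item pairs are invented stand-ins, NOT real NPI items.
ITEM_PAIRS = [
    ("I expect a great deal from other people", "I like to do things for other people"),
    ("I like to be the center of attention", "I prefer to blend in with the crowd"),
    ("I am more capable than other people", "There is a lot I can learn from others"),
]

def npi_score(choices):
    """Count how often the narcissistic option was chosen.

    `choices` holds one index per item pair: 0 = the narcissistic
    statement, 1 = the non-narcissistic alternative.
    """
    return sum(1 for c in choices if c == 0)

# A respondent who picks the narcissistic option twice out of three:
print(npi_score([0, 1, 0]))  # -> 2
```

A higher tally across all pairs is then read as an indicator of stronger grandiose-narcissistic tendencies, exactly the quantity the trend studies track over time.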
In 2013, Time Magazine featured the ‘Me Me Me Generation’ on its cover, describing millennials as ‘lazy, entitled narcissists who still live with their parents’. In this vein, narcissism was described as a new plague ravaging the youth of America and beyond, with ensuing serious repercussions that might even have been responsible for the economic downturn of the emerging millennium. Some experts claimed that the narcissistic personality traits of younger generations had given rise to overconfidence and risky behaviour, leading to unsound economic decision-making and the inevitable crash of the stock markets in 2008. It’s an intuitively plausible argument. Overconfident youths overestimating their financial knowledge and future earnings, and carelessly taking out mortgages that they could not realistically afford, may have been at least partially responsible for the subprime mortgage crisis. However, in the world of academic psychology, the 2008 findings did not receive universal acclaim. In fact, at least four articles published between 2008 and 2013 from three independent research teams, led by Jeffrey Arnett, Brent Roberts and Kali Trzesniewski, failed to replicate generational increases in narcissism. Yet these papers attracted little media attention (‘The Youth Are OK’ doesn’t make such a great newspaper story) and the advocates of the epidemic cited various methodological issues with the replication attempts, such as that they focused on students from different university campuses at different times. With the idea of the youth narcissism epidemic continuing to dominate public perception, and the scholarly debates unresolved, we (together with our colleague Paul Stickel) recently attempted to replicate the original study using the very same methodological approach, but based on a substantially larger database and time frame.
We systematically assembled the entire available literature on this topic, screening almost 8,000 primary studies from several scientific literature databases and assessing the full texts of more than 4,000 of them. We ended up with the data of more than 540,000 participants, with an average age of 27 years and from 55 countries all over the world, who were administered the NPI for grandiose narcissism between 1982 and 2023. Based on this vast data set, we found no evidence for any increasing trends in grandiose narcissism across time. Not in the US, not in college students, let alone on a global scale. In fact, the only discernible time trend in narcissism scores across all investigated countries and populations was a negative one – meaning that narcissism scores have actually been decreasing. You might be wondering how we can possibly explain these decreases in narcissism, especially in light of the many supposedly narcissism-boosting aspects of modern life such as social media. But actually, empirical data gives little reason to assume that social media boosts our narcissism. On the contrary, the omnipresent necessity for a social media user to compare their imperfect self with people boasting seemingly flawless lives, appearances and experiences can have a negative effect on self-confidence and wellbeing, especially in young people. This means that social media is more likely lowering instead of increasing grandiose narcissism. What’s more, far from displaying greater entitlement, it appears prosocial behaviour has been on the rise among young people over the past decades. For instance, large-scale national surveys of incoming college students in the US found that recent participation in volunteer work increased from 66 per cent in 1990 to 84 per cent in 2008, alongside an increased desire to help others and participate in community action.
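The cross-temporal logic described above, regressing each primary study's mean NPI score on its year of data collection, weighted by sample size, can be sketched in a few lines (all numbers here are fabricated for illustration; the sign of the slope is what carries the conclusion):

```python
# Sketch of a cross-temporal trend test: regress each study's mean
# NPI score on its year of data collection, weighting by sample size.
# All values below are INVENTED for illustration, not real study data.
import numpy as np

years = np.array([1985, 1995, 2005, 2015, 2023])
mean_scores = np.array([16.2, 15.9, 15.7, 15.1, 14.8])  # mean NPI per study
sample_sizes = np.array([120, 300, 450, 800, 1500])

# Weighted least-squares line; a slope below zero means average
# narcissism scores decline as the year of data collection increases.
slope, intercept = np.polyfit(years, mean_scores, deg=1, w=sample_sizes)
print(slope < 0)  # negative slope in this invented example
```

With real study-level data, a confidence interval around the slope (rather than its sign alone) is what distinguishes a genuine decline from noise.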
Young people are also more tolerant of diversity, including differences in sexual orientations or identities, and young evangelicals increasingly endorse environmental stewardship. Consistent with these observations, antisocial behaviours have dropped in many countries. External factors may also have played a role in changing traits in beneficial ways in the younger generations, as in most Western countries psychosocial care and health services have become more accessible. Decreasing stigmatisation of seeking psychological help and increased state-subsidised assistance in more recent decades have led to more people seeking psychological treatments. Because narcissism is known to be associated with anxiety and depression (especially when someone feels criticised or humiliated), perhaps it makes sense that better awareness of mental health issues and access to treatments has helped to ameliorate narcissistic tendencies. All of the above trends are consistent with the downwards trajectory of narcissism that we observed over the past 40-plus years. In summary, there is no indication that young people nowadays are any more narcissistic than young people some decades ago. In fact, the concerns about an increasingly egotistical, self-absorbed and arrogant youth that will continue to endanger the economy appear to be unfounded and overly pessimistic. On the contrary, we have good reason to assume that young people nowadays are more ready to help, more tolerant and more prosocial in general, thus promising a positive outlook for the future.

What is a bubble? Understanding the financial term.

An AI bubble burst could be looming

We have endured the dot-com bubble and the housing bubble. And now, according to some experts, we may be in an AI bubble. As of mid-October, Wall Street is “growing louder with warnings that the artificial intelligence trade may be overheating” following “months of record gains in AI-linked stocks and corporate spending,” said Yahoo Finance. Still, “some analysts argue the market’s strength reflects conviction, not complacency, and that the AI trade, while stretched, still has fundamental backing.” Only time will tell which side is right when it comes to the potential AI bubble. But in the meantime, you can brush up on what exactly a bubble is — and what the consequences of one popping may be.

What is a stock market bubble?

A stock market bubble is a “significant run-up in stock prices without a corresponding increase in the value of the businesses they represent,” said The Motley Fool. Usually, this is driven by “highly optimistic market behavior,” said Investopedia. Then, when investors’ sky-high levels of optimism start to wane as they realize their hopes are not panning out, they all begin to sell off, sending stock prices tumbling and causing an abrupt contraction in the market. Take, for example, the dot-com bubble of the late 1990s: In the lead-up to this bubble bursting, “investors piled into any stock of just about any company with a website, regardless of its share price, revenue or profit outlook,” said U.S. News & World Report. Later, “when the dot-com bubble burst in 2000, the Nasdaq Composite Index dropped nearly 80% over the next two years.”

What are the signs of a bubble?

Surging stock prices do not necessarily indicate a bubble — assuming they are bolstered by a company’s strong performance. If, however, there is a mismatch between a company’s fundamentals and its valuation, that could suggest a bubble.
“During the height of market bubbles, prices often continue to rise even following bad news, such as earnings misses or analyst downgrades,” said U.S. News & World Report. Bubbles frequently emerge from stocks that carry a “compelling story” with a “promise to transform the world,” such as the advent of the internet, said Bankrate. This usually leads to widespread enthusiasm, with bubbles “marked by large groups of novice or amateur investors who believe experienced investors are behind the curve or simply just don’t ‘get’ the new market paradigm,” said U.S. News & World Report.

How can a bubble affect investors?

While bubbles can benefit investors who get in early, “many investors end up losing a lot of money during market bubbles because they don’t start buying until asset prices are already significantly overvalued,” said U.S. News & World Report. The effects of a bubble are not necessarily isolated to those who chose to invest, either. When a bubble bursts, it tends to precede a “downturn in the economy, creating a recession,” said Bankrate, which can lead to declining portfolio values and even layoffs.

GOATReads: History

Treasure Trove of Shipwrecks Along China’s Coast Reveals How East Met West on the Maritime Silk Road

Sunken finds in the South China Sea testify to rich trade networks used over hundreds of years. The sea routes brought porcelain, tea and other goods from Asia to Africa, the Middle East and Europe

On a spring day in 1848, the first Chinese junk to sail Atlantic waters arrived at the East India Docks in London. To Charles Dickens, the roughly 800-ton Keying looked nothing like a long-distance trader, instead resembling a “floating toy shop” and a grotesque “torture of perplexity.” China had no idea how to sail the seven seas, the British author argued, in typically imperialist fashion. Western thinking has for centuries clung to the belief that China’s rulers distrusted sea trade. After all, the Great Wall of China, swerving between valley, mountain and sky for some 13,000 miles, was built to keep invaders out and landlock China’s citizens in the bosom of the empire. In the 21st century, however, China is positioning itself as a civilization steeped in maritime history that equals or surpasses colonial Europe. At a 2017 forum to promote China’s trillion-dollar Belt and Road Initiative, President Xi Jinping said, “Our ancestors, navigating rough seas, created sea routes linking the East with the West, namely, the Maritime Silk Road. These ancient silk routes opened windows of friendly engagement among nations, adding a splendid chapter to the history of human progress.” China’s seafaring ways took off under the Tang dynasty (618-907 C.E.), when Persian merchants in the vein of the mythical Sinbad the Sailor shouldered the risks of life on the water. For the next millennium, until the Opium Wars shattered peaceful trade relations in the mid-19th century, a Maritime Silk Road ran from northeast China to the Red Sea. Silk was just the tip of the iceberg: Cargo ships were crucial for transporting bulk goods, including porcelain, chests of tea, spices and medicines. Until recently, evidence of this sunken past was largely hidden from sight.
Today, however, an array of Chinese coastal cities, from Quanzhou to Hepu, compete to be recognized as the starting point of the Maritime Silk Road. To some observers, the Maritime Silk Road is more of a modern political invention than a historic version of the land-based Silk Road, which ran from northwest China all the way to the Mediterranean coast of southeast Turkey. But stunning new shipwreck discoveries deep under China’s seas support this grand naval narrative, shedding sensational light on a mighty ocean road to rival the Great Wall.

A pair of stunning Ming dynasty shipwrecks

The China (Hainan) Museum of the South China Sea, which opened on Hainan Island in 2018, is a cultural gem that shines a spotlight on this rich history. The museum displays artifacts from more than 15 shipwrecks found across China. Under the Western Han dynasty (206 B.C.E.-25 C.E.), the kingdom’s southern shores became a “golden waterway” for ships sailing the Maritime Silk Road, one of the museum’s exhibitions explains. During the Song dynasty (960-1279 C.E.), trade peaked, and ships sank in ever-greater numbers. The museum’s displays trace this history and beyond, covering the period between the 10th and 19th centuries. Inside the museum, masses of porcelain and pottery are rewriting the art history and dates of ceramic styles crafted in China’s legendary kilns. Also on view are life-size stone statues from the Qing dynasty (1644-1911), representing the Daoist deities Fu, Lu and Shou (Fortune, Prosperity and Longevity). The sculptures were commissioned for ancestral halls and temples but ended up wrecked off Coral Island while heading to Chinese expatriate communities in Southeast Asia. Other displays showcase encrusted treasures, including Ming dynasty (1368-1644) copper coins and Edo era (1603-1868) Japanese gold coins. A couple of rusted iron cannons lost during naval patrols in the South China Sea in the early 18th century stand nearby.
But it is the fruit of China’s most recent maritime exploration that takes pride of place in the museum. Around 93 miles southeast of the city of Sanya on Hainan Island, two mid-Ming dynasty shipwrecks, untouched by fishing trawlers or trophy-hunting divers, lie at a depth of about 4,920 feet. Discovered in 2022, the wrecks are a major breakthrough in deep-sea underwater archaeology by China, which is investing $4.7 million annually in their study. The vessels are also game changers in understanding how goods flowed in and out of China—how East met West. One ship, sunk during the reign of the Zhengde emperor (1505-1521), holds a multicolored mound of some 100,000 pieces of porcelain and metalware. The second ship, wrecked with a cargo of wooden logs when the Hongzhi emperor was on the throne between 1487 and 1505, looks less spectacular but has a unique backstory. The site mapping, cargo recovery and forthcoming excavation are the product of a collaboration between the museum, China’s National Cultural Heritage Administration, and the Institute of Deep-Sea Science and Engineering at the Chinese Academy of Sciences. As a museum exhibition explains, the “perpetually dark environment is characterized by low water temperature, immense hydrostatic pressure and extremely limited visibility.” It’s far too deep for human divers, so China deployed the Deep Sea Warrior, a 20-ton, 30-foot-long bathyscaphe submersible that took eight years to build and can spend up to ten hours underwater with a three-person crew. The submersible has recorded the Ming wrecks using high-resolution sonar and a tool that allows scientists to peer beneath the surface mud. So far, archaeologists have recovered 961 finds from the Zhengde trader, building a staggering collection of blue-and-white bowls, glazed vases, green celadon, iron-red porcelain and three-color sancai wares, says Zheng Ruiyu, head of the exhibition department at the China (Hainan) Museum of the South China Sea. 
The wreck is colossal, with a height difference of ten feet between jars stacked in the upper hold and bowls crated near the keel. Between these storage spaces lies a vast void created by the decay and collapse of wooden bulkhead compartments and the decomposed organic or liquid cargo once stored there. These goods might have been headed from China’s Guangdong Province to markets in Southeast Asia and buyers in the Muslim countries of West Asia, who commissioned custom-made plates for communal dining. The junk’s merchants catered to everyone from street vendors to high-ranking officials and nobles, “a multilayered market structure … demonstrating a sophisticated overseas trade strategy aimed at satisfying the needs of a broad range of social classes,” says Zheng. Blue-and-white bowls, coarsely painted with motifs such as double lions playing with a pearl, found buyers in middle-class markets. A large blue-and-white jar decorated with a scene of the mythological Eight Immortals, meanwhile, was a top-tier good “whose craftsmanship and decorative quality are in no way inferior to the treasured heirlooms in the Palace Museum,” a sprawling complex in Beijing’s Forbidden City, Zheng says. The wreck’s exquisitely designed fahua vases are also “unprecedented” in quantity and quality, he adds. The slip-trailing of piped lines of clay onto one of the vase’s surfaces created a distinct three-dimensional effect, showcasing peacocks and peonies amid a sea of clouds. A humble 38 finds have surfaced so far from the Hongzhi trader, which was mainly transporting at least 660 wooden logs. Still, the wreck fills a gap in knowledge about return voyages sailing the Maritime Silk Road back to China. “The presence of two ships, one outbound and one inbound in close proximity, proves they were following a mature and busy ancient maritime route,” says Zheng. 
Derek Heng, a historian at Northern Arizona University who was not involved in the research project, says that the ship’s cargo of raw materials is a missing scientific link. While ceramic, which is resistant to degradation underwater, holds “an iconic importance in global art history,” he explains, “the most valuable trade between East and West, other than gold and silver, was in natural products, including foods, spices and medicines that tend to decompose quickly in marine environments.” The roughly ten tons of logs, the longest of which measure more than eight feet, are ebony wood stacked in two parallel lines along the ship’s width. Ebony, or “blackwood,” as it was known at the time, is native to tropical Asia and Africa and “was highly prized for its jet-black color and smooth surface when polished,” says Heng. During the middle years of the Ming dynasty, he adds, Chinese furniture-making created demand for ebony, which was “all the rage among China’s literati class.” Examples included writing and painting tables used in scholars’ studios and high-ranking officials’ armchairs. China’s love of ebony wood furniture took off during the reign of the Jiajing emperor (1521-1567). The deep-sea ship lost in the South China Sea likely sank before the end of the Hongzhi emperor’s reign in 1505, making it a trailblazer predating the golden age of Chinese furniture. Deer antlers and Turbo marmoratus shells found on the Hongzhi trader are even more curious survivors of Ming-era Asiatic trade. Deer were venerated in Chinese society as symbols of longevity and prosperity, and their antler bones were used in arts and crafts. The snail shells, found everywhere from Hainan Island to Japan to the Philippines, were valued for their pearl-like shine and used in furniture inlays.

The Ming dynasty’s ban on maritime trade

The wrecks’ existence is all the more significant due to the widespread ban on sea trade during the Ming dynasty.
In 1368, after nearly a century of Mongol rule in China, the Hongwu emperor—the founding emperor of the Ming dynasty—ascended to the throne. Born into a poor peasant family, he began his reign by resetting the clock to year one and issuing an ancestral decree “to promote frugality and constrain luxury.” He would rule through a traditional Confucian way of agriculture before business. Material desires and indulgences were curbed. To set an example for his subjects, Hongwu ordered the gold to be stripped from his royal coach. His wife, the Empress Ma, washed the family’s clothes at the palace creek. The emperor also banned overseas trade. Rocks and pine stakes blockaded harbors, and 450 naval garrisons studded the shores to stop foreign landfall. Violating the absolute ban on foreign sea trade could mean the death penalty, but some smugglers deemed the risk worth it. The mid-1500s saw an outpouring of Chinese piracy, with sailors like the enslaved outlaw Wu Ping disrupting the high seas. To keep the profits from sea trade rolling, a frenzy of wokou (a Chinese term that translates to “Japanese pirates”) pillaged the east coast of China between 1369 and 1576. Despite their name, the raiders also included Chinese people from mountainous Fujian Province, who needed seafaring to survive. “Pirates and traders are the same people,” said Tang Shu, a provincial officer in Fujian, in 1516. “When trade flourishes, pirates become traders, and when trade is banned, traders become pirates.” The deep-sea trader wrecked with porcelain cargo in the South China Sea sailed under the Zhengde emperor, in the early 16th century, when the sea ban first began to be relaxed. Zhengde knew the prohibitions harmed China. But it wasn’t until 1567 that officials lifted the maritime ban. Almost overnight, piracy vanished.
At Huzhou in eastern China, Portuguese galleons were crammed with more than 1,000 cases of fine fabrics per ship and sent to Spanish-controlled Manila in the Philippines, the Spanish Empire’s leading trans-Pacific port. A Chinese cargo ship wrecked in 1580 and discovered off the coast of Indonesia in 2010 is believed to hold a staggering 500,000 artifacts, chiefly Chinese porcelain. To feed the West’s obsession with porcelain, some one million souls worked day and night at 3,000 kilns in smoky Jingdezhen, in Jiangxi Province. As the imperial scholar Wang Shimou put it in the late 16th century, “Tens of thousands of pestles shake the ground with their noise. The heavens are alight with the glare from the fires, so that one cannot sleep at night. The place has been called, in jest, ‘The Town of Year-Round Thunder and Lightning.’”

The rise of maritime trade under the Song dynasty

While the Ming dynasty resonated in the West as an age of great enlightenment (Ming literally means “bright”), it was under a predecessor, the Song dynasty, that the greatest leap in China’s maritime history took place. In the mid-12th century, inflation spiked due to the loss of northern China to Jurchen horsemen. The Song government went bankrupt after spending the vast majority of its yearly revenue on the military. Desperate action was needed to create a new order out of the wreckage of the old. The answer lay beyond the Great Wall, in maritime trade. Under the Emperor Zhezong, between 1085 and 1100, 600 ships were built annually. By 1259, the city of Ningbo alone was home to nearly 8,000 junks and fishing boats. Song ships sailed with game-changing technology, particularly the south-pointing needle compass that appeared earlier in China than it did in the West. The Song dynasty scholar Wu Tzu-Mu knew that for ships leaving the imperial silk factories of Hangzhou, the sea was “the abode of mysterious dragons and marvelous serpents.
At times of storm and darkness, they travel trusting to the compass alone. … If there is a small error, you will be buried in the belly of a shark.” The Italian merchant Marco Polo saw firsthand how the new breed of ocean junk, built with up to 60 cabins for merchants’ privacy and crewed by as many as 300 sailors, had space for the equivalent of 6,000 baskets of black pepper. By the late 11th century, up to 13 watertight compartments in a single ship stopped delicate goods from spoiling. The technology was more reminiscent of the supposedly “unsinkable” Titanic than the medieval world. The West only caught on to this knowledge in the 1780s, when Benjamin Franklin suggested in a letter that Westerners adopt the Chinese practice of dividing “the hold of a great ship into a number of separate chambers by partitions, tight caulked, so that if a leak should spring in one of them, the others are not affected by it.” England claimed the invention of the watertight bulkhead as the brilliance of Sir Samuel Bentham in 1795; the engineer’s wife later admitted that his eureka moment was borrowed from the boats he saw along the Shilka River on Russia’s border with northeast China. A wreck featured prominently in the China (Hainan) Museum of the South China Sea shows how the technology was pioneered in Song China 750 years before the Titanic set sail. Known as the Huaguangjiao One wreck, the ship probably began its final voyage around 1162, in the port of Quanzhou in Fujian, the start of the Maritime Silk Road during the Song dynasty. Quanzhou was home to an estimated 200,000 people and was considered by many merchants of the time to be the greatest harbor in the world. During its journey, the Chinese junk found itself trapped among the Xisha Islands, known by Europeans as the Paracel Islands. The labyrinth of small islands, reefs and sandbanks in the South China Sea was notorious by the 17th century as “dangerous ground” for wrecking Western ships. 
The junk spiked its keel on the razor-sharp Huaguang Reef. Piles of ceramics, iron bars, copper goods and coins came to rest at a depth of nearly ten feet. They lay forgotten there for the next eight centuries. Local fishermen stumbled onto the wreck in 1996. A formal excavation, conducted by marine archaeologists assembled from across China in 2007, eventually recovered more than 10,000 artifacts and 511 wooden hull timbers. The wreck site, the team discovered, had already been severely damaged. “The site’s surface was scattered with broken porcelain shards and coral concretion,” says Zheng. “Illegal looting had dealt a devastating blow to the site. Bulkheads were pried open, planks torn away and there was even evidence of explosives being used. The looters’ violent methods caused irreversible damage far beyond what could be caused by normal fishing activities like trawling.” By good fortune, the bottom of the hull survived, pressed against the reef for a length of roughly 55 feet and a width of 23 feet. Careful study brought to life a remarkable story. The keel and bulkheads matched the Fujian style of oceangoing ship, known in the Song dynasty for its hull, which was flat on the top and “sharp as a blade” on the sides, according to Zheng. Fujian junks—built with 80 percent pine wood—were reinforced with five to six layers of planking, which was needed to withstand damage from saltwater clams known as shipworms. The planks were fastened with iron nails and sealed with pine resin. Ten watertight bulkhead compartments boosted the ship’s resistance to sinking, albeit unsuccessfully in this case. The ship’s cargo of blue-and-white porcelain, celadon and brown-glazed porcelain, bowls, plates, boxes, pots, bottles, jars, and urns represented “inexpensive yet well-made goods,” says Zheng. 
They “reflect a model of mass-produced, standardized production, indicating that the overseas market at the time was dominated by mass consumption rather than luxury goods.” One key find, a white celadon bowl painted with the words “made by Pan Sanlang in Ren Wu Year,” dated the ship to the 32nd year of the Song Emperor Gaozong’s reign: 1162. This telltale artifact, as well as other objects recovered from the wreck, is proudly displayed in the China (Hainan) Museum of the South China Sea. The Huaguangjiao One wreck has “milestone significance in the history of Chinese underwater archaeology,” says Zheng. “It is, to date, the only ancient ship in China’s ‘far-sea’ waters to be fully uncovered, mapped, dismantled and recovered by a Chinese team.”

The legacy of the Maritime Silk Road

The wealth of underwater discoveries made in recent years leaves no doubt that China’s maritime influence has long been overlooked. As a Spanish Dominican missionary stationed in China observed in 1669, “There are those who affirm that there are more vessels in China than in all the rest of the known world. This will seem incredible to many Europeans, but I, who have not seen the eighth part of the vessels in China, and have traveled a great part of the world, do look upon it as most certain.” In antiquity, a single ship could stow more goods than three caravans of camels trudging along the desert Silk Road. When it came to bulk transport and speed, the sea was the only option, as the sprawling cargoes of the deep-sea South China wrecks show. Starting this fall, and continuing over the next three years, China plans to excavate, recover and display some 10,000 additional artifacts from the two wrecks. “A series of major archaeological discoveries and high-profile publicity and exhibitions have captured the public’s attention,” says Zheng. Underwater archaeology, he adds, has transformed “from a mysterious professional field into something accessible.
… Shipwrecks and their cargoes of exquisite porcelain and other goods are seen as a microcosm of the prosperous ancient Maritime Silk Road and powerful proof of our ancestors’ development and use of the oceans in the South China Sea and beyond.” What else remains to be found? According to Zheng, some experts estimate that perhaps 100,000 ships sank off China’s coast after the Song dynasty ended in 1279. More conservative calculations suggest that around 2,000 to 3,000 are preserved as wrecks, ready to resurface for study. “The potential for future discoveries in Chinese underwater archaeology is immense,” Zheng says. And now, with exploration plunging into deep seas, he adds, “This ‘final frontier of archaeology’ holds limitless potential.”

What True Wealth Looks Like

Money can make you happier, but only if you don’t care about it. Many stressed-out people are attracted to eastern meditation, believing that it will give them relief from their “monkey mind” and lower their anxiety about life. Unfortunately, the monkey usually wins because people find the mental focus required for meditation devilishly hard. On a trip last year to India, I asked a Buddhist teacher why Westerners struggle so much with the practice. “You won’t get the benefit from meditation,” he said, “as long as you are meditating to get the benefit.” You might call this the “meditation paradox,” and it seemed like the most Buddhist thing I had ever heard. But when I thought about it more, I realized that the teacher’s epigram held a deep truth about a lot of life’s rewards: You can only truly attain them when you are not seeking them. Consider the relationship between money and happiness, about which you’ve no doubt received mixed messages your whole life. On the one hand, your grandmother probably taught you that money can’t buy happiness. On the other, today’s dominant culture insists that it can. So who’s right: grandma or the zeitgeist? The meditation paradox provides the answer: both. Money can buy happiness—as long as you don’t try to buy happiness. Social scientists have long studied whether money raises well-being. The conventional answer from economists is yes, at least up to a point. The most famous study supporting this came in 2010 from two Nobel laureates who calculated that various measures of life satisfaction increase with a person’s income up to about $75,000 ($112,000 in today’s dollars), at which point very little benefit is derived from extra money. Since then, this finding has been partly contested by scholars such as Matthew A. Killingsworth, who showed in an excellent study using a much larger data set that the happiness plateau generally occurs at a higher income level. 
According to psychologists, the answer to the money and well-being question is a bit different: The cash-happiness quotient depends more on the type of relationship you have with money than on the actual amount of money you have. Researchers writing in the Journal of Personality and Social Psychology in 2014 demonstrated this mechanism by looking at materialism, defined as “values, goals, and associated beliefs that center on the importance of acquiring money and possessions that convey status.” Analyzing 259 data sets on the subject, they found that materialistic values are negatively correlated with overall life satisfaction, mood, self-appraisal, and physical health. Conversely, these values were positively associated with depression, anxiety, compulsive buying, and risky behaviors. That’s what your grandma was talking about.

We can be even more precise when we look specifically at the reasons people give for why they earn their money. According to a 2001 article in the same journal, psychologists found no negative association between well-being and acquiring money for the fundamental purposes of security or supporting your family. The problem comes from wanting to earn money for four particular motives: making social comparisons, seeking power, showing off, and overcoming self-doubt. Put simply, if you are striving to get rich to feel superior to others, or because you’re trying to boost your self-worth, your efforts will lower your happiness.

These findings reinforce what I have written about in the past: that your well-being depends on how you spend your money. Buying possessions generally does not increase happiness, whereas spending money either on experiences enjoyed with loved ones or to get more free time does reliably raise well-being. This makes intuitive sense when you picture the type of person who buys a flashy watch or a fast car to make a point, rather than renting a nice place to spend a quiet week away with a soulmate.
So the research suggests that money follows a version of the meditation paradox: It’s good for your well-being as long as you don’t seek money because you believe wealth will enhance your well-being. This in turn suggests three positive changes that you can make.

1. Interrogate your financial motives. If this essay has alerted you to the fact that your motives for earning money matter for your happiness—and that making a lot of money is important to you—you may be asking yourself why. Take some time to consider what images enter your mind when you imagine reaching your financial goals. Do you see yourself being admired or envied by others? Do you feel as though you’ve made it, and are finally worthy of approval? These images might reflect your motivations, but they are terrible for your well-being. (Another point to bear in mind: If your financial motives are indeed social comparison and self-worth, you will never reach your financial goals, because you will never have enough money to satisfy these needs.) Simply recognizing your true motives and choosing better ones—such as “I earn money to support the people I love the most”—will start you on a better path.

2. Take a vow of poverty—or at least modesty. Francis of Assisi, the 13th-century Italian Catholic mystic and founder of the Franciscan order of Catholic priests and monks, began his life as a wealthy nobleman. His enlightenment came in his early 20s, when he had a vision in which he was called to give away all of his riches and live in poverty. This became the basis of his order, which he claimed would bring great joy to its members. “Blessed be my brother who goes out readily, begs humbly, and returns rejoicing,” he is said to have proclaimed to a member of his order. I won’t ask you to live in poverty and turn to begging, but one small way to detach yourself from money-based social comparison (and earn a bit of Franciscan rejoicing instead) is to renounce consumption of the most opulent items you might buy.
For example, instead of choosing the priciest, most ostentatious car you can afford, purchase one that is down a few rungs in price and status. I try to practice this; I won’t claim it as a path to sainthood, but it has helped remind me that my economic success does not represent who I am.

3. Spend quietly. And what should you do with your leftover discretionary money? Here’s a useful answer for happiness: Spend it on experiences with people you love—without being showy about it—and on meaningful activities. So, for instance, go away for the weekend with a friend or partner and make a point of not posting a single picture of your getaway on social media, because that will probably lower your enjoyment of the experience. In fact, consider not taking any pictures, and instead resolve to be fully present, because that will surely enhance the experience.

One last idea, returning to the Buddhist tradition: In Zen, the meditation paradox is commonly illustrated using koans, which are riddling statements or puzzling epigrams that monks are taught to contemplate to help them move beyond logical thinking and reach a deeper understanding of life’s meaning. Here is a koan of my own devising that might capture the broader point in this essay: A man became rich by getting rid of his gold. The superficial message of this aligns with the research that has shown how giving away your money to worthy causes raises your happiness. That’s fine and good. But ponder this koan more deeply, and see what it tells you. Ask what you consider gold—not just money, but any asset, talent, or strength you might be tempted to display, to demonstrate your worth to yourself and others. List those things that set you apart. Then contemplate how you could use them in a way that is not self-aggrandizing but that brings blessings to the world, and watch your fortune grow.

The Feminist Who Inspired the Witches of Oz

The untold story of suffragist Matilda Gage, the woman behind the curtain whose life story captivated her son-in-law L. Frank Baum as he wrote his classic novel

Every living generation has been petrified by The Wizard of Oz. Early in the 1939 film, a cranky neighbor riding her bicycle through a tornado suddenly transforms into a witch. She soars off on her broomstick, tilting her head back and screaming with laughter as her cloak billows out behind her.

The 1900 book by L. Frank Baum, The Wonderful Wizard of Oz, inspired the film’s theme of good and evil sorcery. All the witches in the story have magical powers. They can fly, materialize at will, and see all things far and near. But while the Witches of the North and South are kind and supportive, the Witches of the East and West are seen as evil. “Remember that the Witch is Wicked—tremendously Wicked—and ought to be killed,” the Great Oz bellows to Dorothy as she heads off to the west.

The backstories of the Wicked Witch of the West and Glinda the Good are the subject of the upcoming movie Wicked, based on Gregory Maguire’s 1995 novel and Winnie Holzman and Stephen Schwartz’s 2003 stage musical. The witch, who is unnamed in The Wizard of Oz, has a name in Wicked: Elphaba, an homage to the initials of L. Frank Baum. (His first name, which he rarely used, was Lyman.) But the real-life backstory of the witches of Oz is just as fascinating. It involves a hidden hero of the 19th-century women’s rights movement and the most powerful woman in Baum’s life: his mother-in-law, Matilda Electa Joslyn Gage.

It was likely at Gage’s urging that Baum began submitting his poems and stories to magazines. Gage even suggested putting a cyclone in a children’s story. But she was a notable figure in her own right. As one of the three principal leaders of the women’s rights movement, along with Susan B. Anthony and Elizabeth Cady Stanton, Gage was known for her radical views and confrontational approach.
At the Statue of Liberty’s unveiling in 1886, she showed up on a cattle barge with a megaphone, shouting that it was “a gigantic lie, a travesty and a mockery” to portray liberty as a woman when actual American women had so few rights.

After male critics branded Gage as satanic and a heretic, she became an expert on the subject of witch hunts. Her 1893 manifesto Woman, Church and State chronicled the five centuries between 1300 and 1800 when tens of thousands of human beings, mostly women, were accused of witchcraft and put to death by fire, hanging, torture, drowning or stoning. In one gruesome scene, she described 400 women burning at once in a French public square “for a crime which never existed save in the imagination of those persecutors and which grew in their imagination from a false belief in woman’s extraordinary wickedness.”

Gage died two years before the publication of The Wonderful Wizard of Oz, a story that produced the most enduring image of female wickedness in American history. But Baum also introduced the world to a different kind of witch. It was the beautiful and benevolent Glinda, likely inspired by Matilda herself, who showed Dorothy that she always held the power to return home.

Born March 24, 1826, Gage grew up as Matilda Joslyn north of Syracuse, New York, the only child of Helen Leslie and Hezekiah Joslyn, the town physician. The couple gave their daughter an unusual middle name, Electa—a Greek word that meant “elected” or “chosen one.”

Hezekiah Joslyn was a freethinker who taught his daughter that wisdom comes through one’s experiences. Freethinkers of 17th-century Europe challenged church authority, demanding the end of medieval witch hunts. Many of those backing the American Revolution also called themselves freethinkers, including Thomas Paine, whose 1776 pamphlet, Common Sense, helped instigate independence from England.

Gage’s parents were staunch abolitionists whose home was a station on the Underground Railroad.
Escaped enslaved people hid under the floorboards of the kitchen. Gage was home-schooled in Greek, mathematics and physiology. At age 15, she set off for the Clinton Liberal Institute, a boarding school that promised an education free of religious dogma. At 18, she married Henry Gage, a merchant and store owner, and they settled in the Syracuse suburb of Fayetteville, where three of their four children were born in the 1840s. Their youngest daughter, Maud, arrived in 1861.

Haunted by injustice, Gage only grew fiercer in her thinking. She was furious at an America failing to live up to the ideal of liberty for all expressed in the Declaration of Independence. She was unable to leave her children at home to travel 60 miles to the inaugural 1848 National Women’s Rights Convention in Seneca Falls, New York. But by 1852, when the third convention came to city hall in Syracuse, she was ready to speak her mind to the crowd of 2,000. “There will be a long moral warfare before the citadel yields,” Gage proclaimed. “In the meantime, let us take possession of the outposts. … Fear not any attempt to frown down the revolution.”

Afterward, Gage found herself locked in a war of words with religious leaders. One local minister called the convention “satanic,” while another denounced the women as “infidels.”

After the Civil War, the leaders of the movement formed the National Woman Suffrage Association, with Stanton as president, Anthony as secretary and Gage as chair of the executive committee. On Election Day, 1872, Anthony was arrested and jailed for voting. Gage was by her friend’s side for the trial near Rochester. The judge was “a small-brained, pale-faced, prim-looking man,” Gage wrote. “With remarkable forethought, he had penned his decision before hearing” the case. The resulting publicity made Susan B. Anthony a household name.
In 1876, a six-month celebration of the centennial of the Declaration of Independence attracted nearly ten million Americans—roughly a quarter of the entire U.S. population—to Philadelphia. The activists petitioned President Grant to make a declaration of their own at the opening ceremony. The request was denied. That wouldn’t stop the suffragists.

At the ceremony, Anthony, Gage and three other leaders lurked behind the press section. Gage clutched a three-foot scroll and marched through the crowd of 150,000 people toward the podium. She passed the document to Anthony, who then placed it into the hands of the master of ceremonies, announcing: “We present to you this Declaration of the Rights of the Women Citizens of the United States.” Before the guards could catch them, the suffragists quickly handed out printed copies of the declaration to the reporters in the crowd: “The women of the United States, denied for one hundred years the only means of self-government—the ballot—are political slaves, with greater cause for discontent, rebellion and revolution, than the men of 1776. … We ask justice, we ask equality, we ask that all the civil and political rights that belong to citizens of the United States be guaranteed to us and our daughters forever.”

Afterward, the women decided to record their struggles in a book, which eventually ballooned to comprise six volumes. The mammoth undertaking took a decade to finish. Most of the labor was split between Stanton and Gage, who completed the History of Woman Suffrage, Volume I, in 1881.

The book’s publication coincided with Maud Gage’s freshman year at Cornell, the first Ivy League university to become coeducational. Maud resided in the female dormitory, Sage College, a magnificent brick building that still stands on the Ithaca, New York, campus. One Saturday evening in February 1881, the young women of Sage were treated to a lecture by Maud’s mother on the subject of women’s suffrage. “A large audience greeted Mrs.
Gage on Saturday evening,” reported the Cornell Daily Sun. “Her discourse was well received.”

Still, Maud had a difficult time at school, left out of social clubs and mocked by college boys. “Her name is Gage and she is lively,” her schoolmate Jessie Mary Boulton wrote home in a letter. “A girl scarcely dares look sideways here. I came to the conclusion long ago that Cornell is no place for lively girls.” When Boulton co-founded a new chapter of Kappa Alpha Theta, the first sorority on an Ivy League campus, Maud was not on the membership rolls. Boulton also joined the Lawn Tennis Club for ladies, again without Maud.

But fate would have its way: The 20-year-old student shared her room with a girl from Syracuse named Josie Baum, who introduced Maud to her cousin Frank, a 25-year-old bachelor, at a family party on Christmas Eve. At the time, Frank Baum was a failed chicken farmer who was writing and starring in his own touring stage plays. The two hit it off, and after a proper period of Victorian-era courtship, Baum proposed marriage. At first, Gage called her daughter “a darned fool” for wanting to drop out of college to marry this itinerant playwright and actor, a most disreputable profession. Yet a wedding date was set for November 1882. With a string quartet playing, the wedding took place at the Greek Revival home of the Gage family. “The promises required of the bride were precisely the same as those required of the groom,” noted a local newspaper, in apparent surprise. (At the time, it was standard for the bride, but not the groom, to promise to “love and obey.”)

Now a designated New York State commemorative landmark owned and operated by the Matilda Joslyn Gage Foundation, the Gage House serves as a museum and dialogue center hosting programs on social justice led by progressive scholars and activists.
One of its advisory board members, feminist icon Gloria Steinem, called Gage “the woman who was ahead of the women who were ahead of their time.”

After the September 1884 death of her gentle and solemn husband Henry—an inspiration for the character of Uncle Henry in The Wizard of Oz—Gage threw herself even more fully into her work. Among many other projects, she’d been collaborating with Anthony and Stanton on the first two volumes of the History of Woman Suffrage. In the mid-1880s, though, Anthony came into some money and took over the rights to the series herself, paying Gage and Stanton for their shares.

Meanwhile, Gage was increasingly frustrated by the more conservative leanings of her fellow suffragist leaders. Anthony, in particular, had strong ties with the temperance movement, which blamed alcohol for the bad behavior of men but also had an overtly religious vision for national politics. Gage disapproved of the alliance between the women’s suffrage movement and the Woman’s Christian Temperance Union, which began in 1874. In 1890, Gage left the National Woman Suffrage Association and founded the Woman’s National Liberal Union, which fought for the separation of church and state and drew attention to the religious subjugation of women.

As for the young Baums, Frank and Maud rented a house in Syracuse and had the first of their four sons. The new dad brought in steady money as the superintendent and sales manager for Baum’s (pronounced Bom’s) Castorine Company, a family business that created lubricants for buggies and machinery, a firm that still operates out of Rome, New York. Despite the outfit’s success, Baum grew bored. Yet he would never forget his days selling cans of oil, making the item a must-have for the Tin Woodman, who always needs a few drips to avoid rust.
Gage corresponded regularly with her son and two other daughters who had settled in the Dakota Territory, joining half a million other Easterners catching “Western fever.” As Baum heard about their lives, he yearned for a more adventurous one. Relocating with his family in 1888 to the new “Hub City” of Aberdeen, in what became South Dakota, he established a novelty shop on its main street called Baum’s Bazaar. The store failed in just 15 months, as Baum misjudged the clientele, focusing on frivolous toys and games and impractical items like parasols and fancy wicker. “Frank had let his tastes run riot,” wrote sister-in-law Helen, who picked up the leftover inventory for $772.54 but turned her family’s store, renamed Gage’s Bazaar, into a success by selling things people actually needed.

Around the time Baum closed the store in December 1889, Matilda Gage blew in from the East to visit. She’d stay with the Baums every winter for the rest of her life. By early spring, Gage decided that she’d be remaining for the rest of the year. She and her band of suffragists convinced legislators to hold a referendum in the new state of South Dakota on the right to vote for women.

Baum put his remaining cash into another distressed business, buying the weakest newspaper in a town that had several bigger ones. His first editorials for the Aberdeen Saturday Pioneer were high-minded. “The key to the success of our country is tolerance,” he wrote. “The ‘live and let live’ policy of the Americans has excited the admiration of the world.” Volunteering as secretary of Aberdeen’s Women’s Suffrage Society, Baum argued that a law giving women the vote was more likely out there, as the West was new and open-minded. Gage barnstormed the state as Baum published editorials: “We are engaged in an equal struggle,” Baum wrote.
In the West, he added, “a woman delights in being useful; a young lady’s highest ambition is to become a bread-winner.” This inclusive spirit was often found in Baum’s early editorials. But on November 4, 1890, everything he advocated was shot down in the election. Baum had written a clever poem endorsing the town of Huron as the new state’s capital. The voters chose Pierre. And at the end of the day, the male voters of South Dakota broke 2 to 1 against the right of women to vote. Aberdeen had also just faced a drought and massive crop failure that crushed its economy. Failing farmers and merchants were heading back East or on to other boomtowns out West, leaving Baum with worthless credit slips and unpaid expenses.

That drought was felt even more harshly 150 miles away at Standing Rock, the Sioux reservation, where the community of several thousand began to suffer from starvation. As part of a well-documented propaganda campaign fueled by the U.S. military, newswire reports warned that an uprising and massacre of the people of Aberdeen was coming. At first Baum tried keeping his readers cool and sensible about this “false and senseless scare.” On November 29, he wrote: “According to the popular rumor, the Indians were expected to drop in on us any day the last week. But as our scalps are still in healthy condition, it is needless for us to remark that we are yet alive.” Baum lambasted his rival newspapers for printing the worst of the propaganda and racism in order to sell newspapers. “Probably papers who have so injured the state by their flashy headlines of Indian uprisings did not think of the results of such action beyond the extra sale of a few copies of their sheets,” Baum continued in his November 29 editorial.

When U.S. officials ordered the arrest of Chief Sitting Bull in mid-December, newspapers all over the country reported that violence was likely to break out. Baum got caught up in the issue.
“A man in the East can read the papers and light a cigar and say there is no danger,” he wrote, “but put that man and his family on the east bank of the Missouri, opposite Sitting Bull’s camp … he will draw a different picture.”

After police invaded Sitting Bull’s camp on December 15 and shot him as he was trying to escape, wire reports went out far and wide, and Baum printed the headline on December 20: “Expect an Attack at Any Moment. Sitting Bull’s Death to Be Avenged by a Massacre of Whites in the Near Future.” In this same issue, Baum printed his first of two racist editorials. Dehumanizing the Native Americans as “whining curs” and “miserable wretches,” he called for “the total annihilation of the few remaining Indians.” On December 29, as many as 300 Sioux men, women and children camping by a creek called Wounded Knee were shot dead by U.S. soldiers. Baum’s editorials put him on the wrong side of history.

There’s no known written record of Gage’s response to this incident. She was living with the Baums at the time, so any exchanges she had with Frank would have happened face-to-face. But his editorials were certainly at odds with her own views. She had the utmost respect for Indigenous communities. After paying several visits to an Iroquois Confederation north of Syracuse, she had concluded that the sexes there were “nearly equal,” and “never was justice more perfect, never civilization higher.” In 1893, she would become an honorary member of the Wolf Clan of the Mohawk Nation and, at the ceremony, receive the name of Ka-ron-ien-ha-wi, roughly translated as She Who Holds the Sky.

By January 1891, Baum seemed to have lost almost everything, including his integrity. His newspaper business collapsed. At the age of 34, he had no job, no career, no prospects, only a damaged reputation. That spring, the Baum family moved to Chicago, where Frank got a job working for the city’s Evening Post.
In this new chapter of his life, he accompanied his mother-in-law to lectures, séances and other gatherings. In September 1892, he became a member of a group called the Theosophical Society. Founded in New York City, the society was led by the Russian mystic Madame Helena Blavatsky. Matilda, a freethinker like her father, had joined the society at a conference in Rochester, New York, in March 1885. The brew of Theosophical beliefs appealed to Gage—the ancient wisdom of Hinduism and Buddhism, combined with other mystical ideas like mediumship and telepathy—and all of it with an emphasis on universal human rights and living a life of nonviolence in both word and deed. A mission statement published by the group’s founders in 1882 defined Theosophy as “a Universal Brotherhood of Humanity, without distinction of race, creed or color.” Gage called Theosophy the “crown blessing” of her life.

Baum was especially interested in Theosophy’s description of the astral plane, a world of emotion and illusion where one’s “astral body” could go for a supernatural experience reached through mental powers. An 1895 book called The Astral Plane: Its Scenery, Inhabitants and Phenomena would become so popular in the family that three copies would circulate among the Baum and Gage households.

In 1893, Gage published Woman, Church and State, her most influential work. Gage called it “a book with a revolution in it.” The 450-page volume put forth the provocative view that church and state had been suppressing women for centuries. The book was banned by her nemesis, Anthony Comstock, a U.S. postal inspector and the secretary of the New York Society for the Suppression of Vice, who called it “salacious” and declared he would bring criminal proceedings against any person who placed it in a public school or sent it through the U.S. mail.
Gage cited passages from the King James Bible, such as “thou shalt not suffer a witch to live” (Exodus 22:18), showing the link between the preaching of religion and accusations of witchcraft. She traced this misogyny back to the Garden of Eden with the tale of Eve, the trickster serpent and the forbidden fruit. “A system of religion was adopted which taught the greater sinfulness of women,” she asserted, “and the persecution for witchcraft became chiefly directed against women.”

As this view gained traction within the church, Gage wrote, “a witch was held to be a woman who had deliberately sold herself to the evil one.” Anything could be used as evidence of witchcraft—possessing rare knowledge, having an unusual “witch mark,” suffering from mental illness, owning black cats, the use of herbs for healing, performing black magic, or having an ability to float or swim. But “those condemned as sorcerers and witches, as ‘heretics,’ were in reality the most advanced thinkers of the christian ages,” Gage wrote.

She was especially moved by an 1883 gathering in Salem, Massachusetts, for descendants of Rebecca Nurse, one of the best-known women put to death in the Salem witch trials of 1692. Nurse was a 70-year-old woman with eight children and, as Gage wrote, “a church member of unsullied reputation and devout habit; but all these considerations did not prevent her accusation … and she was hung by the neck till she was dead.”

Gage was promoting her new book when she visited the 1893 Columbian Exposition, a spectacular world’s fair in Chicago. Her son-in-law was covering the sprawling event as a reporter, and he wrote about the light pouring through the walls of windows of the white buildings, so bright that people purchased colored eyeshades from vendors. This detail would later reappear in The Wonderful Wizard of Oz, where all the residents of the Emerald City wear green eyeshades.
“Because if you did not wear spectacles the brightness and glory of the Emerald City would blind you,” Baum would write in the novel. Indeed, the Oz novel’s illustrations would be sketched by William Wallace Denslow, who was drawing splendorous images of the fair’s fanciful architecture for another Chicago paper.

One day, Gage visited the exposition’s Woman’s Building and encountered statues immortalizing her old colleagues Susan B. Anthony and Elizabeth Cady Stanton. Seeing the statues only increased her ire at her old colleagues for aligning with the Woman’s Christian Temperance Union. She also felt undervalued as a writer, and she harbored suspicions that Anthony had taken funds from a joint account to pay her own exorbitant travel expenses. “They have stabbed me in reputation, and Susan, at least, has stolen money from me,” she wrote in a July 1893 letter to her son, Thomas. “They are traitors, also, to woman’s highest needs.”

Gage herself was falling into poverty. The cost of publishing and promoting her new book had exceeded its earnings, and she was now in debt. “I like an active life and one with freedom from money troubles,” Gage wrote. “I like to be independent in every way. But fate or Karma is against me.”

In February of 1895, Gage came across a writing contest in the children’s magazine The Youth’s Companion offering a prize of $500 for the best original story. The rich sum (equivalent to more than $18,000 today) seized her attention, and she considered her daughter Helen and son-in-law Frank the family’s best writers. While there are no known records of her in-person conversations with Frank, she likely shared the same ideas she sent Helen in a letter. “Keep in mind it is not a child’s paper but a paper for youth and the older members of the family,” she wrote.
“The moral tone and literary character of these stories must be exceptional.” She encouraged them to write “not narration or passages from history, but stories,” which she defined as tales with “a dramatic arc from the beginning to the end.” Gage went so far as to suggest a topic: “If you could get up a series of adventures or a Dakota blizzard adventure where a heroic teacher saves children’s lives.” Or, she added, “bring in a cyclone,” perhaps recalling a true twister story of a house rising off its foundation that Helen had written in the Syracuse Weekly Express in 1887. Above all, Gage added, create “fiction which comes with a moral, without however any attempt to sermonize.”

There’s no evidence that Baum entered that particular contest, but around that time, he began a new routine. He’d moved on to a new day job as a traveling salesman of fine china for Pitkin & Brooks. Every evening, especially when he was away at hotels, he’d write down ideas in a journal. Soon, Baum began submitting his tales and poems to newspapers and magazines. At first, Baum kept track of his rejection letters in a journal called his “Record of Failure,” a title that could have described his whole business career, too.

But in early 1896, Baum started receiving acceptance letters for his short stories. In January 1896, the Chicago Times-Herald published a story of his titled “Who Called Perry?” In February, the same paper published his story “Yesterday at the Exposition,” which imagined a world’s fair in Chicago nearly 200 years in the future. His first national magazine story was called “The Extravagance of Dan” and published in the National magazine in May 1897. Once his submissions started getting accepted, the stories kept coming, typically turning real life into the poetic.
With both earnings and confidence rising, Baum expanded his vision to two successful books for children, Mother Goose in Prose (1897), a collection of short stories based on traditional nursery rhymes, and Father Goose: His Book (1899), a series of original nonsense poems. In October 1899, he got a story called “Aunt Hulda’s Good Time” into the magazine first suggested by his mother-in-law, the prestigious Youth’s Companion.

For the first time, the Baums were able to afford a stately home, a Victorian on Humboldt Boulevard wired with electric lights and featuring a covered front porch where Baum would tell stories to his sons and the neighborhood children. He maintained a close relationship with his mother-in-law. “Frank came in and kissed me goodbye, as he always does,” wrote Gage. “He is very kind to me.”

Gage was staying with the Baums in Chicago when she was confined to bed with pain in her lungs, throat and stomach. “We all must die, and I pray to go quickly when I leave,” she wrote in an 1897 letter. “I would a thousand times prefer Black Death to long-term paralysis. … The real suffering comes from lack of knowledge of real things—the spiritual.”

In Washington, D.C., thousands of activists were gathering for a convention to commemorate the 50th anniversary of the Seneca Falls conference. Unable to attend, Gage penned a final speech that was read aloud at the convention by a friend. Gage proclaimed what she called “the femininity of the divine” and shared her belief that one day “the feminine will soon be fully restored to its rightful place in creation.”

Gage also wrote messages to her loved ones and colleagues as part of her last will and testament. “I am one of those that are set for the redeeming of the Earth,” Gage wrote to Baum. “I am to live on the plane that shall be above all things that dishearten.
… When I receive instructions from those who are in the Invisible, I will receive them willingly, with a desire to put them into practice to the extent of my spirit light and potency.”  Matilda Electa Joslyn Gage died on March 18, 1898. Her four-paragraph obituary in the New York Times reported her death was caused by “apoplexy,” an old medical term for a stroke but also meaning a state of extreme rage. Following a small ceremony for her mother in Chicago, Maud left her husband behind with their four sons and transported the urn of her mother’s remains east, to be interred by the old house in Fayetteville alongside her father’s grave.  This is when the magic happened. The story “moved right in and took possession,” Baum later said. The inspiration came at the twilight of a winter’s day when he saw his sons and their friends returning home from playing in the snow. “It came to me right out of the blue,” he said. “I shooed the children away.” Word paintings came out through his pencil onto scraps of paper: A gray prairie. A terrifying twister. A mystical land ruled by both good and wicked witches. A trio of comical characters who join a girl on her quest, a journey to a magical city of emeralds controlled by a mysterious wizard. “The story really seemed to write itself,” he told his publisher. Yet at first, Baum hadn’t settled on a name for his main character. In June 1898, Maud’s brother and wife welcomed a girl they named Dorothy. Maud enjoyed visiting them in Bloomington, Illinois, that summer, but the baby became ill and started running fevers. On November 11, only 5 months old, Dorothy Louise Gage died. She was “a perfectly beautiful baby,” Maud mourned. “I could have taken her for my very own and loved her devotedly.” In Baum’s writings, the girl from Kansas took on the name Dorothy, with a last name, Gale, that was perhaps a double reference to the gale-force cyclone and the family name of Gage.  His mother-in-law also lived on through the story. 
She’d believed strongly in mental manifestation, insisting that people could accomplish anything through the power of their minds. When Helen’s daughter Leslie fell ill in 1895, Gage had prescribed positive thought energy: “Take five minutes three or four times a day to think of health and when you go to bed at night keep saying to yourself ‘I am well.’ Grandma knows by experience that a great deal of good comes from concentration of thought.” As a woman who spent her whole life urging women to have confidence in themselves, Gage would have been pleased to see her views taking on a central role in the story of Oz. Dorothy’s silver shoes (in the movie ruby slippers) are not magical in themselves. It’s only after a lesson from Glinda on the power of thought that their magic can work. As the good witch Glinda tells Dorothy: “All you have to do is to knock the heels together three times and command the shoes to carry you wherever you wish to go.”

GOATReads: Politics

Forever Wars, Forever Forgotten

Car dealers are notorious for upselling you on things you probably don’t need, like leather seats and rust protection. But what about bulletproof glass? A smoke screen to blind the driver tailing you? Electrified door handles to deter carjackers? A bomb-proof underbody (in case you drive over an IED)? Heck, they’ll throw in gas masks and bulletproof vests for free, if you opt for the vaunted “military package.” These upgrades suit the world of Mario Kart, or, better yet, of Mad Max. But they’re made for ours. Anyone buying a Rezvani Vengeance, a luxury SUV that first came to market in the United States in 2022, can choose to equip their car with these features. “Vengeance is yours,” Rezvani ominously tells potential customers. But why would anyone need such a hulking, militarized vehicle for American streets? The answer has a lot to do with the war on terror. Al-Qaeda’s attacks on the country punctured Americans’ sense of safety. If terrorists could hijack commercial airplanes and fly them into buildings, then everyone was vulnerable. Americans scrambled to protect themselves in their everyday life. Driving the biggest vehicle on the road provided some comfort. There is a parallel too. The war on terror was an attempt to secure the United States. But this pursuit of security for the country came at the expense of the security of others. Around 408,000 civilians in Iraq, Afghanistan, Pakistan, Syria, Yemen, and Somalia lost their lives directly as a result of the war on terror’s violence; more than 4.5 million have died indirectly. A further 38 million people in these war zones (along with the Philippines and Libya) have been displaced, either abroad or internally (Brown University’s Costs of War Project describes this as a “conservative estimate”). The war has made Americans less safe too. 
The Islamic State’s conquest of large swathes of Iraq and Syria in 2014, as well as the terrorism carried out in its name around the world, were outgrowths of the war on terror, especially the overthrow of Saddam Hussein’s Ba’athist government. Similarly, American consumers’ decision to purchase SUVs en masse might have provided a sense of safety for their occupants. But SUVs—their size, blind spots, weight—have undermined the safety of everyone on the road, from pedestrians and cyclists to occupants of other vehicles. SUVs and pickup trucks account for most car sales in the United States today. Meanwhile, pedestrian deaths in recent years have reached record highs. It makes one wonder on whom Rezvani drivers are supposed to be taking revenge. Of course, it’s not just SUVs. Launching a war of global dimensions shaped the United States from the inside out. The war on terror brought about the rise of militarized police squads, Marvel movies, the Immigration and Customs Enforcement agency, unfettered Islamophobia, and, yes, tactical baby gear. These are just some of the consequences chronicled by Richard Beck in his profoundly illuminating book, Homeland: The War on Terror in American Life. It’s more than a book version of a “crazy ass moments in American history” social media account (though crazy-ass moments abound in Beck’s telling). It is a meditation on how exactly the United States lost its collective mind after September 11, 2001, and what this loss has meant for the world and especially for the United States. The war, Beck argues, has rotted American culture and politics. Yet, despite the extensive impact of the war, it quickly slipped out of focus. It became background noise. And, today, it seems strangely forgotten. In 2018, for example, 42% of Americans weren’t even aware that their country was still at war in Afghanistan, a place that the US military wouldn’t vacate for another three years.
Why has it been so hard to see the wreckage of a more-than-two-decades-long conflict? Beck’s book helps explain. Keeping Americans far from the country’s foreign policy has long been a goal of American policymakers. The national security state, built after the Second World War, is enveloped in secrecy. Similarly, politicians have sought to insulate Americans—or, at least, constituencies that mattered—from the consequences of foreign policy decisions. One such example is the 1973 elimination of the military draft: parents in the upper and middle classes no longer had to worry about the conscription of their children into war. But that gap grew during the war on terror, almost immediately after it began. Within weeks, George Bush implored citizens to do their patriotic duty—as workers and consumers. “We must stand against terror by going back to work,” Bush urged. “Fly and enjoy America’s great destination spots” and “get down to Disney World in Florida.” Given these instructions, Beck asks, is it a surprise that so many Americans decided to “tune out the whole situation and hope for the best”? As the war on terror expanded abroad, paradoxically, it faded further into the American background. It was Obama’s approach to the war on terror that sustained this paradox, as Samuel Moyn writes in his 2021 book, Humane: How the United States Abandoned Peace and Reinvented War. Months into office in 2009, the Obama administration launched the concept of a “global battlefield,” which removed any constraints on where the United States could project force. But, at the same time, it cleaned up some of the less seemly elements of the war—forbidding torture, exiting Iraq—and also turned to remote-controlled drones, piloted by Americans in air-conditioned trailers in New Mexico, to do more of the fighting.
The pilots were so distant, in fact, that the early drones were afflicted with latency, as the video signal had to travel from the skies of Africa, the Middle East, or Central Asia to a satellite and then back to the United States. The end result: fewer coffins sent back home, less bad press, and waning opposition to the war on terror. (Though the number of non-Americans killed by drones surged under Obama’s reign.) It’s this distance that distinguishes the United States’s experience of the war from that of the states it targeted. Distance—that is, distance from violence—provided Americans with the luxury of tuning the war out. Needless to say, this was a luxury not afforded to Afghans, Iraqis, Pakistanis, and many others caught in the cross-hairs of the American military machine. “No country has been changed more dramatically by the fallout of the 9/11 attacks than Afghanistan,” argues Sune Engel Rasmussen in Twenty Years: Hope, War, and the Betrayal of an Afghan Generation. The Iraq of the late 2010s, Ghaith Abdul-Ahad claims in A Stranger in Your Own City: Travels in the Middle East’s Long War, was “born out of an illegal occupation, two decades of civil wars, savage militancy, car bombs, beheadings and torture.” Finally, according to Hugh Gusterson’s anthropological work on the drone war in the poor, tribal region of Waziristan, the drones lurking above Pakistan have delivered death to many and never-ending fear of death to many more. The drones’ pilots, meanwhile, return to their suburban subdivisions after work. The war on terror may have been fought over there, but it defined American life over here. Of particular note for Beck are the war’s effects on American democracy. The war, it could be argued, began as an assertion of popular will. The public overwhelmingly supported a war against both al-Qaeda and its Afghan host, the Taliban. Regime change had broad appeal.
But by mid-2003, when the White House turned its attention to Iraq, support for the broader war on terror began to dwindle. In the years that followed, it would plummet. Yet, against the growing objections of Americans, the war continued. What Americans wanted and what their country did increasingly went in different directions. How is the public theoretically able to shape the doings of the state? One way is through the ballot box. But the war on terror did not come to a close in 2009, when a Democratic president with a historically broad mandate took back the presidency. The press—what Alexis de Tocqueville described, after his tour of the United States in the 1830s, as “the democratic instrument of liberty”—provides another avenue. It is here that public opinion is supposed to be aired out and turned into political force. But that’s not what happened during the war on terror. Instead, as mainstream news outlets almost invariably fed the public the White House’s perspective that Saddam Hussein possessed WMDs, the press became an instrument of the state. The few journalists and pundits who questioned this elite consensus faced professional consequences. When Phil Donahue, a veteran media personality, dissented, MSNBC canceled his show in February 2003. Which program filled that cherished 8:00 p.m. slot? Countdown: Iraq. Another way is protest. But the public needs space to do so. And since 9/11, public space—places in which people have the freedom to do what they want, anything from hanging out to exercising their democratic right to protest—has been sacrificed for security. Guards and surveillance cameras increasingly honeycomb them, while police officers, outfitted with military surplus from the Pentagon, have come to resemble “occupying armies.” Access to those spaces has been downgraded from a right to a mere privilege.
Just ask the participants of Occupy Wall Street or Black Lives Matter whom the state expelled from parks and streets across the country over the last decade and a half. The public has thus been unable to impose consequences on the people who waged and campaigned for the war on terror. Consider the journalists and bloggers who, after revving up the war machine, collected nothing but garlands. Or the restoration of George Bush’s image in the 2010s (Ellen DeGeneres’s YouTube channel includes a 2019 video titled “This Photo of Ellen & George W. Bush Will Give You Faith in America Again”: 1.2 million views). Or the countless atrocities committed by American soldiers that have been met with slaps on the wrist, if that. Or the National Security Agency’s warrantless surveillance of Americans, ruled by courts in 2020 to have been both illegal and useless, yet whose directors have gone unpunished. Or the rise of national politicians who had almost uniformly favored the war in Iraq: every single presidential and vice-presidential nominee since 2016 supported the invasion—with the exception of Kamala Harris. And yet her campaign last year, maddeningly, still paraded an endorsement from Dick Cheney—the war on terror’s most direct architect—a move that certainly didn’t help her chances of winning the fateful election. Despite handwringing over cancel culture, elites over the past two and a half decades have luxuriated in what Beck calls “impunity culture.” Perhaps it shouldn’t be a surprise, then, that every major political movement since 2003, namely Occupy Wall Street and Black Lives Matter, has targeted impunity itself, whether it was that of the bankers who crashed the economy in 2008 or of the police officers who killed unarmed Black Americans. The lack of accountability in the American political system was distilled by Obama in a memorable interview he gave in 2009: “I don’t believe anybody is above the law.
On the other hand, I also have a belief that we need to look forward.” But looking forward is also looking away. No one can doubt that the war on terror was transformative for the United States. But we should be careful not to treat it as a complete rupture from the past. This kind of thinking could imply that the United States was in decent shape on September 10, 2001. Just as the election of Trump signalled the country’s pre-existing troubles—a point that Beck makes emphatically—doesn’t the United States’s aggressive, counterproductive, and often barbaric response to the 9/11 attacks indicate that the country was already in crisis? Beck isn’t oblivious to the war on terror’s pre-history—he devotes pages to everything from settler colonialism to the decline of economic growth since the 1970s. But Homeland regrettably plays down other, more obvious continuities from the past. The foreign policy establishment that led the country into a global crusade against terrorism drew on a repertoire of tactics tried and tested in earlier decades, from regime change to mass surveillance. So too did Americans in their pursuit of security in their everyday life. Indeed, as historian Elaine Tyler May argues in Fortress America: How We Embraced Fear and Abandoned Democracy, it was in the second half of the 20th century—not the beginning of the 21st—that a “new consensus” formed in the United States, one organized around a novel fear-laden definition of “security” that “both major parties adopted and most Americans across the political spectrum accepted.” It led to the creation of the national security state, as well as Americans’ retreat into the home to find safety amid threats of nuclear Armageddon and communist subversion. “During the first decade of the war on terror,” Beck argues, “the United States built up internal fortifications the likes of which the country had never seen.” Again, there is more of a throughline here than Beck lets on. 
In the early Cold War, Americans became gripped by what Elaine Tyler May calls a “bunker mentality,” in some cases transforming their homes into actual bunkers (“Now,” the Portland Cement Association advertised in the 1950s, “you can protect precious lives with an all-concrete blast-resistance house”). The panic around urban unrest and rising crime rates in the 1960s and ’70s intensified these fortifications. Home security systems flourished; gated communities became the fastest-growing form of housing in the 1990s. Meanwhile, the militarization of cities themselves, chillingly catalogued by Mike Davis in City of Quartz in 1990, was decades in the making. The war on terror certainly cranked up the volume of fear and conjured new bogeymen, substituting terrorist for communist. But it only seemed natural to do so after more than half a century of fear and security organizing American policy at home and abroad. Even the rise of the SUV—one of the continuities that Beck does trace adeptly—was a part of this broader story: Consumer data from the year 2000 suggests that the car’s popularity was, in part, motivated by a fear of crime. If the fears of the war on terror began decades before 9/11, then, it’s also worth asking, when did the war on terror end? Could it, perhaps, be continuing? Americans have been treated to regular messaging that it was about to wrap up. There was George Bush’s infamous speech aboard the USS Abraham Lincoln, in front of a banner blaring “Mission Accomplished,” in 2003; there were also Barack Obama’s reforms in 2009. (“With the stroke of his pen,” the Washington Post announced, Obama “effectively declared an end to the ‘war on terror,’ as President George W. Bush had defined it.”) More recently, the United States’s withdrawal from Afghanistan in 2021 seemed to signal the end. “I was not going to extend this forever war,” Joe Biden told the country.
As the last American troops left in 2021, crowds of Afghans—clinging to what meager personal belongings they could carry—desperately tried to escape the Taliban takeover. The chaotic scenes from the Kabul airport harkened back to the fall of Saigon in 1975. But just as concluding one local conflict did not spell the end of the Cold War, neither did concluding another bring the war on terror to a close. The two conflicts do make for an intriguing comparison in Homeland. At first glance, it’s their resemblance that one notices. In both Vietnam and Afghanistan, after more than a decade of trying and failing to replace a hostile government with one more pliable, the United States left. But, taking a step back, the differences are even more illuminating. In response to the blood and treasure spilled in Vietnam—conscription was in effect until the final years of the war—an anti-war movement filled the streets. The media adopted a more critical mode. And Congress clawed back its powers over war from the presidency. American militarism itself fell into disrepute. In contrast, the American occupation of Afghanistan—along with the broader war on terror—has generated comparatively paltry opposition, especially after the initial wave of protest in the early years. But it’s difficult to protest what one doesn’t know. It seems that the country had moved on—or, to use Obama’s words, was looking forward—before the fighting ended. But the war on terror does continue. Many of the tools that the Bush administration introduced are still on the books, finding new uses, more than two decades later. The 2001 Authorization for Use of Military Force (AUMF)—the broadly written and even more broadly interpreted piece of legislation that empowered George Bush to pursue the 9/11 attackers—has since been invoked in military operations in at least 22 countries, most recently by Joe Biden in a bombing campaign against Iran-aligned militia in Iraq. It’s not just the 2001 AUMF.
In that same operation last year, Biden also cited the Authorization for Use of Military Force Against Iraq, the 2002 resolution that enabled the Bush administration to overthrow Saddam Hussein. Let that sink in: More than a decade after “ending” the Iraq War, the US president still retains the right to bomb the country, without congressional debate, whenever he wants. What is that, if not a forever war? The war on terror also continues to haunt American society. National security reigns supreme. Democrats and Republicans have stretched the category “terrorist” to include more and more people. Last year, Manhattan’s District Attorney charged Luigi Mangione, who stands accused of killing a health insurance CEO, with an act of terrorism. Trump and his allies, once again in power, apply the category with abandon. Dealing drugs? Terrorism. Opposing deportation efforts? Terrorism. Vandalizing Tesla cars and infrastructure? Terrorism. Protesting genocide? Terrorism (or, rather, “activities aligned to Hamas, a designated terrorist organization,” an entirely novel and seemingly infinite charge). And regime change, this time in Iran, has re-entered the political mainstream. The war on terror, as Beck illustrates, has been a tragically bipartisan project, supported by Republicans and Democrats alike. Grumbling about the war can be heard across the aisle. But actually bringing it to an end and all that would entail—from restoring civil rights at home to resetting the United States’s relations abroad? To do so would require reckoning with the past. That’s a project in search of a political coalition.

Utopia brasileira

Within less than a decade, Brazil will have as many evangelicals as Catholics, a transcendence born of the prosperity gospel Utopia is on the horizon … I move two steps closer; it moves two steps further away. I walk another 10 steps and the horizon runs 10 steps further away. As much as I may walk, I’ll never reach it. So what’s the point of utopia? The point is this: to keep walking. – from Las palabras andantes (1993), or Walking Words, by Eduardo Galeano In 1856, Thomas Ewbank published Life in Brazil, an account of the Englishman’s six months spent in the country a decade earlier. In it, he argued that Catholicism as practised in Brazil and across Latin America constrained material progress. In this, the visitor would be joined by a long line of critics, from the writer and later modernising president of Argentina, Domingo Faustino Sarmiento – who denounced the negative influence of Spanish and Indigenous cultures in Latin America, including the role of the Catholic Church – to the conservative Harvard academic Samuel Huntington. Ewbank contended, moreover, that the ‘Nordic sects will never flourish on the Tropics,’ a line that Brazil’s greatest historian, Sérgio Buarque de Holanda, immortalised in his work Raízes do Brasil (1936), or Roots of Brazil. Protestants would supposedly degenerate here, with the severity, austerity and rigour of that doctrine being incompatible with the archetypal Brazilian: the ‘cordial man’. This figure, according to Holanda, represented interpersonal warmth and openness, in contrast to closed and rule-bound northern Europeans. At present, Protestants account for one-third of the population, while the number of Catholics has just dipped below 50 per cent. By far the largest proportion of Brazilian Protestants are evangelicals, specifically Pentecostals, neo-Pentecostals and related branches. 
By the centenary of Raízes do Brasil in 2036, Protestants will outnumber Catholics in Brazil for the first time in the country’s 500-plus-year history. In 2018, the far-Right former army captain Jair Bolsonaro shocked the country by winning the presidency, bolstered by an evangelical vote that would remain faithful to him and his socially conservative, politically reactionary and cosmologically apocalyptic politics. The rise of this bloc presents a challenge to perhaps the most clichéd description of Brazil. In 1941, the Austrian Stefan Zweig, seeking refuge from Nazism in Brazil, called this land the ‘country of the future’. Zweig highlighted not just Brazil’s natural endowments but the society’s tolerance, openness, harmony, optimism and fusionist culture. For Zweig, as for many Europeans and Americans before him, Brazil became a utopian gleam in the eye. For centuries, certain common threads had sewn these utopian visions together: Brazil was a picture of idleness, imagination, diversity and conviviality – a means of living together that relied on adaptability. Yet the Bolsonarismo phenomenon, according to critics, is intolerant, punitive, supremacist, an embodiment of a type of Christian cosmovision at odds with any notion of society. Did the presidency of Bolsonaro, under the slogan ‘Brazil above everything, God above everyone’, signal an end to this romance? No one holds Brazil up as an existing paradise. Few even sustain any expectation that it will deliver on what was promised for it. And, indeed, utopian thinking probably died as far back as the 1964 military coup. But many have continued to uphold the country’s cultural traits as admirable and enviable – even models for the world. ‘Brazilianization’, a trope taken up by various intellectuals in recent decades, signals a universal tendency towards social inequality, urban segregation, informalisation of labour, and political corruption.
Others, though, have sought to rescue a positive aspect: the country’s informality and ductility, particularly in relation to work, as well as its hybridisation, creolisation and openness to the world, made it already adapted to the new, global, postmodern capitalism that followed the Cold War. By the 2000s, Brazil was witnessing peaceful, democratic alternation in government between centre-Left and centre-Right for practically the first time in its history. Under President Lula, it saw booming growth, combined with new measures of social inclusion. But underneath the surface of the globalisation wave that Brazil was surfing, violent crime was on the up, manufacturing was down, and inclusion was being bought on credit. In 2013, it came to a shuddering halt. Rising popular expectations generated a crisis of representation – announced by the biggest mass street mobilisations in the country’s history. This was succeeded by economic crisis and then by institutional crisis, culminating in the parliamentary coup against Lula’s successor, Dilma Rousseff. Now all the energy seemed to be with a new Right-wing movement that dominated the streets. It was topped off by the election of Bolsonaro in 2018. Suddenly, eyes turned to the growing prominence of conservative Pentecostal and neo-Pentecostal outlooks in national life. Bolsonaro failed to be re-elected in 2022. Upon his defeat, Folha de S Paulo, Brazil’s paper of record, reported that, ‘Bolsonarista pastors talk of apocalypse.’ At the evangelical Community of Nations church in Brasília, frequented by Michelle Bolsonaro, wife of Jair, the pastor’s wife is reported to have proclaimed: ‘Brazil has an owner. That hasn’t changed, it won’t change. God continues to be the one who made Brazil shine and be the light of the world.
His plan has changed neither with regard to us nor the country.’ It was a rare expression, for our times, of a sense of historical mission or destiny. The age of no alternative was being left behind. ‘There is indeed an alternative, even if it is an apocalyptic one,’ the Brazilian philosopher Paulo Arantes sardonically remarked. In the final 2022 pre-election poll, evangelicals split 69-31 in Bolsonaro’s favour. Although he is Catholic, he was baptised in the River Jordan in 2016 by Pastor Everaldo, an important member of the Pentecostal Assembleia de Deus (the Assemblies of God – the largest Pentecostal church in the world, and the largest evangelical church in Brazil). The creationist and anti-gay Pentecostal Marcelo Crivella shocked many when he defeated a human rights activist to become mayor of Rio de Janeiro in 2016. Crivella’s uncle is Edir Macedo, the founder of the neo-Pentecostal Universal Church of the Kingdom of God (Igreja Universal do Reino de Deus, or IURD), the largest of its denomination, reputedly with 4.8 million faithful in Brazil. Preaching the ‘prosperity gospel’, according to which commitment to the church will be rewarded with wealth, has seen Macedo become a dollar billionaire (of which there are around 60 in the country). The IURD is known for practising exorcisms and divine cures, and for purging demonic spirits, which it associates with Afro-Brazilian religions like Candomblé and Umbanda. But it is the IURD’s political role and media presence that really make it stand out. The Republicanos party, founded in 2005, is a creature of the IURD. Its president, the lawyer Marcos Pereira, was a bishop who held a position in the Michel Temer administration that took office after the ousting of Rousseff. The party’s 44 deputies in the lower house of Congress are part of the powerful cross-party evangelical bench in Congress, composed of 215 deputies out of a total of 513.
Macedo also owns Record, the second-biggest channel in Brazil, which gave Bolsonaro plenty of free airtime. The alliance between evangelicals and Bolsonaro only strengthened through his term. During the COVID-19 pandemic, Bolsonaro’s denial of the severity of the virus was, in part, a demonstration of evangelical coronafé, or corona-faith: ‘that confidence, that certainty that God is with you and that he will never, ever, at any time fail those who have believed in him,’ in Macedo’s words. Later in his term, Bolsonaro nominated the ‘terribly evangelical’ Presbyterian pastor André Mendonça to an empty Supreme Court seat. Upon Congressional approval, the president’s wife Michelle, a crucial link to the evangelical public, was filmed crying, praying and speaking in tongues. After Bolsonaro left office, his supporters stormed government buildings in Brasília on 8 January 2023, in a replay of the storming of the United States Capitol on 6 January 2021. The action was widely unpopular. But 31 per cent of evangelicals supported it, against a national average of 18 per cent. While 40 per cent of the population believed Lula had not won the election fairly, among evangelicals this belief was as high as 68 per cent, with 64 per cent in favour of a coup to overturn the result. The media was full of reports of pro-Bolsonaro protestors praying for miracles, speaking in tongues and behaving like the world was ending. The theologian Yago Martins, whose videos on religious thought have won him more than 1 million followers across his social channels, refers to Bolsonarismo as an apocalípse de palha, or ‘straw apocalypse’. Bolsonarismo’s combination of a conspiratorial mindset, a longing for an imminent national conflagration, a holy war against evil, and its messianic discourse are a sort of parody of Christian eschatology.
For Martins, author of A religião do bolsonarismo (2021), or Bolsonarismo as Religion, the movement is a ‘fallacious immanentisation of the eschaton’, a paraphrase of the philosopher Eric Voegelin’s phrase from 1952. Martins, a Baptist pastor, identifies as a Right-wing evangelical, but is a critic of Bolsonarismo (though he admits to voting for him in 2018). His criticisms of Bolsonarismo’s idolatry nevertheless testify to something new on the scene: the insertion of a transcendental viewpoint into politics, something that had supposedly been expelled with the historic defeat of socialism and nationalism. Indeed, when I spoke to Gedeon Freire de Alencar, a sociologist of religion and author of a book on the contribution of evangelicals to Brazilian culture, as well as a presbyter of the Bethesda Assembly of God in São Paulo, he emphasised the role of dominion theology, according to which believers should seek to institute a nation governed by Christians. The ‘Seven Mountain Mandate’, popularised in 2013 by two American authors, holds that there are seven areas of life that evangelicals should target: religion, family, government, education, media, arts/entertainment, and business. For many progressives, this came across as a sort of ‘medieval radicalism’, the charge thrown at Crivella by Jean Wyllys, the first gay-rights activist to win a seat in Congress. The philosopher and columnist Vladimir Safatle denounced the ‘project to take Brazil back to the Middle Ages’: yes, Brazil had had its share of authoritarian and conservative figures in the past, but this was new, ‘because the old Right… never needed spokespeople.’ As testament to the growing presence of evangelicals but also their political ambivalence, consider the March for Jesus. The yearly demonstration is known as ‘the world’s largest Christian event’, drawing between 1 and 3 million crentes, or believers, each year.
Though Bolsonaro was the first president to attend the march, in 2019, it was Lula who signed the law establishing the National Day for the March for Jesus, scheduled for 60 days after Easter. Similarly, back in 1997 it was estimated that one-third of militants in the agrarian land reform movement, MST, were Pentecostals, which would have been double the rate of the local population at the time. Twenty years later, Guilherme Boulos, coordinator of the MTST, the unhoused workers’ movement, claimed that by far the largest part of the movement’s base was made up of Pentecostals.

So why the association of evangelicals with the darkest reaction? In large part, it’s class prejudice, argues the anthropologist Juliano Spyer, whose book Povo de Deus (2020), or People of God, sparked widespread debate in the country and was a finalist for Brazil’s most prestigious nonfiction prize in 2021. For opinion-formers, the evangelical is either a poor fanatic or a rich manipulator, but the reality is that the religion is socially embedded in Brazil, particularly among the poor and Black population.

For instance, well-to-do social progressivism tars evangelical religion as patriarchal. Perhaps so, in contrast with contemporary upper-middle-class mores, but in the often machista and violent lifeworld of the Brazilian working class, when a man is born-again, he stops drinking, becomes less likely to beat his wife, and is more inclined to contribute to the household. Similarly, while evangelicals are held to be anti-science and anti-enlightenment, in a culture in which even the elite has never been particularly bookish, conversion is associated with a renewed emphasis on study. This partly explains why Pentecostalism (and evangelical Christianity more broadly) is the faith of the world’s urban poor. 
And ‘Brazil is ground zero for what is happening within the wider Pentecostal movement, the median global experience,’ explains Elle Hardy, author of Beyond Belief (2021), a book on the phenomenon’s spread worldwide. The evangelical movement must be understood in relation to a reality in which political corruption abounds, and violence and the threat of violence are omnipresent in the working-class urban context. Brazil now sees more than 50,000 murders a year, and the violence associated with criminal markets, especially drugs, is only the sharp end of a fully marketised society. The Brazilian urban landscape sees a war of all against all play out every day. Middle-class Brazilian progressives were happy to ignore the civil war raging in the urban peripheries until the violence found a spokesperson in Bolsonaro.

Broadly, the term evangélicos refers to missionary Protestants who are not members of the historic Protestant churches in Brazil – the Presbyterians, Lutherans, Anglicans, Methodists, Adventists and Baptists who first arrived from Europe in the 19th century. Confusingly, many historic Protestant churches carry the name ‘evangelical’ in their titles, and some have now come to adopt modes of worship evocative of charismatic or revivalist churches. But a distinction remains: historic Protestants in Brazil normally call themselves protestante or cristão, not evangélico or crente – and they tend to be middle class. Pentecostalism arrived in Brazil in the early 20th century, taking root among the poor. Its emblematic church is the Assembleia de Deus, established by two Swedish Baptist missionaries who arrived in the Amazonian port city of Belém in 1910. The third wave, beginning in the 1950s, is marked by the arrival of the Foursquare Church (Igreja Quadrangular), and coincides with rapid industrialisation and urbanisation, with worshippers recruited over the airwaves. 
But even by 1970, evangelicals still accounted for only 5.2 per cent of the population, while Catholics were at 91.8 per cent. The establishment of the IURD in 1977 marks the arrival of neo-Pentecostalism and the start of the fourth wave. Proselytising is carried out via TV and, doctrinally, a more managerial ethos is introduced. To the Pentecostals’ direct, personal and emotional experience of God is added the idea that conversion leads to financial advancement – the prosperity gospel. Macedo’s church also exemplified the movement’s growing political confidence. By the 1980s, the slogan crente não se mete na política (believers don’t get mixed up in politics) was being replaced by irmão vota em irmão (brothers vote for brothers).

Throughout, the share of Catholics in the population was falling, with an almost commensurate rise in evangelicals – by about 1 per cent per decade. But, as of 1990, this accelerated to a 1 per cent change per year. Catholics were still 83 per cent of the population in 1991 and 74 per cent in 2000, when Catholicism hit its peak in absolute terms, with 124.9 million Brazilians – making Brazil the largest Catholic country in the world, a title it still holds. But by 2010, the share of Catholics had fallen to 64.6 per cent, with evangelicals rising to 22.2 per cent. Today, evangelicals represent a third of the population, and Catholics just under half. Modellers have identified 2032 as the year of religious crossover, when each Christian camp will account for an equal share of the population: 39 per cent.

What explains this explosion? The anthropologist Gilberto Velho points to inward migration, the primary 20th-century phenomenon in Brazil. Tens of millions of poor, illiterate, rural and profoundly Catholic people from the arid northeast of Brazil migrated to big cities, especially in the industrial southeast. 
Spyer tells me they ‘lived through the shock of leaving the countryside for the electricity of the city – but also the shock of moving to the most vulnerable parts of the city.’ The gap left by the loss of networks of support, particularly of extended family, was filled by the establishment of evangelical churches. This is why the geographer Mike Davis called Pentecostalism ‘the single most important cultural response to explosive and traumatic urbanisation’.

Sixty years ago, Brazil’s population was evenly split between town and country. Now it is 88 per cent urban, comparable with infinitely richer Sweden or Denmark, and higher than the US, the UK or Germany. The urbanisation rate is also much higher than that of Brazil’s fellow BRICs, China (66 per cent) or South Africa (68 per cent). Over the past decades, Brazil has also suffered ‘premature deindustrialisation’ – the loss of manufacturing jobs on the scale of the UK, for instance, but at a much lower level of income and development. Here is the recipe for what Davis called a ‘planet of slums’: urbanisation without industrialisation.

And it is in the peripheries of megalopolises like São Paulo (greater metropolitan population: 22 million) and Rio (14 million), or other large cities where informal or precarious housing and employment dominate, that nimble startup churches sprout. Unlike the slow-moving Catholic Church, which demands more established settings and that its priests undergo four years of theological study, any evangelical entrepreneur with a Bible under his arm and access to an enclosed space, no matter how rudimentary, can set up shop. To ambitious working-class men, this offers a route to a leadership position in the community, a path to self-improvement. It was in what the sociologist Luiz Werneck Vianna called this ‘Sahara of civic life’ that Pentecostals and neo-Pentecostals built spaces of acolhimento, a word denoting both warm reception and refuge. 
They took root in the places abandoned by the Brazilian Left, of which the Catholic, liberation theology-inspired Comunidades Eclesiasticas de Base were a major part.

Of course, not all evangelicals in Brazil are poor or working class. The movement has seen significant expansion into the middle class, even if the elite proper remains mostly Catholic. And there are doctrinal differences that map onto these class differences, even if incompletely. The model Pentecostal will be a poor assembleiano, a member of the Assemblies of God, whose small, basic and mostly ugly structures populate the landscape, from gritty industrial suburbs to lost hamlets of a dozen inhabitants deep in the interior. In these houses of worship, eschatological themes are omnipresent and the songs are about Jesus’s second coming. On the way to or back from church, worshippers – in their Sunday best – pass each other’s houses and check in on each other, reinforcing communal ties.

At the other end of the spectrum is something like the Bola de Neve Church, founded in a surf shop by a surfing pastor in 1999. Its 560 churches across a number of countries purvey something altogether ‘lighter’. Its middle-class members arrive by car, wear casual clothing, and are treated to sermons accompanied by pop-rock and reggae. Eschatological themes are largely absent. As Alencar put it to me: ‘If Jesus returned now, he’d ruin their gig.’ Accompanying the Church’s suave and sophisticated marketing is the preaching of the theology of prosperity. Turning up in an expensive imported car signals to co-religionists that the prosperity gospel is working. Importantly, in Brazil, ‘everything is syncretised and miscegenated,’ explains Alencar, so although in doctrinal terms the gulf between Pentecostal and neo-Pentecostal is ‘abyssal’, in practice it is hard to draw clear lines. 
Moreover, Baptist, Adventist and even Catholic churches are undergoing pentecostalização, adopting charismatic or revivalist features. The prosperity gospel component cuts across many of these complicated lines, a result of the emphasis on competition, individualism and economic ascent typical of neoliberal societies. But ultimately, for all the variety, the growth of evangelical Christianity in a society as unequal as Brazil is a phenomenon of the poor and working class. Conversion and dedication promise – and, in some cases, deliver – a better life: not just money, but also better relationships, family life and especially health. Belief functions as a para-medicine, be it directly through faith-healing, through the belief, determination and support to beat addiction, or simply through the provision of psychological support. In the words of Davis, it is a ‘spiritual health-delivery system’. This is why evangelicals tend to be urban, young, Black or Brown women, from the least schooled strata, with the lowest salaries. It is, as Davis put it, ‘the largest self-organised movement of poor urban people in the world.’

Utopian visions have attached themselves to Brazil and informed its self-conception from its European discovery through to the 20th century. Perhaps it was a coincidence, but in Thomas More’s Utopia (1516) news of a distant paradise was brought by a Portuguese sailor. Brazil was Utopia realised. As Patricia Vieira puts it in States of Grace: Utopia in Brazilian Culture (2018), it presented a ‘fantasy of easy enrichment, grounded on the perception of the region as a treasure trove of natural wealth.’ For one 17th-century Jesuit priest, the land demarcated on the east side of the Treaty of Tordesillas would be the ‘Fifth Empire’, a new kingdom of perpetual peace, where people would live in mystical communion with God, and all would have equal rights. Gradually, messianic and theologically informed visions would give way to secularised ones. 
Curiously, Brazil is the only country whose demonym ends in the -eiro suffix in Portuguese. So you have the Francês, the Argentino, the Americano, the Israelense… but the Brasileiro. It suggests an occupation, like marceneiro (carpenter), pedreiro (bricklayer), mineiro (miner). To be Brazilian was not a state of being, but an activity, a doing. It was the Portuguese and other Europeans who went off and ‘did’ Brazil – exploited its land.

So the Brasileiro is one committed to the project of Brazil; they are not a mere natural feature of the land. But this also speaks to a rapacious pattern of Brazilian development, characterised by using and discarding, rather than building and consolidating. It is a subjectivity evocative of Max Weber’s ‘capitalistic adventurer’; a figure who would ‘collect the fruit without planting the tree,’ as Holanda put it. The utopian tangles with its opposite. Are we dealing with transformation or exploitation? Is the one who works the land subject or object?

Rejecting the exploited and exploiter dichotomy, a different utopian vision fixated on the independent, noble savage, free from work. The Índio was celebrated by Brazilian Romantics and modernists alike. In Macunaíma (1928), Mario de Andrade’s landmark novel mixing fantastical and primitivist elements, the eponymous Indigenous hero, a ‘hero without any character’, is above all lazy – a ‘trait that Brazilians should be proud of, embrace, and consciously cultivate,’ according to Vieira. But at issue is not really laziness but ócio – idleness. The Portuguese word for business is negócio, or the negation of idleness (neg-ócio). So, Vieira argues, the ‘business‑as‑usual work mentality of the capitalist world is at odds … with the primeval ócio of Brazilian Indigenous communities…’ The modernist poet Oswald de Andrade likewise foresaw a coming Age of Leisure, enabled by technology. 
In this egalitarian, matriarchal disposition, Brazil could be at the forefront of nations, showing the way. Civilising work, negócio, had been done; soon the dialectic would swing back to a paradisaical ócio. In practice, the Índio and the adventurer were locked in conflict, but they jointly stood in contrast to the avaricious European bourgeois. It is for this reason that Holanda’s Brazilian archetype of the cordial man is, as the sociologist Jessé de Souza puts it, the ‘perfect inverse of the ascetic Protestant’.

Today’s Brazilian evangelicals are likewise not Weber’s northern European Protestants. Their worship is emotional, not intellectual, filled with magic, rather than structured by reason. But pecuniary accumulation appears to unite them. As the Left-leaning Brazilian philosopher Roberto Mangabeira Unger has noted, these are the people who ‘[go] to night school, struggle to open a business, to be an independent professional, who are building a new culture of self-help and initiative – they are in command of the national imaginary.’ A few years ago, when asked about Left-wing rejection of the entrepreneurial, evangelical sector, Unger replied that the Brazilian Left should not repeat the ‘calamitous trajectory’ of their European counterparts in demonising the petty bourgeoisie and distancing themselves ‘from the real aspirations of workers’.

The ‘neo-Pentecostal movement today flourishes in a context of dismantling of labour protections,’ argues Brazil’s leading scholar of precarity, Ruy Braga. This requires less a methodical dedication to work, and more the neoliberal self-management typical of popular entrepreneurship. We are dealing not with the Protestant work ethic, but with an evangelical speculative ethic. Quantification becomes the criterion of validation, be it for believers or churches competing in the religious marketplace. 
‘Blessings are consumed, praises sold, preaching purchased,’ as Alencar puts it. Whether this is mere capitalist survival or somehow utopian depends on whether you agree with the Catholic theologian Jung Mo Sung’s assertion that evangelicals insert a metaphysical element – perfectibility: the realisation of desire through the market for those who ‘deserve’ it – into mundane society. For a critic of the prosperity gospel like Sung, this neo-Pentecostal consumer-capitalist utopia is necessarily authoritarian. Divine blessing – manifest through the crente’s increased purchasing power – is bestowed as a result of the believers’ spiritual war against the enemies of God: the ‘communists’ and the ‘gays’. The ‘communists’ (who might in fact just be centrist progressives or Catholics) want to give money to the poor; these in turn may be sinners (drug users or traffickers, for instance). This goes against the way that God distributes blessings, which is to favour, economically, those who follow the prosperity gospel.

According to most accounts, a unifying element in the evangelical cosmology is the confrontation between good and evil. The fiel (faithful) encounters a binary: the ‘world’ (sin, violence, addiction, suffering, evil – the Devil around every corner) vs the ‘Church’ (the negation of all that). This code is efficient in affording psychic peace to those facing a complex, rapidly changing world. How stark is the contrast with earlier self-understandings of Brazilian culture in which ambiguity prevailed! Brazil apparently lacked a moral nexus (as the historian Caio Prado Jr saw it in the 1940s), it was a society of ‘corrosive tolerance’ (according to the literary critic Antônio Cândido in the 1970s) or represented a ‘world without guilt’ (said another literary critic, Roberto Schwarz, in the 1980s). Outsiders, too, remarked on the absence of moral depth and pure religion. 
Two 19th-century American missionaries, James Fletcher and Daniel Kidder, lamented in Brazil and the Brazilians (1857) that this natural paradise could have been a moral paradise, were it not for the fact that tropical Catholicism was superficial, pagan, and hung up on feasts and saints. North Americans of the time learned that the Brazilian was ‘amiable, refined, ceremonious’, but also that the absence of stricter moral codes led him to be ‘irresponsible, insincere and selfish’.

The emblematic Brazilian figure, another archetype, is the malandro, or trickster, slacker, scoundrel. Identified by Cândido in his reading of the 19th-century novel Memoirs of a Militia Sergeant, the malandro flits between the upper and lower classes, between order and disorder, and operates on the presumption of an absence of moral judgment, sin and guilt. He does not work full-time, but nor is he a full-time criminal, nor a slave. He gets by on his wits and adapts. For Vieira, the ‘relaxed, leisurely lifestyle of the malandro, which represents the quintessentially Brazilian way of being-in-the-world, generated a society where regulations are lax, and so can be easily bent to accommodate different customs and traditions.’

The malandro is at home in carnaval, which brackets real life, allowing for play, for freedom and fantasy. In Roberto DaMatta’s classic 1979 study, the festival is a subversive, free universe of useless activity – something that looks like madness from the perspective of capitalist work ideology. In this light, Brazil’s great religious transition represents a cultural revolution. Evangelicals interrupt the ‘utopia’ of the idle Índio or the malandro at play in carnival. Firstly, they disdain idleness in favour of entrepreneurial activity and rigorous self-discipline. Secondly, and more directly, they scorn carnival itself. 
As the leading Pentecostal pastor Silas Malafaia puts it, carnival is a pagan feast ‘marked by sexual licentiousness, boozing, gluttony, group orgies and a lot of music.’ This is felt at the grassroots. Folha de S Paulo reports on how conversions are negatively impacting samba schools and other musical groups, with the born-again quitting carnival.

It is often said that Pentecostalism and neo-Pentecostalism owe their success to their adaptability to local contexts. But, at a minimum, these doctrines’ implantation in foreign soil gives voice to deep changes in the receiving culture, and at a maximum may even serve to transform it. If toleration, moral ambiguity and easy-going malleability were central to a Catholic-inflected Brazilian identity, what will an evangelical Brazil look like?

In The Making of the English Working Class (1963), E P Thompson comments that Methodism prevented revolution in England in the 1790s. Yet it was indirectly responsible for a growth in working people’s self-confidence and capacity for organisation. Could something similar be said for Brazilian evangelicals, whose self-starting community-building, at a minimum, could be looked at sympathetically for reconstructing associational life? The Canadian political scientist André Corten, who taught and researched across Latin America, remarked that ‘the failure of secularised Utopias makes the persistence of theologised Utopias come to light.’ Pentecostalism, as a sect, is one such utopianism. It withdraws to an ‘elsewhere’ in social space, refuses to compromise with the social world, and is therefore ‘anti-political’. There is a popular-democratic thrust to this: no deference to a professionalised clergy, but rather a horizontal ordering of the faithful. A comparison with revolutionary-democratic liberation theology is illuminating. Insofar as they construct the category of ‘the poor’, both liberation theology and Pentecostalism are discourses about suffering. 
But Pentecostalism privileges emotion in the place of cognition, glossolalia (speaking in tongues) in the place of equality of speech, and – crucially – it is a religion of the poor, not for the poor. It disdains poverty. Evangelical churches ‘transform people who were born as subaltern – not just poor but also convinced that their social role is to be poor – and they are reborn: they come to understand themselves as equal to other people,’ argues Spyer. They seek to turn their back on poverty and change their lives so as to improve their station.

How does this relate to secular utopianism? It doesn’t. This democratic-popular component cannot be recycled by the Left, nor by conservatism; evangelicals may refuse infeudation to a category of scholars but, simultaneously, the intolerance and despotism of custom connote authoritarianism. This is a movement that is ‘at once egalitarian and authoritarian’, says Corten. Is this not the obverse of the hegemonic culture, of progressive neoliberalism? Our societies are, prima facie, egalitarian: most forms of elitism and snobbery are ruled out, and we are tolerant of difference and accepting of minorities, because everything is relativised in a consumer society. But, in practice, there is a deep inequality of income, wealth, power and even recognition.

So even if we are to conclude that the evangelical wave contains no utopian seeds, it is at the very least countercultural. Indeed, it was, as Alencar put it to me, ‘contestatory from the start: in their social behaviour, ways of greeting each other, their clothes, music, sport, life…’ But this was always a ‘force of transformation with no intentionality’, says Corten, making its logic distinct from the utopian ideologies of the Left. In any case, as evangelical Christianity ballooned, it was always going to leave behind the anti-politics of the sect. 
Corten sketched out three political trajectories that might take shape. One is assimilation: adapting to the reigning order of society. In formal politics, this is represented by evangelical political parties or cross-party benches behaving in ‘physiological’ fashion – a term from Brazilian politics for becoming part of the organs of the state, with all the clientelism and corruption this entails. The happy-clappy neo-Pentecostal churches like Bola de Neve would likewise represent a certain assimilation. Embourgeoisement, for evangelicals, represents not just certain churches becoming middle class, but questions over the professionalisation of the clergy – whether pastors should be paid a salary. These frictions are currently playing out among the faithful, with heated debate within churches – and competition between them.

A second entry point to politics is manipulation: this consists in evangelical leaders letting believers think that they continue to be ‘unacceptable’ while playing the political game. This might accord with the authoritarian thesis, whereby evangelical ‘despotism of custom’ fits seamlessly with secular authoritarian rule.

The third door leads to messianism. This would present the most obvious threat to liberal democracy, not (only) because it would be a species of authoritarian populism, but because ‘the solution to the conflict they displace outside themselves is sought in a “supernatural” outcome,’ argues Corten.

Critical theologians join with much Left-wing opinion in denouncing the falseness and shallowness of evangelical Christianity in its guise as prosperity gospel. Forget countercultural stances, let alone utopian visions: evangelicals are fully subsumed by contemporary capitalism! Worse still, they sustain intolerant, socially conservative attitudes! But even this may be changing. 
The newsweekly Veja reports that evangelicals today ‘want to participate in the institutional decisions of their faith communities, aim for more democratic and transparent environments, and are much more flexible in behavioural matters.’ And for all the community-building of proletarian Pentecostals, the number of ‘unchurched’ is growing. In tandem, the number of evangelicals who belong to ‘undetermined’ churches is growing at the same rate as evangelicals as a whole. This would be testament to an even more total victory of the forces of commodification, atomisation, reification.

In the same river swims the data on secularisation. Those professing ‘no religion’ are increasing, reaching a plurality (30 per cent) among young people in the megalopolises of São Paulo and Rio – but these people mostly do not identify as agnostic or atheist. Indeed, 89 per cent of Brazilians ‘believe in God or a higher power/spirit’, according to the latest Global Religion survey from Ipsos. The trend, then, is for belief without belonging, toward an individualisation of faith and the adoption of eclectic, personalised beliefs used to sustain, justify or comfort the individual subject in a competitive, anomic world. The sectarianism of the closed-off world of believers awaiting the eschaton has been corroded by the fissiparousness of liquid modernity.

Others suggest that there remains a contestatory edge to evangelicals. The anthropologist Susan Harding finds a forceful strain of anti-victimhood in Pentecostal and neo-Pentecostal churches. Indeed, this is why progressives disdain evangelicals: unlike other groups, they don’t see themselves as victims of the system. They are financially motivated and seek to better themselves, in contrast with the exoticised or culturally relevant poor (Indigenous communities or practitioners of Afro-Brazilian religions, for instance). 
For the middle-class progressive, distaste for the evangelical is mere demophobia, a rejection of the urban poor, particularly when they organise themselves.

True as this may be, anti-victimhood tangles in complex fashion with ressentiment, a sense of being unfairly judged or treated. In turn, this is leveraged by evangelical leaders and conservative politicians. This aspect culminates in a seeming vindication of Corten’s manipulationist theory: swampy corruption and authoritarian instincts meld with apocalyptic themes. It is a confluence that was especially evident under Bolsonaro, and the only question now is whether the constellation of forces that regrouped around him will unify again.

What isn’t going away is the social presence of evangelicals as such. But as they expand towards a plurality of the population over the next decade, internal differences and divisions will grow. Neither their politics nor their politicisation is a given. Indications from the US are that evangelicals are retreating from politics, having occupied centre-stage in the 1990s and 2000s. If religion is meant to provide solace, but becomes yet another site in which antagonisms rage, either you need to quit religion or your religion needs to quit politics.

Still, the social infrastructure represented by what is ultimately a mass movement of the poor is remarkable. The web of evangelical churches may represent genuine social power. Whether it is a carrier of mainstream capitalist values of entrepreneurship and speculation, or an anti-politics of refusal, or something else entirely, remains to be seen. Capitalism’s contradictory tendencies towards individualism and collectivity play out in full here. Brazil’s religious transition is a case of both at once.

In Who Are We? 
(2004), the political scientist Samuel Huntington warned that Hispanic immigration would transform US culture into something more Catholic, with a consequent demotion for Anglo-Protestant work ideology. One should not see in the advance of Pentecostalism and neo-Pentecostalism in Brazil an opposite movement. We are not simply faced with a pendulum swing from leisure to work – nor, needless to say, a utopian overcoming of that division. Instead, urbanisation without industrialisation has created a social landscape of low-key civil war. The war of all against all finds its ideological correlate not in a Protestant work ethic but in the speculative-entrepreneurial ethic of evangelicals. In a terrible duality of overwork and worklessness, a speculative leap towards prosperity looks like the only escape. And this obtains whether one follows the rigours of evangelical dedication, studying, setting up a microbusiness on credit – or turning to a life of crime. There are plenty of cases where it’s both.

Finally, then, evangelical Christianity may be the form that popular ideology takes in a context of precarity, after old utopias have dried up. All that remains is a utopia in the sense that Theodor W Adorno discussed: not as a positive social vision, but as the absence of worldly suffering. Adorno, though, was mistaken: he conflated the secular notion of freedom (liberation of our finite lives) with a religious notion of salvation (liberation from finite life). It is the former utopianism that is lacking today – that which drags us along and keeps us walking forward. We need not surrender to the grinding banality of capitalist life for the sake of ‘realism’, nor endow tawdry capitalist creeds with the name ‘utopia’. We need only note that the desire for transcendence exists – it is manifest, in both earthly and metaphysical aspects. The worldwide explosion of Pentecostalism should give us pause, and act as an injunction to invent secular transcendence once more. 