GOATReads:Politics

Forever Wars, Forever Forgotten

Car dealers are notorious for upselling you on things you probably don’t need, like leather seats and rust protection. But what about bulletproof glass? A smoke screen to blind the driver tailing you? Electrified door handles to deter carjackers? A bomb-proof underbody (in case you drive over an IED)? Heck, they’ll throw in gas masks and bulletproof vests for free, if you opt for the vaunted “military package.”
These upgrades suit the world of Mario Kart, or, better yet, of Mad Max. But they’re made for ours. Anyone buying a Rezvani Vengeance, a luxury SUV that first came to market in the United States in 2022, can choose to equip their car with these features. “Vengeance is yours,” Rezvani ominously tells potential customers. But why would anyone need such a hulking, militarized vehicle for American streets? The answer has a lot to do with the war on terror.
Al-Qaeda’s attacks on the country punctured Americans’ sense of safety. If terrorists could hijack commercial airplanes and fly them into buildings, then everyone was vulnerable. Americans scrambled to protect themselves in their everyday lives. Driving the biggest vehicle on the road provided some comfort.
There is a parallel too. The war on terror was an attempt to secure the United States. But this pursuit of security for the country came at the expense of the security of others. Around 408,000 civilians in Iraq, Afghanistan, Pakistan, Syria, Yemen, and Somalia lost their lives directly as a result of the war on terror’s violence; more than 4.5 million have died indirectly. A further 38 million people in these war zones (along with the Philippines and Libya) have been displaced, either abroad or internally (Brown University’s Costs of War Project describes this as a “conservative estimate”).
The war has made Americans less safe too. The Islamic State’s conquest of large swathes of Iraq and Syria in 2014, as well as the terrorism carried out in its name around the world, were outgrowths of the war on terror, especially the overthrow of Saddam Hussein’s Ba’athist government. Similarly, American consumers’ decision to purchase SUVs en masse might have provided a sense of safety for their occupants. But SUVs—their size, blind spots, weight—have undermined the safety of everyone on the road, from pedestrians and cyclists to occupants of other vehicles. SUVs and pickup trucks account for most car sales in the United States today. Meanwhile, pedestrian deaths in recent years have reached record highs. It makes one wonder on whom Rezvani drivers are supposed to be taking revenge.
Of course, it’s not just SUVs. Launching a war of global dimensions shaped the United States from the inside out. The war on terror brought about the rise of militarized police squads, Marvel movies, the Immigration and Customs Enforcement agency, unfettered Islamophobia, and, yes, tactical baby gear.
These are just some of the consequences chronicled by Richard Beck in his profoundly illuminating book, Homeland: The War on Terror in American Life. It’s more than a book version of a “crazy ass moments in American history” social media account (though crazy-ass moments abound in Beck’s telling). It is a meditation on how exactly the United States lost its collective mind after September 11, 2001, and what this loss has meant for the world and especially for the United States. The war, Beck argues, has rotted American culture and politics. Yet, despite the extensive impact of the war, it quickly slipped out of focus. It became background noise.
And, today, it seems strangely forgotten. In 2018, for example, 42% of Americans weren’t even aware that their country was still at war in Afghanistan, a place that the US military wouldn’t vacate for another three years. Why has it been so hard to see the wreckage of a more-than-two-decades-long conflict? Beck’s book helps explain why.
Keeping Americans far from the country’s foreign policy has long been a goal of American policymakers. The national security state, built after the Second World War, is enveloped in secrecy. Similarly, politicians have sought to insulate Americans—or, at least, constituencies that mattered—from the consequences of foreign policy decisions. One such example is the 1973 elimination of the military draft: parents belonging to the upper and middle classes no longer had to worry about the conscription of their children into war.
But that gap grew during the war on terror, almost immediately after it began. Within weeks, George Bush implored citizens to do their patriotic duty—as workers and consumers. “We must stand against terror by going back to work,” Bush urged. “Fly and enjoy America’s great destination spots” and “get down to Disney World in Florida.” Given these instructions, Beck asks, is it a surprise that so many Americans decided to “tune out the whole situation and hope for the best”?
As the war on terror expanded abroad, paradoxically, it faded further into the American background. It was Obama’s approach to the war on terror that sustained this paradox, as Samuel Moyn writes in his 2021 book, Humane: How the United States Abandoned Peace and Reinvented War. Months into office in 2009, the Obama administration launched the concept of a “global battlefield,” which removed any constraints on where the United States could project force. But, at the same time, it cleaned up some of the less seemly elements of the war—forbidding torture, exiting Iraq—and also turned to remote-controlled drones, piloted by Americans in air-conditioned trailers in New Mexico, to do more of the fighting. The pilots were so distant, in fact, that the early drones were afflicted with latency, as the video signal had to travel from the skies of Africa, the Middle East, or Central Asia to a satellite and then back to the United States. The end result: fewer coffins sent back home, less bad press, and waning opposition to the war on terror. (Though the number of non-Americans killed by drones surged under Obama’s reign.)
It’s this distance that distinguishes the United States’s experience of the war from the states it targeted. Distance—that is, distance from violence—provided Americans with the luxury of tuning the war out. Needless to say, this was a luxury not afforded to Afghans, Iraqis, Pakistanis, and many others caught in the cross-hairs of the American military machine.
“No country has been changed more dramatically by the fallout of the 9/11 attacks than Afghanistan,” argues Sune Engel Rasmussen in Twenty Years: Hope, War, and the Betrayal of an Afghan Generation. The Iraq of the late 2010s, Ghaith Abdul-Ahad claims in A Stranger in Your Own City: Travels in the Middle East’s Long War, was “born out of an illegal occupation, two decades of civil wars, savage militancy, car bombs, beheadings and torture.” Finally, according to Hugh Gusterson’s anthropological work on the drone war in the poor, tribal region of Waziristan, the drones lurking above Pakistan have delivered death to many and never-ending fear of death to many more.
The drones’ pilots, meanwhile, return to their suburban subdivisions after work. The war on terror may have been fought over there, but it defined American life over here.
Of particular note for Beck are the war’s effects on American democracy. The war, it could be argued, began as an assertion of popular will. The public overwhelmingly supported a war against both al-Qaeda and its Afghan host, the Taliban. Regime change had broad appeal. But by mid-2003, when the White House turned its attention to Iraq, support for the broader war on terror began to dwindle. In the years that followed, it would plummet. Yet, against the growing objections of Americans, the war continued. What Americans wanted and what their country did increasingly went in different directions.
How is the public theoretically able to shape the doings of the state? One way is through the ballot box. But the war on terror did not come to a close in 2009, when a Democratic president with a historically broad mandate took back the presidency.
The press—what Alexis de Tocqueville described, after his tour of the United States in the 1830s, as “the democratic instrument of liberty”—provides another avenue. It is here that public opinion is supposed to be aired out and turned into political force. But that’s not what happened during the war on terror. Instead, as mainstream news outlets almost invariably fed the public the White House’s perspective that Saddam Hussein possessed WMDs, the press became an instrument of the state. The few journalists and pundits who questioned this elite consensus faced professional consequences. When Phil Donahue, a veteran media personality, dissented, MSNBC canceled his show in February 2003. Which program filled that cherished 8:00 p.m. slot? Countdown: Iraq.
Another way is protest. But the public needs space to do so. And since 9/11, public spaces—places in which people have the freedom to do what they want, anything from hanging out to exercising their democratic right to protest—have been sacrificed for security. Guards and surveillance cameras increasingly honeycomb them, while police officers, outfitted with military surplus from the Pentagon, have come to resemble “occupying armies.” Access to those spaces has been downgraded from a right to a mere privilege. Just ask the participants of Occupy Wall Street or Black Lives Matter whom the state expelled from parks and streets across the country over the last decade and a half.
The public has thus been unable to impose consequences on the people who waged and campaigned for the war on terror. Consider the journalists and bloggers who, after revving up the war machine, collected nothing but garlands. Or the restoration of George Bush’s image in the 2010s (Ellen DeGeneres’s YouTube channel includes a 2019 video titled “This Photo of Ellen & George W. Bush Will Give You Faith in America Again”: 1.2 million views). Or the countless atrocities committed by American soldiers that have been met with slaps on the wrist, if that. Or the National Security Agency’s warrantless surveillance of Americans, ruled by courts in 2020 to have been both illegal and useless, and yet, whose directors have gone unpunished. Or the rise of national politicians who had almost uniformly favored the war in Iraq: every single presidential and vice-presidential nominee since 2016 supported the invasion—with the exception of Kamala Harris.
And yet her campaign last year, maddeningly, still paraded an endorsement from Dick Cheney—the war on terror’s most direct architect—a move that certainly didn’t help her chances of winning the fateful election. Despite handwringing over cancel culture, elites over the past two and a half decades have luxuriated in what Beck calls “impunity culture.” Perhaps it shouldn’t be a surprise, then, that every major political movement since 2003, namely Occupy Wall Street and Black Lives Matter, has targeted impunity itself, whether it was that of the bankers who crashed the economy in 2008 or of the police officers who killed unarmed Black Americans.
The lack of accountability in the American political system was distilled by Obama in a memorable interview he gave in 2009: “I don’t believe anybody is above the law. On the other hand, I also have a belief that we need to look forward.” But looking forward is also looking away.
No one can doubt that the war on terror was transformative for the United States. But we should be careful not to treat it as a complete rupture from the past. This kind of thinking could imply that the United States was in decent shape on September 10, 2001. Just as the election of Trump signalled the country’s pre-existing troubles—a point that Beck makes emphatically—doesn’t the United States’s aggressive, counterproductive, and often barbaric response to the 9/11 attacks indicate that the country was already in crisis?
Beck isn’t oblivious to the war on terror’s pre-history—he devotes pages to everything from settler colonialism to the decline of economic growth since the 1970s. But Homeland regrettably plays down other, more obvious continuities from the past. The foreign policy establishment that led the country into a global crusade against terrorism drew on a repertoire of tactics tried and tested in earlier decades, from regime change to mass surveillance. So too did Americans in their pursuit of security in their everyday lives.
Indeed, as historian Elaine Tyler May argues in Fortress America: How We Embraced Fear and Abandoned Democracy, it was in the second half of the 20th century—not the beginning of the 21st—that a “new consensus” formed in the United States, one organized around a novel fear-laden definition of “security” that “both major parties adopted and most Americans across the political spectrum accepted.” It led to the creation of the national security state, as well as Americans’ retreat into the home to find safety amid threats of nuclear Armageddon and communist subversion.
“During the first decade of the war on terror,” Beck argues, “the United States built up internal fortifications the likes of which the country had never seen.” Again, there is more of a throughline here than Beck lets on. In the early Cold War, Americans became gripped by what Elaine Tyler May calls a “bunker mentality,” in some cases transforming their homes into actual bunkers (“Now,” the Portland Cement Association advertised in the 1950s, “you can protect precious lives with an all-concrete blast-resistant house”). The panic around urban unrest and rising crime rates in the 1960s and ’70s intensified these fortifications. Home security systems flourished; gated communities became the fastest-growing form of housing in the 1990s. Meanwhile, the militarization of cities themselves, chillingly catalogued by Mike Davis in City of Quartz in 1990, was decades in the making.
The war on terror certainly cranked up the volume of fear and conjured new bogeymen, substituting terrorist for communist. But it only seemed natural to do so after more than half a century of fear and security organizing American policy at home and abroad. Even the rise of the SUV—one of the continuities that Beck does trace adeptly—was a part of this broader story: Consumer data from the year 2000 suggests that the car’s popularity was, in part, motivated by a fear of crime.
If the fears of the war on terror began decades before 9/11, then, it’s also worth asking, when did the war on terror end? Could it, perhaps, be continuing? Americans have been treated to regular messaging that it was about to wrap up. There was George Bush’s infamous speech aboard the USS Abraham Lincoln, in front of a banner blaring “Mission Accomplished,” in 2003; there were also Barack Obama’s reforms in 2009. (“With the stroke of his pen,” the Washington Post announced, Obama “effectively declared an end to the ‘war on terror,’ as President George W. Bush had defined it.”)
More recently, the United States’s withdrawal from Afghanistan in 2021 seemed to signal the end. “I was not going to extend this forever war,” Joe Biden told the country. As the last of American troops left in 2021, crowds of Afghans—clinging to what meager personal belongings they could carry—desperately tried to escape the Taliban takeover. The chaotic scenes from the Kabul airport harkened back to the fall of Saigon in 1975. But just as concluding one local conflict did not spell the end of the Cold War, neither did concluding another bring the war on terror to a close.
The two conflicts do make for an intriguing comparison in Homeland. At first glance, it’s their resemblance that one notices. In both Vietnam and Afghanistan, after more than a decade of trying and failing to replace a hostile government with one more pliable, the United States left. But, taking a step back, the differences are even more illuminating. In response to the blood and treasure spilled in Vietnam—conscription was in effect until the final years of the war—an anti-war movement filled the streets. The media adopted a more critical mode. And Congress clawed back its powers over war from the presidency. American militarism itself fell into disrepute.
In contrast, the American occupation of Afghanistan—along with the broader war on terror—has generated a comparatively paltry opposition, especially after the initial wave of protest in the early years. But it’s difficult to protest what one doesn’t know. It seems that the country had moved on—or, to use Obama’s words, was looking forward—before the fighting ended.
But the war on terror does continue. Many of the tools that the Bush administration introduced are still on the books, finding new uses, more than two decades later. The 2001 Authorization for Use of Military Force (AUMF)—the broadly written and even more broadly interpreted piece of legislation that empowered George Bush to pursue the 9/11 attackers—has since been invoked in military operations in at least 22 countries, most recently by Joe Biden in a bombing campaign against Iran-aligned militias in Iraq. It’s not just the 2001 AUMF. In that same operation last year, Biden also cited the Authorization for Use of Military Force Against Iraq, the 2002 resolution that enabled the Bush administration to overthrow Saddam Hussein.
Let that sink in: More than a decade after “ending” the Iraq War, the US president still retains the right to bomb the country, without congressional debate, whenever he wants. What is that, if not a forever war?
The war on terror also continues to haunt American society. National security reigns supreme. Democrats and Republicans have stretched the category “terrorist” to include more and more people. Last year, Manhattan’s District Attorney charged Luigi Mangione, who stands accused of killing a health insurance CEO, with an act of terrorism. Trump and his allies, once again in power, apply the category with abandon. Dealing drugs? Terrorism. Opposing deportation efforts? Terrorism. Vandalizing Tesla cars and infrastructure? Terrorism. Protesting genocide? Terrorism (or, rather, “activities aligned to Hamas, a designated terrorist organization,” an entirely novel and seemingly infinite charge). And regime change, this time in Iran, has re-entered the political mainstream.
The war on terror, as Beck illustrates, has been a tragically bipartisan project, supported by Republicans and Democrats alike. Grumbling about the war can be heard across the aisle. But actually bringing it to an end, and all that would entail—from restoring civil rights at home to resetting the United States’s relations abroad—would require reckoning with the past. That’s a project in search of a political coalition.

Utopia brasileira

Within less than a decade, Brazil will have as many evangelicals as Catholics, a transcendence born of the prosperity gospel
Utopia is on the horizon … I move two steps closer; it moves two steps further away. I walk another 10 steps and the horizon runs 10 steps further away. As much as I may walk, I’ll never reach it. So what’s the point of utopia? The point is this: to keep walking.
– from Las palabras andantes (1993), or Walking Words, by Eduardo Galeano
In 1856, Thomas Ewbank published Life in Brazil, an account of the Englishman’s six months spent in the country a decade earlier. In it, he argued that Catholicism as practised in Brazil and across Latin America constrained material progress. In this, the visitor would be joined by a long line of critics, from the writer and later modernising president of Argentina, Domingo Faustino Sarmiento – who denounced the negative influence of Spanish and Indigenous cultures in Latin America, including the role of the Catholic Church – to the conservative Harvard academic Samuel Huntington.
Ewbank contended, moreover, that the ‘Nordic sects will never flourish on the Tropics,’ a line that Brazil’s greatest historian, Sérgio Buarque de Holanda, immortalised in his work Raízes do Brasil (1936), or Roots of Brazil. Protestants would supposedly degenerate here, with the severity, austerity and rigour of that doctrine being incompatible with the archetypal Brazilian: the ‘cordial man’. This figure, according to Holanda, represented interpersonal warmth and openness, in contrast to closed and rule-bound northern Europeans.
At present, Protestants account for one-third of the population, while the number of Catholics has just dipped below 50 per cent. By far the largest proportion of Brazilian Protestants are evangelicals, specifically Pentecostals, neo-Pentecostals and related branches. By the centenary of Raízes do Brasil in 2036, Protestants will outnumber Catholics in Brazil for the first time in the country’s 500-plus-year history.
In 2018, the far-Right former army captain Jair Bolsonaro shocked the country by winning the presidency, bolstered by an evangelical vote that would remain faithful to him and his socially conservative, politically reactionary and cosmologically apocalyptic politics.
The rise of this bloc presents a challenge to perhaps the most clichéd description of Brazil. In 1941, the Austrian Stefan Zweig, seeking refuge from Nazism in Brazil, called this land the ‘country of the future’. Zweig highlighted not just Brazil’s natural endowments but the society’s tolerance, openness, harmony, optimism and fusionist culture. For Zweig, as for many Europeans and Americans before him, Brazil became a utopian gleam in the eye. For centuries, certain common threads had sewn these utopian visions together: Brazil was a picture of idleness, imagination, diversity and conviviality – a means of living together that relied on adaptability.
Yet the Bolsonarismo phenomenon, according to critics, is intolerant, punitive, supremacist, an embodiment of a type of Christian cosmovision at odds with any notion of society. Did the presidency of Bolsonaro, under the slogan ‘Brazil above everything, God above everyone’, signal an end of this romance? No one holds Brazil up as an existing paradise. Few even sustain any expectation that it will deliver on what was promised for it. And, indeed, utopian thinking probably died as far back as the 1964 military coup.
But many have continued to uphold the country’s cultural traits as admirable and enviable – even models for the world. ‘Brazilianization’, a trope taken up by various intellectuals in recent decades, signals a universal tendency towards social inequality, urban segregation, informalisation of labour, and political corruption. Others, though, have sought to rescue a positive aspect: the country’s informality and ductility, particularly in relation to work, as well as its hybridisation, creolisation and openness to the world, made it already adapted to the new, global, postmodern capitalism that followed the Cold War.
By the 2000s, Brazil was witnessing peaceful, democratic alternation in government between centre-Left and centre-Right for practically the first time in its history. Under President Lula, it saw booming growth, combined with new measures of social inclusion. But underneath the surface of the globalisation wave that Brazil was surfing, violent crime was on the up, manufacturing was down, and inclusion was being bought on credit.
In 2013, it came to a shuddering halt. Rising popular expectations generated a crisis of representation – announced by the biggest mass street mobilisations in the country’s history. This was succeeded by economic crisis and then by institutional crisis, culminating in the parliamentary coup against Lula’s successor, Dilma Rousseff. Now all the energy seemed to be with a new Right-wing movement that dominated the streets. It was topped off by the election of Bolsonaro in 2018. Suddenly, eyes turned to the growing prominence of conservative Pentecostal and neo-Pentecostal outlooks in national life.
Bolsonaro failed to win re-election in 2022. Upon his defeat, Folha de S Paulo, Brazil’s paper of record, reported that ‘Bolsonarista pastors talk of apocalypse.’ At the evangelical Community of Nations church in Brasília, frequented by Michelle Bolsonaro, wife of Jair, the pastor’s wife is reported to have proclaimed: ‘Brazil has an owner. That hasn’t changed, it won’t change. God continues to be the one who made Brazil shine and be the light of the world. His plan has changed neither with regard to us nor the country.’ It was a rare expression, for our times, of a sense of historical mission or destiny. The age of no alternative was being left behind. ‘There is indeed an alternative, even if it is an apocalyptic one,’ the Brazilian philosopher Paulo Arantes sardonically remarked.
In the final 2022 pre-election poll, evangelicals split 69-31 in Bolsonaro’s favour. Although he is Catholic, he was baptised in the River Jordan in 2016 by Pastor Everaldo, an important member of the Pentecostal Assembleia de Deus (the Assemblies of God – the largest Pentecostal church in the world, and the largest evangelical church in Brazil). The creationist and anti-gay Pentecostal Marcelo Crivella shocked many when he defeated a human rights activist to become mayor of Rio de Janeiro in 2016. Crivella’s uncle is Edir Macedo, the founder of the neo-Pentecostal Universal Church of the Kingdom of God (Igreja Universal do Reino de Deus, or IURD), the largest of its kind, reputedly with 4.8 million faithful in Brazil. Preaching the ‘prosperity gospel’, according to which commitment to the church will be rewarded with wealth, has seen Macedo become a dollar billionaire (of which there are around 60 in the country).
The IURD is known for practising exorcisms and divine cures, and for purging demonic spirits, which it associates with Afro-Brazilian religions like Candomblé and Umbanda. But it is the IURD’s political role and media presence that really make it stand out. The Republicanos party, founded in 2005, is a creature of the IURD. Its president, the lawyer Marcos Pereira, was a bishop who held a position in the Michel Temer administration that took office after Rousseff was deposed. The party’s 44 deputies in the lower house of Congress are part of the powerful cross-party evangelical bench in Congress, composed of 215 deputies out of a total of 513. Macedo also owns Record, the second-biggest channel in Brazil, which gave Bolsonaro plenty of free airtime.
The alliance between evangelicals and Bolsonaro only strengthened through his term. During the COVID-19 pandemic, Bolsonaro’s denial of the severity of the virus was, in part, a demonstration of evangelical coronafé, or corona-faith: ‘that confidence, that certainty that God is with you and that he will never, ever, at any time fail those who have believed in him,’ in Macedo’s words. Later in his term, Bolsonaro nominated the ‘terribly evangelical’ Presbyterian pastor André Mendonça to an empty Supreme Court seat. Upon Congressional approval, the president’s wife Michelle, a crucial link to the evangelical public, was filmed crying, praying and speaking in tongues.
After Bolsonaro left office, his supporters stormed government buildings in Brasília on 8 January 2023, in a replay of the storming of the United States Capitol on 6 January 2021. The action was widely unpopular. But 31 per cent of evangelicals supported it, against a national average of 18 per cent. While 40 per cent of the population believed Lula had not won the election fairly, among evangelicals this belief was as high as 68 per cent, with 64 per cent in favour of a coup to overturn the result. The media was full of reports of pro-Bolsonaro protestors praying for miracles, speaking in tongues and behaving like the world was ending.
The theologian Yago Martins, whose videos on religious thought have won him more than 1 million followers across his social channels, refers to Bolsonarismo as an apocalipse de palha, or ‘straw apocalypse’. Bolsonarismo’s combination of a conspiratorial mindset, a longing for an imminent national conflagration, a holy war against evil, and a messianic discourse is a sort of parody of Christian eschatology. For Martins, author of A religião do bolsonarismo (2021), or Bolsonarismo as Religion, the movement is a ‘fallacious immanentisation of the eschaton’, a paraphrase of the philosopher Eric Voegelin’s phrase from 1952. Martins, a Baptist pastor, identifies as a Right-wing evangelical, but is a critic of Bolsonarismo (though he admits to voting for him in 2018). His criticisms of Bolsonarismo’s idolatry nevertheless testify to something new on the scene: the insertion of a transcendental viewpoint into politics, something that had supposedly been expelled with the historic defeat of socialism and nationalism.
Indeed, when I spoke to Gedeon Freire de Alencar, a sociologist of religion and author of a book on the contribution of evangelicals to Brazilian culture, as well as a presbyter of the Bethesda Assembly of God in São Paulo, he emphasised the role of dominion theology, according to which believers should seek to institute a nation governed by Christians.
The ‘Seven Mountain Mandate’, popularised in 2013 by two American authors, identifies seven areas of life that evangelicals should target: religion, family, government, education, media, arts/entertainment, and business. For many progressives, this came across as a sort of ‘medieval radicalism’, the charge thrown at Crivella by Jean Wyllys, the first gay-rights activist to win a seat in Congress. The philosopher and columnist Vladimir Safatle denounced the ‘project to take Brazil back to the Middle Ages’: yes, Brazil had had its share of authoritarian and conservative figures in the past, but this was new, ‘because the old Right… never needed spokespeople.’
As testament to the growing presence of evangelicals but also their political ambivalence, consider the March for Jesus. The yearly demonstration is known as ‘the world’s largest Christian event’, drawing between 1 and 3 million crentes, or believers, each year. Though Bolsonaro was the first president to attend the march, in 2019, it was Lula who signed the law that officialised the National Day for the March for Jesus, scheduled for 60 days after Easter. Similarly, back in 1997 it was estimated that one-third of militants in the agrarian land reform movement, MST, were Pentecostals, which would have been double the rate of the local population at the time. Twenty years later, Guilherme Boulos, coordinator of the MTST, the unhoused workers’ movement, claimed that by far the largest part of the movement’s base was made up of Pentecostals.
So why the association of evangelicals with the darkest reaction? In large part, it’s class prejudice, argues the anthropologist Juliano Spyer, whose book Povo de Deus (2020), or People of God, sparked widespread debate in the country and was a finalist for Brazil’s most prestigious nonfiction prize in 2021. For opinion-formers, the evangelical is either a poor fanatic or a rich manipulator, but the reality is that the religion is socially embedded in Brazil, particularly among the poor and Black population.
For instance, well-to-do social progressivism tars evangelical religion as patriarchal. Perhaps so, in contrast with contemporary upper-middle-class mores, but in the often machista and violent lifeworld of the Brazilian working class, when a man is born-again, he stops drinking, becomes less likely to beat his wife, and is more inclined to contribute to the household. Similarly, while evangelicals are held to be anti-science and anti-enlightenment, in a culture in which even the elite has never been particularly bookish, conversion is associated with a renewed emphasis on study.
This partly explains why Pentecostalism (and evangelical Christianity more broadly) is the faith of the world’s urban poor. And ‘Brazil is ground zero for what is happening within the wider Pentecostal movement, the median global experience,’ explains Elle Hardy, author of Beyond Belief (2021), a book on the phenomenon’s spread worldwide.
The evangelical movement must be understood in relation to a reality in which political corruption abounds, and violence and the threat of violence are omnipresent in the working-class urban context. Brazil now sees more than 50,000 murders a year, and the violence associated with criminal markets, especially drugs, is only the sharp end of a fully marketised society. The Brazilian urban landscape sees a war of all against all play out every day.
Middle-class Brazilian progressives were happy to ignore the civil war raging in the urban peripheries until the violence found a spokesperson in Bolsonaro.
Broadly, the term evangélicos refers to missionary Protestants who are not members of the historic Protestant churches in Brazil – the Presbyterians, Lutherans, Anglicans, Methodists, Adventists and Baptists who first arrived from Europe in the 19th century. Confusingly, many historic Protestant churches carry the name ‘evangelical’ in their titles, and some have now come to adopt modes of worship evocative of charismatic or revivalist churches. But a distinction remains: historic Protestants in Brazil normally call themselves protestante or cristão, not evangélico or crente – and they tend to be middle class.
Pentecostalism arrived in Brazil in the early 20th century, taking root among the poor. Its emblematic church is the Assembleia de Deus, established by two Swedish Baptist missionaries who arrived in the Amazonian port city of Belém in 1910. The third wave, beginning in the 1950s, is marked by the arrival of the Foursquare Church (Igreja Quadrangular), and coincides with rapid industrialisation and urbanisation, with worshipers recruited over the airwaves. But even by 1970, evangelicals still accounted for only 5.2 per cent of the population, while Catholics were at 91.8 per cent.
The establishment of the IURD in 1977 marks the arrival of neo-Pentecostalism and the start of the fourth wave. Proselytising is carried out via TV and, doctrinally, a more managerial ethos is introduced. To the Pentecostals’ direct, personal and emotional experience of God is added the idea that conversion leads to financial advancement – the prosperity gospel. Macedo’s church also exemplified the movement’s growing political confidence. By the 1980s, the slogan crente não se mete na política (believers don’t get mixed up in politics) was being replaced by irmão vota em irmão (brothers vote for brothers).
Throughout, the share of Catholics in the population was falling, with an almost commensurate rise in evangelicals – by about 1 per cent per decade. But, as of 1990, this accelerated to a 1 per cent change per year. Catholics were still 83 per cent of the population in 1991 and 74 per cent in 2000, when Catholicism hit its peak in absolute terms, with 124.9 million Brazilians – making Brazil the largest Catholic country in the world, a title it still holds. But by 2010, the share of Catholics had fallen to 64.6 per cent, with evangelicals rising to 22.2 per cent. Today, evangelicals represent a third of the population, and Catholics just under half. Modellers have identified 2032 as the year of religious crossover, when each Christian camp will account for an equal share of the population: 39 per cent.
What explains this explosion? The anthropologist Gilberto Velho points to internal migration, the primary 20th-century phenomenon in Brazil. Tens of millions of poor, illiterate, rural and profoundly Catholic people from the arid northeast of Brazil migrated to big cities, especially in the industrial southeast. Spyer tells me they ‘lived through the shock of leaving the countryside for the electricity of the city – but also the shock of moving to the most vulnerable parts of the city.’ The void left by the loss of support networks, particularly the extended family, was filled by the establishment of evangelical churches.
This is why the geographer Mike Davis called Pentecostalism ‘the single most important cultural response to explosive and traumatic urbanisation’. Sixty years ago, Brazil’s population was evenly split between town and country. Now it is 88 per cent urban, comparable with infinitely richer Sweden or Denmark, and higher than the US, the UK or Germany. The urbanisation rate is also much higher than that of Brazil’s fellow BRICs China (66 per cent) and South Africa (68 per cent).
Over the past decades, Brazil has also suffered ‘premature deindustrialisation’ – the loss of manufacturing jobs on the scale of the UK, for instance, but at a much lower level of income and development. Here is the recipe for what Davis called a ‘planet of slums’: urbanisation without industrialisation. And it is in the peripheries of megalopolises like São Paulo (greater metropolitan population: 22 million) and Rio (14 million), or other large cities where informal or precarious housing and employment dominates, that nimble startup churches sprout.
Unlike the slow-moving Catholic Church, which demands more established settings and that its priests undergo four years of theological study, any evangelical entrepreneur with a Bible under his arm and access to an enclosed space, no matter how rudimentary, can set up shop. To ambitious working-class men, this offers a route to a leadership position in the community, a path to self-improvement.
It was in this ‘Sahara of civic life’, as the sociologist Luiz Werneck Vianna called it, that Pentecostals and neo-Pentecostals built spaces of acolhimento, a word denoting both warm reception and refuge. They took root in the places abandoned by the Brazilian Left, of which the Catholic, liberation theology-inspired Comunidades Eclesiasticas de Base were a major part.
Of course, not all evangelicals in Brazil are poor or working class. The movement has seen significant expansion into the middle class, even if the elite proper remains mostly Catholic. And there are doctrinal differences that map onto these class differences, even if incompletely.
The model Pentecostal will be a poor assembleiano, a member of the Assemblies of God, whose small, basic and mostly ugly structures populate the landscape, from gritty industrial suburbs to lost hamlets of a dozen inhabitants deep in the interior. In these houses of worship, eschatological themes are omnipresent and the songs are about Jesus’s second coming. On the way to or back from church, worshipers – in their Sunday best – pass each other’s houses and check in on each other, reinforcing communal ties.
At the other end of the spectrum is something like the Bola de Neve Church, founded in a surf shop by a surfing pastor in 1999. Its 560 churches across a number of countries purvey something altogether ‘lighter’. Its middle-class members arrive by car, wear casual clothing, and are treated to sermons accompanied by pop-rock and reggae. Eschatological themes are largely absent. As Alencar put it to me: ‘If Jesus returned now, he’d ruin their gig.’ Accompanying the Church’s suave and sophisticated marketing is the preaching of the theology of prosperity. Turning up in an expensive imported car signals to co-religionists that the prosperity gospel is working.
Importantly, in Brazil, ‘everything is syncretised and miscegenated,’ explains Alencar, so although in doctrinal terms the gulf between Pentecostal and neo-Pentecostal is ‘abyssal’, in practice it is hard to draw clear lines. Moreover, Baptist, Adventist and even Catholic churches are undergoing pentecostalização, adopting charismatic or revivalist features. The prosperity gospel component cuts across many of these complicated lines, a result of the emphasis on competition, individualism and economic ascent typical of neoliberal societies. But ultimately, for all the variety, the growth of evangelical Christianity in a society as unequal as Brazil is a phenomenon of the poor and working class.
Conversion and dedication promise – and, in some cases, deliver – a better life: not just in terms of money, but also relationships, family and especially health. Belief functions as a para-medicine, be it directly through faith-healing, through the belief, determination and support to beat addiction, or simply through the provision of psychological support. In the words of Davis, it is a ‘spiritual health-delivery system’. This is the reason why evangelicals tend to be urban, young, Black or Brown women, from the least schooled strata, with the lowest salaries. It is, as Davis put it, ‘the largest self-organised movement of poor urban people in the world.’
Utopian visions have attached themselves to Brazil and informed its self-conception from its European discovery through to the 20th century. Perhaps it was a coincidence, but in Thomas More’s Utopia (1516) news of a distant paradise was brought by a Portuguese sailor. Brazil was Utopia realised. As Patricia Vieira puts it in States of Grace: Utopia in Brazilian Culture (2018), it presented a ‘fantasy of easy enrichment, grounded on the perception of the region as a treasure trove of natural wealth.’ For one 17th-century Jesuit priest, the land demarcated on the east side of the line drawn by the Treaty of Tordesillas would be the ‘Fifth Empire’, a new kingdom of perpetual peace, where people would live in mystical communion with God, and all would have equal rights. Gradually, messianic and theologically informed visions would give way to secularised ones.
Curiously, Brazil is the only country whose demonym finishes in the -eiro suffix in Portuguese. So you have the Francês, the Argentino, the Americano, the Israelense… but the Brasileiro. It suggests an occupation, like marceneiro (carpenter), pedreiro (bricklayer), mineiro (miner). To be Brazilian was not a state of being, but an activity, a doing. It was the Portuguese and other Europeans who went off and ‘did’ Brazil – exploited its land.
So the Brasileiro is one committed to the project of Brazil, not a mere natural feature of the land. But this also speaks to a rapacious pattern of Brazilian development, characterised by using and discarding, rather than building and consolidating. It is a subjectivity evocative of Max Weber’s ‘capitalistic adventurer’; a figure who would ‘collect the fruit without planting the tree,’ as Holanda put it.
The utopian tangles with its opposite. Are we dealing with transformation or exploitation? Is the one who works the land subject or object? Rejecting the exploited and exploiter dichotomy, a different utopian vision fixated on the independent, noble savage, free from work. The Índio was celebrated by Brazilian Romantics and modernists alike.
In Macunaíma (1928), Mario de Andrade’s landmark novel mixing fantastical and primitivist elements, the eponymous Indigenous hero, a ‘hero without any character’, is above all lazy – a ‘trait that Brazilians should be proud of, embrace, and consciously cultivate,’ according to Vieira. But at issue is not really laziness but ócio – idleness. The Portuguese word for business is negócio, or the negation of idleness (neg-ócio). So, Vieira argues, the ‘business-as-usual work mentality of the capitalist world is at odds … with the primeval ócio of Brazilian Indigenous communities…’
The modernist poet Oswald de Andrade likewise foresaw a coming Age of Leisure, enabled by technology. In this egalitarian, matriarchal disposition, Brazil could be at the forefront of nations, showing the way. Civilising work, negócio, had been done; soon the dialectic would swing back to a paradisaical ócio.
In practice, the Índio and the adventurer were locked in conflict, but they jointly stood in contrast to the avaricious European bourgeois. It is for this reason that Holanda’s Brazilian archetype of the cordial man is, as the sociologist Jessé de Souza puts it, the ‘perfect inverse of the ascetic Protestant’.
Today’s Brazilian evangelicals are likewise not Weber’s northern European Protestants. Their worship is emotional, not intellectual, filled with magic, rather than structured by reason. But pecuniary accumulation appears to unite them. As the Left-leaning Brazilian philosopher Roberto Mangabeira Unger has noted, these are the people who ‘[go] to night school, struggle to open a business, to be an independent professional, who are building a new culture of self-help and initiative – they are in command of the national imaginary.’ A few years ago, when asked about Left-wing rejection of the entrepreneurial, evangelical sector, Unger replied that the Brazilian Left should not repeat the ‘calamitous trajectory’ of their European counterparts in demonising the petty bourgeoisie and distancing themselves ‘from the real aspirations of workers’.
The ‘neo-Pentecostal movement today flourishes in a context of dismantling of labour protections,’ argues Brazil’s leading scholar of precarity, Ruy Braga. This requires less a methodical dedication to work, and more the neoliberal self-management typical of popular entrepreneurship. We are dealing not with the Protestant work ethic, but with an evangelical speculative ethic. Quantification becomes the criterion of validation, be it for believers or churches competing in the religious marketplace. ‘Blessings are consumed, praises sold, preaching purchased,’ as Alencar puts it.
Whether this is mere capitalist survival or somehow utopian depends on whether you agree with the Catholic theologian Jung Mo Sung’s assertion that evangelicals insert a metaphysical element – perfectibility; the realisation of desire through the market for those who ‘deserve’ it – into mundane society. For a critic of the prosperity gospel like Sung, this neo-Pentecostal consumer-capitalist utopia is necessarily authoritarian. Divine blessing – manifest through the crente’s increased purchasing power – is bestowed as a result of the believers’ spiritual war against the enemies of God: the ‘communists’ and the ‘gays’. The ‘communists’ (who might in fact just be centrist progressives or Catholics) want to give money to the poor; these in turn may be sinners (drug users or traffickers, for instance).
This goes against the way that God distributes blessings, which is to favour, economically, those who follow the prosperity gospel. According to most accounts, a unifying element in the evangelical cosmology is the confrontation between good and evil. The fiel (faithful) encounters a binary: the ‘world’ (sin, violence, addiction, suffering, evil – the Devil around every corner) vs the ‘Church’ (the negation of all that). This code is efficient in affording psychic peace to those facing a complex, rapidly changing world.
How stark is the contrast with earlier self-understandings of Brazilian culture in which ambiguity prevailed! Brazil apparently lacked a moral nexus (as the historian Caio Prado Jr saw it in the 1940s); it was a society of ‘corrosive tolerance’ (according to the literary critic Antônio Cândido in the 1970s) or represented a ‘world without guilt’ (said another literary critic, Roberto Schwarz, in the 1980s). Outsiders, too, remarked on the absence of moral depth and pure religion. Two 19th-century American missionaries, James Fletcher and Daniel Kidder, lamented in Brazil and the Brazilians (1857) that this natural paradise could have been a moral paradise, were it not for the fact that tropical Catholicism was superficial, pagan, and hung up on feasts and saints. North Americans of the time learned that the Brazilian was ‘amiable, refined, ceremonious’, but also that the absence of stricter moral codes led him to be ‘irresponsible, insincere and selfish’.
The emblematic Brazilian figure, another archetype, is the malandro, or trickster, slacker, scoundrel. Identified by Cândido in his reading of the 19th-century novel Memoirs of a Militia Sergeant, the malandro flits between the upper and lower classes, between order and disorder, and operates on the presumption of an absence of moral judgment, sin and guilt. He does not work full-time, but nor is he a full-time criminal, nor a slave. He gets by on his wits and adapts. For Vieira, the ‘relaxed, leisurely lifestyle of the malandro, which represents the quintessentially Brazilian way of being-in-the-world, generated a society where regulations are lax, and so can be easily bent to accommodate different customs and traditions.’
The malandro is at home in carnaval, which brackets real life, allowing for play, for freedom and fantasy. In Roberto DaMatta’s classic 1979 study, the festival is a subversive, free universe of useless activity – something that looks like madness from the perspective of capitalist work ideology.
In this light, Brazil’s great religious transition represents a cultural revolution. Evangelicals interrupt the ‘utopia’ of the idle Índio or the malandro at play in carnival. Firstly, they disdain idleness in favour of entrepreneurial activity and rigorous self-discipline. Secondly, and more directly, they scorn carnival itself. As the leading Pentecostal pastor Silas Malafaia puts it, carnival is a pagan feast ‘marked by sexual licentiousness, boozing, gluttony, group orgies and a lot of music.’ This is felt at the grassroots. Folha de S Paulo reports on how conversions are negatively impacting samba schools and other musical groups, with the born-again quitting carnival.
They say Pentecostalism and neo-Pentecostalism owe their success to their adaptability to local contexts.
But, at a minimum, these doctrines’ implantation in foreign soil gives voice to deep changes in the receiving culture, and at a maximum may even serve to transform it. If toleration, moral ambiguity and easy-going malleability were central to a Catholic-inflected Brazilian identity, what will an evangelical Brazil look like?
In The Making of the English Working Class (1963), E P Thompson comments that Methodism prevented revolution in England in the 1790s. Yet it was indirectly responsible for a growth in working people’s self-confidence and capacity for organisation. Could something similar be said of Brazilian evangelicals, whose self-starting community-building could, at a minimum, be looked on sympathetically as a reconstruction of associational life?
The Canadian political scientist André Corten, who taught and researched across Latin America, remarked that ‘the failure of secularised Utopias makes the persistence of theologised Utopias come to light.’ Pentecostalism, as a sect, is one such utopianism. It withdraws to an ‘elsewhere’ in social space, refuses to compromise with the social world, and is therefore ‘anti-political’. There is a popular-democratic thrust to this: no deference to a professionalised clergy, but rather a horizontal ordering of the faithful.
A comparison with revolutionary-democratic liberation theology is illuminating. Insofar as they construct the category of ‘the poor’, both liberation theology and Pentecostalism are discourses about suffering. But Pentecostalism privileges emotion in the place of cognition, glossolalia (speaking in tongues) in the place of equality of speech, and – crucially – it is a religion of the poor, not for the poor. It disdains poverty. Evangelical churches ‘transform people who were born as subaltern – not just poor but also convinced that their social role is to be poor – and they are reborn: they come to understand themselves as equal to other people,’ argues Spyer. They seek to turn their back on poverty and change their lives so as to improve their station.
How does this relate to secular utopianism? It doesn’t. This democratic-popular component cannot be recycled by the Left, nor by conservatism; evangelicals may refuse infeudation to a category of scholars but, simultaneously, the intolerance and despotism of custom connote authoritarianism. This is a movement that is ‘at once egalitarian and authoritarian’, says Corten. Is this not the obverse of the hegemonic culture, of progressive neoliberalism? Our societies are, prima facie, egalitarian: most forms of elitism and snobbery are ruled out, and we are tolerant of difference and accepting of minorities, because everything is relativised in a consumer society. But, in practice, there is a deep inequality of income, wealth, power and even recognition.
So even if we are to conclude that the evangelical wave contains no utopian seeds, it is at the very least countercultural. Indeed, it was, as Alencar put it to me, ‘contestatory from the start: in their social behaviour, ways of greeting each other, their clothes, music, sport, life…’ But this was always a ‘force of transformation with no intentionality’, says Corten, making its logic distinct from the utopian ideologies of the Left. In any case, as evangelical Christianity ballooned, it was always going to leave behind the anti-politics of the sect. Corten sketched out three political trajectories that might take shape.
One is assimilation: adapting to the reigning order of society. In formal politics, this is represented by evangelical political parties or cross-party benches behaving in physiological fashion – a term from Brazilian politics that means to become part of the organs of the state, with all the clientelism and corruption this entails. The happy-clappy neo-Pentecostal churches like Bola de Neve would likewise represent a certain assimilation. Embourgeoisement, for evangelicals, represents not just certain churches becoming middle class, but also questions over the professionalisation of the clergy and whether pastors should be paid a salary. These frictions are currently playing out among the faithful, with heated debate within churches – and competition between them.
A second entry point to politics is manipulation: this consists in evangelical leaders letting believers think that they continue to be ‘unacceptable’ while playing the political game. This might accord with the authoritarian thesis, whereby evangelical ‘despotism of custom’ fits seamlessly with secular authoritarian rule.
The third door leads to messianism. This would present the most obvious threat to liberal democracy, not (only) because it would be a species of authoritarian populism, but because ‘the solution to the conflict they displace outside themselves is sought in a “supernatural” outcome,’ argues Corten.
Critical theologians join with much Left-wing opinion in denouncing the falseness and shallowness of evangelical Christianity in its guise as prosperity gospel. Forget countercultural stances, let alone utopian visions: evangelicals are fully subsumed by contemporary capitalism! Worse still, they sustain intolerant, socially conservative attitudes! But even this may be changing. The newsweekly Veja reports that evangelicals today ‘want to participate in the institutional decisions of their faith communities, aim for more democratic and transparent environments, and are much more flexible in behavioural matters.’
And for all the community-building of proletarian Pentecostals, the number of ‘unchurched’ is growing. In tandem, the number of evangelicals who belong to ‘undetermined’ churches is growing at the same rate as evangelicals as a whole. This would be testament to an even more total victory of the forces of commodification, atomisation, reification.
In the same river swims the data on secularisation. Those professing ‘no religion’ are increasing, reaching a plurality (30 per cent) among young people in the megalopolises of São Paulo and Rio – but these people mostly do not identify as agnostic or atheist. Indeed, 89 per cent of Brazilians ‘believe in God or a higher power/spirit’, according to the latest Global Religion survey from Ipsos. The trend, then, is for belief without belonging, toward an individualisation of faith and the adoption of eclectic, personalised beliefs used to sustain, justify or comfort the individual subject in a competitive, anomic world. The sectarianism of the closed-off world of believers awaiting the eschaton has been corroded by the fissiparousness of liquid modernity.
Others suggest that there remains a contestatory edge to evangelicals. The anthropologist Susan Harding finds a forceful strain of anti-victimhood in Pentecostal and neo-Pentecostal churches. Indeed, this is why progressives disdain evangelicals, because, unlike other groups, they don’t see themselves as victims of the system.
They are financially motivated and seek to better themselves, in contrast with the exoticised or culturally relevant poor (Indigenous communities or practitioners of Afro-Brazilian religions, for instance). For the middle-class progressive, distaste for the evangelical is mere demophobia, a rejection of the urban poor, particularly when they organise themselves. True as this may be, anti-victimhood tangles in complex fashion with ressentiment, a sense of being unfairly judged or treated. In turn, this is leveraged by evangelical leaders and conservative politicians. This aspect culminates in a seeming vindication of Corten’s manipulationist theory: swampy corruption and authoritarian instincts meld with apocalyptic themes. It is a confluence that was especially evident under Bolsonaro, and the only question now is whether the constellation of forces that regrouped around him will unify again. What isn’t going away is the social presence of evangelicals as such. But as they expand towards a plurality of the population over the next decade, internal differences and divisions will grow. Neither their politics nor their politicisation is a given. Indications from the US are that evangelicals are retreating from politics, having occupied centre-stage in the 1990s and 2000s. If religion is meant to provide solace, but becomes yet another site in which antagonisms rage, either you need to quit religion or your religion needs to quit politics. Still, the social infrastructure represented by what is ultimately a mass movement of the poor is remarkable. The web of evangelical churches may represent genuine social power. Whether it is a carrier of mainstream capitalist values of entrepreneurship and speculation, or an anti-politics of refusal, or something else entirely, remains to be seen. Capitalism’s contradictory tendencies towards individualism and collectivity play out in full here. Brazil’s religious transition is a case of both at once. In Who Are We? (2004), the political scientist Samuel Huntington warns that Hispanic immigration would transform US culture into something more Catholic, with a consequent demotion for Anglo-Protestant work ideology. One should not see in the advance of Pentecostalism and neo-Pentecostalism in Brazil an opposite movement. We are not simply faced with a pendulum swing from leisure to work – nor, needless to say, a utopian overcoming of that division. Instead, urbanisation without industrialisation has created a social landscape of low-key civil war. The war of all against all finds its ideological correlate not in a Protestant work ethic but in the speculative-entrepreneurial ethic of evangelicals. In a terrible duality of overwork and worklessness, a speculative leap towards prosperity looks like the only escape. And this obtains whether one follows the rigours of evangelical dedication, studying, setting up a microbusiness on credit – or turning to a life of crime. There are plenty of cases where it’s both. Finally then, evangelical Christianity may be the form that popular ideology takes in a context of precarity, after old utopias have dried up. All that remains is a utopia in the sense that Theodor W Adorno discussed: not as a positive social vision, but as the absence of worldly suffering. Adorno, though, was mistaken: he conflated the secular notion of freedom (liberation of our finite lives) with a religious notion of salvation (liberation from finite life).
It is the former utopianism that is lacking today – that which drags us along and keeps us walking forward. We need not surrender to the grinding banality of capitalist life for the sake of ‘realism’, nor endow tawdry capitalist creeds with the name ‘utopia’. We need only note that the desire for transcendence exists – it is manifest, in both earthly and metaphysical aspects. The worldwide explosion of Pentecostalism should give us pause, and act as an injunction to invent secular transcendence once more.

What is mirror life? Scientists are sounding the alarm

Scientist Kate Adamala doesn’t remember exactly when she realized her lab at the University of Minnesota was working on something potentially dangerous — so dangerous in fact that some researchers think it could pose an existential risk to all complex life forms on Earth. She was one of four researchers awarded a $4 million US National Science Foundation grant in 2019 to investigate whether it’s possible to produce a mirror cell, in which the structure of all of its component biomolecules is the reverse of what’s found in normal cells. The work was important, they thought, because such reversed cells, which have never existed in nature, could shed light on the origins of life and make it easier to create molecules with therapeutic value, potentially tackling significant medical challenges such as infectious disease and superbugs. But doubt crept in. “It was never one light bulb moment. It was kind of a slow boiling over a few months,” Adamala, a synthetic biologist, said. People started asking questions, she added, “and we thought we can answer them, and then we realized we cannot.” The questions hinged on what would happen if scientists succeeded in making a “mirror organism” such as a bacterium from molecules that are the mirror images of their natural forms. Could it inadvertently spread unchecked in the body or an environment, posing grave risks to human health and dire consequences for the planet? Or would it merely fizzle out and harmlessly disappear without a trace? In nature, the structure of many major biomolecules is right- or left-handed, although it’s not clear why life evolved this way. It’s a property known as chirality that was first discovered by French scientist Louis Pasteur in 1848. For example, DNA and RNA are made from “right-handed” nucleotides, and proteins are made from “left-handed” amino acids. Just as a right-handed glove cannot fit a left hand or how a key precisely fits a lock, interactions between molecules often depend on chirality, and living systems need consistent patterns of chirality to function properly. In a mirror cell, all its molecules would be replaced with mirror-image versions. Such a development is hypothetical; even creating a synthetic cell with natural chirality that mimics its normal living counterparts isn’t yet possible, but it’s an active and exciting area of research that Adamala and several other research groups are pursuing. Scientists can make many of the components from non-living precursors, and they could soon engineer normal synthetic cells, which in theory could then create single-cell life forms, such as bacteria. By themselves, small mirror molecules do not pose any particular risks, and scientists can already safely make proteins and carbohydrates with opposite chirality, which hold pharmaceutical promise. Complete mirror cells, however, remain out of reach. Adamala and her colleagues didn’t make much progress with their investigations on that front. The Covid-19 pandemic meant that the research was slow to get off the ground and, more crucially, informal conversations Adamala had with colleagues in other fields, at conferences and other forums, began to sow alarm. “People that are experts in biosafety, immunology and ecology, they didn’t think that something like a mirror cell was actually likely, they thought it science fiction,” she said. One thing these other scientists brought up that was extremely surprising to her was that “mirror cells would likely be completely invisible to the human immune system,” Adamala added. 
“I used to think the immune system will find a way to detect any invading biomolecules. I didn’t know how chiral the immune system was.” Over the course of 2023 and 2024, those ad-hoc conversations coalesced into a working group of 38 scientists, including Adamala, and in December 2024 the group published a bombshell article in the high-profile scientific journal Science titled “Confronting Risks of Mirror Life,” which summarized the findings of a detailed 300-page report compiled by the same group. The report concluded that mirror cells could become a reality in the next 10 to 30 years and detailed the potentially devastating consequences if mirror bacteria were released into an environment and spread, evading natural biological controls and acting as dangerous pathogens. Since then, a nonprofit called Mirror Biology Dialogues Fund has sponsored a series of meetings to develop recommendations aimed at averting the threat scientists think mirror life could pose. While there’s broad agreement that mirror organisms such as bacteria should not be created, there’s much debate over what — if any — limits to investigating mirror biology should exist.

Doomsday scenarios

Dozens of experts met at a two-day conference on engineering and safeguarding synthetic life in Manchester, UK, in September to discuss where red lines should be drawn to restrict research into technologies that could enable the creation of mirror organisms. “There is the possibility that, with admittedly a great deal of work, we could create something which could grow inexorably, spread across the planet and displace or kill many, many forms of life, including us, the animals around us, the plants around us, and even some of the microbes,” said David Relman, a professor of microbiology and immunology at Stanford University, who attended the meeting at the University of Manchester’s Institute of Biotechnology. Relman, like Adamala, was one of the early members of the working group and remembered keeping initial conversations under wraps. “We didn’t want to be unduly alarmist. We also didn’t want to sound like crackpots,” he recalled. “I think we all hoped that we would quickly learn there was some fatal flaw in this logic, but it started to bother me enough to wake me up at night.” Their alarm stemmed from the fact that, because natural life is chiral, interactions between natural organisms and mirror bacteria would be profoundly unpredictable. While the first mirror bacterium would likely be extremely fragile, limiting its growth and survival, it could persist, given the right nutrients. Ultimately, it might act like an invasive species, disrupting ecosystems without predators to keep it in check. Relman said mirror bacteria could potentially evade critical parts of plant, animal and human immune systems, which would have difficulty detecting or killing them. Once inside a human, a mirror bacterium could hypothetically replicate to extremely high levels in the body, causing its host to experience something similar to septic shock. Potential medical countermeasures — including most antibiotics — are chiral, meaning they likely would not be effective on mirror bacteria, although it is possible that mirror versions of antibiotics could be produced. And while biocontainment measures, such as those scientists use to work with dangerous pathogens, could theoretically prevent mirror bacteria from leaving a lab, those measures would be vulnerable to human error or deliberate misuse.
While such a doomsday scenario is far from a sure thing, and there’s great uncertainty about the risks mirror life would pose given it doesn’t yet exist, no one has been able to totally refute the risks. “At first, people were questioning whether those concerns are actually as serious as we thought they were. And so we were trying to poke holes in it, trying to find ways in which we were wrong,” Adamala said. “The more we looked, the more we were certain, and more people were coming on board to this idea that there actually is no safe way to make a mirror cell.” Relman characterized mirror life as the first plausible existential risk he had encountered in a long career that has included investigating the deadly 2001 anthrax letters and Havana Syndrome, a mysterious health ailment that has affected spies, soldiers and diplomats around the world. He said what gives him cause for optimism is the fact that, unlike other contentious and risky areas of science such as cloning, mirror life doesn’t exist yet — it isn’t a “here and now thing” that would be harder to stop. “There’s a real chance we don’t have to have this happen to us unless we choose to do so.”

Red lines

Other experts involved in the mirror life discussions in Manchester, however, said it is important to tread cautiously and not make kneejerk decisions that could thwart scientific progress. They emphasized that research to develop individual mirror molecules should be distinguished from research to develop mirror cells or organisms — even though both are sometimes termed mirror life or biology. Michael Kay, a professor of biochemistry at the University of Utah who is focused on developing drugs — primarily against infectious diseases like HIV — based on mirror-image molecules, said he isn’t “so supportive” of red lines that would completely block an area of study. “I feel like they’re too blunt of a tool,” Kay noted, referring to blanket regulations. Because the human body doesn’t recognize them as readily, mirror molecules such as proteins and nucleic acids resist degradation and are more stable there — useful traits for potential therapeutic drugs. And since mirror proteins cannot self-replicate, they do not pose the same risks as a mirror cell. However, Kay is worried that, through unclear messaging, the word “mirror” will become automatically equated with risky research, limiting innovation in this area. “Mirror molecules are inert chemicals with tremendous benefits,” he explained. “This is very early (research), but they’re here and there’s many more in clinical trials. In the next five or 10 years, this is going to be a big class of drugs.” He also noted that the risks of mirror cells or organisms, should they be unleashed, are unknown. “This (organism) could just starve to death, which we think is probably the most likely thing, or consume all the resources in the Earth and compete with all existing life,” Kay said. “That’s a huge range.” However, he said the ongoing attempts to weigh the risks are important: “I think any effort that buys us time to better understand and better consider the risks and to, you know, just be more deliberate in the way the technologies are advanced, rather than just having a free for all, I think that’s beneficial. And we have the time, you know, for that to happen,” Kay added. “This is not imminent.”

Synthetic non-mirror life

Many synthetic biologists, including Adamala, are seeking to make a synthetic cell — with natural chirality — from scratch out of non-living parts.
The aim is to create something that mimics biological processes to shed light on the cascade of biochemical reactions that allowed life to first arise billions of years ago and help solve problems in medicine, industry, the environment and basic research. “A synthetic cell would be like an operating system for life: it would let us engineer biology at unprecedented scale, with precision that we can’t get in natural cells,” Adamala explained. “We will build models of cellular processes, both healthy and diseased, to understand how healthy cells can be made to work better and how diseases start,” she added. It’s possible that scientists will hit that milestone of a synthetic non-mirror cell soon, perhaps within a year, according to John Glass, a professor and leader of the La Jolla, California-based J. Craig Venter Institute’s Synthetic Biology Group. And if a normal cell with natural chirality can be created from lifeless molecules, then, in theory, he said a mirror-image cell could be created from mirror-image molecules using the same methods. Glass’ initial conversations with Relman in April 2024 about the potential risks of mirror life left him “shaken,” he remembered. “It made me wonder if the thing, the work that I have been doing for years, could be enabling, one day, the mirror bacteria-based Armageddon that we fear,” he said. However, most experts agree that making a synthetic cell with natural chirality is safe, because if a bacterium made from a synthetic cell were to enter an environment, it would be subject to the normal controls of any ecosystem, making it easy prey to natural predators such as viruses that target bacteria. Thus, it wouldn’t be able to spread uncontrollably. For Glass, an obvious red line is for scientists to refrain from making a mirror ribosome, a biological machine found in a cell’s cytoplasm that makes proteins. While that advance would still be far from creating a mirror cell, he said if you instead draw the line at the last step — creating a mirror membrane and then assembling all the constituent parts — it’s too late. But Kay at the University of Utah feels differently. His lab is focused on improving the chemical synthesis of mirror proteins, a process which is currently very inefficient. As an alternative, he is interested in exploring making a mirror-image ribosome. “The idea is that if we could make a mirror ribosome, we could then use that to make these products with much higher quality, much longer proteins, in a way that might be compatible with pharmaceutical use,” Kay explained. “But there is concern, in this group — and I’m undecided right now — whether having that would be such a valuable tool that it would make the development of mirror life inevitable or too easy for others,” he added.

Scientific restraint

Adamala, along with her colleagues, chose not to renew her research grant, ending her lab’s work on mirror cells. She is focusing instead on discussions around how to regulate mirror life research. To the best of her knowledge, she said, no scientists are pursuing the creation of a mirror cell. In February 2025, nearly 100 researchers, funders and policymakers signed an entreaty arguing that “mirror life should not be created unless future research convincingly demonstrates that it would not pose severe risks.” Ultimately, signatures and self-restraint may not be enough. Adamala, Glass and Relman all said they hope their conversations will inform more formal restrictions or policies at the international or national level.
There was no concrete outcome from the Manchester meeting, and related discussions continued at a September workshop hosted by the National Academies of Sciences, Engineering, and Medicine’s Standing Committee on Advances and National Security Implications of Transdisciplinary Biotechnology. “Pretty much everyone agrees that we should not make a living mirror cell. That’s the baseline but below that people have a lot of different ideas where should we stop the research,” Adamala said. “Right now, the scientific community can’t really agree on the red lines.” Relman said the goal is that the proactive efforts the group is making will not only protect the planet from a doomsday scenario but also help rebuild some of the trust scientists have lost with the public in recent years. “Wouldn’t it be great if we could be the Dr. Ian Malcolm of ‘Jurassic Park’?” he said, referring to the fictional mathematician played by Jeff Goldblum in the 1993 movie, who warns other characters about the potential dangers of unbridled scientific ambition. “We scientists … think about whether we should, not whether we could.”

GOATReads:Politics

Dreams of a Maoist India

India’s Maoist guerrillas have just surrendered, after decades of waging war on the government from their forest bases

On 6 April 2010, a company of India’s central paramilitary soldiers came under attack from Maoist guerrillas in the central-eastern state of Chhattisgarh. The Maoists, who had turned this region into their stronghold, had laid a trap. With little training and scant knowledge of the Amazon-like jungle, the Indian soldiers found themselves ambushed. They fought back, but they could not escape the ambush. Seventy-five soldiers and a state policeman accompanying them were killed. Never before had the Indian forces suffered so many casualties in a single incident, not even in Kashmir, where, for more than 20 years, they had been fighting a protracted battle against Islamist extremists. As the body bags of the soldiers reached their native places in different parts of India, a deep sense of anger spread among people who till recently had only a vague idea about who these Maoists were, and even less about the hinterland that the Maoists had turned into a guerrilla zone. Since the mid-2000s, the Maoists had grown in strength, launching audacious attacks against government forces, looting police armouries and declaring certain areas ‘liberated zones’. Their operations ran in a contiguous arc of land, from Nepal’s border in the east to the Deccan Plateau in the south – an area the Maoists called Dandakaranya or DK, using the name in its historical sense. This is a region where India’s indigenous people, the Adivasis, lived; it also holds valuable minerals and other natural resources in abundance. The Indian state wanted control over the natural resource wealth, but the Maoists were proving to be an obstacle. In 2009, the then prime minister Manmohan Singh called them India’s ‘greatest internal security threat’. On the morning of the attack in 2010, I’d landed in a part of DK, in a city a three-hour car ride from the edge of a town beyond which was a forest ruled by Maoists. From there, if one started walking, one would, without getting on to a motorable road, reach the spot of the Maoist ambush. There would be only a sprinkling of tiny hamlets, inhabited by Adivasis, who thought of Maoists as the ‘government’. Beyond that, they had very little idea of life outside, least of all the blitzkrieg of ‘India Shining’, a political campaign by a previous government (the conservative BJP), which had in spirit continued to exist as a cursor to economic optimism. The scrawny man – I’ll call him ‘A’ – who came to pick me up from the railway station on his battered motorcycle was a former Maoist. He was also a Dalit, the so-called ‘lower-caste’ people at the bottom of the Hindu caste system, who, like the Adivasis, have been historically maltreated in India. He lived in a slum and had been recruited in the 1980s, along with several others in the city, by a woman Maoist. After a few years, ‘A’ had quit the party to raise a family. Life outside had been harder than inside the forest. For people like ‘A’, it was difficult to come out of the poverty and bitterness that came with their ascriptive status in the caste system. He struggled with odd jobs, and in the night drank heavily and sang resistance songs by the yard to temporarily rid himself of the bile.
I’d spend a day or two in that city and then travel towards the edge of the town, from where a Maoist sympathiser would pick me up from a stipulated spot. After a bike or jumpy tractor ride, followed by a walk of several hours, contact with a Maoist squad would be established. From there, I would travel with them, sometimes for weeks, from one hamlet to another, crossing rivers and hills, evading bears and venomous snakes, hoping that, once I returned, I wouldn’t be gripped by a fever, which could indicate malaria, endemic in these areas. Twelve years earlier, my own history had prompted my interest in Kashmir and the Maoists. My family belonged to the small Hindu minority in Kashmir – the only Muslim-majority state in an otherwise Hindu-majority India. In 1990, we were forced to leave, as Islamist extremists began targeting the Hindu minority. In a few months, the entire community of roughly 350,000 people was forced into exile. For journalists, though, that expulsion was not very newsworthy. As the Indian forces began conducting operations against militants, resulting in brutal clampdowns and sometimes excesses against civilians, Kashmir became a dangerous place. But, for journalists, it turned into a harbinger of awards, of grants and fellowships. I decided to go away from Kashmir, not at first by design, but by a chance trip to the guerrilla zone. I had barely travelled beyond Delhi, just a few hundred miles south of Kashmir. But, as I began exploring the mainland further, I ended up in hinterland areas where India’s poorest of the poor lived. In the new India, where people had begun to randomly drop into conversations terms such as ‘trillion-dollar economy’, these were still areas where the poor would die of hunger. What I saw in my journeys into rural India came as a revelation; in contrast, my own exile, of leaving a modest but comfortable home and instead facing the humiliation of living in a run-down room in Jammu city, seemed bearable. Maoists weren’t yet receiving a lot of attention. The prime minister Singh’s pronouncement about Maoists was more than 10 years away, so it was difficult to convince editors to cover them, but I persisted, mostly because I felt that I was receiving a real education, one that a journalism school would never offer. In these years, the lack of government interest in the Maoists was an advantage – one could travel easily to DK without arousing the suspicion of the security apparatus. When I said goodbye to ‘A’ and headed into the forest from where DK began, I knew I’d have to be more cautious. I had avoided spending more than a few hours in the city where we’d met; a hotel reservation could give me away, and the police would put me under surveillance. By the afternoon of the next day, I was in a Maoist camp, under a tarpaulin sheet, meeting, among others, Gajarla Ashok, a Maoist commander, and another senior leader, a woman called Narmada Akka. Like most of the Maoist leadership, they both came from urban areas. These leaders had been teachers, engineers, social scientists and college dropouts, moved by the idea of revolution. They had come to DK with the same dream of establishing an Indian Yan’an (the birthplace of the Chinese communist revolution). But the core of Maoist recruitment came from the Adivasis, and before that from the working class and peasantry among Dalits and other ‘backward’ communities.
The Maoists had decided to enter Chhattisgarh and its adjoining areas (which comprised DK) in the early 1980s. That was their second effort to bring about a revolution. An earlier attempt had been made in a village called Naxalbari in West Bengal, in the late 1960s. Peasants, who tilled the fields of landlords and received only a minuscule proportion of the harvest, rose against the iniquity of their small share. The rebellion was inspired by members of the mainstream Communist Party of India, who had begun to grow disillusioned with their organisation. This questioning had also taken place in other parts of the world. In France, for example, during the May 1968 Leftist student protests, the postwar Left came to be seen as an obstacle to real social transformation. ‘What do we win by replacing the employers’ arbitrary will with a bureaucratic arbitrary will?’ asked the Marxist thinker André Gorz. A similar sentiment had been expressed almost 40 years earlier by an Indian revolutionary, Bhagat Singh, whom the British hanged in 1931 at the age of 23. In a letter to young political workers a month before his hanging, Singh warned that the mere transfer of power from the British to the Indians would not suffice, and that there was a need to transform the whole society: You can’t ‘use’ him [the worker and the peasant] for your purpose; you shall have to mean seriously and to make him understand that the revolution is going to be his and for his good. The revolution of the proletariat and for the proletariat. Singh’s prescription proved to be right. Even as the prime minister Jawaharlal Nehru, whose commitment to social justice could not be doubted, took over from the British in 1947, the poor and the marginalised communities like the Dalits and Adivasis continued to remain outside the welfare circle of his five-year plans. Feudalism did not go away. Land reforms to break up the feudal concentrations of wealth and power were initiated, but the rich and the powerful also found means to circumvent the law. The rich quickly joined politics, and the police acted as their private militia. As recently as 2019, a survey by the Indian government revealed that 83.5 per cent of rural households owned less than one hectare of land. The government’s Planning Commission figures (1997-2002) put the landless among Dalits at 77 per cent, while among the Adivasis it was 90 per cent. The government’s National Sample Survey in 2013 revealed that about 7 per cent of landowners owned almost half of the total land share. In the 1960s, these disillusioned communists felt that the Communist Party of India had grown complacent and corrupt, and that its leaders were ‘conscious traitors to the revolutionary cause’. They made their case in long papers and articles full of communist jargon in publications like Deshabrati, People’s March and Liberation. The essence of their indictment was that the poor and the working class had been let down by the parliamentary Left. In 1969, these breakaway communists formed their own party, the Communist Party of India (Marxist-Leninist), which announced its aim to unite the working class with the peasantry and seize power through armed struggle. They sought help from China, which was quick to offer it, calling the uprising ‘a peal of spring thunder’. Some of the men inspired by Mao travelled to China through Nepal and Tibet, receiving political training from Mao’s associates.
The Maoist message spread from Naxalbari to other parts of India, like Bihar, Andhra Pradesh, Punjab and Kerala. Inspired by its main leader, Charu Majumdar, small guerrilla squads began to indulge in ‘class annihilation’, killing hundreds of landlords and their henchmen, policemen and other state representatives. ‘To allow the murderers to live on means death to us,’ Majumdar declared. Liberation, the party’s mouthpiece between the years 1967-72, is full of reports of killings of landlords, and how land and other property they owned had been ‘confiscated’ by peasant guerrillas. In practice, however, ‘class annihilation’ proved counterproductive. On the streets of Calcutta (today’s Kolkata), for example, naive men from elite colleges would roam around with crude bombs and even razors with which they attacked lone policemen. Nonetheless, in the late 1960s, the Naxalbari movement inspired thousands of bright men and women from elite families studying at prestigious schools. They said goodbye to lucrative careers and made the feudal areas, where the poor faced the utmost oppression, their workplace. Beginning in July 1971, a brutal government response killed hundreds of Indian Maoists, probably including their leader Majumdar; he died in police custody in 1972. Kondapalli Seetharamaiah, popularly known as KS, was one of those dissatisfied with the shape that the parliamentary Left had taken in India. He was a school teacher in Andhra Pradesh, which had a long history of feudalism and communist struggles. In the state’s North Telangana area, bordering Chhattisgarh, for example, feudal customs of slavery like vetti were still being practised decades after India became free. KS, a former member of the Communist Party of India, had not lost all hope, and decided to join hands with Majumdar’s line. But before he could restart, he decided the Maoists needed a rear base, just like Mao had urged, for the guerrillas to hide in the forest. The other amendment to Majumdar’s line was with regards to the formation of overground organisations to further the cause of revolution, something that Majumdar had strongly opposed. In 1969, KS sent a young medical student to a forest area in North Telangana to explore the possibility of creating the rear base. But in the absence of any support, the lone man could not achieve anything and had to return. In the mid-1970s, KS sent yet another man, this time a little further inside, into Chhattisgarh. Spending a few months inside, the man, who had acquired basic medical training, started treating the poor tribals. But, again, how much could one man or two do? So, he returned as well. So KS made another change in strategy – he took the Maoists out of the shadows and founded a few organisations that, on the surface, were civic associations, but were meant to further the Maoist ideology. Prominent among these was the Radical Students Union (RSU), launched in October 1974. Along with a cultural troupe, Jana Natya Mandali, young RSU members began a ‘Go to Village’ campaign on KS’s instructions. In this campaign, the young student radicals and ardent believers in the armed struggle would try to make villagers politically ‘conscious’. The ‘Go to Village’ campaign enjoyed some initial success, attracting students and other young people from working-class backgrounds. Hundreds of young people in universities and other prestigious institutions in Andhra Pradesh left their studies and vowed to fight for the poor. 
Fourteen students from Osmania University in what was then Andhra’s capital, Hyderabad, joined; 40 from other parts of the state joined the Maoist RSU. The Maoists’ ‘Go to Village’ campaign found fertile ground in the town of Jagtial, in the state’s Karimnagar district. There, as across Andhra, people celebrate the festival Bathukamma, which includes theatre performances in villages that were home to landlords from the dominant castes. The caste segregation of the villages was complete: the landlords lived in the village centre, while the Dalits lived on its periphery. But now in Jagtial, the Dalit labourer Lakshmi Rajam took the performance to the Dalit quarters. Another Dalit man, Poshetty, occupied a piece of government-owned wasteland, which would usually be in the landlords’ control. These acts enraged the landlords, who killed both these Dalit activists. On 7 September 1978, under the influence of the Maoists, tens of thousands of agricultural labourers from 150 villages marched through the centre of Jagtial. The march was led by two people, one of them Mupalla Laxmana Rao, alias Ganapathi. He came from Karimnagar itself and would become KS’s closest confidant, later taking over from him to become the Maoist chief. The other was Mallojula Venkateshwara Rao, alias Kishenji, a science graduate, who would prove to be an efficient leader and military commander. The Jagtial march rattled some landlords so much that they fled to cities. The poor also decided to boycott the landlords who would not agree to any land reforms. Services that the poor provided – washer men, barbers, cattle feeding – were denied to the landlords. This strike led to further backlash from landlords, as reported by the respected Indian civil rights activist K Balagopal. From these village campaigns, KS decided to move ahead and try to create a guerrilla zone where armed squads would mobilise peasants and contest state power. In June 1980, seven squads of five to seven members entered the hinterland – four of them in North Telangana, two in Bastar in Chhattisgarh, and one in Gadchiroli in Maharashtra, an area where the Adivasis lived. The Adivasis were mostly food-gatherers, and their life had remained unchanged for hundreds of years. Abundant mineral wealth lay beneath the land where they lived, but they lacked even basic modern services like education and healthcare. Petty government representatives like forest guards would harass the Adivasis for using resources like wood, citing archaic forest laws. At first, the Adivasis did not welcome the presence of the Maoists. However, before long, a kind of alliance between them developed, where the common enemy now was the state. As the Maoists pushed on, the state retreated, and the Adivasis began to exert their rights over the forest. In many areas, the feudal landlords were served ‘justice’ like Mao had dictated. In 1980, the Swedish writer Jan Myrdal visited the Maoists, and one of the comrades told him of an incident from North Telangana, which Myrdal recounts in his book India Waits (1986). A notorious rowdy there had instilled fear among the people on behalf of his master, a landowner. He raped a washer-girl. In shame, she jumped into a well and drowned herself. When the Maoists came to know of it, four of them, till recently students, called him out in the bazaar.
When he arrived, the rebels caught him with a lasso, cut off his hands and nailed them to a wall inside a shop. The rough vigilante justice inspired more young people to join the Maoists: men like Nambala Keshava Rao, a graduate of the much-respected Warangal engineering college, and Patel Sudhakar Reddy, who held a master’s degree from Osmania. It also brought in young women like Maddela Swarnalata and Borlam Swarupa. Swarnalata came from a poor Dalit family and was recruited through the Radical Students Union. In the early 1980s, she’d taken part in clashes against Right-wing student groups, especially the Akhil Bharatiya Vidyarthi Parishad. The police would follow her and pressurise her into revealing details of her comrades who had already gone underground. Soon it became impossible to avoid arrest, so she too went underground, joining a Maoist squad, before dying in an encounter with the police in April 1987. Meanwhile, Swarupa had become active through campaigns with farmers’ groups for a better price for their crops. The Maoist leadership placed her as a labourer in a biscuit factory in Hyderabad, in order to recruit among workers there. Once she’d been exposed, Swarupa was asked to shift to the guerrilla zone, where she became the first woman commander, leading a squad in North Telangana, until she was killed in an encounter in February 1992. One of the prominent features of the Maoist movement is the way it attracted women to its fold. For women from the working class, who led difficult lives under a patriarchal mindset, joining the Maoists felt like a liberation. Recruits to the Maoists often attracted their friends, siblings and other family members to join too. Doug McAdam, professor of sociology at Stanford University in California, has written about this ‘strong-tie’ phenomenon, in which personal connections draw people into ‘high-risk activism’ of violence. In Bastar and elsewhere, the Maoist guerrillas targeted people and agencies they considered exploiters. For example, they started to negotiate better rates for the collection of tendu leaves, used in the manufacture of local cigarettes, which was a lucrative business. But along with that, they also started to take cuts from businessmen for running their organisations. The Norwegian anthropologist Bert Suykens, who has studied the tendu leaf business, called it a joint extraction regime. The Maoists also began to extort a levy from corporate houses involved in mining in these areas, as well as from government contractors. In the process, they deviated from their promise – of returning the forest to the Adivasis, and of helping the poor. They spent most of their time running their organisation and launching attacks against government forces. In her research in central Bihar in 1995-96, the Indian sociologist Bela Bhatia concluded that the Maoist leaders ‘have taken little interest in enhancing the quality of life in the villages.’ In fact, these leaders regarded development ‘as antagonistic to revolutionary consciousness,’ she wrote in 2005. In the meantime, the Indian state was growing impatient with the Maoists. In 2010, a London-based securities house report predicted that making the Maoists go away could unlock $80 billion of investment in eastern and central India. New Delhi began preparations for a large-scale operation to get rid of them. But, before that, the extraordinary arrest in 2009 of the Maoist ideologue Kobad Ghandy in Delhi heightened political interest in the insurgents. 
Special police agents from Andhra Pradesh had managed to locate Ghandy, who had been living in a slum using fake identification. He came from an elite Parsee family in Mumbai; his father was the finance director of Glaxo; he had studied with India’s political dynasts at the elite Doon School, and had then gone to London to pursue further education as an accountant. In the UK, he was introduced to radical politics, and returned to Mumbai in the mid-1970s, where he met Anuradha Shanbag, a young woman from a family of notable Indian communists and a student of Elphinstone College in Mumbai. Shanbag and Ghandy were both drawn to Maoism, fell in love and married. Soon afterwards, in 1981, they met KS in Andhra Pradesh and shifted to a slum area in a city where Shanbag recruited my friend ‘A’ and others. In 2007, Shanbag was promoted to the Maoist Central Committee, a rare accomplishment for a woman. A year later, however, she died from complications due to malaria she had contracted in a guerrilla zone. After Ghandy’s arrest in 2009, rumours arose that he had been sent to work among the labourers as part of the Maoists’ urban agenda. His arrest became a hot topic in Delhi circles: for the first time, it sparked interest in the Maoist movement among people who did not bother to read a newspaper beyond its Fashion section. Ghandy’s abandonment of his elite background to fight for the poor created a wave of empathy for the Maoist movement. Around the time of his arrest, I got a rare opportunity to meet the Maoist chief, Ganapathi. The meeting happened by chance. Through some overground sympathisers, he had learnt that I was in a city close to the guerrilla zone in which he was then hiding. By this time, state surveillance was at its peak, and the Maoist leadership was extremely cautious of any contact with outsiders. Ganapathi in particular barely met anyone except his commanders. After days of travel through the guerrilla zone, I was allowed to record our conversation on a digital device provided by his men. After Ganapathi left the area, I transcribed the interview, but even that I was not allowed to carry with me. A month later, I received the transcript through one of his overground workers in Delhi. A few months later, in 2010, while I spent time with the Maoist leaders Gajarla Ashok and Narmada Akka in their camp, I sent a questionnaire to Ganapathi. His reply came a few weeks later, in which he made mention of the importance of work in urban areas: ‘If Giridih [a small town in the east] is liberated first, then based on its strength and on the struggles of the working class in Gurgaon [now Gurugram, a satellite city close to Delhi where most multinational corporations have their offices], Gurgaon will be liberated later. This means one is first and the other is later.’ It was a tall order. There were innumerable problems in cities, including poverty. But with the liberalisation of the 1990s, middle-class insularity had made most people oblivious of the suffering of others. The Maoists wanted to make inroads through slums and labour unions, but found little reception. The curiosity and empathy the Maoists generated among ordinary people in cities soon dissipated. The conservative BJP, which was rising to national power, relentlessly used Kashmir to rouse Hindu sentiment in mainland India. In the first decade of the 2000s, Islamist radicals targeted mainland India, creating friction with the Muslim minority.
The Indian Parliament had come under attack in 2001; Mumbai city faced a terrorist attack in 2008. Between these, many Indian cities like Delhi, Hyderabad, Varanasi and Jaipur were targeted with bomb blasts, killing scores of people. At the same time, the overground sympathisers of the Maoist movement began hobnobbing with separatist elements from Kashmir and India’s Northeast, which had a long history of secessionism, and these potential alliances stirred controversy. This resulted in a backlash against Maoist sympathisers, and a new term was coined for them: ‘urban Naxal’. Hindu nationalism was on the rise in India and, in the coming years, this term would become a ruse for the government to suppress all activism, resulting in the incarceration of civil rights activists like the human rights lawyer Sudha Bharadwaj. What also did not help was the number of body bags – of forces killed in Maoist ambushes – going to different parts of the country. As part of its anti-Maoist operation, the government began to push infrastructure – primarily roads and mobile/cellphone towers – in the Maoist-affected areas. It led to further entrenchment of state forces, which also weakened the Maoists. Their leaders who were in hiding in cities began to be hunted down. The new roads and phone towers were welcomed by rural people. The Maoists began killing Adivasis on suspicion of being police informers. This violence alienated Adivasis, and others too. Earlier, the Maoists would visit a village in the night and slip away. Even if their presence was reported, it was of no use to security forces because the information would reach them quite late. But now, with cellphone networks, the people could call immediately, leading to encounters between the Maoists and state security forces. Since about 2020, the decline of India’s Maoist movement has been rapid. The Maoist commander Ashok – whom I had met in the forest in 2010 – surrendered in 2015. One of his brothers had already died in an encounter. Meanwhile, Akka was arrested in 2019 in Hyderabad where she was seeking treatment for cancer; she died in a hospice three years later. The government raised a special battalion of Adivasis, which included surrendered Maoists, to hunt down the Maoists. It started getting big results. In May this year, Nambala Keshava Rao, who had taken over as the Maoist chief from Ganapathi in 2018, was killed in a police encounter. A few weeks later, another of Ashok’s brothers, a senior commander, was also killed by police. The entire Maoist leadership, barring a few, has been wiped out. Ashok has, of late, joined the Indian National Congress Party. ‘A’ has not been in touch in the last few years, ever since some of his friends were arrested as ‘urban Naxals’. A friend of his told me the other day that he has stopped interacting with people. A month ago, a friend in Gurugram told me of an incident that took place where he lives. His local Resident Welfare Association had put a cage in their park, with a banana inside it to lure marauding monkeys in the vicinity. A few hours later, they found that the banana had been consumed by someone and the peel left outside the cage. It made me imagine how hungry that person would have been, most likely a poor worker. The friend sent me a screenshot of the residents association’s WhatsApp group. ‘Check the CCTV,’ someone had written. The Maoists have completely surrendered now, asking the government to accept a ceasefire.
A statement released this September, purportedly by part of the Maoist leadership, apologises to people, saying that, in the process of revolution, the leadership made several tactical mistakes, and that the ceasefire was now important to stop the bloodshed. What those mistakes are, the statement wouldn’t say. As anti-Maoist operations go on with even more rigour, a handful of those still inside the forest will ultimately surrender or be killed. It is too early to say how history will remember them; but it is a fact that, had it not been for them, the much-needed focus on the hinterland of DK would not have been there. However, to the man in Gurugram who stole the banana, and to the man in Giridih, who doesn’t even have a banana in sight, it means nothing.

GOATReads:Politics

Civility and/or Social Change?

In March 2023, a mass shooting occurred at Nashville’s Covenant School. To protest the subsequent lack of action on gun control, Democratic state legislators Justin Jones, Justin J. Pearson, and Gloria Johnson chanted “No action, no peace” on the floor of the Tennessee House of Representatives. In response, their Republican colleagues issued a set of resolutions accusing the trio of “disorderly and disruptive conduct.” The House subsequently voted to expel Jones and Pearson, who are Black; Johnson, who is white, was allowed to maintain her position. Here, ideals of decorum and politeness—of civility—were used to silence the protests of Black Americans. The politeness-driven civility used to drive Jones and Pearson from the legislature has a long history, dating at least to before the US Civil War. Then, slavery’s practitioners, advocates, and apologists would regularly invoke manners in their attempts to rebut the claims of antislavery activists. In turn, abolitionists like Frederick Douglass condemned those weaponizations of politeness. Slaveholders are “models of taste,” Douglass scathingly observes in his 1857 speech “West India Emancipation.” When debating questions of paramount importance like slavery and freedom, he bitterly continues, “With them, propriety is everything; honesty nothing.” From the years before the Civil War up to today, then, civility advocates have insisted on politeness. But, as Douglass shows, that distracts from urgent problems that “honesty” would demand addressing directly. In fact, as the episode from the Tennessee state house reveals (and as Douglass saw clearly), invocations of civility can actually oppose equity and social change. But even the dangerous discourse of civility decried by Douglass—and weaponized against Jones and Pearson—is presently undergoing a transformation, and not for the better. Demands for mannerliness and propriety are increasingly being replaced by demands that those in conflict just tolerate their disagreements. The civility of politeness, in other words, is being supplanted by what I term an “agree-to-disagree civility.” This emergent civility, I argue, legitimizes reactionary stances and valorizes the status quo. Such agree-to-disagree civility is on clear display in the seemingly cordial statement issued by all the US Presidential Foundations and Centers in anticipation of the acrimony of the recent election cycle. The leaders of the Obama Foundation, the George W. Bush Presidential Center, and 11 other similar organizations admit that they hold “a wide range of views across a breadth of issues.” Still, they insist that “these views can exist peaceably side by side.” The statement goes on to affirm that “debate and disagreement are central features in a healthy democracy,” and that “civility and respect in political discourse, whether in an election year or otherwise, are essential.” One might be moved, perhaps, to see politicians of different parties standing together against the violent bigotry of the Trump campaign. But look closer. To be sure, the civility of this statement is more productive than repressive, more interested in prompting speech than foreclosing it. Yet while the centers valorize the clash of countervailing perspectives, they’re also conspicuously silent regarding how such disagreements might be resolved. Basically, the Centers’ statement suggests, we should all just agree to disagree. 
It’s easy to see the appeal of this agree-to-disagree civility, because the tolerance it calls for is often taken as a transcendentally good ideal. As the political theorist Wendy Brown reminds us, though, there’s reason to look upon tolerance talk with a more ambivalent eye. For Brown, toleration is less a political ideal than a “practice of governmentality,” a body of commentary and rhetoric that sets the terms for political discussions, and not always in salutary ways. “There are,” she notes, “mobilizations of tolerance that do not simply alleviate but rather circulate racism, homophobia, and ethnic hatreds.” Agree-to-disagree civility, I argue, circulates and sustains such malign paradigms by neutralizing critique and forestalling social change. This civility robs us of our ability to say “x is wrong”: Its principles make racism, homophobia, misogyny, and the like perspectives to be respected, not paradigms to be defeated. Endlessly tolerating divergent outlooks on social inequities is categorically different than working to discern and pursue the most ethical and efficacious modes of redress. In short, this civility allows nothing to happen. As I’ll explain, agree-to-disagree civility is promoted by a wide range of civic organizations beyond the Presidential Centers, and it is expressed in two recent books: Robert Danisch and William Keith’s Radically Civil: Saving Our Democracy One Conversation at a Time, and Alexandra Hudson’s The Soul of Civility: Timeless Principles to Heal Our Country and Ourselves. But while agree-to-disagree civility is increasingly prominent, it’s not entirely new; it was likewise at work in Douglass’s time, alongside the civility of politeness, and an object of analysis in his oratory, which David Blight has recently collected in Speeches & Writings. In these speeches, Douglass emerges as an especially savvy theorist of civility’s limits—and its alternatives. In order to challenge this new form of civility (“agree to disagree”) we need to understand what distinguishes it from its predecessor (the civility of “politeness”). The history of such politeness civility is recounted in Alex Zamalin’s Against Civility: The Hidden Racism in Our Obsession with Civility. “The idea of civility,” Zamalin demonstrates, “has been a tool for silencing dissent, repressing political participation, enforcing economic inequality, and justifying violence upon people of color.” But demands for politeness, manners, and adherence to existing norms and values have long been challenged by “civic radicals,” activist intellectuals like Douglass who dispense with civility in the pursuit of justice. That civic radicalism is particularly necessary, Zamalin reminds us, because white Americans have consistently made orderliness and decorum prerequisites for political participation in ways that stifle the voices of nonwhite people. His argument is very well illustrated by the legislative expulsion of Jones and Pearson. But if the politeness civility that Zamalin analyzes continues to play a role in society, a different and particularly insidious form of civility is becoming increasingly prominent in American culture and politics. Insightful as Zamalin’s book is, it doesn’t quite register that civility discourse is currently changing, and that in particular, civility advocates are turning from demanding politeness to calling for toleration and for agreeing to disagree. 
One reason agree-to-disagree civility is on the rise is that over the last decade many organizations dedicated to promoting versions of it have appeared. These civility centers are overseen variously by academics (at, for example, Duke, Virginia Tech, and Oakland University, where I teach), by community leaders (as in The Oshkosh Civility Project), by journalists (as at The Great Lakes Civility Project), and by combinations thereof (like in the Ohio Civility Project). They share a concern with respecting difference and tolerating disagreement. One center indeed distills this civility’s ethos in its aspiration to teach people to “Agree to disagree.” Perhaps the most prominent of these groups is Braver Angels, which draws admiring discussions in both Danisch and Keith’s Radically Civil and Hudson’s The Soul of Civility. Founded in 2016 in response to the political partisanship and animosity occasioned by the campaign and election of Donald Trump, Braver Angels aims “not to change people’s views of issues, but to change their views of each other.” They pursue that end through programming like workshops, debates, and podcasts that encourage “understand[ing] the other side’s point of view, even if we do not agree with it.” This emphasis on understanding over persuasion is applauded by Danisch and Keith, as well as Hudson, who is quite effusive in her praise. She lauds Braver Angels’ promotion of “viewpoint diversity” and holds that “civic associations such as Braver Angels are the lifeblood of American society.” This admiration is—importantly—mutual, for Braver Angels has on multiple occasions featured Hudson as a speaker, and its website offers links to her essays as well as an admiring review of her book. One sees here a symbiotic relation between civility organizations and civility literature. Hudson’s praise makes her readers more likely to buy memberships in Braver Angels, and the organization’s support makes its members more likely to buy her book. This mutually beneficial relationship suggests, if not a fully fledged civility-industrial complex, at least a substantively integrated civility media ecosystem. In this media ecosystem, the problem is not impoliteness, as it was for earlier civility advocates. Both Radically Civil and The Soul of Civility critique the weaponized impositions of politeness that were also the primary target of Zamalin, and before him, Douglass. Rather, the central problem advocates of agree-to-disagree civility set out to solve is something like acrimony—people just not getting along, especially regarding politics. Danisch and Keith are worried about partisan polarization, and they draw on communications studies to develop an “antidote,” their notion of “radical civility”: “engaging respectfully with others in ways that can create meaningful connections across difference.” For Hudson, the issue is an inescapable human tendency toward “self-love” that “divides us,” often politically. She finds a remedy affirmed across a range of literary and historical texts: civility, which she understands as a regard for the dignity and value of others and sees as yielding the “cultural toleration of diverse views.” If the problem is indeed people not getting along, then “agreeing to disagree” makes sense. But is that really America’s problem? 
Describing our difficulties in primly nonpartisan terms like polarization and selfishness obscures how the Trump-era Right has played an outsize role in generating political conflict, through, for instance, legislative obstruction, election denialism, and insurrection. Agree-to-disagree civility suffers from a certain content agnosticism; it tends to ask for the toleration of disagreement without considering what the disagreement is about, or the relative merits of the arguments. As Danisch and Keith put it, “A commitment to civility teaches us that rightness and wrongness are less important than how we treat others.” Yet this relative indifference to the substance of debates carries liabilities. For instance, Braver Angels offers the content-agnostic assertion that in their programs, “neither side is teaching the other or giving feedback on how to think or say things differently.” But is it right, say, to ask a Black American not to be “giving feedback” on how a white supremacist should think differently? Agreeing to disagree, in such a case, would require tolerating the intolerable. Such invocations of agree-to-disagree civility beg the question of when to set aside toleration and embrace persuasion, protest, and praxis. Radically Civil and The Soul of Civility each briefly grapple with that question, in ways that suggest how agreeing to disagree conflicts with social change. Danisch and Keith acknowledge that persuasion can have a role in disagreement, but their radical civility entails far more toleration and waiting than persuasion and action. Especially in “hard cases” involving racism and misogyny, they explain, it’s best to focus on relationship building in an “outcome-indifferent way,” even though doing so is to “take a risk” that will pay off slowly, if at all. Hudson’s approach might seem more open to setting aside toleration for protest; her book features a chapter on “Civil Disobedience,” which focuses on abolitionism, Gandhian satyagraha, and US civil rights struggles and argues that inequality and discrimination demand protest. But Hudson is more interested in spelling out a “litmus test” for acceptable forms of protest than in explaining what should prompt a move from toleration to praxis in the first place. The chapter ends up most concerned with containing, not fostering change-producing protests—making sure they’re sufficiently civil and respectful. To underscore her point, Hudson invokes Frederick Douglass, asserting that if he and other abolitionists “could be civil while criticizing slaveholders—people who owned other persons—we can be civil in disagreements in our modern political realm.” This is not a particularly convincing characterization of Douglass, whose 1845 autobiography recounts physically fighting the slaveholder Edward Covey and who in an 1848 editorial called on abolitionists to speak “words of burning truth.” But if Douglass was happy to cast aside civil decorum, civility was nonetheless often on his mind, as an object of critical analysis. His speeches offer a still-timely interrogation of agree-to-disagree civility. In the decades following the Civil War, there arose a version of agree-to-disagree civility similar to that promulgated today by the bipartisan coalition of Presidential Centers and organizations like Braver Angels. Then, many white Americans prioritized national reconciliation and sought to accommodate the divergent attitudes toward slavery that animated the conflict. 
Douglass responded in an 1878 speech given in New York City reflecting on the Civil War’s legacy. He warns, “We must not be asked to put no difference between those who fought for the Union and those who fought against it.” He insists—in a line that gives the speech its title in the Blight collection—“There was a right side and a wrong side in the late war.” For Douglass, agreeing to disagree was untenable because such civility would legitimate the ideas of those partisans of the South who wanted to continue the Confederate project by other means. For us, Douglass’s insistence on moral and intellectual clarity regarding slavery is a reminder that some issues require no ongoing debate. It should be no more possible to agree to disagree about, say, the need to fight climate change or the importance of sexual and racial equality than about slavery. In Douglass’s oratory, though, we find not just critiques of various forms of civility but also a deeper analysis of the ideal of an insistently tolerant political culture. Douglass recognizes that an investment in civility amounts to what we’d now call, following the cultural critic Lauren Berlant, a case of cruel optimism. “A relationship of cruel optimism exists,” Berlant writes, “when something you desire is actually an obstacle to your flourishing.” Their touchstone example is the fantasy of the good life: of upward mobility, political fairness, and domestic fulfillment, all increasingly hard to come by in recent decades. Such a fantasy is cruelly optimistic, for Berlant, because it is a yearning for a future unlikely to arrive, and because an overriding focus on such a future prevents people from imagining other ways to thrive. Douglass anticipates some of Berlant’s thinking about this self-defeating kind of desire. He suggests that the civility many hold up as a remedy to a contentious public sphere is in fact an obstacle to an improved state of affairs. Douglass makes this point in the “West India Emancipation” speech, when he excoriates not affability-seeking slaveholders but rather a cohort of ostensible allies of abolitionism, who see civility as necessary to the pursuit of freedom. “Those who profess to favor freedom and yet deprecate agitation,” he explains, “are men who want crops without plowing up the ground.” Douglass here likens the civility advocate wary of “agitation” to a farmer who, for some reason, is unwilling to plow his field. Either way, the wariness of agitation calls to mind the devotee of political politeness as much as the advocate of agreeing to disagree. The suggestion is that their desire to suppress agitation stands as an obstacle to the flourishing of the freedom they profess to desire, much as a farmer’s desire to not plow a field would prevent a bountiful harvest. In this way, Douglass’s unusual comparison indicates how proponents of civility are stuck in a relation of cruel optimism. Seeing civility as a form of cruel optimism allows us to better grasp its appeal. Some speakers invoke the idea of civility cynically, solely to silence opponents or to make space for otherwise untenable positions. But others might well be drawn to notions of civil discourse out of a sincere, if misguided, belief that it offers a path to a better future. In a contentious world, it can feel good to stand for civility. Therefore, any effort to displace civility as an ideal needs to offer a vision of change that is no less affectively satisfying. 
Fortunately, by reading across Douglass’s oratory, we can find an alternative to civility that does so. One might expect Douglass’s critical analysis of civility to yield a defense of incivility, as it has for some contemporary writers. But, as the philosopher Olúfẹ́mi Táíwò has argued, there are good reasons we should not simply embrace incivility as a counterideal. He observes that incivility might not be politically effective and that, moreover, “opposing civility because elites have abused it betrays the sort of politics that cannot find any orientation to the world (or even to itself) except in relation to today’s oppressor of choice.” These comments show how embracing incivility ends up conceding too much to the advocates of civility: Such a focus keeps attention on the form of discussion rather than its context, content, and outcomes. Douglass flips the script, letting the topic, occasion, and goal of a speech determine his mode of address. Throughout his vast body of speeches, his savvy oratorical adaptability is unmistakable. His was a practice of rhetorical pragmatism: the objects were the freedom and equality of racial justice, and he would adopt whatever manner of speaking, civil or uncivil, would most likely persuade his audience to pursue those ends. It’s this orientation that emotionally energizes Douglass, and his audience, in ways that lend his rhetorical pragmatism enough affective appeal to be a viable and preferable alternative to cruelly optimistic civilities. Douglass invites us to feel passionate about a possible-but-not-assured future of freedom and equality, not the means of getting there. There is an optimism at the heart of his rhetorical pragmatism, but it’s unlikely to turn cruel—to calcify into an obstacle—because Douglass’s practice is so variable, constantly adjusting to most effectively pursue flourishing. When necessary, Douglass rejected civility, like in his 1852 address “What to the Slave Is the Fourth of July?” He gave that speech at a meeting of the Rochester Ladies Anti-Slavery Society, but, as Blight explains, he saw himself as also addressing a broader national audience, which would take in his speech later, in print. Douglass aimed to use the occasion of a former slave reflecting on the nation’s anniversary to underscore the prevailing gap between American ideals of freedom and the realities of slavery and to snap his audience out of its complacent hypocrisy. To achieve that end, he eschewed civil norms. “At a time like this, scorching irony, not convincing argument, is needed,” he declared. “O! had I the ability, and could reach the nation’s ear, I would, today, pour out a fiery stream of biting ridicule, blasting reproach, withering sarcasm, and stern rebuke.” Such fiery rhetoric is purposeful, Douglass explains, for only through such language can “the feeling of the nation […] be quickened” in a way that would silence slavery’s apologists, bring people to the antislavery cause, and lead the cause’s partisans to intensify their activism. But Douglass could strike a civil tone if the situation required it. When in 1855 he addressed fellow abolitionists on “The Anti-Slavery Movement,” he announced, “I wish to speak of that movement, to-night, more as the calm observer, than as the ardent and personally interested advocate.” This approach reflected his immediate goals. 
At a moment when there were many “sects and parties” within abolitionism, Douglass wanted to persuade his audience to favor the Liberty Party, which insisted on “no slavery for man under the whole heavens,” over other, less radical organizations, which he saw as merely trying to contain slavery. Faced with the task of persuading the already engaged, he opted for a measured style of address. This rhetorical pragmatism offers a thoroughgoing rejection of civility discourse. For advocates of civility, attaining civil speech—polite talk, agreeing to disagree—is an end in itself. By contrast, Douglass refuses such civility for civility’s sake. His objective is not the achievement of some mode of address; his goal is justice. Douglass embodies the sort of “constructive political culture” Táíwò has recently called for. Táíwò holds that such a culture, focused “on outcome over process,” will mount the most effective challenge to racism, capitalism, and global inequality. Whereas civility would end social change, social change is the end toward which Douglass would push us. Source of the article

The problem of mindfulness

Mindfulness promotes itself as value-neutral but it is loaded with (troubling) assumptions about the self and the cosmos. Three years ago, when I was studying for a Masters in Philosophy at the University of Cambridge, mindfulness was very much in the air. The Department of Psychiatry had launched a large-scale study on the effects of mindfulness in collaboration with the university’s counselling service. Everyone I knew seemed to be involved in some way: either they were attending regular mindfulness classes and dutifully filling out surveys or, like me, they were part of a control group who didn’t attend classes, but found themselves caught up in the craze even so. We gathered in strangers’ houses to meditate at odd hours, and avidly discussed our meditative experiences. It was a strange time. Raised as a Buddhist in New Zealand and Sri Lanka, I have a long history with meditation – although, like many ‘cultural Catholics’, my involvement was often superficial. I was crushingly bored whenever my parents dragged me to the temple as a child. At university, however, I turned to psychotherapy to cope with the stress of the academic environment. Unsurprisingly, I found myself drawn to schools or approaches marked by the influence of Buddhist philosophy and meditation, one of which was mindfulness. Over the years, before and during the Cambridge trial, therapists have taught me an arsenal of mindfulness techniques. I have been instructed to observe my breath, to scan my body and note the range of its sensations, and to observe the play of thoughts and emotions in my mind. This last exercise often involves visual imagery, where a person is asked to consider thoughts and feelings in terms of clouds in the sky or leaves drifting in a river. A popular activity (though I’ve never tried it myself) even involves eating a raisin mindfully, where you carefully observe the sensory experience from start to finish, including changes in texture and the different tastes and smells. At the end of the Cambridge study, I found myself to be calmer, more relaxed and better able to step away from any overwhelming feelings. My experience was mirrored in the research findings, which concluded that regular mindfulness meditation reduces stress levels and builds resilience. Yet I’d also become troubled by a cluster of feelings that I couldn’t quite identify. It was as if I could no longer make sense of my emotions and thoughts. Did I think the essay I’d just written was bad because the argument didn’t quite work, or was I simply anxious about the looming deadline? Why did I feel so inadequate? Was it imposter syndrome, depression or was I just not a good fit for this kind of research? I couldn’t tell whether I had particular thoughts and feelings simply because I was stressed and inclined to give in to melodramatic thoughts, or because there was a good reason to think and feel those things. Something about the mindfulness practice I’d cultivated, and the way it encouraged me to engage with my emotions, made me feel increasingly estranged from myself and my life. In the intervening years, I’ve obsessed over this experience – to the point that I left a PhD in an entirely different area of philosophy and put myself through the gruelling process of reapplying for graduate programmes, just so I could understand what had happened. I began following a thread from ancient Buddhist texts to more recent books on meditation to see how ideas have migrated to the contemporary mindfulness movement. 
What I’ve uncovered has disturbing implications for how mindfulness encourages us to relate to our thoughts, emotions and very sense of self. Where once Europeans and North Americans might have turned to religion or philosophy to understand themselves, increasingly they are embracing psychotherapy and its cousins. The mindfulness movement is a prominent example of this shift in cultural habits of self-reflection and interrogation. Instead of engaging in deliberation about oneself, what the arts of mindfulness have in common is a certain mode of attending to present events – often described as a ‘nonjudgmental awareness of the present moment’. Practitioners are discouraged from engaging with their experiences in a critical or evaluative manner, and often they’re explicitly instructed to disregard the content of their own thoughts. When eating the raisin, for example, the focus is on the process of consuming it, rather than reflecting on whether you like raisins or recalling the little red boxes of them you had in your school lunches, and so on. Similarly, when focusing on your breath or scanning your body, you should concentrate on the activity, rather than following the train of your thoughts or giving in to feelings of boredom and frustration. The goal is not to end up thinking or feeling nothing, but rather to note whatever arises, and to let it pass with the same lightness. One reason that mindfulness finds such an eager audience is that it garbs itself in a mantle of value-neutrality. In his book Wherever You Go, There You Are (1994), Jon Kabat-Zinn, a founding father of the contemporary mindfulness movement, claims that mindfulness ‘will not conflict with any beliefs … – religious or for that matter scientific – nor is it trying to sell you anything, especially not a belief system or ideology’. As well as relieving stress, Kabat-Zinn and his followers claim that mindfulness practices can alleviate physical pain, treat mental illness, boost productivity and creativity, and help us understand our ‘true’ selves. Mindfulness has become something of a one-size-fits-all response for a host of modern ills – something ideologically innocent that fits easily into anyone’s life, regardless of background, beliefs or values. Yet mindfulness is not without its critics. The way in which it relates to Buddhism, particularly its meditation practices, is an ongoing area of controversy. Buddhist scholars have accused the contemporary mindfulness movement of everything from misrepresenting Buddhism to cultural appropriation. Kabat-Zinn has muddied the waters further by claiming that mindfulness demonstrates the truth of key Buddhist doctrines. But critics say that the nonjudgmental aspects of mindfulness are in fact at odds with Buddhist meditation, in which individuals are instructed to actively evaluate and engage with their experiences in light of Buddhist doctrine. Others point out that the goals of psychotherapy and mindfulness do not match up with core Buddhist tenets: while psychotherapy might attempt to reduce suffering, for example, Buddhism takes it to be so deeply entrenched that one should aim to escape the miserable cycle of rebirth altogether. A third line of attack can be summed up in the epithet ‘McMindfulness’. 
Critics such as the author David Forbes and the management professor Ronald Purser argue that, as mindfulness has moved from therapy to the mainstream, commodification and marketing have produced watered-down, corrupted versions – available via apps such as Headspace and Calm, and taught as courses in schools, universities and offices. My own gripes with mindfulness are of a different, though related, order. In claiming to offer a multipurpose, multi-user remedy for all occasions, mindfulness oversimplifies the difficult business of understanding oneself. It fits oh-so-neatly into a culture of techno-fixes, easy answers and self-hacks, where we can all just tinker with the contents of our heads to solve problems, instead of probing why we’re so dissatisfied with our lives in the first place. As I found with my own experience, though, it’s not enough to simply watch one’s thoughts and feelings. To understand why mindfulness is uniquely unsuited for the project of real self-understanding, we need to probe the suppressed assumptions about the self that are embedded in its foundations. Contrary to Kabat-Zinn’s loftier claims to universalism, mindfulness is in fact ‘metaphysically loaded’: it relies on its practitioners signing up to positions they might not readily accept. In particular, mindfulness is grounded in the Buddhist doctrine of anattā, or the ‘no-self’. Anattā is a metaphysical denial of the self, defending the idea that there is nothing like a soul, spirit or any ongoing individual basis for identity. This view denies that each of us is an underlying subject of our own experience. By contrast, Western metaphysics typically holds that – in addition to the existence of any thoughts, emotions and physical sensations – there is some entity to whom all these experiences are happening, and that it makes sense to refer to this entity as ‘I’ or ‘me’. However, according to Buddhist philosophy, there is no ‘self’ or ‘me’ to which such phenomena belong. It’s striking how much shared terrain there is among the strategies that Buddhists use to reveal the ‘truth’ of anattā, and the exercises of mindfulness practitioners. One technique in Buddhism, for example, involves examining thoughts, feelings and physical sensations, and noting that they are impermanent, both individually and collectively. Our thoughts and emotions change rapidly, and physical sensations come and go in response to stimuli. As such (the thinking goes), they cannot be the entity that persists throughout a lifetime – and, whatever the self is, it cannot be as ephemeral and short-lived as these phenomena. Nor can the self be these phenomena collectively as they are all equally impermanent. But then, the Buddhists point out, there is also nothing besides these phenomena that could be the self. Consequently, there is no self. From the realisation of impermanence, you gain the additional insight that these phenomena are impersonal; if there is no such thing as ‘me’, to whom transitory phenomena such as thoughts can be said to belong, then there’s no sense in which these thoughts are ‘mine’. Like their Buddhist predecessors, contemporary mindfulness practitioners stress these qualities of impermanence and impersonality. Exercises repeatedly draw attention to the transitory nature of what is being observed in the present moment. 
Explicit directions (‘see how thoughts seem to simply arise and cease’) and visual imagery (‘think of your thoughts like clouds drifting away in the sky’) reinforce ideas of transience, and encourage us to detach ourselves from getting too caught up in our own experience (‘You are not your thoughts; you are not your pain’ are common mantras). I put my earlier sense of self-estrangement and disorientation down to mindfulness’s close relationship with anattā. With the no-self doctrine, we relinquish not only more familiar understandings of the self, but also the idea that mental phenomena such as thoughts and feelings are our own. In doing so, we make it harder to understand why we think and feel the way we do, and to tell a broader story about ourselves and our lives. The desire for self-understanding tends to be tied up with the belief that there is something to be understood – not necessarily in terms of some metaphysical substrate, but a more commonplace, persisting entity, such as one’s character or personality. We don’t tend to think that thoughts and feelings are disconnected, transitory events that just happen to occur in our minds. Rather, we see them as belonging to us because they are reflective of us in some way. People who worry that they are neurotic, for example, will probably do so based on their repeated feelings of insecurity and anxiety, and their tendency towards nitpicking. They will recognise these feelings as flowing from the fact that they might have a particular personality or character trait. Of course, it’s often pragmatically useful to step away from your own fraught ruminations and emotions. Seeing them as drifting leaves can help us gain a certain distance from the heat of our feelings, so as to discern patterns and identify triggers. But after a certain point, mindfulness doesn’t allow you to take responsibility for and analyse such feelings. It’s not much help in sifting through competing explanations for why you might be thinking or feeling a certain way. Nor can it clarify what these thoughts and feelings might reveal about your character. Mindfulness, grounded in anattā, can offer only the platitude: ‘I am not my feelings.’ Its conceptual toolbox doesn’t allow for more confronting statements, such as ‘I am feeling insecure,’ ‘These are my anxious feelings,’ or even ‘I might be a neurotic person.’ Without some ownership of one’s feelings and thoughts, it is difficult to take responsibility for them. The relationship between individuals and their mental phenomena is a weighty one, encompassing questions of personal responsibility and history. These matters shouldn’t be shunted so easily to one side. As well as severing the relationship between you and your thoughts and feelings, mindfulness makes self-understanding difficult in another way. By relinquishing the self, we divorce it from its environment and therefore its particular explanatory context. As I write this, I’ve spent the past month being fairly miserable. If I were being mindful, I would note that there were emotions of sadness and helplessness as well as anxious thoughts. While mindfulness might indirectly help me glean something about the recurring content of my thoughts, without some idea of a self, separate from but embedded in a social context, I couldn’t gain much further insight. 
Trails of thought and feeling, on their own, give us no way of telling whether we’re reacting disproportionately to some small event in our lives, or, as I was, responding appropriately to recent tragic events. To look for richer explanations about why you think and feel the way you do, you need to see yourself as a distinct individual, operating within a certain context. You need to have some account of the self, as this demarcates what is a response to your context, and what flows from yourself. I know I have a propensity towards neurotic worrying and overthinking. Thinking of myself as an individual in a particular context is what allows me to identify whether the source of these worries stems from my internal character traits or if I am simply responding to an external situation. Often the answer is a mixture of both, but even this ambiguity requires a careful scrutiny, not only of thoughts and feelings but the specific context in which they arose. The contrasting tendency in mindfulness to bracket context not only cramps self-understanding. It also renders our mental challenges dangerously apolitical. In spite of a growing literature probing the root causes of mental-health issues, policymakers tend to rely on low-cost, supposedly all-encompassing solutions for a broad base of clients. The focus tends to be solely on the contents of an individual’s mind and the alleviation of their distress, rather than on interrogating the deeper socioeconomic and political conditions that give rise to the distress in the first place. Older people tend to suffer high rates of depression, for example, but that’s usually addressed via pharmaceutical or therapeutic means – instead of considering, say, social isolation or financial pressures. Mindfulness follows the trend for simplicity and individuation. Its embedded assumptions about the self make it particularly prone to neglecting broader considerations, since they allow for no notion of individuals as enmeshed in and affected by society at large. I don’t mean to suggest that everyone who does mindfulness will feel estranged from their thoughts the way I did, nor that it will inevitably restrict their capacity to understand themselves. It can be a useful tool in helping us gain some distance from the tumult of our inner experience. The problem is the current tendency to present mindfulness as a wholesale remedy, a panacea for all manner of modern ills. I still dabble in mindfulness, but these days I tend to draw on it sparingly. I might do a mindfulness meditation when I’ve had a difficult day at work, or if I’m having trouble sleeping, rather than keeping up a regular practice. With its promises of assisting everyone with anything and everything, the mistake of the mindfulness movement is to present its impersonal mode of awareness as a superior or universally useful one. Its roots in the Buddhist doctrine of anattā mean that it sidelines a certain kind of deep, deliberative reflection that’s required for unpicking which of our thoughts and emotions are reflective of ourselves, which are responses to the environment, and – the most difficult question of all – what we should be doing about it. Source of the article

‘You are constantly told you are evil’: inside the lives of diagnosed narcissists

Few psychiatric conditions are as stigmatised or as misunderstood as narcissistic personality disorder. Here’s how it can damage careers and relationships – even before prejudice takes its toll. There are times when Jay Spring believes he is “the greatest person on planet Earth”. The 22-year-old from Los Angeles is a diagnosed narcissist, and in his most grandiose moments, “it can get really delusional”, he says. “You are on cloud nine and you’re like, ‘Everyone’s going to know that I’m better than them … I’ll do great things for the world’.” For Spring, these periods of self-aggrandisement are generally followed by a “crash”, when he feels emotional and embarrassed by his behaviour, and is particularly vulnerable to criticism from others. He came to suspect that he may have narcissistic personality disorder (NPD) after researching his symptoms online – and was eventually diagnosed by a professional. But he doesn’t think he would have accepted the diagnosis had he not already come to the conclusion on his own. “If you try to tell somebody that they have this disorder, they’ll probably deny it,” he says – especially if they experience feelings of superiority, as he does. “They’re in a delusional world that they made for themselves. And that world is like, I’m the greatest and nobody can question me.” Though people have been labelled as narcissists for more than a century, it’s not always clear what is meant by the term. “Everyone calls everybody a narcissist,” says W Keith Campbell, psychology professor at the University of Georgia and a narcissism expert. The word is “used more than it should be” – but when it comes to a formal diagnosis, he believes many people hide it, as there is so much stigma around the disorder. A narcissist will tend to have “an inflated view of oneself”, “a lack of empathy”, and “a strategy of using people to bolster one’s self-esteem or social status through things like seeking admiration, displaying material goods, seeking power,” says Campbell. Those with NPD may be “extremely narcissistic”, to the point that “they’re not able to hold down stable relationships, it damages their jobs”, and they have a “distorted view of reality”, he says. Though up to 75% of people diagnosed with narcissistic personality disorder are men, research from the University of London published last year suggests this figure does not mean there are fewer narcissistic women, but that female narcissism more often presents in the covert form (also defined as vulnerable narcissism), which is less commonly diagnosed. “Men’s narcissism tends to be a bit more accepted, just kind of like everything in society,” says Atlanta-based Kaelah Oberdorf, 23, who posts about her NPD and borderline personality disorder (BPD) diagnoses on TikTok. It is not uncommon to see the two disorders co-occur. “I really struggle with handling criticism and rejection,” says Oberdorf, “because if I hear that the problem is me, I either go into defence mode or I completely shut down.” Despite having this response – which is sometimes referred to as “narcissistic injury” – she has been trying to overcome it and take advice from her loved ones, as she doesn’t want to slip into the harmful behaviour of her past. “I was very emotionally abusive to my partners as a teenager,” she says. 
Through dialectical behavioural therapy, she has been able to mitigate her NPD symptoms, and she says she and her current boyfriend “have a dynamic where I told him, ‘If I say something messed up, if I say something manipulative, call it out right then and there’.” Oberdorf grew up primarily in the care of her father and says she lacked positive role models as a child. “I’ve been learning all this time what is and is not appropriate to say during a fight because I never had that growing up,” she says. “Nothing was off-limits when my family members were insulting me when I was growing up.” Personality disorders tend to be associated with difficulties as a child. “There is a genetic component,” says Tennyson Lee, an NHS consultant psychiatrist who works at the DeanCross personality disorder service in London. But, when someone develops narcissistic traits, it is often “linked to that individual’s particular early environment”. Those traits were “their strategy in some ways to survive at a very early age”, he adds, when they may have been neglected, or only shown love that was conditional on meeting certain expectations. They then “continue to use those same mechanisms as adults”. Like several of the NPD-diagnosed people I speak to, John (not his real name) thinks his parents “may be narcissists themselves”. The 38-year-old from Leeds says when he was a child, “everything was all about them and their work and their social life. So it was like, stay out of our way.” When their focus was on him, it came in the form of “a great amount of pressure” to achieve good grades and career success, he says, which made him feel that if he didn’t meet their standards, he wasn’t “good enough”. When he became an adult, none of his relationships ever worked out. “I’ve never cared about anyone really,” he says. “So I’ve never taken relationships seriously.” He didn’t think he was capable of loving someone, until he met his current partner of three years, who is diagnosed with BPD, so, like him, struggles with emotional regulation. She is “really understanding of the stuff that goes on in my head”, he says – it was actually she who first suspected he might have NPD. After a visit to his GP, John was referred to a clinical psychologist for an assessment and was told his diagnosis. He has been referred for talking therapy on the NHS (a long period of therapy is the only treatment that has been shown to help NPD patients, says Lee), but has been on the waiting list for a year and a half: “They said it is probably going to be maybe February or March next year.” John has only told a handful of people about his NPD diagnosis, because “there’s a big stigma that all narcissists are abusers”, but, privately, he has accepted it. “It helps me to understand myself better, which is always a good thing,” he says. All of the people I speak to have accepted their narcissism and are seeking help for it – hence being willing to talk about it – which is probably not representative of all people with the disorder. But the existence of NPD content creators such as Oberdorf and Lee Hammock, and the growth of online support communities, suggest that more narcissists are openly acknowledging the issues they face – and the ones they may be causing for others. “Seeing that you’re not alone in what you’re struggling with, being able to talk to other people who relate to you and maybe hearing coping mechanisms” are reasons why reddit user Phteven_j (who would like to remain anonymous) started joining in conversations about NPD online. 
Now a moderator of the r/NPD subreddit, the 37-year-old software engineer thinks he and his co-moderators are “pretty good about not encouraging disordered behaviour” and ensuring “it’s not a breeding ground for any sort of negative or disorder behaviour and more of a place where you can try to improve”. Although, in volunteering as a moderator, “I’d be lying if I said that I wasn’t seeking out some kind of position of authority” – which arguably stems from NPD symptoms – Phteven_j believes the subreddit is largely a force for good. However, the slew of reddit users wanting to complain about narcissists (and sometimes even the existence of a subreddit that acts as a support group for them) “is constant” he says. Across the internet, narcissists are often “painted as almost like supervillains” and the stories shared are often from the perspective of those who have been abused by someone they believe to be a narcissist. “The advice is, typically, the same: run away, you’ve got to leave them, don’t ever talk to them again,” the moderator says. Oberdorf is also critical of the way narcissism is discussed online. Social media users have accused her of “bragging” about her personality disorders because she lists them on her profiles and discusses them in her content. “I’m not bragging about the fact that I have debilitating mental illness,” she says. “I am proud of the fact that I have survived with mental illnesses that statistically could have taken my life.” She is keen to open up more conversations about NPD – “stigma is the number one worst thing for any illness ever”. In this age of selfies and thirst traps, it can feel that narcissism must be on the rise. But just because there are now more outlets for narcissistic behaviour, prevalence of the clinical condition doesn’t seem to be increasing, says Lee. It’s worth noting, Campbell adds, that “social media is making people feel worse about themselves”, and, for most people, “it doesn’t make them feel positive about themselves or think they’re awesome”. The way NPD diagnoses are made is “suboptimal”, however, according to Lee. Most of the research on NPD has been done in the US, where a paper published by the American Psychiatric Association estimates the disorder is found in 1%–2% of the population. “If you make the diagnosis, then it’s made on the [American Psychiatric Association’s] Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) guidelines, where it only captures an aspect of narcissism, which is the more overt, sort of aggressive type of narcissism, but it doesn’t capture the more covert or sensitive form,” says Lee. There are two most commonly talked about types of narcissism. The first is the “grandiose” or “overt” form, which manifests in stereotypically narcissistic behaviours such as aggression and attention-seeking. The second is a “vulnerable” or “covert” narcissist, which is “the kind of individual the clinician might miss, because they often come across as far more contained, even self-effacing at times”, Lee says. Grandiose and vulnerable narcissism “are different sides of the same coin”, he says. Both types will have an inflated sense of their own importance, but for a covert narcissist that may mean a hypersensitivity to criticism or a victim mentality rather than a desire to put themselves in the spotlight. 
Campbell points out that there is a risk of narcissists “using social media to maintain their narcissism”, as it can be a tool “to get favourable attention or positive feedback”, but does see the benefit of positive role models and support for people with NPD. When a celebrity, such as the American comedian Nick Cannon in 2024, “comes out with NPD and says it’s causing me problems, that’s a great message,” says Campbell, “that’s a great message”. Lee, too, is wary of social media being used to educate or as a support system for people with NPD, “because there’s so much misinformation”. But he believes “more structured” information is missing, particularly in the NHS. “The service for narcissistic individuals is very uneven throughout the UK” and “many clinicians don’t make the diagnosis of narcissism”, Lee says, partly because they aren’t primed to notice it, and partly out of reluctance to make a diagnosis that is perceived so negatively. The symptoms of NPD also mean that “if a narcissist is successfully leading their life, even though they might have quite a strong level of narcissism, they’re not going to seek treatment”. When a patient with NPD does seek help, it is often because they have suffered negative consequences of their narcissistic behaviour, or a partner or family member has encouraged them. Spring wishes people would reframe the way they think about narcissists. “A narcissist is attempting to believe that they are the best because that is the coping mechanism for feeling like: ‘I am the worst,’” he says. “There’s something missing with me and I need to be in this fantasy world where I’m the hero because maybe in my childhood I was the villain and now I need to overcompensate for that.” NPD is clearly a condition that requires psychological help, but Oberdorf can understand why narcissists don’t seek it: “If you have a problem, and you are constantly being told that people with your type of specific problem are unworthy, or they’re evil, or they’re horrible people because of this problem, why would you want to admit that you have that problem?” Source of the article

GOATReads: Psychology

Brain man

How can you have a picture of the world when your brain is locked up in your skull? Neuroscientist Dale Purves has clues. Picture someone washing their hands. The water running down the drain is a deep red. How you interpret this scene depends on its setting, and your history. If the person is in a gas station bathroom, and you just saw the latest true-crime series, these are the ablutions of a serial killer. If the person is at a kitchen sink, then perhaps they cut themselves while preparing a meal. If the person is in an art studio, you might find resonance with the struggle to get paint off your hands. If you are naive to crime story tropes, cooking or painting, you would have a different interpretation. If you are present, watching someone wash deep red off their hands into a sink, your response depends on even more variables. How we act in the world is also specific to our species; we all live in an ‘umwelt’, or self-centred world, in the words of the philosopher-biologist Jakob von Uexküll (1864-1944). It’s not as simple as just taking in all the sensory information and then making a decision. First, our particular eyes, ears, nose, tongue and skin already filter what we can see, hear, smell, taste and feel. We don’t take in everything. We don’t see ultraviolet light like a bird, we don’t hear infrasound like elephants and baleen whales do. Second, the size and shape of our bodies determine what possible actions we can take. Parkour athletes – those who run, vault, climb and jump in complex urban environments – are remarkable in their skills and daring, but sustain injuries that a cat doing the exact same thing would not. Every animal comes with a unique bag of tricks to exploit their environment; these tricks are also limitations under different conditions. Third, the world, our environment, changes. Seasons change, and what animals can eat therefore also changes. If it’s the rainy season, grass will be abundant. The amount of grass determines who is around to eat it and therefore who is around to eat the grass-eaters. Ultimately, the challenge for each of us animals is how to act in this unstable world that we do not fully apprehend with our senses and our body’s limited degrees of freedom. There is a fourth constraint, one that isn’t typically recognised. Most of the time, our intuition tells us that what we are seeing (or hearing or feeling) is an accurate representation of what is out there, and that anyone else would see (or hear or feel) it the same way. But we all know that’s not true and yet are continually surprised by it. It is even more fundamental than that: you know that seemingly basic sensory information that we are able to take in with our eyes and ears? It’s inaccurate. How we perceive elementary colours, ‘red’ for example, always depends on the amount of light, surrounding colours and other factors. In low lighting, the deep red washing down the sink might appear black. A yellow sink will make it look more orange; a blue sink may make it look violet. If, instead of through human eyeballs, we measured the wavelengths of light coming off the scene with a device called a spectrophotometer, then the wavelength of the light reflected off that ‘blood’ would be the same, no matter the surrounding colours. But our eyes don’t see the world as it really is because our eyes don’t measure wavelengths like a spectrophotometer. 
Dale Purves, the George B Geller Professor of Neurobiology (Emeritus) at Duke University in North Carolina, thinks that, because we can never really see the world accurately, the brain’s primary purpose is to help us make associations to guide our behaviour in a way that, literally, makes sense. Purves sees ‘making sense’ as an active process where the brain produces inferences, of which we are not consciously aware, based on past experiences, to interpret and construct a coherent picture of our surroundings. Our brains use learned patterns and expectations to compensate for our imperfect senses and finite experiences, to give us the best understanding of the world it can. Purves is the scientist’s scientist. He pursues the questions he’s genuinely interested in and does so with original approaches and ideas. Over the years, he’s changed the subject of his research multiple times, all with the intent of understanding how the brain works, tackling subjects that were new and unfamiliar to him, as opposed to chasing trends and techniques or sticking to a tried-and-true research path. His career is an instance of the claim Viktor Frankl makes in Man’s Search for Meaning (1946): ‘For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side-effect of one’s personal dedication to a cause greater than oneself …’ If success is measured by accolades, then it did indeed follow Purves’s pursuits. Among a laundry list of awards and honours, he’s one of the few scientists elected both to the National Academy of Sciences (1989) and to what is now the National Academy of Medicine (1996). Election to either is considered to be among the highest honours that can be bestowed on a scientist in the United States. Nevertheless, if the name ‘Dale Purves’ sounds familiar to you, it is likely because you took a neuroscience course in college, for which the textbook was Neuroscience by Purves et al (one of the most popular and now in its 7th edition). Indeed, this is the text I used when I taught an introductory neuroscience course at Princeton. Oddly enough, Purves’s passion for neuroscience took time and experience to materialise. As an undergraduate at Yale, he struggled initially but found a major – philosophy – intending to pursue medicine. Purves developed an interest in science but didn’t know anything about being a scientist, and medicine seemed close enough. In 1960, he entered Harvard Medical School thinking he would become a psychiatrist. In his first year, he took a course on the nervous system, taught by young hotshot neuroscientists, some of whom would go on to be among the greats of the 20th century (and whose work is now textbook material): David Potter, Ed Furshpan, David Hubel and Torsten Wiesel (the latter two won the Nobel Prize in 1981, along with Roger Sperry). Purves finished his medical degree in 1964, but he had grown disillusioned with psychiatry. He’d tried switching to general surgery, but realised he lacked the intensity of interest in surgery required to excel. It was 1965, and the Vietnam War gave him time to think. Purves was drafted but, as a physician, could serve his time as a Peace Corps employee, which he did in Venezuela. There, he said, he came across a book, The Machinery of the Brain (1963) by Dean Wooldridge, that synthesised what he learned years ago in that first-year course on the nervous system. The book was written for the lay reader and shared the current knowledge about the brains of humans and other animals. 
Its particular angle was to compare the brain to the computer technology of the time. The book reignited Purves’s interest in the brain, which was first piqued in his medical school course. When he returned to the US, Purves would begin again as a researcher in neuroscience. I was an undergraduate at the University of Idaho when I first read Purves’s work. It was the early 1990s and I was a philosophy major like Purves but was really interested in neuroscience. So I sought hands-on research experience in a professor’s lab in the biology department. We were studying the immune-system markers of motor neurons in the rat’s spinal cord, the ones that connect to leg muscles causing them to contract. Mark DeSantis, my advisor, suggested a book by Purves. At the time, there were few courses or serious books on the brain. Purves’s Body and Brain: A Trophic Theory of Neural Connections (1988) was perfect, just what I needed. Its central thesis is that the survival of neurons making connections, and the number of these connections, is regulated by the targets of those neuronal connections. In essence, he was telling us that, unlike the static circuit board of a computer, which is carefully designed and built according to a plan, the circuits of the nervous system are constructed on the fly in accordance with signals they receive from the targets of their connections. Those targets could be other neurons, organs or muscles in the body. As a result, as the body changes over the course of development in an individual or in the evolution of species, the neural circuits will adjust accordingly. Why is this important? Purves showed us that the brain is more than just a controller of the body: it is also an organ system that is embedded in a dynamic relationship with the rest of the body, affected by body size, shape and activity. One of Purves’s favourite examples to illustrate what sparked this theory is from the work of one of his scientific heroes, Viktor Hamburger. Hamburger had been studying central nervous system development in the 1930s, first at the University of Chicago, then as a faculty member at Washington University in St Louis. Using chicken embryos and those spinal cord motor neurons, Hamburger showed that there were more neurons in the developing embryo than there would be in the adult chicken. How could that be? Why were there more neurons in an embryo than actually needed and why did some die off? Hamburger’s idea was that the muscles targeted by those neurons supplied a limited amount of trophic, or nutrient, factors. In essence, the target muscles were producing a ‘food’ (we now call it ‘nerve growth factor’) that kept the neuron alive. The size of the target determined how much food was available and therefore how many neuron lives it could sustain. Exploiting the ease with which chick embryos could be manipulated, Hamburger showed this by first amputating one of the wing buds (ie, the nascent wing). When he did so, the final number of motor neurons on the amputated side was lower than typical, with fewer than on the ‘control’ side of the spinal cord. So, the limb bud was important for the survival of neurons. If that was true, then more of this ‘target tissue’ should save more neurons. Was it possible to ‘rescue’ those extra neurons that would normally die off? 
To answer this question, Hamburger surgically attached an extra limb bud on one side of the embryo, thereby artificially creating more target tissue. The result: more motor neurons survived on that side of the spinal cord. In both experiments, the size of the connection target – the body, those limb buds specifically – determined the number of neurons that survived. Purves ran with this idea of a dialogic relationship between body and brain. Around four decades later, in the 1970s and ’80s, Purves was a young faculty colleague of Hamburger, his elder statesman at Washington University. Here, he took Hamburger’s theory about neuron-to-muscle connections and applied it to neuron-to-neuron connections. While he looked at neuronal cell survival as Hamburger had, Purves also investigated the elimination and elaboration of individual connections that neurons make, their synapses. This was a big leap because Purves was now testing whether or not Hamburger’s findings about death and survival of neurons in the chick embryo were peculiar to that species’ neuron-to-muscle relationship. Was the same process apparent in other developing circuits in other animals? And, if so, if a neuron survives, then is the number of connections (those synapses) also subject to competition for trophic factors? Neurons come in all shapes and sizes, with different degrees of complexity. If you’ve seen a typical picture of a single neuron then you know it looks like a tree or bush, with a set of root-like branches on one end and a single, long limb-like branch on the other. The latter can branch as well, depending on circumstances. One of those sets of branches – the dendrites – receives inputs from other neurons, and the other – the axon – sends outputs to other neurons. Together with his graduate student Jeff Lichtman, Purves wanted to know how synapse numbers change with development and across different species of animals. Purves and Lichtman started with a simple neuron-to-neuron connection, where the receiving neuron had zero dendrites and the sending neuron’s axons made synapses directly on the receiving neuron’s cell body. To see this, they would surgically remove a tight group of functionally similar neurons, known as a ‘ganglion’, from different animals. They would then carefully fill a few individual neurons with a special enzyme. When this enzyme is then given a chemical to react with, it produces a colour. This colouring allows the neuron to be visualised under a microscope in its full glory – all its branches can be seen and counted. (Imagine a microscopic glass-blown tentacled creature being filled with ink and then counting its appendages.) The end of each branch represents a synaptic connection. Comparing connections in developing rats versus adults, they found that neurons initially received a few synapses from a number of different neurons. In a sense, the circuits were tangled up in young rats. By the time the rats were adults, each neuron had many synapses but only from one neuron – the circuit was untangled. How did this happen? Akin to the process of eliminating extra neurons based on target size, there was a process of elimination of superfluous connections from some neurons. Then there was an additional process of multiplication (or elaboration) of connections coming from the ‘correct’ (so to speak) neuron. 
In essence, once neurons could find the right partners by getting rid of the less desirable ones, their relationship could blossom in the form of additional synapses. Purves and Lichtman then replicated this basic finding with increasingly complex sets of neurons and in other species. Before we get lost in the weeds, here’s the bottom line: trophic interactions between neurons match the number of neurons to the target size, and these interactions also regulate how many synapses they make. The grander theory is this: each class of cells in a neural pathway is supporting and regulating the connections it receives by trophic interactions with the cells it’s connected to down the line. Thus, a coordinated chain of connectivity extends from neuronal connections with the body’s muscles and organs to connections among neurons within the brain itself. The brain and body constitute a single, coherent and dynamic network; there is no way to separate them. They depend on each other at every level. Some artists go through distinct periods in their careers, while others stick to similar themes and approaches in their work for decades. Scientists are the same. Most stick to trying to answer one particular question, getting deeper and deeper, as they figure out more and more details about their subject. Others find that, at some point, they are satisfied with the answers at hand and move on, finding a new question or challenge. Purves is the latter type of scientist, making multiple radical shifts in his scientific research. His important work supporting trophic theory had an obvious direction for continued investigation: using molecular tools to find new ways to visualise synaptic development. Purves was not interested. His research programme up to this time exploited easily manipulated and unambiguously visualised neural circuits in the peripheral nervous system. The brain itself, where all the important action supposedly is, is a different story; its complexity and density make it impossible to address the same questions – which connections are disappearing or multiplying – with the same degree of clarity and specificity. Neuroscience was changing as well. By the time the late 1980s and ’90s rolled around, the most attention-getting work focused on the brain, particularly in the neocortex – the part that has disproportionately increased in size in primates like us. Many who were interested in how the brain developed were inspired by those Nobel Prize-winners, Hubel and Wiesel, who elegantly demonstrated that the visual part of the neocortex had a critical period of development. At this point, Purves had reached middle age, and he was at an impasse. The answer to the question ‘What to do next?’ was not self-evident. As an academic scientist, one can pretty much study whatever one wants, but it has to be interesting, potentially consequential, and, for Purves especially, it has to be tractable: you should be able to formulate a clear hypothesis that could lead to an unambiguous finding. The answer came in the form of a new collaborator, Anthony-Samuel LaMantia, who joined his lab in 1988 as a postdoctoral fellow after completing his PhD on the development of the neocortex. Together, Purves and LaMantia decided to tackle the question ‘How does the brain grow?’ There are many different kinds of brains, as many as there are animals. 
There is a beauty in all of them, perhaps because they adhere quite nicely to the form-follows-function principle of design. The designer in each case is natural selection’s influence on how a species develops, and thus the form its body and brain take in response to environmental challenges. Brain scientists study the anatomy of these solutions using any number of techniques – tracers, stains, imaging and so on. Each technique is suited to looking at the brain at a particular spatial scale. Consistently, what they reveal is that the brain is beautiful, sometimes stunning. At one of those scales, you can see repeated patterns of neural circuitry, or modules, that look exactly like the spots and stripes we see on the skin of so many animals. For example, depending on what dyes you use to stain it, the visual cortex of primates has a pattern of stripes, with each stripe seemingly dedicated to the visual signals coming from one of our two eyes. Stain it another way, and you’ll see an array of ‘blobs’ that are claimed by some to be dedicated to colour processing. Other animals have different patterns: rats have an array of barrel-shaped modules in their somatosensory (touch) cortex that corresponds to their array of facial whiskers. Dolphins have blobs in their auditory cortex; we don’t know what their function is. Purves wanted to know how these iterated patterns of neural circuitry developed. He started by looking just outside the neocortex, in the olfactory bulb of the mouse. In this structure, mice have a number of modules known as ‘glomeruli’. The olfactory bulbs of mice jut out from the main part of the brain and so are more accessible for experiments. Purves and LaMantia developed a method for exposing the bulbs in a live animal and staining the glomeruli with a dye that would not hurt the mice. They could then see that mice were not born with their full set of glomeruli; over the course of development, new ones were added. This was exciting and surprising because many popular theories at the time argued that brain development is mainly the result of selecting useful circuits from a larger repertoire of possible circuits. Here, they were showing that useful circuits were actually being constructed, not selected. Moreover, if circuits were constructed in this way after the animal is born, then the circuits might be influenced by experience. Were other modules in other species and brain areas added in the same way? In the visual cortex of the macaque monkey (the experimental animal most closely related to humans, and a brain area that is among the most studied), they couldn’t look at module development the way they did in the mouse – observing the same brain structure in the same animal repeatedly over time – but they were able to count the number of blobs in young monkeys versus adult monkeys. Unlike the glomeruli in mice, however, the number of blobs remained constant over time. To Purves, this was not super exciting. He had hoped to gain traction on what might have been a new process of neocortex development in primates, one that he could elaborate into a novel research programme. Nevertheless, he did come to one important conclusion. It seemed that most scientists – indeed, many luminaries of neuroscience – wanted to see brain modules as fundamental features of the neocortex, each serving a particular behavioural or perceptual purpose. For example, one ‘barrel’ in the rat’s touch cortex is there to process the inputs of one whisker on its face.
Purves pointed out that iterated patterns of modules may be found in one species’ brain but absent in a closely related species. He also noted that they don’t seem to be obligatorily linked to function. ‘Blobs’ are there in the human and the monkey visual cortex and are linked to colour-vision processing, but nocturnal primates with poor colour vision still have blobs in their visual cortex. So, the blobs do not seem to enable colour vision. Similarly, chinchillas have a barrel cortex like rats but don’t have the whisker movements of rats. Cats and dogs have whiskers but no related modules in their touch cortex. Thus, it seems that, while the iterated patterns of the brain are beautiful, they are unlike modern architecture in that their beauty is not linked to function. So why then do they form at all? Here, Purves suggested that iterated patterns are the result of synaptic connections finding and relying on each other and then making more of those connections in the pathways that are most active. In other words, the iterated patterns of the brain are epiphenomenal, the byproducts of the rules of neural connections and competing patterns of neural activity. Those activity patterns are generated by sensory inputs coming from the sensory organs – the eyes, ears, nose and skin. So seeing beautiful-looking patterns in the brain does not necessarily mean they were constructed for a particular purpose. I first met Purves in 1993, when I was interviewing for graduate school after he had moved to Duke University. I had already read a lot of his work and was in awe of his contrarian instincts and his pursuit of work that is out of the mainstream yet important. When I entered his office for my interview, I was extremely nervous but managed to ask about the portraits on his office walls. They were portraits of scientists. One was John Newport Langley, a 19th-century British physiologist who made important discoveries about neurotransmitters and who inspired the problems Purves tackled as a new professor. The aforementioned Viktor Hamburger was also there: a major figure in 20th-century embryology and a good friend of Purves, despite the difference in their ages and experience. Another photo was of Stephen Kuffler, perhaps the most beloved figure in neuroscience at the time, who made key discoveries in vision. Kuffler had organised the neuroscience team that taught Purves when he was in medical school, and Purves considers him a mentor who exemplified what to pursue (and what not to pursue) in neuroscience. The final photo was of Bernard Katz, a Nobel laureate who figured out how neurons communicate with muscles. Purves collaborated with Katz in the 1970s and considers him a paragon of scientific excellence. I was admitted to Duke and, a year later, moved to Durham, North Carolina, hoping to study with Purves or LaMantia, who was there too as a new professor. When I arrived at Duke, Purves was about to make a major change, moving away from studying the brain itself entirely. This seemed kind of crazy after so much success with discoveries about the developing nervous system, building an enviable career and becoming a sought-after leader in the field. But Purves’s restless instinct arose again and he switched his focus, this time to the study of perception. He had a hunch that the great advances wowing people – about brain anatomy and the function of the circuits within it – were not going to be enough to explain how the brain actually guides human behaviour.
The origin of the hunch was in philosophy, which Purves had majored in as an undergraduate. The philosopher George Berkeley (1685-1753) had noticed that three-dimensional objects of radically different sizes and distances can project onto the retina (the sensory wall at the back of the eye) as images of exactly the same size, and in only two dimensions – what is known as the inverse optics problem. This is why framing a distant human’s whole body between your two fingers, as if you could crush them, is amusing: it uses forced perspective to imply an impossibility. The implication of the inverse problem is profound. It means that the information about the object (the source) coming into our brain is uncertain, incomplete, partial. As a solution to the inverse problem, the scientist Hermann von Helmholtz (1821-94) proposed that perception relied on learning from experience. We learn about objects through trial and error, and make inferences about any ambiguous image. Thus, since we have no experience with lilliputian human beings, we can infer that the tiny human in the forced-perspective example is actually far away. Purves took the seed of Helmholtz’s idea – that our perception depends on experience – and built an entire research programme around it. Since the mid-1990s, he and his collaborators have systematically analysed a variety of visual illusions in brightness, contrast, motion and geometry. They have shown that our perceptions are experience-based constructions, not accurate reflections of their sources in the real world. The example of ‘red’ from the beginning of this essay is based on his colour work. Purves and his collaborator Beau Lotto would generate two identically coloured ‘target’ squares on a computer screen but give them backgrounds of different colours. The backgrounds would make the two squares look like they were different colours (even though they were actually identical, as measured by a spectrophotometer). Then, participants were asked to adjust the hue, saturation and brightness (the same controls on your phone’s camera app) of the target squares until they looked identical. Each participant’s adjustments were quantified and used as a measure of the difference between perception and reality. Ultimately, Purves’s research led to the conclusion that the brain functions on a wholly empirical basis. We construct our perception of the world through our past experiences in that world. This is a radical departure from the long-prevailing orthodoxy that the brain extracts features from objects and other sensory sources only to recombine them to guide our behaviour. Instead of extracting features and combining them in the brain (red + round = apple), Purves argues that it is our learned associations among an event or feature of the world, the context in which it appeared and the consequences of our subsequent actions that build our umwelt, our self-centred world. The research of Purves and his collaborators showed that our ability to perceive accurately is largely based on past experiences and learned associations. This means that we must learn about the space around us, the objects in it and other aspects of perception; these are not innate but developed through interaction with the environment. This all seems very reasonable. The environment is always in flux, with different challenges at different times for any animal.
You wouldn’t want your brain fine-tuned to an environment in which you no longer live. Equally, it wouldn’t make sense for a species to build each individual’s brain as a tabula rasa, to start from scratch with every generation. Purves’s findings and interpretation lead to a more philosophical puzzle. To what extent is the ‘environment’ that the brain is trying to make sense of actually ‘out there’ beyond our heads? Is there a real reality? Purves has shown that, even if there is a real reality, we don’t perceive much of it… or at least we don’t have a universal way of perceiving it. For example, not all humans see the same colours the same way. There are two reasons for this. One is that colour and its interpretation depend highly on environmental factors. The other is that perception also depends on experience. Experiences depend on your interaction with specific environments. Do you live in the sea, on land, in burrows, in a nest or in a climate-controlled house? Do you have vision, or are you blind? What do your physiology and anatomy allow you to perceive and interact with? What have you seen before? Your perception and interpretation of the world, and indeed those of other animals, depend on the answers to these types of questions. Does experience really determine how we see and act in the world? In her memoir Pilgrim at Tinker Creek (1974), Annie Dillard writes about coming across a book on vision that shows what happens when blind humans of all ages are suddenly given the ability to see. That book was Space and Sight (1960). In it, Marius von Senden describes how patients who were blind at birth because of cataracts saw the world when those cataracts were removed. Were they able to see the world as those of us with vision since birth see it? No, most patients did not. In one example from the book, Dillard recounts: ‘Before the operation a doctor would give a blind patient a cube and a sphere; the patient would tongue it or feel it with his hands, and name it correctly. After the operation the doctor would show the same objects to the patient without letting him touch them; now he had no clue whatsoever what he was seeing.’ There is even an example of a patient finally ‘seeing’ her mother, but at a distance. Because of a lack of experience, she failed to understand the relationship between size and distance (forced perspective) that we learn from experience with sight. When asked how big her mother was, she set her two fingers a few inches apart. These types of experiments (which have been replicated in various ways) show just how important experience and learned associations are to making sense of the world. Today, Purves has enough studies to show an operating principle of the nervous system or – more cautiously, he would say – ‘how it seems to work’. The function of the nervous system is to make, maintain and modify neural associations to guide adaptive behaviours – those that lead to survival and reproduction – in a world that sensory systems cannot accurately capture. It is not a stretch to link Purves’s earliest work on trophic theory, all the way back at the start of his career, to these current ideas. A biological agent must assemble, without any blueprint or instructions, a nervous system that matches the shape and size of a changing body. This nervous system, paired with sensory organs that filter the world in peculiar ways, must somehow process the physical states of the world in order to guide behaviour.
Similar principles – neural activity, changing synaptic connections – that guided development also guide our ongoing perceptions of an ever-changing world. We use our individual experiences to do this guiding. If we happen to perceive or interpret events like other human beings, it is because we have similar bodies and shared similar experiences at some point. In my experience, Purves is a paragon of scientific excellence. Recently, I asked Purves how he thought of the arc of his career, and his view was very different from my perception of it. From my perspective, it seems that Purves’s question is always ‘How is a nervous system built?’ Addressing this question took him to increasingly large scales: from trophic theory and neural connections in the peripheral nervous system, to neural construction of the brain (iterated patterns; growth of the neocortex), to the relationships between brain-area size and perceptual acuity, to the construction of ‘reality’ via experience. I asked him what he thought about this narrative, and he responded: ‘That is one way of framing it, but I don’t really see a narrative arc. As you know, one’s work is often driven by venal/trivial considerations such as what research direction has a better chance of winning grant support or addressing the popular issues of the day. The theme you mention (how nervous systems are built) was not much in my thinking, although in retrospect that narrative seems to fit.’ Neither one of us is wrong. In fact, we are each interpreting the body of work according to our own experiences and how it best suits our needs, just as his research would suggest. Purves’s remarkable research insights are the product of his distinctive approach to science. A popular approach in neuroscience is to identify one problem early in a career and to just keep plugging away, learning more and more details about it. Then, maybe, acquire an exciting new technique – like fMRI in the 1990s or optogenetics in the 2000s – to investigate the same problem in a different way. Or adopt the new technique and then search for a new question that could be answered by that method. Another approach would be to just apply the method, collect some data and only then ask what story can be told with those data. None of these approaches is ‘wrong’, but Purves’s scientific approach stands in stark contrast to them. He first identifies a big, interesting question, one that could possibly have a ‘yes’ or ‘no’ answer, and then finds whatever means are necessary to find the answer. To put it another way, there is a lot of thought behind his work and approach, and a lot of thought about what any findings may mean in the big picture of brain science. Purves is always engaged. Very few scientists produce original, influential work in multiple domains of their field. Dale Purves has achieved major advances in our understanding of brain development, from small circuits to big ones, and from bodily experience to a new way of thinking about how the brain works. Source of the article

GOATReads:History

7 Ways the Printing Press Changed the World

In the 15th century, an innovation enabled people to share knowledge more quickly and widely. Civilization never looked back. Knowledge is power, as the saying goes, and the invention of the mechanical movable type printing press helped disseminate knowledge wider and faster than ever before. German goldsmith Johannes Gutenberg is credited with inventing the printing press around 1436, although he was far from the first to automate the book-printing process. Woodblock printing in China dates back to the 9th century, and Korean bookmakers were printing with movable metal type a century before Gutenberg. But most historians believe Gutenberg’s adaptation, which employed a screw-type wine press to squeeze down evenly on the inked metal type, was the key to unlocking the modern age. With the newfound ability to inexpensively mass-produce books on every imaginable topic, revolutionary ideas and priceless ancient knowledge were placed in the hands of every literate European, whose numbers doubled every century. Here are just some of the ways the printing press helped pull Europe out of the Middle Ages and accelerate human progress.

1. A Global News Network Was Launched

Gutenberg didn’t live to see the immense impact of his invention. His greatest accomplishment was the first print run of the Bible in Latin, which took three years to print around 200 copies, a miraculously speedy achievement in the day of hand-copied manuscripts. But as historian Ada Palmer explains, Gutenberg’s invention wasn’t profitable until there was a distribution network for books. Palmer, a professor of early modern European history at the University of Chicago, compares early printed books like the Gutenberg Bible to how e-books struggled to find a market before Amazon introduced the Kindle. “Congratulations, you’ve printed 200 copies of the Bible; there are about three people in your town who can read the Bible in Latin,” says Palmer. “What are you going to do with the other 197 copies?” Gutenberg died penniless, his presses impounded by his creditors. Other German printers fled for greener pastures, eventually arriving in Venice, which was the central shipping hub of the Mediterranean in the late 15th century. “If you printed 200 copies of a book in Venice, you could sell five to the captain of each ship leaving port,” says Palmer, which created the first mass-distribution mechanism for printed books. The ships left Venice carrying religious texts and literature, but also breaking news from across the known world. Printers in Venice sold four-page news pamphlets to sailors, and when their ships arrived in distant ports, local printers would copy the pamphlets and hand them off to riders who would race them off to dozens of towns. Since literacy rates were still very low in the 1490s, locals would gather at the pub to hear a paid reader recite the latest news, which was everything from bawdy scandals to war reports. “This radically changed the consumption of news,” says Palmer. “It made it normal to go check the news every day.”

2. The Renaissance Kicked Into High Gear

The Italian Renaissance began nearly a century before Gutenberg invented his printing press, when 14th-century political leaders in Italian city-states like Rome and Florence set out to revive the Ancient Roman educational system that had produced giants like Caesar, Cicero and Seneca. One of the chief projects of the early Renaissance was to find long-lost works by figures like Plato and Aristotle and republish them.
Wealthy patrons funded expensive expeditions across the Alps in search of isolated monasteries. Italian emissaries spent years in the Ottoman Empire learning enough Ancient Greek and Arabic to translate and copy rare texts into Latin. The operation to retrieve classic texts was in action long before the printing press, but publishing the texts had been arduously slow and prohibitively expensive for anyone other than the richest of the rich. Palmer says that one hand-copied book in the 14th century cost as much as a house, and libraries cost a small fortune. The largest European library in 1300 was the university library of Paris, which had 300 total manuscripts. By the 1490s, when Venice was the book-printing capital of Europe, a printed copy of a great work by Cicero cost only a month’s salary for a schoolteacher. The printing press didn’t launch the Renaissance, but it vastly accelerated the rediscovery and sharing of knowledge. “Suddenly, what had been a project to educate only the few wealthiest elite in this society could now become a project to put a library in every medium-sized town, and a library in the house of every reasonably wealthy merchant family,” says Palmer.

3. Martin Luther Becomes the First Best-Selling Author

There’s a famous quote attributed to German religious reformer Martin Luther that sums up the role of the printing press in the Protestant Reformation: “Printing is the ultimate gift of God and the greatest one.” Luther wasn’t the first theologian to question the Church, but he was the first to widely publish his message. Other “heretics” saw their movements quickly quashed by Church authorities and the few copies of their writings easily destroyed. But the timing of Luther’s crusade against the selling of indulgences coincided with an explosion of printing presses across Europe. As the legend goes, Luther nailed his “95 Theses” to the church door in Wittenberg on October 31, 1517. Palmer says that broadsheet copies of Luther’s document were being printed in London as quickly as 17 days later. Thanks to the printing press and the timely power of his message, Luther became the world’s first best-selling author. Luther’s translation of the New Testament into German sold 5,000 copies in just two weeks. From 1518 to 1525, Luther’s writings accounted for a third of all books sold in Germany and his German Bible went through more than 430 editions.

4. Printing Powers the Scientific Revolution

The English philosopher Francis Bacon, who’s credited with developing the scientific method, wrote in 1620 that the three inventions that forever changed the world were gunpowder, the nautical compass and the printing press. For millennia, science was a largely solitary pursuit. Great mathematicians and natural philosophers were separated by geography, language and the sloth-like pace of hand-written publishing. Not only were handwritten copies of scientific data expensive and hard to come by, they were also prone to human error. With the newfound ability to publish and share scientific findings and experimental data with a wide audience, science took great leaps forward in the 16th and 17th centuries. When developing his sun-centered model of the solar system in the early 1500s, for example, Polish astronomer Nicolaus Copernicus relied not only on his own heavenly observations, but on printed astronomical tables of planetary movements.
When historian Elizabeth Eisenstein wrote her 1980 book about the impact of the printing press, she said that its biggest gift to science wasn’t necessarily the speed at which ideas could spread with printed books, but the accuracy with which the original data were copied. With printed formulas and mathematical tables in hand, scientists could trust the fidelity of existing data and devote more energy to breaking new ground.

5. Fringe Voices Get a Platform

“Whenever a new information technology comes along, and this includes the printing press, among the very first groups to be ‘loud’ in it are the people who were silenced in the earlier system, which means radical voices,” says Palmer. It takes effort to adopt a new information technology, whether it’s the ham radio, an internet bulletin board, or Instagram. The people most willing to take risks and make the effort to be early adopters are those who had no voice before that technology existed. “In the print revolution, that meant radical heresies, radical Christian splinter groups, radical egalitarian groups, critics of the government,” says Palmer. “The Protestant Reformation is only one of many symptoms of print enabling these voices to be heard.” As critical and alternative opinions entered the public discourse, those in power tried to censor them. Before the printing press, censorship was easy. All it required was killing the “heretic” and burning his or her handful of notebooks. But after the printing press, Palmer says it became nearly impossible to destroy all copies of a dangerous idea. And the more dangerous a book was claimed to be, the more the people wanted to read it. Every time the Church published a list of banned books, the booksellers knew exactly what they should print next.

6. From Public Opinion to Popular Revolution

During the Enlightenment era, philosophers like John Locke, Voltaire and Jean-Jacques Rousseau were widely read among an increasingly literate populace. Their elevation of critical reasoning above custom and tradition encouraged people to question religious authority and prize personal liberty. Increasing democratization of knowledge in the Enlightenment era led to the development of public opinion and its power to topple the ruling elite. Writing in pre-Revolution France, Louis-Sébastien Mercier declared: “A great and momentous revolution in our ideas has taken place within the last thirty years. Public opinion has now become a preponderant power in Europe, one that cannot be resisted… one may hope that enlightened ideas will bring about the greatest good on Earth and that tyrants of all kinds will tremble before the universal cry that echoes everywhere, awakening Europe from its slumbers.” “[Printing] is the most beautiful gift from heaven,” continues Mercier. “It soon will change the countenance of the universe… Printing was only born a short while ago, and already everything is heading toward perfection… Tremble, therefore, tyrants of the world! Tremble before the virtuous writer!” Even the illiterate couldn’t resist the attraction of revolutionary Enlightenment authors, Palmer says. When Thomas Paine published “Common Sense” in 1776, the literacy rate in the American colonies was around 15 percent, yet there were more copies printed and sold of the revolutionary tract than the entire population of the colonies.
7. Machines ‘Steal Jobs’ From Workers

The Industrial Revolution didn’t get into full swing in Europe until the mid-18th century, but you can make the argument that the printing press introduced the world to the idea of machines “stealing jobs” from workers. Before Gutenberg’s paradigm-shifting invention, scribes were in high demand. Bookmakers would employ dozens of trained artisans to painstakingly hand-copy and illuminate manuscripts. But by the late 15th century, the printing press had rendered their unique skillset all but obsolete. On the flip side, the huge demand for printed material spawned the creation of an entirely new industry of printers, brick-and-mortar booksellers and enterprising street peddlers. Among those who got their start as a printer’s apprentice was future Founding Father Benjamin Franklin. Source of the article

GOATReads:Politics

María Corina Machado’s peace prize follows Nobel tradition of awarding recipients for complex reasons

Few can doubt the courage María Corina Machado has shown in fighting for a return to democracy in Venezuela. The 58-year-old politician and activist is the undisputed leader of the opposition to Nicolás Maduro – a man widely seen as a dictator who has taken Venezuela down the path of repression, human rights violations and increasing poverty since becoming president in 2013. Maduro is widely believed to have lost the 2024 presidential election to rival Edmundo González, a candidate standing in for Machado, yet still claimed victory. Machado has been in hiding since the fraudulent vote. Her courage in having participated in an unfair contest, and in exposing Maduro’s fraud by publishing the true vote tallies on the internet, surely made Machado stand out to the Nobel committee. Indeed, in making Machado the 2025 recipient of the Nobel Peace Prize, organizers stated they were recognizing her “tireless work promoting democratic rights for the people of Venezuela and for her struggle to achieve a just and peaceful transition from dictatorship to democracy.” But as a scholar of Venezuela’s political process, I know that is only part of the story. Machado is in many ways a controversial pick, less a peace activist than a political operator willing to use some of the trade’s dark arts for the greater democratic good.

Joining a controversial list of laureates

Of course, many Nobel Peace Prize awards generate controversy. The prize has often been bestowed on great politicians over activists. And sometimes its winners can have complex pasts and very non-peaceful resumes. Past recipients include Henry Kissinger, who as U.S. secretary of state and Richard Nixon’s national security adviser was responsible for the illegal bombing of Cambodia, supporting Indonesia’s brutal invasion of East Timor and propping up dictators in Latin America, among many other morally dubious actions. Similarly, Palestine Liberation Organization leader Yasser Arafat and Israeli Prime Minister Menachem Begin were both awarded the prize, in 1994 and 1978 respectively, despite their past association with violent activities in the Middle East. The Nobel Committee often seems to use these awards not to celebrate past achievements but to affect the future course of events. The nods to Begin and Arafat were, in this way, meant to encourage the Middle East peace process. In fact, sometimes the peace prize is seemingly bestowed as a sign of approval for a break from the past. Barack Obama won his in 2009 despite being only nine months into his presidency. It was taken by many as a rejection of the previous presidency of George W. Bush, rather than recognition of Obama’s limited achievements at that time. In 2016, Colombian President Juan Manuel Santos was awarded the Nobel Peace Prize just days after his peace plan was rejected in a referendum. In that instance, the committee seemed to want to give his efforts a push just after a major setback.

Democratic path or dark arts?

So what should be made of the Nobel Peace Prize committee’s decision to recognize opposition to Maduro now? Certainly Machado’s profile is ascendant. In her political career, she has participated in elections – winning a seat in the National Assembly in 2010 – but boycotted many more. She also boycotted negotiation processes, suggesting instead that foreign intervention was the only way to remove Maduro.
In 2023 she returned to the electoral path and steadfastly mobilized the Venezuelan population for opposition primaries and presidential elections, even after her candidacy was disqualified by the government-controlled electoral authority and innumerable other obstacles were put in her path. The campaign included spearheading caravans and events across the country at significant personal risk. However, much of her fight since then has been via less-democratic means. Machado has shunned local and regional elections, suggesting there was no sense in participating until the government honored the results of the 2024 presidential election. She has also again sought international intervention to remove Maduro. Over the past year she has aggressively promoted the discredited theory that Maduro is in control of the Tren de Aragua gang and is using it to invade the United States – a narrative gladly accepted and repurposed by U.S. President Donald Trump. In addition to being the expressed motivation for a U.S. military buildup off the coasts of Venezuela, this theory has also been the central justification cited by the Trump administration for using the Alien Enemies Act to deport, without due process, 238 Venezuelan men to a horrific prison in El Salvador.

Relations with Trump

The Nobel Peace Prize could help unify the Venezuelan opposition movement, which over the past year has begun to fray over differences in strategy, especially with respect to Machado’s return to electoral boycotts. And it will certainly draw more international attention to Venezuelans’ struggle for democracy and could galvanize international stakeholders to push for change. What it will mean in terms of Trump’s relationship to Machado and Venezuela is yet to be seen. Her main connection with the administration is through Secretary of State Marco Rubio, who has aggressively represented her views and is pushing for U.S. military intervention to depose Maduro. Awarding Machado the prize could strengthen Trump’s resolve to seek regime change in Venezuela. Or, if he feels snubbed by the Nobel committee after very vocally lobbying to be awarded the peace prize himself, it could drive a wedge between the U.S. president and Machado. Machado seems to understand this. After not mentioning him in her first statement when the award was announced, she has since mentioned him multiple times, even dedicating the prize to both the Venezuelan people and Trump. Trump has subsequently called to congratulate her.

A game changer? Perhaps not

To the degree that the Nobel Peace Prize is not just a model of change but a model for change, the decision to award it to Machado could conceivably affect the nature of Venezuela’s struggle against authoritarianism, leading her to continue to seek the restoration of democracy with a greater focus on reconciliation and coexistence among all Venezuelans, including the still politically relevant followers of the late Hugo Chávez. Whatever the impact, it probably will not be game-changing. As we have seen with other winners, the initial glow of public recognition is quickly consumed by political conflict. And in Venezuela, there is no easy way to translate this prize into real democratic progress. While Machado and other Venezuelan democrats may have more support than ever among global democrats, Nicolás Maduro controls all of Venezuela’s institutions, including the armed forces and the state oil company, which, even when sanctioned, provides substantial resources.
Maduro also has forged strategic alliances with China, Russia and Iran. The only way one can imagine the restoration of democracy in Venezuela, with or without military action, is through an extensive process of negotiation, reconciliation, disarmament and justice that could lay the groundwork for coexistence. This Nobel Peace Prize could position Machado for this task. Source of the article