
Posts

India's jobs guarantee scheme: A global model under threat?

India is home to one of the world's most ambitious social programmes - a jobs guarantee that gives every rural household the legal right to paid work. Launched in 2005 by a Congress party government, the National Rural Employment Guarantee Scheme (NREGS) entitled every rural household to demand up to 100 days of paid manual work each year at a statutory minimum wage. This mattered in a country where 65% of its 1.4 billion people live in rural areas and nearly half rely on farming, which generates insufficient income, accounting for just 16% of India's GDP. Providing unskilled public work across all but fully urban districts, the scheme has become a backbone of rural livelihoods, cushioning demand during economic shocks. It is also among the world's most studied anti-poverty programmes, with strong equity: over half of the estimated 126 million scheme workers are women, and around 40% come from "scheduled castes" or tribes, among the most deprived Indians.
The ruling Narendra Modi government, initially critical and later inclined to pare it back, turned to the scheme in crises - most notably during the Covid pandemic, when mass return migration from cities to villages sharply drove up demand for work. Economists say the scheme lifted rural consumption, reduced poverty, improved school attendance, and in some regions pushed up private-sector wages.
Last week, the government introduced a new law that repeals and rebrands the scheme. The programme - renamed MGNREGA in 2009 to honour Mahatma Gandhi - has now dropped his name altogether. While the renaming drew the political heat, the more consequential changes lie in what the new law - known as G RAM G for short - actually does.
It raises the annual employment guarantee from 100 to 125 days per rural household. It retains the provision that workers not given jobs within 15 days are entitled to an unemployment allowance. Under the original scheme, the federal government paid all labour wages and most material costs - roughly a 90:10 split with the states. Funding will now follow a 60:40 split between the federal government and most states. That could push states' contribution to 40% or more of total project cost. The federal government keeps control, including the power to notify the scheme and decide state-wise allocations. States remain legally responsible for providing employment - or paying unemployment allowances - even as the central government allocates $9.5bn for the scheme in the current financial year, ending next March.
The government frames the new law as a modernised, more effective and corruption-free programme aimed at empowering the poor. "This law stands firmly in favour of the poor, in support of progress, and in complete guarantee of employment for the workers," says federal agriculture minister Shivraj Singh Chouhan.
Critics - including opposition parties, academics, and some state governments - warn that capping funds and shifting costs to states could dilute a rare legal right in India's welfare system. "It is the culmination of the long-standing drive for centralisation of the scheme under the Modi government. But it is more than centralisation. It is the reduction of employment guarantee to a discretionary scheme. A clause allows the federal government to decide where and when the scheme applies," Jean Dreze, a development economist, told me. Prof Dreze says the increase to 125 guaranteed workdays per household may sound like a major revamp, but is a "red herring".
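Before returning to the critics' case, a minimal back-of-envelope sketch of what the new cost-sharing means for a state's budget. Only the roughly 90:10 and the new 60:40 splits come from the reporting above; the project cost and its composition are purely hypothetical illustrative figures.

```python
# Illustrative only: how a state's share of an NREGS works project changes when
# the federal:state funding split moves from roughly 90:10 to 60:40.

def state_share(total_cost: float, federal_fraction: float) -> float:
    """Portion of a project's cost that falls on the state government."""
    return total_cost * (1 - federal_fraction)

project_cost = 100.0  # hypothetical project cost, in any currency unit

old_share = state_share(project_cost, federal_fraction=0.90)  # roughly the original scheme
new_share = state_share(project_cost, federal_fraction=0.60)  # the new 60:40 formula

print(f"Old split: state pays {old_share:.0f} of every {project_cost:.0f}")
print(f"New split: state pays {new_share:.0f} of every {project_cost:.0f}")
# Old split: state pays 10 of every 100
# New split: state pays 40 of every 100
```

On these illustrative assumptions, a state's bill for the same volume of works roughly quadruples, which is the arithmetic behind critics' worry that shifting costs to the states could dilute the guarantee.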
A recent report by LibTech India, an advocacy group, found that only 7% of rural households working on the scheme received the 100 days of work in 2023-24. "When the ceiling is not binding, how does it help to raise it? Raising wage rates, again, is a much better way of expanding benefits. Second, raising the ceiling is a cosmetic measure when financial restrictions pull the other way," Prof Dreze notes.
These and other concerns appear to have prompted a group of international scholars to petition the Modi government in defence of the original scheme, warning that the new funding model could undermine its purpose. "The [scheme] has captured the world's attention with its demonstrated achievements and innovative design. To dismantle it now would be a historic error," an open letter, led by Olivier De Schutter, UN special rapporteur on extreme poverty and human rights, warned.
To be sure, the scheme has faced persistent challenges, including underfunding and delays in wage payments. West Bengal's programme, for example, has faced deep cuts and funding freezes since 2022, with the federal government halting funds over alleged non-compliance. Yet despite these challenges, the scheme appears to have delivered measurable impact. An influential study by economists Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar found that the broader, economy-wide impacts of the scheme boosted beneficiary households' earnings by 14% and cut poverty by 26%. Workers demanded higher wages, land returns fell, and job gains were larger in villages, the study found.
But many say the scheme's durability also underscores a deeper structural problem: India's chronic inability to generate enough non-farm jobs to absorb surplus rural labour. Agriculture has consistently lagged, growing just 3% annually since 2001–02, compared with 7% for the rest of the economy. Critics such as Nitin Pai of the Takshashila Institution, a think-tank, argue that the scheme cushions distress but does little to raise long-term rural productivity, and may even blunt incentives for agricultural reform. "With [the scheme] we're merely treating a serious underlying malaise with steroids," said Mr Pai in a post on X.
The government's Economic Survey 2023–24 questions whether demand under the scheme truly mirrors rural hardship. If that were the case, data should show higher fund use and employment in poorer states with higher unemployment, the survey says. Yet, it notes, Tamil Nadu, with under 1% of the country's poor, received nearly 15% of the scheme's funds, while Kerala, with just 0.1% of the poor, accounted for almost 4% of federal allocations. The survey adds that the actual work generated depends largely on a state's administrative capacity: states with trained staff can process requests on time, directly influencing how much employment is provided.
Despite these anomalies, the case for the scheme remains strong in a country where many depend on low-income rural work and where the deeper challenge is the lack of quality employment. Even headline figures on rising labour participation in India can be misleading: more people "working" does not always mean better or more productive jobs. A recent paper by economists Maitreesh Ghatak, Mrinalini Jha and Jitendra Singh finds that the country's recent rise in labour force participation, especially among women, reflects economic distress rather than growth-driven job creation.
The authors say the increase is concentrated in the most vulnerable forms of work: unpaid family helpers and self-employed workers, who have very low productivity and falling real earnings. "The recent expansion in employment reflects economic distress leading to subsistence work, rather than growth-driven better quality job creation," they say. The evidence suggests people are driven into subsistence work by necessity, not drawn into better-quality jobs by a stronger economy. This ensures that the world's largest jobs guarantee scheme will remain central to the livelihoods of hundreds of millions of Indians - whether the revamped version will strengthen it or undermine its impact remains to be seen. Source of the article

GOATReads: History

How the CIA Used ‘Animal Farm’ As Cold War Propaganda

Orwell’s allegory didn’t make it to the screen exactly as he wrote it. One of the most celebrated books of the 20th century, George Orwell’s Animal Farm is a biting critique of totalitarianism. Published shortly after the end of World War II, the novella tells the story of farm animals who revolt against their human owner—only to see their rebellion corrupted from within. Beneath its barnyard setting, the fable is a pointed allegory for how the promise of the 1917 Russian Revolution descended into the tyrannical reign of Joseph Stalin. In his essay “Why I Write,” Orwell admitted, “Animal Farm was the first book in which I tried, with full consciousness of what I was doing, to fuse political purpose and artistic purpose into one whole.” Upon its release on August 17, 1945, in England, the satirical novel quickly sold out its initial print run of 4,500 copies. When it hit American shelves in August 1946, it sold over half a million copies in its first year alone, according to Mark Satta, associate professor of philosophy and law at Wayne State University. Though reception to the story’s satire was mixed, the book got the attention of the Central Intelligence Agency (CIA). As Cold War tensions gripped the United States, the American government was searching for anti-Soviet propaganda to spread across the world. Animal Farm’s effective plot and messaging made it the perfect material to aid its battle against Stalin and his regime. The CIA wanted to bring Animal Farm to a much wider audience, reported The New York Times, by covertly backing a movie adaptation that downplayed the source material’s attacks on capitalism and amplified its opposition to communism. “It was perceived as having a simple story that would be accessible to families, children and people of all educational levels,” says Tony Shaw, the author of Hollywood's Cold War. “They wanted to make it clear to ordinary people that communism is a danger to you.”
Behind the Scenes
The CIA likely began to think about adapting Animal Farm shortly following Orwell’s death on January 21, 1950. After undercover agents bought the film rights from his widow Sonia Orwell, Louis de Rochemont—the filmmaker behind the monthly theatrical newsreels The March Of Time—was hired as an intermediary between the production and the intelligence agency. Rather than using an American animation company, the CIA hired Halas and Batchelor, run by a U.K.-based husband-and-wife team. “They didn’t use Hollywood because they wanted some distance. Using a British company made it look less like American propaganda,” explains Shaw. During this era of Joseph McCarthy's infamous communist accusations in Hollywood, the CIA also harbored suspicion towards American film companies. There was a belief that some individuals in Hollywood could not be trusted to keep the CIA's involvement a secret, says Shaw. Meanwhile, Halas and Batchelor had produced around 70 war information and propaganda films for the UK’s Ministry of Information and War Office during World War II.
Changing the Story
Under orders from the CIA, de Rochemont told screenwriters Philip Stapp and Lothar Wolff to change various elements of Orwell’s plot to make its anti-Communist message clear. “They simplified the book and got rid of characters and elements that were very critical of capitalism,” says Shaw. This included making the character of Snowball the pig, who represented Leon Trotsky, much less sympathetic and more fanatical. The biggest alteration was the conclusion.
While the book ends in a pessimistic fashion, the movie finishes with the animals rallying together and triumphantly storming the farm against their new oppressors—the pigs who became indistinguishable from the humans they replaced. “They revolt and smash it down,” says Shaw. “This, to my reading, is a clear case of the CIA telling the people living under communism to revolt.” Despite its government backing, when the movie finally hit cinemas in the US and UK in January 1955, it underperformed. Unlike Orwell’s book, which was smuggled behind the Iron Curtain, the film wasn’t distributed or spread around the Soviet Union. However, the adaptation did eventually find an audience in Latin America, where over the next few decades the U.S. government would aid coups in Brazil, Chile, the Dominican Republic and Ecuador to prevent the rise of communism. “Its other target would have been these developing countries, where power was up for grabs by the mid-'50s into the 1960s. That’s where the Cold War could have been won or lost," says Shaw. The movie was also used as an educational tool in both Great Britain and the United States. Until the conclusion of the Cold War in 1991, it was regularly shown in schools to teenagers as a cautionary tale about the dangers of communism.
Culture the CIA Propagated
Animal Farm wasn’t the only piece of culture the CIA used in its covert fight against the Soviet Union. After learning that Stalin highlighted how racially divided the United States was to undermine its image of freedom, the CIA encouraged film studios to “insert a number of Black characters into films,” says Shaw. From the 1970s onwards, the CIA also helped to promote rock music in the Soviet Union and East Germany, all with the intention of destabilizing the Eastern bloc. While it’s impossible to quantify the impact culture had on the ultimate downfall of the Soviet Union, historians over the last two decades have analyzed what people bought, listened to and watched in the lead-up to the fall of the Berlin Wall. “I don't think there's any doubt that American propaganda played a critical role in helping the West win the Cold War,” says Shaw. “The amount of effort the American government put into film and culture tells us that they thought they were getting some reward and that it worked.” Source of the article

GOATReads: Politics

Dreams of a Maoist India

India’s Maoist guerrillas have just surrendered, after decades of waging war on the government from their forest bases
On 6 April 2010, a company of India’s central paramilitary soldiers came under attack from Maoist guerrillas in the central-eastern state of Chhattisgarh. The Maoists, who had turned this region into their stronghold, had laid a trap. With little training and scant knowledge of the Amazon-like jungle, the Indian soldiers found themselves ambushed. They fought back, but they could not escape the ambush. Seventy-five soldiers and a state policeman accompanying them were killed. Never before had the Indian forces suffered so many casualties in a single incident, not even in Kashmir, where, for more than 20 years, they had been fighting a protracted battle against Islamist extremists. As the body bags of the soldiers reached their native places in different parts of India, a deep sense of anger spread among people who till recently had only a vague idea about who these Maoists were, and even less about the hinterland that the Maoists had turned into a guerrilla zone.
Since the mid-2000s, the Maoists had grown in strength, launching audacious attacks against government forces, looting police armouries and declaring certain areas ‘liberated zones’. Their operations ran in a contiguous arc of land, from Nepal’s border in the east to the Deccan Plateau in the south – an area the Maoists called Dandakaranya or DK, using the name in its historical sense. This is a region where India’s indigenous people, the Adivasis, lived; it also holds valuable minerals and other natural resources in abundance. The Indian state wanted control over the natural resource wealth, but the Maoists were proving to be an obstacle. In 2009, the then prime minister Manmohan Singh called them India’s ‘greatest internal security threat’.
On the morning of the attack in 2010, I’d landed in a part of DK, in a city a three-hour car ride from the edge of a town beyond which was a forest ruled by Maoists. From there, if one started walking, one would, without getting on to a motorable road, reach the spot of the Maoist ambush. There would be only a sprinkling of tiny hamlets, inhabited by Adivasis, who thought of Maoists as the ‘government’. Beyond that, they had very little idea of life outside, least of all the blitzkrieg of ‘India Shining’, a political campaign by a previous government (the conservative BJP), which had in spirit continued to exist as a cursor to economic optimism.
The scrawny man – I’ll call him ‘A’ – who came to pick me up from the railway station on his battered motorcycle was a former Maoist. He was also a Dalit, the so-called ‘lower-caste’ people at the bottom of the Hindu caste system, who, like the Adivasis, have been historically maltreated in India. He lived in a slum and had been recruited in the 1980s, along with several others in the city, by a woman Maoist. After a few years, ‘A’ had quit the party to raise a family. Life outside had been harder than inside the forest. For people like ‘A’, it was difficult to come out of the poverty and bitterness that came with their ascriptive status in the caste system. He struggled with odd jobs, and in the night drank heavily and sang resistance songs by the yard to temporarily rid himself of the bile.
I’d spend a day or two in that city and then travel towards the edge of the town, from where a Maoist sympathiser would pick me up from a stipulated spot. After a bike or jumpy tractor ride, followed by a walk of several hours, contact with a Maoist squad would be established. From there, I would travel with them, sometimes for weeks, from one hamlet to another, crossing rivers and hills, evading bears and venomous snakes, hoping that, once I returned, I wouldn’t be gripped by a fever, which could indicate malaria, endemic in these areas.
Twelve years earlier, my own history had prompted my interest in Kashmir and the Maoists. My family belonged to the small Hindu minority in Kashmir – the only Muslim-majority state in an otherwise Hindu-majority India. In 1990, we were forced to leave, as Islamist extremists began targeting the Hindu minority. In a few months, the entire community of roughly 350,000 people was forced into exile. For journalists, though, that expulsion was not very newsworthy. As the Indian forces began conducting operations against militants, resulting in brutal clampdowns and sometimes excesses against civilians, Kashmir became a dangerous place. But, for journalists, it turned into a harbinger of awards, of grants and fellowships. I decided to go away from Kashmir, not at first by design, but by a chance trip to the guerrilla zone. I had barely travelled beyond Delhi, just a few hundred miles south of Kashmir. But, as I began exploring the mainland further, I ended up in hinterland areas where India’s poorest of the poor lived. In the new India, where people had begun to randomly drop into conversations terms such as ‘trillion-dollar economy’, these areas still remained where the poor would die of hunger. What I saw in my journeys into rural India came as a revelation; in contrast, my own exile, of leaving a modest but comfortable home and instead facing the humiliation of living in a run-down room in Jammu city, seemed bearable.
Maoists weren’t yet receiving a lot of attention. Prime minister Singh’s pronouncement about Maoists was more than 10 years away, so it was difficult to convince editors to cover them, but I persisted, mostly because I felt that I was receiving a real education, one that a journalism school would never offer. In these years, the lack of government interest in the Maoists was an advantage – one could travel easily to DK without attracting the suspicion of the security apparatus. When I said goodbye to ‘A’ and headed into the forest from where DK began, I knew I’d have to be more cautious. I had avoided spending more than a few hours in the city where we’d met; a hotel reservation could give me away, and the police would put me under surveillance. By the afternoon of the next day, I was in a Maoist camp, under a tarpaulin sheet, meeting, among others, Gajarla Ashok, a Maoist commander, and another senior leader, a woman called Narmada Akka. Like most of the Maoist leadership, they both came from urban areas. These leaders had been teachers, engineers, social scientists and college dropouts, moved by the idea of revolution. They had come to DK with the same dream of establishing an Indian Yan’an (the birthplace of the Chinese communist revolution). But the core of Maoist recruitment came from the Adivasis, and before that from the working class and peasantry among Dalits and other ‘backward’ communities.
The Maoists had decided to enter Chhattisgarh and its adjoining areas (which comprised DK) in the early 1980s. That was their second effort to bring about a revolution. An earlier attempt had been made in a village called Naxalbari in West Bengal, in the late 1960s. Peasants, who tilled the fields of landlords and received only a minuscule proportion of the harvest, rose against the iniquity of their small share. The rebellion was inspired by members of the mainstream Communist Party of India, who had begun to grow disillusioned with their organisation. This questioning had also taken place in other parts of the world. In France, for example, during the May 1968 Leftist student protests, the postwar Left came to be seen as an obstacle to real social transformation. ‘What do we win by replacing the employers’ arbitrary will with a bureaucratic arbitrary will?’ asked the Marxist thinker André Gorz.
A similar sentiment had been expressed almost 40 years earlier by an Indian revolutionary, Bhagat Singh, whom the British then hanged in 1931 at the age of 23. In a letter to young political workers a month before his hanging, Singh warned that the mere transfer of power from the British to the Indians would not suffice, and that there was a need to transform the whole society:
You can’t ‘use’ him [the worker and the peasant] for your purpose; you shall have to mean seriously and to make him understand that the revolution is going to be his and for his good. The revolution of the proletariat and for the proletariat.
Singh’s prescription proved to be right. Even as the prime minister Jawaharlal Nehru, whose commitment to social justice could not be doubted, took over from the British in 1947, the poor and marginalised communities like the Dalits and Adivasis remained outside the welfare circle of his five-year plans. Feudalism did not go away. Land reforms to break up the feudal concentrations of wealth and power were initiated, but the rich and the powerful also found means to circumvent the law. The rich quickly joined politics, and the police acted as their private militia. As recently as 2019, a survey by the Indian government revealed that 83.5 per cent of rural households owned less than one hectare of land. The government’s Planning Commission figures (1997-2002) put the landless among Dalits at 77 per cent, while among the Adivasis it was 90 per cent. The government’s National Sample Survey in 2013 revealed that about 7 per cent of landowners owned almost half of the total land share.
In the 1960s, these disillusioned communists felt that the Communist Party of India had grown complacent and corrupt, and that its leaders were ‘conscious traitors to the revolutionary cause’. They made their case in long papers and articles full of communist jargon in publications like Deshabrati, People’s March and Liberation. The essence of their indictment was that the poor and the working class had been let down by the parliamentary Left. In 1969, these breakaway communists formed their own party, the Communist Party of India (Marxist-Leninist), which announced its aim to unite the working class with the peasantry and seize power through armed struggle. They sought help from China, which was quick to offer it, calling the uprising ‘a peal of spring thunder’. Some of the men inspired by Mao travelled to China through Nepal and Tibet, receiving political training from Mao’s associates.
The Maoist message spread from Naxalbari to other parts of India, like Bihar, Andhra Pradesh, Punjab and Kerala. Inspired by its main leader, Charu Majumdar, small guerrilla squads began to indulge in ‘class annihilation’, killing hundreds of landlords and their henchmen, policemen and other state representatives. ‘To allow the murderers to live on means death to us,’ Majumdar declared. Liberation, the party’s mouthpiece between 1967 and 1972, is full of reports of killings of landlords, and how land and other property they owned had been ‘confiscated’ by peasant guerrillas. In practice, however, ‘class annihilation’ proved counterproductive. On the streets of Calcutta (today’s Kolkata), for example, naive men from elite colleges would roam around with crude bombs and even razors with which they attacked lone policemen. Nonetheless, in the late 1960s, the Naxalbari movement inspired thousands of bright men and women from elite families studying at prestigious schools. They said goodbye to lucrative careers and made the feudal areas, where the poor faced the utmost oppression, their workplace. Beginning in July 1971, a brutal government response killed hundreds of Indian Maoists, probably including their leader Majumdar; he died in police custody in 1972.
Kondapalli Seetharamaiah, popularly known as KS, was one of those dissatisfied with the shape that the parliamentary Left had taken in India. He was a school teacher in Andhra Pradesh, which had a long history of feudalism and communist struggles. In the state’s North Telangana area, bordering Chhattisgarh, for example, feudal customs of slavery like vetti were still being practised decades after India became free. KS, a former member of the Communist Party of India, had not lost all hope, and decided to join hands with Majumdar’s line. But before he could restart, he decided the Maoists needed a rear base, just like Mao had urged, for the guerrillas to hide in the forest. The other amendment to Majumdar’s line was with regard to the formation of overground organisations to further the cause of revolution, something that Majumdar had strongly opposed. In 1969, KS sent a young medical student to a forest area in North Telangana to explore the possibility of creating the rear base. But in the absence of any support, the lone man could not achieve anything and had to return. In the mid-1970s, KS sent yet another man, this time a little further inside, into Chhattisgarh. Spending a few months inside, the man, who had acquired basic medical training, started treating the poor tribals. But, again, how much could one man or two do? So, he returned as well.
So KS made another change in strategy – he took the Maoists out of the shadows and founded a few organisations that, on the surface, were civic associations, but were meant to further the Maoist ideology. Prominent among these was the Radical Students Union (RSU), launched in October 1974. Along with a cultural troupe, Jana Natya Mandali, young RSU members began a ‘Go to Village’ campaign on KS’s instructions. In this campaign, the young student radicals and ardent believers in the armed struggle would try to make villagers politically ‘conscious’. The ‘Go to Village’ campaign enjoyed some initial success, attracting students and other young people from working-class backgrounds. Hundreds of young people in universities and other prestigious institutions in Andhra Pradesh left their studies and vowed to fight for the poor.
Fourteen students from Osmania University in what was then Andhra’s capital, Hyderabad, and 40 from other parts of the state joined the Maoist RSU. The Maoists’ ‘Go to Village’ campaign found fertile ground in the town of Jagtial, in the state’s Karimnagar district. There, as across Andhra, people celebrate the festival Bathukamma, which includes theatre performances in villages that were home to landlords from the dominant castes. The caste segregation of the villages was complete: the landlords lived in the village centre, while the Dalits lived on its periphery. But now in Jagtial, the Dalit labourer Lakshmi Rajam took the performance to the Dalit quarters. Another Dalit man, Poshetty, occupied a piece of government-owned wasteland, which would usually be in the landlords’ control. These acts enraged the landlords, who killed both these Dalit activists.
On 7 September 1978, under the influence of the Maoists, tens of thousands of agricultural labourers from 150 villages marched through the centre of Jagtial. The march was led by two people, one of them Mupalla Laxmana Rao, alias Ganapathi. He came from Karimnagar itself and would become KS’s closest confidant, later taking over from him to become the Maoist chief. The other was Mallojula Venkateshwara Rao, alias Kishenji, a science graduate, who would prove to be an efficient leader and military commander. The Jagtial march rattled some landlords so much that they fled to cities. The poor also decided to boycott the landlords who would not agree to any land reforms. Services that the poor provided – washing, barbering, cattle feeding – were denied to the landlords. This strike led to further backlash from landlords, as reported by the respected Indian civil rights activist K Balagopal.
From these village campaigns, KS decided to move ahead and try to create a guerrilla zone where armed squads would mobilise peasants and contest state power. In June 1980, seven squads of five to seven members entered the hinterland – four of them in North Telangana, two in Bastar in Chhattisgarh, and one in Gadchiroli in Maharashtra, an area where the Adivasis lived. The Adivasis were mostly food-gatherers, and their life had remained unchanged for hundreds of years. Abundant mineral wealth lay beneath the land where the Adivasis lived, but they lacked even basic modern services like education and healthcare. Petty government representatives like forest guards would harass the Adivasis for using resources like wood, citing archaic forest laws. At first, the Adivasis did not welcome the presence of the Maoists. However, before long, a kind of alliance between them developed, where the common enemy now was the state. As the Maoists pushed on, the state retreated, and the Adivasis began to exert their rights over the forest. In many areas, the feudal landlords were served ‘justice’ as Mao had dictated.
In 1980, the Swedish writer Jan Myrdal visited the Maoists, and one of the comrades told him of an incident from North Telangana, which Myrdal recounts in his book India Waits (1986). A notorious rowdy there had instilled fear among the people on behalf of his master, a landowner. He raped a washer-girl. In shame, she jumped into a well and drowned herself. When the Maoists came to know of it, four of them, till recently students, called him out in the bazaar.
When he arrived, the rebels caught him with a lasso, cut off his hands and nailed them to a wall inside a shop. The rough vigilante justice inspired more young people to join the Maoists: men like Nambala Keshava Rao, a graduate of the much-respected Warangal engineering college, and Patel Sudhakar Reddy, who held a master’s degree from Osmania. It also brought in young women like Maddela Swarnalata and Borlam Swarupa. Swarnalata came from a poor Dalit family and was recruited through the Radical Students Union. In the early 1980s, she’d taken part in clashes against Right-wing student groups, especially the Akhil Bharatiya Vidyarthi Parishad. The police would follow her and pressurise her into revealing details of her comrades who had already gone underground. Soon it became impossible to avoid arrest, so she too went underground, joining a Maoist squad, before dying in an encounter with the police in April 1987. Meanwhile, Swarupa had become active through campaigns with farmers’ groups for a better price for their crops. The Maoist leadership placed her as a labourer in a biscuit factory in Hyderabad, in order to recruit among workers there. Once she’d been exposed, Swarupa was asked to shift to the guerrilla zone, where she became the first woman commander, leading a squad in North Telangana, until she was killed in an encounter in February 1992. One of the prominent features of the Maoist movement is the way it attracted women to its fold. For women from the working class, who led difficult lives under a patriarchal mindset, joining the Maoists felt like a liberation. Recruits to the Maoists often attracted their friends, siblings and other family members to join too. Doug McAdam, professor of sociology at Stanford University in California, has written about this ‘strong-tie’ phenomenon, in which personal connections draw people into ‘high-risk activism’ of violence. In Bastar and elsewhere, the Maoist guerrillas targeted people and agencies they considered exploiters. For example, they started to negotiate better rates for the collection of tendu leaves, used in the manufacture of local cigarettes, which was a lucrative business. But along with that, they also started to take cuts from businessmen for running their organisations. The Norwegian anthropologist Bert Suykens, who has studied the tendu leaf business, called it a joint extraction regime. The Maoists also began to extort a levy from corporate houses involved in mining in these areas, as well as from government contractors. In the process, they deviated from their promise – of returning the forest to the Adivasis, and of helping the poor. They spent most of their time running their organisation and launching attacks against government forces. In her research in central Bihar in 1995-96, the Indian sociologist Bela Bhatia concluded that the Maoist leaders ‘have taken little interest in enhancing the quality of life in the villages.’ In fact, these leaders regarded development ‘as antagonistic to revolutionary consciousness,’ she wrote in 2005. In the meantime, the Indian state was growing impatient with the Maoists. In 2010, a London-based securities house report predicted that making the Maoists go away could unlock $80 billion of investment in eastern and central India. New Delhi began preparations for a large-scale operation to get rid of them. But, before that, the extraordinary arrest in 2009 of the Maoist ideologue Kobad Ghandy in Delhi heightened political interest in the insurgents. 
Special police agents from Andhra Pradesh had managed to locate Ghandy, who had been living in a slum using fake identification. He came from an elite Parsee family in Mumbai; his father was the finance director of Glaxo; he had studied with India’s political dynasts at the elite Doon School, and had then gone to London to pursue further education as an accountant. In the UK, he was introduced to radical politics, and returned to Mumbai in the mid-1970s, where he met Anuradha Shanbag, a young woman from a family of notable Indian communists and a student of Elphinstone College in Mumbai. Shanbag and Ghandy were both drawn to Maoism, fell in love and married. Soon afterwards, in 1981, they met KS in Andhra Pradesh and shifted to a slum area in a city where Shanbag recruited my friend ‘A’ and others. In 2007, Shanbag was promoted to the Maoist Central Committee, a rare accomplishment for a woman. A year later, however, she died from complications due to malaria she had contracted in a guerrilla zone.
After Ghandy’s arrest in 2009, rumours arose that he had been sent to work among the labourers as part of the Maoists’ urban agenda. His arrest became a hot topic in Delhi circles: for the first time, it sparked interest in the Maoist movement among people who did not bother to read a newspaper beyond its Fashion section. Ghandy’s abandonment of his elite background to fight for the poor created a wave of empathy for the Maoist movement.
Around the time of his arrest, I got a rare opportunity to meet the Maoist chief, Ganapathi. The meeting happened by chance. Through some overground sympathisers, he had learnt that I was in a city close to the guerrilla zone in which he was then hiding. By this time, state surveillance was at its peak, and the Maoist leadership was extremely cautious of any contact with outsiders. Ganapathi in particular barely met anyone except his commanders. After days of travel through the guerrilla zone, I was allowed to record our conversation on a digital device provided by his men. After Ganapathi left the area, I transcribed the interview, but even that I was not allowed to carry with me. A month later, I received the transcript through one of his overground workers in Delhi.
A few months later, in 2010, while I spent time with the Maoist leaders Gajarla Ashok and Narmada Akka in their camp, I sent a questionnaire to Ganapathi. His reply came a few weeks later, in which he stressed the importance of work in urban areas: ‘If Giridih [a small town in the east] is liberated first, then based on its strength and on the struggles of the working class in Gurgaon [now Gurugram, a satellite city close to Delhi where most multinational corporations have their offices], Gurgaon will be liberated later. This means one is first and the other is later.’ It was a tall order. There were innumerable problems in cities, including poverty. But with the liberalisation of the 1990s, middle-class insularity had made most people oblivious to the suffering of others. The Maoists wanted to make inroads through slums and labour unions, but found little reception. The curiosity and empathy the Maoists generated among ordinary people in cities soon dissipated. The conservative BJP, which was rising to national power, relentlessly used Kashmir to rouse Hindu sentiment in mainland India. In the first decade of the 2000s, Islamist radicals targeted mainland India, creating friction with the Muslim minority.
The Indian Parliament had come under attack in 2001; Mumbai city faced a terrorist attack in 2008. Between these, many Indian cities like Delhi, Hyderabad, Varanasi and Jaipur were targeted with bomb blasts, killing scores of people. At the same time, the overground sympathisers of the Maoist movement began hobnobbing with separatist elements from Kashmir and India’s Northeast, which had a long history of secessionism, and these potential alliances stirred controversy. This resulted in a backlash against Maoist sympathisers, and a new term was coined for them: ‘urban Naxal’. Hindu nationalism was on the rise in India and, in the coming years, this term would become a ruse for the government to suppress all activism, resulting in the incarceration of civil rights activists like the human rights lawyer Sudha Bharadwaj. What also did not help was the number of body bags – of forces killed in Maoist ambushes – returning to different parts of the country.
As part of its anti-Maoist operation, the government began to push infrastructure – primarily roads and mobile/cellphone towers – in the Maoist-affected areas. It led to further entrenchment of state forces, which also weakened the Maoists. Their leaders hiding in cities began to be hunted down. The new roads and phone towers were welcomed by rural people. The Maoists began killing Adivasis on suspicion of being police informers. This violence alienated Adivasis, and others too. Earlier, the Maoists would visit a village in the night and slip away. Even if their presence was reported, it was of no use to security forces because the information would reach them quite late. But now, with cellphone networks, the people could call immediately, leading to encounters between the Maoists and state security forces.
Since about 2020, the decline of India’s Maoist movement has been rapid. The Maoist commander Ashok – whom I had met in the forest in 2010 – surrendered in 2015. One of his brothers had already died in an encounter. Meanwhile, Akka was arrested in 2019 in Hyderabad where she was seeking treatment for cancer; she died in a hospice three years later. The government raised a special battalion of Adivasis, which included surrendered Maoists, to hunt down the Maoists. It started getting big results. In May this year, Nambala Keshava Rao, who had taken over as the Maoist chief from Ganapathi in 2018, was killed in a police encounter. A few weeks later, another of Ashok’s brothers, a senior commander, was also killed by police. The entire Maoist leadership, barring a few, has been wiped out. Ashok has, of late, joined the Indian National Congress Party. ‘A’ has not been in touch in the last few years, ever since some of his friends were arrested as ‘urban Naxals’. A friend of his told me the other day that he has stopped interacting with people.
A month ago, a friend in Gurugram told me of an incident where he lives. His local Resident Welfare Association had put a cage in their park, with a banana inside it to lure marauding monkeys in the vicinity. A few hours later, they found that the banana had been consumed by someone and the peel left outside the cage. It made me imagine how hungry that person would have been, most likely a poor worker. The friend sent me a screenshot of the residents association’s WhatsApp group. ‘Check the CCTV,’ someone had written. The Maoists have completely surrendered now, asking the government to accept a ceasefire.
A statement released this September, purportedly by part of the Maoist leadership, apologises to people, saying that, in the process of revolution, the leadership made several tactical mistakes, and that the ceasefire was now important to stop the bloodshed. What those mistakes are, the statement wouldn’t say. As anti-Maoist operations go on with even more rigour, a handful of those still inside the forest will ultimately surrender or be killed. It is too early to say how history will remember them; but it is a fact that, had it not been for them, the much-needed focus on the hinterland of DK would not have been there. However, to the man in Gurugram who stole the banana, and to the man in Giridih, who doesn’t even have a banana in sight, it means nothing. Source of the article

The Modern Origins of the 8-Hour Sleep Cycle

The advent of electricity changed the way we sleep.
"Get your eight hours." It’s a command so familiar it feels timeless—an unquestioned truth that humans naturally need a full night’s sleep. But a closer look at historical sleep patterns challenges the idea that even our biological needs exist outside the influence of time and culture.
The Social Transformation of Nighttime
For much of recorded history, humans actually slept eight hours, but in two distinct phases of approximately four hours each. As scholar Roger Ekirch uncovered through historical study of literature, art and diaries, people would once head to bed when it got dark, sleep for four hours, wake for a while, and then slide into a “second sleep” for another four hours. People didn’t just toss and turn in between their two sleep sessions: they would contemplate their dreams, read by candlelight or have sex. Writers from Livy to Plutarch to Virgil to Homer all referred to this structure, as did medieval Christian and African tribal cultures.
But the “biphasic sleep” pattern, which was governed by the natural timing of nightfall and sunrise, didn’t last in the modern era. As artificially illuminating the night sky became more affordable, life—and soon sleep habits—changed. By 1700, fifty European cities had introduced tax-supported street lighting, making it safe and socially acceptable to move about publicly after dark, a time of day that had previously been considered the domain solely of prostitutes and other suspicious characters. The assumption that nothing good could go on at night was so widespread that until the arrival of artificial lighting, citizens often freely emptied their “piss-pots” out of windows after dark. In the United States, Baltimore became the first city to be lit by gas in 1816; a century later, electricity in streets and in growing numbers of homes meant nightfall no longer ensured the inescapable darkness that had dictated beginning one’s first sleep soon after. Going out at night became a fashionable social pastime, pushing bedtime later and bringing the two separate sleep phases closer to the single stretch we know today.
Industrialization and the Single Sleep Cycle
It was industrialization that solidified the single sleep as a social norm. Especially in the cities that increasingly revolved around factory production, a newly formalized workday structured daily life and a fascination with productivity meant that spending hours lolling around in the middle of the night was considered slothful. School schedules also became increasingly standardized, and as early as the 1820s, parenting books advised promptly weaning children from the two-sleep pattern. By the late 19th century in the United States, school attendance was compulsory, creating yet another cultural pressure to conform to this new sleeping schedule. Clocks had existed since the ancient world, but when Levi Hutchins of New Hampshire in 1787 fashioned the first mechanical alarm clock specifically to rouse him for work, it marked a new understanding of the relationship between sleep and labor. Sixty years later, Frenchman Antoine Redier patented the invention; in 1876, Seth E. Thomas mass-produced a wildly popular American version embraced by a society required to adjust its biological rhythms to the industrial clock. During World War II, the U.S.
Office of Price Administration reluctantly lifted a production ban on “emergency” alarm clocks since workers with broken devices were sleeping through factory shifts crucial to the war effort.
Perhaps the most compelling evidence of the general acceptance of this new norm is the anxiety among and about those who failed to abide by it. In medieval Europe, for example, historian Eluned Summers-Bremner found that nocturnal wakefulness was encouraged as a form of vigilance against bedbugs, arsonists or the Devil, who was believed to prey on the oblivious slumberer. But by the late 19th century, insomnia—defined as the inability to sleep restfully through a single sleep cycle—had been categorized as a disorder, signaling the end of the era of unproblematic segmented sleep. These days, advice literature continues to idealize the elusive, uninterrupted 8-hour slumber ironically eroded by our 24/7 connectedness. Yet if there is a “natural” sleep cycle to aspire to, science suggests it is closer to the biphasic model.
Who Invented the Alarm Clock?
They have been around as far back as ancient times—but the snooze button didn't arrive until 1956.
As early as the 5th century B.C., the Greek philosopher Plato invented an ingenious water clock that not only accurately measured the passage of time, but also sounded a whistle to wake him for his morning lectures. The Buddhist monk Yixing built the first water-driven mechanical clock around A.D. 700 and Chinese tower clocks may have inspired the first European mechanical clocks, which appeared in the 13th century. The classic tabletop alarm clock with a clattering bell didn’t emerge until the 19th century, introduced by French and American clockmakers. The first snooze button didn’t arrive until 1956.
Why were the first alarm clocks invented?
Monks were some of the first people who cared deeply about accurately tracking the passage of time. For centuries, they relied on water-driven clocks that would ring a bell to mark their daily rituals—for example, when they needed to pause at the eighth hour of daylight for prayer. When the first mechanical clocks arrived in the late 13th century, they, too, were used in ecclesiastical contexts, to keep prayer, work and meal schedules and, increasingly, in the bell towers of churches. In fact, the word “clock” comes from the Latin clocca, which means “bell.”
Who invented the modern alarm clock?
Mechanical clocks were exclusively for the rich through at least the 17th century. Only churches, royal palaces and the very wealthiest households could afford these intricate, hand-made machines. In addition to telling time, many clocks and even watches in 16th-century Europe had programmable alarms. Queen Elizabeth I reportedly owned a tiny ring-based watch that sounded a “silent alarm,” reminding her of her appointments by gently scratching her finger with a metal prong. Levi Hutchins, an American clockmaker, built one of the first (relatively) affordable alarm clocks in 1787. Hutchins, who lived in Concord, New Hampshire, engineered his wooden, cabinet-style clock to ring a bell every morning at precisely 4 a.m., his preferred wake-up time. Hutchins didn’t patent his invention, which is just as well, because not everyone wants to get up before dawn.
When did alarm clocks become household fixtures?
Sixty years after Hutchins, a French clockmaker named Antoine Redier is credited with filing the first patent for an alarm clock in 1847, closely followed in 1852 by a U.S. patent for a “Time-Alarm Clock” by J.S. Turner.
It wasn’t long before clockmakers were selling “illumination alarm clocks” that struck a match and lit an oil lamp at the sound of the bell. Think of them as the world’s first “sunrise” alarm clocks.  In 1876, the Seth Thomas Clock Company patented the standard bedside alarm clock, which became mass produced at the turn of the 20th century. The first alarm clock with a snooze button came from General Electric-Telechron, which designed the futuristic 1956 “Snooz-Alarm.” Because of the size and shape of the clock’s alarm gear, the snooze function only worked for nine minutes, not 10. All these years later, that’s still the standard snooze time, even for smartphones. Source of the article

Life thrums with music

Listen to the boundless sounds of nature, the great animal orchestra, whose songs imbue the world with fresh meaning
Sound is life. The sound of God’s voice created life, in Christian understanding. In the womb, sound is the first of the senses to apprehend the world beyond the body: a fetus is able to hear their mother’s voice, while a chick in the egg hears the song of its parent birds. Hearing is thought to be the last sense to leave us in our dying, and we speak of the silence of the grave. Healing has traditionally been associated with sound, from the psychological medicine of a lullaby to the chants of monks and nuns in the early hospitals of monasteries, convents and religious centres. The incantations of shamans are some of the oldest of medicines. Melody and song are understood to have healing qualities, ancient and modern, from gong baths to Brian Eno’s album Ambient 1: Music for Airports (1978). When something is well, we say it is ‘sound’. The body is well when it is in harmony with itself, in its inner balance of homeostasis. In traditional Navajo culture, medicine people are known as ‘Singers’ whose healing work seeks to restore harmony within the individual, and more widely to attune a person with other humans and the world.
Nature’s sounds are healing. The sound of a rainforest; the flowing of a stream; a cascade of a waterfall; frogsong, insect rhythms and birdsong: these heal and salve. We humans as a species were born into our existence hearing the mother-voice of nature and the primal song of the animal musicians who were there before us, drumming, carolling, whistling and hooting. Hearing the sounds of animals, whose bodies thrum with their calls, we experience viscerally the truths of our existence: that we are manifested creatures, wholly embodied, and that ours is but one voice among many in a gorgeously plural world-choir, where giraffes hum in low voices caressing each other on a savannah evening, where bitterns boom at dawn by a quiet lake, where hedgehogs snore, in a rising half-voiced wheep and exhaled whiffle. Even when the sounds are in the deep infrasound range of elephants, lower than our hearing, we can still feel, right inside us, the deep pulses that throb the air.
Through the sounds of nature, we may move into the fullness of what it is to be a person. That word ‘person’ is from the Latin, persona, the mask worn over the face in ancient classical theatre. Persona is understood to mean the sound (son) coming through (per) the mask. But arguably there is something deeper going on. We are resounders, and perhaps to be a true and fully well person includes being permeated by the sounds that pass through us, being a sounding-board for millipede, muntjac and muskrat. A person is an instrument of listening, played by the everything that surrounds them, resonating with goat-song and puffin-joke and camel-carol. Our bodies rung by the wild voices, the bells, blowings and buglings of life, conducting us to that place both electric and tender where we can feel most fully alive.
In the sickening days of the COVID-19 pandemic, people heard – as if newly – the birdsong. Momentarily, in that world-hour, our droning machinery and technics fell quiet, and we could hear the voices of life in its self-befriending, and the soaring spiritedness of the music of birds lifted people’s morale. Not cured but surely healed. Birdsong is the quintessential healing sound of nature.
In myth, the Irish goddess Clíodhna cared for three magic birds with songs so sweet they cured every illness. In Welsh myth, the birds of Rhiannon sang so exquisitely that their music banished sadness and, listening to them, 80 years would slip by as if it were a day, and there was no memory of sorrow. In Berlin in times gone by, people said, if someone was sick or near death, they would ask to be carried out into the streets at night to hear a nightingale sing for them. Berlin today is the capital city of nightingales, home to more of the birds than almost any other European city. Birdsong is both fleet and fleeting, fast and evanescent. Quick and quickening, it touches the quick of the spirit. It quickens the woodlands with liveliness, as to be quick also means to be alive, as in ‘the quick and the dead’. Birds embody the very quick of things, vitality or the life force intensifying the living air that is their element. A world without birds is not only silent, it is dead and deadening. Researchers from King’s College London have demonstrated that seeing or hearing birds improves mental wellbeing and helps to lift depression, and the healing effect can last for up to eight hours. A study based on data from Michigan in the United States found that areas with lower bird diversity have more hospitalisations for mental health conditions than areas with higher bird diversity, suggesting that declining biodiversity may be intricately connected with anxiety and mood. Having a diversity of species around us is evidence that the world is well, that it is sound, and it is richly medicinal, as humans feel well and whole when we see and hear sounds from a wide variety of animals, birds and insects, the full harmonies of the living world that Henry David Thoreau called ‘a vibration of the universal lyre’. The words ‘heal’, ‘health’ and ‘whole’ are all derived from the Old English ‘hál’, meaning whole or sound. To be healthy is also to be part of the whole: health is not a solo state of wellness for one but is inextricably linked to the health of all and, in order that all shall be well, each must be well. Every part of livingkind is implicated with and dependent on the health of other species and the wider environment, co-flourishing. Bernie Krause is a musician and soundscape ecologist. The sound of animals has been personally healing for him, as he used to suffer the effects of undiagnosed ADHD while anxiety totally dominated his life. Then, in a forest in Northern California, he experienced for the first time the power of the forest sounds that brought him, as he writes in The Power of Tranquility in a Very Noisy World (2021), ‘an overwhelming sense of relief both physically and emotionally. It was a safe remedy I would rely on for the rest of my life.’ For decades, Krause recorded natural sounds, concentrating not so much on an individual creature’s performance but on the whole acoustic world of an environment, and listening to everything from insects and frogs to birds and mammals. He terms the collective chorus the ‘Great Animal Orchestra’, where creatures inhabit a sonic niche, a particular place in the soundscape of their precise ecology. The world of animal sound is always moving, drifting, ebbing and flowing, but forming an intense and aesthetic concert in various ecologies. At the top, higher than human hearing and far above the top notes of a piano keyboard, are the ultrasonic calls of bats. Down a little are the cicadas and insects. Further down, the bright screech of the swift. 
Then, down through the piano’s top octaves, many other birds, down to cats, some monkeys and human voices. The sloth is said to sing at night, in the musical intervals of a human scale. Bear cubs in the den hum as they suckle. Gibbons sing for sunrise in glissando phrases (and in Indonesia their songs are considered so beautiful that Dayak myth says the sun rises in answer). Chimpanzees hoot, in rising and falling cadences for storms and for dawn. Further down the keyboard is the sea lion’s roaring call, and then below the lowest notes of the keyboard is the infrasonic humming of giraffes, and the basso profundo of elephants and whales in their infrasonic lowings. And at the centre is the bee, right at the core of it all. When bees are flower-buzzing, they hum a half-note above Middle C, at the core of the keyboard, right where they belong, in the sweet heart of everything. The music of the animals vouchsafes us, leaving us calmed and invigorated at the same time, sung into the eternal present as life is ceaselessly sung into being, sounds swelling, filling, resounding, flourishing and made whole. We know in our most atavistic selves that we are made whole in the healing wellness of all, when livingkind is sound and whole in itself, a net of wild melodies that we can rest in. The sound of bees buzzing makes something in my spirit feel calm, reassured and also gently tingly. I felt that it was a healing sound before I learned its factual truth. Bees buzz from about 10 to 1,000 Hertz, and these sound frequencies resonate with organic tissues that promote healing: the sound stimulates the cerebrospinal fluid in the brain and spine, causing it to resonate and aiding the immune system, circulating nutrients and filtering the blood. These sound frequencies also affect the pineal and pituitary glands, the hypothalamus and the amygdala. My garden is incomplete if it is silent. It needs bees in order to flower aurally with that sweet susurrus sounding the blossom in its blessing-song, humming that all is well and all shall be well. The bee is a sweet alchemist, turning pollen into honey and Hertz into healing. The ancient Egyptians were the first to describe cerebrospinal fluid around 3,000 BCE, and some say they had a tradition of ‘bee teachings’ in which the humming of bees was understood to stimulate the release of the ‘elixirs of metamorphosis’, an exquisite phrase for conjuring the soft thrill that the buzzing of bees gives us. In Slovenia, a country rich with beehives, there is a tradition of using the sound of bees for healing, involving people lying down in a room with hives of thousands of bees. Firefighters and others with stressful jobs use this as a technique for relaxation and recuperation after traumatic call-outs, using the sound and also the smell of the bees – beeswax, earth and honey – as healing. One beekeeper interviewed by the BBC said he records the sound of the bees and, if he has difficulty sleeping, he will ‘turn on the buzz of the bees, and float away.’ Many schools in Slovenia have beehives on the premises and it is common for pupils who are restless and upset to be sent to the bees to be cared for – the child can lie in a net like a large hammock near the hive and the buzzing presence of the bees calms them. Some months ago, the American composer David Rothenberg, in his pursuit of animal music, made a recording of pond insects and sent it over to me.
Listening to this insect chorus, with a variety of species crackling, ticking and chirping, flooded me with a sense of wellness. It sounds full to the brim, both complete and diverse like a perfect gathering. In a pond, the insect orchestra is an accord of sounds in neat-tucked tidiness, intimate and close. It has the same effect on me as ASMR, the autonomous sensory meridian response, where certain sounds make you feel at once thrilled and soothed, ecstatic and serene. I now have one of Rothenberg’s special hydrophones so I can listen to pond music wherever I go, and the first time I tried it out was at a loch on the Aigas estate in Scotland, with the author and naturalist John Lister-Kaye, whose reaction was to feel it as something that would gentle the mind into sweet sleep. I can’t help resonating with insects: they give me good vibrations and I’m not alone. Amazonian people say the song of the insects, humming and buzzing, makes a strong impact on people and is associated with powerful transformation and the fertility of nature. When the Lakota medicine man Lame Deer described the perfect soundscape for a holy man, he noted the preference for a place with ‘no sound but the humming of insects’. This is how two treehoppers (small insects) communicate. A treehopper squeezes its tummy to send vibrations down its legs, along a plant stem and up the legs of other treehoppers, giving them good vibes. Humans can’t hear the vibrations except through a vibrometer, which converts them into audible sounds. I’ve heard it – a rumbly, soft kind of noise with upstrokes of jazz clarinet in miniature. Treehopper The First, who happens to be male, purrs, punctuating it with a highly suggestive ticking sound. You can almost hear it in translation: ‘I like you, I really really do. Shall we, eh? Shall we, eh?’ Treehopper The Second, who happens to be female, replies with a warm, assentive hum. I hear it in translation: ‘Yes, I said, yes I will. Yes.’ In its resonance of lush voluptuousness, it is healing, in impish vivacity. Then there’s dolphins. In the book Dolphins and Their Power to Heal (1992) by Amanda Cochrane and Karena Callen, the sound engineer Tony Bassett suggests that dolphins may pulse frequencies of around 6 Hz that attune human brainwaves to a theta state of deep relaxation and, arguably, in this meditative state, bodily healing can take place. Many humans who spend time with dolphins report healing responses to the point of euphoria. Part of that may be explained by dolphins appearing to ‘smile’, and by human expectation of healing with cetaceans. In addition, Bassett has found that frequencies in the region of 2,000 Hz appear to trigger the production of endorphins in humans, and dolphins do emit sound in that range. The sound of a cat’s purr is thought to be healing for humans as well as for cats themselves. Domestic cats typically purr within the range of 20 to 27 Hz but can reach 150 Hz. When humans are treated with frequencies of around 20-50 Hz, bone strength can be improved by up to 20 per cent, the bones hardening in response to the pressure, suggesting that the cat’s purr could help with osteoporosis. The purr-frequencies of cats correspond to vibrational frequencies used to treat oedema, muscle strain, joint flexibility, shortness of breath and wounds. Healing in tendons and ligaments can be promoted with higher frequencies, closer to 120 Hz.
The range of a cat’s purr is known to relieve both chronic and acute pain. The vibrational stimulation of a cat’s purr also improves blood circulation. Old women with their cats on their laps maybe intuit healing properties, unconsciously treating osteoporosis and providing themselves with the furriest and warmest kind of pain management. Animals may use their voices to draw us into communication with them, body to body. Adult cats, in the main, do not mew: they learn to mew to us, and choose the sweeter, higher tones that we appreciate. Orangutans use physical gesture to communicate and, if they want to communicate with us, will use gestures they know we understand. We feel an urge to mimic birdsong, and composers such as Olivier Messiaen used human instruments to imitate the birds. From their earliest years, children try to ‘talk’ to animals, with the ‘woof’, ‘baa’ or ‘moo’ of domesticated animals. Another way in which animal sounds are healing is the infectious sound of their laughter. Bonobos, if tickled, make a choked he-he-he gasp of laughter, followed by a peeeep of pleasure. In fact, they laugh until they fart. Their laughter sounds like a human who laughs so hard that they are clean out of breath, but still can’t stop; they’re laughing in fits and in stitches, their entire body convulsed with vibrations as if their organs are being massaged by the bubbles of a hundred and one jacuzzis. Laughter, the best medicine. Try a kea parrot. Kea parrots are mischievous birds of New Zealand, with olive and orange colouring, and they are known as the clowns of the mountains. When they laugh, the sound cascades happily down an octave and triggers an automatic play-reaction in other keas, launching them into spontaneous aerial acrobatics, play-pouncing and tussling each other. It also elicits a helplessly happy playstate in my mind and I want to chirp along with them. When it comes to vitality, laughter is the dog’s bollocks. Or the chimpanzee’s clitoris. Female chimpanzees may laugh when they tickle their own fancy, using a stick to rub their genitals. Some 65 animal species are known to laugh while playing, including many primates, foxes, badgers, polecats, mongoose, cats, cows, kangaroos, elephants, whales and seals, the Australian magpie and the budgerigar. Rats, if tickled, chirp with laughter far above our hearing range but ultrasound microphones can record and replay them in a lower register. I’ve listened, and it is the sweetest sound, lit with chirrups and tiny squeaks. I have a friend who farts like a horse. More exactly, he farts as unselfconsciously as a horse, without the strange embarrassment we have come to associate with our own bodies’ emanations and eructations. Animals are not bourgeois about these things. ‘Be a good animal, true to your animal instinct,’ wrote D H Lawrence in The White Peacock (1911). Animals model a healthy way to be fully present in the body not as an appendage to our emotional and mental lives but as the very axis of vitality. They offer an almost irresistible invitation to be physical, to inhabit our skins and smells, to feel ourselves as pelt, paw and feather. Leave your scent-marking on the gatepost. Catch leaves in your snout. Know the belly-drum and the barrel-cock, the shanks and teats and horns. Like this, we can breathe in electric air, brimful, right to the very edge of our carnal selves, lips on the rim. Within ourselves, we become a carnival of carnality, parading our bodies, preening like peacocks. 
Chimpanzees perform ‘waterfall dances’, swaying rhythmically, stamping in the streams, throwing rocks to crash against others, drumming with the waterfall in uproarious exuberance, dancing with it, magnifying its force, their godbody on full alleluia. The chimp shakes the branches to rattling, getting everything going, struck by his primordial imperative: dance to the music of this moment. When the water churns with life, he salutes it with his own resounding spirit. Perhaps the primates feel the transcendence that a stamped rhythm can offer, the ecstasy of trance-dance, when the spirit is both embodied and ecstatic, known to spiritual traditions from Jesus as the apocryphal Lord of the Dance to the Sufi whirling dervish. Drumming up life, energy and vitality thrumming through the body’s instruments. Chimpanzees have also been seen performing ‘raindances’ and, on one filmed occasion on the shore of Lake Tanganyika, a jungle storm crackles brief but heavy and a chimp climbs a tree, swinging on a vine with a thumping rhythm. From a cloudy day in mid-Wales, watching safely online, my mirror neurones go AWOL, totally flipped out by the chimp’s drumming rhythm and stamping swing, and I am swaying side to side on the same vine, with a chimp I’ve never met in a place I’ve never seen. It feels as if he is waking and rousing the sleeping world to the dynamism of the storming dance of life. I am swept up into his carnival, hoot-laughing with intoxication. Because I am at a safe distance. But if I’d been there, I’d be petrified. In the video, the camera pans away from the chimp and we see that visiting humans are standing motionless in identical grey cagoules like drenched ghosts, perhaps thinking it might be risky to move a muscle or raise a peep in such a charged atmosphere. It’s magnificent to hear lions roaring when they can’t attack you: but how terrifying to hear the roar of a hungry or angry lion nearby. Then, your body is on high alert: it is being sounded and the thunder shudders through you. Terror. Cold sweat. Pallor. You know that you are prey, and you know that they know. My body no longer a carnival: I could be carne – meat. Our own bodies are our first experience of animals, and our physical reactions can happen before our minds respond. The disturbing animal sounds are fiercer tutors to our carnal beings. Rattlesnakes rattle us. Hyenas laughing remind us we may be the butt of their jokes. Foxes at night sound like babies being tortured. The huffing snort of an upset horse is agitating. Even animals that can’t harm us can disturb: I admit I get terribly startled by pheasants leaping up churring from the ground. A fly, too early on a summer morning, irritates me insanely with the sheer stupidity of its drone. A single mosquito can drive me half demented. This sense of our own carnality, which we learn both from within ourselves and implacably from the animals, is profoundly healthy. It teaches us that we are but one animal among many, showing us that human sound alone is insufficient and unhealthy because it is unnatural, unbalanced, and out of harmony with the All. The animals are collectively the ground bass of normal.
When human voices quake us, screaming a dissonance of lies, dishonesty like nails on a blackboard scraping out a malevolent seventh to spoil the sweetness of accord, and rasping the nastiest lie that only humans matter, the animal voices save us, their sounds are the tonic of the chord and also the medicinal tonic, the rightness of the physical world played in the home key. The animals call us into the belonging world where we matter and they matter, where matter matters, where the unashamed physicality of our being is embedded in the true world, shared and live. The orchestra of animals is an inveterate conscience reminding us of the glittering infinity, multivocal, a speckled plethora of polka-dotted sounds, tawny roars, turquoise peeps, tangerine chirrups, cavalier in the reckless plurality of life. Source of the article

GOATReads: Psychology

How to Stop Being a Victim of Your Past

Victimization comes from the outside. Victimhood comes from within.

“I saw something nasty in the woodshed!” It’s the tormented refrain uttered by Aunt Ada Doom in Stella Gibbons’s comedic novel-turned-movie Cold Comfort Farm. When she was a young girl, Ada encountered a deeply unsettling sight. Sixty-nine years later, she still has not recovered. She lives as a recluse on the second floor of the family home; her meals are brought on a tray left outside her door. Whenever someone implores her to leave the room, she moans, “I saw something nasty in the woodshed!” And it’s not only Ada who suffers. When her determined young niece Flora arrives at the farm, she asks what Cold Comfort Farm is like. “There’s a curse on the place,” she’s told. The seeds won’t grow, the soil is eroded, and the animals are barren. “All is turned to sourness and ruin.” When Flora asks why they don’t sell the farm and move on, she’s told the family can’t leave because the farm is their cross to bear, all because of what Ada saw. All too often, I see some version of this plot play out in real life. People have an experience—they suffer adversity, have a difficult start in life, or are confronted with challenges to their physical or mental health or performance—and that’s where they stop. They fixate on what happened or the obstacles in their path, and everything turns sour and ruined. They become attached to a belief that life is over, or at least severely limited. They become stuck. Several decades ago, psychologist Martin Seligman conducted seminal research on what he called “learned helplessness.” Starting with a series of studies on dogs who learned to stay imprisoned even when they were free to escape, he showed that adversity can cause us to give up hope that life can be different. If opportunity does arise, when in this state, we fail to capitalize on it or even recognize that it’s there. Humanistic psychotherapist and Holocaust survivor Viktor Frankl, author of Man’s Search for Meaning, describes when Allied forces arrived to liberate prisoners from concentration camps: Some rejoiced. Others, however, stumbled numbly through the gates only to pause, then turn around and wander back into the camp. It had become impossible for them to contemplate another reality. While some of us may gravitate toward learned helplessness more easily than others, research over many years suggests that it is our default response. But we can learn how to be hopeful.

Learning Hopefulness

Our future expectations of life are based mostly on our prior experiences. When we cultivate experiences that provide us with more empowered messages about life and our abilities, that becomes what we expect out of life. Much like revising a weather forecast, we can reprogram our expectations about what to expect from the world. Neuroscientist Lisa Feldman Barrett, the author of 7½ Lessons About the Brain, describes how our brains function to create our experience of life. Most of us believe that the brain is like a reporter. It takes in the information from our senses and uses that input to tell us what’s going on in the world around us. If the brain is a reporter, it’s not a particularly good one; it likes to turn in its stories before they’re fact-checked. It’s also not great about attribution. For instance, it can misinterpret body signals. If your body lacks energy, the brain might hastily announce: “We’re hungry! Give us food!” Yet that lack of energy might be due to dehydration, and what you really need is water.
Or you might experience a rapid heart rate and sweaty palms. Your thoughts declare, “We’re afraid!” In reality, you’re about to step onstage to deliver a presentation for which you’re well prepared. The truth is that you’re more excited than anxious. What does this have to do with overcoming adversity and learning hopefulness? When our brain tells us that life will always be like this, it’s not stating a fact, it’s making a prediction. Instead of being a reporter, your brain is a prediction machine. Let’s say you grew up in an unstable and unpredictable environment, in which case your brain may be sculpted to forecast a life of instability. You may overgeneralize and be acutely attuned to cues of instability: “The world is unsafe!” Coupled with this, your brain might ignore all the good things happening around you, including important contextual information to tell you what’s going on. Fear learning and fear unlearning happen in separate parts of the brain. Fear is automatically learned, but fear must be actively unlearned. We have to choose a different way of living, and we can start by taking responsibility for unlearning fear—or past patterns—which can take a lot of inner work. Many of us equate responsibility with saying it’s our fault, but that’s not what I mean. As Barrett notes, “Sometimes we’re responsible for things not because they’re our fault, but because we’re the only ones who can change them.” When you take responsibility, it’s not about saying that you’re to blame—that your mother wasn’t a present parent, that you’re neurodivergent, that you were assaulted, or that you have a predisposed temperament to experience stress more intensely. It’s about saying, “This is the hand I’ve been dealt, and I will play it out. I will make active, intentional choices about how I engage with life.” The only person who can determine what you do is you. Aunt Ada chose the passive route, demanding that her family dote on her and bend to her every whim. This protected her status as one who was wronged; however, it also kept her from living a rich and full life. Then, she made a different choice. One day, young Flora knocks on the door, and Ada finally bends to Flora’s repeated pleas to engage with her. “I saw something nasty in the woodshed!” she says. “What was it?” Flora asks. “I don’t know. I was little,” Ada replies. “Something terrible!” “Are you sure?” Flora asks, prompting Ada to revisit her potentially faulty memory. “I’m sure!” Ada declares. “Or maybe the potting shed. Or the bicycle shed.” Maybe the story isn’t the story after all. Maybe what she saw wasn’t so horrible. Maybe it was. This brief moment of questioning raises the possibility that this memory—whether faulty or not—doesn’t have to hold Ada. A belief that’s been reinforced for 69 years suddenly seems less certain. The question becomes: Have I made other assumptions that may not be true? Taking responsibility involves recognizing that our brains are just doing the best they can with the information they have. We can make a concerted effort to feed them different information by having more varied experiences. And we can become more critical of the negative things our brains tell us. We can second-guess the messages we get, not only about what’s possible in life but also about what we think and feel in any moment. You can be a more informed user of your brain and not simply accept everything it hands you, because a rather surprising amount of the time, it’s wrong. 
Our level of happiness in life correlates strongly with our sense of responsibility and agency—specifically, with something called our locus of control. When we have an internal locus of control, we believe that even when life hands us a boatload of lemons, we still can make sweet lemonade. When we have an external locus of control, we believe that factors beyond our grasp dictate our destiny. When we’re in this headspace, we see the world in more negative terms, making it easier for our darker emotions to get the best of us. Not surprisingly, people with an internal locus of control are likely much happier. How can we make this switch? One actionable step in turning down the volume of our emotions and seeing things more clearly is to ask ourselves what, not why.

What, Not Why

Organizational psychologist Tasha Eurich studies the insights we have about ourselves, including why some of us possess high self-awareness while others struggle. She and her team studied “self-awareness unicorns,” people with low to moderate self-awareness who learned to become more self-aware. In analyzing transcripts of their conversations, the team discovered an interesting speech pattern: The participants often asked themselves what questions, but rarely described engaging with why questions. One participant, a 42-year-old mother, explained: “If you ask why, [I think] you’re putting yourself into a victim mentality. When I feel anything other than peace, I ask myself: What’s going on? What am I feeling? What is the dialogue inside my head? What’s another way to see this situation? What can I do to respond better?” As Eurich observes, “Why questions can draw us to our limitations. What questions help us see our potential. Why questions stir up negative emotions. What questions keep us curious. Why questions trap us in our past. What questions help us create a better future.” Consider that you don’t sleep well because you’re tending to a sick pet. If you feel sad and ask yourself why, your brain will be more than happy to offer all kinds of answers. “Why am I sad? What kind of question is that? The world is in ruins, that’s why!” Instead, asking yourself what you’re feeling drills down to a more precise answer: “I feel tired and worried about Mr. Fluffy.” We can name what we’re feeling and not over-identify with our emotions. This distancing trick helps to keep us from getting overwhelmed by what we’re experiencing. From these observations, we can construct a useful response. First, you can have some compassion for yourself—it’s hard to have a sick pet, and it’s hard when you lose sleep. Then you can take steps—call the vet and take a nap. The world isn’t coming to an end. Psychologist and Holocaust survivor Edith Eger observed that victimization comes from the outside world, but victimhood comes from the inside. According to Eger, at some point, we will suffer some kind of affliction or abuse caused by circumstances over which we have little or no control. No one can make you a victim but you. We become victims not because of what happens to us but because we choose to hold onto our victimhood. Keeping ourselves locked up is an inside job.

Obsessed With Trauma

Trauma is real, but we can heal from it. To many, that’s an unwelcome truth. Some will fight tooth and nail to defend the idea that trauma is permanent. But why? Trauma can leave indelible marks on us, but as research has consistently shown, adversity can also be a powerful lever for learning and development. Both of these can be true at the same time.
Our experiences can permanently affect us, but we can use our challenges to become stronger. Edith Shiro, a clinical psychologist who has spent decades helping people not only survive severe trauma but grow as a result, makes a distinction between recovering from trauma (returning to the state you were in before you experienced trauma) and experiencing post-traumatic growth (having a life that’s better than before). She notes that trauma is complicated. The road to post-traumatic growth requires conscious awareness of our intention to move beyond the trauma without dismissing or downplaying the difficulties. Transformation is possible but can’t be rushed. Suffering is real; people deserve to have it acknowledged, and they deserve to be supported in their healing. But if we get stuck in our suffering, then we’re no longer the narrator in our story or the hero—we’re simply a victim. Sadly, much of our current culture supports and even encourages this. Certainly, some of this is well-intentioned, and we must recognize and validate people’s experiences. But somewhere in this, we’ve crossed a line, assigning a special social status to those who’ve suffered, and this has started to backfire. We’ve begun to disempower the people we’re praising because to keep that status, they must remain victims, even adopting that label as part of their identity. To move on would mean moving out of this protected or celebrated class, thus losing valuable social capital. As podcaster and author Tim Ferriss—who recently opened up about his history of sexual abuse—says, it has become disturbingly common to “trauma vomit” on someone within 10 minutes of meeting them, sharing all the ways the world has wronged you. We’re selling ourselves short. As George Bonanno has reported, we’re pretty damn resilient. That doesn’t mean that everything bounces off us, but overall, we can recover from even the most difficult experiences. Yes, some people do suffer a full derailment after a spouse dies, for instance, but most can get back on track after such a tragedy. In our mostly well-intentioned efforts to name real challenges and help people get real support, we’re inadvertently catching people in a trap that can be very hard to get out of. By overemphasizing trauma and attributing every normal challenge we might experience in life to it, we’ve gone down an adversity rabbit hole. For “trauma response” to mean anything, it can’t mean everything. Unfortunately, we’ve oversimplified things in our quest to compress a potentially complex set of physiological and emotional responses into material short and snappy enough to share via a tweet or a 20-second video. And there’s big money in trauma right now. Loads of folks who aren’t nearly qualified to be speaking on such complex topics are raking in followers by convincing people that absolutely everything wrong in their lives is due to trauma. More often than not, they also offer a—usually expensive—solution. Sometimes, though, the solution is simply coddling the victim. The following might sound like an overboard caricature, but it’s not. Here is a paraphrase of a post from a therapist: “Man, what happened to you was wrong, and you would be justified to do nothing but sit on your bed and cry for the rest of your entire life.” If this guy didn’t feel terrible about himself before, he sure does now. He also probably believes there’s no chance for him to ever get past it. But we want better for him. If you’ve experienced trauma, this is not what you need to hear. 
You can sit in your room for the rest of your life like Aunt Ada Doom, but that’s a choice. And I question the motivation of anyone who encourages you to make that choice in the name of “compassion.” We’ve tasked one word with far too much work. We’re making trauma do the heavy lifting of describing every adverse event a human might experience. And that’s extremely disempowering. Our propensity to see trauma lurking around every corner and to self-diagnose with mental health problems has the effect of pathologizing everyday life. Many of us now view ourselves as hopelessly traumatized, which we interpret as damaged beyond all repair. Thank goodness that’s not the case.

Your Past Will Never Change, But You Can

In many ways, how we interpret circumstances relies at least in part on the language we have—or don’t have—to characterize them. If the only term we have to describe the challenges we’re facing is trauma, then every adverse event becomes traumatic. Words matter because they bring with them an entire array of beliefs. If you’re a college student struggling with a certain topic, that can be an isolated experience. But if you were “traumatized” by how hard the class was for you, that indicates a deep and lasting effect that may have rewired your brain and shifted your entire perception of life. And by employing that language, you may shift your perceptions to believe it. See the problem here? Of course, trauma is real, but it would benefit us to have a broader language we can invoke when describing challenging experiences. This is linked to emotional granularity, where we describe our feeling states with nuanced terms that more accurately characterize what we’re experiencing. We’re not just sad; we’re disengaged, disenchanted, worn out, and so on. Interestingly, research shows that people with more emotional granularity—who can differentiate more specifically what they’re feeling and label their experiences with more precise language—tend to be less reactive to negative circumstances and have greater psychological resilience. If we can connect with and describe our experiences beyond this one big word, it can help us relate to what’s happening in subtle and meaningful ways. Our mindset and capacity to deal with challenges are not predetermined—they can be learned. Take a good, honest look at what you’re working with and start making some choices and real changes; write a new story. The existential psychiatrist Irvin Yalom describes one of his patients as a strong and resourceful woman who was the head of a major industrial company. As a child, she suffered vicious and continual verbal abuse from her father. In one session, she described a daydream she had where she was seeing a therapist who had the technology to cause total memory erasure in a patient. In her daydream, she was asked by the therapist if she would like to do a total erasure of all memory of her father’s existence. While this sounded great, she told Yalom it was a tough call. Her response: At first, it seemed like a no-brainer: My father was a monster who terrified me and my siblings throughout our childhood. But, in the end, I decided to leave my memory alone and have none of it erased. Despite the wretched abuse I suffered, I have succeeded in life beyond my furthest dreams. Somewhere, somehow, I have developed a lot of resilience and resourcefulness. Was it despite my father? Or because of him?
Yalom notes that this fantasy was the first step in a major shift toward forgiving her father and coming to terms with the inalterability of her experience; he added, “Sooner or later, she had to give up hope for a better past.” To orient ourselves toward growth, just like this patient, we must accept what has been and turn toward the future— not by ignoring our past but by processing it meaningfully and using it as the seeds to become the person we wish to become, sometimes even changing our narrative about trauma. That’s exactly what Aunt Ada Doom did. With some help from young Flora, she realized that there was a whole world out there she was missing. She accepted that she couldn’t unsee whatever she saw in the woodshed—or the potting shed, or wherever it was—and realized that while she sat confined to her upstairs room dwelling on it, life was passing her by. She got up, combed her hair, put on her fancy clothes, and flew to Paris. Moving on doesn’t mean that whatever happened and what you experienced doesn’t matter. Of course, it does. And it always will. It means that you no longer allow past experiences to control how you experience your life right here, right now. Moving on also doesn’t mean that you’re accepting blame. It means that you’re accepting responsibility. It means you’re deciding that you’re going to slide on over into the driver’s seat and take it from there. Yes, crappy experiences can change your brain. But you know what else can change your brain in the way you want it to be changed? You. Source of the article

At the Mysterious Boundary Between Waking Life and Sleep, What Happens in the Brain?

Neuroscientists studying the shifts between sleep and awareness are finding many liminal states, which could help explain the disorders that can result when sleep transitions go wrong

The pillow is cold against your cheek. Your upstairs neighbor creaks across the ceiling. You close your eyes; shadows and light dance over your vision. A cat sniffs at a piece of cheese. Dots fall into a lake. All this feels very normal and fine, even though you don’t own a cat and you’re nowhere near a lake. You’ve started your journey into sleep, the cryptic state that you and most other animals need in some form to survive. Sleep refreshes the brain and body in ways we don’t fully understand: repairing tissues, clearing out toxins and solidifying memories. But as anyone who has experienced insomnia can attest, entering that state isn’t physiologically or psychologically simple. To fall asleep, “everything has to change,” says Adam Horowitz, a research affiliate in sleep science at MIT. The flow of blood to the brain slows down, and the circulation of cerebrospinal fluid speeds up. Neurons release neurotransmitters that shift the brain’s chemistry, and they start to behave differently, firing more in sync with one another. Mental images float in and out. Thoughts begin to warp. “Our brains can really rapidly transform us from being aware of our environments to being unconscious, or even experiencing things that aren’t there,” says Laura Lewis, a fellow sleep researcher at MIT. “This raises deeply fascinating questions about our human experience.” It’s still largely mysterious how the brain manages to move between these states safely and efficiently. But studies targeting transitions both into and out of sleep are starting to unravel the neurobiological underpinnings of these in-between states, yielding an understanding that could explain how sleep disorders, such as insomnia or sleep paralysis, can result when things go awry. Sleep has been traditionally thought of as an all-or-nothing phenomenon, Lewis says. You’re either awake or asleep. But the new findings are showing that it’s “much more of a spectrum than it is a category.”

Riding the brain wave

In the 1930s, the millionaire Wall Street tycoon, lawyer and amateur scientist Alfred Lee Loomis liked to scan the brains of his guests as they napped in his mansion north of New York City. He was pioneering the use of a machine known as an electroencephalograph to study sleep. Every napper wore a cap with electrodes, which could noninvasively measure their brain activity. The machine would use a pen to physically scribble waves with peaks and troughs onto paper scrolling at a rate of one centimeter per second to create an electroencephalogram (EEG). The waves represented the gross activity of neurons. As we fall asleep, neurons start to synchronize, which means they fire together and go silent together. (No one knows exactly why this happens.) As a person sleeps, this synchrony grows, producing brain waves that are lower in frequency and higher in amplitude. Over the course of a night’s sleep, the waves will speed up and slow down in a cyclical fashion—all night, every night. Loomis categorized the different types of brain waves into what became known as sleep states and created a nomenclature to describe the phases of unconsciousness. Electroencephalography catalyzed sleep research. Measuring the waves recorded on an EEG became a common way for neuroscientists to infer a person’s brain or sleep state without invasive surgery.
It became the go-to method for understanding both the activity of neurons as we sleep and the subjective experiences, such as dreams, that they create as we move through different forms of sleep consciousness. In the early 1950s, the physiologist Nathaniel Kleitman at the University of Chicago and his student Eugene Aserinsky first described the sleep stage characterized by rapid eye movement, or REM sleep—a cycle the brain repeats multiple times throughout the night, during which we tend to dream. In REM sleep, brain waves are faster than in non-REM sleep and look more like those produced when we’re awake. A few years later, Kleitman and the sleep researcher William Dement, also at the University of Chicago, put together an improved sleep-stage schema: four non-REM sleep stages, based on Loomis’ original work, and one REM stage. A modified version (with the last two non-REM stages combined into a single stage) is still in use today. However, by creating sharp boundaries, the schema obscured the subtleties of what happened between the stages. It became a norm in the field that “you have three options: You are either awake, in non-REM [sleep] or in REM sleep,” says Thomas Andrillon, a cognitive neuroscientist at the Paris Brain Institute. Though some evidence indicated that the brain could exist in a state that mixed sleep and wakefulness, it was largely ignored. It was considered too complicated and variable, counter to most researchers’ tightly defined view of sleep. But little by little, a new wave of neuroscientists started questioning this status quo, Andrillon says. And they realized, “well, maybe that’s where things are interesting, actually.”

Drifting off

Salvador Dalí might agree. Around the time that Loomis was conducting EEG experiments in his mansion, the surrealist artist was experimenting with his own transitions into sleep. As he described it in his 1948 book, 50 Secrets of Magic Craftsmanship, he would sit in a “bony armchair, preferably of Spanish style,” while loosely holding a heavy key in one palm above an upside-down plate on the floor. As he drifted off, his hands would slacken—and eventually, the key would fall through his fingers. The sudden clack of the key hitting the plate would wake him. Convinced that being aroused during this period revived his psychic being and boosted creativity, Dalí would then sit down and start painting. Other great minds, including Thomas Edison and Edgar Allan Poe, shared his interest in and experimentation with what is known as the hypnagogic state—the early window of sleep when we start to experience mental imagery while we’re still awake. In 2021, a group of researchers at the Paris Brain Institute, including Andrillon, discovered that these self-experimenters had gotten it right. Waking up from this earliest sleep stage, known as N1, seemed to put people in a “creative sweet spot.” People who woke up after spending around 15 seconds in the hypnagogic state were nearly three times as likely to discover a hidden rule in a mathematical problem. A couple of years later, another study, led by Horowitz at MIT, found that it’s possible to further boost creativity in people emerging from this state by guiding what they dream about. It’s not exactly clear why hypnagogia appears to increase creativity.
One possibility is that the process of falling asleep “requires us to release control over our thoughts,” says Karen Konkoly, who studied lucid dreaming as a postdoctoral fellow at Northwestern University and now consults for the sleep startup Dust Systems (co-founded by Horowitz). “As our executive control over our mind relaxes, we can perhaps access a broader semantic network of information, which could help creativity.” Andrillon agrees that the sleep transition produces a state of “freewheeling consciousness” that unshackles the brain from its regular ways of thinking. Like houses slowly shutting off their lights as a town falls into slumber, the brain gradually turns to night mode. Sleep starts at the center of town: Neurons deep in the brain, such as those in the ancient control center known as the hypothalamus, fire signals to suppress arousal circuits. Nearby brain regions such as the thalamus, which relays information from your senses to the rest of your brain, shut off first. Minutes later, the cortex, which is involved in more conscious, high-order thinking, follows suit. It shuts down from the front of the brain, where planning and decision-making occurs, to the back, where senses such as vision are analyzed. During this transition, as some parts of the brain shut down while other parts remain awake, we can sometimes experience dreamlike thoughts. In this hypnagogic state, many people are “one foot in dreams and one foot in the world,” Horowitz says. Some people hear things; others have visions. These are like dreams but lighter: projections against the scaffold of the real world, which is still in our grasp. “We could think that there’s a function” to these mental experiences, says Sidarta Ribeiro, a neuroscientist at the Federal University of Rio Grande do Norte in Brazil. “But maybe there isn’t. Maybe it’s a byproduct of what’s going on in the brain.” With your eyes shut and your senses powering down, you’re no longer getting much input from the outside world. But you’re still getting signals from inside the brain, maybe remnants of the day’s experiences. Ribeiro and his team recently reported that a person’s daytime experience can show up in hypnagogic imagery early in the process of drifting to sleep, adding to other studies that made similar findings. Some researchers are using this state between sleep and wakefulness to study the nature of consciousness itself. “If you can track what’s going on in the brain when you go from those two opposite worlds, that would give you a lot of insights as to how consciousness fluctuates,” says Nicolas Decat, a graduate student studying sleep and consciousness at the Paris Brain Institute. In preliminary research that’s not yet peer-reviewed, Decat used an EEG to record the brain waves of more than 100 people as they were falling asleep. Following the techniques of Dalí and Edison, he had participants hold bottles so that as they drifted off, the bottles would fall and make a sound to wake them up. By comparing the participants’ brain waves with their self-reports about what crossed their minds, Decat realized that some dreamlike imagery had occurred while they were technically awake, and some voluntary thinking had occurred while they were technically sleeping. For example, one participant reported ants crawling on her back, even as the EEG documented the fast and frequent brain waves of wakefulness. 
Another reported having conscious thoughts about how they were falling asleep while they were technically asleep, based on slow and infrequent brain waves. The unpublished data suggests that sleep states may not be the best way to categorize sleep consciousness. “Being awake or asleep does not fully determine what crosses your mind,” Decat says. The data “challenges the popular view that when you’re awake, you have certain thoughts. When you’re asleep, you have dreamlike imagery. It’s not necessarily like that.” The transition to sleep can last for tens of minutes. That means it’s fairly easy for researchers to study—far easier than the process of waking up, which happens much more quickly and in a less controlled way. It’s much harder to predict when someone’s going to wake up.

Good morning, sunshine

Aurélie Stephan, a postdoctoral researcher at the University of Lausanne in Switzerland, grew interested in the wake-up process when she was studying a phenomenon known as paradoxical insomnia. Unlike people with insomnia, who are up all night without sleeping, people with paradoxical insomnia believe that they’re up all night, even though their brain waves show that they’re asleep. “They sleep as much as good sleepers … so it’s a mystery,” Stephan says. To understand this problem, she needed to first study a more typical wake-up process. When a good sleeper wakes up, what is their brain doing? In a recent study, she examined more than 1,000 different awakenings or arousals—transitions from being asleep to being awake—on a time scale of seconds. She observed a curious slow brain wave in the data from good sleepers as they woke from non-REM sleep. Based on past animal studies, Stephan and her team hypothesized that this slow wave emanated from a spot deep in the brain. After this signal, she saw the cortex wake up (as brain waves grew faster) from the front, which manages executive function, to the back, where vision and other senses are processed. When people woke from REM sleep, their cortex woke up in the same way, but without the preceding slow wave. The presence of this unique slow wave was correlated with how people felt when they awoke, Stephan found. Participants who showed the signal woke up less drowsy than those without it. This suggested, but didn’t prove, that this might be an arousal signal that assists the wake-up process, Stephan says. “They have done a very good job of finding this signature of the sleep-to-wake transitions,” says Luis de Lecea, a molecular biologist who studies sleep transitions in animals at Stanford University and was not involved with the study. They created a “detailed portrait,” says MIT’s Lewis, who was also not involved with the work, and showed why “we don’t always wake up the same way.” Still, EEG readings are coarse and can’t probe the deep brain or provide great detail. However, previous studies that used fMRI scans and electrodes unearthed some of the deeper mechanisms from which such arousal signals might arise. They found that neural signals for waking begin in deep, inner regions of the brain, such as the hypothalamus and the brainstem. These areas wake up the thalamus, which projects the instructions to the cortex. Though typically faster than falling asleep, waking up can also take some time. Stephan’s sleep signature took a few seconds to travel from the front of the cortex to the back. But recovering consciousness and cognitive abilities, and shedding all sleep inertia, can take minutes to an hour, she says.
This study and others also showed that slow waves, usually associated with sleep, can sometimes indicate arousal. The lines are blurry. Even when we think we’re fully awake and wandering about the world, parts of our brain could be sleeping. This phenomenon, known as local sleep, is thought to occur so that overworked neurons in the brain can rest and be refreshed. It is not unlike how dolphins can sleep with only one brain hemisphere at a time or how some birds sleep on the wing. Sometimes when we’re really tired, some neurons need to refresh and recharge, even if we’re still up and going about our day. “These people are awake. They have their eyes open. They can be even doing things,” Andrillon says. And yet parts of their brain are undergoing the classic slow waves of sleep. Given that, local sleep challenges what “sleep” actually is.

Troubled transitions

As we wake up and fall asleep, or even move between sleep states, different types of waves happen at the same time, as neurons synchronize and desynchronize in different regions, in a cacophony of rhythms. This mosaic can lead to experiences such as hypnagogia, lucid dreaming and sleep disorders. “Sleep disorders are incredibly common,” Lewis says. “They really are often defined by problems with the state switching.” These disorders might manifest as insomnia, where people don’t fall asleep properly, or as night terrors, sleep paralysis or sleepwalking, where they don’t awaken as expected. In many cases, parts of the brain are awake when they should be sleeping, or vice versa. Insomnia is fundamentally a difficulty with initiating the transition into sleep or maintaining it. In sleep paralysis, the cortex wakes up before deeper brain regions that control the body, resulting in full consciousness without the ability to move. In paradoxical insomnia, the potential arousal signal Stephan observed in her new study is weak, “so instead of waking them up completely, it makes them feel awake,” she says. Her team found the same signal in sleepwalkers, but in those cases, it happened “in an inappropriate time window” during deep sleep, she says. They also found that the brain activity of sleepwalkers is similar to that seen during dreaming, suggesting that both states result from similar mechanisms of sleep consciousness. Decat is continuing to probe what that sleep consciousness looks like. He is running a survey to learn more about the mental experiences people have while falling asleep. Those thoughts and mental images can be hard to remember, because to do so, we have to wake up. Sometimes we wake up right as we’re falling asleep or from the depths of our sleep cycle—times we’re not really supposed to. Maybe it’s a bedmate turning in their sleep that disturbs us. Maybe it’s the clink of keys on a hard floor. Maybe it’s the brain itself, miscalculating when it’s supposed to arouse certain regions. Your sleep consciousness is disrupted. You pull back from the edge of sleep, and your eyes blink open. Source of the article

GOATReads: Sociology

Safety is fatal

Humans need closeness and belonging but any society that closes its gates is doomed to atrophy. How do we stay open? Many of us will recall Petri dishes from our first biology class – those shallow glass vessels containing a nutrient gel into which a microbe sample is injected. In this sea of nutrients, the cells grow and multiply, allowing the colony to flourish, its cells dividing again and again. But just as interesting is how these cells die. Cell death in a colony occurs in two ways, essentially. One is through an active process of programmed elimination; in this so-called ‘apoptotic’ death, cells die across the colony, ‘sacrificing’ themselves in an apparent attempt to keep the colony going. Though the mechanisms underlying apoptotic death are not well understood, it’s clear that some cells benefit from the local nutrient deposits of dying cells in their midst, while others seek nutrition at the colony’s edges. The other kind of colony cell death is the result of nutrient depletion – a death induced by the impact of decreased resources on the structure of the waning colony. Both kinds of cell death have social parallels in the human world, but the second type is less often studied, because any colony’s focus is on sustainable development; and because a colony is disarmed in a crisis by suddenly having to focus on hoarding resources. At such times, the cells in a colony huddle together at the centre to preserve energy (they even develop protective spores to conserve heat). While individual cells at the centre slow down, become less mobile and eventually die – not from any outside threat, but from their own dynamic decline – life at the edges of such colonies remains, by contrast, dynamic. Are such peripheral cells seeking nourishment, or perhaps, in desperation, an alternative means to live? But how far can we really push this metaphor: are human societies the same? As they age under confinement, do they become less resilient? Do they slow down as resources dwindle, and develop their own kinds of protective ‘spores’? And do these patterns of dying occur because we’ve built our social networks – like cells growing together with sufficient nutrients – on the naive notion that resources are guaranteed and infinite? Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they scale up their ‘inclusion criteria’. They value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice. Viewed this way, the comparison seems compelling. In crisis, the colony introverts; collapsing inwards as inequalities escalate and there’s not enough to go around. In a crisis, as we’ve seen during the COVID-19 pandemic, people define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’. Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. 
It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists. In recent years, there’s been growing attention to what drives group size – and what the implications are for how we build alliances, how we see ourselves and others, and who ‘belongs’ and who doesn’t. Of course, with the advent of social media, our understanding of what a group is has fundamentally changed. The British anthropologist Robin Dunbar popularised the question of group size in his book How Many Friends Does One Person Need? (2010). In that study, he took on the challenge of relating the question of group size to our understanding of social relationships. His interest was based on his early studies of group behaviour in animal primates, and his comparison of group sizes among tribal clans. Dunbar realised that, in groups of more than 150 people, clans tend to split. Averaging sizes of some 20 clan groups, he arrived at 153 members as their generalised limit. However, as we all know, ‘sympathy groups’ (those built on meaningful relationships and emotional connections) are much smaller. Studies of grieving, for example, show that our number of deep relationships (as measured by extended grieving following the death of a sympathy group member) reaches its upward limit at around 15 people, though others see that number as even smaller at 10, while others, still, focus on close support groups that average around five people. For Dunbar, 150 is the optimal size of a personal network (even if Facebook thinks we have more like 500 ‘friends’), while management specialists think that this number represents the higher limits of cooperation. In tribal contexts, where agrarian or hunting skills might be distributed across a small population, the limiting number is taken to indicate the point after which hierarchy and specialisation emerge. Indeed, military units, small egalitarian companies and innovative think-tanks seem to top out somewhere between 150 and 200 people, depending on the strength of shared conventional understandings. Though it’s tempting to think that 150 represents both the limits of what our brains can accommodate in assuring common purpose, and the place where complexity emerges, the truth is different; for the actual size of a group successfully working together is, it turns out, less important than our being aware of what those around us are doing. In other words, 150 might be an artefact of social agreement and trust, rather than a biologically determined structural management goal, as Dunbar and so many others think. We know this because it’s the limit after which hierarchy develops in already well-ordered contexts. But we also know this because of the way that group size shrinks radically in the absence of social trust. When people aren’t confident about what proximate others are mutually engaged in, the relevant question quickly turns from numbers of people in a functioning network to numbers of potential relationships in a group. So, while 153 people might constitute a maximum ideal clan size, based on brain capacity, 153 relationships exist in a much smaller group – in fact, 153 relationships exist exactly among only 18 people. Dunbar’s number should actually be 18, since, under stress, the quality of your relationships matters much more than the number of people in your network.
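(To make the arithmetic behind that figure explicit: the number of possible two-person relationships in a group of n people is n(n − 1)/2, so a group of 18 yields 18 × 17 / 2 = 153 distinct ties – the same number Dunbar reached when averaging clan sizes.)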
The real question is not how many friends a person can have, but how many people with unknown ideas can be put together and manage themselves in creating a common purpose, bolstered by social rules or cultures of practice (such as the need to live or work together). Once the question is considered this way, anyone can understand why certain small elite groups devoted to creative thinking are sized so similarly. Take small North American colleges. Increasingly, they vie with big-name universities such as Harvard and Stanford not only because they’re considered safer environments by worried parents, but because their smaller size facilitates growing trust among strangers, making for better educational experiences. Their smaller size matters. Plus, it’s no accident that the best of these colleges on average have about 150 teaching staff (Dunbar’s number) and that (as any teacher will know) a seminar in which you expect everyone to talk tops out at around 18 people. But what do we learn from these facts? Well, we can learn quite a bit. While charismatic speakers can wow a crowd, even the most gifted seminar leader will tell you that his or her ability to involve everyone starts to come undone as you approach 20 people. And if any of those people require special attention (or can’t tolerate ideological uncertainty), that number will quickly shrink. In the end, therefore, what matters much more than group size is social integration and social trust. As for Facebook’s or Dunbar’s question of how many ‘friends’ we can manage, the real question ought to be: how healthy is the Petri dish? To determine this, we need to assess not how strong the dish’s bastions are (an indicator of what it fears) but its ability, as with the small North American college, to engage productively and creatively in extroverted risk. And that’s a question that some other cultures have embraced much better than even North American colleges. On the Indonesian island of Bali, a village isn’t a community unless it has three temples: one for the dead ancestors and things past (pura dalem); a community temple that manages social life (pura desa); and a temple of origin (pura puseh). This last temple is what literally ties an individual self to a particular place. For the word puseh means ‘navel’. To this last temple every Balinese is connected by a spiritual umbilicus, and every 210 days (that is, every Balinese year) a person thus tied is obliged to return physically to honour that connectedness, becoming again a metaphorical stem cell: returning to their place of origin, examining their patterns of growth, and using their ‘stem’ in the interests of restructuring a healthier future. The stem cell, of course, is the recursive place where embryologists gather cells to regrow us more healthily; and, in Bali, extroversion is health-enhancing only once we bring back what we learn to where we began. Neglecting this originary connection can cause grave harm, and being far removed, or abroad for an extended period, risks snapping that cord if stretched too far, severing the very lifeline to one’s own past, present and future. But why stretch your umbilicus at all if potential outcomes might be dire? Because boundary exploration helps us define who we are; because the unfamiliar makes us conscious of what’s central; because we need to approach things that are unusual if we’re to diversify and grow. 
It’s the idea behind the avant-garde (literally, the advance guard) – the original French term referred to a small group of soldiers dispatched to explore the terrain ahead so as to test the enemy. You could stay put and remain ignorant, or go too far and get killed. Alternatively, you might go just far enough to learn something and come back to describe what you’d witnessed. It’s a simple idea, part of every vision quest, and filled with deep uncertainty. Indeed, the very uncertainty of exploration is critical to adaptation and growth. Our shared values (the ‘cultures’ we think we know at the centre of the Petri dish) are always explicitly defined at the peripheries, where we become more aware of our assumptions. And if there’s no wall or Petri dish to contain us, we need to have that umbilicus: because we need a device to measure how far is too far. This being the case, it follows that curiosity is critical to rethinking what we take for granted. It can make us better informed, but it can also get us into trouble. When will the umbilicus snap? How far is too far? These are good questions that once again might be illuminated by a biological example. The human immune system is the best one I know. For a long time, science told us that immunity was about defending ourselves from foreign invaders. This model explains the way we resist becoming host to lots of foreign things that could destroy us – it’s how the body resists becoming a toxic dump site. It also animates the way we teach schoolchildren about washing hands and, today, donning masks and remaining socially distant. Setting aside its inherent xenophobia (keep out all things foreign), the defence model works well enough. But there’s a big problem with this simple idea: we need knowledge of the foreign landscape and its inhabitants in order to adapt. Indeed, we build immunity on the back of dendritic (presentation) cells that, like the military advance guard, bring back to our bodies specific information that we assess and respond to. While it’s true that, in this sense, we’re reacting ‘defensively’ when we adapt, that’s pretty much where the utility of the military metaphor ends – and where modern immunity begins to challenge what immunologists have defined for decades as the ‘recognition and elimination of nonself’. The metaphor fails because viruses are not living invaders. They are just information that can sit around like books in our genetic library until someone reads them, revising what they mean through some editorial updating, and then bringing the information they offer to life once again, in a new form. Moreover, like books in a lending library, some viruses remain unread, while others are widely used. Some are dusty, some dog-eared. That’s because viruses proliferate only when people congregate in reading groups and animate them; where what those groups attend to is socially, not biologically, driven. Like those books, viruses are just bits of data that our bodies interpret and share with others, for better or worse. This is a process that happens every day, and mostly for the better, especially when viral intelligence helps us to adapt, and prevents us (like isolated tribes) from dying of the common cold every time cruise ships or truckers from abroad show up at our ferries and ports. But there’s another reason that invasive images fail to explain the science. 
In 1994, the immunologist Polly Matzinger introduced an immune system model in which our antibodies don’t respond solely as a matter of defence. They respond, in her view, because antigen-presenting (dendritic) cells stimulate immunologic responses. Although the immune system remains defensive in this view, Matzinger’s argument shifted the debate ever so slightly from levels of self-preservation to information-presentation – from excluding outsiders to understanding them. The idea was radical in immunologic science, but mundane in anthropology. Countless anthropological arguments saying much the same thing about the self and awareness of ‘the other’ had been around for more than a century (and obvious to other cultures for millennia), but the assault on self-preservation through extroverted risk finally entered bench science with Matzinger, appearing not only as ‘new’, but in a form familiar enough to bench scientists to sound plausible. Now, if belatedly, immunology was poised to question both Darwinian preservation and selfishness in one go, as well as its own otherwise unexamined assumptions about the social and biological exclusion of ‘nonself’. Matzinger’s idea got traction, its shift from defence to curiosity calling attention to the immune system’s role in assessing the unknown (as opposed to shunning the outside). Still, the argument would in any case be revised by three key realities. The first, which didn’t take root among theoretical immunologists until regenerative medicine emerged at the end of the 1990s, is that viruses are less invaders than informants. I’d picked up this idea from the Balinese whom I worked with during the AIDS crisis in the 1980s. But it wasn’t limited to them. Other, less ‘Cartesian’ Indigenous groups, such as the Navajo, share this understanding. The second truth, which came from the same cross-cultural experience, was that immunology was stuck in self-interest: it couldn’t fathom why a self would reach out in an extroverted and potentially dangerous manner instead of only selfishly defending its identity. Scientists were slowly awakening to a fact well known in many non-Darwinian settings: namely, that externality (extroversion) matters. So does reciprocity – as anthropologists well know. External information has to resonate with ‘self’ – in this case, with cells that your body already makes – in order to bind, transcribe and replicate. That’s the key function of our immune cells, which are made mostly in the thymus (T cells) and bone marrow (B cells). Our bodies make millions of novel cells in these mutation factories, so many in fact that we can’t even count them. Like experimental radio beams sent into outer space, these cells send out signals, functioning as search engines as much as systems of defence. The point here is that thinking of the immune system only as a defensive fortress-builder seriously misses what it’s actually doing. Because the immune system is also, and quite literally, your biological intelligence. It needs the ‘infection’ of foreign bodies to help you develop and survive. This same need also explains how vaccines protect us from biological meltdown. Extroversion is therefore not only needed as a defence strategy, as Matzinger would have it, but as a means of engaging with and also creating environmental adaptations, even if these encounters prove life-threatening for some. 
We see this need manifest itself graphically in the present COVID-19 crisis – less by what is happening scientifically, than by what is happening socially. A recent report on wellbeing and mental health by the Brookings Institution attempts to deconstruct the apparent paradox of reported feelings of hope among otherwise disadvantaged and openly disenfranchised populations in the United States during the pandemic. ‘Predominantly Black counties have COVID-19 infection rates that are nearly three times higher than that of predominantly white counties,’ the report says, ‘and are 3.5 times more likely to die from the disease compared to white populations.’ Yet those same communities also express much higher levels of optimism and hope. The authors list various potential explanations for these higher rates of infection and death: ‘overrepresentation in “essential” jobs in the health sector and in transportation sectors where social distancing is impossible’; ‘underrepresentation in access to good health care, and their higher probability of being poor’; ‘longer-term systemic barriers in housing, opportunity, and other realms’; and being ‘more likely to have pre-existing health conditions [risk factors] such as asthma, diabetes, and cardiovascular diseases’. Given such disadvantage, and the inability to practise social distancing, the authors understandably presume that these socially disadvantaged groups should ‘demonstrate the highest losses in terms of mental health and other dimensions of wellbeing’. However, what they discovered is the exact opposite. Not only do African Americans remain the most optimistic of all the cohorts studied but, when data is controlled for race and income, they also report ‘better mental health than whites, with the most significant differences between low-income Blacks and whites’. Indeed, low-income African Americans are 50 per cent less likely to report experiencing stress than low-income whites, and (along with Hispanics) are far less likely to involve themselves in deaths born of despair than whites. There are, of course, many complex reasons involved, including such things as community resilience and extended family ties, a belief in the merits of higher education, and a history of overcoming social inequality – some of which (like the merits of education) have declined among low-income whites. According to the authors of the Brookings Institution report, ‘the same traits that drive minority resilience in general are also protective of wellbeing and mental health in the context of the pandemic’. Now, these factors fit well with the literature on so-called ‘post-traumatic growth’ (where overcoming threatening hurdles can be strengthening). They also conform with what has been written about ‘resilient kids’ – those children who make good on challenging backgrounds to become considerate and sometimes successful human beings. Such findings, though, can be dangerous if the only take-home message is that adversity produces resilience. Herbert Spencer, the 19th-century father of Social Darwinism, believed that stress was strengthening, and that charity only delayed what biology, in eliminating the weak, would take care of on its own. For Spencer, stress defined resilience. And that’s the problem. 
Because the simple act of translating a biological story into a social one exposes a critical fallacy in the biology itself – this being that our otherwise inert genes possess the animated capacity for ‘selfishness’, even though they’re just bits of inert information to which our cells clearly bring life. Here, the supposedly scientific argument about determinism emerges as animated fantasy – a tendentious fundamentalism bordering on religious fundamentalism; or a moral lesson, as E O Wilson thought of sociobiology, in which stress emerges as morally and allegorically conditional. The only problem is, well, that’s just not what’s happening. Stress, to be clear, is neither good nor bad. It is amoral – or rather, its moral content is something we make together – socially, not biologically. For social engagement is itself a form of extroversion – an act of accommodation, a belief in the value of difference – in short, an anti-fundamentalist, anti-determinist view of the merits of navigating uncertainty together. But resilience can look Darwinian – both because the disadvantaged African Americans who respond to Brookings Institution surveys have already transcended significant challenges; and because the uneven playing field on which they’ve lived has long since silenced, ruined or completely destroyed those lacking survival networks. Such a story might even be corroborated by the unhappy fact that African Americans (and men in particular) live less long than their counterparts in other groups; and, when they do live longer, they’re more likely to spend time in prison if what stress teaches them is antisocial. Research on minority resilience must, therefore, be read differently. For it is social exchange – our very sociality, the ‘moral economy’ – that produces hope. Here, everything depends on social context. So, those who engage and exchange socially (by choice with families, or by default or of necessity in healthcare and service jobs) are better equipped to deal with the uncertainty of COVID-19 – and remain hopeful. It’s the engagement part – by choice or necessity – that nourishes hope. Every time we look one another in the eye and nod affirmatively in a social setting, we create an informal contract with another person. Dozens, sometimes hundreds, of times a day, we affirm our trust in others by this simple act, masked or not. We do this as an act of extroversion, hoping that we can survive and grow through creative engagement with what we learn on the edges of our community, and, if not, that our resilience can be nourished by those with whom we share common purpose. Black people in America might die more than three times more often than whites in the pandemic, but they’re also less socially isolated via their higher representation in public-facing jobs in which they have to engage with others. Like the military advance guard, or those cells at the edge of the Petri dish colony, they’re more likely to learn more from extroverted risk, and to adjust their expectations accordingly, emerging as more resilient in themselves and less vulnerable to mistrusting others. That’s not only why deaths born of despair are less common among them, but why isolation itself is a major driver of COVID-19 fatigue for all of us. It’s the engagement that matters. The so-called ‘healthy migrant effect’ offers a clear example. Migrant struggles are well documented, but migrants who enter new communities often have health statuses just as good as or even better than those of native populations. 
Thus, second-generation Asian-American migrants are more likely to excel in secondary school, have much higher test scores, attend elite colleges and receive high-income professional degrees (eg, business, medicine, etc). The point is that it’s not only the extroverted risk of migrating that matters: it’s whether that risk results in a sense of meaningful exchange within a social context. It’s exchange itself, it turns out, that’s important. What’s more, the more moral its content, the better the odds that such exchange will enhance resilience. Most of the time, risks don’t work out as expected. And when they don’t work out, we all need a parent’s couch to sleep on and a shared meal to increase our sense of belonging and hope. It’s what the French sociologist Marcel Mauss observed almost a century ago about the value of reciprocity in his essay The Gift (1925): that the giver gives a part of him or herself, and that the thing given implies a return. Which is to say that it’s the exchange relationship that makes an economy ‘moral’ in the first place. By contrast, being alone undermines wellbeing. We know this from studying the impact of social isolation on mortality and morbidity. There’s lots of evidence here, and not just from studies of suicide: experiencing social isolation is a key reason why children who are wards of the state, for example, often elect to return to families that are dangerous for them. In fact, being socially engaged even trumps being equal to others when it comes to what we all need. Again, evidence falls readily to hand. Some recent work on isolation and healthcare in China, carried out by members of the Cities Changing Diabetes global academic network that I lead, shows just how much of a risk factor social isolation is. Asked if equality of access to healthcare contributed directly to an inability to manage disease, about one-third of the several hundred people we interviewed said ‘yes, equality matters’. Asked how much the absence of family networks (a proxy for social isolation) impacted illness experience, the percentage who said it did rose to almost everyone (93 per cent). And that’s in a country known to provide next-to-no care, let alone equal care, for economic migrants who must go home to be treated. This finding is startling, because equality is the gold standard for engagement in any democracy. Yet even it fades in importance when the moral economy is measured. The same holds true of refugees from violence. In another project (one in which I’ve been personally involved), funded by the University of Applied Sciences in Bochum, Germany, we systematically documented the health vulnerabilities of recent migrants. Asked whether they were receiving good healthcare, Syrian refugees resettled in communities often answered that they were receiving excellent care, even though German-born citizens publicly stated that those migrants were getting less. That’s not just because welfare in Germany looks pretty good when compared with Aleppo. It’s because extroverted hope, when paired with the altruism it generates socially, mediates a person’s ability to believe in the future – even if that hoped-for future is still somewhere far in the distance. There’s an important conclusion here: equality is only a first step towards alleviating human suffering and promoting feeling well within a moral economy. 
The bigger part concerns how people learn to hope about more than getting through the day. To put it another way, being hopeful requires a belief in the future, a long-term view. But being hopeful also requires more than that. It requires a sense of deep time and an enduring willingness – a desire – to engage. For hope to proliferate, we need much more than endurance in the heroic, Darwinian sense. We need a willingness to accept the natural place of everyday uncertainty, and we need diversity – even redundancy – to make that possible. The idea isn’t hard to grasp. The American inventor Thomas Edison once said that, in order to create, inventors need ‘a good imagination and a pile of junk’. The implication is that the hope required to convert junk into something useful sustains your extended contemplation of a pile of rubbish (what looks irrelevant now) over the deep time required to reshape it. But there’s another lesson: if you eliminate (recycle) what in the moment seems redundant or useless, without giving it a fair chance at invention, you also eliminate the possibility of making something new. Growth depends on merging two unlike things in the interest of making something greater. Redundancy and diversity form the basis of every moral economy, which is why neoliberal economies – those that take what look like redundancies and eliminate them in the interest of ‘efficiency’ – fail miserably in assisting population wellbeing. I have yet to see, for example, how profit manages itself in places where state welfare is almost entirely absent (eg, Nigeria). Neoliberalism succeeds only when it emerges within otherwise generous societies that have welfare stockpiles that can be selfishly mined. On that point, Ayn-Rand-style economics fails, and will forever fail, by favouring self-interest and efficiency over diversity, generosity and altruism. Observe what short-term self-interest has done to challenged economies, and a picture of what my fellow anthropologist Jonathan Benthall in 1991 called ‘market fundamentalism’ is easily painted. The social parallels here almost need no stating: what seems irrelevant to any one of us today, including the peculiar views of others, might in the end provide the very thing necessary to make us resilient to a future challenge – just as hope in the future mediates the uncertainties of COVID-19 through social engagement. Source of the article

GOATReads: Psychology

4 Ways You Self-Sabotage Your Joy In Daily Life, By A Psychologist

Most of us, at some point in our lives, have stood in the way of our own growth. We make progress on a project, start to feel hopeful about a relationship or finally get on track with a goal, and then we do something that undermines it. We fall into a procrastination spiral, pick a fight or simply quit; in doing so, we talk ourselves out of something that could potentially bring us happiness. There’s a name for this kind of behavior: self-sabotage. It looks like standing in your own way, but beneath the surface, there are deep cognitive and emotional dynamics at work. Here are four well-studied reasons why people sabotage good things, based on research in psychology.

1. You Self-Sabotage By Avoiding Blame

One of the most consistently researched patterns in self-sabotage comes from what psychologists call self-handicapping. This is a behavior in which people create obstacles to their own success so that if they fail, they can blame external factors instead of internal ability. A prime example comes from a classic study in which researchers observed students who procrastinated studying for an important test. The ones who failed mostly attributed it to lack of preparation rather than lack of organization or discipline. For the ones who succeeded, it felt like a greater personal triumph because they succeeded despite the handicap. Self-handicapping is not simply laziness or whimsy. Rather, it is a strategy people use to protect their self-worth in situations they might perform “poorly” in or where they might be perceived as inadequate. It goes without saying that this strategy is counterproductive in the long run. The abovementioned study notes that habitual self-handicapping is linked with lower achievement, reduced effort and increasing avoidance over time. People end up sabotaging their outcomes to protect their ego in the moment.

2. You Self-Sabotage Because Of A Fear Of Failure Or Success (Or Both)

People often think of the fear of failure as the main emotional driver behind self-sabotage. But research points to the fear of success as an equal, yet less talked-about engine of the phenomenon. Both fears can push people to undermine opportunities that are actually aligned with their long-term goals. Fear of failure motivates avoidance as it can protect people from harsh self-judgment if things go poorly. People who worry that failure will confirm their negative self-beliefs are more likely to adopt defensive avoidance tactics, like procrastination or quitting early. Fear of success, though less widely discussed, operates in a similar fashion. What motivates this fear is the anxiety that comes with the consequences of success. These could be higher expectations from yourself (or from others), increased visibility or a sense that you will no longer fit into familiar social roles. Psychologists like Abraham Maslow called this the Jonah complex: the fear of one’s own potential when success creates new demands and threats to identity. So, self-sabotaging success can be a way to stay within a comfort zone where expectations are familiar, even if that zone is unsatisfying.

3. You Self-Sabotage Because Of Negative Self-Beliefs

Self-sabotage is tightly intertwined with how people view themselves. When someone doubts their worth, their ability or their right to be happy, they may unconsciously act in ways that confirm those negative self-views. Psychological theories like self-discrepancy theory help explain this. 
It proposes that people experience emotional discomfort when their actual self does not match their ideal self. This mismatch can lead to negative emotions such as shame, anxiety or depression. To reduce that discomfort, some may unconsciously avoid situations where those discrepancies could be highlighted, even if those situations are positive in nature. For example, someone who believes deep down that they “don’t deserve” success may avoid opportunities where success is possible because acceptance of that success would trigger uncomfortable self-judgments. Their behavior is not illogical when viewed through the lens of protecting a fragile identity, even though the outcome — self-sabotage — is counterproductive.

4. You Self-Sabotage Because You’re Coping With Stress and Anxiety

Self-sabotage often emerges in moments of high stress or emotional threat. When people feel overwhelmed, anxious or stretched thin, their nervous systems shift into protective modes. Instead of moving forward, they retreat, avoid or defensively withdraw. Threat or uncertainty can reduce cognitive regulation and increase avoidance behaviors. In situations of perceived threat, even if the threat is potential success or evaluation, people can default to behaviors that feel safer, even if they undermine long-term goals. In practical terms, this means someone under chronic stress may procrastinate, ruminate or choose short-term relief over long-term gain, effectively sabotaging progress to manage anxiety in the moment.

What All These Patterns Of Self-Sabotage Have in Common

These psychological processes may feel very different on the surface, manifesting as procrastination, quitting, relationship withdrawal, distraction or negative self-talk, but they share common underlying themes: a desire to protect self-esteem by avoiding situations where perceived personal flaws might be exposed; a fear of consequences, either of failure or of success, that feels threatening to identity or emotional stability; internal negative beliefs about self-worth and competence that are at odds with conscious goals; and short-term emotional regulation strategies that prioritize comfort over long-term achievement. What looks like “standing in your own way” is often a defensive strategy your mind developed to manage emotional, social or identity-related risk. That explains why self-sabotage can feel automatic and unconscious, rather than deliberate. Understanding why self-sabotage occurs is the first step toward changing it. But real progress comes from shifting underlying beliefs and responses, not just behaviors: Reframe failure as feedback. When failure is seen as a source of information rather than a judgment on worth, the fear that drives sabotage weakens. Build self-compassion. Self-compassion has been linked with lower tendencies to self-handicap. Treating yourself kindly in the face of setbacks makes it easier to stay engaged rather than withdraw. Challenge self-worth beliefs. Work on internal narratives that tell you you’re unworthy of success. Aligning self-concept with realistic goals reduces conflict and avoidance. Develop adaptive stress responses. Reducing chronic stress and improving emotional regulation, through mindfulness, social support or therapy, can help prevent threat-driven avoidance. These strategies don’t magically eliminate self-sabotage, but they weaken its psychological roots. The goal isn’t to eradicate fear or doubt, but to stop letting them dictate your actions. Source of the article

Earthly delights

Noticing first one then many parrots, peacocks, owls and more birds in Old Master paintings taught me to truly see the world

I am an accidental birder. While I never used to pay much attention to the birds outside my window, even being a bit afraid of them when I was a child, I have always loved making lists. Ranking operas and opera houses, categorising favourite books and beautiful libraries – not to mention decades of creating ‘Top Ten’ lists of hikes, drives, national parks, hotels, and bottles of wine. My birding hobby grew out of this predilection. Specifically, out of my penchant for writing down the birds I found in the paintings by the Old Masters. Hieronymus Bosch, for starters. Bringing my opera glasses to the Museo del Prado in Madrid, I delighted in sitting across the room and counting the birds in Bosch’s painting, today called Garden of Earthly Delights (1490-1510). The triptych, which visualises the fate of humanity in three large panels, is exploding with birds. So far, my list of Bosch birds includes spiralling flocks of starlings amid posing peacocks and pheasants. Closer to the water are storks, egrets and two kinds of herons. A jackdaw and a jay can be identified near a giant ‘strawberry tree’, below which are two spoonbills. And lurking in the trees are three kinds of owls, serving as signs of heresy. In his book A Dark Premonition: Journeys to Hieronymus Bosch (2016), the Dutch poet and novelist Cees Nooteboom describes seeing Bosch’s work when he was a young man of 21 – and then seeing it again when he was 82. He asks of one picture: How has the painting changed? How has the viewer changed? Am I even the same man now? These are the questions I ask myself while standing in front of a certain picture by Raphael in the Uffizi. The first time I saw the Madonna del Cardellino (c1505-06) was more than 30 years ago. I was 19. My college boyfriend and I had stopped in Europe on the way back from two magical months in India. It was my first time in Italy. And Florence was so damn pretty. I vividly recall what a warm day it was, and how overwhelmed I felt by the grand museum. Walking past picture after picture, I turned back to look for my boyfriend, who was trailing behind. And there he was, utterly gobsmacked in front of a painting. So I walked back to look at it too. It was a Madonna by Raphael. A beautiful blonde Madonna, in a rich red dress with her cloak of ultramarine draped over her shoulders, and seated with two babes at her feet. One was holding a goldfinch. Being young Americans, we couldn’t understand any of it. Why were there two baby boys? If the second was John the Baptist, where was the child’s mother? And were those violets and chamomile under their feet? Serious birders sometimes talk about their first bird memory. My own earliest bird-in-a-painting memory was that goldfinch in the painting by Raphael in the Uffizi. Its composition is much like Raphael’s Madonna del Prato (1506), in Vienna – but at the Uffizi, instead of a cross, the children play with a tiny bird. Thirty years later, standing in front of the same painting, I now know the bird symbolises the Christ Child and the Passion. In Catalonia in Spain, there is a wonderful legend that suggests that the jagged and holy mountains of Montserrat rose from the earth at the precise moment that Christ was crucified in Jerusalem – as if the earth itself rose in anger. There was a similar story from the Middle Ages about how the goldfinch received its red spot. 
Flying down over Christ on the Cross, the bird tried to help Him by picking out a thorn from the Crown – and in this way was forever after splashed with the drop of His blood. In an enchanted world, everything seems to be telling a story. Second marriages are notoriously difficult. My new husband had been wiped out financially and emotionally by his previous marriages (yes, there was more than one). By the time I met Chris, he was barely hanging on to the house, his kids showing varying degrees of alienation. It was impressive that he wanted to try again – and so soon? Not six months after our first date and whirlwind romance, we had done it! I sometimes think we were like survivors of a shipwreck; his life was a wreck, but mine was worse. Of course, we underwent couples therapy and laughed off the obligatory (but serious) warnings about our dim hopes of survival. We were just happy to have found each other; happy to be still breathing; for, as Voltaire said in 1761: ‘[E]verything is a shipwreck; save yourself who can! … Let us then cultivate our garden …’ My first marriage had been to a Japanese man. Having spent my adult life in his country, where I spoke, thought, and dreamt in Japanese, I hoped marrying an American would be easier. After all, we shared a language and a culture. But it wasn’t easier. Marriage is tough in any language. And so, I have tried much harder this time to cultivate shared values and interests – which is challenging when you are married to an astrophysicist! I do love watching Chris look at art. He becomes intensely attentive, as if every nerve-ending in his body is switched on. It’s not like he’s trying to figure out the nature of galaxy evolution or doing the complicated mathematics that he does when he’s working. He just stands there before the picture, fully present. Most of the time, I have a hard time understanding what he’s thinking about. I know he can build things that go into space. And that he teaches quantum mechanics at Caltech and can perform multivariable calculus. He can even make a cat die and not die at the same time. This is mainly lost on me, which is why I love looking at art together with him. It’s something we can share, something over which we can linger, in each other’s company. That was how my husband and I started going on what we call our ‘art pilgrimages’. From the very beginning of our marriage, we spent enormous amounts of time standing side by side silently looking at Old Masters. Sometimes we might talk a bit, hold hands, and exchange a knowing smile, but mainly we stood there silently soaking it all in. Shortly after getting married, I took Chris to the Getty Museum, in Los Angeles. I was excited to share my favourite picture in the collection, Vittore Carpaccio’s Hunting on the Lagoon (c1490-95). The museum acquired the painting in 1979, from the collection of the Metropolitan Opera basso Luben Vichey and his wife. Hunting on the Lagoon shimmers with atmospheric effects. Painted in azurite, yellow ochre and lead white, it has touches of costly ultramarine in the sky and mountains, while vermilion is used on the servant’s jacket. Hunting on the Lagoon depicts a group of aristocratic gentlemen hunting from a small boat on the water. ‘Hunting birds with a bow and arrow?’ Chris wondered. Looking carefully, you can see they are shooting clay balls at what appear to be grebes. I tell him that it was apparently the custom to hunt birds in this way so as not to damage their pelts. 
‘But what about those dark birds with the serpentine necks sitting one to a boat?’ he asked. I watched his eyes move to the same birds posing on pylons in the water. ‘Unmistakably cormorants.’ And the theory is, I tell him, that the birds were used for hunting fish. In Japan, you can still see this traditional way of fishing, called ukai. I am always so excited to share something of my life in Japan with Chris, even though it was in the days before we met. I tell him how I watched this kind of fishing years ago. ‘It was at night by lamplight on boats that ply the Kiso River, in Aichi Prefecture.’ The birds, held by what seem to be spruce fibre leashes, were trained to dive for ayu sweetfish and deliver them back to the fishermen on the boats, I say, wishing I could show him. ‘Do you think the custom came to Europe from Japan?’ he wonders. I think it arrived from China, though that story might be made up. In 17th-century England, King James I was known to have kept a large – and very costly – stock of cormorants in London, which he took hunting. Looking at the painting, however, I thought the practice I’d seen in Japan had been altered almost beyond recognition. During the Renaissance, the lagoon in the painting must have been jam-packed with fish and mussels and clams and birds. A perfect place to spend an afternoon. But those men, with their colourful hose, with their bows and clay balls, are clearly no fishermen. It was then that Chris noticed the strange, oversized lilies protruding from the water in the foreground of the painting. It took him long enough to notice, I thought. Those flowers have driven art historians crazy for generations. ‘Don’t tell me,’ he said. ‘There must be another picture? One with a missing vase, right?’ Right he was! There is an even better-known painting by Vittore Carpaccio, Two Venetian Ladies (c1490-1510), hanging in the Museo Correr in Venice. We went to see it a few years later. And, sure enough, there is a pretty majolica vase sitting on the wall of the balcony, which seems ready and waiting for those lilies. The two works (painted on wooden panels) fit together, one on top of the other. Before this was figured out, art historians believed the two bored-looking ladies to be courtesans. One of the reasons for thinking this was the two doves sitting on the balustrade, which are ancient symbols of Venus and romantic love. But the ladies are also shown sitting next to a large peahen, a symbol of marriage and fidelity. Looking bored, with their tall wooden clogs tossed to the side, they were declared by art historians to be courtesans. Definitely courtesans. Like pieces of a puzzle, the matched set of paintings has now convinced art historians that these ‘ladies’ are in fact wives of the ‘fishermen’, who are themselves no longer believed to be fishermen but, rather, aristocratic Venetians out hunting waterfowl for sport on the lagoon. A great painter of dogs, Carpaccio was even better at birds. Beyond his doves, grebes and cormorants, he is perhaps best known for his colourful red parrots. According to Jan Morris writing in 2014, the Victorian art critic John Ruskin was much taken with Carpaccio’s menagerie. At the Ashmolean Museum in Oxford, there is a small watercolour drawing that is a copy of Carpaccio’s red parrot, made by Ruskin in 1875. Calling it a scarlet parrot, Ruskin wondered if it wasn’t an unknown species, and so decided to draw a picture of it in order to ‘immortalise Carpaccio’s name and mine’. 
It might be classified as Epops carpaccii, he suggested – Carpaccio’s Hoopoe. Chris and I were delirious to have found each other. Grateful for this chance to have our spirits reborn, we celebrated by taking multiple honeymoons that first year. And without a doubt, the most romantic was the trip we took to Venice – on the hunt to find Carpaccio’s red parrot, which, happily, one can see in the place for which it was originally commissioned: in the Scuola di San Giorgio degli Schiavoni. Today, when introducing foreign visitors to Venice’s scuole, tour guides will sometimes compare the medieval confraternities to modern-day business associations that carry out philanthropic activities, like the Rotary Club. That is probably not far off the mark. Carpaccio’s great narrative cycles were created to adorn the walls of these scuole. The pictures were not there merely to decorate, but to tell stories relevant to the confraternity. Perhaps the best known of these are two of the paintings commissioned by the Scuola di San Giorgio degli Schiavoni. The red parrot that Ruskin adored is still there in one of the paintings, the Baptism of the Selenites (1502). Chris and I barely made it in time before the small scuola closed for the day. It was hot and the air heavy in the dark interior. When the author Henry James visited the Schiavoni in 1882, he complained that ‘the pictures are out of sight and ill-lighted, the custodian is rapacious, the visitors are mutually intolerable …’ However, then he magnanimously added: ‘but the shabby little chapel is a palace of art.’ Eventually locating the parrot, we marvelled at how often such exotic birds can be counted in religious paintings from the Renaissance. We assumed they must be prized like the tulips of Amsterdam during the Dutch Golden Age of painting, coveted and displayed for their rarity. I learned only later that it was also because they were a symbol of the Virgin birth. Art historians suggest that this is due to an ancient belief that conception occurred through the ear (and parrots can speak…?) Another more interesting explanation is something found in the Latin writings of Macrobius, who said that when it was announced in Rome that Caesar’s adopted nephew Octavian was triumphant at the Battle of Actium in 31 BCE, at least one parrot congratulated him with: ‘Ave Caesar.’ This was seen as prefiguring the Annunciation and Ave Maria. In another painting in the scuola, Saint Jerome and the Lion (1509), Carpaccio has drawn what looked to us like an entire bestiary – including a beautiful peacock that seems to be trying to get as far away from the lion as it can. Peacocks always remind me of Flannery O’Connor, who lived on a farm in Georgia with ‘forty beaks to feed’. She loved her peacocks, calling them the ‘king of the birds’. No matter how her family complained, she remained firm in her devotion. Recently re-reading her essays in the posthumous collection Mystery and Manners (1969), I learned that the Anglo tradition is very different from the Indian one, when it comes to peacocks. In India, they are viewed as symbols of love and beauty, while Europeans typically associate peacocks with vanity and pride. This notion stretches all the way back to Aristotle, who remarked that some animals are jealous and vain, like a peacock. That is why you find them aplenty in Bosch’s paintings. A warning against the pride of vanity. O’Connor knew that the peacock was a Christian symbol of resurrection and eternal life. Others concurred. 
The ancient Romans held that the flesh of the peacock stayed fresh forever. Augustine of Hippo tested this with the flesh of a cooked peacock in Carthage, noting that: ‘A year later, it was still the same, except that it was a little more shrivelled, and drier.’ Thus, the peacock came to populate Christian art from mosaics in the Basilica di San Marco to paintings by Fra Angelico in the Renaissance. Perhaps this is one of the reasons I came to love peacocks so much; after all, I was experiencing my own kind of resurrection of the spirit with Chris. The late German art historian Hans Belting wrote about the exotic creatures found in Bosch’s triptych. Belting’s interpretation is interesting, as he views the middle panel – the eponymous Garden of Earthly Delights – as being a version of utopia. By Bosch’s day, the New World had been ‘discovered’ by Europeans – and, indeed, the painting can be dated because of the New World pineapples seen in the central panel. When Christopher Columbus set sail to the Indies, he believed, like many of the theologians of his time, that an earthly paradise existed in the waters antipodal to Jerusalem, just as Dante Alighieri described. But what is Bosch trying to say? I don’t think anyone really understands. What we do know is that the triptych was never installed in a church – but was instead shown along with exotic items in the Wunderkammer of his patrons. Albrecht Dürer, my beloved painter of owls and rhinos, visited Brussels three years after the completion of Bosch’s painting but said not one word about it in his copious journals. Was he disappointed? Scandalised? Belting thinks his silence speaks volumes, and he describes Dürer’s astonishment when visiting the castle and seeing the wild animals and all manner of exotic things from the Americas and beyond. There was a reason why the Europeans of the time called the Americas the New World, instead of just the ‘new continent’. For this was a revelation, not just of new land, but of sought-after minerals, like gold and silver. It was a new world of tastes. From potatoes to tomatoes and chocolate to corn, the dinner tables of Europe would be transformed in the wake of Columbus’s trip. There were animals never seen in Europe, like the turkey and the American bison. And hummingbirds. How wide-eyed those Europeans must have been. In 1923, Marcel Proust wrote that: ‘The only true voyage of discovery … would be not to visit strange lands but to possess other eyes.’ And this was how I felt coming back to California after two decades in Japan. It was also how I felt during the early days of the COVID-19 pandemic, when time took on a stretched-out quality. To feel oneself slowing down was also to discover new eyes – to begin to savour the seasons changing, the birdsong, or the peaceful sound of the rustling leaves in the palm trees. To listen to the loud rustle of the grapefruit tree just before a huge, round fruit falls smack onto the ground was like a revelation the first time I heard it. And how did I reach 50 years old and never once hear baby birds chirping to be fed – like crickets! The lockdowns became a time for me to see the world with new eyes. And it continues, wave after wave. It was during that time that our ‘birdwatching in oil paintings’ obsession, mine and Chris’s, was transformed into real-life birding. The pandemic, and lockdown, changed everything. 
When restrictions lifted, rather than taking off to museums in Europe, we travelled to Alaska, where we spent weeks traipsing across the tundra in Denali National Park. So often looking down at my feet, I’d marvel at the wondrous tangle of green and yellow lichen; of moss and red berries; and at a variety of dwarf willow and rhododendron, none more than an inch tall. It created a beautiful pattern, like a Persian carpet. Enchanted, I wanted to take off my shoes and feel the spongy earth between my toes. When was the last time I had walked anywhere barefoot? Even at the beach, I usually keep my shoes on. And not only that, but I had never in my life walked off-trail, much less traipsed across tundra. When I was young, I once camped along the Indus River, in India, but that was so long ago. How had I become so alienated from wild things? Life is, after all, constantly shuffling the deck, with each moment precious and unique. All those heightened moments we experienced in our favourite paintings are precisely what the great artists were celebrating. The perfect unfolding of now. And what was true in the paintings was also true out in the world. Birding alone and then later in groups, we have savoured those moments when a bird is spotted, and we all grow instantly quiet. Frantically training our binoculars on the object, it seems we are all frozen in a great hush. With laser focus, we attune ourselves to the bird, within a hair’s breadth of losing it, aware of the tiniest flitter, flutter and peep. It is enchantment. And through this, I have felt a little of how birds must have exerted power over the Renaissance imagination too. I continue to marvel at these free creatures of the air, symbolising hope and rebirth, messengers from distant lands, inhabitants of a canvas of beauty and life in this great garden of earthly delights. Source of the article