

GOATReads: Sociology

D.A.R.E. Is More Than Just Antidrug Education—It Is Police Propaganda

Almost all Americans of a certain age have a DARE story. Usually, it’s millennials with the most vivid memories of the program—which stands for “Drug Abuse Resistance Education”—who can recount not only their DARE experiences from elementary school but also the name of their DARE officer. Looking back on DARE, many recall it as an ineffective program that did little to prevent drug use, which is why they are often surprised that the program still exists. In fact, DARE celebrated its 40th anniversary last year. Schools continue to graduate DARE classes, albeit at a far slower pace than during the program’s heyday in the 1980s and 1990s.

While DARE gained widespread support and resources on the presumption that it was an alternative to the supply-side approaches to the drug war that relied on arrest and incarceration, my research shows that DARE was less an alternative to policing and more a complementary piece of law enforcement’s larger War on Drugs. As police fought and continue to fight a drug war primarily through violent criminalization, arrest, and incarceration, their presence in schools offers law enforcement a way to advance the police mission of defending the “law-abiding” from the “criminal element” of society by another means. In the process, DARE provides reliably positive public relations when reactionary police activities garner unwanted political critique or public protest, offering a kind of built-in legitimacy that shields against more radical efforts to dismantle police power.

DARE America, the nonprofit organization that coordinates the program, suggests that DARE has evolved into a “comprehensive, yet flexible, program of prevention education curricula.” But the program remains largely faithful to its original carceral approach and goal of legitimizing police authority through drug education and prevention. The revised curriculum still ultimately skews toward an abstinence-only, zero-tolerance approach that criminalizes drugs and drug users. It fails to embrace harm reduction approaches, such as sharing information on how students can minimize the health risks if they do choose to use drugs, even as research increasingly demonstrates the effectiveness of such methods and as knowledge about the harmful effects of hyperpunitive, abstinence-only drug education becomes more mainstream. DARE’s reluctance to change—especially change that diminishes the police’s authority to administer drug education—should not come as a surprise.

My new book, DARE to Say No: Policing and the War on Drugs in Schools, offers the first in-depth historical exploration of the once-ubiquitous and most popular drug education program in the US, charting its origins, growth and development, cultural and political significance, and the controversy that led to its fall from grace. Although DARE lost its once hegemonic influence over drug education, it had long-lasting effects on American policing, politics, and culture. As I suggest in DARE to Say No, after the establishment of DARE and the deployment of the DARE officer as the solution to youth drug use, there was almost no approach to preventing drug use that did not involve police. As a result, DARE ensures that drug use and prevention, what many experts consider a public health issue, continues to fall under the purview of law enforcement.
It is another example of the way the police have claimed authority over all aspects of social life in the United States even as evidence of the deadly consequences of this expansion of police power has come to public attention in recent years with police killings in response to mental health and other service calls. Viewed in this light, the program continues to serve DARE administrators as a reliable salve for the police amid ongoing police brutality, violence, and abuse. Revisiting this history of the preventive side of America’s long-running drug war offers vital lessons for drug education today, cautioning us to be wary of drug prevention initiatives that ultimately reinforce police power and proliferate state violence in our schools and communities.

DARE was, in fact, born out of police failure. The brainchild of the Los Angeles Police Department’s (LAPD) chief of police Daryl Gates and other brass, the drug education program got its start in Los Angeles, where LAPD’s efforts to stem youth drug use had repeatedly failed. The LAPD had actually tried placing undercover officers in schools as early as 1974 to root out drug dealers, but drug use among young Angelenos only increased in the intervening years, making a mockery of the police’s antidrug enforcement in schools. Recognizing this failure, Gates looked for an alternative to supply-reduction efforts, which relied on vigorous law enforcement operations. He began talking about the need to reduce the demand for drugs, especially by kids and teenagers. In January 1983, he approached the Los Angeles Unified School District (LAUSD) with an idea: schools needed a new type of drug education and prevention program. Working with LAUSD officials, LAPD brass developed a proposal for the use of police officers to teach a new form of substance abuse education in Los Angeles schools. The program that emerged from that work was Project DARE. The joint LAPD and LAUSD venture launched a pilot program in the fall of 1983.

Project DARE came at a moment when the LAPD waged a violent and racist drug war on city streets. If Gates promoted DARE as an alternative, he was certainly no slouch when it came to combatting drugs. A longtime LAPD officer who had helped create the nation’s first SWAT team in Los Angeles following the 1965 Watts uprising, Gates believed in asserting aggressive police power to wage what he described as a literal war to control the streets, especially in the city’s Black and Latinx neighborhoods. Gates rose to chief of police in 1978 and oversaw a vigorous and violent war on drugs and gangs, relying on a destructive mix of antidrug raids and gang sweeps that targeted Black and Latinx youth. Perhaps Gates’s most notorious statement about his attitude toward the treatment of drug users came when he quipped to a congressional committee, “The casual user ought to be taken out and shot.” Gates’s militarized and flagrantly racist approach to drug and crime enforcement provoked growing scrutiny from antipolice activists who called out the LAPD for its racism and abuse in the years prior to the 1991 beating of Rodney King and the 1992 Los Angeles rebellion. Against this backdrop, DARE’s focus on prevention and education in schools offered the LAPD a means to counteract this tough, violent image of the warrior cop, not to mention Gates’s own punitive rhetoric.
While publicly framed as an alternative to tough antidrug policing, DARE also offered the police a means to enhance their legitimacy and bolster their institutional authority at the very same time their aggressive urban policing practices were alienating predominantly Black and Latinx working-class communities and prompting growing charges of racism and brutality within LAPD’s ranks.

In its first iteration, DARE sent officers into 50 classrooms for stints of 15 (later expanded to 17) weeks to deliver the DARE curriculum. Deploying veteran police officers to the classroom beat was a calculated move. Program designers, along with many educators, believed that the youth drug crisis was so advanced, and students as young as fifth graders so savvy about drugs and drug culture, that teachers were out of their depth to teach about drugs. By contrast, the thinking went, because police had experience with the negative consequences of drug use, they had much more credibility with this generation of supposed young drug savants. But it was not only that police officers had experience with drugs that lent them credibility when compared to classroom teachers. For many law enforcement officials, DARE became a shining example of how the police could wage the drug war in the nation’s schools through prevention rather than enforcement of drug laws. Focusing on prevention and education would “soften” the aggressive image of the police that routinely appeared in exposés on crack and gang violence on the nightly news and in national newsmagazines such as Newsweek and Time. As teachers, DARE officers would promote a more responsible and professional police image.

Early returns pointed to an effective and successful program. Studies conducted in the mid-1980s by Glenn Nyre of the Evaluation and Training Institute (ETI), an organization hired by the LAPD to evaluate the program in its early years, found positive results when it came to student attitudes about drug use, knowledge of how to say no, and respect for the police. School administrators and classroom teachers also responded to the program with gusto, reporting better student behavior and discipline in the classroom. Students also seemed to like the program, though most of the evidence of student reactions came from DARE essays written in class or from DARE’s public relations material. As one DARE graduate recalled when the program ended, “I’m sad, because we can’t see our officer again and happy because we know we don’t have to take drugs.” That the LAPD handpicked ETI to conduct this assessment suggests it was hardly an independent evaluation, a fact that some observers noted at the time. Nevertheless, such initial positive results gave LAPD and LAUSD officials a gloss of authority and primed them to make good on their promise of bringing the program to every student in the country.

And they very nearly did. Within a decade of its founding, DARE became the largest and most visible drug prevention program in the United States. At its height, police officers taught DARE to fifth- and sixth-grade students in more than 75 percent of American school districts as well as in dozens of countries around the world. Officers came to Los Angeles to be trained in the delivery of the DARE curriculum. The demand for DARE led to the creation of a network of training centers across the country, which vastly expanded the pool of trained DARE officers.
DARE leaders also created a DARE Parent Program to teach parents how to recognize the signs of youth drug use and the best approach to dealing with their kids who used drugs. DARE, in short, created a wide network that linked police, schools, and parents in the common cause of stopping youth drug use.

Everyone seemed to love DARE. Especially politicians. Congressmembers from both parties fawned over it. In congressional hearings and on the floor of Congress, they lauded the program and allocated funds for drug education and prevention programming in the Drug-Free Schools and Communities Act (DFSCA) provisions of the 1986 Anti-Drug Abuse Act. Amendments to the DFSCA in 1989 referenced the use of law enforcement officers as teachers of drug education and, more directly, a 1990 amendment mentioned the DARE program by name. President Reagan was the first president to announce National DARE Day, a tradition that continued every year through the Obama presidency. Bill Clinton also singled out the program in his State of the Union address in 1996, stating, “I challenge Congress not to cut our support for drug-free schools. People like the D.A.R.E. officers are making a real impression on grade-school children that will give them the strength to say no when the time comes.” Rehabilitating the police image and sustaining police authority by supporting DARE was very much a bipartisan effort.

Political support for DARE reflected the program’s widespread popularity among several constituencies. Law enforcement officials hoped it would be a way to develop relationships with kids at the very moment they waged an aggressive and violent war on drugs on the nation’s streets. Educators liked it because it absolved them of teaching about drugs and gave teachers a class period off. Parents, many of whom felt they did not know how to talk to their kids about drugs, also saw value in DARE. As nominal educators, DARE officers became part of schools’ daily operation. Even as they wore their uniforms, they were unarmed and explicitly trained not to act in a law enforcement role while on campus. DARE officers would not enforce drug laws in schools but rather teach kids self-esteem, resistance to peer pressure, and how to say no to drugs. In the minds of the program’s supporters, turning police into teachers tempered the drug war by helping kids learn to avoid drugs rather than targeting them for arrest.

Officers did much more than just teach DARE classes. DARE officers embedded themselves into their communities, engaging in a wide variety of extracurricular activities. For instance, one officer coached a DARE Track Club. Another coached a football team. Some even played Santa Claus and Rudolph during the holidays. To bolster their authority on a national scale, DARE administrators constructed a public relations campaign enlisting athletes and celebrities to promote the program and facilitate trust between children and the police.

DARE was more than just a feel-good program for the police and youth, however; law enforcement needed it—and not just for the purported goal of fighting drugs. DARE offered a means to burnish the public image of policing after years of aggressive and militarized policing associated with the drug war and high-profile episodes of police violence and profiling, such as the beating of Rodney King in Los Angeles or the discriminatory targeting of the Central Park Five in New York.
By using cops as teachers, DARE administrators and proponents hoped to humanize the police by transforming them into friends and mentors of the nation’s youth instead of a uniformed enemy. DARE’s proponents insisted that kids took the police message to heart. As DARE America director Glenn Levant made clear, DARE’s success was evident during the 1992 Los Angeles rebellion, when, instead of protesting, “we saw kids in DARE shirts walking the streets with their parents, hand-in-hand, as if to say, ‘I’m a good citizen, I’m not going to participate in the looting.’” The underlying goal was to transform the image of the police in the minds of kids and to develop rapport with students so that they no longer viewed the police as threatening or the enforcers of drug laws.

But DARE’s message about zero tolerance for drug use—and the legitimacy of police authority—sometimes led to dire consequences that ultimately revealed law enforcement’s quite broad power to punish. The most high-profile instances occurred when students told their DARE officers about their parents’ drug use, which occasionally led to the arrest of the child’s family members. Students who took the DARE message to heart unwittingly became snitches, serving as the eyes and ears of the police and giving law enforcement additional avenues for surveilling and criminalizing community drug use.

DARE was not a benign program aimed only at preventing youth drug use. It was a police legitimacy project disguised as a wholesome civic education effort. Relying on the police to teach zero tolerance for drugs and respect for law and order accomplished political-cultural work for both policy makers and law enforcement, who needed to retain public investment in law and order even amid credible allegations of police misconduct and terror. Similarly, DARE diverted attention from the violent reality of the drug war that threatened to undermine trust in the police and alienate constituencies who faced the brunt of such policing. By softening and rehabilitating the image of the police for impressionable youth and their families, DARE ultimately enabled the police to continue their aggressive tactics of mass arrest, punishment, and surveillance, especially for Black and Latinx youth. Far from an alternative to the violent and death-dealing war on drugs, DARE ensured that its punitive operations could continue apace.

But all “good” things come to an end. By the mid-1990s, DARE came under scrutiny for its failure to prevent youth drug use. Despite initial reports of programmatic success, social scientists evaluating the program completed dozens of studies pointing to DARE’s ineffectiveness, which led to public controversy and revisions to the program’s curriculum. Initially, criticism from social science researchers did little to dent the program’s popularity. But as more evidence came out that DARE did not work to reduce youth drug use, some cities began to drop the program. Federal officials also put pressure on DARE by requiring that programs be verified as effective by researchers to receive federal funds. By the late 1990s, DARE was on the defensive and risked losing much of its cultural cachet. In response, DARE adapted. It revised its curriculum and worked with researchers at the University of Akron to evaluate the new curriculum in the early 2000s.
Subsequent revisions to the DARE curriculum, developed in close partnership with experts and evaluators, led to the introduction of a new version in 2007 called “keepin’ it REAL” (kiR). The kiR model decentered the antidrug message of the original curriculum and emphasized life skills and decision-making in its place. For all the criticism and revision, however, few observers ever questioned, or studied for that matter, the efficacy of using police officers as teachers. Despite the focus on life skills and healthy lifestyles, DARE remains a law enforcement–oriented program with a zero-tolerance spirit to help kids, in the words of DARE’s longtime motto, “To Resist Drugs and Violence.”

While DARE remains alive and well, its future is increasingly uncertain. The dramatic rise in teen overdose deaths from fentanyl has renewed demands for drug education and prevention programs in schools. Rather than following DARE’s zero-tolerance playbook, some school districts have explored adopting new forms of drug education programming focused on honesty and transparency about drug use and its effects, a model known as harm reduction. The Drug Policy Alliance’s (DPA) Safety First drug education curriculum, for instance, is based on such principles. Rather than pushing punitive, abstinence-only lessons, Safety First emphasizes scientifically accurate and honest lessons about drugs and encourages students to reduce the risks of drug use if they choose to experiment with drugs. Most notably, it neither requires nor encourages the use of police officers to administer its programming.

The implementation of Safety First marks the beginning of what could be a vastly different approach to drug education and prevention programs. It is a welcome alternative to drug education programs of the past. As the history of DARE demonstrates, police-led, zero-tolerance drug education not only does not reduce drug abuse among youth but also serves as a massive public relations campaign for law enforcement, helping to obscure racist police violence and repression. It is high time Americans refuse to take the bait.

Life happened fast

It’s time to rethink how we study life’s origins. It emerged far earlier, and far quicker, than we once thought possible.

Here’s a story you might have read before in a popular science book or seen in a documentary. It’s the one about early Earth as a lifeless, volcanic hellscape. When our planet was newly formed, the story goes, the surface was a barren wasteland of sharp rocks, strewn with lava flows from erupting volcanoes. The air was an unbreathable fume of gases. There was little or no liquid water. Just as things were starting to settle down, a barrage of meteorites tens of kilometres across came pummelling down from space, obliterating entire landscapes and sending vast plumes of debris high into the sky. This barren world persisted for hundreds of millions of years. Finally, the environment settled down enough that oceans could form, and conditions were at last right for microscopic life to emerge.

That’s the story palaeontologists and geologists told for many decades. But a raft of evidence suggests it is completely wrong. The young Earth was not hellish, or at least not for long (in geological terms). And, crucially, life formed quickly after the planet solidified – perhaps astonishingly quickly. It may be that the first life emerged within just millions of years of the planet’s origin.

With hindsight, it is strange that the idea of hellscape Earth ever became as established as it did. There was never any direct evidence of such lethal conditions. However, that lack of evidence may be the explanation. Humans are very prone to theorise wildly when there’s no evidence, and then to become extremely attached to their speculations. That same tendency – becoming over-attached to ideas that have only tenuous support – has also bedevilled research into the origins of life. Every journalist who has written about the origins of life has a few horror stories about bad-tempered researchers unwilling to tolerate dissent from their treasured ideas. Now that the idea of hellscape Earth has so comprehensively collapsed, we need to discard some lingering preconceptions about how life began, and embrace a more open-minded approach to this most challenging of problems. Whereas many researchers once assumed it took a chance event within a very long timescale for Earth’s biosphere to emerge, that assumption increasingly looks untenable. Life happened fast – and any theory that seeks to explain its origins now needs to explain why.

One of the greatest scientific achievements of the previous century was to extend the fossil record much further back in time. When Charles Darwin published On the Origin of Species (1859), the oldest known fossils were from the Cambrian period. Older rock layers appeared to be barren. This was a problem for Darwin’s theory of evolution, one he acknowledged: ‘To the question why we do not find records of these vast primordial periods, I can give no satisfactory answer.’ The problem got worse in the early 20th century, when geologists began to use radiometric dating to firm up the ages of rocks, and ultimately of Earth itself. The crucial Cambrian period, with those ancient fossils, began 538.8 million years ago. Yet radiometric dating revealed that Earth is a little over 4.5 billion years old – the current best estimate is 4.54 billion. This means the entire fossil record from the Cambrian to the present comprises less than one-eighth of our planet’s history.
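As a quick check of that fraction, using only the figures just given (the Cambrian beginning 538.8 million years ago and Earth’s age of about 4.54 billion years):

$$\frac{538.8\ \text{million years}}{4{,}540\ \text{million years}} \approx 0.119 < \frac{1}{8} = 0.125$$

So roughly seven-eighths of Earth’s history predates the Cambrian fossil record.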
However, in the mid-20th century, palaeontologists finally started finding older, ‘Pre-Cambrian’ fossils. In 1948, the geologist Reg Sprigg described fossilised impressions of what seemed to be jellyfish in rocks from the Ediacara Hills in South Australia. At the time, he described them as ‘basal Cambrian’, but they turned out to be older. A decade later, Trevor Ford wrote about frond-like remains found by schoolchildren in Charnwood Forest in England; he called them ‘Pre-Cambrian fossils’. The fossil record was inching back into the past.

By 1980, the fossil record had become truly epic. On 3 April that year, a pair of papers was published in Nature, describing yet more fossils from Australia. They were stromatolites: mounds with alternating layers of microorganisms and sediments. In life, microbes like bacteria often grow in mats. These become covered in sediments like sand, and a new layer of cells grows on top, over and over. Stromatolites were well known, but these, from the Pilbara region in Western Australia, were astonishingly old. One set was 3.4 billion years old; the other looked like it might be even older, as much as 3.5 billion years old.

Over the past 45 years, palaeontologists have meticulously re-analysed the Pilbara remains to confirm that they are real. It’s not a trivial problem: with rocks that ancient, strange distortions can form that look like fossilised microbes but are actually just deformed rocks. To resolve this, researchers have deployed an array of techniques, including searching for traces of organic matter. At this point, we are as confident as we can be that the Pilbara fossils are real. That means life has existed for at least 3.5 billion years. When I wrote The Genesis Quest back in 2020, I said this gave us a billion-year time window after the formation of Earth in which life could form. Since then, the evidence for life has been pushed further back in time.

Until relatively recently, many researchers would have said the window was distinctly narrower than that. That’s because there were reasons to think that Earth was entirely uninhabitable for hundreds of millions of years after it formed. The first obstacle to life’s emergence was the Moon’s formation. This seems to have happened very soon after Earth coalesced, and in the most dramatic way imaginable: another planetary body, about the size of Mars, collided with Earth. The impact released so much energy it vaporised the surface of the planet, blasting a huge volume of rocks and dust into orbit. For a little while, Earth had a ring, until all that material gradually fused to form the Moon. This explosive scenario is the only one anyone has so far thought of that can explain why Moon rocks share similar isotopes with Earth rocks. It seems clear that, if there was any nascent life on the young Earth, it was obliterated in the searing heat of the impact. Still, this happened around 4.5 billion years ago. What about the billion years between the Moon-forming impact and the Pilbara fossils?

We can divide this vast span of time into two aeons. They are divided by one simple factor: the existence of a rock record. The oldest known rocks are 4.031 billion years old. The half-billion years before that is called the Hadean; the subsequent time is called the Archean.
As its ominous name suggests, the Hadean was assumed to have been hellish. In the immediate aftermath of the Moon-forming impact, the surface was an ocean of magma that slowly cooled and solidified. Artist’s impressions of this aeon often feature volcanoes, lava flows and meteorite impacts. The early Archean, if anything, seemed to be worse – thanks to a little thing called the Late Heavy Bombardment. Between around 3.8 and 4 billion years ago, waves of meteoroids swept through the solar system. Earth took a battering, and any life would have been obliterated. Only when the bombardment eased, 3.8 billion years ago, could life begin. In which case, life began in the 300 million years between the end of the Late Heavy Bombardment and the Pilbara fossils.

This was a compelling narrative for many years. It was repeated uncritically in many books about the origins and history of life. Yet there were always nagging issues. In particular, palaeontologists kept finding apparent traces of life from older strata – life that was, on the face of it, too old to be real. As early as 1996, the geologist Stephen Mojzsis, then at the University of California, San Diego, and his colleagues were reporting that life was older than 3.8 billion years. They studied crystals of apatite from 3.8-billion-year-old rocks from the Isua supracrustal belt in West Greenland. Within the crystals are traces of carbon, which proved to be rich in one isotope, carbon-12, and low in the heavier carbon-13. This is characteristic of living matter, as living organisms prefer to use carbon-12. Nearly two decades later, the record was extended even further back in time by Elizabeth Bell at the University of California, Los Angeles and her colleagues. They studied thousands of tiny zircon crystals from the Jack Hills of Western Australia. Some of these crystals are Hadean in age: since there are no rocks from the Hadean, these minuscule shards are almost all we have to go on. One zircon proved to be about 4.1 billion years old. Trapped within it was a tiny amount of carbon, with the same telltale isotope mixture that suggested it was biogenic.

Perhaps most dramatically, in 2017, Dominic Papineau at University College London and his colleagues described tubes and filaments, resembling colonies of bacteria, in rocks from the Nuvvuagittuq belt in Quebec, Canada. The age of these rocks is disputed: they are at least 3.77 billion years old, and a study published this June found some of them are 4.16 to 4.20 billion. This would mean that life formed within 200 million years of Earth’s formation, deep in the Hadean.

There are many more such studies. None of them is wholly convincing on its own: they often rely on a single crystal, or a rock formation that has been heated and crushed, and thus distorted. Each study has come in for strong criticism. This makes it difficult to assess the evidence, because there are multiple arguments in play. A believer in an early origin of life would highlight the sheer number of studies, from different parts of the world and using different forms of evidence. A sceptic would counter that we should accept a fossil only if it is supported by multiple lines of evidence, as happened in the Pilbara. To which a believer would say: the rock record from the early Archean is very sparse, and there are no rocks from the Hadean at all.
It is simply not possible to obtain multiple lines of evidence from such limited material, so we must make a judgment based on what we have. The sceptic would then say: in that case, we don’t and can’t know the answer. For many years, the sceptics carried the argument, but more recently the tide has turned. This is partly because the fossil evidence of early life has accumulated – but it’s also because the evidence for the Late Heavy Bombardment sterilising the planet has collapsed.

An early crack in the façade emerged when Mojzsis and Oleg Abramov at the University of Colorado simulated the Late Heavy Bombardment and concluded that it was not intense enough to sterilise Earth. Surface life might have been obliterated, but microbes could have survived underground in huge numbers. However, the bigger issue is that the Late Heavy Bombardment may not have happened at all. The evidence rested on argon isotopes from Moon rocks collected by the Apollo missions in the 1960s and ’70s. A re-analysis found that those isotopes were prone to a specific kind of artefact in the radioisotope data – creating the illusion of a sharp bombardment 3.9 billion years ago. What’s more, the Apollo missions all went to the same region of the Moon, so the astronauts may have mostly collected rocks from the same big impact – which would all naturally be the same age. Meanwhile, rocks on Earth preserve evidence of past impacts, and they show a long, slow decline until 3 billion years ago, or later. Likewise, giant impacts on Mars appear to have tailed off by 4.48 billion years ago. There is also no sign of a Late Heavy Bombardment on the asteroid Vesta. If the Late Heavy Bombardment really didn’t happen, then it is reasonable to imagine that life began much earlier – perhaps even in the Hadean. The problem is how to demonstrate it, when the fossil evidence is so impossibly scant.

This is where genetics comes in. Specifically, phylogenetics, which means creating family trees of different organisms showing how they are related, and when the various splits occurred. For example, phylogenetics tells us that humans, chimpanzees and bonobos are descended from a shared ancestor that lived about 7 million years ago. By constructing family trees of the oldest and most divergent forms of life, phylogeneticists have tried to push back to the last universal common ancestor (LUCA). This is the most recent population of organisms from which every single living thing today is descended. It is the great-great-etc grandmother of all of us, from bacteria to mosses to scarlet macaws.

Estimating the date of LUCA is fraught with uncertainties, but in the past decade phylogeneticists have started to narrow it down. One such attempt was published by a team led by Davide Pisani at the University of Bristol in the UK. They created a family tree of 102 species, focusing on microorganisms, as those are the oldest forms of life. They calibrated their tree using 11 dates known from the fossil record. The headline finding was that LUCA was at least 3.9 billion years old. In 2024, many of the same researchers returned with a more detailed analysis of LUCA based on more than 3,500 modern genomes. This suggested LUCA lived between 4.09 and 4.33 billion years ago, with a best estimate of around 4.2 billion. What’s more, their reconstruction of LUCA’s genome suggested it was pretty complex, with a genome that encoded around 2,600 proteins.
It also seems to have lived in a complex ecosystem. In particular, it appears to have had a primitive immune system, which implies it had to defend itself from some of its microbial neighbours. These details highlight a point that is not always obvious: LUCA does not represent the origin of life. It is just the most recent ancestor shared by all modern organisms. It’s possible that life had existed long before LUCA – beginning early in the Hadean.

This fits with gathering evidence that the Hadean was not so hellish after all. It’s true that the entire planetary surface was molten at the very start of the Hadean, but it seems to have solidified by 4.4 billion years ago. Evidence from zircons suggests there was abundant liquid water at least 4.3 billion years ago, and possibly 4.4 billion. By 4.2 billion years ago, there seem to have been oceans. These primordial seas may have been considerably deeper than they are today, because Earth’s interior was hotter and could not hold as much water, so for a time there may have been no exposed land – or at least, only small islands.

These strands of evidence amount to a complete rewriting of the early history of life on Earth. Instead of life beginning shortly after the Late Heavy Bombardment 3.8 billion years ago, it may have arisen within 100 million years of the planet’s formation. If so, what does that tell us about how it happened? The most immediate implication is that our ideas cannot rely on the power of chance at all. There have been a great many hypotheses about the origins of life that relied on a coincidence: say, a one-in-a-billion collision between two biological molecules in the primordial soup. But if life really formed within 0.1 billion years of the planet’s birth, ideas like this are absolutely untenable. There just wasn’t time.

Take the RNA World, one of the leading hypotheses of life’s origins since the 1980s. The idea is that the first life did not contain the smorgasbord of organic chemicals that modern cells do. Instead, life was based entirely on RNA: a close cousin of the more familiar DNA, of which our genomes are made. RNA is appealing because it can carry genetic information, like DNA, but it can also control the rates of chemical reactions – something that is more usually done by protein-based enzymes. This adaptability, the argument goes, makes RNA the ideal molecule to kickstart life.

However, a close examination of the RNA World scenario reveals gaping holes. An RNA molecule is essentially a chain, and there are huge numbers of possible RNAs, depending on the sequence of links in the chain. Only a fraction of these RNAs actually make proteins. It’s not obvious how those ‘good’ RNAs are supposed to have formed: why didn’t conditions on the young Earth just create a random mix of RNAs? And, remember, we can’t rely on the power of chance and large numbers: it all happened too quickly. Instead, researchers now largely agree that they must find processes that work quickly and efficiently to generate complexity and life-like systems. But what does that mean in practice? There are various ideas. One prominent school of thought is that life formed in alkaline vents on the sea floor, where the flow of hot water and chemicals created a cradle that incubated life.
Others have highlighted the potential of volcanic vents, meteorite impact craters, geothermal ponds, and tidal zones: anywhere that has a flow of energy and chemicals. The reality is that we are dealing with a huge number of intersecting questions. What was the environment in which the first life emerged? What was that first life made of and how did it work? Was the first life a simplified version of something we can observe today, or was it something radically different – either in composition or mechanism, or both – that was then supplanted by more familiar systems?

I believe that the most promising thing to have happened in origins-of-life research in recent years has been a growing willingness to accept uncertainty and reject dogma. Origins research is barely a century old: the first widely discussed hypotheses were set out by Alexander Oparin and J B S Haldane in the 1920s, and the Miller-Urey experiment that kickstarted practical research in the field was published in 1953. For those first few decades, origins research was on the fringes of science, with only a handful of researchers actively working on it. Just as there was no direct evidence that the Hadean was a hellscape, there has been very little hard evidence for any of the competing scenarios for life’s origins. Researchers devised elaborate stories with multiple steps, found experimental evidence that supported one or two of those stages, and declared the problem solved.

A small group of people, a lack of hard evidence and a great many intersecting questions: that’s a recipe for dogmatic ideas and angry disagreements. And that’s what origins research was like for decades. I’ve been reporting on the field since the 2000s, and on multiple occasions I’ve seen researchers – including heads of labs – use language that resembled the worst kind of internet trolling. There was a time when I thought this abrasiveness was funny: now I just think it’s ugly and pointless. What origins research needs is open-mindedness and a willingness to disagree constructively.

That culture shift is being driven by a generation of younger researchers, who have organised themselves through the Origin of Life Early-career Network (OoLEN). In 2020, a large group of OoLEN members and other researchers set out what they see as the future of the field. They complained of ‘distressing divisions in OoL research’: for instance, supporters of the RNA World have tended to contemptuously dismiss those who argue that life began with metabolic processes, and vice versa. The OoLEN team argued that these ‘classical approaches’ to the problem should not be seen as ‘mutually exclusive’: instead, ‘they can and should feed integrating approaches.’

This is exactly what is happening. Instead of focusing exclusively on RNA, many teams are now exploring what happens when RNA – or its constituent parts – are combined with other biological molecules, such as lipids and peptides. They are deploying artificial intelligence to make sense of the huge numbers of molecules involved. And they are holding back from strong statements in favour of their own pet hypotheses, and against other people’s. This isn’t just a healthier way to work – though it absolutely is that. I believe it will also lead to faster and deeper progress. In the coming years, I expect many more insights into what happened on our planet when it was young and what the first life might have looked like.
I presented the hellscape-Earth scenario as a kind of just-so story. Of course, because the data is so limited, we cannot escape telling stories about our planet’s infancy. But maybe soon we’ll be able to tell some better ones.

The macho sperm myth

The idea that millions of sperm are on an Olympian race to reach the egg is yet another male fantasy of human reproduction.

Before science was able to shed light on human reproduction, most people thought new life arose through spontaneous generation from non-living matter. That changed a smidgen in the middle of the 17th century, when natural philosophers were able (barely) to see the female ovum, or egg, with the naked eye. They theorised that all life was spawned at the moment of divine creation; one person existed inside the other within a woman’s eggs, like Russian nesting dolls. This view of reproduction, called preformation, suited the ruling class well. ‘By putting lineages inside each other,’ notes the Portuguese developmental biologist and writer Clara Pinto-Correia in The Ovary of Eve (1997), ‘preformation could function as a “politically correct” antidemocratic doctrine, implicitly legitimising the dynastic system – and of course, the leading natural philosophers of the Scientific Revolution certainly were not servants.’

One might think that, as science progressed, it would crush the Russian-doll theory through its lucid biological lens. But that’s not precisely what occurred – instead, when the microscope finally enabled researchers to see not just eggs but sperm, the preformation theory morphed into a new, even more patriarchal political conceit: now, held philosophers and some students of reproduction, the egg was merely a passive receptacle waiting for vigorous sperm to arrive to trigger development. And sperm? The head of each contained a tiny preformed human being – a homunculus, to be exact. The Dutch mathematician and physicist Nicolaas Hartsoeker, inventor of the screw-barrel microscope, drew his image of the homunculus when sperm became visible for the first time in 1695. He did not actually see a homunculus in the sperm head, Hartsoeker conceded at the time, but he convinced himself that it was there.

More powerful microscopes eventually relegated the homunculus to the dustbin of history – but in some ways not much has changed. Most notably, the legacy of the homunculus survives in the stubbornly persistent notion of the egg as a passive participant in fertilisation, awaiting the active sperm to swim through a hailstorm of challenges to perpetuate life. It’s understandable – though unfortunate – that a lay public might adopt these erroneous, sexist paradigms and metaphors. But biologists and physicians are guilty as well. It was in the relatively recent year of 1991, long after much of the real science had been set in stone, that the American anthropologist Emily Martin, now at New York University, described what she called a ‘scientific fairy tale’ – a picture of egg and sperm that suggests that ‘female biological processes are less worthy than their male counterparts’ and that ‘women are less worthy than men’. The ovary, for instance, is depicted with a limited stock of starter eggs depleted over a lifetime whereas the testes are said to produce new sperm throughout life. Human egg production is commonly described as ‘wasteful’ because, from 300,000 egg starter cells present at puberty, only 400 mature eggs will ever be released; yet that adjective is rarely used to describe a man’s lifetime production of more than 2 trillion sperm. Whether in the popular or scientific press, human mating is commonly portrayed as a gigantic marathon swimming event in which the fastest, fittest sperm wins the prize of fertilising the egg.
If this narrative were just a prejudicial holdover from our sexist past – an offensive male fantasy based on incorrect science – that would be bad enough, but continued buy-in to biased information impedes crucial fertility treatments for men and women alike. To grasp how we got here, a tour through history can help.

Scientific understanding of sex cells and the process of human conception is a comparatively recent development. An egg, the largest cell in a human body, is barely visible to the naked eye, and about as big as the period ending this sentence. So the smallest human body cell, a sperm, is utterly invisible to the unaided eye. Sperm were unknown to science until 1677, when the Dutch amateur scientist Antonie van Leeuwenhoek first observed human sperm under a microscope. Around the same time, it was realised that the human ovary produced eggs, although it was not until 1827 that the German biologist Karl Ernst von Baer first reported actual observations of human and other mammalian eggs. After van Leeuwenhoek’s discovery of sperm, it took another century before anyone realised that they were needed to fertilise eggs. That revelation came in the 1760s, when the Italian priest and natural scientist Lazzaro Spallanzani, experimenting on male frogs wearing tight-fitting taffeta pants, demonstrated that eggs would not develop into tadpoles unless sperm was shed into the surrounding water. Bizarrely, until Spallanzani announced his findings, it was widely thought – even by van Leeuwenhoek for some years – that sperm were tiny parasites living in human semen. It was only in 1876 that the German zoologist Oscar Hertwig demonstrated the fusion of sperm and egg in sea urchins.

Eventually, powerful microscopes revealed that an average human ejaculate, with a volume of about half a teaspoon, contains some 250 million sperm. But a key question remains unanswered: ‘Why so many?’ In fact, studies show that pregnancy rates tend to decline once a man’s ejaculate contains fewer than 100 million sperm. Clearly, then, almost half the sperm in an average human ejaculate are needed for normal fertility. A favoured explanation for this is sperm competition, stemming from that macho-male notion of sperm racing to fertilise – often with the added contention that more than one male might be involved. As in a lottery, the more tickets you buy, the likelier you are to win. Natural selection, the thinking goes, drives sperm numbers sky-high in a kind of arms race for the fertilisation prize.

Striking examples of sperm competition do indeed abound in the animal kingdom. Our closest relatives, the chimpanzees, live in social units containing several adult males that regularly engage in promiscuous mating; females in turn are mated by multiple males. Numerous features, such as conspicuously large testes, reflect a particularly high level of sperm production in such mammal species. In addition to large testes, they have fast sperm production, high sperm counts, large sperm midpieces (containing numerous energy-generating mitochondria for propulsion), notably muscular sperm-conducting ducts, large seminal vesicles and prostate glands, and high counts of white blood cells (to neutralise sexually transmitted pathogens). The vesicles and the prostate gland together produce seminal fluid, which can coagulate to form a plug in the vagina, temporarily blocking access by other males. Popular opinion and even many scientists perpetuate the same sperm scenario for humans, but evidence points in a different direction.
In fact, despite various lurid claims to the contrary, there’s no convincing evidence that men are biologically adapted for sperm competition. The story of sperm abundance in promiscuously mating chimpanzees contrasts with what we see in various other primates, including humans. Many primates live in groups with just a single breeding male, lack direct competition and have notably small testes. In all relevant comparisons, humans emerge as akin to primates living in single-male groups – including the typical nuclear family. Walnut-sized human testes are just a third of the size of chimpanzee testes, which are about as large as chickens’ eggs. Moreover, while chimpanzee ejaculate contains remarkably few physically abnormal sperm, human semen contains a large proportion of duds. Quality controls on human ejaculate have seemingly been relaxed in the absence of direct sperm competition.

For species not regularly exposed to direct sperm competition, the only promising alternative explanation for high sperm counts concerns genetic variation. In a couple of rarely cited papers published more than four decades ago, the biologist Jack Cohen at the University of Birmingham in the UK noted an association between sperm counts and the generation of chromosome copies during sperm production. During meiosis, the special type of cell division that produces sex cells, pairs of chromosomes exchange chunks of material through crossing over. What Cohen found is that, across species, sperm counts increase in tandem with the number of crossovers during their production. Crossing over increases variation, the essential raw material for natural selection. Think of sperm production as a kind of lottery in which enough tickets (sperm) are printed to match available numbers (different genetic combinations).

Other findings fly in the face of the popular scenario, too. For instance, most mammalian sperm do not in fact swim up the entire female tract but are passively transported part or most of the way by pumping and wafting motions of the womb and oviducts. Astoundingly, sperm of smaller mammals tend to be longer on average than sperm of larger mammals – a mouse sperm is longer than the sperm of a whale. But even if these were equivalent in size, swimming up to an egg becomes more of a stretch the larger a species gets. Indeed, it might be feasible for a mouse sperm to swim all the way up to the egg – but it is quite impossible for an even smaller blue whale sperm to swim 100 times further up the female tract unaided. Convincing evidence has instead revealed that human sperm are passively transported over considerable distances while travelling through the womb and up the oviducts. So much for Olympic-style racing sperm!

In fact, of the 250 million sperm in the average human ejaculate, only a few hundred actually end up at the fertilisation site high up in the oviduct. Sperm passage up the female tract is more like an extremely challenging military obstacle course than a standard sprint-style swimming race. Sperm numbers are progressively whittled down as they migrate up the female tract, so that less than one in a million from the original ejaculate will surround the egg at the time of fertilisation. Any sperm with physical abnormalities are progressively eliminated along the way, but survivors surrounding the egg are a random sample of intact sperm. Many sperm do not even make it into the neck of the womb (cervix).
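Put as a rough ratio, and taking ‘a few hundred’ as roughly 250 purely for illustration (the figures above give only the order of magnitude):

$$\frac{\sim 2.5\times 10^{2}\ \text{sperm reaching the egg}}{2.5\times 10^{8}\ \text{sperm ejaculated}} \approx 10^{-6}$$

that is, about one survivor per million, consistent with the winnowing just described.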
Acid conditions in the vagina are hostile and sperm do not survive there for long. Passing through the cervix, many sperm that escape the vagina become ensnared in mucus. Any with physical deformities are trapped. Moreover, hundreds of thousands of sperm migrate into side-channels, called crypts, where they can be stored for several days. Relatively few sperm travel directly through the womb cavity, and numbers are further reduced during entry into the oviduct. Once in the oviduct, sperm are temporarily bound to the inner surface, and only some are released and allowed to approach the egg.

Pushing the notion that the fertilising sperm is some kind of Olympic champion has obscured the fact that an ejaculate can contain too many sperm. If sperm surround the egg in excessive numbers, the danger of fertilisation by more than one sperm (polyspermy) arises, with catastrophic results. Polyspermy occasionally occurs in humans, especially when fathers have very high sperm counts. In the commonest outcome, in which two sperm fertilise an egg, cells of the resulting embryo contain 69 chromosomes instead of the usual 46. This is always fatal, usually resulting in miscarriage. Although some individuals survive as far as birth, they always expire shortly afterwards. Because polyspermy typically has a fatal outcome, evolution has evidently led to a series of obstacles in the female reproductive tract that strictly limit the number of sperm allowed to surround an egg.

Polyspermy has practical implications for assisted reproduction in cases of compromised fertility or infertility. For instance, the original standard procedure of introducing semen into the vagina for artificial insemination has been replaced by direct injection into the womb (intrauterine insemination, or IUI). Directly introducing semen into the womb bypasses the reduction of sperm numbers that normally occurs in the cervix, where mucus weeds out physically abnormal sperm. Analyses of clinical data have revealed that depositing 20 million sperm in the womb (less than a 10th of the number in the average ejaculate) is enough to achieve a routine pregnancy rate. Sperm numbers become even more important when it comes to in vitro fertilisation (IVF), with direct exposure of an egg to sperm in a glass vessel. This bypasses every single one of the natural filters between the vagina and the egg. In the early development of IVF, the general tendency was to use far too many sperm. This reflected the understandable aim of maximising fertilisation success, but it ignored natural processes. High sperm numbers, between 50,000 and 0.5 million, increasingly depressed the success rate. Optimal fertilisation rates were achieved with only 25,000 sperm around an egg. Both IUI and IVF potentially increase the risk of polyspermy and the likelihood of miscarriage.

The possibility of polyspermy casts new light on the evolution of sperm counts. Discussions of sperm competition generally focus exclusively on maximising sperm counts, but – as is common in biology – some kind of trade-off is involved. Whereas natural selection can lead to increased sperm production if males are in direct competition, it will also favour mechanisms in the female tract that constrain numbers of sperm around the egg. In promiscuously mating primates, such as chimpanzees, increased oviduct length in females offsets increased sperm production by males.
This presumably limits the numbers of sperm approaching the egg. It also shows that the female’s role in fertilisation is by no means as passive as is often assumed. The entrenched idea that ‘the best sperm wins’ has elicited various suggestions that some kind of selection occurs, but it is difficult to imagine how this could possibly happen. The DNA in a sperm head is tightly bound and virtually crystalline, so how could its properties be detected from outside? Experiments on mice indicate, for instance, that there is no selection according to whether a sperm contains a male-determining Y-chromosome or a female-determining X-chromosome. It seems far more likely that human fertilisation is a gigantic lottery with 250 million tickets, in which – for healthy sperm – successful fertilisation is essentially the luck of the draw. Other puzzling features of sperm also await explanation. It has long been known, for instance, that human semen contains a large proportion of structurally abnormal sperm with obvious defects such as double tails or tiny heads. The ‘kamikaze sperm’ hypothesis proposed that these dud sperm in fact serve different functions in competition, such as blocking or even killing sperm from other men. However, this has since been effectively discredited. The entrenched notion that human sperm, once ejaculated, engage in a frantic race to reach the egg has completely overshadowed the real story of reproduction, including evidence that many sperm do not dash towards the egg but are instead stored for many days before proceeding. It was long accepted as established fact that human sperm survive for only two days in a woman’s genital tract. However, from the mid-1970s on, mounting evidence revealed that human sperm can survive intact for at least five days. An extended period of sperm survival is now widely accepted, and it could be as long as 10 days or more. Other myths abound. Much has been written about mucus produced by the human cervix. In so-called ‘natural’ methods of birth control, the consistency of mucus exuding from the cervix has been used as a key indicator. Close to ovulation, cervical mucus is thin and has a watery, slippery texture. But precious little has been reported regarding the association between mucus and storage of sperm in the cervix. It has been clearly established that sperm are stored in the crypts from which the mucus flows. But our knowledge of the process involved is regrettably restricted to a single study reported in 1980 by the gynaecologist Vaclav Insler and colleagues of Tel Aviv University in Israel. In this study, 25 women bravely volunteered to be artificially inseminated on the day before scheduled surgical removal of the womb (hysterectomy). Then, Insler and his team microscopically examined sperm stored in the crypts in serial sections of the cervix. Within two hours after insemination, sperm colonised the entire length of the cervix. Crypt size was very variable, and sperm were stored mainly in the larger ones. Insler and colleagues calculated the number of crypts containing sperm and sperm density per crypt. In some women, up to 200,000 sperm were stored in the cervical crypts. Insler and colleagues also reported that live sperm had actually been found in cervical mucus up to the ninth day after insemination. Summarising available evidence, they suggested that after insemination the cervix serves as a sperm reservoir from which viable sperm are gradually released to make their way up the oviduct. 
This dramatic finding has been widely cited yet its implications largely ignored, and there has never been a follow-up study. In his textbook Conception in the Human Female (1980) – more than 1,000 pages in length – Sir Robert Edwards, a recipient of the 2010 Nobel prize for the development of IVF, mentioned cervical crypts in a single sentence. Since then, many other authors have mentioned sperm storage in those cervical crypts equally briefly. Yet storage of sperm, with gradual release, has major implications for human reproduction. Crucially, the widespread notion of a restricted ‘fertile window’ in the menstrual cycle depends on the long-accepted wisdom that sperm survive only two days after insemination. If sperm can in fact survive for 10 days or more, the basis for so-called ‘natural’ methods of birth control through avoidance of conception is radically eroded. Sperm storage is also directly relevant to attempts to treat infertility. Another dangerous misconception is the myth that men retain full fertility into old age, starkly contrasting with the abrupt cessation of fertility seen in women at menopause. Abundant evidence shows that, in men, sperm numbers and quality decline with increasing age. Moreover, it has recently emerged that mutations accumulate about four times faster in sperm than in eggs, so semen from older men is actually risk-laden. Much has been written about the fact that in industrialised societies age at first birth is increasing in women, accompanied by slowly mounting reproductive problems. A proposed solution is the highly invasive and very expensive procedure of ‘fertility preservation’, in which eggs are harvested from young women for use later in life. However, increasing reproductive problems with ageing men, notably the more rapid accumulation of sperm mutations, have passed largely unmentioned. One very effective and far less expensive and invasive way of reducing reproductive problems for ageing couples would surely be to store semen samples from young men to be used later in life. This is just one of the benefits to be gained from less sexism and more reliable knowledge in the realm of human reproduction. Nowadays, the story of Hartsoeker’s homunculus might seem veiled in the mists of time, mentioned only as an entertaining illustration of blunders in the early exploration of human sex cells. But its influence, along with the macho-male bias that spawned it, has lived on in subtler form among the cultural stereotypes that influence the questions we ask about reproductive biology. Source of the article

GOATReads: Philosophy

Aha = wow

We surveyed thousands of scientists in four countries and learned just how important beauty is to them. When Paulo was an undergraduate, he was tasked with taking photographs of neurons. ‘A single cell,’ he came to notice, ‘it’s a whole universe.’ Looking at cells beneath a microscope is not unlike gazing at stars in the sky, Paulo realised. ‘We all know they are there, but until you see them with your own eyes, you don’t have that experience of awe, of wow.’ It was then, as he put it, that he ‘fell in love with these cells for the first time.’ Now a stem cell biologist at a university in the United States, Paulo remains enthralled with ‘pretty’ cells. His desktop background is filled with their images. But the ‘aesthetic’ of stem cells, Paulo insists, is not just in the pretty pictures they make. ‘There is some other type of beauty that is not visual,’ he explained to us in an interview. ‘I’m not sure what it is. Perhaps there is some kind of…’ His voice trailed off. Searching for the right words, Paulo continued: ‘It’s almost an appreciation for the complexity of things. But what you realise is that it’s all uncoverable. We can know it all.’ Paulo fell in love with science for the visual beauty of nature, but his continued passion for it owes to what we call the beauty of understanding – the aesthetic experience of new insight into the way things are, when encountering the hidden order or inner logic underlying phenomena. To grow in understanding, Paulo reflected, is ‘something very satisfying’. ‘The beauty in science,’ he emphasised, ‘that is, at least for me, a huge motivator.’ Public conversations about beauty in science tend to focus on the beauty anyone can see, say, in photographs of stars like those released from the James Webb Space Telescope, or the beauty that physicists, for instance, ascribe to elegant equations. But these foci miss something important. Over the past three years, we have studied thousands of scientists on three different continents, asking them about the role of beauty in their work. Our research left us convinced that the core aesthetic experience science has to offer is not primarily about sensory experiences or formulas. At the deepest level, what motivates scientists to pursue and persist in their work is the aesthetic experience of understanding itself. Centring the beauty of understanding presents an image of science more recognisable to scientists themselves and with greater appeal for future scientists. Moreover, it foregrounds the need for institutionally supporting scientists in their quests for understanding, which are all too often stifled by the very system on which they depend. ‘Beauty’, for many, is the last word that comes to mind when thinking of science. Where the word ‘science’ connotes ‘objective’, ‘value-free’ and ‘rational’, the term ‘beauty’ evokes the ‘subjective’ and ‘emotional’. Such associations are reinforced by the nerdy and aloof scientist stereotype, which remains common among young people, according to decades of research using the ‘Draw-A-Scientist Test’. Modern education systems institutionalise this dichotomy with the division of the sciences and humanities into separate classes and colleges. Such tensions hark back at least to the 19th century, when English Romantic poets such as William Blake and John Keats accused scientists of stripping beauty and mystery away from nature. 
Keats, despite his own considerable scientific training, reputedly complained that Isaac Newton ‘destroyed all the poetry of the rainbow, by reducing it to the prismatic colours.’ In his poem Lamia, Keats writes: ‘all charms fly / At the mere touch of’ natural philosophy or science, which, he laments, will ‘clip an Angel’s wings, / Conquer all mysteries by rule and line,’ and ‘unweave a rainbow’. This stereotype of science, however, contrasts with what scientists have had to say in recent decades, from Nobel Prize-winning physicists such as Subrahmanyan Chandrasekhar and Frank Wilczek to the British biologist Richard Dawkins. These scientists portray science as an impassioned endeavour, and a font of beauty, awe and wonder. Dawkins, for instance, takes aim at Keats when he argues in his book Unweaving the Rainbow (1998) that, far from being a source of disenchantment, science nourishes an appetite for wonder. ‘The feeling of awed wonder that science can give us,’ Dawkins contends, ‘is one of the highest experiences of which the human psyche is capable.’ ‘It is a deep aesthetic passion,’ he writes, ‘to rank with the finest that music and poetry can deliver.’ These testimonies call to mind pioneering scientists like Alexander von Humboldt, whose passion for beauty fuelled his work, as Andrea Wulf shows in her exquisite biography The Invention of Nature (2015). Such cases, however, raise several questions: are aesthetic experiences in science the rare prerogative of geniuses, or do most contemporary scientists often find beauty in their work? How do encounters with beauty shape scientists personally, and their practice of science – whether positively or negatively? To answer these questions, we conducted the world’s first large-scale international study of the role of aesthetics in science. In 2021, our team surveyed nationally representative samples of nearly 3,500 physicists and biologists in four countries: India, Italy, the United Kingdom and the United States. Over the past three years, we also interviewed more than 300 of them in depth. Our data have made clear to us not only that science is an aesthetic quest – just like art, poetry or music – but also that the heart of this quest is the beauty of understanding. Our research identified three types of beauty in science, each of which shapes the course of scientists’ work in distinct yet interconnected ways. What is visually or aurally striking – what we refer to as sensory beauty – is a common source of beauty for scientists. Indeed, the majority of scientists (75 per cent) see beauty in the objects and phenomena they study, from cells to stars, and in the symmetries, simplicity and complexity of nature, which can evoke emotions of awe and wonder. Many even find such beauty in scientific models and instruments. As was the case for Paulo, sensory beauty is often what draws people to science. For instance, a biologist in the UK told us that beauty was what drew her to her field of study: ‘I personally think that a pollinator visiting a flower, both of those things to me are incredibly beautiful. And the two of them interacting with each other, I think, is one of the most beautiful things in nature, which is why I study pollination.’ Of course, not all aspects of scientific work are beautiful in this sense. Many respondents told us about the absence of sensory beauty in their work. 
‘To be honest with you, if anything, I would describe the real practice of science as ugly or hideous,’ one UK physicist told us. ‘There is an awful lot of boring drudge work, frustration, cursing, swearing, like I’m sure there is in any developmental job, and beautiful is not a word I would use at that point.’ We also found no shortage of complaints about ugliness in scientific workplaces such as dark, dingy labs. But even those who do not find much sensory beauty in their work can encounter beauty in another sense. Physicists in particular often rely on what we call useful beauty. This second type of beauty involves treating aesthetic properties such as simplicity, symmetry, aptness or elegance as heuristics or guides to truth. For instance, the Nobel Prize-winning physicist Murray Gell-Mann said that ‘in fundamental physics, a beautiful or elegant theory is more likely to be correct than a theory that is inelegant.’ Paul Dirac, another Nobel Prize-winner, went so far as to say that ‘it is more important to have beauty in one’s equations than to have them fit experiment.’ Not all physicists, however, are enamoured with useful beauty. There is no guarantee the aesthetic properties that facilitated understanding in the past will do so in the future. And, historically, many theories considered beautiful turned out to be wrong, while ugly ones turned out to be right – and then were no longer considered ugly. Some argue that physics is in a similar situation today. In Farewell to Reality (2013), Jim Baggott complains that theories of super-symmetry, super-strings, the multiverse and so on – driven largely by beautiful mathematics – are ‘fairytale physics’. They are ‘not only not true’, he contends, they are ‘not even science’. Similarly, in Lost in Math (2018), Sabine Hossenfelder argues that aesthetic criteria have become a source of cognitive bias leading physics astray. Our survey found that physicists are evenly divided when it comes to the reliability of useful beauty. On the question of whether beautiful mathematics is a guide to truth, 34 per cent disagree while 35 per cent agree; regarding whether elegant theories are more likely to be correct than inelegant ones, 42 per cent disagree while 23 per cent agree. As to Dirac’s claim about beauty in equations being more important than experimental support, a striking 77 per cent of physicists disagree, while only 8 per cent agree with the statement. Beauty’s utility in science is not limited to physics or to its heuristic role in theory selection; it can also be significant when designing experiments in physics and biology. ‘I try to make our experimental design as elegant and as direct as possible, which for me is a form of beauty,’ a US biologist told us. ‘Because the less effort we have to put into getting an answer the better. The faster, the cleaner, the fewer different parameters we have to control for in an experiment, the more convincing and clean, I think, the information will be that they get out of it.’ Many scientists similarly emphasised the relevance of aesthetic considerations for designing a project, writing code and analysing data. Beauty, thus, has relevance in science beyond theory choice. The third type of beauty, which we find is most prevalent and argue is the most important, is what we call the beauty of understanding. The vast majority of scientists describe moments of understanding as beautiful. 
In our surveys and interviews, when asked where they find beauty in their work, scientists regularly pointed to times when they grasped the hidden order, inner logic or causal mechanisms of natural phenomena. These moments, one UK physicist told us, are ‘like looking into the face of God for non-religious people – how you can look at something and think, oh my God, that’s how things actually work, that’s how things are!’ A US biologist concurred: ‘It is recognising,’ he explained, ‘this is what’s going on. There’s a leap to the truth, or a leap to a sense of generalisation or something that is beyond the particular but in some way represents the real thing that’s there, the real thing that’s going on. We think of that as beautiful.’ Nearly 95 per cent of scientists in the four countries we studied report experiencing this kind of beauty at some point in their scientific practice. If the notion of the beauty of understanding seems strange, that is due to a dualistic perspective – the philosophical roots of which stretch back through Immanuel Kant to René Descartes – that treats reason and emotion, objectivity and subjectivity, science and art as binary opposites. But there is an alternative perspective, which brings into view the continuity of reason and emotion, objectivity and subjectivity, science and art. In this view, it is as natural to recognise beauty in understanding as in a sunset. In the 1890s, the polymath and founder of American pragmatism Charles S Peirce coined the term ‘synechism’ for ‘the tendency to regard everything as continuous’ (the term’s roots are in the Greek word for ‘continuous’). Embracing this philosophy of continuity, his fellow pragmatist John Dewey developed a theory of aesthetic experience encompassing science and art as differing only in ‘tempo and emphasis’, not in kind. Each, Dewey argued, manifests experience that has been transformed from a state of fragmentation, turbulence or longing to a state of integration, harmony or – his favoured term – ‘consummation’. The process of transformation is a struggle, but that only enhances the satisfaction of the consummatory experience it brings about. From initial puzzles through the ugliness of data collection to the closure of theoretical explanation, Dewey’s theory of aesthetic experience describes just the sort of journey the scientists we spoke with articulated when describing their quests for the beauty of understanding. Once one considers, with Dewey, aesthetic experience as a potential latent in any fragmented experience – recognising its scope extends beyond sensory or useful beauty – one can see that science enables aesthetic experience through, for instance, the integration of seemingly disparate observations and ideas in moments of understanding. The process of moving from a puzzling observation to a theory that accounts for it, or from a theory to a discovery of evidence it predicts, can be long, arduous and messy – a far cry from sensory beauty. But when understanding emerges, so does beauty as the quality making it stand out against the rough course of experience that ran before. Feeling the beauty of the moment clues scientists in to the possibility that they just may have arrived at something transformative – emotion and reason work hand in hand. In our interviews, many scientists recalled voicing their sense of beauty in moments of understanding with terms like ‘aha’, or evoked feelings of a ‘high’ or a ‘kick’ to describe it. 
As Dewey noted in his essay ‘Qualitative Thought’ (1931), such ‘ejaculatory judgments’ voiced ‘at the close of every scientific investigation … [mark] the realised appreciation of a pervading quality,’ namely, the distinctly aesthetic quality of the experience. It is this quality that scientists refer to as ‘beauty’ in relation to understanding. Both sensory and useful beauty are oriented to the beauty of understanding. The beauty scientists find in the phenomena is an invitation to deeper understanding. And it is this understanding that imparts value to the technologies, experiments and theories employed in research. The deeper significance of the aesthetic properties that scientists appeal to lies in their potential for facilitating understanding. The beauty of understanding, further, can help scientists persist through the challenges of research. After telling us of his passion for the beauty in science, Paulo shared that, a year earlier, he nearly quit his job. ‘I was hospitalised out of stress,’ he admitted. ‘I had shingles. That stuff is an endogenous virus that comes out when you’re stressed. It was horrible.’ Taken aback, we asked him to explain what stressed him out so much. ‘I was having just so many grants rejected. And I was like: “Oh my gosh, what’s going to happen now?”’ Like most scientists, Paulo relies on competitive research grants to do his work. He had spent countless hours applying for research funding, but was having no success. Not only had he become physically ill, he was also sick of trying. He made a LinkedIn account. ‘I was ready to look for another job,’ he said. ‘I was completely devastated by the fact that I just didn’t want to write more grants.’ And, without funding, his scientific quest would end. Paulo is no outlier in considering leaving his position due to stress. Depression and anxiety have become commonplace among researchers in their early careers. More and more scientists seem to be leaving their jobs; 50 per cent of academic scientists these days quit after five years. A toxic work culture may be pushing them out. Leading scientific publications such as Nature have issued warnings of a mental health crisis in science, which is causing attrition that threatens the future of the profession. Paulo nearly became one of these casualties. Fortunately, a conversation with a mentor intervened. ‘Everything just changed all of a sudden, because she was just telling me the grants are not really the most important thing in the world.’ His mentor told him not to neglect ‘all the other good things’ he was doing, and to take pride that his students were ‘very motivated about science’. When Paulo faltered in his motivation to pursue the grants he needed for research, renewal came with realising that his guidance was invaluable for his students’ quests for the beauty of understanding. Paulo did not quit his job. And he eventually got the funding he needed to continue his research. Beauty helped him persevere. Beauty permeates science, from the sensory beauty that often draws people to a scientific career in the first place, to aesthetic criteria such as elegance, which shapes theory selection and experimentation for better or worse. But most important is the beauty of understanding, an aesthetic experience in its own right, the quest for which deepens scientists’ motivation to pursue and persist in their careers. 
Appreciating the aesthetic nature of the scientific quest is crucial for understanding science and why people devote their lives to it. Two-thirds of the scientists we studied insist it is important for scientists to encounter beauty, awe and wonder in their work. But, for too long, popular culture has been saturated with false myths about science and beauty, misguiding young people about what it means to be a scientist and what doing science is actually like. As we learned through our research, scientists are motivated by beauty. Their quests to achieve the beauty of understanding are fuelled by both reason and emotion. Dualistic assumptions about ‘objective’ science and ‘subjective’ judgments of beauty get in the way of seeing the continuity of science with other aesthetic ventures. Making the beauty of understanding central to public conversations about science will draw much-needed attention to questions of how institutions can better support and leverage scientists’ motivating passion. How many scientists would find greater motivation for their work if there were fewer barriers to realising the beauty of understanding that drives them in the first place? Institutional incentives to ‘publish or perish’ and bring in short-term grant funding inadvertently contribute to an environment where many scientists like Paulo find themselves burnt out, or so disillusioned with toxic competition that they leave academia altogether. For scientists to thrive, institutional incentives need to capitalise upon, not crush, the beauty that drives scientists. Funders might offer flexible and longer-term research grants that facilitate creative interdisciplinary collaborations. Considering that more frequent aesthetic experiences at work are associated with higher levels of wellbeing among scientists, workshops incorporating aesthetic and appreciative reflection about research practices may help scientists reconnect with the deeper, motivating beauty of their work. As Andrea Wulf told us in a conversation, science is a palace with many doors; yet our educational systems tend to guide students through only a few conventional entry points. The beauty of understanding is a neglected pathway. By centring such beauty in public discussions about science, we can better engage bright, creative youth, helping them see that scientists are people like themselves. This is exactly what inspired one young woman we interviewed to become a physicist. At age 17, Tricia had babysat for a physicist’s young children, so she was used to hearing questions about what makes a rainbow and why the sky is blue. But what surprised and enchanted her was hearing these very questions posed by the children’s father. He explained his wonder this way: ‘It’s because I’m an artist. So physicists are artists who can’t draw.’ This ‘triggered something in me’, Tricia told us, as she had always wanted ‘to paint to describe nature’ but lacked the skill. This physicist’s example convinced her she need not give up her dream entirely. What she discovered, rather, was that science could allow her to capture the beauty of nature in an even more profound way – not just through images, but through understanding. Now a theoretical particle physicist at a university in the UK, Tricia is living her dream by uncovering the hidden structures of the natural world, painting with the brush of theory. 
Who else might become a scientist if only they, like Tricia, were shown the beauty of understanding that science offers? Source of the article

GOATReads: History

The unseen masterpieces of Frida Kahlo

Lost or little-known works by the Mexican artist provide fresh insights on her life and work. Holly Williams explores the rarely seen art included in a new book of the complete paintings. You know Frida Kahlo – of course you do. She is the most famous female artist of all time, and her image is instantly recognisable, and unavoidable. Kahlo can be found everywhere, on T-shirts and notebooks and mugs. While writing this piece, I spotted a selection of cutesy cartoon Kahlo merchandise in the window of a shop, maybe three minutes' walk from my home. I bet many readers are similarly within striking distance of some representation of her, with her monobrow and traditional Mexican clothing, her flowery headbands and red lipstick. Partly, this is because her own image was a major subject for Kahlo – around a third of her works were self-portraits. Although she died in 1954, her work still reads as bracingly fresh: her self-portraits speak volumes about identity and the need to craft your own image and tell your own story. She paints herself looking out at the viewer: direct, fierce, challenging. All of which means Kahlo can fit snugly into certain contemporary, feminist narratives – the strong independent woman, using herself as her subject, and unflinchingly exploring the complicated, messy, painful aspects of being female. Her paintings intensely represent dramatic elements of a dramatic life: a miscarriage, and being unable to have children; bodily pain (she was in a horrific crash at 18, and suffered physically all her life); great love (she had a tempestuous relationship with the Mexican artist Diego Rivera, as well as many other lovers, male and female, including Leon Trotsky), and great jealousy (Rivera cheated on her repeatedly, including with her own sister). But that's not all they show – her art is not always just about her life, although you could be forgiven for assuming it was. Books are written about her trauma, her love life; she's been the subject of a Hollywood movie starring Salma Hayek. Kahlo has become a bankable blockbuster topic, guaranteed to get visitors through the door of galleries, even if what they see is often more about the woman than her art. But what about her work? For some art historians, the relentless focus on the person rather than the output has become tiresome, which is why a new, monumental book – Frida Kahlo: The Complete Paintings – has just been published by Taschen, offering for the first time a survey of her entire oeuvre. Mexican art historian Luis-Martín Lozano, working with Andrea Kettenmann and Marina Vázquez Ramos, provides notes on every single Kahlo work we have images of – 152 in total, including many lost works we only know from photographs. Speaking to Lozano on a video call from Mexico City, I ask whether a comprehensive survey of her work is overdue, despite there being so many shows about her all over the world. "As an art historian, my main interest in Kahlo has been in her work as an artist. If this had been the main concern of most projects in recent decades, maybe I would say this book has no reason to be. But the truth is, it hasn't," he says. "Most people at exhibitions, they're interested in her personality – who she is, how she dressed, who does she go to bed with, her lovers, her story." Because of this, exhibitions and their catalogues have often focused on that story, and tend to "repeat the same paintings, and the same ideas about the same paintings. They leave aside a whole bunch of works," says Lozano. 
Books also re-tread the same ground: "You repeat the same things, and it will sell – because everything about Kahlo sells. It's unfortunate to say, but she's become a merchandise. But this explains why [exhibitions and books] don't go beyond this – because they don't need to." The result is that certain mistakes get made – paintings mis-titled, mis-dated, or the same poor-quality, off-colour photographs reproduced. But it also means that ideas about what her works mean get repeated ad infinitum. "The interpretation level becomes contaminated," suggests Lozano. "All they say about the paintings, over and over, is 'oh it's because she loved Rivera', 'because she couldn't have a kid', 'because she's in the hospital'. In some cases, it is true – but there's so much more to it than that." The number of paintings – 152 – is not an enormous body of work for a major artist. And yet, astonishingly, some of these have never been written about before: "never, not a single sentence!" laughs Lozano. "It's kind of a mess, in terms of art history." Offering a comprehensive survey of her work means bringing together lost or little-known works, including those that have come to light in auctions in the past decade or so, and others that are rarely loaned by private collectors and so have remained obscure. Lozano hopes to open up our understanding of Kahlo. "First of all – who was she as an artist? What did she think of her own work? What did she want to achieve as an artist? And what do these paintings mean by themselves?" This means looking again at early works, which might not be the sort of thing we associate with Kahlo – but reveal how much she was inspired by her father, Guillermo, a professional photographer and an amateur painter of floral still lifes. Pieces such as the little-known Still Life (with Roses) from 1925, which has not been exhibited since 1953, are notably similar in style to his. Kahlo continued to paint astonishing, vibrant still lifes her whole career – although they are less well-known to the general public than her self-portraits, less collectable, and less studied. An understanding of their importance to her has been strengthened since Lozano and co discovered documents revealing Kahlo's life-long interest in the symbolic meaning of plants. She learnt this from her father, and discussed it in letters with her half-sister Margarita (her father's child from an earlier marriage), who became a nun. Kahlo and Margarita's letters "talk about the symbolic meaning of flowers and fruits and the garden of Eden, that our body is like a flower we have to take care of because it was ripped off from paradise," says Lozano. "This is amazing, and proves why this topic of still lifes and flowers had such meaning to her." He offers a new interpretation of a painting from 1938, called Tunas, which depicts three prickly pears in different stages of ripening – from green and unripe to a vibrant, juicy, blood-red – as representing Kahlo's own understanding of her maturation as an artist and person, but also as potentially having religious symbolism (the bloody flesh evoking sacrifice). The Complete Paintings book also takes pains to reveal the depths of Kahlo's intellectual engagement with art-world developments – countering the notion that she was merely influenced by meeting Rivera in 1928, or that her work is some self-taught, instinctive howl of womanly pain. 
Her paintings reveal Kahlo's research into and experiments in art movements, from the youthful Mexican take on Modernism, Stridentism, to Cubism and later Surrealism. "Frida Kahlo's paintings were not only the result of her personal issues, but she looked around at who was painting, what were the trends, the discussions," says Lozano. He points to her first attempts at avant-garde paintings – 1927's Pancho Villa and Adelita, and the lost work If Adelita, both of which use sharp, Modernist lines and angles – as proof that "she was looking at trends in Mexican art even before she met Rivera". You can also see her interest in Renaissance Old Masters, which she discovered prints of in her father's library, in early work: it's suggested her 1928 painting, Two Women (Portrait of Salvadora and Herminia), depicting two maids against a lush, leafy background, was inspired by Renaissance portraiture traditions, as seen in the works of Leonardo da Vinci. The work was bought in the year it was painted, and its location remained unknown until it was acquired by the Museum of Fine Arts, Boston, in 2015. Given she only made around 152 paintings, a surprising number are lost. But then, Kahlo wasn't so successful in her lifetime – she didn't have so many shows, or sell that many works through galleries and dealers. Instead, many of her paintings were sold or given away directly to artists, friends and family, as well as movie stars and other glittering admirers, often living abroad. That means less of a paper trail, making it harder to track down works. In all honesty, looking at black-and-white pictures of lost portraits probably isn't going to prove revelatory to anyone beyond the most hard-core scholars – although there are some astonishing paintings still missing. One lost 1938 image, Girl with Death Mask II, depicts a little girl in a skull mask in an empty landscape; it chills, and we know Kahlo discussed this painting in relation to her sorrow at being unable to conceive. Check your attics, too, for Kahlo's painting of a horrific plane crash – now known only from a photograph – which she's known to have made in a period of great personal turmoil in the years after discovering her sister's affair with Rivera in 1935. Like another of her very well-known paintings, Passionately in Love or A Few Small Nips, depicting a woman murdered by her husband, The Airplane Crash was based very closely on a real-life news report; Lozano's team have unearthed both original articles in their research. While Kahlo may have been drawn to these traumatic events because she was suffering pain in her own life, her almost documentary precision in rendering these external news stories should not be overlooked. Kahlo was an avowed Communist, and politically engaged all her life, but it is in less well-known works from the final years of her life that you see this most explicitly emerge. At this time, she suffered a great deal of pain, and underwent many operations, eventually including amputation below the knee. But Kahlo continued painting till 1953, with difficulty but also with renewed purpose. Her biographer Raquel Tibol documented her saying: "I am very concerned about my painting. More than anything, to change it, to make it into something useful, because up until now all I have painted is faithful portraits of my own self, but that's so far removed from what my painting could be doing to serve the [Communist] Party. 
I must fight with all my strength so that the small amount of good I am able to do with my health in the way it is will be directed toward helping the Revolution. That's the only real reason for living." This resulted in works like 1952's Congress of the Peoples for Peace (which has not been exhibited since 1953), showing a dove in a broad fruit tree – and two mushroom clouds, representing Kahlo's nightmares about nuclear warfare. She became an active member of many peace groups – collecting signatures from Mexican artists in support of a World Peace Council, helping form the Mexican Committee of Partisans for Peace, and making this painting for Rivera to take to the Congress of the Peoples for Peace in Vienna in 1952. Doves feature in several of her late still lifes – as do an increasing number of Mexican flags or colour schemes (using watermelons to reflect the green, white and red of the flag), suggesting Kahlo's intention was that her work should show her nationalism and Communism. More uncomfortably, her final paintings include loving depictions of Stalin, as her politics became more militant. Perhaps her most moving late painting, however, is a self-portrait: Frida in Flames (Self-portrait Inside a Sunflower). It's harrowing, painted in thick, colourful impasto; shortly before her death, Kahlo slashed at it with a knife, scraping away the paint, frustrated at her inability to make work or perhaps in an acknowledgment that her end was nearing. Tibol, who was witness to this decisive, destructive act, called it "a ritual of self-sacrifice". "It's a tremendous image," says Lozano. "It's very interesting in terms of aesthetics – when your body is not working anymore, when your brain is not enough to portray what you want to paint, the only source she's left with is to deconstruct the image. This is a very contemporary, conceptual position about art: that the painting exists not only in its craft, but also what I think the painting stands for." We are left with a painting that is imperfect, certainly a world away from the fine, smooth surfaces and attention to detail of Kahlo's more famous self-portraits – but it is nonetheless an astonishingly powerful work that deserves to be known. There is something tremendously poignant in an artist so well-known for crafting their own image using their final creative act to deliberately destroy that image. Even in obliterating herself, Kahlo made her work speak loudly to us. Source of the article

GOATReads:Politics

South Africa has chosen a risky approach to global politics: 3 steps it must take to succeed

South Africa finds itself in a dangerous historical moment. The world order is under threat from its own primary architect. The US wants to remain the premier global political power without taking on any of its responsibilities. This dangerous moment also presents opportunities. South Africa’s response has been one of strategic autonomy. This involves taking independent and non-aligned positions on global affairs, to navigate between competing world powers. But South African policymakers lack the political acumen and bureaucratic ability required to navigate this complex global order and to exploit the new possibilities. Strategic autonomy is not the norm in global affairs. It is very rare for small countries to succeed at it without at least some costs. Drawing from our expertise – as a political scientist and an economist working on the international economy – we conclude that if South Africa is to succeed in its strategic autonomy ambitions the country must do three things. First, its economic and foreign policy priority must be the African continent. Second, it must pursue bureaucratic excellence, especially in its diplomatic and security apparatus. Third, it must prepare for the reprisals that are likely to follow its choice of an independent path in global affairs. A handful of countries have been able to pursue strategic autonomy in navigating the international system. They include Brazil, India and the Republic of Ireland. These countries have four necessary assets: global economic importance; leverage; bureaucratic capability; and political will and agency manifested in foreign policy cohesiveness and agility. India’s size – over 1.4 billion people and the fourth largest market in the world – makes it a location of both production and consumption. This has become more important given the US and western desire to create a counterbalance to China as a low-cost producer and a market for exports. Brazil’s assets are its geographic size, its mid-size population (three times South Africa’s), its mineral wealth, and its political importance to South America. It is also the tenth largest economy in the world. Ireland is a small country, but it uses its strategic location in the European Union to influence global affairs. South Africa is currently lacking on all these fronts. But, with strategic planning and reforms, and in partnership with other African countries, it is possible to enhance the country’s strategic importance to the global economy. Where to from here? If South Africa is to succeed as a nation, become globally relevant, and have autonomy in the global economy, it must recognise its challenges, understand their drivers and address them pragmatically. So what should it do? First, it’s important to recognise that South Africa is a small country. Its economy is marginal to the rest of the world. The continent of Africa has a population of around 1.5 billion people, which is likely to double by 2070 – the only part of the global economy in which demographic growth will occur. Purely in terms of population size, Africa will be more important than ever before. This can only be a strategic lever if countries across the continent integrate their economies more strongly. Thus, South Africa’s economic and foreign policy should focus on Africa and on building the African Continental Free Trade Area. Without this, its long-term economic development is in danger and it can’t develop the political leverage that enables independence in global affairs. 
With its African partners, South Africa should be rebalancing its international trade. It should shift from being an exporter of raw materials to being a manufacturing and service economy. Many countries across Africa have deposits of minerals that are strategically important to the global economy, especially as the climate transition shapes relations. This must be used to build integration across the continent so the region engages with powerful economies as a regional bloc. Second, professional excellence must be taken seriously. South Africa’s political stewardship of the economy has been poor, and driven by narrow political objectives of the ruling party-linked elite. For example, policy in the important mining sector has been chaotic, at best. It has not served as a developmental stimulant or as a political lever for strategic autonomy. Specific to international affairs, South Africa has to professionalise the diplomatic corps. It has been significantly weakened and its professional capability eroded through political appointments. These make up the vast majority of ambassadorial deployments. There should be limits to the political appointments of ambassadors from the cohort of former African National Congress politicians and their family members. In addition, South Africa should have fewer embassies, located in more strategic countries, with budgets appropriate to the job. It is embarrassing that embassies in places like London don’t have enough budget to market the country, undertake advocacy and advance the country’s national agenda. But professional excellence needs to be extended far beyond the diplomatic corps. South Africa cannot continue to be compromised by incompetent municipal and national governance. And this is not solely the result of corruption and cadre deployment. It’s also tied to a transformation agenda that eschews academic and professional excellence. In addition, South Africa cannot pretend to be leading an independent path in global affairs without having the security apparatus that goes with such leadership. On this score, the country is sadly lacking. Its security apparatus – the South African National Defence Force, police and intelligence service – needs attention. The defence force is poorly funded and, like the police and intelligence, largely a “social service” for former ANC operatives and combatants. Third, South Africa needs to prepare for the reprisals that are likely to follow if it charts an independent path in global affairs, such as the Trump administration’s current efforts to discipline South Africa for taking an autonomous position on Gaza. This requires understanding the forms such reprisals could take, anticipating their consequences and preparing for them. This would require diplomatic agility to proactively seek new markets, alternative sources of investment and additional political allies. In contrast, South Africa’s responses have largely been reactive. While it’s a dangerous and uncertain world, it is also full of new possibilities. A new bipolar or multipolar world could enable South Africa and Africa to play off global powers against each other, to maximise opportunities for national economic development and independence. This will only happen if South Africans collectively become agents of their own change. It will require developing leverage which others take seriously, and a government and public administration that works for the people of the country. Source of the article

Health v heritage: Pigeon feeding ban sparks debate in India

A recent court ban on feeding pigeons in public spaces in the western Indian city of Mumbai has become a major flashpoint between civic bodies, public health activists and bird lovers. This month, hundreds of people clashed with police twice while protesting the closure of a decades-old pigeon feeding spot, or a kabutarkhana. (Kabutar is the Hindi word for pigeon.) Some tore down the tarpaulin sheets covering the spot and threatened an indefinite hunger strike. Police briefly detained about 15 people at another protest, media reports said. Authorities imposed the ban over concerns about the health hazards posed by pigeon droppings. The problem is not unique to Mumbai. In Venice, feeding pigeons in historic squares is banned. Singapore imposes hefty fines, and New York and London have regulated feeding zones. In India too, Pune and Thane cities in Maharashtra state - of which Mumbai is the capital - have imposed penalties on feeding pigeons. Delhi is mulling an advisory against feeding the birds in public spaces. The crackdown has angered animal lovers and religious feeders, as pigeons have long been woven into India's cultural fabric. Films often use shots of grain-feeding pigeons to evoke cities like Mumbai and Delhi, where the birds are a familiar presence on balconies and air-conditioners. Some of Mumbai's kabutarkhanas are iconic heritage structures and are said to have originated as charitable spaces where communities could donate grain. There are religious sentiments involved as well. In Mumbai, the Jain community, which considers feeding pigeons a pious duty, has been vocal in its protests. Elsewhere too, many share a bond with pigeons - seen as symbols of peace and loyalty. In Delhi, Syed Ismat says he has been feeding the birds for 40 years and considers them his family. "They are innocent. Perhaps the most innocent of all creatures. All they ask for is a little kindness," said Mr Ismat. But these sentiments are pitted against studies which show that prolonged exposure to pigeon droppings poses risks of pulmonary and respiratory illnesses. A boom in India's pigeon population in recent years has heightened this risk, prompting the restrictions. Delhi-based biodiversity expert Faiyaz Khudsar says easy availability of food has led to overpopulation of pigeons in many countries. In India, he said, the challenge is compounded by a decline in birds like the goraiya, commonly known as the house sparrow, which are increasingly being displaced by pigeons. "With easy food and no predators, pigeons are breeding faster than ever. They are outcompeting other urban birds, creating an ecological loss," Mr Khudsar said. The 2023 State of India's Birds report says pigeon numbers have risen more than 150% since 2000 - the biggest jump among all birds - leaving homes and public spaces covered in droppings, as each bird can produce up to 15kg (33lbs) a year. Studies show these droppings contain at least seven types of zoonotic pathogens that can cause diseases such as pneumonia, fungal infections and even lung damage in humans. Nirmal Kohli, a 75-year-old Delhi resident, started complaining of a persistent cough and breathing trouble a few years ago. "Eventually, a CT scan showed that part of her lung had shrunk," says her son Amit Kohli. "The doctors said it was due to exposure to pigeon droppings." Last year, an 11-year-old boy died in Delhi due to hypersensitivity pneumonitis - a disease that causes inflammation in the lungs. Doctors said the reason was prolonged exposure to pigeon droppings and feathers. 
RS Pal, a pulmonologist, told the BBC that such cases were common. "Even if you don't directly feed pigeons, their droppings on window sills and balconies can cause hypersensitivity pneumonitis," he said. "We also see bacterial, viral and fungal infections in people handling pigeons regularly." These concerns are what led the Mumbai civic body to impose the feeding ban last month and launch a drive to demolish feeding centres. Demolitions are on hold, but the Bombay High Court has dismissed a plea against the feeding ban, citing public health as "paramount" and ordering strict action on illegal feeding. Delhi mayor Raja Iqbal Singh told the BBC that love for birds cannot come at the cost of people's well-being. "Feeding spots often turn dirty, leading to foul smells, infections and pests. We are working to minimise feeding," he said. But many animal lovers disagree. Mohammad Younus, who supplies grains to a feeding spot in Delhi, argues that all animals can spread diseases if hygiene is not maintained. "I have been surrounded by pigeons for the past 15 years. If something were to happen, it would have happened to me too," he said. In Mumbai, a Jain monk told BBC Marathi that thousands of pigeons would die of hunger due to the feeding ban. Megha Uniyal, an animal rights activist, pointed out that there was no clarity on how the ban on feeding pigeons would be implemented. "As far as regulating pigeon feeding is concerned, it is a word thrown around by the authorities, but no one really understands what this could entail," she said. Amid these competing contentions, efforts are on to find a middle ground. Ujjwal Agrain, of People for the Ethical Treatment of Animals (Peta) India, suggests allowing pigeon feeding only during set morning and evening hours. "That gives enough time for civic bodies to clean the place and maintain hygiene. This way, we respect both public health and emotional bonds," he said. The Bombay High Court has set up an expert panel to suggest alternatives, and Mumbai civic officials say controlled, staggered feeding may be allowed based on its advice. For Syed Ismat, the solution lies in rethinking the relationship between birds and urban spaces. "Maybe it's time to reimagine how we share our cities, not just with pigeons but with all forms of life," he said. Source of the article

Science Fiction? Think Again. Scientists Are Learning How to Decode Inner Thoughts

A brain-computer interface has gotten better than ever before at translating thoughts from people with speech difficulties. Researchers are also thinking through how to protect users’ privacy In recent years, scientists have been working on technology that could help people—such as those paralyzed by strokes or with neurological conditions like ALS—to communicate. In particular, devices called brain-computer interfaces, or BCIs, pick up electrical signals in the brain as people try to form words, then translate these signals out loud. But in a new study, published Thursday in the journal Cell, researchers report they have decoded four participants’ inner voices for the first time, with up to 74 percent accuracy. “It’s a fantastic advance,” Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the new study, tells the New York Times’ Carl Zimmer, adding that the technology not only helps people speak who otherwise can’t but also improves scientists’ understanding of how language works. In previous uses of BCIs, researchers asked patients who couldn’t speak to try to physically form words. This created signals of so-called attempted speech in the brain, which were picked up by implanted electrodes and decoded with a computer algorithm. This process, however, could be tiring and uncomfortable for users, as Erin Kunz, an electrical engineer at Stanford University, tells Science’s Annika Inampudi. So, Kunz and her team decided to try to investigate whether implanted electrodes could pick up a patient’s inner voice. If it could, operating the device might become easier for users. But this research was also meant to investigate questions around privacy—could these devices also pick up unintentional speech? After all, our inner voices might say things we don’t want others to hear. The team says that so far, the technology isn’t accurate enough to translate thoughts against a participant’s will. “We’re focused on helping people … who have issues with speaking,” Kunz tells Science. To protect users’ privacy, they chose a passphrase to activate the device that was unlikely to come up in everyday speech: “Chitty Chitty Bang Bang,” the title of the 1964 Ian Fleming novel and 1968 movie. The technology would start translating thoughts when it detected the phrase, which, for one participant, it did with 98.75 percent accuracy. In the tests, the researchers asked the participants—all four of whom have some trouble speaking—to either attempt saying a set of seven words or to merely think them. They found the patterns of neural activity and regions of the brain used in both scenarios were similar, but the inner thoughts produced weaker signals. Then, the team trained the computer system on the signals produced when participants thought words from a 125,000-word vocabulary. When the users then thought sentences with these words, the device translated the resulting brain activity. The technology produced words with an error rate of 26 to 54 percent, making it the most accurate attempt to decode inner speech to date, Science reports. In another experiment, participants were asked to tally circles on a screen as the device tried to translate their thoughts. This was meant to test whether the computer would pick up internal thoughts that the participants weren’t told to say, and in some cases, the system picked up a number. 
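To make the passphrase "gate" concrete, here is a minimal sketch in Python. It is purely illustrative and rests on assumptions: the real system runs a trained neural decoder on implanted-electrode recordings, whereas here the decoder is faked as a ready-made stream of (word, confidence) pairs, and the 0.8 confidence threshold is an invented figure rather than one reported by the researchers.

```python
# Hypothetical sketch: decoded inner speech is discarded until the activation
# passphrase is recognised; only after that is output passed through.
PASSPHRASE = ("chitty", "chitty", "bang", "bang")

def gated_transcription(decoded_stream, threshold=0.8):
    """decoded_stream yields (word, confidence) guesses from a decoder (faked here)."""
    recent, active, transcript = [], False, []
    for word, confidence in decoded_stream:
        if confidence < threshold:
            continue                            # drop low-confidence guesses
        if not active:
            recent = (recent + [word])[-len(PASSPHRASE):]
            if tuple(recent) == PASSPHRASE:
                active = True                   # passphrase detected: start translating
        else:
            transcript.append(word)             # only now is inner speech output
    return transcript

# Fake decoder output: a private thought, then the passphrase, then a request.
stream = [("seven", 0.9), ("chitty", 0.95), ("chitty", 0.93), ("bang", 0.9),
          ("bang", 0.97), ("i", 0.88), ("am", 0.9), ("thirsty", 0.92)]
print(gated_transcription(stream))              # -> ['i', 'am', 'thirsty']
```

The point of such a gate is that the stray "seven", the kind of unintended count the circle-tallying test probed for, never reaches the output, because translation only begins once the deliberate passphrase has been thought.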
Beyond helping people speak who otherwise couldn’t, the findings show that, at least for some people, language plays a role in the process of thought, Herff tells the New York Times. It also reveals neural differences between attempted and internal speech, Silvia Marchesotti, a neuroengineer at the University of Geneva who wasn’t involved in the research, tells Nature’s Gemma Conroy. But the researchers say they’re not done yet—and they’re working toward even better outcomes. “The results are an initial proof of concept more than anything,” Kunz tells the New York Times. “We haven’t hit the ceiling yet.” Source of the article

A plague mysteriously spread from Europe into Asia 4,000 years ago. Scientists now think they may know how

For thousands of years, a disease repeatedly struck ancient Eurasia, quickly spreading far and wide. In its most infamous form — the Black Death of the 14th century — the plague was passed to humans by the bites of infected fleas living on rats, and flea bites remain its most common form of transmission today. During the Bronze Age, however, the plague bacterium, Yersinia pestis, had not yet developed the genetic tool kit that would allow later strains to be spread by fleas. Scientists have been baffled as to how the illness could have persisted at that time. Now, an international team of researchers has recovered the first ancient Yersinia pestis genome from a nonhuman host — a Bronze Age domesticated sheep that lived around 4,000 years ago in what is now modern-day Russia. The discovery has allowed the scientists to better understand the transmission and ecology of the disease in the ancient past, leading them to believe that livestock played a role in its spread throughout Eurasia. The findings were published Monday in the journal Cell. “Yersinia pestis is a zoonotic disease (transmitted between humans and animals) that emerged during prehistory, but so far the way that we have studied it using ancient DNA has been completely from human remains, which left us with a lot of questions and few answers about how humans were getting infected,” said lead author Ian Light-Maka, a doctoral researcher at the Max Planck Institute for Infection Biology in Berlin. There have been nearly 200 Y. pestis genomes recovered from ancient humans, the researchers wrote. Finding the ancient bacterium in an animal not only helps researchers understand how the bacterial lineage evolved, but it could also have implications for understanding modern diseases, Light-Maka added via email. “Evolution can sometimes be ‘lazy,’ finding the same type of solution independently for a similar problem — the genetic tools that worked for pestis to thrive for over 2000 years across Eurasia might be reused again.” The ancient bacterium that caused the Eurasian plague, known today as the Late Neolithic Bronze Age lineage, spread from Europe all the way to Mongolia, with evidence of the disease found across 6,000 kilometers (3,700 miles). Recent evidence suggests that the majority of modern human diseases emerged within the last 10,000 years and coincided with the domestication of animals such as livestock and pets, according to a release from the German research institute. Scientists suspected that animals other than rodents were part of the enormous puzzle of Bronze Age plague transmission, but without any bacterial genomes recovered from animal hosts, it was not clear which ones. To find the ancient plague genome, the study authors investigated Bronze Age animal remains from an archaeological site in Russia known as Arkaim. The settlement was once associated with a culture called Sintashta-Petrovka, known for its innovations in livestock. There, the researchers discovered the missing connection — the tooth of a 4,000-year-old sheep that was infected with the same plague bacteria found in humans from that area. Finding infected livestock suggests that the domesticated sheep served as a bridge between the humans and infected wild animals, said Dr. Taylor Hermes, a study coauthor and an assistant professor of anthropology at the University of Arkansas. 
“We’re sort of unveiling this in real time and trying to get a sense for how Bronze Age nomadic herders out in the Eurasian Steppe were setting the stage for disease transmission that potentially led to impacts elsewhere,” Hermes said, “not only in later in time, but also in a much more distant, distant landscape.”
During this time on the Eurasian Steppe, as many as 20% of the bodies in some cemeteries are those of people who were infected with, and most likely died from, the plague, making it an extremely pervasive disease, Hermes said. While livestock are seemingly part of what made the disease so widespread, they are only one piece of the puzzle. The identification of the bacterial lineage in an animal opens new avenues for researching this disease’s evolution, as well as the later lineage that caused the Black Death in Europe and the plague that’s still around today, he added.
“It’s not surprising, but it is VERY cool to see (the DNA) isolated from an ancient animal. It’s extremely difficult to find it in humans and even more so in animal remains, so this is really interesting and significant,” Hendrik Poinar, evolutionary geneticist and director of the Ancient DNA Centre at McMaster University in Hamilton, Ontario, wrote in an email. Poinar was not involved with the study.
It is likely that humans and animals were passing the strains back and forth, but it isn’t clear how they did so — or how sheep were infected in the first place. It is possible sheep picked up the bacteria through a food or water source and then transmitted the disease to humans via the animal’s contaminated meat, he added. “I think it shows how extremely successful (if you want to label it that way) this particular pathogen has been,” Poinar added. He, as well as the study’s authors, said they hope that further research uncovers other animals infected with the ancient strain to further the understanding of the disease’s spread and evolution.

Ancient plague to modern plague

While the plague lineage that persisted during the Bronze Age is extinct, Yersinia pestis is still around in parts of Africa and Asia, as well as the western United States, Brazil and Peru. But it’s rare to encounter the bacterium, with only 1,000 to 2,000 cases of plague annually worldwide. There is no need for alarm when it comes to dealing with livestock and pets, Hermes said. Still, the findings are a reminder that animals carry diseases that are transmittable to humans: be cautious when cooking meat, and take care if bitten by an animal, he added.
“The takeaway is that humans aren’t alone in disease, and this has been true for thousands of years. The ways we are drastically changing our environment and how wild and domesticated animals are connected to us have the potential to change how disease can come into our communities,” Light-Maka said. “And if you see a dead prairie dog, maybe don’t go and touch it.” Source of the article

GPT-5: has AI just plateaued?

OpenAI claims that its new flagship model, GPT-5, marks “a significant step along the path to AGI” – that is, the artificial general intelligence that AI bosses and self-proclaimed experts often claim is around the corner. According to OpenAI’s own definition, AGI would be “a highly autonomous system that outperforms humans at most economically valuable work”. Setting aside whether this is something humanity should be striving for, OpenAI CEO Sam Altman’s arguments for GPT-5 being a “significant step” in this direction sound remarkably unspectacular. He claims GPT-5 is better at writing computer code than its predecessors. It is said to “hallucinate” a bit less, and to be a bit better at following instructions – especially when they require following multiple steps and using other software. The model is also apparently safer and less “sycophantic”, because it will not deceive the user or provide potentially harmful information just to please them.
Altman does say that “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert”. Yet it still doesn’t have a clue about whether anything it says is accurate, as its flawed attempt to draw a map of North America showed. It also cannot learn from its own experience, or achieve more than 42% accuracy on a challenging benchmark like “Humanity’s Last Exam”, which contains hard questions on all kinds of scientific (and other) subject matter. This is slightly below the 44% that Grok 4, the model recently released by Elon Musk’s xAI, is said to have achieved.
The main technical innovation behind GPT-5 seems to be the introduction of a “router”. When a question comes in, the router decides which model of GPT to delegate it to, essentially asking itself how much effort to invest in computing the answer (and then improving over time by learning from feedback about its previous choices). The options for delegation include the previous leading models of GPT as well as a new “deeper reasoning” model called GPT-5 Thinking. It’s not clear what this new model actually is. OpenAI isn’t saying it is underpinned by any new algorithms or trained on any new data (since pretty much all available data was already being used). One might therefore speculate that this model is really just another way of controlling existing models with repeated queries, pushing them to work harder until they produce better results.

What LLMs are

Back in 2017, researchers at Google found that a new type of AI architecture – the transformer – was capable of capturing tremendously complex patterns within long sequences of words that underpin the structure of human language. When these so-called large language models (LLMs) were trained on large amounts of text, they could respond to a user’s prompt by mapping a sequence of words to its most likely continuation, in accordance with the patterns present in the dataset. This approach to mimicking human intelligence became better and better as LLMs were trained on larger and larger amounts of data – leading to systems like ChatGPT. Ultimately, these models just encode a humongous table of stimuli and responses. A user prompt is the stimulus, and the model might just as well look it up in a table to determine the best response. Considering how simple this idea seems, it’s astounding that LLMs have eclipsed the capabilities of many other AI systems – if not in terms of accuracy and reliability, certainly in terms of flexibility and usability.
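To make the routing idea described earlier in this piece more concrete, here is a minimal sketch in Python. It is not based on anything OpenAI has disclosed about GPT-5’s internals: the model names, the keyword-based difficulty heuristic and the call_model() placeholder are assumptions made purely for illustration.

```python
# A toy "router": estimate how hard a prompt is, then delegate it either to a
# cheap, fast model or to a slower "deeper reasoning" model. All names and
# thresholds here are illustrative assumptions, not OpenAI's implementation.

def estimate_effort(prompt: str) -> float:
    """Crude, hypothetical proxy for how much 'thinking' a prompt needs."""
    multi_step_markers = ["step by step", "prove", "debug", "plan", "compare"]
    score = 0.1 + 0.2 * sum(marker in prompt.lower() for marker in multi_step_markers)
    score += min(len(prompt) / 2000, 0.3)  # longer prompts tend to need more work
    return min(score, 1.0)

def call_model(model_name: str, prompt: str) -> str:
    """Placeholder for an API call to some underlying model (assumption)."""
    return f"[{model_name}] answer to: {prompt[:40]}..."

def route(prompt: str, threshold: float = 0.5) -> str:
    """Send easy prompts to a fast model and hard ones to a deeper-reasoning model."""
    model = "deep-reasoning-model" if estimate_effort(prompt) >= threshold else "fast-model"
    return call_model(model, prompt)

print(route("What is the capital of France?"))
print(route("Prove step by step that the square root of 2 is irrational."))
```

In the real system the router reportedly also improves over time by learning from feedback on its previous choices, so a learned difficulty scorer would stand in for the keyword heuristic used in this sketch.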
The jury may still be out on whether these systems could ever be capable of true reasoning, of understanding the world in ways similar to ours, or of keeping track of their experiences to refine their behaviour correctly – all arguably necessary ingredients of AGI. In the meantime, an industry of AI software companies has sprung up that focuses on “taming” general-purpose LLMs to make them more reliable and predictable for specific use cases. Having studied how to write the most effective prompts, these companies build software that might prompt a model multiple times, or use numerous LLMs, adjusting the instructions until it gets the desired result. In some cases, they might “fine-tune” an LLM with small-scale add-ons to make it more effective.
OpenAI’s new router is in the same vein, except it’s built into GPT-5. If this move succeeds, the engineers of companies further down the AI supply chain will be needed less and less. GPT-5 would also be cheaper for users than its LLM competitors, because it would be more useful without these embellishments. At the same time, this may well be an admission that we have reached a point where LLMs cannot be improved much further to deliver on the promise of AGI. If so, it will vindicate those scientists and industry experts who have been arguing for a while that it won’t be possible to overcome the current limitations in AI without moving beyond LLM architectures.

Old wine into new models?

OpenAI’s new emphasis on routing also harks back to the “meta-reasoning” that gained prominence in AI in the 1990s, based on the idea of “reasoning about reasoning”. Imagine, for example, that you were trying to calculate an optimal travel route on a complex map. Heading off in the right direction is easy, but every time you consider another 100 alternatives for the remainder of the route, you will likely only get an improvement of 5% on your previous best option. At every point of the journey, the question is how much more thinking is worth doing. This kind of reasoning is important for dealing with complex tasks by breaking them down into smaller problems that can be solved with more specialised components. It was the predominant paradigm in AI until the focus shifted to general-purpose LLMs.
It is possible that the release of GPT-5 marks a shift in the evolution of AI which, even if it is not a return to this approach, might usher in the end of creating ever more complicated models whose thought processes are impossible for anyone to understand. Whether that could put us on a path toward AGI is hard to say. But it might create an opportunity to move towards creating AIs we can control using rigorous engineering methods. And it might help us remember that the original vision of AI was not only to replicate human intelligence, but also to better understand it. Source of the article
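As a footnote to the GPT-5 piece above, the travel-route example can be turned into a toy “reasoning about reasoning” calculation: keep searching only while the expected improvement outweighs the cost of extra computation. The 5% per-round gain follows the article’s example; the cost and value figures are arbitrary assumptions, and this is an illustrative sketch rather than how any production planner works.

```python
# Toy stopping rule for "reasoning about reasoning": keep refining a route
# only while the expected gain from more search outweighs its cost.
# The 5% per-round improvement follows the article's example; the cost and
# value figures below are arbitrary assumptions for illustration.

def refine_route(route_cost: float,
                 improvement_per_round: float = 0.05,   # ~5% gain per extra round of search
                 compute_cost_per_round: float = 2.0,   # price of weighing 100 more alternatives
                 value_per_unit_saved: float = 1.0) -> float:
    rounds = 0
    while True:
        expected_saving = route_cost * improvement_per_round
        # Meta-reasoning step: is another round of thinking worth it?
        if expected_saving * value_per_unit_saved <= compute_cost_per_round:
            break
        route_cost -= expected_saving
        rounds += 1
    print(f"Stopped after {rounds} extra rounds; best route cost: {route_cost:.1f}")
    return route_cost

refine_route(route_cost=100.0)
```

With these assumed numbers the search stops after 18 rounds, once a further 5% improvement is no longer worth the price of considering another batch of alternatives.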