

Witty wotty dashes

Doodles are the emanations of our pixillated minds, freewheeling into dissociation, graphology, and radical openness

In 1936, Gary Cooper starred in the Oscar-winning film Mr Deeds Goes to Town and changed the meaning of marginal squiggles forever. Mr Deeds, a sweet man from small-town Vermont, survives the Great Depression juggling a string of quirky jobs – he’s a part-time greetings card poet, a tuba player, and an investor in the local animal fat factory – but then he inherits a cool $20 million from an estranged uncle. The film follows the travails of this loveable everyman as he attempts to give away his newfound wealth to the poor. Mr Deeds’s radical acts of altruism (for example, offering 2,000 10-acre farms to struggling Americans) quickly excite the ire of New York’s elite, not least the pernicious attorney John Cedar, who plots to have Mr Deeds declared ‘insane’ by a New York judge.

In the film’s courtroom finale, various witnesses from his hometown attack Mr Deeds’s personality, claiming he has long been known as ‘pixillated’ – one of them clarifying: ‘Pixillated is an early American expression deriving from the word “pixies”, meaning elves. They would say, “The pixies have got him,” as we nowadays would say a man is “balmy”.’ Next, the snooty Dr Emile Von Haller (a parody of Central European intellectuals like Sigmund Freud) appears with a large graph depicting the mood swings of a manic depressive – pixillated affective errancies that map exactly onto the everyday eccentricities of Mr Deeds as described by other witnesses.

Called upon to defend himself against such slander, Mr Deeds demonstrates his grip on rationality by celebrating his love of irrational things – from ‘walking in the rain without a hat’ to ‘playing the tuba’ during the Great Depression – before introducing the courtroom to a newfangled form: ‘the doodle’. ‘[E]verybody does something silly when they’re thinking,’ says Mr Deeds as the courtroom erupts with laughter. ‘For instance, the Judge here is an “O-filler”.’ ‘A what?’ says the Judge. ‘An O-filler,’ says Mr Deeds. ‘You fill in all the spaces in the O’s, with your pencil … That may make you look a little crazy, Your Honour, … but I don’t see anything wrong ’cause that helps you to think. Other people,’ Mr Deeds says, ‘are doodlers.’ ‘Doodlers?’ the judge exclaims. ‘That’s a name we made up back home for people who make foolish designs on paper when they’re thinking. It’s called doodling. Almost everybody’s a doodler … People draw the most idiotic pictures when they’re thinking. Dr Von Haller, here, could probably think of a long name for it because he doodles all the time.’

Reflecting the elitist psychoanalytical gaze back at Dr Von Haller, Mr Deeds finds something between ‘a chimpanzee’ and ‘Mr Cedar’ in Von Haller’s idiotic doodles, effortlessly exposing the hidden errancies that inform even the most rational analytical formulas. The doodle is subversive: democratically able to contain all gestures regardless of their formal difference. In light of this remarkable egalitarian power, the judge irreverently declares Mr Deeds ‘the sanest man that ever walked into this courtroom.’ Far from symbolising a frivolous eruption of nonsignifying noise, the doodle emerges in interwar modernist culture as a distinctly informal form oriented towards containing the value of apparently illogical things – like giving away your personal wealth for public good.
Or upholding democratic processes that defend the rights of everyone (even the pixillated, meaning the ‘pixies’ rather than the ‘pixels’) in an age defined by the cold logic of mechanical reproduction, anti-humanist Taylorite efficiency programmes, and the global ascension of fascism. As Mr Deeds says: ‘everybody’s a doodler’. Everybody matters. To doodle is – if anything – a doddle. Equally the domain of avant-garde artists and incarcerated monkeys, presidents and poets, toddlers and self-help gurus, doodling is a radically non-hierarchical and non-classical activity that relays modernism’s epochal desire to reinvent traditional systems of value and to encourage the acceptance of new modes of being. Doodles explode across modernist culture, high and low. From the constellatory squiggles and biomorphic shapes parading across Jean Dubuffet’s doodle cycle L’Hourloupe (1962-85) to Norman McLaren’s abstract expressionist cartoon Boogie-Doodle (1940), through to the ‘pixillated’ and ‘sagacious’ doodlers of, respectively, James Joyce’s novel Finnegans Wake (1939) and Samuel Beckett’s novel Watt (1953), the modernist exhibits a veritable will-to-doodle. But why does the doodle come to matter at this 20th-century modernist moment? I would argue that the doodle is to modernism something like what the beautiful once was to the Renaissance: an aesthetic form that indicates a wider system of value intrinsic to a period of history. The beautiful embodies Renaissance ideals of harmony, rationality and humanism, just as the doodle alerts us to modernism’s fascination with difference and repetition, complexity, errancy and the ordered disorders that hide in irrational processes of all sorts. As a distinct aesthetic form, the doodle is often aligned with Paul Klee’s understanding that ‘A line is a dot that went for a walk’ – or as his fellow artist Saul Steinberg fondly misquotes it: ‘A line is a thought that went for a walk.’ Steinberg’s slippage points to modernist lines not being formally representative of things but things in and of themselves. This is form animated by the eventful idiosyncrasies of what the poet Henri Michaux calls ‘an entanglement, a drawing as it were desiring to withdraw into itself.’ Doodles are noisy and unfinished process-oriented forms – emerging always like a multitude of ‘starts that come out’, as the poet and musician Clark Coolidge says: they are minor gestural forces that open experience to novel possibilities of becoming. Doodles speak to the modernist idea of an endlessly wandering world, open, as the philosopher Jean-Luc Nancy says, to ‘the indeterminate possibility of the possible’. Because of its radical openness to difference, the doodle tends to function as a kind of meta-aesthetic attuned to containing a network of ambivalent affects and fleeting everyday aesthetic experiences that become increasingly common in the 20th century. Just as our consumerist lifeworld is a patchwork of ‘cool’ Nike ads, ‘cute’ outfits, ‘interesting’ data points, ‘dank’ memes and ‘whimsical’ shopping-mall muzak, the doodle presents scrawled assemblages that flitter with cute blobs, cool waveforms, interesting jottings and fey twinkling stars. The aesthetic experience of the doodle is never fixed, never singular.
Far from encouraging distinct sovereign aesthetic experiences like the beautiful, the modernist doodle presents a decentred experience that is, in part, about the meta-aesthetic expansion and reframing of classical aesthetic experience into something non-sovereign, multitudinous and relational. While drawing together errant, marginal and unworthy gestures long excluded from the royal aesthetic experiences favoured by the Renaissance, the doodle is about commingling weak and everyday forms in a democratic soup of bounteous difference and animate commixture. Doodles have a long history, of course. There’s the ‘monkish pornographic doodle’ that the poet Lisa Robertson marvels at, drawn around a flaw in the vellum of a translation of the Codex Oblongus from De rerum natura held in the British Library. Or the variety of curious knots and flourishes that adorn the marginalia of Edward Cocker’s early modern ‘writing-books’ like Arts Glory (1669). Such pre-modern forms are largely treated as formless nothings before the modernist moment awakes to the informal value of doodled forms, whether avant-garde or popular. Indeed, the word ‘doodle’ enters popular parlance in the early 20th century – appearing alongside swaths of similar ‘doo-’ terms for worthless objects, errant movements and unbalanced states of mind, including ‘doodad’, ‘doodah’, ‘doolally’, ‘doohickey’, ‘doojigger’, ‘doo-doo’, ‘dooky’ and ‘doofus’ – as a variant both of old German words like Dödel (‘fool’) or Dudeltopf (‘simpleton’) and the loaded revolutionary war-era phrase ‘Yankee Doodle’ – before bursting forth across US pop culture with the success of Mr Deeds Goes to Town. By the late 1930s, most major newspapers in the US take to clarifying the meaning of ‘doodles’, often alongside definitions of the promiscuous neurotic category of the ‘pixillated zany’, underlining the doodle’s comedy of psychoanalytical pretension. The Los Angeles Times in 1936 insists that ‘Persons who create geometric masterpieces during a telephone conversation, are a little pixilated [sic].’ In 1937, a columnist at The Washington Post admits (ironically) to fearing being caught doodling ‘dogwood blossoms’ while chatting on the telephone and being goaded with ‘that fatal form of neurosis known as “pixillated”.’ As one popular columnist surmises: ‘a hundred million guinea pigs are now “doodles”-conscious’ – both in thrall to doodling and in fear of the wisecracks it’ll inspire. Also in 1937, Life magazine publishes funny exposés of doodling politicians and doodling celebrities alongside a feature on the New York subway’s pixillated ‘photo-doodlers’. What matters here is the doodle’s containment of a ‘pixillated’ comedy of neurosis, skewering the period’s po-faced ‘science’ of dreams while celebrating errant expressions of all kinds. These burgeoning pop cultures celebrating the American public’s democratic respect for minor differences coincided with, and were fed by, a growing interest in psychoanalysis and the unconscious colliding with older (largely Western European) avant-garde, occult and pseudoscientific interests in planchette writing, graphology, automatism and free-association parlour games. In Life magazine’s 1930s showcasing of subway photo-doodles and doodling Democrats, for example, explicit parallels can be drawn with Marcel Duchamp’s Dada portrait of the Mona Lisa in L.H.O.O.Q. (1919) and Louis Aragon’s recycling of the discarded doodles of French ministers in the magazine La Révolution surréaliste in 1926. 
In turn, Russell Arundel’s cartoonish doodle catalogue, Everybody’s Pixillated (1937) – ‘a pixillated book for pixillated people’, published hastily in the wake of Mr Deeds Goes to Town – owes as much of a debt to late-19th-century graphology textbooks that attempt to taxonomise character through handwriting as it does to the highbrow strictures of Freudian psychoanalysis. Graphological understandings of the modernist doodle catalogue are more evident still in Your Doodles and What They Mean to You (1957) by Helen King:

The signature shows the personality – that side which we appear to be to the public. The penmanship shows the character – that which we really are. And the doodles tell of the unconscious thoughts, hopes, desires. [my italics]

King’s pop-ish sense of doodles as the cartoon sigla of modernism’s unconscious folds back readily into the avant-garde surrealist’s interest in parlour games like ‘exquisite corpse’, which topologically enfolds an assemblage of individual doodles into a grotesque vision of Jung’s collective unconscious. To be sure, as they erupt across modernist pop culture, doodles often parody these older and more serious concerns of graphologists, occult automatists, avant-garde surrealists and early psychologists. As if referring back to Mr Deeds’s comic analysis of Dr Von Haller’s doodles, the doodle comes to the fore as a tongue-in-cheek expression of a re-materialised unconscious that is more inclined to poke fun at the elitism and highfalutin snobbery saturating interwar modernist psychoanalytical practices than to posit any serious means of understanding dreams. This said, in preferring to simply celebrate the silly things people do to help them think (to paraphrase Mr Deeds), the doodle skirts close to an irreverent form of psychoanalysis for the people. Like the modernist doodle, the graphologist’s doodle upholds the value of democracy through its informal containment of difference.

A surprising number of interwar modernist novels contain characters who erupt with dawdling forms that are in close dialogue with late-Victorian practices of automatism and graphology. In Virginia Woolf’s novel Night and Day (1919), for instance, Ralph Denham gazes absently into a page riddled with ‘half-obliterated scratches’ and a ‘circumference of smudges surrounding a central blot’, before beginning to doodle ‘blots fringed with flames’. Instantly, Ralph finds his lawyerly way of inhabiting the world softening, opening onto a network of what Helen King called ‘unconscious thoughts, hopes, desires’. Drawing on the occult practice of planchette writing, Woolf has Ralph find the ‘objects of [his] life, softening their sharp outline’ as the doodle mediates first a kitsch image of cosmic totality, and then his genuine human connection with the woman he loves, the upper-class Katharine Hilbery. For Katharine immediately recognises something familiar in ‘the idiotic symbol of his most confused and emotional moments’. ‘Yes,’ she says in rational agreement with Ralph’s irrational doodle, ‘the world looks something like that to me too.’

The modernist’s will-to-doodle folds out still more explicitly and wonderfully in Joyce’s ‘verbivocovisual’ Finnegans Wake. Apropos of the media frenzy surrounding Mr Deeds Goes to Town, Joyce adds in numerous references to the ‘doodling dawdling’ antics of his dream novel’s cast.
‘He, the pixillated doodler,’ writes Joyce, ‘is on his last with illegible clergimanths boasting always of his ruddy complexious!’ The reference is to Shem the Penman’s terrible handwriting in his transcription of a letter by his mother, Anna Livia Plurabelle (ie, ALP). The letter itself is a muddied and crumpled communicative mess that echoes Finnegans Wake’s own erratic and polyphonic form. ALP’s doodle-laden pixillated letter is a non-sovereign projective field of subjects and objects all mixing and mingling in ‘strangewrote anaglyptics’, wherein the voice of Anna Livia blurs not only with Shem’s ‘kakography’ but also with ‘inbursts’ from Maggy (ie, Isobel/Issy/Girl Cloud, and ALP’s only daughter with HCE, ie, Here Comes Everybody/Humphrey Chimpden Earwicker), and an array of more-than-human gestures, from tea stains and orange-peel smudges to chicken scratches, electromagnetic wavelengths, muddy splashes and more. Regurgitating what Stephen Dedalus in Joyce’s earlier novel Ulysses (1922) calls the ‘signatures of all things’, ALP’s doodle-laden letter is both an indelibly human and more-than-human form that foregrounds the marginalised signatory gestures of subjects and objects alike: rivers; men; orange-peels; Morse code machines.

For all of Joyce’s interest in Mr Deeds and the rise of modernism’s ‘pixillated doodlers’, the meaning of the Wake’s doodles again blurs with an older, late-Victorian interest in graphology. As Walter Benjamin writes in 1928, ‘graphology is concerned with the bodily aspect of the language of handwriting and with the expressive aspect of the body of handwriting.’ With Shem’s/ALP’s pixillated letter, Joyce explodes this ‘body of handwriting’ into an intermedial and non-anthropocentric ecology that brings together the animate gestures of chickens, medieval high priests and supersonic televisions. Rejoicing in the many-in-oneness of all expression – or ‘the identities in the writer complexus’ – Joyce dwells on the poetry of a letter’s variegated surfaces: its ‘stabs and foliated gashes’; its ‘curt witty wotty dashes’ and ‘disdotted aiches’; its ‘superciliouslooking crisscrossed Greek ees’ and ‘pees with their caps awry’; its ‘fourlegged ems’ and ‘fretful fidget eff[s]’; its ‘riot of blots and blurs and bars and balls and hoops and wriggles and juxtaposed jottings linked by spurts of speed’. From Joyce we learn that the social affordance of the doodle contains both ecological and democratic differences. A healthy public depends on a healthy planet, and both begin by recognising that everyone – and indeed everything – is a doodler.

In the postwar period, and eventually in pop culture, the modernist doodle takes on an increasingly cartoonish and commodified form. The more doodles circulate, the more they become pastiches of themselves (meaningless sigla imbued with oversized meanings, parodies of an all-too-formulaic commitment to formlessness). In Robert Arthur Jr’s satirical story ‘Mr Milton’s Gift’ (1958), first published in Fantasy and Science Fiction, we find the hapless Horace Milton ‘sitting back, daydreaming and doodling … doodling something while daydreaming about being rich’. Following a mysterious ‘charm’, Horace Milton’s lazy doodles come to constitute a bizarre late-capitalist labour-saving device, as he realises ‘he hadn’t just been doodling. Unknown to him, his hand was drawing a perfect hundred dollar bill.’ The modernist doodle is here remade in the parodic vision of a post-modernist America, defined not by the democratic containment of difference but by the hypercommodification of everything. Not least, the collective desire to ‘daydream about being rich’. The play of difference and irreverence celebrated by Mr Deeds – and his ‘insane desire to become a public benefactor’ – is absorbed into a homogenising pursuit of capital. Throughout the post-1945 period, the doodle has been further sterilised, homogenised and hypercommodified into an increasingly ‘game-changing’ neoliberal form of ‘creative power’. Think of ‘Google Doodles’ (1998-), or business management gurus styling themselves as ‘info-doodlers’, for example. The modernist doodle’s non-hierarchical formalism has been flattened and its sociopolitical errancy standardised in overwrought simulations of spontaneity. Into the early 21st century, the postmodernist doodle tends to help brands and corporations disguise systemic agendas of extraction and exploitation in colourful and fun-loving – squiggling and non-sovereign(!) – surfaces full, as Google says, ‘of spontaneous and delightful changes’.

Yet I cannot help but wonder: what if the doodle rekindled its modernist errancy and rediscovered its democratic roots? The doodle’s history points to an indelibly American form of ‘being in the impasse together,’ as Lauren Berlant says in Cruel Optimism (2011), meaning that it helps us to imagine a non-classical kind of collectivity that foregrounds a non-hierarchical communion of marginal differences. As Gilles Deleuze once said of that exemplary modernist Dödel, Charlie Chaplin, the struggle is to find a form that ‘make[s] the slight difference between men the variable of a great situation of community and communality (Democracy).’ The original social agency of the doodle is clearly formed by a promiscuous ability to bring people together by way of popularising their errant and wasteful gestures – gestures once confined to the peripheries of occult automatisms, pseudoscientific graphology manuals and obscure surrealist practices. The question now is: how might the doodle recover this capacity to put the play of marginal differences to work as socially generative form? In a time when democracy seems once more under threat, how might the doodle rekindle its long-lost power to inspire pixillated dreams of togetherness?

GOATReads: Literature

Me, myself and I

Loneliness can be a shameful hunger, a shell, a dangerous landscape of shadowy figures. But it is also a gift

The bluest period I ever spent was in Manhattan’s East Village, not so long back. I lived on East 2nd Street, in an unreconstructed tenement building, and each morning I walked across Tompkins Square Park to get my coffee. When I arrived the trees were bare, and I dedicated those walks to checking the progress of the blossoms. There are many community gardens in that part of town, and so I could examine irises and tulips, forsythia, cherry trees and a great weeping willow that seemed to drop its streamers overnight, like a ship about to lift anchor and sail away.

I wasn’t supposed to be in New York, or not like this, anyway. I’d met someone in America and then lost them almost instantly, but the future we’d dreamed up together retained its magnetism, and so I moved alone to the city I’d expected to become my home. I had friends there, but none of the ordinary duties and habits that comprise a life. I’d severed all those small, sustaining cords, and, as such, it wasn’t surprising that I experienced a loneliness more paralysing than anything I’d encountered in more than a decade of living alone.

What did it feel like? It felt like being hungry, I suppose, in a place where being hungry is shameful, and where one has no money and everyone else is full. It felt, at least sometimes, difficult and embarrassing and important to conceal. Being foreign didn’t help. I kept botching the ballgame of language: fumbling my catches, bungling my throws. Most days, I went for coffee in the same place, a glass-fronted café full of tiny tables, populated almost exclusively by people gazing into the glowing clamshells of their laptops. Each time, the same thing happened. I ordered the nearest thing to filter on the menu: a medium urn brew, which was written in large chalk letters on the board. Each time, without fail, the barista looked blankly up and asked me to repeat myself. I might have found it funny in England, or irritating, or I might not have noticed it at all, but that spring it worked under my skin, depositing little grains of anxiety and shame.

Something funny happens to people who are lonely. The lonelier they get, the less adept they become at navigating social currents. Loneliness grows around them, like mould or fur, a prophylactic that inhibits contact, no matter how badly contact is desired. Loneliness is accretive, extending and perpetuating itself. Once it becomes impacted, it isn’t easy to dislodge. When I think of its advance, an anchoress’s cell comes to mind, as does the exoskeleton of a gastropod.

This sounds like paranoia, but in fact loneliness’s odd mode of increase has been mapped by medical researchers. It seems that the initial sensation triggers what psychologists call hypervigilance for social threat. In this state, which is entered into unknowingly, one tends to experience the world in negative terms, and to both expect and remember negative encounters – instances of rudeness, rejection or abrasion, like my urn brew episodes in the café. This creates, of course, a vicious circle, in which the lonely person grows increasingly more isolated, suspicious and withdrawn. At the same time, the brain’s state of red alert brings about a series of physiological changes. Lonely people are restless sleepers.
Loneliness drives up blood pressure, accelerates ageing, and acts as a precursor to cognitive decline. According to a 2010 study I came across in the Annals of Behavioral Medicine entitled ‘Loneliness Matters: A Theoretical and Empirical Review of Consequences and Mechanisms’, loneliness predicts increased morbidity and mortality, which is an elegant way of saying that loneliness can prove fatal. I don’t think I experienced cognitive decline, but I quickly became intimate with hypervigilance. During the months I lived in Manhattan, it manifested as an almost painful alertness to the city, a form of over-arousal that oscillated between paranoia and desire. During the day, I rarely encountered anyone in my building, but at night I’d hear doors opening and closing, and people passing a few feet from my bed. The man next door was a DJ, and at odd hours the apartment would be flooded with his music. At two or three in the morning, the heat rose clanking through the pipes, and just before dawn I’d sometimes be woken by the siren of the ladder truck leaving the East 2nd Street fire station, which had lost six crew members on 9/11. On those broken nights, the city seemed a place of seepage, both ghosted and full of gaps. Lying awake in my platform bed, the bass from next door pummelling my chest, I’d think of how the neighbourhood used to be, the stories that I’d heard. In the 1980s, this section of the East Village – which is known as Alphabet City because of its four vertical avenues, A to D – was dominated by heroin. People sold it in stairways, or through holes in doors, and sometimes the queues would run right down the street. Many of the buildings were derelict then, and some were turned into impromptu shooting galleries, while others were occupied by the artists who were just beginning to colonise the area. The one I felt most affinity for was David Wojnarowicz, skinny and lantern-jawed in a leather jacket. He’d been a street kid and a hustler before he became an artist, and grew famous alongside Jean-Michel Basquiat and Keith Haring. He died in 1992, a couple of months short of his 38th birthday, of AIDS-related complications. Just before his death, he put together a book called Close to the Knives: A Memoir of Disintegration, a ranging, raging collection of essays about sex and cruising, loneliness, sickness and the wicked politicians who refused to take seriously the crisis of AIDS. I loved that book, especially the passages about the Hudson river piers. As shipping declined in the 1960s, the piers that ran along the Hudson, from Christopher Street to 14th Street, were abandoned and fell into disrepair. In the 1970s, New York was nearly bankrupt, and so these immense decaying buildings could neither be destroyed nor properly secured. Some were squatted by homeless people, who built camps inside the old goods sheds and baggage halls, and others were adopted by gay men as cruising grounds. In Close to the Knives, Wojnarowicz described prowling around the Beaux-Art departure halls at night or during storms. They were vast as football fields, their walls damaged by fire, their floors and ceilings full of holes. In the shadows, he’d see men embracing, and often he’d follow a single figure down passageways and up flights of stairs into rooms carpeted with grass or filled with boxes of abandoned papers, where you could catch the scent of salt rising from the river. 
‘So simple,’ he wrote, ‘the appearance of night in a room full of strangers, the maze of hallways wandered as in films, the fracturing of bodies from darkness into light, sounds of plane engines easing into the distance.’ Soon other artists began to occupy the piers. Paintings bloomed across the walls. Giant naked men with erect cocks. Keith Haring’s radiant babies. A labyrinth, picked out with white paint on the filthy floor. A leaping cat, a faun in sunglasses, Wojnarowicz’s gagging cows. Great murals in pinks and oranges of entwining torsos. Mike Bidlo’s intricate abstract expressionist drip paintings, which wouldn’t have looked out of place in the Museum of Modern Art. Up on the catwalk you could gaze across the river to the Jersey shore, and on hot days the naked men sunbathed on the wooden decks, while inside filmmakers recreated the fall of Pompeii. Those buildings are long gone now, torn down in the mid-eighties, just as AIDS was beginning to devastate the population who’d adopted them. Over time the waterfront was transformed into the Hudson River Park, a landscaped pleasure-ground of trees and rollerbladers and glossy parents with strollers and small dogs. But even a curfew didn’t suppress the erotic spirit of the place. On summer nights, Pier 45, the old sex pier, continues to turn into a catwalk-cum-dancefloor for the city’s gay and transgender homeless kids, though every year battles rage over policing and violence. I was glad fierce kids were still throwing shade beside the river, but whenever I walked through the park I mourned those ruined buildings. I suppose I liked to dream of the piers as they once were, their vast and damaged rooms, because they seemed to represent an ideal kind of city, one which permitted solitude in company, which offered the possibility of encounter, expression and the pleasure of being alone amongst one’s tribe (whatever tribe that happened to be). I thought of them often, those dreamlike, crumbling rooms, extending out across the water, where men now long since dead freed one another, as Wojnarowicz put it, ‘from the silences of the interior life’. Loneliness and art, loneliness and sex: these things are connected, and connected too with cities. One of the habits associated with chronic loneliness is hoarding, a condition that shares a boundary with art. I can think of at least three artists who medicated their sense of isolation by collecting objects off the streets, and whose art-making practices were loosely allied to trash-gathering and to the curation of the dirty, the salvaged and the discarded. I’m thinking of Joseph Cornell, that shy, unworldly man who pioneered the art of assemblage; of Henry Darger, the Chicago janitor and outsider artist; and of Andy Warhol, who, despite surrounding himself with glittering crowds, often commented on his abject sense of loneliness and alienation. Cornell made lovely worlds in boxes out of little things he toted home from thrift stores, while Warhol shopped obsessively for decades (this is the acquisitive Andy immortalised in the silver statue in Union Square, his Polaroid camera around his neck, a Bloomingdale’s Medium Brown Bag in his right hand). His largest and most extensive artwork was the Time Capsules, 612 sealed brown cardboard boxes filled over the last 13 years of his life with all the varied detritus that flooded into the Factory: postcards, letters, newspapers, magazines, photographs, invoices, slices of pizza, a piece of cake, even a mummified human foot. 
As for Darger, he spent almost all his free time roaming Chicago, gathering and sorting trash. He used some of it in his strange, disturbing paintings of little girls engaged in terrible battles, but most of it – pieces of string, in particular – existed as a kind of counter-exhibit of its own, though he never showed it to a living soul.

People who hoard tend to be socially withdrawn. Sometimes the hoarding causes isolation, and sometimes it is a palliative to loneliness, a way of comforting oneself. Not everyone is susceptible to the companionship of objects; to the desire to keep and sort them; to employ them as barricades or to play, as Warhol did, back and forth between expulsion and retention. In that funny, lonely spring, I developed a fondness for the yellow ordering slips from the New York Public Library, which I kept in my wallet. I liked biros and pencils of all kinds, and I grew enamoured of a model Sumo wrestler a friend at Columbia had given me; a spectacularly ugly object that was designed to be crushed in one’s fist to relieve stress, though the tears it quickly developed suggested it wasn’t quite fitted to the task.

Like Warhol and Darger, Wojnarowicz also had a proclivity for objects. His art was full of found things: pieces of driftwood painted like crocodiles; maps, clocks and bits of comic books. Among his entourage was the skeleton of a baby elephant, which moved with him from cluttered apartment to apartment. For a while, he’d lived in a building on my block and on the day he moved in had carried the skeleton down the street concealed beneath a sheet, so his new neighbours wouldn’t be alarmed. Later, when he was dying, he gave it and his battered, grubby leather jacket to two friends he’d been collaborating with. Is this the appeal of objects to the lonely: that we can trust them to outlive us?

In the mornings when I went out to the Hudson River, I’d sometimes call in afterwards to the West Village to eat breakfast with the father of a friend of mine. Alastair lived in a tiny, shipshape apartment not far from the Christopher Street subway in the West Village. He was a poet and, although he originally came from Scotland, he’d spent most of his life in South America, where he wrote dispatches for the New Yorker and translated Borges and Neruda into English. His room was full of books and pleasing bits and pieces: a fossilised leaf, a desk-mounted pencil sharpener, an extraordinary folding bike. Each time I came, I brought chrysanthemums the colour of pound coins, and in return he fed me muffins and tiny cups of coffee, and told me stories about the dead from yet another era of New York artists. He remembered Dylan Thomas hurtling through the bars of Greenwich Village, and Frank O’Hara, the New York School poet who’d died at 40 in a car accident on Fire Island. A sweet man, he said. He smoked as he talked, breaking off into great hacking bouts of coughing. Mostly he told me about Jorge Luis Borges, blind Borges, who was bilingual from childhood, and died in exile in Switzerland, and whom all the taxi drivers in Buenos Aires had adored. I left these conversations almost radiant. It was good to be greeted, to be embraced. I’ve missed you, Alastair once said, and my heart jumped at the pleasure of existing in someone else’s life. It might have been then that I realised I couldn’t teeter on like this, not quite committed to New York, not quite sure about going home.
I missed my friends and I missed especially the kind of solidity of relationship in which one can express more than the brightest of moods. I wanted my flat back too, the ornaments and objects I’d assembled over decades. I hadn’t bargained for how strange I’d find it, living in someone else’s house, or how attenuating it would prove to my sense of security or self. Soon after that, I got on a plane to England and set about recovering the old, familiar relationships I thought I’d left for good. It seems that this is what loneliness is designed to do: to provoke the restoration of social bonds. Like pain itself, it exists to alert the organism to a state of untenability, to prompt a change in circumstance. We are social animals, the theory goes, and so isolation is – or was, at some unspecified point in our evolutionary journey – unsafe for us. This theory neatly explains the physical consequences of loneliness, which ally to a heightened sense of threat, but I can’t help feeling it doesn’t capture the entirety of loneliness as a state.

A little while after I came home, I found a poem by Borges, written in English, the language his grandmother had taught him as a child. It reminded me of my time in New York, and of Wojnarowicz in particular. It’s a love poem, written by a man who’s stayed up all night wandering through a city. Indeed, since he compares the night explicitly to waves, ‘darkblue top-heavy waves … laden with/ things unlikely and desirable’, one might literally say that he’s been cruising. In the first part of the poem he describes an encounter with you, ‘so lazily and incessantly beautiful,’ and in the second he lists what he has to offer, a litany of surprising and ambiguous gifts that ends with three lines I’m certain Wojnarowicz would have understood:

I can give you my loneliness, my darkness, the hunger of my heart; I am trying to bribe you with uncertainty, with danger, with defeat.

It took me a long time to understand how loneliness might be a gift, but now I think I’ve got it. Borges’s poem voiced the flip side of that disturbing essay I’d read in the Annals of Behavioral Medicine on loneliness’s consequences and mechanisms. Loneliness might raise one’s blood pressure and fill one with paranoia, but it also offers compensations: a depth of vision, a hungry kind of acuity. When I think of it now, I think of it as a place not dissimilar to the old Hudson river piers: a landscape of danger and potential, inhabited by the shadowy presences of fellow travellers, where one sometimes rounds a corner to see lines of glowing colour drawn on dirty walls.

GOATReads: Politics

Kafala in the Time of the Flood

It was May Day 2016 and we were standing with African and Asian domestic workers on the streets of Beirut, following the cue of their voices. The first time I heard the slogan I faltered, it caught in my throat, I was unprepared for the rhyme. I shouldn’t have been; we already knew how many were dying. The sentence marched in my head for years.

Since 2010, feminist and antiracist organizations in Beirut have come together on the Sunday closest to May Day to protest Lebanon’s Kafala system, the exploitative system that governs temporary migrant labor in many parts of the region. Bringing together a diverse coalition of migrants, activists, NGOs, workers, and allies, the march fills the streets of Lebanon’s capital with voices demanding both concrete labor reforms and total abolition. For over a decade, the annual Migrant Workers’ Day parade and festival has ranked among Beirut’s most beautiful public gatherings, workers peering over balconies to find solidarity and not discrimination, the city momentarily transformed into an image worthy of its status as cosmopolitan.

Years later, attempting to write an anthropology of migrant labor in Lebanon while watching the region aflame under genocide, the women’s voices continued to haunt me. We academics got Kafala so wrong, I realized, parsing its colonial archives in British rule and its political technologies of legal impermanence and human rights violations. We had given it the privilege of abstraction and logistics while refusing its personality of murderous desire. We should have known better—after all, we knew Israel, and with it an investment in annihilation. The migrant domestic workers knew, though. They were Kafala’s insides and they had lived the psychic infrastructure of its necrotics. Their presence laid bare the troubling truth that incarceration had entered the heart of postwar Lebanese life. By 2008, Human Rights Watch estimated that one migrant domestic worker in the country was dying every week from suicide, a failed attempt at escape, or murder. In the meantime, tens of thousands of others were kept working, most without access to adequate rest, food, mobility, cell phones, or even wages. Gathered on the streets together, the women who survived had lived to raise their voices and tell us very simply: The opposite of the Kafala system is not work permits and immigration visas and wages and unions and even open borders. The opposite of Kafala is being alive.

I. Kafala as Anti-Humanism

African and Asian migrants have been traveling to the Middle East under the loose rubric of the Kafala system since the 1970s, when the convergence of the oil boom in the Gulf, the suppression of worker uprisings and revolutionary consciousness across the region, and the globalization of capital that constitutes “neoliberalism” produced newly transnational circuits of labor exploitation. “Kafala” itself is usually translated as “sponsorship,” referring to the requirement that migrants have a citizen-sponsor to whom both their work and residence in the country are tied. But “sponsorship” captures neither its lived experience nor the scale of the cultural transformation that has come in its wake. The Council on Foreign Relations casually lists the current number of workers governed by the Kafala system in the region as tens of millions. How many, we might wonder, over the course of its half century? Twenties of millions? Hundreds of millions?
Meanwhile, the ghosts of all those who have died on the job swallow their tongues inside human rights reports and abuse stories that no one really reads. They all blur into one another; they all start to sound the same; which is to say, they are a pattern, they are a social fact, they constitute a culture, they demand a diagnosis. Kafala is a social pathology.

The strange thing about African and Asian migrant labor in Lebanon, a system frequently referred to as “modern-day slavery,” is that it has only been around for four decades. Who learns to discriminate so quickly? And although the region has its histories of enslavement and servitude, of elitism and racism, of all the violences that define the great accomplishment we call Culture, it has not always been this way. In a not-so-distant era, the presence of Africans and Asians in Lebanon signaled a very different internationalism. Consider the fact that there was once an Ethiopian Student Union in Beirut—by “once,” I mean the 1970s. Today, to say the very word “Ethiopian” is to preclude the possibility of there currently being, there ever having been an Ethiopian Student Union in Beirut. The erasure of this imagination is the annihilation of a moment in which Asian, African, and Arab could be spoken together with neither master nor servant in the sentence.

What then, if not sponsorship, is this Kafala system? We might think of it as a neoliberal antihumanism wedged into the heart of a Middle East that was once an ecumenical frame for living. The Kafala system is a historical process by which the figure of the Arab has been wrenched of its humanism from the inside. This is a humanism of the Orient without Orientalism, fully modern and Islamicate and ours, civilizational heritage of the modern world; that which produced the first university, first astronomy, first violin, and last prophet, now gnarled by pipelines and dictators and Zionism and greed, such that the ugliness of capitalism-as-racism burns its scars into women’s backs. What happened to Arab Nationalism, anticolonial icons led by the triad of Nasser, Nkrumah, and Nehru? Consider, as answer, Kafala. What happened to Lebanon, headquarters of the Palestinian Revolution? Kafala. What happened to the Indian Ocean Arabo-Persian Gulf, all trade and ports and mysterious characters surfacing in geniza fragments? Kafala. What happened to split a map of the world into an embodied earthquake where the edge of Asia became kafeel while Africa to its west and Asia to its east came to name countries as synonyms for servant? Fuck. That. Kafala.

In Lebanon, the word Srilankiyye (Arabic for female Sri Lankan) simply means maid. This sentence is necessary but insufficient. In actuality, the term means migrant domestic worker, racialized woman, foreigner who cleans, woman whose hair is forcibly sheared upon arrival to the country where she was looking for a job, woman whose passport is held in a locked drawer in a bedroom where she cleans the sheets and folds the underwear and learns the gossip and consoles the mother and washes the child and nurses the elder and makes coffee and washes dishes and makes coffee and washes floors and makes coffee and washes windows and then sleeps, avoiding his gaze, on the floor; woman who is maybe from Sri Lanka, but a Sri Lanka that is not Sri Lanka, a Sri Lanka that has no beaches no wars no histories no flavors only brown women who wash Lebanese floors, a perfect tautology: You are a domestic worker because you are from Sri Lanka.
You are from Sri Lanka because you are a domestic worker. Echoes of Fanon’s unforgettable formula flipped. You are rich because you are white, you are white because you are rich. Before Mehdi ben Barka disappeared, before Amílcar Cabral was assassinated, before the setback and the betrayal and the melancholy that settled into a generation of male intellectuals who never quite managed to build something new in the ruins of their grief, it was not necessarily going to look this way. The Middle East could have been the west of Asia, and Asia could have named our synchronicity; a multiplicity of cultures that were all worth fighting for. The hour of liberation was knocking the ground beneath the women who sang its anthems. We must never forget that Kafala was its counterrevolution.

To recall the radical history of the 20th century is to remember that there was once an opening to an alternate present. We had a fissure, one of those endless ones that, Benjamin told us, is how the light gets in. Its Western numerical index is 1968 but we should recall it as the Tricontinental, that 1966 conference held in Havana that brought together anticolonial icons from across Africa, Asia, and Latin America. This was a moment when Arab, Asian, and African solidarity was the story of the Middle East. It took diverse forms, manifesting in guerrillas and female militants, in raised fists and bold prints, in hijackings and black turtlenecks, in wire-rimmed glasses and AK-47s and a particular shade of olive green. It was a time when Spanish and Arabic were on our lips, and we danced to a soundtrack of sultry rhythms. OSPAAAL (the Organization of Solidarity with the People of Asia, Africa and Latin America) made posters of disco fedayeen, and in the smoke-filled offices of Beirut, Yasser Arafat, the Chairman of the Palestine Liberation Organization, shared wine and humor with the Urdu poet Faiz Ahmed Faiz. It was a time when the region was headquartered in Egypt and nestled within the continent of Africa rather than the continent of Islam, and so the beards of its men were not yet targets of annihilation. Instead, we had divas like Oum Kulthoum, who sang for leaders with cavalcades that would later be remembered by children who peered out windows at the top of the stairs, only to be left with nostalgia instead of liberation, dull ache for an era when they had not yet assassinated all our heroes. In this history, the future of which has yet to be determined, Arab named a freedom drive.

How did a complex and beautiful region known as “The Arabian Gulf” appear to go so quickly from precapitalist merchant to advanced-capitalist monarch? It seems partly about speed, in the structure of that conjuncture between the region’s natural resources and American capital. Black gold, a sticky viscous substance that moves magically underground, produced the joint development strategy of the refinery and the toxic sludge. In other words, oil was discovered, Ford built cars, America built dictators, and men got rich. They call it: economies grew. By now, we know that economies grow at the expense of societies and that economic growth is the harbinger of cultural genocide. Certainly, the Gulf grew, buildings bursting vertically from sand and sea, and off the tops of them South Asian men, imported to produce a wealth that would never be theirs, plunged to their deaths. Some through unfinished windows, some off metal rebars, some burned to a crisp under the summer sun. Others, unchained, peons of debt.
We used to share a prophet who crossed the desert in prophecy finding the shade of date palms and now the Gulf is a graveyard made of glass. This, and not only Kuwaiti biryani, is the afterlife of cosmopolitanism along the northern curvature of the Indian Ocean. Who decided that Arab was going to be a name for master and not comrade? What world-historical destruction turned these dreams into so much war?

II. Humanity Exceeds Humans (Lebanon Exceeds Lebanese)

In Beirut, for more than a year, I kept hearing migrant workers ask, in tones gentle but furious: Manna insan kamein? Are we not human, too? They had not decided it would be Kafala’s refrain but I heard it that way all the same, different people constructing the same sentence, as if it had crystallized into an ontology that took the shape of a philosophical mantra in three simple words. I tried to listen to what they were saying and at first, it appeared to me that Lebanon no longer had room for them in its word for human. Yet it was phrased as a question, and a question posed in Arabic, for they had already entered Lebanon’s language; already refused the denial of the fact they were here, claiming their presence inside a culture that had staked itself on the magical properties of language and therefore could always be learned and transformed from the inside. They told us Kafala wanted them dead and then they reminded us all of the unshakeable speech of their humanity, with echoes of Sojourner Truth declaring, “Ain’t I a woman?” and Rashida Tlaib asking, “Why do the cries of Palestinians sound different to you all?” An uncomplicated confrontation between the alive and the inhumane.

Sometimes I wonder what Edward Said would think of African and Asian women who clean homes in Lebanon meeting the gaze of his insistent humanism and asking: Are we not human, too? Would it shake his faith in a world of secular universals or would it simply be a testament to what changed when value overtook values, or when it seemed we were in danger of losing the project of Arabic as heritage of liberation to Arabic as the vision of the Abraham Accords and shopping malls? Of course, Gaza would beg to differ, as would the people of Yemen. We do not yet know what will become of the Arab world after a Free Palestine.

The second refrain I kept hearing from the migrant domestic workers of Beirut was even more common than the first. Sometimes it felt as if every single African or Asian current or former migrant worker I met in Lebanon had her own rendition to offer. Rita: “I love Lebanon but the people suck.” Selam: “Lebanon is really nice, but the Lebanese, their hearts are hard.” Michelle: “I love Lebanon, I love the food, the beauty, the mountains, the city, but the problem is the people are too arrogant.” Hawa: “Lebanon is beautiful, but there are many bad people here.” Meseret: “I love Lebanon like it’s my own country, what I don’t like are the people. Most of the people.” Said Beza to Hana, in a conversation about Beirut, “The country does not hate you. Its people hate you.” Makdes: “I love Lebanon but not the people.” One of my favorites came from Zennash: “el-‘aalam khara bas el-balad mitl imm”—“The people are shit, but the place is like a mother.” Yet another antonym as axiom. I came to think of it as: Not all Lebanon is Lebanese. How to reconcile an insistence on a shared humanity that still indicts Kafala’s deadly desire?
What was this Lebanon that everyone held on to, despite so many experiences of mistreatment and cruelty, despite constant reminders of their nonbelonging—what was it that retained Lebanon as referent for the beautiful? Even as Zennash claims “the people are shit,” the excrement emanating from those who lurk as the personification of Kafala’s death drive has somehow not yet swallowed all the shadows of Lebanon’s humanity; of its capacity to cradle the dispossessed. Still, she insists, “the place is like a mother.”

I was reminded of yet another contrast. It returns us to the streets, glorious and collective. A scene that temporarily turns those windowless laundry rooms and closets where foreigners are caged inside out, the women of Kafala suddenly resplendent and carrying megaphones. The question referred to Sejaan ‘Azzi, Lebanon’s former Minister of Labor, who had famously and publicly opposed domestic workers’ right to unionize in the country. Yet I was struck by the contrast: Why was ‘Azzi the site of a rhetorical question that (in theory if not in practice) could be answered in the negative? Why is Sejaan asked for his confession, whereas Kafala always already wants death? These are the same streets, the same voices, and the same system. Contained in this protest chant is a distinction between individual and structure. The gap is that of a conversation: the ability to ask a question to which the answer can be no, even when it is structurally conditioned and historically determined to be yes. I try and parse it. It could have been the case that the Minister of Labor in Lebanon might not have a domestic worker inside his home, or at least have one he treated well, and this, rather than the stroke of his policy-making pen, would be an opening toward the abolition of Kafala; toward the death of its death. Even as they indict him in their call, reminding us whose invisibilized labor runs the households of the country’s elite, the play of the women’s words lies in their rhymed repetition of “or not?” It is as if the subjects of the Kafala system insist on giving Sejaan—not only as individual complicit in state power, but as abstract figure of Lebanese citizenship itself—a way out. A way to say no, a way to be better than himself, a way to not be the category he is hailed by, because they insist that Not all Lebanon is Lebanese.

III. Who Shows Us the Way Out?

From Ethiopia, which she returned to in 2021 after over a decade in Lebanon, Beza sends me a voice note in Arabic.

Sumayya hayati how are you, is everything ok? Hamdilla I’m good, everything is fine, I don’t know how I can tell you fine, but technically, physically, we are fine but psychologically, honestly I’m not fine at all. Every day I’m seeing what’s happening in the world and I’m seeing the extent to which people are clinging to this propaganda that is completely wrong, and I’m seeing people dying, I’m seeing what’s happening. You know, me and you, we’re close to people who are in Palestine and are Palestinian and Lebanese, we know what they think, what their perspectives are, what the truth is, but you try and tell the truth to people and they just don’t want to listen. It’s a huge problem. Every day I just feel like, what in God’s name is happening? I also wish I was in Lebanon.
What’s happening is horrific, it’s just not okay, completely not okay, this propaganda that they’re spreading about Hamas is completely wrong and the people who are dying are not even Hamas, not one of them, the ones dying are women and children and people who have nothing to do with this. It’s so, so painful. Every day I’m sitting and watching and I don’t know what to do, what I can possibly do to help, I don’t have any answers and it hurts so so so much. Honestly I can’t even be happy, every day it becomes morning and I sit in front of the television, then it becomes afternoon and I’m still sitting here watching but I don’t know what to do. This is such a difficult thing for the world, and somehow the rest of them—the Americans, everyone else, they’re happy about this. I don’t know what they get from it, if Palestinians are annihilated from the world how does it benefit them? I simply can’t understand this, it’s unbelievable, you know? And they completely refuse to understand, you try to explain and they just refuse. Half of them, they are blinded by religion, they have all these lies against Muslims and they want revenge against these “terrorists,” but you have to understand their story, these are not terrorists! You try to explain and they just refuse to listen. I know you know the truth too, it’s so hard to bear … I just pray—if there really is a god, that’s what I tell myself, if Allah is here watching what is happening, then do something! Look at what is happening! I don’t know what to say. But C and I are good, we went to the part of Ethiopia that was most affected by the war and we saw so many people, if you saw the way people are living, ya Allah it’s so difficult, to see what an ugly thing war is. I don’t know, apart from that, physically we’re well. The war has been a month, you know, and no one is doing anything. If Gaza is destroyed, will they get it then? Is that what we’re waiting for? Are you speaking to Hana and everyone else? I’m not, I just don’t know what I can say to them … apart from that, I’m good, my daughter is good, C is good … I saw there are protests in Canada, but everywhere in the world no one understands what is happening—if you haven’t lived in an Arab country or you don’t know the story [of Palestine], you just won’t understand, you won’t know anything about Israel. Only if you live with them do you come to understand their story, this is the really difficult and strange thing—I just wish everyone in the world could understand it—I’m good, I’m good hamdilla, a bit scared because of Hizbullah and Lebanon but thank God C came, but we have to keep praying, you and me, because there’s so many people we love in Lebanon and in Palestine.

I used to think: the destruction of Afro-Asian solidarity was the condition of possibility for the Kafala system. I was (partially) wrong. What also happened is that we started living together. And as the map of Kafala’s African and Asian subjects has expanded across the world, so has a new community of those who now bear witness to the struggle for life against empire’s assaults in the Middle East. Bearing memories of their Palestinian, Syrian, Lebanese, Iraqi, and Sudanese friends and neighbors, and not only sponsors or bosses; of political speeches heard on television and bombs just barely escaped under campaigns of total destruction; of revolutionary soundtracks and an unbeatable humor, the migrant workers of the region also bear witness to Israel as a name for death.
And as they have built new communities of belonging configured through the Arabic language, so they have claimed their own centrality to a shared project of liberation. As we envision a region free of war and imprisonment, the women of Kafala look toward us from behind locked kitchen doors and high-rise balconies, insisting that abolition begins from inside the home. It is towards them, also, that the struggle for a free Palestine points us.

GOATReads: Philosophy

A life in Zen

Growing up in countercultural California, ‘enlightenment’ had real glamour. But decades of practice have changed my mind On 18 May 1904, in a village near Japan’s Sagami Bay, looking out to the white peak of Mt Fuji, the son of a Buddhist priest was born. Shunryu Suzuki grew up amid the quiet rituals of Sōtō Zen, a tradition that prized stillness, repetition and near-imperceptible spiritual refinement. When he wasn’t sweeping temple courtyards, he was studying, preparing to follow in his father’s footsteps. But then, at age 55, after a life steeped in the disciplines of Japanese Zen, Suzuki travelled to San Francisco. He arrived in California in 1959, as the United States’ literary counterculture was turning toward ‘The East’ in search of new ideas. Alan Watts had already begun to popularise this turn through The Way of Zen (1957), a book that offered Americans liberation from the disillusionment and disorientation of the 20th century. A former Anglican priest with a taste for LSD, Watts presented Eastern ideas as a corrective to Western striving. For him, Zen was a way of ‘dissolving what seemed to be the most oppressive human problems’. It offered liberation from the strictures of social conditioning, convention, self-consciousness and even time. Others began to take notice, too. In California, the austere discipline of Japanese monasticism was being reimagined in a new milieu alive with jazz, psychedelics and endless seekers looking for ways to fix the human condition, or at least their own case of it. Suzuki arrived in California with answers. At a dilapidated temple founded for Japanese immigrants in San Francisco’s Japantown, he slowly defined the trajectory of Zen in the US. ‘Just sit,’ he told his followers. ‘Just breathe.’ As this spiritual practice entered the chrome-lit sprawl of postwar US, it became a new tool for artists, poets, dropouts, bohemians: a technology for awakening. It found kinship, perhaps uneasily, with the Human Potential Movement then flourishing down the coast at the Esalen Institute, where encounter groups, psychedelics and primal scream therapy were all aimed at cultivating a more expansive idea of human flourishing. In this new setting, Zen slotted neatly into the dream of total transformation. Enlightenment was no longer a mountaintop ordeal. Instead, it became a weekend workshop, a practice, a form of self-help. But Suzuki’s original path, the one he learned in Japan, appeared to point elsewhere. Real awakening seemed to demand isolation, silence, the stripping away of ordinary life. It required years in robes, cold zendō (meditation halls), endless chanting, discipline with no immediate reward. This ‘old’ version of Zen was monastic at its core. And so, in the US, Zen seemed to split, quietly, in two. On one side, a casual practice adapted to modern lives; on the other, a pursuit of true enlightenment, suited only to those who could withdraw. This was the complicated spiritual world I was born into: a world of Californian counterculture and Sōtō Zen austerity. And soon, it would set me on a tangled path of my own. Could I follow the ancient route to awakening and still live fully in the world, with all its noise, its complexity, its folly? 
I first encountered Zen in the late 1960s when my mother, hoping to make me a better, or at least more bearable, person than the obnoxious middle-school boy I undoubtedly was, dragged me to a group meditation session, known as a ‘sit’, at a community preschool in our hometown of Mill Valley, just north of San Francisco. The sit was hosted by a student of Suzuki’s named Jakusho Kwong. By this time, Suzuki had been in the Bay Area for roughly 10 years and was no longer ministering only to the immigrant and Nisei (or second-generation) communities in Japantown. By 1967, he had helped open the San Francisco Zen Center and had purchased a mountain retreat (named Zenshin-ji, ‘Zen Mind Temple’) near Big Sur. He was also about to acquire an urban temple in San Francisco that would be called Hosshin-ji, ‘Beginner’s Mind Temple’. His students, like Jakusho, were beginning to spread Zen far and wide across California to followers who were overwhelmingly young, white and (mostly) hip. That early morning in Mill Valley, I remember Jakusho stalking around the room, striking sleepy or slouching students with the keisaku (‘wake-up stick’). Each time he came around to me, sighing, he would bend down and straighten my back against the stick in a firm but gentle way. I was relieved not to be hit, but also embarrassed and oddly comforted by his touch. They gave me LSD when I was 14 or so. I mostly remember a beautiful, dizzying day full of sun and music My mother also practised with Suzuki at his mountain retreat, and I came with her one afternoon. I remember one thing from the visit. When we arrived, she stopped the car at the entrance and got out to speak with a monk wearing full robes. They had a brief conversation and the monk, probably responding to a simple question like ‘Where should I park?’, stood back and pointed. Something about the gesture was compelling: the tall, thin man in black with a shaved head pointing as if to say: ‘That is the Way you should go!’ These early, fleeting glimpses of Zen were accompanied by massive doses of the conceptual framing that surrounds Buddhism. My parents had met Alan Watts and his young family through the same preschool in Mill Valley where I first learned to meditate as a child. My father was also friends with the author Dennis Murphy and, through him, met his brother Michael Murphy, the co-founder of the Esalen Institute. Michael introduced my parents to a long list of movers and shakers in the Human Potential Movement, a countercultural spiritual and psychological movement that had drawn in the likes of Aldous Huxley and the Beatles. It was Murphy who invited my parents to observe and participate in early experiments with psychedelic drugs, including LSD and psilocybin. Our family spent a bit of time at Esalen, engaging in ‘encounter sessions’ and other experiences favoured by the Movement. As part of the programme, my parents also experimented on me: with my consent, they gave me LSD when I was 14 or so. I mostly remember a beautiful, dizzying day full of sun and music. The world was shiny, and it danced with images the like of which I had never seen. My parents and their friends never really stopped talking, and when the conversation turned to Zen, there was a flood of information and questions about enlightenment. How could someone ‘get’ it? What did ‘getting enlightened’ actually mean? 
The picture that emerged was of a lasting state attained in a flash, usually due to a profound shift in perception or understanding, and, once you ‘had’ it, the ordinary struggle and suffering of being human would no longer be a problem. This image of enlightenment was based on the experiences of monks in Japanese Zen monasteries, and on the teachings from Suzuki’s mountain retreat, Zenshin-ji, which took Japanese monasticism as its model. The path to enlightenment, according to my adult informants, was a path of monastic seclusion and intensive practice. Japanese Buddhism began in the 6th century, but Zen (originally known as ‘Chán’ in China) didn’t arrive in Japan for another 700 years. Unlike the earlier forms, which relied on strict rituals, historical teachings and doctrine, this new form of Buddhism placed greater emphasis on direct experience. In the Zen school, an unmediated experience of reality – and even enlightenment – was attainable through meditation, deemphasising conceptual thinking. During the 13th century, these ideas began to flourish in Japan after two Japanese monks from the Tendai school of Buddhism, Eisai and Dōgen, travelled to China and were introduced to the teachings of the Chán school. When Eisai returned from China in 1202, he established a temple, Kennin-ji, in Kyoto, which became the central location for those hoping to study the new approach – it remains the oldest Zen temple in the city. A decade or two later, the monk Dōgen journeyed to China with Eisai’s successor, Myōzen. He returned to Japan transformed by his understanding of Chán and established Eihei-ji, a temple in the mountains of Fukui Prefecture, northeast from Kyoto. Working from memory of what he learned in China, he established the codes and forms of monastic conduct that are still followed to this day at Zen temples around the world, including Zenshin-ji, where my mother once practised. Zazen is ideally performed while sitting with legs folded, facing the wall in absolute, upright stillness At Zenshin-ji, these codes are followed during two periods of ango (‘peaceful abiding’) each year. Practitioners who attend ango adhere to a strict schedule (involving regular days, work days, and rest days) and engage in monthly retreats that generally last a week, known as sesshin (‘mind-gathering’). The schedule on regular days involves roughly four to five hours of meditation wrapped around an afternoon of intensive work. Sesshin days involve very little work but up to 12 hours of meditation. There are also three ceremonial services each day, regular Dharma talks (a formal lecture from a Buddhist teacher), and study time. Meals are eaten in the meditation hall in a style called ōryōki (‘the bowl that holds enough’), which is highly formalised and involves a great deal of ceremony. The overarching standards for conduct during ango emphasise silence and deliberate, harmonious interaction. During ango, ‘meditation practice’ involves alternating periods of zazen (‘seated concentration’), which last from 30 minutes to an hour, and kinhin (‘walking back and forth’), a form of slow walking meditation that takes around 10 minutes and eases the strain from sitting. Zazen is ideally performed while sitting with legs folded in full or half-lotus posture facing the wall in absolute, upright stillness. Serious Zen students might spend five years participating in the ango at Zenshin-ji. Some, seeking an even deeper engagement, might spend their entire lives in monastic practice. I had other plans. 
When I was first exposed to Zen, I was in my early teens and semi-feral. I went to school, of course, but on the weekends, I did everything I could to get away and get outside. The town of Mill Valley lies at the foot of the beautiful Mount Tamalpais, and many weekends were spent hiking and camping there with friends. Sometimes we went further afield, hitchhiking to camp on the beaches of Mendocino, 140 miles away. In summer, I took longer trips: climbing mountains, swimming in ice-cold nameless lakes, sleeping in alpine meadows. A life of monastic seclusion and discipline didn’t appeal to me. And I couldn’t help noticing that the adults I knew who talked about Zen had lives that seemed at odds with their spiritual interests: they had spouses, houses, children, jobs, hobbies, extramarital affairs and addictions, among other things, all of which they would have to abandon if they were to follow the Way. None of them seemed to be willing to take the plunge. Zen didn’t appear compatible with modern life. So when my mother gave me a copy of Suzuki’s Zen Mind, Beginner’s Mind (1970), hot off the press, I read it with genuine interest, but found it easy to put down. For the next 20 years, I hardly thought about Zen. I attended a boarding school on the East Coast. I studied abstract algebra. I learned to play the electric guitar and graduated college with a major in music. Then I worked at a Burger King, then as a pot-washer, and finally as a mechanic. Even when things were going well, I was unsatisfied with everything I did I felt lost and sought advice from an old music professor, but he painted a bleak picture of life as a professional musician. He said I’d be better off moving to New York and playing in punk bands. When I went to ask my mathematics professors what I should do, they were unanimous: ‘You foolish boy. Have you not heard of the digital computer?’ And so, I stumbled into a career in tech, eventually landing in Silicon Valley as a software engineer – a ‘hacker’ as we called ourselves at the time. Living in San Francisco, I found time and energy to play music again. I started a band, the Loud Family, with a gifted singer-songwriter called Scott Miller. We even managed to make a few albums together and tour. And by my 30s, I had quit my job as a software engineer and dedicated myself to music On paper, my life looked good. I had creative friends, gainful and enjoyable employment, and had even found a way to quit (or at least pause) my ‘day job’ – an opportunity most struggling musicians would kill for. But my experience of this life didn’t match how it looked on paper. In my mid-30s, I was in the middle of a messy divorce (my second) and grieving the untimely death of my father from lung cancer. My problems were also internal. I was always wanting more of this and less of that. Even when things were going well, I was unsatisfied with everything I did and hyper-sensitive to criticism, especially when I made a real mistake, which I often did. This made me hard to work with. I often acted foolishly. I hurt people who deserved better from me. Buddhism has always been a radical explanatory framework and a set of concrete practices that directly address the ‘human condition’. When we look closely, we find people everywhere grappling with the problem of being human. Across cultures and millennia, our species has returned time and again to the same fundamental question: Why do we make such a mess of things and how can we do better? 
The countless responses to this dilemma have given rise to a universal genre that attempts to explain (and solve) human folly. At the start of Homer’s Odyssey, Zeus laments: ‘See now, how men lay blame upon us gods for what is after all nothing but their own folly.’ In the Dàodé jīng (Tao Te Ching) the Chinese sage Lǎozǐ spells out a similar concern: When Dào is lost, there is goodness. When goodness is lost, there is kindness. When kindness is lost, there is justice. When justice is lost, there is ritual. Now ritual is the husk of faith and loyalty, the beginning of confusion. Knowledge of the future is only a flowery trapping of Dào. It is the beginning of folly. Despite the murkiness of its early history, Buddhism crystallised around key axioms that offer different explanations and solutions for human folly. First, human suffering and misbehaviour are built in. They are intimately entangled with qualities that make us human: our capacity to use language, make long-term plans and form complex societies. Second, to use those capacities, we must construct a ‘self’. But this self is often based on flawed narratives shaped by culture and personal experience. Built on faulty assumptions, our self-stories generate desires – manufactured goals, preferences and ideals – which are driven by powerful emotions. But our experience of life often remains unsatisfactory, driving further striving and disappointment. I was in the grip of the very human folly that Buddhism has always sought to address Despite this predicament, the situation is not hopeless. The Buddhist ‘Way’ offers practical tools – ethical conduct, meditation, insight – that can transform our inner lives and outward behaviour. By the time Buddhism evolved into the Chán schools in China (what would later be called ‘Zen’ in Japan), further axioms had been established. First, true learning occurs through relationships; second, awakening unfolds not just through studying texts, but from self-study mostly through zazen. When Zen eventually landed in the West with the arrival of Suzuki and others, these axioms began to find a new space to flourish. Decades after first reading (and shelving) Zen Mind, Beginner’s Mind, I realised that I was in the grip of the very human folly that Buddhism has always sought to address. And so, in the middle of a busy and complicated life, I began an unexpected second career as a Zen practitioner. Had I lived anywhere else in the world, this might not have been possible. The San Francisco Zen Center (SFZC), where I decided to practise, was unique for striving to make a monastic form of Zen accessible to laypeople. I began with the kind of assumptions that are common among beginners. I thought that I would attain a persistent enlightened state through rigorous adherence to the traditional monastic model. I thought that through meditation I would resolve all my personal suffering, and that I would attain a deep understanding of human life. I believed this would change me, and might change those around me, too. I thought I could even become something like a sage, moving through life effortlessly on whatever path I chose. And so, in the early 1990s, I began. Thankfully, because I was playing in rock bands for a living, I had a loose schedule, which enabled me to do a lot of sitting. I could also participate regularly in ango and sesshin at Hosshin-ji. I even did an ango sandwiched around a six-week international tour in support of the band’s second album, and got back in time to sit the seven-day sesshin at the end. 
But after five years of this, it became clear that I needed a ‘real’ job. So I went back to working in tech. From that day on, and for many years after, my life was devoted to balancing the long hours and tight schedules in the tech sector with the long hours and strict schedules of Zen practice. By the mid-2000s, I was growing more serious about Zen and wanted to become a teacher. But my life had become busier. I now had four children, spanning adulthood to toddlerhood. My wife and I were both working, and I was still playing in two bands. My teacher, Ryushin Paul Haller, suggested that I guide an ango at Hosshin-ji by serving as shuso (‘head seat’), a role that a monk must often take on, in order to pursue a career as a Zen teacher. Though I was unsure, I agreed mostly due to the Zen principle: when your teacher asks, you say ‘Yes’ without hesitation. During the ango, I would rise at 3:30 am, ride my bike to Hosshin-ji with a brief stop for a donut and coffee on the way, change into my robes, run around the building with a bell to wake everyone up, have tea with Ryushin and his assistant, then open up the meditation hall for zazen at 5:25 am. After a couple of periods of zazen followed by a ceremonial service that involved a lot of vigorous bowing and chanting of the Zen liturgy, and finally breakfast, I would hop on my bike again, ride to the station and take the train to San Jose where I’d work a full day at my tech job. After that, I’d take the train back to San Francisco, ride home and arrive, often as late as 9 pm, to eat dinner alone at the kitchen table. I’d then fall into bed, sleep as much as I could and get up to do it again the next day. I learned to become dedicated to, as Suzuki would say, making my ‘best effort in each moment’   It was exhausting and unsustainable – even with a supportive family. The monastic model favoured by SFZC and other institutions sets up social, financial and logistical barriers that are difficult for the majority of Zen aspirants to pass. Though my path was not typical, the Way is fundamentally the same. The most common mistake is to confuse the two. A few weeks after my stint as shuso, Ryushin suggested that I start a zazenkai (Zen sitting group) – an informal, less-intensive kind of practice – at a community centre in my North Beach neighbourhood in San Francisco. I wanted it to be easy enough so that a participant could roll out of bed every weekday and attend a half-hour period of zazen. I kept the rituals basic: incense at a small altar, three bows at the start (accompanied by bells), a single bell at the end. Later, we added the chant used at SFZC temples after morning zazen. It begins by saying the following twice in Japanese: Dai sai ge da pu ku musō fuku den e hi bu nyo rai kyo kō do shoshu jo And then once in English: Great Robe of Liberation Field far beyond Form and Emptiness Wearing the Tathagata’s teaching Saving all beings The group is still going, roughly 15 years later, and it has taught me a lot in that time. I learned how to balance the intensive and the ordinary. I learned to become less concerned with what my practice should or could be, and more simply dedicated to, as Suzuki would say, making my ‘best effort in each moment’. I have learned to recognise, through intimate, ongoing self-study, the characteristics and processes involved in my own suffering and to open into the spaciousness that’s available in everyday life. 
This space has leavened and counterbalanced the emotions driving my habitual responses – my frustrations, fears, anxieties. Today, my experience of life is the fruit of simple, diligent practice. In Zen Mind, Beginner’s Mind, Suzuki explains the process of becoming enlightened like this: After you have practised for a while, you will realise that it is not possible to make rapid, extraordinary progress. Even though you try very hard, the progress you make is always little by little. It is not like going out in a shower in which you know when you get wet. In a fog, you do not know you are getting wet, but as you keep walking you get wet little by little. To my surprise, this turned out to be true, even when my practice was just the simple act of sitting each day with minimum formality. I wasn’t alone. Though plenty of monks and nuns have spent their lives in monasteries, the monastic path was never considered the only way to go. In fact, from 618 to 907 during the Táng Dynasty (the so-called ‘golden age’ of Chán), laypeople were often held up as exemplary practitioners. One example is Layman Páng and his family. Páng is still celebrated for recognising that mindful attention to ordinary tasks can, over time, become a path to awakening: How miraculous and wondrous, Hauling water and carrying firewood! In the Vimalakirti Sutra, written as early as the 3rd century, another layperson named Vimalakīrti is depicted as a house-holding family man and entrepreneur in the time of the Buddha. Even while lying sick in bed, Vimalakīrti manages to best Mañjuśrī, the Bodhisattva of transcendent wisdom, in a debate, while countless beings cram into his tiny house to watch. They fiercely discourage the ‘pursuit’ of awakening or the idea of ‘learning’ how to be enlightened In the end, I had to conclude that all the ideas I held about Zen practice when I started were wrong or, at the very least, misleading. There is no persistent state of enlightenment. The pursuit of such a state is vain by definition. There’s no ‘fix’ for the human condition in the sense that I originally sought. The Way is not accomplished by gaining ‘understanding’ in the conventional sense or by forcing the mind to shut up – no matter how appealing that prospect seems. These conclusions arose out of my own direct experience but also out of my reading of the Zen literature, which, for more than 1,000 years has been stating things differently. The founding documents of the Chán schools in China and the Zen schools in Japan are a fistful of manifestos that point to the particulars of human experience and talk about how to practise with them. These are full of aspirational formulae and encouragement, but, at the same time, fiercely discourage the ‘pursuit’ of awakening or the idea of ‘learning’ how to be enlightened. Instead – at the risk of oversimplifying something that’s bafflingly complex – they describe two major modes of engagement that characterise Zen practice. The first of these modes has been called ‘conventional cognition’ and is a form of thinking that is deeply familiar to most humans. This mode continuously manifests in our conscious and semi-conscious minds, engaging the human qualities of language, planning and sociality. It can appear as a kind of ruminative self-narration underpinned by emotional tags that drive both inner life and outward behaviour. We experience it as a running dialogue in our heads, which expresses our hopes, fears, experiences, desires, uncertainties. 
This mode works by building models (of the world and self) using a vast storehouse of remembered, language-ready categories to imagine future outcomes. These models allow us to navigate the world through anticipatory action. As we move through our day, we imagine how others will perceive us, recalling past events to anticipate their responses and determine what we should say and do. Conventional cognition gets a bad rap in most Buddhist literature because it’s so obviously the cause of the aforementioned human folly and suffering. How could it be otherwise? We are beings with a very limited perspective, provided by our sensory hardware and the experiences in our relatively short lives, operating in a world of ungraspable complexity. We are almost constantly focused on what we think will benefit us, even though our ungraspable world is so richly interconnected that the effects of our actions fall far beyond our understanding or control: determining which factors will genuinely do us good is very hard to figure out. What could possibly go wrong? But this same mode is directly responsible, at least in part, for all the beautiful things that humans make and do. Poetry, iPhones, quantum mechanics, Buddhism – none of these would exist without conventional cognition. Furthermore, we literally can’t live without it. The idea that we can somehow exit this mode for any appreciable length of time is absurd. The great Táng Dynasty Chán master Zhàozhōu captured this admirably when he said: ‘The Way is easy. Just avoid choosing.’ He then added, ‘but as soon as you use words, you’re saying, “This is choosing,” or “This is clarity.” This old monk can’t stay in clarity. Do you still hold on to anything or not?’ (My translation.) The other mode posited in Buddhist literature goes by many names, but Suzuki dubs it ‘big mind’. In contrast to the narrow focus of conventional cognition, big mind manifests as a kind of broad, relaxed, receptive attention that, by default, easily gives way to focused attention when circumstances demand it. It is not particularly tied to language. The categories, objects and concepts that are the province of conventional cognition – the elements directly involved in the activities of self-construction and self-narration – have no meaning for big mind. Conventional cognition is driven by powerful emotions; big mind is driven by an appreciation for the simple act of being alive. While both modes are active and ever-present, many people are barely aware of the presence of big mind because their preoccupation with conventional cognition is so strong. One can easily observe this through zazen. Sitting to meditate, we can experience the feeling of being tangled up in thought, which can stop us being aware of ourselves as embodied beings sitting upright. Just paying attention to our breathing can be a struggle as thoughts intrude. The point of a sitting practice is to wholeheartedly study, as intimately as possible, the moment-to-moment activity of your body and mind until big mind swims into view, even briefly. From there, the tangled relationship between these two modes becomes clearer, and big mind begins to take its natural place in our everyday lives – not only while sitting zazen, but also while walking, talking, working and playing. This is an answer to Zhàozhōu’s question: do you still hold on to anything or not? 
A new relationship between big mind and conventional cognition is what we preserve: a continuous practice of staying awake to our activity and its consequences in the context of big mind. One might reasonably ask: ‘Well, what good is that?’ It’s an excellent question and the answer has two parts. First, on a practical level, when we meet the world through big mind, even imperfectly, the grip of conventional cognition is loosened. This doesn’t mean our habitual responses disappear. On the contrary, they sometimes become more visible. But they are now surrounded by a sense of space and choice. We’re no longer compelled to act on our habitual responses, and it becomes easier to consider more skilful alternatives. We find ourselves entering each moment with a new awareness as our sensory experience of the world outside meets the inner world of our concepts and habits, and through that meeting – infused with a kind of compassionate curiosity – a way forward takes shape seemingly of its own accord. After we act, we see the results, and then begin the cycle again in a way that feels more agile and spontaneous. The ceremonies move the rock of practice nearer to the centre of the river of the ordinand’s life Second, beyond its practical benefits, this practice opens us to experiences that go far beyond what we think of as ‘the everyday’. It underscores what many historical traditions have observed: the full range of human experience is much broader than we normally expect. I have been practising in this way for roughly 35 years. Some of this practice was intensely monastic and formal. In that time, I passed through three successive ordination ceremonies – lay ordination, priest ordination, and Dharma transmission – with my teacher Ryushin. Each of these involved long periods of preparation, a lot of it spent sewing together a robe in the same way that monks have done for thousands of years. The ceremonies themselves, especially the Dharma transmission, which takes weeks to perform, are designed to change the life of the ordinand by moving the rock of practice nearer to the centre of the river of their life. After each ordination, I felt suddenly and startlingly different, and the vow to take full responsibility going forward for my conduct and its consequences gathered weight. But, more often, I practised in the context of a busy life involving work, family and passions (for me, art-making and long-distance cycling). And, over the decades, the reward of continuous practice – as emphasised by Dōgen Zenji, Suzuki and countless other teachers through the centuries – has become more deeply embedded in my being. It has manifested in my day-to-day life. As usual, this change has been both sudden and gradual. So, what are we to do? How are those of us still caught in the flux of the ‘modern’ world supposed to find peace, alleviate suffering, and confront human folly? My own experience might suggest a deprecation of monasticism, but this would be inaccurate. Monastic practice, tuned as it has been for thousands of years, is an excellent vehicle for exactly this exploration. A person who completely gives themselves over to the forms and schedules prepared for them is constantly being reminded of the beauty and the burden of conventional cognition. Again and again, they are given the opportunity to lay down their burden. Initially, they may not even recognise this invitation. Later, they might ignore or resist it, clinging to ideas they’ve developed about how things ought to be. 
But, in the long run, at least some practitioners are able to loosen their grip. That said, of the few people who are financially and logistically able to take advantage of extended monastic practice, fewer still are able to follow those forms and schedules completely. Age, fitness, physical incompatibility, disability and other constraints often limit participation in traditional monastic practices. Fortunately, the heart of zazen has nothing to do with where you live, whether you can twist your legs into lotus posture or whether you like getting up at 3:30 am. Simply taking up the posture of zazen in a quiet room has a powerful effect on body and mind. To do it, find a quiet place and, in that place, find a posture that will allow you to keep physically still for 30 minutes or so. If that is difficult or you can’t sit comfortably for 30 minutes, standing or lying down is also an option. What does it feel like to be present? What does it feel like to be ‘non-present’? Zazen is essentially a yogic practice, which invites a particular kind of continuous engagement, especially with the body and breath but also with the mind and senses. The posture, both inner and outer, should feel simultaneously relaxed and energised. A helpful principle is ‘always be sitting’. This doesn’t mean one must literally sit – those standing or lying down are free to construe ‘sitting’ metaphorically. ‘Always be sitting’ means that whenever one is engaged in zazen, one should, as much of the time as possible, be bodily engaged. This means forming the sitting posture as if it were happening for the very first time, feeling the actual rate and depth of the breath, bringing attention to where discomfort arises and, perhaps, moving gently and deliberately to relieve it. This kind of attention isn’t always easy. Sometimes we’re able to be present and sometimes we’re not. Such unpredictability is often a source of struggle for Zen students because they think they’re supposed to be ‘quieting the mind’ and they see the moments when they’re not as a failure. But this is fundamentally incorrect and unhelpful. The real invitation in zazen is not to suppress thinking, but to participate fully in, and become intimately aware of, our own version of the attentional cycle. Is it short or long? What does it feel like to be present? What does it feel like to be ‘non-present’? This last question is important. In the early stages of practice, when the mind is fully engaged in conventional cognition – especially when emotionally charged thoughts and stories are present in the mind – the broader view afforded by big mind is obscured. And the transition between these states, which can happen repeatedly during a single sitting, is extraordinarily subtle. One moment you can be sitting in awareness, and the next you’re simply thinking about your day – remembering a conversation or anxiously anticipating something – without knowing how you got there. This can get complicated. Consider the story of a monk I practised with at Zenshin-ji, who told me she once walked in the garden and was utterly transfixed by the sight of a blooming flower. Its beauty, perfection and aliveness stopped her in her tracks and, as she paused there to take it in, she was moved to tears. 
But then, some seconds later, a thought arose: ‘And… she was moved to tears!’ She heard this phrase in her mind, and the tone of the thought had such a sneering, mocking intonation that she immediately reacted angrily, raging at herself for spoiling the immediacy of the moment by commenting on and distancing herself from the experience. In the end, the relationship between big mind and conventional cognition is a tangled weave: as our attentiveness is erased, we can go from unconditioned appreciation to self-condemnation in a flash. Daily practice can feel like walking in a fog and slowly being ‘dampened’ by the effects On the other hand, the transition back into big mind doesn’t involve this erasure. When we return, we are still intimately (or at least retrospectively) aware of the thoughts, emotions and texture of whatever episode we’re right in the middle of. And in this return lies the heart of zazen: the possibility of making a subtle, almost imperceptible effort to broaden and settle our attention, to fully inhabit the moment, and to be completely present for what comes next. Once a routine is established, a regular dip into more intensive practice can be tremendously helpful. Though daily practice can feel like walking in a fog and slowly being ‘dampened’ by the effects, more intensive practice offers something different. Even devoting a single day can allow the mind to settle more deeply, freed temporarily from our regular preoccupations. Such moments can offer surprising clarity. Perhaps the corresponding metaphor is more like a long swim in a mountain lake or, if you prefer, a quiet forest pool. Finally, it’s crucial to have companions on the Way. Practising zazen in the company of others provides powerful support and shared commitments. Comparing experiences and testing insights with others can be an antidote to the blindspots and subtle delusions that plague us all. Sustained collective practice also helps cultivate samadhi – the deep meditative absorption that clarifies perception and steadies the mind. That’s why, in Buddhism, the ‘three treasures’ are Buddha (the historical figure and the symbol of the potential for awakening), Dharma (the teachings), and Sangha (the community). Dōgen Zenji, the remarkable person who really established the Zen school in Japan during the early 13th century, wrote a manifesto called Fukanzazengi, a title that may be translated (very loosely) as ‘Everyone should be sitting zazen like this.’ The last few lines, in the translation used by SFZC, read: Gain accord with the enlightenment of the Buddhas; succeed to the legitimate lineage of the ancestors’ samadhi. Constantly perform in such a manner and you are assured of being a person such as they. Your treasure-store will open of itself, and you will use it at will. In the 1960s, after Zen had travelled from Japan to San Francisco, many Americans embraced this imported spiritual practice with a hopeful, if often misguided, belief: enlightenment – the ultimate fix for the human condition – demanded monastic discipline and withdrawal from the clatter of modern life. To find a cure for human folly, one had to step outside the world entirely. I carried the same conviction. When I look back at the person who began practising Zen in earnest more than 30 years ago, I can see that my initial ideas about Zen were misguided. As those ideas began to slowly unravel, the texture and quality of my life has been transformed. It has become richer, more vivid, and more deeply alive than I could ever have imagined. 
I’ve come to understand that the Way isn’t a destination located far from the world, reached only by force of will or sudden insight. It unfolds through steady, daily practice. What Zen offers is quiet, strange and radical: a form of engagement that begins almost imperceptibly but can grow into something truly transformative. As Suzuki put it, if you walk through fog long enough, you’ll eventually be soaked. Slowly, you begin to inhabit the texture of your own life more completely. Eventually, you stop trying to be elsewhere. You begin to realise the Way was never hidden up a mountain. It’s right here, buried far beneath your own ideas about who you should be.

GOATReads: Sociology

D.A.R.E. Is More Than Just Antidrug Education—It Is Police Propaganda

Almost all Americans of a certain age have a DARE story. Usually, it’s millennials with the most vivid memories of the program—which stands for “Drug Abuse Resistance Education”—who can recount not only their DARE experiences from elementary school but also the name of their DARE officer. Looking back on DARE, many recall it as an ineffective program that did little to prevent drug use, which is why they are often surprised that the program still exists. In fact, DARE celebrated its 40th anniversary last year. Schools continue to graduate DARE classes, albeit at a far slower pace than during the program’s heyday in the 1980s and 1990s. While DARE gained widespread support and resources on the presumption that it was an alternative to the supply side approaches to the drug war that relied on arrest and incarceration, my research shows that DARE was less an alternative to policing and more a complementary piece of law enforcement’s larger War on Drugs. As police fought and continue to fight a drug war primarily through violent criminalization, arrest, and incarceration, their presence in schools presents law enforcement with a way to advance the police mission of defending the “law-abiding” from the “criminal element” of society by another means. In the process, DARE offers reliably positive public relations when reactionary police activities garner unwanted political critique or public protest, providing a kind of built-in legitimacy that shields against more radical efforts to dismantle police power. DARE America, the nonprofit organization that coordinates the program, suggests that DARE has evolved into a “comprehensive, yet flexible, program of prevention education curricula.” But the program remains largely faithful to its original carceral approach and goal of legitimizing police authority through drug education and prevention. The revised curriculum still ultimately skews toward an abstinence-only, zero-tolerance approach that criminalizes drugs and drug users. It fails to embrace harm reduction approaches, such as sharing information on how students can minimize the health risks if they do choose to use drugs, even as research increasingly demonstrates the effectiveness of such methods and as knowledge about the harmful effects of hyperpunitive, abstinence-only drug education becomes more mainstream. DARE’s reluctance to change—especially change that diminishes the police’s authority to administer drug education—should not come as a surprise. My new book, DARE to Say No: Policing and the War on Drugs in Schools, offers the first in-depth historical exploration of the once-ubiquitous and most popular drug education program in the US, charting its origins, growth and development, cultural and political significance, and the controversy that led to its fall from grace. Although DARE lost its once hegemonic influence over drug education, it had long-lasting effects on American policing, politics, and culture. As I suggest in DARE to Say No, after the establishment of DARE and the deployment of the DARE officer as the solution to youth drug use, there was almost no approach to preventing drug use that did not involve police. In this way, DARE ensures that drug use and prevention, what many experts consider a public health issue, continues to fall under the purview of law enforcement.
It is another example of the way the police have claimed authority over all aspects of social life in the United States even as evidence of the deadly consequences of this expansion of police power has come to public attention in recent years with police killings in response to mental health and other service calls. Viewed in this light, DARE administrators continue to see the program as a reliable salve for the police amid ongoing police brutality, violence, and abuse. Revisiting this history of the preventive side of America’s long-running drug war offers vital lessons for drug education today, cautioning us to be wary of drug prevention initiatives that ultimately reinforce police power and proliferate state violence in our schools and communities. DARE was, in fact, born out of police failure. The brainchild of the Los Angeles Police Department’s (LAPD) chief of police Daryl Gates and other brass, the drug education program got its start in Los Angeles, where LAPD’s efforts to stem youth drug use had repeatedly failed. The LAPD had actually tried placing undercover officers in schools as early as 1974 to root out drug dealers, but drug use among young Angelenos only increased in the intervening years, making a mockery of the police’s antidrug enforcement in schools. Recognizing this failure, Gates looked for an alternative to supply reduction efforts, which relied on vigorous law enforcement operations. He began talking about the need to reduce the demand for drugs, especially by kids and teenagers. In January 1983, he approached the Los Angeles Unified School District (LAUSD) with an idea: schools needed a new type of drug education and prevention program. Working with LAUSD officials, LAPD brass developed a proposal for the use of police officers to teach a new form of substance abuse education in Los Angeles schools. The program that emerged from that work was Project DARE. The joint LAPD and LAUSD venture launched a pilot program in the fall of 1983. Project DARE came at a moment when the LAPD waged a violent and racist drug war on city streets. If Gates promoted DARE as an alternative, he was certainly no slouch when it came to combatting drugs. A longtime LAPD officer who had helped create the nation’s first SWAT team in Los Angeles following the 1965 Watts uprising, Gates believed in asserting aggressive police power to wage what he described as a literal war to control the streets, especially in the city’s Black and Latinx neighborhoods. Gates rose to chief of police in 1978 and oversaw a vigorous and violent war on drugs and gangs, relying on a destructive mix of antidrug raids and gang sweeps that targeted Black and Latinx youth. Perhaps Gates’s most notorious statement about his attitude toward the treatment of drug users came when he quipped to a congressional committee, “The casual user ought to be taken out and shot.” Gates’s militarized and flagrantly racist approach to drug and crime enforcement provoked growing scrutiny from antipolice activists who called out the LAPD for its racism and abuse in the years prior to the 1991 beating of Rodney King and the 1992 Los Angeles rebellion. Against this backdrop, DARE’s focus on prevention and education in schools offered the LAPD a means to counteract this tough, violent image of the warrior cop, not to mention Gates’s own punitive rhetoric.
While publicly framed as an alternative to tough antidrug policing, DARE also offered the police a means to enhance their legitimacy and bolster their institutional authority at the very same time their aggressive urban policing practices were alienating predominantly Black and Latinx working-class communities and prompting growing charges of racism and brutality within LAPD’s ranks. In its first iteration, DARE began with stints of 15 (later expanded to 17) weeks to deliver the DARE curriculum in 50 classrooms. Deploying veteran police officers to the classroom beat was a calculated move. Program designers, along with many educators, believed that the youth drug crisis was so advanced that students as young as fifth graders were so savvy about drugs and drug culture that teachers were out of their depth to teach about drugs. By contrast, the thinking went, because police had experience with the negative consequences of drug use, they had much more credibility for this generation of supposed young drug savants. But it was not only that police officers had experience with drugs that lent them credibility when compared to classroom teachers. For many law enforcement officials, DARE became a shining example of how the police could wage the drug war in the nation’s schools through prevention rather than enforcement of drug laws. Focusing on prevention and education would “soften” the aggressive image of the police that routinely appeared in exposés on crack and gang violence on the nightly news and in national newsmagazines such as Newsweek and Time. As teachers, DARE officers would promote a more responsible and professional police image. Early returns from the DARE program pointed to an effective and successful program. Studies conducted in the mid-1980s by Glenn Nyre of the Evaluation and Training Institute (ETI), an organization hired by the LAPD to evaluate the program in its early years, found positive results when it came to student attitudes about drug use, knowledge of how to say no, and respect for the police. School administrators and classroom teachers also responded to the program with gusto, reporting better student behavior and discipline in the classroom. Students also seemed to like the program, especially since most of the evidence of student reactions came from DARE essays written in class or in DARE’s public relations material. As one DARE graduate recalled when the program ended, “I’m sad, because we can’t see our officer again and happy because we know we don’t have to take drugs.” That LAPD handpicked ETI to conduct this assessment suggests it was hardly an independent evaluation, a fact that some observers noted at the time. Nevertheless, such initial positive results gave LAPD and LAUSD officials a gloss of authority and primed them to make good on their promise of bringing the program to every student in the country. And they very nearly did. Within a decade of its founding, DARE became the largest and most visible drug prevention program in the United States. At its height, police officers taught DARE to fifth- and sixth-grade students in more than 75 percent of American school districts as well as in dozens of countries around the world. Officers came to Los Angeles to be trained in the delivery of the DARE curriculum. The demand for DARE led to the creation of a network of training centers across the country, which vastly expanded the network of trained DARE officers. 
DARE leaders also created a DARE Parent Program to teach parents how to recognize the signs of youth drug use and the best approach to dealing with their kids who used drugs. DARE, in short, created a wide network that linked police, schools, and parents in the common cause of stopping youth drug use. Everyone seemed to love DARE. Especially politicians. Congressmembers from both parties fawned over it. In congressional hearings and on the floor of Congress, they lauded the program and allocated funds for drug education and prevention programming in the Drug-Free Schools and Communities Act (DFSCA) provisions of the 1986 Anti-Drug Abuse Act. Amendments to the DFSCA in 1989 referenced the use of law enforcement officers as teachers of drug education and, more directly, a 1990 amendment mentioned the DARE program by name. President Reagan was the first president to announce National DARE Day, a tradition that continued every year through the Obama presidency. Bill Clinton also singled out the program in his State of the Union address in 1996, stating, “I challenge Congress not to cut our support for drug-free schools. People like the D.A.R.E. officers are making a real impression on grade-school children that will give them the strength to say no when the time comes.” Rehabilitating the police image and sustaining police authority by supporting DARE was very much a bipartisan effort. Political support for DARE reflected the program’s widespread popularity among several constituencies. Law enforcement officials hoped it would be a way to develop relationships with kids at the very moment they waged an aggressive and violent war on drugs on the nation’s streets. Educators liked it because it absolved them from teaching about drugs and meant teachers got a class period off from teaching. Parents, many of whom felt they did not know how to talk to their kids about drugs, also saw value in DARE. As nominal educators, DARE officers became part of schools’ daily operation. Even as they wore their uniforms, they were unarmed and explicitly trained not to act in a law enforcement role while on campus. DARE officers would not enforce drug laws in schools but rather teach kids self-esteem, resistance to peer pressure, and how to say no to drugs. In the minds of the program’s supporters, turning police into teachers tempered the drug war by helping kids learn to avoid drugs rather than targeting them for arrest. Officers did much more than just teach DARE classes. DARE officers embedded themselves in their communities, engaging in a wide variety of extracurricular activities. For instance, one officer coached a DARE Track Club. Another coached a football team. Some even played Santa Claus and Rudolph during the holidays. To bolster their authority on a national scale, DARE administrators constructed a public relations campaign enlisting athletes and celebrities to promote the program and facilitate trust between children and the police. DARE was more than just a feel-good program for the police and youth, however; law enforcement needed it—and not just for the purported goal of fighting drugs. DARE offered a means to burnish the public image of policing after years of aggressive and militarized policing associated with the drug war and high-profile episodes of police violence and profiling, such as the beating of Rodney King in Los Angeles or the discriminatory targeting of the Central Park Five in New York.
By using cops as teachers, DARE administrators and proponents hoped to humanize the police, transforming them into friends and mentors of the nation’s youth instead of a uniformed enemy. DARE’s proponents insisted that kids took the police message to heart. As DARE America director Glenn Levant made clear, DARE’s success was evident during the 1992 Los Angeles rebellion, when, instead of protesting, “we saw kids in DARE shirts walking the streets with their parents, hand-in-hand, as if to say, ‘I’m a good citizen, I’m not going to participate in the looting.’” The underlying goal was to transform the image of the police in the minds of kids and to develop rapport with students so that they no longer viewed the police as threatening or the enforcers of drug laws. But DARE’s message about zero tolerance for drug use—and the legitimacy of police authority—sometimes led to dire consequences that ultimately revealed law enforcement’s quite broad power to punish. The most high-profile instances occurred when students told their DARE officers about their parents’ drug use, which occasionally led to the arrest of the child’s family members. Students who took the DARE message to heart unwittingly became snitches, serving as the eyes and ears of the police and giving law enforcement additional avenues for surveilling and criminalizing community drug use. DARE was not a benign program aimed only at preventing youth drug use. It was a police legitimacy project disguised as a wholesome civic education effort. Relying on the police to teach zero tolerance for drugs and respect for law and order accomplished political-cultural work for both policy makers and law enforcement, who needed to retain public investment in law and order even amid credible allegations of police misconduct and terror. Similarly, DARE diverted attention from the violent reality of the drug war that threatened to undermine trust in the police and alienate constituencies who faced the brunt of such policing. Through softening and rehabilitating the image of police for impressionable youth and their families, DARE ultimately enabled the police to continue their aggressive tactics of mass arrest, punishment, and surveillance, especially for Black and Latinx youth. Far from an alternative to the violent and death-dealing war on drugs, DARE ensured that its punitive operations could continue apace. But all “good” things come to an end. By the mid-1990s, DARE came under scrutiny for its failure to prevent youth drug use. Despite initial reports of programmatic success, social scientists evaluating the program completed dozens of studies pointing to DARE’s ineffectiveness, which led to public controversy and revisions to the program’s curriculum. Initially, criticism from social science researchers did little to dent the program’s popularity. But as more evidence came out that DARE did not work to reduce youth drug use, some cities began to drop the program. Federal officials also put pressure on DARE by requiring that programs be verified as effective by researchers to receive federal funds. By the late 1990s, DARE was on the defensive and risked losing much of its cultural cachet. In response, DARE adapted. It revised its curriculum and worked with researchers at the University of Akron to evaluate the new curriculum in the early 2000s.
Subsequent revisions to the DARE curriculum, developed in close partnership with experts and evaluators, led to the introduction of a new version of the curriculum in 2007 called “keepin’ it REAL” (kiR). The kiR model decentered the antidrug message of the original curriculum and emphasized life skills and decision-making in its place. For all the criticism and revision, however, few observers ever questioned, or studied for that matter, the efficacy of using police officers as teachers. Despite the focus on life skills and healthy lifestyles, DARE remains a law enforcement–oriented program with a zero-tolerance spirit to help kids, in the words of DARE’s longtime motto, “To Resist Drugs and Violence.” While DARE remains alive and well, its future is increasingly uncertain. The dramatic rise in teen overdose deaths from fentanyl has renewed demands for drug education and prevention programs in schools. Rather than following DARE’s zero-tolerance playbook, some school districts have explored adopting new forms of drug education programming focused on honesty and transparency about drug use and its effects, a model known as harm reduction. The Drug Policy Alliance’s (DPA) Safety First drug education curriculum, for instance, is based on such principles. Rather than pushing punitive, abstinence-only lessons, Safety First emphasizes scientifically accurate and honest lessons about drugs and encourages students to reduce the risks of drug use if they choose to experiment with drugs. Most notably, it neither requires nor encourages the use of police officers to administer its programming. The implementation of Safety First marks the beginning of what could be a vastly different approach to drug education and prevention programs. It is a welcome alternative to drug education programs of the past. As the history of DARE demonstrates, police-led, zero-tolerance drug education not only fails to reduce drug abuse among youth, but serves as a massive public relations campaign for law enforcement, helping to obscure racist police violence and repression. It is high time Americans refuse to take the bait.

Life happened fast

It’s time to rethink how we study life’s origins. It emerged far earlier, and far quicker, than we once thought possible. Here’s a story you might have read before in a popular science book or seen in a documentary. It’s the one about early Earth as a lifeless, volcanic hellscape. When our planet was newly formed, the story goes, the surface was a barren wasteland of sharp rocks, strewn with lava flows from erupting volcanoes. The air was an unbreathable fume of gases. There was little or no liquid water. Just as things were starting to settle down, a barrage of meteorites tens of kilometres across came pummelling down from space, obliterating entire landscapes and sending vast plumes of debris high into the sky. This barren world persisted for hundreds of millions of years. Eventually, the environment settled down enough that oceans could form, and conditions were at last right for microscopic life to emerge. That’s the story palaeontologists and geologists told for many decades. But a raft of evidence suggests it is completely wrong. The young Earth was not hellish, or at least not for long (in geological terms). And, crucially, life formed quickly after the planet solidified – perhaps astonishingly quickly. It may be that the first life emerged within just millions of years of the planet’s origin. With hindsight, it is strange that the idea of hellscape Earth ever became as established as it did. There was never any direct evidence of such lethal conditions. However, that lack of evidence may itself be the explanation. Humans are very prone to theorise wildly when there’s no evidence, and then to become extremely attached to their speculations. That same tendency – becoming over-attached to ideas that have only tenuous support – has also bedevilled research into the origins of life. Every journalist who has written about the origins of life has a few horror stories about bad-tempered researchers unwilling to tolerate dissent from their treasured ideas. Now that the idea of hellscape Earth has so comprehensively collapsed, we need to discard some lingering preconceptions about how life began, and embrace a more open-minded approach to this most challenging of problems. Whereas many researchers once assumed it took a chance event within a very long timescale for Earth’s biosphere to emerge, that increasingly looks untenable. Life happened fast – and any theory that seeks to explain its origins now needs to explain why. One of the greatest scientific achievements of the previous century was to extend the fossil record much further back in time. When Charles Darwin published On the Origin of Species (1859), the oldest known fossils were from the Cambrian period. Older rock layers appeared to be barren. This was a problem for Darwin’s theory of evolution, one he acknowledged: ‘To the question why we do not find records of these vast primordial periods, I can give no satisfactory answer.’ The problem got worse in the early 20th century, when geologists began to use radiometric dating to firm up the ages of rocks, and ultimately of Earth itself. The crucial Cambrian period, with those ancient fossils, began 538.8 million years ago. Yet radiometric dating revealed that Earth is a little over 4.5 billion years old – the current best estimate is 4.54 billion. This means the entire fossil record from the Cambrian to the present comprises less than one-eighth of our planet’s history. 
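A quick arithmetic check, using only the dates just quoted, bears out that fraction:

\[
\frac{538.8\ \text{million years}}{4{,}540\ \text{million years}} \approx 0.119 < \tfrac{1}{8} = 0.125
\]

In other words, roughly seven-eighths of our planet’s history predates the oldest fossils Darwin knew about.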
However, in the mid-20th century, palaeontologists finally started finding older, ‘Pre-Cambrian’ fossils. In 1948, the geologist Reg Sprigg described fossilised impressions of what seemed to be jellyfish in rocks from the Ediacara Hills in South Australia. At the time, he described them as ‘basal Cambrian’, but they turned out to be older. A decade later, Trevor Ford wrote about frond-like remains found by schoolchildren in Charnwood Forest in England; he called them ‘Pre-Cambrian fossils’. The fossil record was inching back into the past. By 1980, the fossil record had become truly epic. On 3 April that year, a pair of papers was published in Nature, describing yet more fossils from Australia. They were stromatolites: mounds with alternating layers of microorganisms and sediments. In life, microbes like bacteria often grow in mats. These become covered in sediments like sand, and a new layer of cells grows on top, over and over. Stromatolites were well known, but these, from the Pilbara region in Western Australia, were astonishingly old. One set was 3.4 billion years old; the other looked like it might be even older, as much as 3.5 billion years old. Over the past 45 years, palaeontologists have meticulously re-analysed the Pilbara remains to confirm that they are real. It’s not a trivial problem: with rocks that ancient, strange distortions can form that look like fossilised microbes but are actually just deformed rocks. To resolve this, researchers have deployed an array of techniques, including searching for traces of organic matter. At this point, we are as confident as we can be that the Pilbara fossils are real. That means life has existed for at least 3.5 billion years. When I wrote The Genesis Quest back in 2020, I said this gave us a billion-year time window after the formation of Earth in which life could form. Since then, the evidence for life has been pushed further back in time. Until relatively recently, many researchers would have said the window was distinctly narrower than that. That’s because there were reasons to think that Earth was entirely uninhabitable for hundreds of millions of years after it formed. The first obstacle to life’s emergence was the Moon’s formation. This seems to have happened very soon after Earth coalesced, and in the most dramatic way imaginable: another planetary body, about the size of Mars, collided with Earth. The impact released so much energy it vaporised the surface of the planet, blasting a huge volume of rocks and dust into orbit. For a little while, Earth had a ring, until all that material gradually fused to form the Moon. This explosive scenario is the only one anyone has so far thought of that can explain why Moon rocks and Earth rocks share such similar isotope ratios. It seems clear that, if there was any nascent life on the young Earth, it was obliterated in the searing heat of the impact. Still, this happened around 4.5 billion years ago. What about the billion years between the Moon-forming impact and the Pilbara fossils? We can divide this vast span of time into two aeons, separated by one simple factor: the existence of a rock record. The oldest known rocks are 4.031 billion years old. The half-billion years before that is called the Hadean; the subsequent time is called the Archean. 
As its ominous name suggests, the Hadean was assumed to have been hellish. In the immediate aftermath of the Moon-forming impact, the surface was an ocean of magma that slowly cooled and solidified. Artists’ impressions of this aeon often feature volcanoes, lava flows and meteorite impacts. The early Archean, if anything, seemed to be worse – thanks to a little thing called the Late Heavy Bombardment. Between around 3.8 and 4 billion years ago, waves of meteoroids swept through the solar system. Earth took a battering, and any life would have been obliterated. Only when the bombardment eased, 3.8 billion years ago, could life begin. In which case, life began in the 300 million years between the end of the Late Heavy Bombardment and the Pilbara fossils. This was a compelling narrative for many years. It was repeated uncritically in many books about the origins and history of life. Yet there were always nagging issues. In particular, palaeontologists kept finding apparent traces of life from older strata – life that was, on the face of it, too old to be real. As early as 1996, the geologist Stephen Mojzsis, then at the University of California, San Diego, and his colleagues were reporting that life was older than 3.8 billion years. They studied crystals of apatite from 3.8-billion-year-old rocks from the Isua supracrustal belt in West Greenland. Within the crystals are traces of carbon, which proved to be rich in one isotope, carbon-12, and low in the heavier carbon-13. This is characteristic of living matter, as living organisms preferentially take up the lighter carbon-12. Nearly two decades later, the record was extended even further back in time by Elizabeth Bell at the University of California, Los Angeles and her colleagues. They studied thousands of tiny zircon crystals from the Jack Hills of Western Australia. Some of these crystals are Hadean in age: since there are no rocks from the Hadean, these minuscule shards are almost all we have to go on. One zircon proved to be about 4.1 billion years old. Trapped within it was a tiny amount of carbon, with the same telltale isotope mixture that suggested it was biogenic. Perhaps most dramatically, in 2017, Dominic Papineau at University College London and his colleagues described tubes and filaments, resembling colonies of bacteria, in rocks from the Nuvvuagittuq belt in Quebec, Canada. The age of these rocks is disputed: they are at least 3.77 billion years old, and a study published this June found that some of them are between 4.16 and 4.20 billion years old. This would mean that life formed within a few hundred million years of Earth’s formation, deep in the Hadean. There are many more such studies. None of them is wholly convincing on its own: they often rely on a single crystal, or a rock formation that has been heated and crushed, and thus distorted. Each study has come in for strong criticism. This makes it difficult to assess the evidence, because there are multiple arguments in play. A believer in an early origin of life would highlight the sheer number of studies, from different parts of the world and using different forms of evidence. A sceptic would counter that we should accept a fossil only if it is supported by multiple lines of evidence, as happened in the Pilbara. To which a believer would say: the rock record from the early Archean is very sparse, and there are no rocks from the Hadean at all. 
It is simply not possible to obtain multiple lines of evidence from such limited material, so we must make a judgment based on what we have. The sceptic would then say: in that case, we don’t and can’t know the answer. For many years, the sceptics carried the argument, but more recently the tide has turned. This is partly because the fossil evidence of early life has accumulated – but it’s also because the evidence for the Late Heavy Bombardment sterilising the planet has collapsed. An early crack in the façade emerged when Mojzsis and Oleg Abramov at the University of Colorado simulated the Late Heavy Bombardment and concluded that it was not intense enough to sterilise Earth. Surface life might have been obliterated, but microbes could have survived underground in huge numbers. However, the bigger issue is that the Late Heavy Bombardment may not have happened at all. The evidence rested on argon isotopes from Moon rocks collected by the Apollo missions in the 1960s and ’70s. A re-analysis found that those isotopes were prone to a specific kind of artefact in the radioisotope data – creating the illusion of a sharp bombardment 3.9 billion years ago. What’s more, the Apollo missions all went to the same region of the Moon, so the astronauts may have mostly collected rocks from the same big impact – which would all naturally be the same age. Meanwhile, rocks on Earth preserve evidence of past impacts, and they show a long slow decline until 3 billion years ago, or later. Likewise, giant impacts on Mars appear to have tailed off by 4.48 billion years ago. There is also no sign of a Late Heavy Bombardment on the asteroid Vesta. If the Late Heavy Bombardment really didn’t happen, then it is reasonable to imagine that life began much earlier – perhaps even in the Hadean. The problem is how to demonstrate it, when the fossil evidence is so impossibly scant. This is where genetics comes in. Specifically, phylogenetics, which means creating family trees of different organisms showing how they are related, and when the various splits occurred. For example, phylogenetics tells us that humans, chimpanzees and bonobos are descended from a shared ancestor that lived about 7 million years ago. By constructing family trees of the oldest and most divergent forms of life, phylogeneticists have tried to push back to the last universal common ancestor (LUCA). This is the most recent population of organisms from which every single living thing today is descended. It is the great-great-etc grandmother of all of us, from bacteria to mosses to scarlet macaws. Estimating the date of LUCA is fraught with uncertainties, but in the past decade phylogeneticists have started to narrow it down. One such attempt was published by a team led by Davide Pisani at the University of Bristol in the UK. They created a family tree of 102 species, focusing on microorganisms, as those are the oldest forms of life. They calibrated their tree using 11 dates known from the fossil record. The headline finding was that LUCA was at least 3.9 billion years old. In 2024, many of the same researchers returned with a more detailed analysis of LUCA based on more than 3,500 modern genomes. This suggested LUCA lived between 4.09 and 4.33 billion years ago, with a best estimate of around 4.2 billion. What’s more, their reconstruction of LUCA’s genome suggested it was pretty complex, encoding around 2,600 proteins. 
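The published analyses behind these dates use sophisticated Bayesian ‘relaxed clock’ models, but the underlying logic can be sketched with the simplest possible molecular clock, in which genetic differences accumulate at a roughly constant rate along every lineage; treat this strictly as a simplification for illustration, not the method those teams actually used. A split of known fossil age fixes the rate, which is then applied to deeper, uncalibrated splits such as the one at LUCA:

\[
r \approx \frac{d_{\text{cal}}}{2\,t_{\text{cal}}}, \qquad t_{\text{LUCA}} \approx \frac{d_{\text{deep}}}{2r},
\]

where \(d\) is the genetic distance between two lineages, \(t_{\text{cal}}\) is the fossil-dated age of the calibration split, and the factor of two reflects the fact that both lineages have been accumulating changes since they diverged.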
LUCA also seems to have lived in a complex ecosystem. In particular, it appears to have had a primitive immune system, which implies it had to defend itself from some of its microbial neighbours. These details highlight a point that is not always obvious: LUCA does not represent the origin of life. It is just the most recent ancestor shared by all modern organisms. It’s possible that life had existed long before LUCA – beginning early in the Hadean. This fits with gathering evidence that the Hadean was not so hellish after all. It’s true that the entire planetary surface was molten at the very start of the Hadean, but it seems to have solidified by 4.4 billion years ago. Evidence from zircons suggests there was abundant liquid water at least 4.3 billion years ago, and possibly 4.4 billion. By 4.2 billion years ago, there seem to have been oceans. These primordial seas may have been considerably deeper than they are today, because Earth’s interior was hotter and could not hold as much water, so for a time there may have been no exposed land – or at least, only small islands. These strands of evidence amount to a complete rewriting of the early history of life on Earth. Instead of life beginning shortly after the Late Heavy Bombardment 3.8 billion years ago, it may have arisen within 100 million years of the planet’s formation. If so, what does that tell us about how it happened? The most immediate implication is that our ideas cannot rely on the power of chance at all. There have been a great many hypotheses about the origins of life that relied on a coincidence: say, a one-in-a-billion collision between two biological molecules in the primordial soup. But if life really formed within 0.1 billion years of the planet’s birth, ideas like this are absolutely untenable. There just wasn’t time. Take the RNA World, one of the leading hypotheses of life’s origins since the 1980s. The idea is that the first life did not contain the smorgasbord of organic chemicals that modern cells do. Instead, life was based entirely on RNA: a close cousin of the more familiar DNA, of which our genomes are made. RNA is appealing because it can carry genetic information, like DNA, but it can also control the rates of chemical reactions – something that is more usually done by protein-based enzymes. This adaptability, the argument goes, makes RNA the ideal molecule to kickstart life. However, a close examination of the RNA World scenario reveals gaping holes. An RNA molecule is essentially a chain, and there are huge numbers of possible RNAs, depending on the sequence of links in the chain. Only a tiny fraction of these possible RNAs actually do anything useful. It’s not obvious how those ‘good’ RNAs are supposed to have formed: why didn’t conditions on the young Earth just create a random mix of RNAs? And, remember, we can’t rely on the power of chance and large numbers: it all happened too quickly. Instead, researchers now largely agree that they must find processes that work quickly and efficiently to generate complexity and life-like systems. But what does that mean in practice? There are various ideas. One prominent school of thought is that life formed in alkaline vents on the sea floor, where the flow of hot water and chemicals created a cradle that incubated life. 
Others have highlighted the potential of volcanic vents, meteorite impact craters, geothermal ponds, and tidal zones: anywhere that has a flow of energy and chemicals. The reality is that we are dealing with a huge number of intersecting questions. What was the environment in which the first life emerged? What was that first life made of and how did it work? Was the first life a simplified version of something we can observe today, or was it something radically different – either in composition or mechanism, or both – that was then supplanted by more familiar systems? I believe that the most promising thing to have happened in origins-of-life research in recent years has been a growing willingness to accept uncertainty and reject dogma. Origins research is barely a century old: the first widely discussed hypotheses were set out by Alexander Oparin and J B S Haldane in the 1920s, and the Miller-Urey experiment that kickstarted practical research in the field was published in 1953. For those first few decades, origins research was on the fringes of science, with only a handful of researchers actively working on it. Just as there was no direct evidence that the Hadean was a hellscape, there has been very little hard evidence for any of the competing scenarios for life’s origins. Researchers devised elaborate stories with multiple steps, found experimental evidence that supported one or two of those steps, and declared the problem solved. A small group of people, a lack of hard evidence and a great many intersecting questions: that’s a recipe for dogmatic ideas and angry disagreements. And that’s what origins research was like for decades. I’ve been reporting on the field since the 2000s, and on multiple occasions I’ve seen researchers – including heads of labs – use language that resembled the worst kind of internet trolling. There was a time when I thought this abrasiveness was funny: now I just think it’s ugly and pointless. What origins research needs is open-mindedness and a willingness to disagree constructively. That culture shift is being driven by a generation of younger researchers, who have organised themselves through the Origin of Life Early-career Network (OoLEN). In 2020, a large group of OoLEN members and other researchers set out what they see as the future of the field. They complained of ‘distressing divisions in OoL research’: for instance, supporters of the RNA World have tended to contemptuously dismiss those who argue that life began with metabolic processes, and vice versa. The OoLEN team argued that these ‘classical approaches’ to the problem should not be seen as ‘mutually exclusive’: instead, ‘they can and should feed integrating approaches.’ This is exactly what is happening. Instead of focusing exclusively on RNA, many teams are now exploring what happens when RNA – or its constituent parts – is combined with other biological molecules, such as lipids and peptides. They are deploying artificial intelligence to make sense of the huge numbers of molecules involved. And they are holding back from strong statements in favour of their own pet hypotheses, and against other people’s. This isn’t just a healthier way to work – though it absolutely is that. I believe it will also lead to faster and deeper progress. In the coming years, I expect many more insights into what happened on our planet when it was young and what the first life might have looked like. 
I presented the hellscape-Earth scenario as a kind of just-so story. Of course, because the data is so limited, we cannot escape telling stories about our planet’s infancy. But maybe soon we’ll be able to tell some better ones.

The macho sperm myth

The idea that millions of sperm are on an Olympian race to reach the egg is yet another male fantasy of human reproduction. Before science was able to shed light on human reproduction, most people thought new life arose through spontaneous generation from non-living matter. That changed a smidgen in the middle of the 17th century, when natural philosophers were able (barely) to see the female ovum, or egg, with the naked eye. They theorised that all life was spawned at the moment of divine creation; each person existed inside another within a woman’s eggs, like Russian nesting dolls. This view of reproduction, called preformation, suited the ruling class well. ‘By putting lineages inside each other,’ notes the Portuguese developmental biologist and writer Clara Pinto-Correia in The Ovary of Eve (1997), ‘preformation could function as a “politically correct” antidemocratic doctrine, implicitly legitimising the dynastic system – and of course, the leading natural philosophers of the Scientific Revolution certainly were not servants.’ One might think that, as science progressed, it would crush the Russian-doll theory through its lucid biological lens. But that’s not precisely what occurred – instead, when the microscope finally enabled researchers to see not just eggs but sperm, the preformation theory morphed into a new, even more patriarchal political conceit: now, held philosophers and some students of reproduction, the egg was merely a passive receptacle waiting for vigorous sperm to arrive to trigger development. And sperm? The head of each contained a tiny preformed human being – a homunculus, to be exact. The Dutch mathematician and physicist Nicolaas Hartsoeker, inventor of the screw-barrel microscope, drew his image of the homunculus in 1695, less than two decades after sperm had first become visible under the microscope. He did not actually see a homunculus in the sperm head, Hartsoeker conceded at the time, but he convinced himself that it was there. More powerful microscopes eventually relegated the homunculus to the dustbin of history – but in some ways not much has changed. Most notably, the legacy of the homunculus survives in the stubbornly persistent notion of the egg as a passive participant in fertilisation, awaiting the active sperm to swim through a hailstorm of challenges to perpetuate life. It’s understandable – though unfortunate – that a lay public might adopt these erroneous, sexist paradigms and metaphors. But biologists and physicians are guilty as well. It was in the relatively recent year of 1991, long after much of the real science had been set in stone, that the American anthropologist Emily Martin, now at New York University, described what she called a ‘scientific fairy tale’ – a picture of egg and sperm that suggests that ‘female biological processes are less worthy than their male counterparts’ and that ‘women are less worthy than men’. The ovary, for instance, is depicted with a limited stock of starter eggs depleted over a lifetime whereas the testes are said to produce new sperm throughout life. Human egg production is commonly described as ‘wasteful’ because, from 300,000 egg starter cells present at puberty, only 400 mature eggs will ever be released; yet that adjective is rarely used to describe a man’s lifetime production of more than 2 trillion sperm. Whether in the popular or scientific press, human mating is commonly portrayed as a gigantic marathon swimming event in which the fastest, fittest sperm wins the prize of fertilising the egg. 
If this narrative were just a prejudicial holdover from our sexist past – an offensive male fantasy based on incorrect science – that would be bad enough, but continued buy-in to biased information impedes crucial fertility treatments for men and women alike. To grasp how we got here, a tour through history can help. Scientific understanding of sex cells and the process of human conception is a comparatively recent development. An egg, the largest cell in a human body, is barely visible to the naked eye, and about as big as the period ending this sentence. The smallest human body cell, a sperm, is by contrast utterly invisible to the unaided eye. Sperm were unknown to science until 1677, when the Dutch amateur scientist Antonie van Leeuwenhoek first observed human sperm under a microscope. Around the same time, it was realised that the human ovary produced eggs, although it was not until 1827 that the German biologist Karl Ernst von Baer first reported actual observations of human and other mammalian eggs. After van Leeuwenhoek’s discovery of sperm, it took another century before anyone realised that they were needed to fertilise eggs. That revelation came in the 1760s, when the Italian priest and natural scientist Lazzaro Spallanzani, experimenting on male frogs wearing tight-fitting taffeta pants, demonstrated that eggs would not develop into tadpoles unless sperm was shed into the surrounding water. Bizarrely, until Spallanzani announced his findings, it was widely thought – even by van Leeuwenhoek for some years – that sperm were tiny parasites living in human semen. It was only in 1876 that the German zoologist Oscar Hertwig demonstrated the fusion of sperm and egg in sea urchins. Eventually, powerful microscopes revealed that an average human ejaculate, with a volume of about half a teaspoon, contains some 250 million sperm. But a key question remains unanswered: ‘Why so many?’ In fact, studies show that pregnancy rates tend to decline once a man’s ejaculate contains fewer than 100 million sperm. Clearly, then, almost half the sperm in an average human ejaculate are needed for normal fertility. A favoured explanation for this is sperm competition, stemming from that macho-male notion of sperm racing to fertilise – often with the added contention that more than one male might be involved. As in a lottery, the more tickets you buy, the likelier you are to win. Natural selection, the thinking goes, drives sperm numbers sky-high in a kind of arms race for the fertilisation prize. Striking examples of sperm competition do indeed abound in the animal kingdom. Our closest relatives, the chimpanzees, live in social units containing several adult males that regularly engage in promiscuous mating; females in turn are mated by multiple males. Numerous features, such as conspicuously large testes, reflect a particularly high level of sperm production in such mammal species. In addition to large testes, they have fast sperm production, high sperm counts, large sperm midpieces (containing numerous energy-generating mitochondria for propulsion), notably muscular sperm-conducting ducts, large seminal vesicles and prostate glands, and high counts of white blood cells (to neutralise sexually transmitted pathogens). The vesicles and the prostate gland together produce seminal fluid, which can coagulate to form a plug in the vagina, temporarily blocking access by other males. Popular opinion and even many scientists perpetuate the same sperm scenario for humans, but evidence points in a different direction. 
In fact, despite various lurid claims to the contrary, there’s no convincing evidence that men are biologically adapted for sperm competition. The story of sperm abundance in promiscuously mating chimpanzees contrasts with what we see in various other primates, including humans. Many primates live in groups with just a single breeding male, lack direct competition and have notably small testes. In all relevant comparisons, humans emerge as akin to primates living in single-male groups – including the typical nuclear family. Walnut-sized human testes are just a third of the size of chimpanzee testes, which are about as large as chickens’ eggs. Moreover, while chimpanzee ejaculate contains remarkably few physically abnormal sperm, human semen contains a large proportion of duds. Quality controls on human ejaculate have seemingly been relaxed in the absence of direct sperm competition. For species not regularly exposed to direct sperm competition, the only promising alternative explanation for high sperm counts concerns genetic variation. In a couple of rarely cited papers published more than four decades ago, the biologist Jack Cohen at the University of Birmingham in the UK noted an association between sperm counts and chromosomal crossing over during sperm production. During meiosis, the special type of cell division that produces sex cells, pairs of chromosomes exchange chunks of material through crossing over. What Cohen found is that, across species, sperm counts increase in tandem with the number of crossovers during their production. Crossing over increases variation, the essential raw material for natural selection. Think of sperm production as a kind of lottery in which enough tickets (sperm) are printed to match available numbers (different genetic combinations). Other findings fly in the face of the popular scenario, too. For instance, most mammalian sperm do not in fact swim up the entire female tract but are passively transported part or most of the way by pumping and wafting motions of the womb and oviducts. Astoundingly, sperm of smaller mammals tend to be longer on average than sperm of larger mammals – a mouse sperm is longer than the sperm of a whale. But even if these were equivalent in size, swimming up to an egg becomes more of a stretch the larger a species gets. Indeed, it might be feasible for a mouse sperm to swim all the way up to the egg – but it is quite impossible for an even smaller blue whale sperm to swim 100 times further up the female tract unaided. Convincing evidence has instead revealed that human sperm are passively transported over considerable distances while travelling through the womb and up the oviducts. So much for Olympic-style racing sperm! In fact, of the 250 million sperm in the average human ejaculate, only a few hundred actually end up at the fertilisation site high up in the oviduct. Sperm passage up the female tract is more like an extremely challenging military obstacle course than a standard sprint-style swimming race. Sperm numbers are progressively whittled down as they migrate up the female tract, so that less than one in a million from the original ejaculate will surround the egg at the time of fertilisation. Any sperm with physical abnormalities are progressively eliminated along the way, but survivors surrounding the egg are a random sample of intact sperm. Many sperm do not even make it into the neck of the womb (cervix). 
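The lottery image can be made concrete with the figures just given. Treating the ‘one in a million’ survival rate as a rough per-sperm probability (an illustrative assumption, not a measured parameter), the expected number of sperm reaching the egg’s vicinity comes out at a few hundred, as described above:

\[
N = 2.5 \times 10^{8}, \quad p \approx 10^{-6}: \qquad \mathbb{E}[\text{arrivals}] = Np \approx 250, \qquad P(\text{none arrive}) = (1-p)^{N} \approx e^{-Np} \approx e^{-250}.
\]

On this back-of-envelope reading, the number of sperm that reach the egg’s vicinity rises and falls roughly in proportion to the number ejaculated, which is one way of seeing why very large starting counts still matter even though almost all of the ‘tickets’ lose.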
Acid conditions in the vagina are hostile and sperm do not survive there for long. Passing through the cervix, many sperm that escape the vagina become ensnared in mucus. Any with physical deformities are trapped. Moreover, hundreds of thousands of sperm migrate into side-channels, called crypts, where they can be stored for several days. Relatively few sperm travel directly through the womb cavity, and numbers are further reduced during entry into the oviduct. Once in the oviduct, sperm are temporarily bound to the inner surface, and only some are released and allowed to approach the egg. Pushing the notion that the fertilising sperm is some kind of Olympic champion has obscured the fact that an ejaculate can contain too many sperm. If sperm surround the egg in excessive numbers, the danger of fertilisation by more than one (polyspermy) arises with catastrophic results. Polyspermy occasionally occurs in humans, especially when fathers have very high sperm counts. In the commonest outcome, in which two sperm fertilise an egg, cells of the resulting embryo contain 69 chromosomes instead of the usual 46. This is ultimately always fatal, usually resulting in miscarriage. Although some individuals survive as far as birth, they always expire shortly afterwards. Because polyspermy typically has a fatal outcome, evolution has evidently led to a series of obstacles in the female reproductive tract that strictly limit the number of sperm allowed to surround an egg. Polyspermy has practical implications for assisted reproduction in cases of compromised fertility or infertility. For instance, the original standard procedure of introducing semen into the vagina for artificial insemination has been replaced by direct injection into the womb (intrauterine insemination, or IUI). Directly introducing semen into the womb bypasses the reduction of sperm numbers that normally occurs in the cervix, where mucus weeds out physically abnormal sperm. Analyses of clinical data have revealed that depositing 20 million sperm in the womb (less than a 10th of the number in the average ejaculate) is enough to achieve a routine pregnancy rate. Sperm numbers become even more important when it comes to in vitro fertilisation (IVF), with direct exposure of an egg to sperm in a glass vessel. This bypasses every single one of the natural filters between the vagina and the egg. In the early development of IVF, the general tendency was to use far too many sperm. This reflected the understandable aim of maximising fertilisation success, but it ignored natural processes. High sperm numbers, between 50,000 and 500,000 around an egg, increasingly depressed the success rate. Optimal fertilisation rates were achieved with only 25,000 sperm around an egg. Both IUI and IVF potentially increase the risk of polyspermy and the likelihood of miscarriage. The possibility of polyspermy casts new light on the evolution of sperm counts. Discussions of sperm competition generally focus exclusively on maximising sperm counts, but – as is common in biology – some kind of trade-off is involved. Whereas natural selection can lead to increased sperm production if males are in direct competition, it will also favour mechanisms in the female tract that constrain numbers of sperm around the egg. In promiscuously mating primates, such as chimpanzees, increased oviduct length in females offsets increased sperm production by males. 
This presumably limits the numbers of sperm approaching the egg. It also shows that the female’s role in fertilisation is by no means as passive as is often assumed. The entrenched idea that ‘the best sperm wins’ has elicited various suggestions that some kind of selection occurs, but it is difficult to imagine how this could possibly happen. The DNA in a sperm head is tightly bound and virtually crystalline, so how could its properties be detected from outside? Experiments on mice indicate, for instance, that there is no selection according to whether a sperm contains a male-determining Y-chromosome or a female-determining X-chromosome. It seems far more likely that human fertilisation is a gigantic lottery with 250 million tickets, in which – for healthy sperm – successful fertilisation is essentially the luck of the draw. Other puzzling features of sperm also await explanation. It has long been known, for instance, that human semen contains a large proportion of structurally abnormal sperm with obvious defects such as double tails or tiny heads. The ‘kamikaze sperm’ hypothesis proposed that these dud sperm in fact serve different functions in competition, such as blocking or even killing sperm from other men. However, this has since been effectively discredited. The entrenched notion that human sperm, once ejaculated, engage in a frantic race to reach the egg has completely overshadowed the real story of reproduction, including evidence that many sperm do not dash towards the egg but are instead stored for many days before proceeding. It was long accepted as established fact that human sperm survive for only two days in a woman’s genital tract. However, from the mid-1970s on, mounting evidence revealed that human sperm can survive intact for at least five days. An extended period of sperm survival is now widely accepted, and it could be as long as 10 days or more. Other myths abound. Much has been written about mucus produced by the human cervix. In so-called ‘natural’ methods of birth control, the consistency of mucus exuding from the cervix has been used as a key indicator. Close to ovulation, cervical mucus is thin and has a watery, slippery texture. But precious little has been reported regarding the association between mucus and storage of sperm in the cervix. It has been clearly established that sperm are stored in the crypts from which the mucus flows. But our knowledge of the process involved is regrettably restricted to a single study reported in 1980 by the gynaecologist Vaclav Insler and colleagues of Tel Aviv University in Israel. In this study, 25 women bravely volunteered to be artificially inseminated on the day before scheduled surgical removal of the womb (hysterectomy). Then, Insler and his team microscopically examined sperm stored in the crypts in serial sections of the cervix. Within two hours after insemination, sperm colonised the entire length of the cervix. Crypt size was very variable, and sperm were stored mainly in the larger ones. Insler and colleagues calculated the number of crypts containing sperm and sperm density per crypt. In some women, up to 200,000 sperm were stored in the cervical crypts. Insler and colleagues also reported that live sperm had actually been found in cervical mucus up to the ninth day after insemination. Summarising available evidence, they suggested that after insemination the cervix serves as a sperm reservoir from which viable sperm are gradually released to make their way up the oviduct. 
This dramatic finding has been widely cited, yet its implications have been largely ignored, and there has never been a follow-up study. In his textbook Conception in the Human Female (1980) – more than 1,000 pages in length – Sir Robert Edwards, a recipient of the 2010 Nobel prize for the development of IVF, mentioned cervical crypts in a single sentence. Since then, many other authors have mentioned sperm storage in those cervical crypts equally briefly. Yet storage of sperm, with gradual release, has major implications for human reproduction. Crucially, the widespread notion of a restricted ‘fertile window’ in the menstrual cycle depends on the long-accepted wisdom that sperm survive only two days after insemination. Sperm survival for perhaps 10 days or more radically erodes the basis for so-called ‘natural’ methods of birth control through avoidance of conception. Sperm storage is also directly relevant to attempts to treat infertility. Another dangerous misconception is the myth that men retain full fertility into old age, starkly contrasting with the abrupt cessation of fertility seen in women at menopause. Abundant evidence shows that, in men, sperm numbers and quality decline with increasing age. Moreover, it has recently emerged that mutations accumulate about four times faster in sperm than in eggs, so semen from old men is actually risk-laden. Much has been written about the fact that in industrialised societies age at first birth is increasing in women, accompanied by slowly mounting reproductive problems. A proposed solution is the highly invasive and very expensive procedure of ‘fertility preservation’ in which eggs are harvested from young women for use later in life. However, increasing reproductive problems with ageing men, notably more rapid accumulation of sperm mutations, have passed largely unmentioned. One very effective and far less expensive and invasive way of reducing reproductive problems for ageing couples would surely be to store semen samples from young men to be used later in life. This is just one of the benefits to be gained from less sexism and more reliable knowledge in the realm of human reproduction. Nowadays, the story of Hartsoeker’s homunculus might seem veiled in the mist of time, mentioned only as an entertaining illustration of blunders in the early exploration of human sex cells. But its influence, along with the macho-male bias that spawned it, has lived on in subtler form among the cultural stereotypes that influence the questions we ask about reproductive biology.

GOATReads: Philosophy

Aha = wow

We surveyed thousands of scientists in four countries and learned just how important beauty is to them. When Paulo was an undergraduate, he was tasked with taking photographs of neurons. ‘A single cell,’ he came to notice, ‘it’s a whole universe.’ Looking at cells beneath a microscope is not unlike gazing at stars in the sky, Paulo realised. ‘We all know they are there, but until you see them with your own eyes, you don’t have that experience of awe, of wow.’ It was then, as he put it, that he ‘fell in love with these cells for the first time.’ Now a stem cell biologist at a university in the United States, Paulo remains enthralled with ‘pretty’ cells. His desktop background is filled with their images. But the ‘aesthetic’ of stem cells, Paulo insists, is not just in the pretty pictures they make. ‘There is some other type of beauty that is not visual,’ he explained to us in an interview. ‘I’m not sure what it is. Perhaps there is some kind of…’ His voice trailed off. Searching for the right words, Paulo continued: ‘It’s almost an appreciation for the complexity of things. But what you realise is that it’s all uncoverable. We can know it all.’ Paulo fell in love with science for the visual beauty of nature, but his continued passion for it owes to what we call the beauty of understanding – the aesthetic experience of new insight into the way things are, when encountering the hidden order or inner logic underlying phenomena. To grow in understanding, Paulo reflected, is ‘something very satisfying’. ‘The beauty in science,’ he emphasised, ‘that is, at least for me, a huge motivator.’ Public conversations about beauty in science tend to focus on the beauty anyone can see, say, in photographs of stars like those released by the James Webb Space Telescope, or the beauty that physicists, for instance, ascribe to elegant equations. But these foci miss something important. Over the past three years, we have studied thousands of scientists on three different continents, asking them about the role of beauty in their work. Our research left us convinced that the core aesthetic experience science has to offer is not primarily about sensory experiences or formulas. At the deepest level, what motivates scientists to pursue and persist in their work is the aesthetic experience of understanding itself. Centring the beauty of understanding presents an image of science more recognisable to scientists themselves, and one with greater appeal for future scientists. Moreover, it foregrounds the need for institutionally supporting scientists in their quests for understanding, which are all too often stifled by the very system on which they depend. ‘Beauty’, for many, is the last word that comes to mind when thinking of science. Where the word ‘science’ connotes ‘objective’, ‘value-free’ and ‘rational’, the term ‘beauty’ evokes the ‘subjective’ and ‘emotional’. Such associations are reinforced by the nerdy and aloof scientist stereotype, which remains common among young people, according to decades of research using the ‘Draw-A-Scientist Test’. Modern education systems institutionalise this dichotomy with the division of the sciences and humanities into separate classes and colleges. Such tensions hark back at least to the 19th century, when English Romantic poets such as William Blake and John Keats accused scientists of stripping beauty and mystery away from nature. 
Keats, despite his own considerable scientific training, reputedly complained that Isaac Newton ‘destroyed all the poetry of the rainbow, by reducing it to the prismatic colours.’ In his poem Lamia, Keats writes: ‘all charms fly / At the mere touch of’ natural philosophy or science, which, he laments, will ‘clip an Angel’s wings, / Conquer all mysteries by rule and line,’ and ‘unweave a rainbow’. This stereotype of science, however, contrasts with what scientists have had to say in recent decades, from Nobel Prize-winning physicists such as Subrahmanyan Chandrasekhar and Frank Wilczek to the British biologist Richard Dawkins. These scientists portray science as an impassioned endeavour, and a font of beauty, awe and wonder. Dawkins, for instance, takes aim at Keats when he argues in his book Unweaving the Rainbow (1998) that, far from being a source of disenchantment, science nourishes an appetite for wonder. ‘The feeling of awed wonder that science can give us,’ Dawkins contends, ‘is one of the highest experiences of which the human psyche is capable.’ ‘It is a deep aesthetic passion,’ he writes, ‘to rank with the finest that music and poetry can deliver.’ These testimonies call to mind pioneering scientists like Alexander von Humboldt, whose passion for beauty fuelled his work, as Andrea Wulf shows in her exquisite biography The Invention of Nature (2015). Such cases, however, raise several questions: are aesthetic experiences in science the rare prerogative of geniuses, or do most contemporary scientists often find beauty in their work? How do encounters with beauty shape scientists personally, and their practice of science – whether positively or negatively? To answer these questions, we conducted the world’s first large-scale international study of the role of aesthetics in science. In 2021, our team surveyed nationally representative samples of nearly 3,500 physicists and biologists in four countries: India, Italy, the United Kingdom and the United States. Over the past three years, we also interviewed more than 300 of them in-depth. Our data have made clear to us not only that science is an aesthetic quest – just like art, poetry or music – but also that the heart of this quest is the beauty of understanding. Our research identified three types of beauty in science, each of which shapes the course of scientists’ work in distinct yet interconnected ways. What is visually or aurally striking – what we refer to as sensory beauty – is a common source of beauty for scientists. Indeed, the majority of scientists (75 per cent) see beauty in the objects and phenomena they study, from cells to stars, and in the symmetries, simplicity and complexity of nature, which can evoke emotions of awe and wonder. Many even find such beauty in scientific models and instruments. As was the case for Paulo, sensory beauty is often what draws people into science. For instance, a biologist in the UK told us that beauty was what drew her to her field of study: ‘I personally think that a pollinator visiting a flower, both of those things to me are incredibly beautiful. And the two of them interacting with each other, I think, is one of the most beautiful things in nature, which is why I study pollination.’ Of course, not all aspects of scientific work are beautiful in this sense. Many respondents told us about the absence of sensory beauty in their work. 
‘To be honest with you, if anything, I would describe the real practice of science as ugly or hideous,’ one UK physicist told us. ‘There is an awful lot of boring drudge work, frustration, cursing, swearing, like I’m sure there is in any developmental job, and beautiful is not a word I would use at that point.’ We also found no shortage of complaints about ugliness in scientific workplaces such as dark, dingy labs. But even those who do not find much sensory beauty in their work can encounter beauty in another sense. Physicists in particular often rely on what we call useful beauty. This second type of beauty involves treating aesthetic properties such as simplicity, symmetry, aptness or elegance as heuristics or guides to truth. For instance, the Nobel Prize-winning physicist Murray Gell-Mann said that ‘in fundamental physics, a beautiful or elegant theory is more likely to be correct than a theory that is inelegant.’ Paul Dirac, another Nobel Prize-winner, went so far as to say that ‘it is more important to have beauty in one’s equations than to have them fit experiment.’ Not all physicists, however, are enamoured with useful beauty. There is no guarantee the aesthetic properties that facilitated understanding in the past will do so in the future. And, historically, many theories considered beautiful turned out to be wrong, while ugly ones turned out to be right – and then were no longer considered ugly. Some argue that physics is in a similar situation today. In Farewell to Reality (2013), Jim Baggott complains that theories of super-symmetry, super-strings, the multiverse and so on – driven largely by beautiful mathematics – are ‘fairytale physics’. They are ‘not only not true’, he contends, they are ‘not even science’. Similarly, in Lost in Math (2018), Sabine Hossenfelder argues that aesthetic criteria have become a source of cognitive bias leading physics astray. Our survey found that physicists are evenly divided when it comes to the reliability of useful beauty. On the question of whether beautiful mathematics is a guide to truth, 34 per cent disagree while 35 per cent agree; regarding whether elegant theories are more likely to be correct than inelegant ones, 42 per cent disagree while 23 per cent agree. As to Dirac’s claim about beauty in equations being more important than experimental support, a striking 77 per cent of physicists disagree, while only 8 per cent agree with the statement. Beauty’s utility in science is not limited to physics or to its heuristic role in theory selection; it can also be significant when designing experiments in physics and biology. ‘I try to make our experimental design as elegant and as direct as possible, which for me is a form of beauty,’ a US biologist told us. ‘Because the less effort we have to put into getting an answer the better. The faster, the cleaner, the fewer different parameters we have to control for in an experiment, the more convincing and clean, I think, the information will be that they get out of it.’ Many scientists similarly emphasised the relevance of aesthetic considerations for designing a project, writing code and analysing data. Beauty, thus, has relevance in science beyond theory choice. The third type of beauty, which we find is most prevalent and argue is the most important, is what we call the beauty of understanding. The vast majority of scientists describe moments of understanding as beautiful. 
In our surveys and interviews, when asked where they find beauty in their work, scientists regularly pointed to times when they grasped the hidden order, inner logic or causal mechanisms of natural phenomena. These moments, one UK physicist told us, are ‘like looking into the face of God for non-religious people – how you can look at something and think, oh my God, that’s how things actually work, that’s how things are!’ A US biologist concurred: ‘It is recognising,’ he explained, ‘this is what’s going on. There’s a leap to the truth, or a leap to a sense of generalisation or something that is beyond the particular but in some way represents the real thing that’s there, the real thing that’s going on. We think of that as beautiful.’ Nearly 95 per cent of scientists in the four countries we studied report experiencing this kind of beauty at some point in their scientific practice. If the notion of the beauty of understanding seems strange, that is due to a dualistic perspective – the philosophical roots of which stretch back through Immanuel Kant to René Descartes – that treats reason and emotion, objectivity and subjectivity, science and art as binary opposites. But there is an alternative perspective, which brings into view the continuity of reason and emotion, objectivity and subjectivity, science and art. In this view, it is as natural to recognise beauty in understanding as in a sunset. In the 1890s, the polymath and founder of American pragmatism Charles S Peirce coined the term ‘synechism’ for ‘the tendency to regard everything as continuous’ (the term’s roots are in the Greek word for ‘continuous’). Embracing this philosophy of continuity, his fellow pragmatist John Dewey developed a theory of aesthetic experience encompassing science and art as differing only in ‘tempo and emphasis’, not in kind. Each, Dewey argued, manifests experience that has been transformed from a state of fragmentation, turbulence or longing to a state of integration, harmony or – his favoured term – ‘consummation’. The process of transformation is a struggle, but that only enhances the satisfaction of the consummatory experience it brings about. From initial puzzles through the ugliness of data collection to the closure of theoretical explanation, Dewey’s theory of aesthetic experience describes just the sort of journey the scientists we spoke with articulated when describing their quests for the beauty of understanding. Once one considers, with Dewey, aesthetic experience as a potential latent in any fragmented experience – recognising its scope extends beyond sensory or useful beauty – one can see that science enables aesthetic experience through, for instance, the integration of seemingly disparate observations and ideas in moments of understanding. The process of moving from a puzzling observation to a theory that accounts for it, or from a theory to a discovery of evidence it predicts, can be long, arduous and messy – a far cry from sensory beauty. But when understanding emerges, so does beauty, as the quality that makes the moment stand out against the rough course of experience that came before. Feeling the beauty of the moment clues scientists in to the possibility that they just may have arrived at something transformative – emotion and reason work hand in hand. In our interviews, many scientists recalled voicing their sense of beauty in moments of understanding with terms like ‘aha’, or evoked feelings of a ‘high’ or a ‘kick’ to describe it. 
As Dewey noted in his essay ‘Qualitative Thought’ (1931), such ‘ejaculatory judgments’ voiced ‘at the close of every scientific investigation … [mark] the realised appreciation of a pervading quality,’ namely, the distinctly aesthetic quality of the experience. It is this quality that scientists refer to as ‘beauty’ in relation to understanding. Both sensory and useful beauty are oriented to the beauty of understanding. The beauty scientists find in the phenomena is an invitation to deeper understanding. And it is this understanding that imparts value to the technologies, experiments and theories employed in research. The deeper significance of the aesthetic properties that scientists appeal to lies in their potential for facilitating understanding. The beauty of understanding, further, can help scientists persist through the challenges of research. After telling us of his passion for the beauty in science, Paulo shared that, a year earlier, he nearly quit his job. ‘I was hospitalised out of stress,’ he admitted. ‘I had shingles. That stuff is an endogenous virus that comes out when you’re stressed. It was horrible.’ Taken aback, we asked him to explain what stressed him out so much. ‘I was having just so many grants rejected. And I was like: “Oh my gosh, what’s going to happen now?”’ Like most scientists, Paulo relies on competitive research grants to do his work. He had spent countless hours applying for research funding, but was having no success. Not only had he become physically ill, he was also sick of trying. He made a LinkedIn account. ‘I was ready to look for another job,’ he said. ‘I was completely devastated by the fact that I just didn’t want to write more grants.’ And, without funding, his scientific quest would end. Paulo is no outlier in considering leaving his position due to stress. Depression and anxiety have become commonplace among researchers in their early careers. More and more scientists seem to be leaving their jobs; 50 per cent of academic scientists these days quit after five years. A toxic work culture may be pushing them out. Leading scientific publications such as Nature have issued warnings of a mental health crisis in science, which is causing attrition that threatens the future of the profession. Paulo nearly became one of these casualties. Fortunately, a conversation with a mentor intervened. ‘Everything just changed all of a sudden, because she was just telling me the grants are not really the most important thing in the world.’ His mentor told him not to neglect ‘all the other good things’ he was doing, and to take pride that his students were ‘very motivated about science’. When Paulo faltered in his motivation to pursue the grants he needed for research, renewal came with realising that his guidance was invaluable for his students’ quests for the beauty of understanding. Paulo did not quit his job. And he eventually got the funding he needed to continue his research. Beauty helped him persevere. Beauty permeates science, from the sensory beauty that often draws people to a scientific career in the first place, to aesthetic criteria such as elegance, which shapes theory selection and experimentation for better or worse. But most important is the beauty of understanding, an aesthetic experience in its own right, the quest for which deepens scientists’ motivation to pursue and persist in their careers. 
Appreciating the aesthetic nature of the scientific quest is crucial for understanding science and why people devote their lives to it. Two-thirds of the scientists we studied insist it is important for scientists to encounter beauty, awe and wonder in their work. But, for too long, popular culture has been saturated with false myths about science and beauty, misguiding young people about what it means to be a scientist and what doing science is actually like. As we learned through our research, scientists are motivated by beauty. Their quests to achieve the beauty of understanding are fuelled by both reason and emotion. Dualistic assumptions about ‘objective’ science and ‘subjective’ judgments of beauty get in the way of seeing the continuity of science with other aesthetic ventures. Making the beauty of understanding central to public conversations about science will draw much-needed attention to questions of how institutions can better support and leverage scientists’ motivating passion. How many scientists would find greater motivation for their work if there were fewer barriers to realising the beauty of understanding that drives them in the first place? Institutional incentives to ‘publish or perish’ and bring in short-term grant funding inadvertently contribute to an environment where many scientists like Paulo find themselves burnt out, or so disillusioned with toxic competition that they leave academia altogether. For scientists to thrive, institutional incentives need to capitalise upon, not crush, the beauty that drives them. Funders might offer flexible and longer-term research grants that facilitate creative interdisciplinary collaborations. Considering that more frequent aesthetic experiences at work are associated with higher levels of wellbeing among scientists, workshops incorporating aesthetic and appreciative reflection about research practices may help scientists reconnect with the deeper, motivating beauty of their work. As Andrea Wulf told us in a conversation, science is a palace with many doors; yet our educational systems tend to guide students through only a few conventional entry points. The beauty of understanding is a neglected pathway. By centring such beauty in public discussions about science, we can better engage bright, creative youth, helping them see that scientists are people like themselves. This is exactly what inspired one young woman we interviewed to become a physicist. At age 17, Tricia had babysat for a physicist’s young children, so she was used to hearing questions about what makes a rainbow and why the sky is blue. But what surprised and enchanted her was hearing these very questions posed by the children’s father. He explained his wonder this way: ‘It’s because I’m an artist. So physicists are artists who can’t draw.’ This ‘triggered something in me’, Tricia told us, as she had always wanted ‘to paint to describe nature’ but lacked the skill. This physicist’s example convinced her she need not give up her dream entirely. What she discovered, rather, was that science could allow her to capture the beauty of nature in an even more profound way – not just through images, but through understanding. Now a theoretical particle physicist at a university in the UK, Tricia is living her dream by uncovering the hidden structures of the natural world, painting with the brush of theory.
Who else might become a scientist if only they, like Tricia, were shown the beauty of understanding that science offers? Source of the article

GOATReads: History

The unseen masterpieces of Frida Kahlo

Lost or little-known works by the Mexican artist provide fresh insights into her life and work. Holly Williams explores the rarely seen art included in a new book of the complete paintings. You know Frida Kahlo – of course you do. She is the most famous female artist of all time, and her image is instantly recognisable, and unavoidable. Kahlo can be found everywhere, on T-shirts and notebooks and mugs. While writing this piece, I spotted a selection of cutesy cartoon Kahlo merchandise in the window of a shop, maybe three minutes' walk from my home. I bet many readers are similarly in striking distance of some representation of her, with her monobrow and traditional Mexican clothing, her flowery headbands and red lipstick. Partly, this is because her own image was a major subject for Kahlo – around a third of her works were self-portraits. Although she died in 1954, her work still reads as bracingly fresh: her self-portraits speak volumes about identity, of the need to craft your own image and tell your own story. She paints herself looking out at the viewer: direct, fierce, challenging. All of which means Kahlo can fit snugly into certain contemporary, feminist narratives – the strong independent woman, using herself as her subject, and unflinchingly exploring the complicated, messy, painful aspects of being female. Her paintings intensely represent dramatic elements of a dramatic life: a miscarriage, and being unable to have children; bodily pain (she was in a horrific crash at 18, and suffered physically all her life); great love (she had a tempestuous relationship with the Mexican artist Diego Rivera, as well as many other lovers, male and female, including Leon Trotsky), and great jealousy (Rivera cheated on her repeatedly, including with her own sister). But that's not all they show – her art is not always just about her life, although you could be forgiven for assuming it was. Books are written about her trauma, her love life; she's been the subject of a Hollywood movie starring Salma Hayek. Kahlo has become a bankable blockbuster topic, guaranteed to get visitors through the door of galleries, even if what they see is often more about the woman than her art. But what about her work? For some art historians, the relentless focus on the person rather than the output has become tiresome, which is why a new, monumental book – Frida Kahlo: The Complete Paintings – has just been published by Taschen, offering for the first time a survey of her entire oeuvre. Mexican art historian Luis-Martín Lozano, working with Andrea Kettenmann and Marina Vázquez Ramos, provides notes on every single Kahlo work we have images of – 152 in total, including many lost works we only know from photographs. Speaking to Lozano on a video call from Mexico City, I ask whether a comprehensive survey of her work is overdue, despite there being so many shows about her all over the world. "As an art historian, my main interest in Kahlo has been in her work as an artist. If this had been the main concern of most projects in recent decades, maybe I would say this book has no reason to be. But the truth is, it hasn't," he says. "Most people at exhibitions, they're interested in her personality – who she is, how she dressed, who does she go to bed with, her lovers, her story." Because of this, exhibitions and their catalogues have often focused on that story, and tend to "repeat the same paintings, and the same ideas about the same paintings. They leave aside a whole bunch of works," says Lozano.
Books also re-tread the same ground: "You repeat the same things, and it will sell – because everything about Kahlo sells. It's unfortunate to say, but she's become a merchandise. But this explains why [exhibitions and books] don't go beyond this – because they don't need to." The result is that certain mistakes get made – paintings mis-titled, mis-dated, or the same poor-quality, off-colour photographs reproduced. But it also means that ideas about what her works mean get repeated ad infinitum. "The interpretation level becomes contaminated," suggests Lozano. "All they say about the paintings, over and over, is 'oh it's because she loved Rivera', 'because she couldn't have a kid', 'because she's in the hospital'. In some cases, it is true – but there's so much more to it than that." The number of paintings – 152 – is not an enormous body of work for a major artist. And yet, astonishingly, some of these have never been written about before: "never, not a single sentence!" laughs Lozano. "It's kind of a mess, in terms of art history." Offering a comprehensive survey of her work means bringing together lost or little-known works, including those that have come to light in auctions in the past decade or so, and others that are rarely loaned by private collectors and so have remained obscure. Lozano hopes to open up our understanding of Kahlo. "First of all – who was she as an artist? What did she think of her own work? What did she want to achieve as an artist? And what do these paintings mean by themselves?" This means looking again at early works, which might not be the sort of thing we associate with Kahlo – but reveal how much she was inspired by her father, Guillermo, a professional photographer and an amateur painter of floral still lifes. Pieces such as the little-known Still Life (with Roses) from 1925, which has not been exhibited since 1953, are notably similar in style to his. Kahlo continued to paint astonishing, vibrant still lifes her whole career – although they are less well-known to the general public than her self-portraits, less collectable, and less studied. An understanding of their importance to her has been strengthened since Lozano and co discovered documents revealing Kahlo's life-long interest in the symbolic meaning of plants. She learnt this from her father, and discussed it in letters with her half-sister Margarita (her father's child from an earlier marriage), who became a nun. Kahlo and Margarita's letters "talk about the symbolic meaning of flowers and fruits and the garden of Eden, that our body is like a flower we have to take care of because it was ripped off from paradise," says Lozano. "This is amazing, and proves why this topic of still lifes and flowers had such meaning to her." He offers a new interpretation of a painting from 1938, called Tunas, which depicts three prickly pears in different stages of ripening – from green and unripe to a vibrant, juicy, blood-red – as representing Kahlo's own understanding of her maturation as an artist and person, but as also potentially having religious symbolism (the bloody flesh evoking sacrifice). The Complete Paintings book also takes pains to reveal the depths of Kahlo's intellectual engagement with art-world developments – countering the notion that she was merely influenced by meeting Rivera in 1928, or that her work is some self-taught, instinctive howl of womanly pain.
Her paintings reveal Kahlo's research into and experiments in art movements, from the youthful Mexican take on Modernism, Stridentism, to Cubism and later Surrealism. "Frida Kahlo's paintings were not only the result of her personal issues, but she looked around at who was painting, what were the trends, the discussions," says Lozano. He points to her first attempts at avant-garde paintings – 1927's Pancho Villa and Adelita, and the lost work If Adelita, both of which use sharp, Modernist lines and angles – as proof that "she was looking at trends in Mexican art even before she met Rivera". You can also see her interest in Renaissance Old Masters, which she discovered prints of in her father's library, in early work: it's suggested her 1928 painting, Two Women (Portrait of Salvadora and Herminia), depicting two maids against a lush, leafy background, was inspired by Renaissance portraiture traditions, as seen in the works of Leonardo da Vinci. The work was bought in the year it was painted, but its location remained unknown until it was acquired by the Museum of Fine Arts, Boston, in 2015. Given she only made around 152 paintings, a surprising number are lost. But then, Kahlo wasn't especially successful in her lifetime – she didn't have many shows, or sell that many works through galleries and dealers. Instead, many of her paintings were sold or given away directly to artists, friends and family, as well as movie stars and other glittering admirers, often living abroad. That means less of a paper trail, making it harder to track down works. In honesty, looking at black-and-white pictures of lost portraits probably isn't going to prove revelatory to anyone beyond the most hard-core scholars – although there are some astonishing paintings still missing. One lost 1938 image, Girl with Death Mask II, depicts a little girl in a skull mask in an empty landscape; it chills, and we know Kahlo discussed this painting in relation to her sorrow at being unable to conceive. Check your attics, too, for Kahlo's painting of a horrific plane crash – which we only have a photograph of now – which she's known to have made in a period of great personal turmoil in the years after discovering her sister's affair with Rivera in 1935. Like another of her very well-known paintings, Passionately in Love or A Few Small Nips, depicting a woman murdered by her husband, The Airplane Crash was based very closely on a real-life news report; Lozano's team have unearthed both original articles in their research. While Kahlo may have been drawn to these traumatic events because she was suffering pain in her own life, her almost documentary precision in depicting these external news stories should not be overlooked. Kahlo was an avowed Communist, and politically engaged all her life, but it is in less well-known works from the final years of her life that you see this most explicitly emerge. At this time, she suffered a great deal of pain, and underwent many operations, eventually including amputation below the knee. But Kahlo continued painting till 1953, with difficulty but also with renewed purpose. Her biographer Raquel Tibol documented her saying: "I am very concerned about my painting. More than anything, to change it, to make it into something useful, because up until now all I have painted is faithful portraits of my own self, but that's so far removed from what my painting could be doing to serve the [Communist] Party.
I must fight with all my strength so that the small amount of good I am able to do with my health in the way it is will be directed toward helping the Revolution. That's the only real reason for living." This resulted in works like 1952's Congress of the Peoples for Peace (which has not been exhibited since 1953), showing a dove in a broad fruit tree – and two mushroom clouds, representing Kahlo's nightmares about nuclear warfare. She became an active member of many peace groups – collecting signatures from Mexican artists in support of a World Peace Council, helping form the Mexican Committee of Partisans for Peace, and making this painting for Rivera to take to the Congress of the Peoples for Peace in Vienna in 1952. Doves feature in several of her late still lifes – as do an increasing number of Mexican flags or colour schemes (using watermelons to reflect the green, white and red of the flag), suggesting Kahlo's intention was that her work should show her nationalism and Communism. More uncomfortably, her final paintings include loving depictions of Stalin, as her politics became more militant. Perhaps her most moving late painting, however, is a self-portrait: Frida in Flames (Self-portrait Inside a Sunflower). It's harrowing, painted in thick, colourful impasto; shortly before her death, Kahlo slashed at it with a knife, scraping away the paint, frustrated at her inability to make work or perhaps in an acknowledgment that her end was nearing. Tibol, who was witness to this decisive, destructive act, called it "a ritual of self-sacrifice". "It's a tremendous image," says Lozano. "It's very interesting in terms of aesthetics – when your body is not working anymore, when your brain is not enough to portray what you want to paint, the only source she's left with is to deconstruct the image. This is a very contemporary, conceptual position about art: that the painting exists not only in its craft, but also what I think the painting stands for." We are left with a painting that is imperfect, certainly a world away from the fine, smooth surfaces and attention to detail of Kahlo's more famous self-portraits – but it is nonetheless an astonishingly powerful work that deserves to be known. There is something tremendously poignant in an artist so well-known for crafting their own image using their final creative act to deliberately destroy that image. Even in obliterating herself, Kahlo made her work speak loudly to us. Source of the article

GOATReads: Politics

South Africa has chosen a risky approach to global politics: 3 steps it must take to succeed

South Africa finds itself in a dangerous historical moment. The world order is under threat from its own primary architect. The US wants to remain the premier global political power without taking on any of its responsibilities. This dangerous moment also presents opportunities. South Africa’s response has been one of strategic autonomy. This involves taking independent and non-aligned positions on global affairs, to navigate between competing world powers. But South African policymakers lack the political acumen and bureaucratic ability required to navigate this complex global order and to exploit the new possibilities. Strategic autonomy is not the norm in global affairs. It is very rare for small countries to succeed at it without at least some costs. Drawing from our expertise – as a political scientist and an economist working on the international economy – we conclude that if South Africa is to succeed in its strategic autonomy ambitions the country must do three things. First, its economic and foreign policy priority must be the African continent. Second, it must pursue bureaucratic excellence, especially in its diplomatic and security apparatus. Third, it must prepare for reprisals that are likely to follow its choice of an independent path in global affairs. A handful of countries have been able to pursue strategic autonomy in navigating the international system. They include Brazil, India and the Republic of Ireland. These countries have four necessary assets: global economic importance; leverage; bureaucratic capability; and political will and agency manifested in foreign policy cohesiveness and agility. India’s size – over 1.4 billion people and the fourth largest market in the world – makes it a location of both production and consumption. This has become more important given the US and western desire to create a counterbalance to China as a low-cost producer and a market for exports. Brazil’s assets are its geographic size, its mid-size population (three times South Africa’s), its mineral wealth, and its political importance to South America. It is also the tenth largest economy in the world. Ireland is a small country, but it uses its strategic location in the European Union to influence global affairs. South Africa is currently lacking on all these fronts. But, with strategic planning and reforms, and in partnership with other African countries, it is possible to enhance the country’s strategic importance to the global economy. Where to from here? If South Africa is to succeed as a nation, become globally relevant, and have autonomy in the global economy, it must recognise its challenges, understand their drivers and address them pragmatically. So what should it do? First, it’s important to recognise that South Africa is a small country. Its economy is marginal to the rest of the world. The continent of Africa has a population of around 1.5 billion people, which is likely to double by 2070 – the only part of the global economy in which demographic growth will occur. Purely in terms of population size, Africa will be more important than ever before. This can only be a strategic lever if countries across the continent integrate their economies more strongly. Thus, South Africa’s economic and foreign policy should focus on Africa and on building the African Continental Free Trade Area. Without this, its long-term economic development is in danger and it can’t develop the political leverage that enables independence in global affairs.
With its African partners, South Africa should be rebalancing its international trade. It should shift from being an exporter of raw materials to being a manufacturing and service economy. Many countries across Africa have deposits of minerals that are strategically important to the global economy, especially as the climate transition shapes relations. This must be used to build integration across the continent so the region engages with powerful economies as a regional bloc. Second, professional excellence must be taken seriously. South Africa’s political stewardship of the economy has been poor, and driven by narrow political objectives of the ruling party-linked elite. For example, policy in the important mining sector has been chaotic, at best. It has not served as a developmental stimulant or as a political lever for strategic autonomy. Specific to international affairs, South Africa has to professionalise the diplomatic corps. It has been significantly weakened and its professional capability eroded through political appointments. These make up the vast majority of ambassadorial deployments. There should be limits to the political appointments of ambassadors from the cohort of former African National Congress politicians and their family members. In addition, South Africa should have fewer embassies, located in more strategic countries, with budgets appropriate to the job. It is embarrassing that embassies in places like London don’t have enough budget to market the country, undertake advocacy and advance the country’s national agenda. But professional excellence needs to be extended far beyond the diplomatic corps. South Africa cannot continue to be compromised by incompetent municipal and national governance. And this is not solely the result of corruption and cadre deployment. It’s also tied to a transformation agenda that eschews academic and professional excellence. In addition, South Africa cannot pretend to be leading an independent path in global affairs without having the security apparatus that goes with such leadership. On this score, the country is sadly lacking. Its security apparatus – the South African National Defence Force, police and intelligence service – needs attention. The defence force is poorly funded and, like the police and intelligence, largely a “social service” for former ANC operatives and combatants. Third, South Africa needs to prepare for the reprisals that are likely to follow if it charts an independent path in global affairs, such as the current response from the Trump administration to discipline South Africa for taking an autonomous position on Gaza. This requires understanding the form that such reprisals could take, anticipating their consequences, and being prepared for them. This would require diplomatic agility to proactively seek new markets, alternative sources of investment and additional political allies. In contrast, South Africa’s responses have largely been reactive. While it’s a dangerous and uncertain world, it is also full of new possibilities. A new bipolar or multipolar world could enable South Africa and Africa to play off global powers against each other, to maximise opportunities for national economic development and independence. This will only happen if South Africans collectively become agents of their own change. It will require developing leverage which others take seriously, and a government and public administration that works for the people of the country. Source of the article