

GOATReads: Psychology

Why Does Every Global Event Feel Like a Crisis?

Understanding the psychology of presentism can help you adapt through change. Every generation believes it is living through the most dangerous, most consequential moment in history. Public policymakers call this presentism or presentism bias: our tendency to overestimate the singularity and existential weight of our own time. Psychologists would point instead to the availability heuristic and recency bias: information that is most easily recalled, and events that happened most recently, tend to seem most important. It isn’t irrational. The Prussian and the Roman empires aren’t going to hurt you now. The eventual heat death of the universe isn’t worth losing sleep over. The threats that feel most relevant are the ones that could affect us directly. Nuclear brinkmanship in the 1960s, stagflation in the 1970s, and the 2008 financial crash were each experienced as uniquely defining. The greater the crisis, the more it commands our attention. Human cognition evolved to prioritize the immediate and tangible over the distant and abstract. That vigilance helps us survive, but it also narrows perspective, making it harder to see continuity with the past or to prepare for the future.

Overlapping Transitions

History rarely moves in clean breaks. Even in times of upheaval, remnants of the old order persist. Industrial and digital economies overlap. Secular and religious values compete. Generational attitudes toward work, family, and identity clash and cross-pollinate at the same time. Psychologically, this coexistence can feel destabilizing. The neat categories people expect, like “old versus new” and “Boomers versus Zoomers,” don’t map cleanly onto reality. New technology is never simply better; it has risks and consequences. People find themselves in the middle of contradictions: leaders who preach change but repeat old mistakes, workplaces that find new ways to make the same ones, families divided by different views and struggling to relate. This is where ambiguity tolerance plays an important role. It is a trait that describes how comfortable people are with uncertainty, complexity, and mixed messages. For those with high ambiguity tolerance, the coexistence of old and new feels manageable or even energizing. They can sit comfortably with paradox and enjoy the complexity and unpredictability. But for those with low ambiguity tolerance, overlapping and uncertain transitions can feel threatening. The lack of clarity can produce anxiety, rigid thinking, or even backlash.

Living by Other People’s Rules

This tension is familiar across the lifespan. Entering the workforce, many young adults discover that workplaces are governed by norms and hierarchies shaped by older generations. The rules of belonging, from communication styles to work ethic to career expectations, often feel strange and inscrutable. Over time, individuals may find their own cohort rising into positions of influence. But later in life, the cycle reverses. Older adults re-enter environments increasingly governed by younger norms, from digital platforms to cultural spaces, where they must once again adapt to rules that don’t feel like their own. At both ends of the lifespan, individuals often have the least autonomy and the least power to impose the norms of their own age group. Adolescents adapt to adult rules. The elderly adjust to the changes brought in by younger generations. It’s a process of constant renegotiation and change. Whenever things seem to be settling in, life events and global changes can upend everything again.
The overall process of continuous change is nothing new in the context of history, but from an individual perspective the challenges feel entirely new.

Why the Present Feels So Heavy

What makes periods of transition so emotionally charged is that they always involve a struggle over power and legitimacy. Established ways of doing things must continually prove their relevance to be adopted by newcomers, while emerging groups test different ideas, pushing the boundaries of what should endure and what should be reformed. Psychologists studying identity threat show that when long-held norms are challenged, groups often respond with defensiveness, nostalgia, or outright hostility. Those in power may exaggerate the risks of new approaches not only because they are unfamiliar but because they unsettle the group’s social position or role. When the costs of maintaining old structures fall on some while the benefits accrue to others, resentment can slowly build up.

Workplaces as a crucible of change

Workplaces are a good example of this. Ideally, succession planning provides a bridge between generations: experienced leaders pass down not just technical knowledge but also the tacit know-how, networks, and cultural memory that keep an organization functioning. Done well, it is gradual, deliberate, and reciprocal. The outgoing group retains dignity and respect by sharing what they have built, while the incoming group gains confidence by learning within a supportive structure. Both sides invest in a future where their contributions endure. Changes are navigated through interpersonal networks instead of crises and knee-jerk responses. In reality, succession is often messy. Urgent demands of the present overwhelm long-term preparation. Transitions happen suddenly, with little time for mentoring, documentation, or shared reflection. New groups are left to reinvent processes from scratch; older groups feel discarded or unappreciated. Instead of continuity, organizations get torn apart. When resentment builds, people may be eager to discard inherited structures and processes wholesale, especially when they were never involved in sustaining them or in preserving their value. This is one of the reasons times of transition can feel so difficult. Not only are rules and norms shifting, but the difficult work of handing them over is often avoided or resisted. The demands increase without sufficient resources (psychological, social, material) to support the process of change. Without deliberate, well-managed succession, the weight of present demands feels more intense, as each new group scrambles to rebuild what might have been passed on more gracefully and with less resulting damage.

Resistance to change

Decline is inevitable. Change can bring renewal, but when people cling to power long after they can wield it effectively, they often accelerate their own downfall. Bound by their own rules and hierarchies, leaders may resist change even as their capacity to shape those around them diminishes. Increasingly rigid or manipulative methods only hasten collapse. A long-standing leadership structure may still appear to wield power, but without moral authority or genuine respect its rules become hollow: observed outwardly, but ignored whenever following them is inconvenient.

The weight of now

Presentism and its associated psychological biases mean we tend to see our own era as uniquely consequential, more turbulent or unstable than anything that came before.
In some sense, this is inevitable: the crises we face today are the ones we can most immediately feel. Recognizing presentism doesn’t make current crises less real, but it offers perspective. We are not the first to feel this way, and we will not be the last.

Source of the article

Essence is fluttering

As Zhuangzi saw, there is no immutably true self. Instead our identity is as dynamic and alive as a butterfly in flight

‘Most people are other people,’ wrote Oscar Wilde. ‘Their thoughts are some one else’s opinions, their lives a mimicry, their passions a quotation.’ This was obviously meant as a criticism, but what exactly is the criticism? Most people are other people, in a different sense to how Wilde meant it: the vast majority of people are not me. The enormous size of this majority – billions to one – guarantees that there will be somebody better than me at anything I can think of. If I make a dress for the first time, I am wise to follow a pattern. If I cook a meal for the first time, I am wise to follow a recipe. This is (as far as I know) my first time living as a human being, so why wouldn’t it be wise for me to emulate a successful model of living, especially when there are so many candidate models, past and present? When Wilde was writing, literary culture had reached the pinnacle of Romantic individualism. In that culture, it was obvious what’s wrong with being other people: doing so is a betrayal of your true self. Each of us was thought to possess a unique, individual identity, sewn into the very fabric of our being. Walt Whitman celebrated ‘the thought of Identity – yours for you, whoever you are, as mine for me,’ and defined it as: ‘The quality of BEING, in the object’s self, according to its own central idea and purpose, and of growing therefrom and thereto – not criticism by other standards, and adjustment thereto.’ In the 20th century, the assumption would be famously challenged, for example by Jean-Paul Sartre, who proclaimed that humans come into existence with no definite identity – no inborn ‘central idea and purpose’ at all: ‘man is nothing else but that which he makes of himself’. This is a logically tricky point, since it’s unclear how something devoid of all identity could ‘make itself’ into anything. We can’t suppose, for example, that it makes itself according to its whims, or inclinations, or desires, since if it has any of those then it already has an identity of some sort. This gets us into a quandary. Simone de Beauvoir – Sartre’s partner in philosophy, life, and crime – responded to it by admitting that self-creation can work ‘only on a basis revealed by other men’. But popular culture has by and large responded by retreating uncritically to the old Romanticism. Apple’s Steve Jobs advised a graduating class at Stanford not to ‘let the noise of others’ opinions drown out your own inner voice’ and to ‘have the courage to follow your heart and intuition’, which ‘somehow already know what you truly want to become’. Advertisements for shampoo and travel money apps advise you to dig deep inside and find the True You. The question of how we each came to be pre-programmed with this unique true identity, this articulate inner voice, is left aside, as is the more troubling question of how the advertisers can be so confident in betting that your true self is going to love their products. How would things look, philosophically, if we cleared this Romantic notion out of the picture? I think we can get a sense of it by looking at the great philosophical tradition that flourished before the formation of the Qin dynasty in what is now China.
The most well-known philosopher from this tradition is Confucius (Kong Qiu, 孔丘), classically believed to have lived from 551 to 479 BCE. In reality, books ascribed to him were probably written by multiple authors over a long period. The most famous of these, the Analects, propounds an ethical ideal based on emulating admirable examples. The philosopher Amy Olberding wrote a whole book devoted to this topic, Moral Exemplars in the Analects (2012). For Confucius, being ‘other people’ is precisely what you should be aiming at – as long as you emulate praiseworthy people like the great sage-kings Yao and Shun or indeed Confucius himself. The objection that this would betray your true inborn identity doesn’t come up. The idea that we each have an individual, inborn, true identity doesn’t seem to appear in this tradition. What the tradition does recognise is role-identities. Confucius was concerned that these were being lost in his time. In the Analects, he declares that his first act in government would be to ‘rectify names’ (zheng ming, 正名). This is a complex concept, but some light is shed on its meaning by another passage, in which he is asked about zheng (政) – social order – and replies: ‘Let the lord be a true lord, the ministers true ministers, the fathers true fathers, and the sons true sons.’ Confucius feared the loss of the traditional zheng, which he associated with the recently collapsed Western Zhou Kingdom, leading to a social chaos in which lords, ministers, fathers, and sons no longer played their appropriate roles. ‘Rectifying names’ could mean ensuring that people live up to their names – not their individual names but the names of their social role or station (ming, 名, can be used to mean ‘name’ but also to denote rank, or status). To the question ‘Who am I?’, Confucius would like you to reply with your traditionally defined social role. As to how you should play that role, the ideal would be to emulate a well-known exemplary figure who played a similar role. Under the Han dynasty, which adopted Confucianism as a sort of official philosophy, many catalogues of role-models were produced, for instance Liu Xiang’s Traditions of Exemplary Women (Lie nu chuan, 列女傳), full of models of wifehood, motherhood, ladyhood, etc. Confucius’s philosophy was opposed in his era, but not by Romantic individualists. On one side was Mozi (or Mo Di, 墨翟), who proposed that we should take nature as a model rather than past heroes. For this to make sense, nature had to be anthropomorphised into having cares and concerns – a comparison could be made with the ancient Greek and Roman Stoics. On the other side was Zhuangzi (Zhuang Zhou, 莊周), perhaps the strangest philosopher of any culture, and a central focus of my book, Against Identity: The Wisdom of Escaping the Self (2025). Zhuangzi (again, the writings ascribed to him – called the Zhuangzi – were probably written by multiple authors) rejected Confucian role-conformism. He argued that you shouldn’t aim to be a sage-king, or an exemplary mother, or any other predetermined role-identity. You shouldn’t aim, in Wilde’s terms, to be other people. In our highly individualistic culture, we can’t help but expect this line of thinking to continue: just be yourself! But this is not what Zhuangzi says. Instead, he says: ‘zhi ren wu ji (至人無己),’ translated as: ‘the Consummate Person has no fixed identity’ or ‘the ultimate person has no self’.
The ethical ideal is not to replace a conformist identity with an individual one. It is to get rid of identity altogether. As the philosopher Brook Ziporyn puts it, ‘it is just as dangerous to try to be like yourself as to try to be like anyone else’. Why is it dangerous? In the first place, attachment to a fixed identity closes you off from taking on new forms. This in turn makes it difficult for you to adapt to new situations. In her book Freedom’s Frailty (2024), Christine Abigail L Tan puts it this way: ‘if one commits to an identity that is fixed, then that is already problematic as one does not self-transform or self-generate.’ Borrowing a term from psychology, we could call this the problem of ‘identity foreclosure’. The American Psychological Association defines ‘identity foreclosure’ as: premature commitment to an identity: the unquestioning acceptance by individuals (usually adolescents) of the role, values, and goals that others (eg, parents, close friends, teachers, athletic coaches) have chosen for them. But the radical message of the Zhuangzi is that it can be just as dangerous a ‘foreclosure’ to accept the role, values and goals that you have chosen for yourself. Doing so cuts you off from the possibility of radically rethinking all of these under external influences. Indeed, it drives you to resist external influences, for a simple reason. We have a strong survival instinct – an urge to continue existing. But continuing to exist means retaining the form that makes you yourself and not something else. Turning into a corpse, obviously, doesn’t count as surviving, but neither would turning into something too radically different from what you fundamentally are. I fear waking up tomorrow with my body, memories and personality replaced by those of somebody else about as much as dying in my sleep; indeed, it might well count as me dying. Surviving means remaining the same in crucial, definitive respects. But this means that the more narrowly you define yourself, the more defensive you will feel against external influences that might change you. The term ‘identity’ lends itself naturally to this sense of self-definition. It comes from the Latin identitas, with the root idem, which means ‘same’. A common expression is unus et idem, ‘one and the same’. Your identity is whatever must stay the same in order for you to remain you. A narrow identity makes heavy demands of consistency, upon you and also upon the wider world. If your identity is bound up in being a harness-maker, then your survival (under that identity) requires not only that you keep working at making harnesses; the harness industry must also stay viable. If the industry dies out, for example with the coming of automobiles, then you will find yourself in a desperate identity crisis, panicked at the idea that nothing you can now be will count as what you had hitherto recognised as yourself. Hopefully you will find new things by which to define yourself. But you could have saved yourself the anguish by not binding up your identity with something so specific in the first place. Now suppose that your identity is bound up with certain religious or political beliefs. In that case, your survival instinct will be put on alert anytime anything threatens those beliefs. The more convincing an argument against them seems, the less able you will be to hear it. The more appealing an alternative seems, the harder you will push it away – for fear of changing and losing yourself in the change. 
In this way, foreclosing on a fixed identity, even one that you have chosen, will push you to insulate yourself from external influences. We can think of an example from Sartre: ‘the attentive pupil who wants to be attentive exhausts himself – his gaze riveted on his teacher, all ears – in playing the attentive pupil, to the point where he can no longer listen to anything.’ But the world is always changing, in ways that we cannot predict. When attachment to a fixed identity drives us to close ourselves off from external influences, we lose input that might have been very valuable in guiding us through change and uncertainty. There are many stories of Australian colonists rejecting the Indigenous knowledge that could have helped them survive in a harsh and unknown environment, due to their excessive attachment to an idea of themselves as scientifically and racially superior. This idea was incompatible with the suggestion that they might have something to learn from those they saw as ‘naked savages’. There is another reason that trying to be yourself, in the sense of some fixed identity, is dangerous. To appreciate it, we must ask: where did you get the idea of that fixed identity? Remember that we are now imagining ourselves in a cultural context devoid of the Romantic notion that each of us is born with an inborn self. The compiler of the earliest surviving text of the Zhuangzi, Guo Xiang (252-312 CE), also provided a commentary, which outlines a very different notion of self. Guo’s commentary draws out elements in the Zhuangzi that criticise the Confucian ethic of model-emulation, which Guo calls following ‘footprints’ (ji, 跡). For example, Guo comments on one passage as follows: ‘Benevolence and righteousness naturally belong to one’s innate character [qing, 情], but beginning with the Three Dynasties, people have perversely joined in such noisy contention over them, abandoning what is in their innate characters to chase after the footprints of them elsewhere, behaving as if they would never catch up, and this too, has it not, resulted in much grief!’ The reference to ‘innate character’ here might mislead us into reading this passage in a Romantic way – a familiar celebration of being your true, inborn self rather than following in the footsteps of others. But when Guo uses terms like this, or ‘original nature’ (benxing, 本性), he appears to mean something quite different. Tan explains: ‘original nature (benxing, 本性) does not actually mean that it is unchanging and fixed, or that it is inborn, for it simply means unfettered.’ Zhuangzi’s view, as interpreted by Guo, is that when we seek a definite identity, we betray our true nature as fundamentally fluid and indeterminate. We end up pursuing some external model of definiteness. Even if the model is one of our own devising, it is external to our true indefinite nature. One story in the Zhuangzi tells of a welcoming and benevolent but faceless emperor, Hundun. Hundun has a face drilled into him by two other emperors who already have faces of their own. As a result, he dies. The story suggests that fixed identity always comes to us from the outside, from others who have already attached themselves to fixed identities and drive us to do the same, not usually by drills but rather by example.
This peer-driven attachment to identity kills our fundamental nature, which is formless and fluid like Hundun (his name, 混沌, means something like ‘mixed-up chaos’, and each character contains the water radical signalling fluidity). Our true Hundun-nature is the capacity to take on many forms without being finally defined by any of them. This encounter with a distant philosophical culture liberates us to ask questions we might not have otherwise thought to ask. Imagine if we hadn’t been conditioned to believe that our true self is something fixed and inborn. Would we have inevitably found our way to this idea? Or might we instead, like Zhuangzi, have supposed our true nature to reside in boundless suppleness and fluidity, upon which any definite identity can only be a foreign imposition? What would our culture look like if that was the dominant idea we all had of our true self? Would it reduce human life to a meaningless chaos of wandering without purpose? Or could it, perhaps, be more peaceful, more adaptable, and more exciting? I am inclined towards the latter position. Admitting my confirmation bias, I see examples everywhere of how identity holds us back. In a complex and unpredictable world, nations need more than ever to learn from each other. Instead they are closing their doors to foreigners and going into international dialogues with megaphone on and earplugs in. In modern democracies, people vote for who they are, not what they want, as Kwame Anthony Appiah puts it, leading to policies that pit identity groups against each other, rather than pursuing collective benefits – or indeed even real benefits to any one group. Information technology puts the whole world at our fingertips, yet people remain shockingly incurious about anything outside their own narrow cultural sphere – as if fearful that exposure to too much difference will detach them from their treasured identity. And even when current patterns are shown to be unsustainable, we find it difficult to change them, due to our identities becoming somehow bound up in them. A personal example of the latter is my experience of grief. As I slowly lost my father to Alzheimer’s disease, I realised that the terrifying part of grief is the feeling of not only losing a loved one but losing yourself. I struggled to imagine myself without a figure who shone out warmth in my earliest memories. I was stuck in a desperate, hopeless compulsion to yank back the past into the present. It was my father’s own courage in the face of his much more direct loss of identity that taught me that I too could adapt, learning to accept and even appreciate the complete transformation of myself. The most famous story of the Zhuangzi is the Butterfly Dream. Zhuangzi awakens from a dream, in which he was a butterfly fluttering freely. He doesn’t know if he is Zhuangzi, who dreamt he was a butterfly, or a butterfly now dreaming of being Zhuangzi. This story puts Zhuangzi in contact with ‘the transformation of things’ – a reality in which identities are always fluid and never fixed. As Kuang-Ming Wu points out in The Butterfly as Companion (1990), the butterfly’s ‘essence is fluttering’. Hundun and the butterfly are symbols of an inner fluttering – an inner indefiniteness that lies deeper in us than any fixed identity, whether chosen or imposed.
The demand to be true to ourselves – as individuals, as communities, as churches, parties, cities, nations – leads us to manhandle the world. Keeping ourselves the same means keeping things in the right shape to provide the context for our self-definition. Others are trying to keep things in the right shape to suit their self-definitions. The result is a world full of strife and devoid of progress. Perhaps it is time to seek our inner fluttering instead.

Source of the article

GOATReads: Literature

Stephen Dedalus in A Portrait of the Artist as a Young Man

His soul had arisen from the grave of boyhood, spurning her grave-clothes. Yes! Yes! Yes! He would create proudly out of the freedom and power of his soul, as the great artificer whose name he bore, a living thing, new and soaring and beautiful, impalpable, imperishable

Throughout A Portrait of the Artist as a Young Man Stephen Dedalus is persistently portrayed as the outsider, apart from the society he and his family inhabit, connecting with no-one and seeking solitude and isolation at every turn. Does this self-imposed exile lead to, or directly influence, his artistic awakening? This essay will examine (both thematically and stylistically) Stephen's alienation from the traditional voices of authority in his life and explore how this impacts upon his budding artistic talent. A Portrait of the Artist as a Young Man was Joyce's first published novel, written in neutral Switzerland but published in New York in 1916. Europe was at war and Michael Collins had been taken prisoner during the Easter Rising in Dublin. This novel is therefore bound up with an Irish history rich in rebels and freedom fighters. A real history was raging in Joyce's homeland, where the Fenians were fighting against English rule, the oppressive landlord system and, eventually, the Catholic church in hock to the English rulers. The novel, however, as the title suggests, is not a story of revolutionary politics but of the quiet but dogged rebellion of a young man in search of his artistic voice. From the opening pages the reader realises that this is no traditional narrative. There is no safe third-person distance from the main protagonist; the reader never escapes Stephen's perception of events. The style is direct and visceral and reflects, in its immediacy, the disjointed manner in which memories are recollected and thoughts enter the protagonist's imagination. The effect is claustrophobic but also highly instructive as the reader walks beside Stephen on his journey of self-discovery. The readers, discerning as they are, will groan at some of Stephen's poetry and mawkish ideas but they cannot deny that they are seeing what Stephen sees and experiencing his life first hand. Stephen starts as an object - Baby Tuckoo - in his father's story of his early years and is thus without his own identity. Later, at Clongowes, he is either gripped with embarrassment as he fails to connect with his peers or speechless at a family Christmas dinner as debate and anger rage around him. He is isolated, associating only with the sounds of words (belt, iss, suck) and other stimuli. He doesn't understand the schoolboy argot and his consequent victimisation is all too predictable as his peers react with typical schoolboy nastiness to a boy who doesn't fit in. In protesting his palm-whipping, however, Stephen not only wins back the respect of his peers but also performs his first act of rebellion or independence. As a young boy, this apartness appears too real and solid to be something he just grows out of or learns to subsume and he turns to literature as a means of escape. It is no mere chance that Stephen enjoys The Count of Monte Cristo, the story of choice for many schoolboys seeking escape from the imprisonment of school and the cruelty of their peers.
Reality eventually encroaches upon Stephen's internal reveries and teenage angst: 'In a vague way he understood that his father was in trouble and this was the reason why he himself had not been sent back to Clongowes' (p.64). Stephen's selfish detachment persists throughout the book as external events of great import to those who love him drift in and out of his consciousness without having any real direct impact. This is of course if his internal dialogue is to be believed. Thus we see Stephen isolated from his peers as his family struggles from one property to another and his father from pub to pub seeking work. He can find no real connection with his father. The distance between them can be seen when, having performed his act of rebellion at school concerning his unjustified palm-whipping, Stephen hears his father recalling a conversation with the Jesuit to whom Stephen protested: 'Shows you the spirit in which they take the boys there. O, a Jesuit for your life, for diplomacy!' (p.73). The bluff manner in which his father refers to the incident couldn't be further from Stephen's own tortured experience. It is around this time also that Stephen commences a period of whoring. Whilst this sexual engagement with prostitutes requires no emotional attachment, this interlude, coming as it does at the conclusion of chapter 2, signifies the nadir of Stephen's path away from Jesuit and familial authority. But what of Stephen's artistic yearnings? At this stage there is no discernible development of a poetic voice, but Stephen does feel some shadowy intimation of otherness or the transcendental world: 'A vague dissatisfaction grew up within him as he looked on the quays and on the river and on the lowering skies and yet he continued to wander up and down day after day as if he really sought someone who eluded him' (p.67). Stephen is in physical exile and, whilst his family is cast adrift, he is groping for an artistic expression which eludes him. A particular scene, remembered during a conversation with his friend, portrays his directionless spirit: 'The old restlessness had again filled his breast as it had done on the night of the party, but had not found an outlet in verse. The growth and knowledge of two years of boyhood stood between then and now, forbidding such an outlet' (p.77). It is the adolescent games of torment and humiliation coupled with a rigid Catholic approach to literary criticism ('In any case Byron was a heretic and immoral too' (p.81)) that prevent the genuine artistic outlet Stephen seeks. Visions remain formless and his isolation from his peers prevents him from relating to them: 'Do you use a holder? - I don't smoke, answered Stephen - No, said Heron, Dedalus is a model youth. He doesn't smoke and he doesn't go to bazaars and he doesn't flirt and he doesn't damn anything or damn all' (p.76). Were he able to voice these half-formed feelings amongst like-minded young men, perhaps he would feel less isolated. The strict Catholic nature of their education and the widening social gap between him and his peers brought about by his father's downfall cement his alienation and otherness, so his artistic yearnings remain 'monstrous reveries' (p.90) without any real articulation or development. By the conclusion of chapter 3 Stephen is in a state of cold, lucid indifference, alienated from his schoolmates and lost in a world of meaningless sexual encounters.
Yet by the start of chapter 4 he is well on his way back to a state of grace as he takes whole-hearted part in a Catholic retreat organised by Belvedere College. This passage of writing is tedious and its repetitive, didactic style reflects Stephen's utter immersion in the Catholic faith. His fate is almost sealed as he is invited to take Holy Orders at the conclusion of his devotions. Stephen is seduced at first but his by now instinctive resistance to any form of belonging again kicks in: 'At once, from every part of his being unrest began to irradiate' (p.161). His decision to reject the priesthood is a serious one which Stephen follows with conviction, even refusing to perform Easter Duties for his mother. By now, Stephen is grasping his own destiny and taking positive steps towards a mature poetic voice. Thus the phrase 'a day of dappled, seaborne clouds' is taken by Stephen and woven into his own experience: 'The phrase and the day and the scene harmonised in a chord' (p.166). These lines mark Stephen out as the young artist he has been aspiring to be, referring the poetic to the mundane in a structured and considered manner. At the same time he becomes aware of the symbolic nature of his surname and the mythical character from which it is taken. Dedalus was the great artificer and creator of Icarus' wings, which were themselves a symbol of escape. As the Ovid quote at the start of the novel states, it was Dedalus who '...altered/improved the laws of nature'. By the conclusion of chapter 5, therefore, we see Stephen the creator who chooses exile rather than the daring Icarus-like youngster seeking escape but doomed to failure: 'His soul had arisen from the grave of boyhood, spurning her grave-clothes. Yes! Yes! Yes! He would create proudly out of the freedom and power of his soul, as the great artificer whose name he bore, a living thing, new and soaring and beautiful, impalpable, imperishable' (p.170). Stephen's rejection of the environment that shaped him is now complete and his diary entries at the conclusion of the novel show a purposeful young artist seeking expression in Europe.

Source of the article

GOATReads: History

Masters and Mamluks: Islam’s Slave Soldiers

The military elite of the medieval and early modern Muslim world consisted of men who had been captured and forced into service. But to what extent were the janissaries and their predecessors subject to slavery?

‘I see that those on my side have been routed. I fear they will abandon me. I do not expect them to return. I have decided to dismount and fight by myself, until God decrees what He wants. Whoever of you wishes to depart, let him depart. By God, I would rather that you survive than that you perish and be destroyed!’ They replied: ‘Then we would be treating you unjustly, by God! You freed us from slavery, raised us up from humiliation, enriched us after we were poor and then we abandon you in this condition! No, we will advance before you and die beside the stirrup of your horse. May God curse this world and life after your death!’ Then they dismounted, hamstrung their horses, and attacked. This is an excerpt from al-Tabari’s universal history describing the exchange between an emir (commander) and his military slaves and freedmen after the tide of battle had turned against them during the Abbasid civil war (811-819). The emir’s forces had fled and he was left only with his slaves, who refused to abandon their master even though he urged them to save themselves. This exchange exemplifies the loyalty of military slaves and freedmen, a characteristic that made them the most elite and reliable soldiers of the medieval and early modern Muslim world. Over a period of almost 1,000 years, military slaves and the institution of military slavery dominated premodern Muslim polities. From their rise to prominence in the early ninth century under the Abbasid caliph al-Mutasim (r. 833-842) to the disbandment of the Ottoman janissary corps in 1826, military slaves formed the elite core and backbone of almost every Muslim army. Military slaves also rose to positions of power that enabled them to dominate the politics, economics and cultures of the societies in which they lived.

The beginning of a tradition

During the early period of the Islamic empire, following the Prophet’s death in 632, armies were primarily composed of Arab warriors, with slaves and freedmen (often referred to as mawali or clients) serving mainly as bodyguards and retainers of the caliphs and commanders. The situation changed with the overthrow of the Umayyads by the Abbasid revolution in 750. Mawali, especially Iranians, started to play a bigger role in both the army and the administration. But it was not until the end of the Abbasid civil war between the sons of the caliph Harun al-Rashid, who died in 809, that the caliphate’s military was fully transformed, with a professional standing army of slave soldiers at its core. Al-Mamun, the victor in the civil war, had depended on a versatile and mobile cavalry army of eastern Iranians and Turks that defeated the much larger forces mustered against him by his brother, al-Amin. Al-Mutasim, al-Mamun’s other brother and successor, took things further and reformed the military during his reign. Even before his ascent to the throne, al-Mutasim created a private army primarily of Turkic slaves purchased from the Samanids, a semi-autonomous Iranian dynasty that ruled the eastern parts of the caliphate. The Samanids were in direct contact with the Turks who inhabited the steppe. The two sides often raided one another, resulting in large numbers of captive Turks entering the caliphate.
Upon his accession in 833, al-Mutasim disbanded the old army, which had been dominated by the Arabs and Iranians, and removed them from the imperial payroll, relegating them to the role of auxiliaries. He replaced these troops with his Turkic slave soldiers, vassals from eastern Iran and mercenaries. Why? The need for reliable, loyal and skilled soldiers is one of many reasons that the rulers of the Muslim world adopted military slavery. The loyalties of the warriors who formed the early Muslim armies often lay with their tribes or the regions from which they hailed. Often, these men did not wish to leave their homes, land and families to go on long and distant campaigns. Furthermore, four major civil wars were fought during the first two centuries of Islam, which threatened to split the Muslim world along political, regional, tribal, factional and, later, sectarian lines. In these conflicts, acts of treachery and betrayal were common. It was during the civil war between his brothers, al-Amin and al-Mamun, that al-Mutasim witnessed first-hand how his older brother al-Amin was abandoned by his forces when the tide turned against him. Recruiting and training slave soldiers with tight bonds of solidarity to the master and to one another was the solution to this problem. Slave soldiers were usually from beyond the boundaries of the caliphate. They were either prisoners of war or purchased from traders, and many were acquired at a young age. They were educated, raised and trained in their patron’s household and became members of his family. As a result, strong bonds of loyalty were formed between the slaves and their master, and among the slaves themselves. Being foreigners, the slaves looked to their master for their pay, rewards and wellbeing; the master depended on his slaves to protect him and keep him in power.

Imprisoned or purchased

By the late ninth century, the caliphs had lost much of their power. Although the caliphate still existed, it had fractured into fragments ruled by autonomous dynasties. But these polities modelled their armies on the caliphate’s military, recruiting military slaves as elite soldiers on whom they depended. They came from varied ethnic backgrounds and included Turks, Greeks, Slavs, Africans and Mongols, their origins and numbers varying depending on geographic proximity. North African dynasties had larger numbers of African slave soldiers, who were brought across the Sahara, and Slavic slave soldiers, referred to as saqaliba, who were purchased from Frankish and Italian slave merchants from across the Mediterranean and the Iberian frontier. There were large numbers of saqaliba in Islamic Spain, and East Africans and Abyssinians found their way into the militaries of Syria and Iraq through the slave trade across the Red Sea and Egypt. Large numbers of Turks and others from the steppes of Inner Eurasia filled the ranks of the armies of the dynasties that ruled the region spanning Central Asia to Egypt. By the 16th century, the main sources of military slaves for the two most powerful Muslim empires, the Ottomans and the Safavids, were the Balkans and Georgia, respectively. Regardless of geographic proximity, most medieval Islamic polities sought to acquire Turkic slave soldiers. During the Middle Ages the Turks were seen as the most martial of all peoples and became the elite soldiers of most Muslim armies. They were considered a tough and hardy martial people, uncorrupted by civilisation and urban life.
Hailing from the Altai region, Turkic tribes inhabited large portions of the Inner Eurasian steppes, which brought them into direct contact with the Muslims on their northern and eastern frontiers. These pastoralist nomads had to survive in harsh environments; their tribes raided one another for livestock and competed for grazing grounds. They also raided and fought the sedentary peoples around them. Turkic children learned to ride horses and use weapons, specifically the bow. As slave soldiers in the medieval Muslim world, in which mounted warriors dominated the battlefield, the Turks, therefore, served as elite heavy cavalry, forming the dominant strike force of most Muslim dynasties until the rise of the Ottomans and the creation of their elite janissary corps, composed of infantrymen. The Islamic institution of military slavery produced some of the period’s best soldiers. Upon being purchased, often at high prices, the slaves were attached to their master’s household. They underwent years of education and rigorous training, which included riding horses as individuals and in grouped formations, archery, using melee weapons, such as swords, lances, maces, daggers and axes, both on foot and on horseback, and wrestling on horseback. The acquisition and training of military slaves was expensive and involved investment in time. Most often it was only the ruling elite that could afford to recruit them in large numbers. Often, the slaves were emancipated upon the completion of their military training.

Vocabulary of slaves

There were several terms designating various types of slaves. One of the earliest, used between the ninth and 12th centuries, was ghulam, meaning ‘boy’ or ‘youth’. This is not surprising, because a large number of the slave soldiers were either captured or purchased when they were still young boys. This term was eventually replaced, by the late 12th and early 13th centuries, with mamluk, ‘one who is owned’. Both of these terms refer to a specific group of military slaves who were fair-skinned and fought on horseback. The terms abd, abid and sudan all refer to African slave soldiers, who were regarded as inferior to their mamluk counterparts. The saqaliba were military slaves of mainly Slavic origin, who served the Umayyad caliphate of Islamic Spain and some of the North African polities. The term kul was used in the Ottoman period to refer to the sultan’s slaves and means ‘slave’ or ‘servant’. Finally, kapi-kulu, meaning ‘slaves of the Porte’, referred to the household troops that formed the Ottoman sultans’ standing army and included the janissary infantry corps. Although the system of training military slaves on a grand scale was unique to the Muslim world, there were Iranian and central Asian practices that may have provided the foundation upon which the Muslims built. The Sogdians, an Iranian people who lived between the Amu Darya and Syr Darya Rivers in Central Asia (in modern Tajikistan and Uzbekistan), were heavily engaged in trade with both the east and the west. They gathered children and trained them as military slaves to defend their city states and protect their caravans. The Sasanians, the last great Iranian empire before the Islamic conquests, also enlisted prisoners of war and slaves into their military, whom they settled on their frontier regions, which they defended in exchange for land and pay.
Incidentally, it was the Samanids, a semi-independent Iranian dynasty ruling the eastern parts of the caliphate (including Sogdia), who first created a corps of Turkic slave soldiers. It may have been their sojourn in the east during the Abbasid Civil War that prompted both al-Mamun and al-Mutasim to adopt this tradition of training and using slave soldiers. What made Islamic military slavery different was its institutionalisation, the slaves’ elite status within society, and their proximity to, and influence on, the central ruling powers. There were other societies that used slaves for war during various eras. The Spartans sometimes mobilised the Helots, their servile population, for war. Herodotus claims that there were Helots among the Greek casualties of Thermopylae and that at the Battle of Plataea every one of the 5,000 Spartan hoplites was accompanied by seven lightly armed Helots. The Romans enlisted large numbers of slaves to replenish the ranks of their legions after suffering several defeats by Hannibal during the Second Punic War. The European colonial powers also recruited slaves in their colonies during times of war in the Americas and Africa. Slaves also participated in the fighting during the American War of Independence and the Civil War. But in the Muslim world, slave soldiers formed socio-military elites and, in some cases, even rose to form the ruling class. Slaves were enlisted into the military in other societies during emergencies, such as civil wars and after military defeats that left the ranks of the regular army depleted, or when there was a shortage of manpower, but they had little or no social standing or influence.

Wealth and social mobility

Unlike slaves in other societies, military slaves were paid handsomely for their services. They received stipends and salaries from the central treasury. Military slavery was also one of the means through which one could acquire upward social mobility in an age when climbing the social ladder was rare. The most intelligent, promising, loyal, brave and capable military slaves were promoted to become officers and generals in the army, to government posts and to positions in the ruler’s household and inner circle. Posts such as royal arms bearer, cup bearer, holder of the royal inkwell, keeper of the hunting dogs, stable master, master of the hunt and chamberlain may not seem impressive, but they were all held by senior officers. Such positions indicated the slaves’ proximity to the master, the intimate relationship they shared and the trust that the patron had for the men who served him. Most of the Ottoman viziers were the brightest cadets selected from among the boys collected for the janissary corps. Slave soldiers who were promoted to become officers in the army and government officials became wealthy and powerful. In addition to receiving their pay they were often given parcels of land from which they drew an income. They amassed huge amounts of wealth in the form of gold, land, palaces, horses and livestock. These commanders, who had started their careers as slaves, then recruited slave soldiers of their own. Some slave generals grew so powerful that they were able to challenge their masters and, in some cases, overthrow them and establish empires and dynasties of their own. There are several examples of slave soldiers turning on their masters. In 861 the Abbasid caliph al-Mutawakkil was murdered by his Turkic guards while drinking with some of his companions.
The Turks were members of the army that his father, al-Mutasim, had created. In the east, Alp Tegin, a Turkic slave of the Samanids, had grown too powerful for his master’s comfort. When the Samanid prince divested him of his rank and possessions and sent an army to arrest him in 962, Alp Tegin fought his master and defeated the force sent against him. He then fled to and conquered the city of Ghazna (in modern Afghanistan) with his own slave soldiers. From there he and his successors created the Ghaznavid Empire (977-1163) that eventually swallowed up the domains of their former masters. Similarly, in 868 Ahmad ibn Tulun, a member of the Abbasids’ Turkish guard, was sent to Egypt as its governor, but he took complete control of the treasury and created a new army that was loyal to him. He and his descendants managed to maintain their independence until 905. With the collapse of the Umayyad Caliphate of Cordoba in Islamic Spain in 1031, several successor states emerged, known as the Taifas. A number of these principalities were established and ruled by the saqaliba and included the Taifas of Valencia, Denia and Almeria. Perhaps the best example of military slaves in power is the Mamluk sultanate of Egypt and Syria, established in 1250, which lasted until 1517. The last effective Ayyubid sultan, al-Salih Najm al-Din Ayyub, created a new army composed primarily of mamluks after he rose to power. He had previously been betrayed by his troops during his struggle with other family members for the throne and it was only his mamluks who remained loyal to him. He treated them well, paying them handsomely and promoting many to high positions. When he died, his successor, Turanshah, did not share his father’s affection for them and he made it clear that he was going to disband them and have their leaders killed. Upon learning of the new sultan’s intentions, the mamluks killed him and established their own regime, rather than returning to their homelands. They continued to refer to themselves as mamluks, which they considered more honourable than being a mere freeborn subject of the caliph.

Loyalty paradox

Although the institution of military slavery produced excellent and loyal elite soldiers, it had its weaknesses. Loyalty did not necessarily pass on to a ruler’s successor, who was sometimes deposed and killed. Successors who managed to establish themselves on the throne often purged their predecessor’s slaves, replacing them with their own. The Ottoman case was exceptional, because the army was loyal to the dynasty and not to individual sultans. Riots, mutinies and rebellions were common, the main trigger being late or unforthcoming pay or mistreatment. Mardavij ibn Ziyar, a Northern Iranian prince, soldier of fortune and the founder of the Ziyarid dynasty, for example, was murdered by his Turkish slave soldiers due to his mistreatment of them. Similarly, the great Mamluk emir, Yalbugha al-Umari, was murdered at the peak of his power in 1366 because of his harshness and the severe punishments he meted out to those who fell short of his expectations. Another weakness of the institution of military slavery concerned manpower and cost. That most military slaves were foreigners, as well as the time and money it took to train them, made them valuable and costly assets which were difficult to replace. Military slaves got married and had families; however, their descendants were born free as Muslims.
Having grown up in the towns and cities of the Muslim world, they were viewed as being less martial than their fathers and not suitable to replace them. Fresh, tough and ‘uncorrupted’ recruits were preferred, brought in from the steppes or mountainous regions such as the Caucasus.

Battlefield dominance

Despite the weaknesses of military slavery, the institution produced some of the best soldiers of the medieval and early modern periods. The performance of the ghulams, mamluks and janissaries on the battlefield is a testament to their superiority over most of their counterparts. Mahmud of Ghazna, the greatest of the Ghaznavid sultans, launched several campaigns into what is now Pakistan and northern India between 1001 and 1024. His forces were almost always heavily outnumbered, but they were superior in training and equipment. At the Battle of Manzikert in 1071, it was the Seljuk sultan Alp Arslan’s heavy ghulam cavalry that dealt the death blow to the Byzantine army after it had been weakened by skirmishing light cavalry. The Mamluk sultanate’s army, composed predominantly of mamluk soldiers, defeated the hitherto undefeated Mongols, halting their westward advance at the Battle of Ayn Jalut, and subsequently defeated four other much larger Mongol invasions of their territory. The Mamluks also defeated Louis IX’s Seventh Crusade and put an end to the Crusader states in the Levant.

Slaves or not?

Were these soldiers slaves as we understand the term? It is true that many practices in the Muslim world fit our understanding of slavery; military slavery is not one of them. Until the proliferation of effective gunpowder weapons in the late 16th and early 17th centuries, military slaves dominated battlefields and rose to dominate the armies, politics and societies of the regions where they were employed. Some of the wealthiest and most powerful individuals in Muslim societies were military slaves who had risen to become generals, governors and ministers. In some cases, they even rose to be princes and sultans and ruled in their own right. Slaves recruited through a military institution became a political and social elite, which dominated and ruled the Muslim world for much of its history.

Source of the article

Witty wotty dashes

Doodles are the emanations of our pixillated minds, freewheeling into dissociation, graphology, and radical openness

In 1936, Gary Cooper starred in the Oscar-winning film Mr Deeds Goes to Town and changed the meaning of marginal squiggles forever. Mr Deeds, a sweet man from small-town Vermont, survives the Great Depression juggling a string of quirky jobs – he’s a part-time greetings card poet, a tuba player, and an investor in the local animal fat factory – but then he inherits a cool $20 million from an estranged uncle. The film follows the travails of this loveable everyman as he attempts to give away his newfound wealth to the poor. Mr Deeds’s radical acts of altruism (for example, offering 2,000 10-acre farms to struggling Americans) quickly excite the ire of New York’s elite, not least that of the pernicious attorney John Cedar, who plots to have Mr Deeds declared ‘insane’ by a New York judge. In the film’s courtroom finale, various witnesses from his hometown attack Mr Deeds’s personality, claiming he has long been known as ‘pixillated’ – one of them clarifying: Pixillated is an early American expression deriving from the word ‘pixies’, meaning elves. They would say, ‘The pixies have got him,’ as we nowadays would say a man is ‘balmy’. Next, the snooty Dr Emile Von Haller (a parody of Central European intellectuals like Sigmund Freud) appears with a large graph depicting the mood swings of a manic depressive – pixillated affective errancies that map exactly onto the everyday eccentricities of Mr Deeds as described by other witnesses. Called upon to defend himself against such slander, Mr Deeds demonstrates his grip on rationality by celebrating his love of irrational things – from ‘walking in the rain without a hat’ to ‘playing the tuba’ during the Great Depression – before introducing the courtroom to a newfangled form: ‘the doodle’. ‘[E]verybody does something silly when they’re thinking,’ says Mr Deeds as the courtroom erupts with laughter. ‘For instance, the Judge here is an “O-filler”.’ ‘A what?’ says the Judge. ‘An O-filler,’ says Mr Deeds. ‘You fill in all the spaces in the O’s, with your pencil … That may make you look a little crazy, Your Honour, … but I don’t see anything wrong ’cause that helps you to think. Other people,’ Mr Deeds says, ‘are doodlers.’ ‘Doodlers?’ the judge exclaims. ‘That’s a name we made up back home for people who make foolish designs on paper when they’re thinking. It’s called doodling. Almost everybody’s a doodler … People draw the most idiotic pictures when they’re thinking. Dr Von Haller, here, could probably think of a long name for it because he doodles all the time.’ Reflecting the elitist psychoanalytical gaze back at Dr Von Haller, Mr Deeds finds something between ‘a chimpanzee’ and ‘Mr Cedar’ in Von Haller’s idiotic doodles, effortlessly exposing the hidden errancies that inform even the most rational analytical formulas. The doodle is subversive, democratically able to contain all gestures regardless of their formal difference. In light of this remarkable egalitarian power, the judge irreverently declares Mr Deeds ‘the sanest man that ever walked into this courtroom.’ Far from symbolising a frivolous eruption of nonsignifying noise, the doodle emerges in interwar modernist culture as a distinctly informal form oriented towards containing the value of apparently illogical things – like giving away your personal wealth for public good.
Or upholding democratic processes that defend the rights of everyone (even the pixillated, meaning the ‘pixies’ rather than the ‘pixels’) in an age defined by the cold logic of mechanical reproduction, anti-humanist Taylorite efficiency programmes, and the global ascension of fascism. As Mr Deeds says: ‘everybody’s a doodler’. Everybody matters. To doodle is – if anything – a doddle. Equally the domain of avant-garde artists and incarcerated monkeys, presidents and poets, toddlers and self-help gurus, doodling is a radically non-hierarchical and non-classical activity that relays modernism’s epochal desire to reinvent traditional systems of value and to encourage the acceptance of new modes of being. Doodles explode across modernist culture, high and low. From the constellatory squiggles and biomorphic shapes parading across Jean Dubuffet’s doodle cycle L’Hourloupe (1962-85) to Norman McLaren’s abstract expressionist cartoon Boogie-Doodle (1940), through to the ‘pixillated’ and ‘sagacious’ doodlers of, respectively, James Joyce’s novel Finnegans Wake (1939) and Samuel Beckett’s novel Watt (1953), the modernist exhibits a veritable will-to-doodle. But why does the doodle come to matter at this 20th-century modernist moment? I would argue that the doodle is to modernism something like what the beautiful once was to the Renaissance: an aesthetic form that indicates a wider system of value intrinsic to a period of history. The beautiful embodies Renaissance ideals of harmony, rationality and humanism, just as the doodle alerts us to modernism’s fascination with difference and repetition, complexity, errancy and the ordered disorders that hide in irrational processes of all sorts. As a distinct aesthetic form, the doodle is often aligned with Paul Klee’s understanding that ‘A line is a dot that went for a walk’ – or as his fellow artist Saul Steinberg fondly misquotes it: ‘A line is a thought that went for a walk.’ Steinberg’s slippage points to modernist lines not being formally representative of things but things in and of themselves. This is form animated by the eventful idiosyncrasies of what the poet Henri Michaux calls ‘an entanglement, a drawing as it were desiring to withdraw into itself.’ Doodles are noisy and unfinished process-oriented forms – emerging always like a multitude of ‘starts that come out’, as the poet and musician Clark Coolidge says: they are minor gestural forces that open experience to novel possibilities of becoming. Doodles speak to the modernist idea of an endlessly wandering world, open, as the philosopher Jean-Luc Nancy says, to ‘the indeterminate possibility of the possible’. Because of its radical openness to difference, the doodle tends to function as a kind of meta-aesthetic attuned to containing a network of ambivalent affects and fleeting everyday aesthetic experiences that become increasingly common in the 20th century. Just as our consumerist lifeworld is a patchwork of ‘cool’ Nike ads, ‘cute’ outfits, ‘interesting’ data points, ‘dank’ memes and ‘whimsical’ shopping-mall muzak, the doodle presents scrawled assemblages that flitter with cute blobs, cool waveforms, interesting jottings and fey twinkling stars. The aesthetic experience of the doodle is never fixed, never singular.
Far from encouraging distinct sovereign aesthetic experiences like the beautiful, the modernist doodle presents a decentred experience that is, in part, about the meta-aesthetic expansion and reframing of classical aesthetic experience into something non-sovereign, multitudinous and relational. While drawing together errant, marginal and unworthy gestures long excluded from the royal aesthetic experiences favoured by the Renaissance, the doodle is about commingling weak and everyday forms in a democratic soup of bounteous difference and animate commixture. Doodles have a long history, of course. There’s the ‘monkish pornographic doodle’ that the poet Lisa Robertson marvels at, drawn around a flaw in the vellum of a translation of the Codex Oblongus from De rerum natura held in the British Library. Or the variety of curious knots and flourishes that adorn the marginalia of Edward Cocker’s early modern ‘writing-books’ like Arts Glory (1669). Such pre-modern forms are largely treated as formless nothings before the modernist moment awakes to the informal value of doodled forms, whether avant-garde or popular. Indeed, the word ‘doodle’ enters popular parlance in the early 20th century – appearing alongside swaths of similar ‘doo-’ terms for worthless objects, errant movements and unbalanced states of mind, including ‘doodad’, ‘doodah’, ‘doolally’, ‘doohickey’, ‘doojigger’, ‘doo-doo’, ‘dooky’ and ‘doofus’ – as a variant both of old German words like Dödel (‘fool’) or Dudeltopf (‘simpleton’) and the loaded revolutionary war-era phrase ‘Yankee Doodle’ – before bursting forth across US pop culture with the success of Mr Deeds Goes to Town. By the late 1930s, most major newspapers in the US take to clarifying the meaning of ‘doodles’, often alongside definitions of the promiscuous neurotic category of the ‘pixillated zany’, underlining the doodle’s comedy of psychoanalytical pretension. The Los Angeles Times in 1936 insists that ‘Persons who create geometric masterpieces during a telephone conversation, are a little pixilated [sic].’ In 1937, a columnist at The Washington Post admits (ironically) to fearing being caught doodling ‘dogwood blossoms’ while chatting on the telephone and being goaded with ‘that fatal form of neurosis known as “pixillated”.’ As one popular columnist surmises: ‘a hundred million guinea pigs are now “doodles”-conscious’ – both in thrall to doodling and in fear of the wisecracks it’ll inspire. Also in 1937, Life magazine publishes funny exposés of doodling politicians and doodling celebrities alongside a feature on the New York subway’s pixillated ‘photo-doodlers’. What matters here is the doodle’s containment of a ‘pixillated’ comedy of neurosis, skewering the period’s po-faced ‘science’ of dreams while celebrating errant expressions of all kinds. These burgeoning pop cultures celebrating the American public’s democratic respect for minor differences coincided with, and were fed by, a growing interest in psychoanalysis and the unconscious colliding with older (largely Western European) avant-garde, occult and pseudoscientific interests in planchette writing, graphology, automatism and free-association parlour games. In Life magazine’s 1930s showcasing of subway photo-doodles and doodling Democrats, for example, explicit parallels can be drawn with Marcel Duchamp’s Dada portrait of the Mona Lisa in L.H.O.O.Q. (1919) and Louis Aragon’s recycling of the discarded doodles of French ministers in the magazine La Révolution surréaliste in 1926. 
In turn, Russell Arundel’s cartoonish doodle catalogue, Everybody’s Pixillated (1937) – ‘a pixillated book for pixillated people’, published hastily in the wake of Mr Deeds Goes to Town – owes as much of a debt to late-19th-century graphology textbooks that attempt to taxonomise character through handwriting as it does to the highbrow strictures of Freudian psychoanalysis. Graphological understandings of the modernist doodle catalogue are more evident still in Your Doodles and What They Mean to You (1957) by Helen King:

The signature shows the personality – that side which we appear to be to the public. The penmanship shows the character – that which we really are. And the doodles tell of the unconscious thoughts, hopes, desires. [my italics]

King’s pop-ish sense of doodles as the cartoon sigla of modernism’s unconscious folds back readily into the avant-garde surrealist’s interest in parlour games like ‘exquisite corpse’, which topologically enfolds an assemblage of individual doodles into a grotesque vision of Jung’s collective unconscious. To be sure, as they erupt across modernist pop culture, doodles often parody these older and more serious concerns of graphologists, occult automatists, avant-garde surrealists and early psychologists. As if referring back to Mr Deeds’s comic analysis of Dr Von Haller’s doodles, the doodle comes to the fore as a tongue-in-cheek expression of a re-materialised unconscious that is more inclined to poke fun at the elitism and highfalutin snobbery saturating interwar modernist psychoanalytical practices than to posit any serious means of understanding dreams. This said, in preferring to simply celebrate the silly things people do to help them think (to paraphrase Mr Deeds), the doodle skirts close to an irreverent form of psychoanalysis for the people. Like the modernist doodle, the graphologist’s doodle upholds the value of democracy through its informal containment of difference. A surprising number of interwar modernist novels contain characters who erupt with dawdling forms that are in close dialogue with late-Victorian practices of automatism and graphology. In Virginia Woolf’s novel Night and Day (1919), for instance, Ralph Denham gazes absently into a page riddled with ‘half-obliterated scratches’ and a ‘circumference of smudges surrounding a central blot’, before beginning to doodle ‘blots fringed with flames’. Instantly, Ralph finds his lawyerly way of inhabiting the world softening, opening onto a network of what Helen King called ‘unconscious thoughts, hopes, desires’. Drawing on the occult practice of planchette writing, Woolf has Ralph find the ‘objects of [his] life, softening their sharp outline’ as the doodle mediates first a kitsch image of cosmic totality, and then his genuine human connection with the woman he loves, the upper-class Katharine Hilbery. For Katharine immediately recognises something familiar in ‘the idiotic symbol of his most confused and emotional moments’. ‘Yes,’ she says in rational agreement with Ralph’s irrational doodle, ‘the world looks something like that to me too.’ The modernist’s will-to-doodle folds out still more explicitly and wonderfully in Joyce’s ‘verbivocovisual’ Finnegans Wake. Apropos of the media frenzy surrounding Mr Deeds Goes to Town, Joyce adds in numerous references to the ‘doodling dawdling’ antics of his dream novel’s cast.
‘He, the pixillated doodler,’ writes Joyce, ‘is on his last with illegible clergimanths boasting always of his ruddy complexious!’ The reference is to Shem the Penman’s terrible handwriting in his transcription of a letter by his mother, Anna Livia Plurabelle (ie, ALP). The letter itself is a muddied and crumpled communicative mess that echoes Finnegans Wake’s own erratic and polyphonic form. ALP’s doodle-laden pixillated letter is a non-sovereign projective field of subjects and objects all mixing and mingling in ‘strangewrote anaglyptics’, wherein the voice of Anna Livia blurs not only with Shem’s ‘kakography’ but also with ‘inbursts’ from Maggy (ie, Isobel/Issy/Girl Cloud, and ALP’s only daughter with HCE, ie, Here Comes Everybody/Humphrey Chimpden Earwicker), and an array of more-than-human gestures, from tea stains and orange-peel smudges to chicken scratches, electromagnetic wavelengths, muddy splashes and more. Regurgitating what Stephen Dedalus in Joyce’s earlier novel Ulysses (1922) calls the ‘signatures of all things’, ALP’s doodle-laden letter is both an indelibly human and more-than-human form that foregrounds the marginalised signatory gestures of subjects and objects alike: rivers; men; orange-peels; Morse code machines. For all of Joyce’s interest in Mr Deeds and the rise of modernism’s ‘pixillated doodlers’, the meaning of the Wake’s doodles again blurs with an older, late-Victorian interest in graphology. As Walter Benjamin writes in 1928, ‘graphology is concerned with the bodily aspect of the language of handwriting and with the expressive aspect of the body of handwriting.’ With Shem’s/ALP’s pixillated letter, Joyce explodes this ‘body of handwriting’ into an intermedial and non-anthropocentric ecology that brings together the animate gestures of chickens, medieval high priests and supersonic televisions. Rejoicing in the many-in-oneness of all expression – or ‘the identities in the writer complexus’ – Joyce dwells on the poetry of a letter’s variegated surfaces. Its ‘stabs and foliated gashes’; its ‘curt witty wotty dashes’ and ‘disdotted aiches’; its ‘superciliouslooking crisscrossed Greek ees’ and ‘pees with their caps awry’; its ‘fourlegged ems’ and ‘fretful fidget eff[s]’; its ‘riot of blots and blurs and bars and balls and hoops and wriggles and juxtaposed jottings linked by spurts of speed’. From Joyce we learn that the social affordance of the doodle contains both ecological and democratic differences. A healthy public depends on a healthy planet, and both begin by recognising that everyone – and indeed everything – is a doodler. In the postwar period, and eventually in pop culture, the modernist doodle takes on an increasingly cartoonish and commodified form. The more doodles circulate, the more they become pastiches of themselves (meaningless sigla imbued with oversized meanings, parodies of an all-too-formulaic commitment to formlessness). In Robert Arthur Jr’s satirical story ‘Mr Milton’s Gift’ (1958), first published in Fantasy and Science Fiction, we find the hapless Horace Milton ‘sitting back, daydreaming and doodling … doodling something while daydreaming about being rich’. Following a mysterious ‘charm’, Horace Milton’s lazy doodles come to constitute a bizarre late-capitalist labour-saving device, as he realises ‘he hadn’t just been doodling.
Unknown to him, his hand was drawing a perfect hundred dollar bill.’ The modernist doodle is here remade in the parodic vision of a post-modernist America, defined not by the democratic containment of difference but by the hypercommodification of everything. Not least, the collective desire to ‘daydream about being rich’. The play of difference and irreverence celebrated by Mr Deeds – and his ‘insane desire to become a public benefactor’ – is absorbed into a homogenising pursuit of capital. Throughout the post-1945 period, the doodle has been further sterilised, homogenised and hypercommodified into an increasingly ‘game-changing’ neoliberal form of ‘creative power’. Think of ‘Google Doodles’ (1998-), or business management gurus styling themselves as ‘info-doodlers’, for example. The modernist doodle’s non-hierarchical formalism has been flattened and its sociopolitical errancy standardised in overwrought simulations of spontaneity. Into the early 21st century, the postmodernist doodle tends to help brands and corporations disguise systemic agendas of extraction and exploitation in colourful and fun-loving – squiggling and non-sovereign(!) – surfaces full, as Google says, ‘of spontaneous and delightful changes’. Yet I cannot help but wonder: what if the doodle rekindled its modernist errancy and rediscovered its democratic roots? The doodle’s history points to an indelibly American form of ‘being in the impasse together,’ as Lauren Berlant says in Cruel Optimism (2011), meaning that it helps us to imagine a non-classical kind of collectivity that foregrounds a non-hierarchical communion of marginal differences. As Gilles Deleuze once said of that exemplary modernist Dödel, Charlie Chaplin, the struggle is to find a form that ‘make[s] the slight difference between men the variable of a great situation of community and communality (Democracy).’ The original social agency of the doodle is clearly formed by a promiscuous ability to bring people together by way of popularising their errant and wasteful gestures – gestures once confined to the peripheries of occult automatisms, pseudoscientific graphology manuals and obscure surrealist practices. The question now is: how might the doodle recover this capacity to put the play of marginal differences to work as socially generative form? In a time when democracy seems once more under threat, how might the doodle rekindle its long-lost power to inspire pixillated dreams of togetherness? Source of the article

GOATReads: Literature

Me, myself and I

Loneliness can be a shameful hunger, a shell, a dangerous landscape of shadowy figures. But it is also a gift. The bluest period I ever spent was in Manhattan’s East Village, not so long back. I lived on East 2nd Street, in an unreconstructed tenement building, and each morning I walked across Tompkins Square Park to get my coffee. When I arrived the trees were bare, and I dedicated those walks to checking the progress of the blossoms. There are many community gardens in that part of town, and so I could examine irises and tulips, forsythia, cherry trees and a great weeping willow that seemed to drop its streamers overnight, like a ship about to lift anchor and sail away. I wasn’t supposed to be in New York, or not like this, anyway. I’d met someone in America and then lost them almost instantly, but the future we’d dreamed up together retained its magnetism, and so I moved alone to the city I’d expected to become my home. I had friends there, but none of the ordinary duties and habits that comprise a life. I’d severed all those small, sustaining cords, and, as such, it wasn’t surprising that I experienced a loneliness more paralysing than anything I’d encountered in more than a decade of living alone. What did it feel like? It felt like being hungry, I suppose, in a place where being hungry is shameful, and where one has no money and everyone else is full. It felt, at least sometimes, difficult and embarrassing and important to conceal. Being foreign didn’t help. I kept botching the ballgame of language: fumbling my catches, bungling my throws. Most days, I went for coffee in the same place, a glass-fronted café full of tiny tables, populated almost exclusively by people gazing into the glowing clamshells of their laptops. Each time, the same thing happened. I ordered the nearest thing to filter on the menu: a medium urn brew, which was written in large chalk letters on the board. Each time, without fail, the barista looked blankly up and asked me to repeat myself. I might have found it funny in England, or irritating, or I might not have noticed it at all, but that spring it worked under my skin, depositing little grains of anxiety and shame. Something funny happens to people who are lonely. The lonelier they get, the less adept they become at navigating social currents. Loneliness grows around them, like mould or fur, a prophylactic that inhibits contact, no matter how badly contact is desired. Loneliness is accretive, extending and perpetuating itself. Once it becomes impacted, it isn’t easy to dislodge. When I think of its advance, an anchoress’s cell comes to mind, as does the exoskeleton of a gastropod. This sounds like paranoia, but in fact loneliness’s odd mode of increase has been mapped by medical researchers. It seems that the initial sensation triggers what psychologists call hypervigilance for social threat. In this state, which is entered into unknowingly, one tends to experience the world in negative terms, and to both expect and remember negative encounters – instances of rudeness, rejection or abrasion, like my urn brew episodes in the café. This creates, of course, a vicious circle, in which the lonely person grows increasingly more isolated, suspicious and withdrawn. At the same time, the brain’s state of red alert brings about a series of physiological changes. Lonely people are restless sleepers.
Loneliness drives up blood pressure, accelerates ageing, and acts as a precursor to cognitive decline. According to a 2010 study I came across in the Annals of Behavioral Medicine entitled ‘Loneliness Matters: A Theoretical and Empirical Review of Consequences and Mechanisms’, loneliness predicts increased morbidity and mortality, which is an elegant way of saying that loneliness can prove fatal. I don’t think I experienced cognitive decline, but I quickly became intimate with hypervigilance. During the months I lived in Manhattan, it manifested as an almost painful alertness to the city, a form of over-arousal that oscillated between paranoia and desire. During the day, I rarely encountered anyone in my building, but at night I’d hear doors opening and closing, and people passing a few feet from my bed. The man next door was a DJ, and at odd hours the apartment would be flooded with his music. At two or three in the morning, the heat rose clanking through the pipes, and just before dawn I’d sometimes be woken by the siren of the ladder truck leaving the East 2nd Street fire station, which had lost six crew members on 9/11. On those broken nights, the city seemed a place of seepage, both ghosted and full of gaps. Lying awake in my platform bed, the bass from next door pummelling my chest, I’d think of how the neighbourhood used to be, the stories that I’d heard. In the 1980s, this section of the East Village – which is known as Alphabet City because of its four vertical avenues, A to D – was dominated by heroin. People sold it in stairways, or through holes in doors, and sometimes the queues would run right down the street. Many of the buildings were derelict then, and some were turned into impromptu shooting galleries, while others were occupied by the artists who were just beginning to colonise the area. The one I felt most affinity for was David Wojnarowicz, skinny and lantern-jawed in a leather jacket. He’d been a street kid and a hustler before he became an artist, and grew famous alongside Jean-Michel Basquiat and Keith Haring. He died in 1992, a couple of months short of his 38th birthday, of AIDS-related complications. Just before his death, he put together a book called Close to the Knives: A Memoir of Disintegration, a ranging, raging collection of essays about sex and cruising, loneliness, sickness and the wicked politicians who refused to take seriously the crisis of AIDS. I loved that book, especially the passages about the Hudson river piers. As shipping declined in the 1960s, the piers that ran along the Hudson, from Christopher Street to 14th Street, were abandoned and fell into disrepair. In the 1970s, New York was nearly bankrupt, and so these immense decaying buildings could neither be destroyed nor properly secured. Some were squatted by homeless people, who built camps inside the old goods sheds and baggage halls, and others were adopted by gay men as cruising grounds. In Close to the Knives, Wojnarowicz described prowling around the Beaux-Art departure halls at night or during storms. They were vast as football fields, their walls damaged by fire, their floors and ceilings full of holes. In the shadows, he’d see men embracing, and often he’d follow a single figure down passageways and up flights of stairs into rooms carpeted with grass or filled with boxes of abandoned papers, where you could catch the scent of salt rising from the river. 
‘So simple,’ he wrote, ‘the appearance of night in a room full of strangers, the maze of hallways wandered as in films, the fracturing of bodies from darkness into light, sounds of plane engines easing into the distance.’ Soon other artists began to occupy the piers. Paintings bloomed across the walls. Giant naked men with erect cocks. Keith Haring’s radiant babies. A labyrinth, picked out with white paint on the filthy floor. A leaping cat, a faun in sunglasses, Wojnarowicz’s gagging cows. Great murals in pinks and oranges of entwining torsos. Mike Bidlo’s intricate abstract expressionist drip paintings, which wouldn’t have looked out of place in the Museum of Modern Art. Up on the catwalk you could gaze across the river to the Jersey shore, and on hot days the naked men sunbathed on the wooden decks, while inside filmmakers recreated the fall of Pompeii. Those buildings are long gone now, torn down in the mid-eighties, just as AIDS was beginning to devastate the population who’d adopted them. Over time the waterfront was transformed into the Hudson River Park, a landscaped pleasure-ground of trees and rollerbladers and glossy parents with strollers and small dogs. But even a curfew didn’t suppress the erotic spirit of the place. On summer nights, Pier 45, the old sex pier, continues to turn into a catwalk-cum-dancefloor for the city’s gay and transgender homeless kids, though every year battles rage over policing and violence. I was glad fierce kids were still throwing shade beside the river, but whenever I walked through the park I mourned those ruined buildings. I suppose I liked to dream of the piers as they once were, their vast and damaged rooms, because they seemed to represent an ideal kind of city, one which permitted solitude in company, which offered the possibility of encounter, expression and the pleasure of being alone amongst one’s tribe (whatever tribe that happened to be). I thought of them often, those dreamlike, crumbling rooms, extending out across the water, where men now long since dead freed one another, as Wojnarowicz put it, ‘from the silences of the interior life’. Loneliness and art, loneliness and sex: these things are connected, and connected too with cities. One of the habits associated with chronic loneliness is hoarding, a condition that shares a boundary with art. I can think of at least three artists who medicated their sense of isolation by collecting objects off the streets, and whose art-making practices were loosely allied to trash-gathering and to the curation of the dirty, the salvaged and the discarded. I’m thinking of Joseph Cornell, that shy, unworldly man who pioneered the art of assemblage; of Henry Darger, the Chicago janitor and outsider artist; and of Andy Warhol, who, despite surrounding himself with glittering crowds, often commented on his abject sense of loneliness and alienation. Cornell made lovely worlds in boxes out of little things he toted home from thrift stores, while Warhol shopped obsessively for decades (this is the acquisitive Andy immortalised in the silver statue in Union Square, his Polaroid camera around his neck, a Bloomingdale’s Medium Brown Bag in his right hand). His largest and most extensive artwork was the Time Capsules, 612 sealed brown cardboard boxes filled over the last 13 years of his life with all the varied detritus that flooded into the Factory: postcards, letters, newspapers, magazines, photographs, invoices, slices of pizza, a piece of cake, even a mummified human foot. 
As for Darger, he spent almost all his free time roaming Chicago, gathering and sorting trash. He used some of it in his strange, disturbing paintings of little girls engaged in terrible battles, but most of it – pieces of string, in particular – existed as a kind of counter-exhibit of its own, though he never showed it to a living soul. People who hoard tend to be socially withdrawn. Sometimes the hoarding causes isolation, and sometimes it is a palliative to loneliness, a way of comforting oneself. Not everyone is susceptible to the companionship of objects; to the desire to keep and sort them; to employ them as barricades or to play, as Warhol did, back and forth between expulsion and retention. In that funny, lonely spring, I developed a fondness for the yellow ordering slips from the New York Public Library, which I kept in my wallet. I liked biros and pencils of all kinds, and I grew enamoured of a model Sumo wrestler a friend at Columbia had given me; a spectacularly ugly object that was designed to be crushed in one’s fist to relieve stress, though the tears it quickly developed suggested it wasn’t quite fitted to the task. Like Warhol and Darger, Wojnarowicz also had a proclivity for objects. His art was full of found things: pieces of driftwood painted like crocodiles; maps, clocks and bits of comic books. Among his entourage was the skeleton of a baby elephant, which moved with him from cluttered apartment to apartment. For a while, he’d lived in a building on my block and on the day he moved in had carried the skeleton down the street concealed beneath a sheet, so his new neighbours wouldn’t be alarmed. Later, when he was dying, he gave it and his battered, grubby leather jacket to two friends he’d been collaborating with. Is this the appeal of objects to the lonely: that we can trust them to outlive us? In the mornings when I went out to the Hudson River, I’d sometimes call in afterwards to the West Village to eat breakfast with the father of a friend of mine. Alastair lived in a tiny, shipshape apartment not far from the Christopher Street subway in the West Village. He was a poet and, although he originally came from Scotland, he’d spent most of his life in South America, where he wrote dispatches for the New Yorker and translated Borges and Neruda into English. His room was full of books and pleasing bits and pieces: a fossilised leaf, a desk-mounted pencil sharpener, an extraordinary folding bike. Each time I came, I brought chrysanthemums the colour of pound coins, and in return he fed me muffins and tiny cups of coffee, and told me stories about the dead from yet another era of New York artists. He remembered Dylan Thomas hurtling through the bars of Greenwich Village, and Frank O’Hara, the New York School poet who’d died at 40 in a car accident on Fire Island. A sweet man, he said. He smoked as he talked, breaking off into great hacking bouts of coughing. Mostly he told me about Jorge Luis Borges, blind Borges, who was bilingual from childhood, and died in exile in Switzerland, and whom all the taxi drivers in Buenos Aires had adored. I left these conversations almost radiant. It was good to be greeted, to be embraced. I’ve missed you, Alastair once said, and my heart jumped at the pleasure of existing in someone else’s life. It might have been then that I realised I couldn’t teeter on like this, not quite committed to New York, not quite sure about going home.
I missed my friends and I missed especially the kind of solidity of relationship in which one can express more than the brightest of moods. I wanted my flat back too, the ornaments and objects I’d assembled over decades. I hadn’t bargained for how strange I’d find it, living in someone else’s house, or how attenuating it would prove to my sense of security or self. Soon after that, I got on a plane to England and set about recovering the old, familiar relationships I thought I’d left for good. It seems that this is what loneliness is designed to do: to provoke the restoration of social bonds. Like pain itself, it exists to alert the organism to a state of untenability, to prompt a change in circumstance. We are social animals, the theory goes, and so isolation is – or was, at some unspecified point in our evolutionary journey – unsafe for us. This theory neatly explains the physical consequences of loneliness, which ally to a heightened sense of threat, but I can’t help feeling it doesn’t capture the entirety of loneliness as a state. A little while after I came home, I found a poem by Borges, written in English, the language his grandmother had taught him as a child. It reminded me of my time in New York, and of Wojnarowicz in particular. It’s a love poem, written by a man who’s stayed up all night wandering through a city. Indeed, since he compares the night explicitly to waves, ‘darkblue top-heavy waves … laden with/ things unlikely and desirable’, one might literally say that he’s been cruising. In the first part of the poem he describes an encounter with you, ‘so lazily and incessantly beautiful,’ and in the second he lists what he has to offer, a litany of surprising and ambiguous gifts that ends with three lines I’m certain Wojnarowicz would have understood: I can give you my loneliness, my darkness, the hunger of my heart; I am trying to bribe you with uncertainty, with danger, with defeat. It took me a long time to understand how loneliness might be a gift, but now I think I’ve got it. Borges’s poem voiced the flip side of that disturbing essay I’d read in the Annals of Behavioral Medicine on loneliness’s consequences and mechanisms. Loneliness might raise one’s blood pressure and fill one with paranoia, but it also offers compensations: a depth of vision, a hungry kind of acuity. When I think of it now, I think of it as a place not dissimilar to the old Hudson river piers: a landscape of danger and potential, inhabited by the shadowy presences of fellow travellers, where one sometimes rounds a corner to see lines of glowing colour drawn on dirty walls. Source of the article

GOATReads: Politics

Kafala in the Time of the Flood

It was May Day 2016 and we were standing with African and Asian domestic workers on the streets of Beirut, following the cue of their voices. The first time I heard the slogan I faltered, it caught in my throat, I was unprepared for the rhyme. I shouldn’t have been; we already knew how many were dying. The sentence marched in my head for years. Since 2010, feminist and antiracist organizations in Beirut have come together on the Sunday closest to May Day to protest Lebanon’s Kafala system, the exploitative system that governs temporary migrant labor in many parts of the region. Bringing together a diverse coalition of migrants, activists, NGOs, workers, and allies, the march fills the streets of Lebanon’s capital with voices demanding both concrete labor reforms and total abolition. For over a decade, the annual Migrant Workers’ Day parade and festival has ranked among Beirut’s most beautiful public gatherings, workers peering over balconies to find solidarity and not discrimination, the city momentarily transformed into an image worthy of its status as cosmopolitan. Years later, as I attempted to write an anthropology of migrant labor in Lebanon while watching the region aflame under genocide, the women’s voices continued to haunt me. We academics got Kafala so wrong, I realized, parsing its colonial archives in British rule and its political technologies of legal impermanence and human rights violations. We had given it the privilege of abstraction and logistics while refusing its personality of murderous desire. We should have known better—after all, we knew Israel, and with it an investment in annihilation. The migrant domestic workers knew, though. They were Kafala’s insides and they had lived the psychic infrastructure of its necrotics. Their presence laid bare the troubling truth that incarceration had entered the heart of postwar Lebanese life. By 2008, Human Rights Watch estimated one migrant domestic worker in the country was dying every week from suicide, a failed attempt at escape, or murder. In the meantime, tens of thousands of others were kept working, most without access to adequate rest, food, mobility, cell phones, or even wages. Gathered on the streets together, the women who survived had lived to raise their voices and tell us very simply: The opposite of the Kafala system is not work permits and immigration visas and wages and unions and even open borders. The opposite of Kafala is being alive.

I. Kafala as Antihumanism

African and Asian migrants have been traveling to the Middle East under the loose rubric of the Kafala system since the 1970s, when the convergence of the oil boom in the Gulf, the suppression of worker uprisings and revolutionary consciousness across the region, and the globalization of capital that constitutes “neoliberalism” produced newly transnational circuits of labor exploitation. “Kafala” itself is usually translated as “sponsorship,” referring to the requirement that migrants have a citizen-sponsor to whom both their work and residence in the country are tied. But “sponsorship” captures neither its lived experience nor the scale of the cultural transformation that has come in its wake. The Council on Foreign Relations casually lists the current number of workers governed by the Kafala system in the region as tens of millions. How many, we might wonder, over the course of its half century? Twenties of millions? Hundreds of millions?
Meanwhile, the ghosts of all those who have died on the job swallow their tongues inside human rights reports and abuse stories that no one really reads. They all blur into one another; they all start to sound the same; which is to say, they are a pattern, they are a social fact, they constitute a culture, they demand a diagnosis. Kafala is a social pathology. The strange thing about African and Asian migrant labor in Lebanon, a system frequently referred to as “modern-day slavery,” is that it has only been around for four decades. Who learns to discriminate so quickly? And although the region has its histories of enslavement and servitude, of elitism and racism, of all the violences that define the great accomplishment we call Culture, it has not always been this way. In a not-so-distant era, the presence of Africans and Asians in Lebanon signaled a very different internationalism. Consider the fact that there was once an Ethiopian Student Union in Beirut—by “once,” I mean the 1970s. Today, to say the very word “Ethiopian” is to preclude the possibility of there currently being, there ever having been an Ethiopian Student Union in Beirut. The erasure of this imagination is the annihilation of a moment in which Asian, African, and Arab could be spoken together with neither master nor servant in the sentence. What then, if not sponsorship, is this Kafala system? We might think of it as a neoliberal antihumanism wedged into the heart of a Middle East that was once an ecumenical frame for living. The Kafala system is a historical process by which the figure of the Arab has been wrenched of its humanism from the inside. This is a humanism of the Orient without Orientalism, fully modern and Islamicate and ours, civilizational heritage of the modern world; that which produced the first university, first astronomy, first violin, and last prophet, now gnarled by pipelines and dictators and Zionism and greed, such that the ugliness of capitalism-as-racism burns its scars into women’s backs. What happened to Arab Nationalism, anticolonial icons led by the triad of Nasser, Nkrumah, and Nehru? Consider, as answer, Kafala. What happened to Lebanon, headquarters of the Palestinian Revolution? Kafala. What happened to the Indian Ocean Arabo-Persian Gulf, all trade and ports and mysterious characters surfacing in geniza fragments? Kafala. What happened to split a map of the world into an embodied earthquake where the edge of Asia became kafeel while Africa to its west and Asia to its east came to name countries as synonyms for servant? Fuck. That. Kafala. In Lebanon, the word Srilankiyye (Arabic for female Sri Lankan) simply means maid. This sentence is necessary but insufficient. In actuality, the term means migrant domestic worker, racialized woman, foreigner who cleans, woman whose hair is forcibly sheared upon arrival to the country where she was looking for a job, woman whose passport is held in a locked drawer in a bedroom where she cleans the sheets and folds the underwear and learns the gossip and consoles the mother and washes the child and nurses the elder and makes coffee and washes dishes and makes coffee and washes floors and makes coffee and washes windows and then sleeps, avoiding his gaze, on the floor; woman who is maybe from Sri Lanka, but a Sri Lanka that is not Sri Lanka, a Sri Lanka that has no beaches no wars no histories no flavors only brown women who wash Lebanese floors, a perfect tautology: You are a domestic worker because you are from Sri Lanka.
You are from Sri Lanka because you are a domestic worker. Echoes of Fanon’s unforgettable formula flipped. You are rich because you are white, you are white because you are rich. Before Mehdi ben Barka disappeared, before Amílcar Cabral was assassinated, before the setback and the betrayal and the melancholy that settled into a generation of male intellectuals who never quite managed to build something new in the ruins of their grief, it was not necessarily going to look this way. The Middle East could have been the west of Asia, and Asia could have named our synchronicity; a multiplicity of cultures that were all worth fighting for. The hour of liberation was knocking the ground beneath the women who sang its anthems. We must never forget that Kafala was its counterrevolution. To recall the radical history of the 20th century is to remember that there was once an opening to an alternate present. We had a fissure, one of those endless ones that Benjamin told us is how the light gets in. Its Western numerical index is 1968 but we should recall it as the Tricontinental, that 1966 conference held in Havana that brought together anticolonial icons from across Africa, Asia, and Latin America. This was a moment when Arab, Asian, and African solidarity was the story of the Middle East. It took diverse forms, manifesting in guerrillas and female militants, in raised fists and bold prints, in hijackings and black turtlenecks, in wire-rimmed glasses and AK-47s and a particular shade of olive green. It was a time when Spanish and Arabic were on our lips, and we danced to a soundtrack of sultry rhythms. OSPAAAL (the Organization of Solidarity with the Peoples of Asia, Africa and Latin America) made posters of disco fedayeen, and in the smoke-filled offices of Beirut, Yasser Arafat, the chairman of the Palestine Liberation Organization, shared wine and humor with the Urdu poet Faiz Ahmed Faiz. It was a time when the region was headquartered in Egypt and nestled within the continent of Africa rather than the continent of Islam, and so the beards of its men were not yet targets of annihilation. Instead, we had divas like Oum Kulthoum, who sang for leaders with cavalcades that would later be remembered by children who peered out windows at the top of the stairs, only to be left with nostalgia instead of liberation, dull ache for an era when they had not yet assassinated all our heroes. In this history, the future of which has yet to be determined, Arab named a freedom drive. How did a complex and beautiful region known as “The Arabian Gulf” appear to go so quickly from precapitalist merchant to advanced-capitalist monarch? It seems partly about speed, in the structure of that conjuncture between the region’s natural resources and American capital. Black gold, a sticky viscous substance that moves magically underground, produced the joint development strategy of the refinery and the toxic sludge. In other words, oil was discovered, Ford built cars, America built dictators, and men got rich. They call it: economies grew. By now, we know that economies grow at the expense of societies and that economic growth is the harbinger of cultural genocide. Certainly, the Gulf grew, buildings bursting vertically from sand and sea, and off the tops of them South Asian men, imported to produce a wealth that would never be theirs, plunged to their deaths. Some through unfinished windows, some off metal rebars, some burned to a crisp under the summer sun. Others, unchained, peons of debt.
We used to share a prophet who crossed the desert in prophecy finding the shade of date palms and now the Gulf is a graveyard made of glass. This, and not only Kuwaiti biryani, is the afterlife of cosmopolitanism along the northern curvature of the Indian Ocean. Who decided that Arab was going to be a name for master and not comrade? What world-historical destruction turned these dreams into so much war?

II. Humanity Exceeds Humans (Lebanon Exceeds Lebanese)

In Beirut, for more than a year, I kept hearing migrant workers ask, in tones gentle but furious: Manna insan kamein? Are we not human, too? They had not decided it would be Kafala’s refrain but I heard it that way all the same, different people constructing the same sentence, as if it had crystallized into an ontology that took the shape of a philosophical mantra in three simple words. I tried to listen to what they were saying and at first, it appeared to me that Lebanon no longer had room for them in its word for human. Yet it was phrased as a question, and a question posed in Arabic, for they had already entered Lebanon’s language; already refused the denial of the fact they were here, claiming their presence inside a culture that had staked itself on the magical properties of language and therefore could always be learned and transformed from the inside. They told us Kafala wanted them dead and then they reminded us all of the unshakeable speech of their humanity, with echoes of Sojourner Truth declaring, “Ain’t I a woman?” and Rashida Tlaib asking, “Why do the cries of Palestinians sound different to you all?” An uncomplicated confrontation between the alive and the inhumane. Sometimes I wonder what Edward Said would think of African and Asian women who clean homes in Lebanon meeting the gaze of his insistent humanism and asking: Are we not human, too? Would it shake his faith in a world of secular universals or would it simply be a testament to what changed when value overtook values, or when it seemed we were in danger of losing the project of Arabic as heritage of liberation to Arabic as the vision of the Abraham Accords and shopping malls? Of course, Gaza would beg to differ, as would the people of Yemen. We do not yet know what will become of the Arab world after a Free Palestine. The second refrain I kept hearing from the migrant domestic workers of Beirut was even more common than the first. Sometimes it felt as if every single African or Asian current or former migrant worker I met in Lebanon had her own rendition to offer. Rita: “I love Lebanon but the people suck.” Selam: “Lebanon is really nice, but the Lebanese, their hearts are hard.” Michelle: “I love Lebanon, I love the food, the beauty, the mountains, the city, but the problem is the people are too arrogant.” Hawa: “Lebanon is beautiful, but there are many bad people here.” Meseret: “I love Lebanon like it’s my own country, what I don’t like are the people. Most of the people.” Said Beza to Hana, in a conversation about Beirut, “The country does not hate you. Its people hate you.” Makdes: “I love Lebanon but not the people.” One of my favorites came from Zennash: “el-‘aalam khara bas el-balad mitl imm”—“The people are shit, but the place is like a mother.” Yet another antonym as axiom. I came to think of it as: Not all Lebanon is Lebanese. How to reconcile an insistence on a shared humanity that still indicts Kafala’s deadly desire?
What was this Lebanon that everyone held on to, despite so many experiences of mistreatment and cruelty, despite constant reminders of their nonbelonging—what was it that retained Lebanon as referent for the beautiful? Even as Zennash claims “the people are shit,” the excrement emanating from those who lurk as the personification of Kafala’s death drive has somehow not yet swallowed all the shadows of Lebanon’s humanity; of its capacity to cradle the dispossessed. Still, she insists, “the place is like a mother.” I was reminded of yet another contrast. It returns us to the streets, glorious and collective. A scene that temporarily turns those windowless laundry rooms and closets where foreigners are caged inside out, the women of Kafala suddenly resplendent and carrying megaphones. The question referred to Sejaan ‘Azzi, Lebanon’s former Minister of Labor, who had famously publicly opposed domestic workers’ right to unionize in the country. Yet I was struck by the contrast: Why was ‘Azzi the site of a rhetorical question that (in theory if not in practice) could be answered in the negative? Why is Sejaan asked for his confession, whereas Kafala always already wants death? These are the same streets, the same voices, and the same system. Contained in this protest chant is a distinction between individual and structure. The gap is that of a conversation: the ability to ask a question to which the answer can be no, even when it is structurally conditioned and historically determined to be yes. I try and parse it. It could have been the case that the Minister of Labor in Lebanon might not have a domestic worker inside his home, or at least have one he treated well, and this, rather than the stroke of his policy-making pen, would be an opening toward the abolition of Kafala; toward the death of its death. Even as they indict him in their call, reminding us whose invisibilized labor runs the households of the country’s elite, the play of the women’s words lies in their rhymed repetition of “or not?” It is as if the subjects of the Kafala system insist on giving Sejaan—not only as an individual complicit in state power, but as an abstract figure of Lebanese citizenship itself—a way out. A way to say no, a way to be better than himself, a way to not be the category he is hailed by, because they insist that Not all Lebanon is Lebanese.

III. Who Shows Us the Way Out?

From Ethiopia, which she returned to in 2021 after over a decade in Lebanon, Beza sends me a voice note in Arabic.

Sumayya hayati how are you, is everything ok? Hamdilla I’m good, everything is fine, I don’t know how I can tell you fine, but technically, physically, we are fine but psychologically, honestly I’m not fine at all. Every day I’m seeing what’s happening in the world and I’m seeing the extent to which people are clinging to this propaganda that is completely wrong, and I’m seeing people dying, I’m seeing what’s happening. You know, me and you, we’re close to people who are in Palestine and are Palestinian and Lebanese, we know what they think, what their perspectives are, what the truth is, but you try and tell the truth to people and they just don’t want to listen. It’s a huge problem. Every day I just feel like, what in God’s name is happening? I also wish I was in Lebanon.
What’s happening is horrific, it’s just not okay, completely not okay, this propaganda that they’re spreading about Hamas is completely wrong and the people who are dying are not even Hamas, not one of them, the ones dying are women and children and people who have nothing to do with this. It’s so, so painful. Every day I’m sitting and watching and I don’t know what to do, what I can possibly do to help, I don’t have any answers and it hurts so so so much. Honestly I can’t even be happy, every day it becomes morning and I sit in front of the television, then it becomes afternoon and I’m still sitting here watching but I don’t know what to do. This is such a difficult thing for the world, and somehow the rest of them—the Americans, everyone else, they’re happy about this. I don’t know what they get from it, if Palestinians are annihilated from the world how does it benefit them? I simply can’t understand this, it’s unbelievable, you know? And they completely refuse to understand, you try to explain and they just refuse. Half of them, they are blinded by religion, they have all these lies against Muslims and they want revenge against these “terrorists,” but you have to understand their story, these are not terrorists! You try to explain and they just refuse to listen. I know you know the truth too, it’s so hard to bear … I just pray—if there really is a god, that’s what I tell myself, if Allah is here watching what is happening, then do something! Look at what is happening! I don’t know what to say. But C and I are good, we went to the part of Ethiopia that was most affected by the war and we saw so many people, if you saw the way people are living, ya Allah it’s so difficult, to see what an ugly thing war is. I don’t know, apart from that, physically we’re well. The war has been a month, you know, and no one is doing anything. If Gaza is destroyed, will they get it then? Is that what we’re waiting for? Are you speaking to Hana and everyone else? I’m not, I just don’t know what I can say to them … apart from that, I’m good, my daughter is good, C is good … I saw there are protests in Canada, but everywhere in the world no one understands what is happening—if you haven’t lived in an Arab country or you don’t know the story [of Palestine], you just won’t understand, you won’t know anything about Israel. Only if you live with them do you come to understand their story, this is the really difficult and strange thing—I just wish everyone in the world could understand it—I’m good, I’m good hamdilla, a bit scared because of Hizbullah and Lebanon but thank God C came, but we have to keep praying, you and me, because there’s so many people we love in Lebanon and in Palestine.

I used to think: the destruction of Afro-Asian solidarity was the condition of possibility for the Kafala system. I was (partially) wrong. What also happened is that we started living together. And as the map of Kafala’s African and Asian subjects has expanded across the world, so has a new community of those who now bear witness to the struggle for life against empire’s assaults in the Middle East. Bearing memories of their Palestinian, Syrian, Lebanese, Iraqi, and Sudanese friends and neighbors, and not only sponsors or bosses; of political speeches heard on television and bombs just barely escaped under campaigns of total destruction; of revolutionary soundtracks and an unbeatable humor, the migrant workers of the region also bear witness to Israel as a name for death.
And as they have built new communities of belonging configured through the Arabic language, so they have claimed their own centrality to a shared project of liberation. As we envision a region free of war and imprisonment, the women of Kafala look toward us from behind locked kitchen doors and high-rise balconies, insisting that abolition begins from inside the home. It is towards them, also, that the struggle for a free Palestine points us.  Source of the article

GOATReads: Philosophy

A life in Zen

Growing up in countercultural California, ‘enlightenment’ had real glamour. But decades of practice have changed my mind. On 18 May 1904, in a village near Japan’s Sagami Bay, looking out to the white peak of Mt Fuji, the son of a Buddhist priest was born. Shunryu Suzuki grew up amid the quiet rituals of Sōtō Zen, a tradition that prized stillness, repetition and near-imperceptible spiritual refinement. When he wasn’t sweeping temple courtyards, he was studying, preparing to follow in his father’s footsteps. But then, at age 55, after a life steeped in the disciplines of Japanese Zen, Suzuki travelled to San Francisco. He arrived in California in 1959, as the United States’ literary counterculture was turning toward ‘The East’ in search of new ideas. Alan Watts had already begun to popularise this turn through The Way of Zen (1957), a book that offered Americans liberation from the disillusionment and disorientation of the 20th century. A former Anglican priest with a taste for LSD, Watts presented Eastern ideas as a corrective to Western striving. For him, Zen was a way of ‘dissolving what seemed to be the most oppressive human problems’. It offered liberation from the strictures of social conditioning, convention, self-consciousness and even time. Others began to take notice, too. In California, the austere discipline of Japanese monasticism was being reimagined in a new milieu alive with jazz, psychedelics and endless seekers looking for ways to fix the human condition, or at least their own case of it. Suzuki arrived in California with answers. At a dilapidated temple founded for Japanese immigrants in San Francisco’s Japantown, he slowly defined the trajectory of Zen in the US. ‘Just sit,’ he told his followers. ‘Just breathe.’ As this spiritual practice entered the chrome-lit sprawl of the postwar US, it became a new tool for artists, poets, dropouts, bohemians: a technology for awakening. It found kinship, perhaps uneasily, with the Human Potential Movement then flourishing down the coast at the Esalen Institute, where encounter groups, psychedelics and primal scream therapy were all aimed at cultivating a more expansive idea of human flourishing. In this new setting, Zen slotted neatly into the dream of total transformation. Enlightenment was no longer a mountaintop ordeal. Instead, it became a weekend workshop, a practice, a form of self-help. But Suzuki’s original path, the one he learned in Japan, appeared to point elsewhere. Real awakening seemed to demand isolation, silence, the stripping away of ordinary life. It required years in robes, cold zendō (meditation halls), endless chanting, discipline with no immediate reward. This ‘old’ version of Zen was monastic at its core. And so, in the US, Zen seemed to split, quietly, in two. On one side, a casual practice adapted to modern lives; on the other, a pursuit of true enlightenment, suited only to those who could withdraw. This was the complicated spiritual world I was born into: a world of Californian counterculture and Sōtō Zen austerity. And soon, it would set me on a tangled path of my own. Could I follow the ancient route to awakening and still live fully in the world, with all its noise, its complexity, its folly?
I first encountered Zen in the late 1960s when my mother, hoping to make me a better, or at least more bearable, person than the obnoxious middle-school boy I undoubtedly was, dragged me to a group meditation session, known as a ‘sit’, at a community preschool in our hometown of Mill Valley, just north of San Francisco. The sit was hosted by a student of Suzuki’s named Jakusho Kwong. By this time, Suzuki had been in the Bay Area for roughly 10 years and was no longer ministering only to the immigrant and Nisei (or second-generation) communities in Japantown. By 1967, he had helped open the San Francisco Zen Center and had purchased a mountain retreat (named Zenshin-ji, ‘Zen Mind Temple’) near Big Sur. He was also about to acquire an urban temple in San Francisco that would be called Hosshin-ji, ‘Beginner’s Mind Temple’. His students, like Jakusho, were beginning to spread Zen far and wide across California to followers who were overwhelmingly young, white and (mostly) hip. That early morning in Mill Valley, I remember Jakusho stalking around the room, striking sleepy or slouching students with the keisaku (‘wake-up stick’). Each time he came around to me, sighing, he would bend down and straighten my back against the stick in a firm but gentle way. I was relieved not to be hit, but also embarrassed and oddly comforted by his touch. My mother also practised with Suzuki at his mountain retreat, and I came with her one afternoon. I remember one thing from the visit. When we arrived, she stopped the car at the entrance and got out to speak with a monk wearing full robes. They had a brief conversation and the monk, probably responding to a simple question like ‘Where should I park?’, stood back and pointed. Something about the gesture was compelling: the tall, thin man in black with a shaved head pointing as if to say: ‘That is the Way you should go!’ These early, fleeting glimpses of Zen were accompanied by massive doses of the conceptual framing that surrounds Buddhism. My parents had met Alan Watts and his young family through the same preschool in Mill Valley where I first learned to meditate as a child. My father was also friends with the author Dennis Murphy and, through him, met his brother Michael Murphy, the co-founder of the Esalen Institute. Michael introduced my parents to a long list of movers and shakers in the Human Potential Movement, a countercultural spiritual and psychological movement that had drawn in the likes of Aldous Huxley and the Beatles. It was Murphy who invited my parents to observe and participate in early experiments with psychedelic drugs, including LSD and psilocybin. Our family spent a bit of time at Esalen, engaging in ‘encounter sessions’ and other experiences favoured by the Movement. As part of the programme, my parents also experimented on me: with my consent, they gave me LSD when I was 14 or so. I mostly remember a beautiful, dizzying day full of sun and music. The world was shiny, and it danced with images the like of which I had never seen. My parents and their friends never really stopped talking, and when the conversation turned to Zen, there was a flood of information and questions about enlightenment. How could someone ‘get’ it? What did ‘getting enlightened’ actually mean? 
The picture that emerged was of a lasting state attained in a flash, usually due to a profound shift in perception or understanding, and, once you ‘had’ it, the ordinary struggle and suffering of being human would no longer be a problem. This image of enlightenment was based on the experiences of monks in Japanese Zen monasteries, and on the teachings from Suzuki’s mountain retreat, Zenshin-ji, which took Japanese monasticism as its model. The path to enlightenment, according to my adult informants, was a path of monastic seclusion and intensive practice. Japanese Buddhism began in the 6th century, but Zen (originally known as ‘Chán’ in China) didn’t arrive in Japan for another 700 years. Unlike the earlier forms, which relied on strict rituals, historical teachings and doctrine, this new form of Buddhism placed greater emphasis on direct experience. In the Zen school, an unmediated experience of reality – and even enlightenment – was attainable through meditation, deemphasising conceptual thinking. During the 13th century, these ideas began to flourish in Japan after two Japanese monks from the Tendai school of Buddhism, Eisai and Dōgen, travelled to China and were introduced to the teachings of the Chán school. After Eisai returned from China, he established a temple, Kennin-ji, in Kyoto in 1202, which became the central location for those hoping to study the new approach – it remains the oldest Zen temple in the city. A decade or two later, the monk Dōgen journeyed to China with Eisai’s successor, Myōzen. He returned to Japan transformed by his understanding of Chán and established Eihei-ji, a temple in the mountains of Fukui Prefecture, northeast of Kyoto. Working from memory of what he learned in China, he established the codes and forms of monastic conduct that are still followed to this day at Zen temples around the world, including Zenshin-ji, where my mother once practised. At Zenshin-ji, these codes are followed during two periods of ango (‘peaceful abiding’) each year. Practitioners who attend ango adhere to a strict schedule (involving regular days, work days, and rest days) and engage in monthly retreats that generally last a week, known as sesshin (‘mind-gathering’). The schedule on regular days involves roughly four to five hours of meditation wrapped around an afternoon of intensive work. Sesshin days involve very little work but up to 12 hours of meditation. There are also three ceremonial services each day, regular Dharma talks (a formal lecture from a Buddhist teacher), and study time. Meals are eaten in the meditation hall in a style called ōryōki (‘the bowl that holds enough’), which is highly formalised and involves a great deal of ceremony. The overarching standards for conduct during ango emphasise silence and deliberate, harmonious interaction. During ango, ‘meditation practice’ involves alternating periods of zazen (‘seated concentration’), which last from 30 minutes to an hour, and kinhin (‘walking back and forth’), a form of slow walking meditation that takes around 10 minutes and eases the strain from sitting. Zazen is ideally performed while sitting with legs folded in full or half-lotus posture facing the wall in absolute, upright stillness. Serious Zen students might spend five years participating in the ango at Zenshin-ji. Some, seeking an even deeper engagement, might spend their entire lives in monastic practice. I had other plans. 
When I was first exposed to Zen, I was in my early teens and semi-feral. I went to school, of course, but on the weekends, I did everything I could to get away and get outside. The town of Mill Valley lies at the foot of the beautiful Mount Tamalpais, and many weekends were spent hiking and camping there with friends. Sometimes we went further afield, hitchhiking to camp on the beaches of Mendocino, 140 miles away. In summer, I took longer trips: climbing mountains, swimming in ice-cold nameless lakes, sleeping in alpine meadows. A life of monastic seclusion and discipline didn’t appeal to me. And I couldn’t help noticing that the adults I knew who talked about Zen had lives that seemed at odds with their spiritual interests: they had spouses, houses, children, jobs, hobbies, extramarital affairs and addictions, among other things, all of which they would have to abandon if they were to follow the Way. None of them seemed to be willing to take the plunge. Zen didn’t appear compatible with modern life. So when my mother gave me a copy of Suzuki’s Zen Mind, Beginner’s Mind (1970), hot off the press, I read it with genuine interest, but found it easy to put down. For the next 20 years, I hardly thought about Zen. I attended a boarding school on the East Coast. I studied abstract algebra. I learned to play the electric guitar and graduated college with a major in music. Then I worked at a Burger King, then as a pot-washer, and finally as a mechanic. I felt lost and sought advice from an old music professor, but he painted a bleak picture of life as a professional musician. He said I’d be better off moving to New York and playing in punk bands. When I went to ask my mathematics professors what I should do, they were unanimous: ‘You foolish boy. Have you not heard of the digital computer?’ And so, I stumbled into a career in tech, eventually landing in Silicon Valley as a software engineer – a ‘hacker’ as we called ourselves at the time. Living in San Francisco, I found time and energy to play music again. I started a band, the Loud Family, with a gifted singer-songwriter called Scott Miller. We even managed to make a few albums together and tour. And by my 30s, I had quit my job as a software engineer and dedicated myself to music. On paper, my life looked good. I had creative friends, gainful and enjoyable employment, and had even found a way to quit (or at least pause) my ‘day job’ – an opportunity most struggling musicians would kill for. But my experience of this life didn’t match how it looked on paper. In my mid-30s, I was in the middle of a messy divorce (my second) and grieving the untimely death of my father from lung cancer. My problems were also internal. I was always wanting more of this and less of that. Even when things were going well, I was unsatisfied with everything I did and hyper-sensitive to criticism, especially when I made a real mistake, which I often did. This made me hard to work with. I often acted foolishly. I hurt people who deserved better from me. Buddhism has always been a radical explanatory framework and a set of concrete practices that directly address the ‘human condition’. When we look closely, we find people everywhere grappling with the problem of being human. Across cultures and millennia, our species has returned time and again to the same fundamental question: Why do we make such a mess of things and how can we do better? 
The countless responses to this dilemma have given rise to a universal genre that attempts to explain (and solve) human folly. At the start of Homer’s Odyssey, Zeus laments: ‘See now, how men lay blame upon us gods for what is after all nothing but their own folly.’ In the Dàodé jīng (Tao Te Ching), the Chinese sage Lǎozǐ spells out a similar concern: When Dào is lost, there is goodness. When goodness is lost, there is kindness. When kindness is lost, there is justice. When justice is lost, there is ritual. Now ritual is the husk of faith and loyalty, the beginning of confusion. Knowledge of the future is only a flowery trapping of Dào. It is the beginning of folly. Despite the murkiness of its early history, Buddhism crystallised around key axioms that offer different explanations and solutions for human folly. First, human suffering and misbehaviour are built in. They are intimately entangled with qualities that make us human: our capacity to use language, make long-term plans and form complex societies. Second, to use those capacities, we must construct a ‘self’. But this self is often based on flawed narratives shaped by culture and personal experience. Built on faulty assumptions, our self-stories generate desires – manufactured goals, preferences and ideals – which are driven by powerful emotions. But our experience of life often remains unsatisfactory, driving further striving and disappointment. Despite this predicament, the situation is not hopeless. The Buddhist ‘Way’ offers practical tools – ethical conduct, meditation, insight – that can transform our inner lives and outward behaviour. By the time Buddhism evolved into the Chán schools in China (what would later be called ‘Zen’ in Japan), further axioms had been established. First, true learning occurs through relationships; second, awakening unfolds not just through studying texts, but from self-study mostly through zazen. When Zen eventually landed in the West with the arrival of Suzuki and others, these axioms began to find a new space to flourish. Decades after first reading (and shelving) Zen Mind, Beginner’s Mind, I realised that I was in the grip of the very human folly that Buddhism has always sought to address. And so, in the middle of a busy and complicated life, I began an unexpected second career as a Zen practitioner. Had I lived anywhere else in the world, this might not have been possible. The San Francisco Zen Center (SFZC), where I decided to practise, was unique for striving to make a monastic form of Zen accessible to laypeople. I began with the kind of assumptions that are common among beginners. I thought that I would attain a persistent enlightened state through rigorous adherence to the traditional monastic model. I thought that through meditation I would resolve all my personal suffering, and that I would attain a deep understanding of human life. I believed this would change me, and might change those around me, too. I thought I could even become something like a sage, moving through life effortlessly on whatever path I chose. And so, in the early 1990s, I began. Thankfully, because I was playing in rock bands for a living, I had a loose schedule, which enabled me to do a lot of sitting. I could also participate regularly in ango and sesshin at Hosshin-ji. I even did an ango sandwiched around a six-week international tour in support of the band’s second album, and got back in time to sit the seven-day sesshin at the end. 
But after five years of this, it became clear that I needed a ‘real’ job. So I went back to working in tech. From that day on, and for many years after, my life was devoted to balancing the long hours and tight schedules in the tech sector with the long hours and strict schedules of Zen practice. By the mid-2000s, I was growing more serious about Zen and wanted to become a teacher. But my life had become busier. I now had four children, spanning adulthood to toddlerhood. My wife and I were both working, and I was still playing in two bands. My teacher, Ryushin Paul Haller, suggested that I guide an ango at Hosshin-ji by serving as shuso (‘head seat’), a role that a monk must often take on in order to pursue a career as a Zen teacher. Though I was unsure, I agreed mostly due to the Zen principle: when your teacher asks, you say ‘Yes’ without hesitation. During the ango, I would rise at 3:30 am, ride my bike to Hosshin-ji with a brief stop for a donut and coffee on the way, change into my robes, run around the building with a bell to wake everyone up, have tea with Ryushin and his assistant, then open up the meditation hall for zazen at 5:25 am. After a couple of periods of zazen followed by a ceremonial service that involved a lot of vigorous bowing and chanting of the Zen liturgy, and finally breakfast, I would hop on my bike again, ride to the station and take the train to San Jose where I’d work a full day at my tech job. After that, I’d take the train back to San Francisco, ride home and arrive, often as late as 9 pm, to eat dinner alone at the kitchen table. I’d then fall into bed, sleep as much as I could and get up to do it again the next day. It was exhausting and unsustainable – even with a supportive family. The monastic model favoured by SFZC and other institutions sets up social, financial and logistical barriers that are difficult for the majority of Zen aspirants to pass. Though my path was not typical, the Way is fundamentally the same. The most common mistake is to confuse the two. A few weeks after my stint as shuso, Ryushin suggested that I start a zazenkai (Zen sitting group) – an informal, less-intensive kind of practice – at a community centre in my North Beach neighbourhood in San Francisco. I wanted it to be easy enough so that a participant could roll out of bed every weekday and attend a half-hour period of zazen. I kept the rituals basic: incense at a small altar, three bows at the start (accompanied by bells), a single bell at the end. Later, we added the chant used at SFZC temples after morning zazen. It begins by saying the following twice in Japanese:
Dai sai ge da pu ku
musō fuku den e
hi bu nyo rai kyo
kō do shoshu jo
And then once in English:
Great Robe of Liberation
Field far beyond Form and Emptiness
Wearing the Tathagata’s teaching
Saving all beings
The group is still going, roughly 15 years later, and it has taught me a lot in that time. I learned how to balance the intensive and the ordinary. I learned to become less concerned with what my practice should or could be, and more simply dedicated to, as Suzuki would say, making my ‘best effort in each moment’. I have learned to recognise, through intimate, ongoing self-study, the characteristics and processes involved in my own suffering and to open into the spaciousness that’s available in everyday life. 
This space has leavened and counterbalanced the emotions driving my habitual responses – my frustrations, fears, anxieties. Today, my experience of life is the fruit of simple, diligent practice. In Zen Mind, Beginner’s Mind, Suzuki explains the process of becoming enlightened like this: After you have practised for a while, you will realise that it is not possible to make rapid, extraordinary progress. Even though you try very hard, the progress you make is always little by little. It is not like going out in a shower in which you know when you get wet. In a fog, you do not know you are getting wet, but as you keep walking you get wet little by little. To my surprise, this turned out to be true, even when my practice was just the simple act of sitting each day with minimum formality. I wasn’t alone. Though plenty of monks and nuns have spent their lives in monasteries, the monastic path was never considered the only way to go. In fact, from 618 to 907 during the Táng Dynasty (the so-called ‘golden age’ of Chán), laypeople were often held up as exemplary practitioners. One example is Layman Páng and his family. Páng is still celebrated for recognising that mindful attention to ordinary tasks can, over time, become a path to awakening:
How miraculous and wondrous,
Hauling water and carrying firewood!
In the Vimalakirti Sutra, written as early as the 3rd century, another layperson named Vimalakīrti is depicted as a house-holding family man and entrepreneur in the time of the Buddha. Even while lying sick in bed, Vimalakīrti manages to best Mañjuśrī, the Bodhisattva of transcendent wisdom, in a debate, while countless beings cram into his tiny house to watch. In the end, I had to conclude that all the ideas I held about Zen practice when I started were wrong or, at the very least, misleading. There is no persistent state of enlightenment. The pursuit of such a state is vain by definition. There’s no ‘fix’ for the human condition in the sense that I originally sought. The Way is not accomplished by gaining ‘understanding’ in the conventional sense or by forcing the mind to shut up – no matter how appealing that prospect seems. These conclusions arose out of my own direct experience but also out of my reading of the Zen literature, which, for more than 1,000 years, has been stating things differently. The founding documents of the Chán schools in China and the Zen schools in Japan are a fistful of manifestos that point to the particulars of human experience and talk about how to practise with them. These are full of aspirational formulae and encouragement, but, at the same time, fiercely discourage the ‘pursuit’ of awakening or the idea of ‘learning’ how to be enlightened. Instead – at the risk of oversimplifying something that’s bafflingly complex – they describe two major modes of engagement that characterise Zen practice. The first of these modes has been called ‘conventional cognition’ and is a form of thinking that is deeply familiar to most humans. This mode continuously manifests in our conscious and semi-conscious minds, engaging the human qualities of language, planning and sociality. It can appear as a kind of ruminative self-narration underpinned by emotional tags that drive both inner life and outward behaviour. We experience it as a running dialogue in our heads, which expresses our hopes, fears, experiences, desires, uncertainties. 
This mode works by building models (of the world and self) using a vast storehouse of remembered, language-ready categories to imagine future outcomes. These models allow us to navigate the world through anticipatory action. As we move through our day, we imagine how others will perceive us, recalling past events to anticipate their responses and determine what we should say and do. Conventional cognition gets a bad rap in most Buddhist literature because it’s so obviously the cause of the aforementioned human folly and suffering. How could it be otherwise? We are beings with a very limited perspective, provided by our sensory hardware and the experiences in our relatively short lives, operating in a world of ungraspable complexity. We are almost constantly focused on what we think will benefit us, even though our ungraspable world is so richly interconnected that the effects of our actions fall far beyond our understanding or control: determining which factors will genuinely do us good is very hard. What could possibly go wrong? But this same mode is directly responsible, at least in part, for all the beautiful things that humans make and do. Poetry, iPhones, quantum mechanics, Buddhism – none of these would exist without conventional cognition. Furthermore, we literally can’t live without it. The idea that we can somehow exit this mode for any appreciable length of time is absurd. The great Táng Dynasty Chán master Zhàozhōu captured this admirably when he said: ‘The Way is easy. Just avoid choosing.’ He then added, ‘but as soon as you use words, you’re saying, “This is choosing,” or “This is clarity.” This old monk can’t stay in clarity. Do you still hold on to anything or not?’ (My translation.) The other mode posited in Buddhist literature goes by many names, but Suzuki dubs it ‘big mind’. In contrast to the narrow focus of conventional cognition, big mind manifests as a kind of broad, relaxed, receptive attention that, by default, easily gives way to focused attention when circumstances demand it. It is not particularly tied to language. The categories, objects and concepts that are the province of conventional cognition – the elements directly involved in the activities of self-construction and self-narration – have no meaning for big mind. Conventional cognition is driven by powerful emotions; big mind is driven by an appreciation for the simple act of being alive. While both modes are active and ever-present, many people are barely aware of the presence of big mind because their preoccupation with conventional cognition is so strong. One can easily observe this through zazen. Sitting to meditate, we can experience the feeling of being tangled up in thought, which can stop us being aware of ourselves as embodied beings sitting upright. Just paying attention to our breathing can be a struggle as thoughts intrude. The point of a sitting practice is to wholeheartedly study, as intimately as possible, the moment-to-moment activity of your body and mind until big mind swims into view, even briefly. From there, the tangled relationship between these two modes becomes clearer, and big mind begins to take its natural place in our everyday lives – not only while sitting zazen, but also while walking, talking, working and playing. This is an answer to Zhàozhōu’s question: do you still hold on to anything or not? 
A new relationship between big mind and conventional cognition is what we preserve: a continuous practice of staying awake to our activity and its consequences in the context of big mind. One might reasonably ask: ‘Well, what good is that?’ It’s an excellent question and the answer has two parts. First, on a practical level, when we meet the world through big mind, even imperfectly, the grip of conventional cognition is loosened. This doesn’t mean our habitual responses disappear. On the contrary, they sometimes become more visible. But they are now surrounded by a sense of space and choice. We’re no longer compelled to act on our habitual responses, and it becomes easier to consider more skilful alternatives. We find ourselves entering each moment with a new awareness as our sensory experience of the world outside meets the inner world of our concepts and habits, and through that meeting – infused with a kind of compassionate curiosity – a way forward takes shape seemingly of its own accord. After we act, we see the results, and then begin the cycle again in a way that feels more agile and spontaneous. Second, beyond its practical benefits, this practice opens us to experiences that go far beyond what we think of as ‘the everyday’. It underscores what many historical traditions have observed: the full range of human experience is much broader than we normally expect. I have been practising in this way for roughly 35 years. Some of this practice was intensely monastic and formal. In that time, I passed through three successive ordination ceremonies – lay ordination, priest ordination, and Dharma transmission – with my teacher Ryushin. Each of these involved long periods of preparation, a lot of it spent sewing together a robe in the same way that monks have done for thousands of years. The ceremonies themselves, especially the Dharma transmission, which takes weeks to perform, are designed to change the life of the ordinand by moving the rock of practice nearer to the centre of the river of their life. After each ordination, I felt suddenly and startlingly different, and the vow to take full responsibility going forward for my conduct and its consequences gathered weight. But, more often, I practised in the context of a busy life involving work, family and passions (for me, art-making and long-distance cycling). And, over the decades, the reward of continuous practice – as emphasised by Dōgen Zenji, Suzuki and countless other teachers through the centuries – has become more deeply embedded in my being. It has manifested in my day-to-day life. As usual, this change has been both sudden and gradual. So, what are we to do? How are those of us still caught in the flux of the ‘modern’ world supposed to find peace, alleviate suffering, and confront human folly? My own experience might suggest a deprecation of monasticism, but this would be inaccurate. Monastic practice, tuned as it has been for thousands of years, is an excellent vehicle for exactly this exploration. A person who completely gives themselves over to the forms and schedules prepared for them is constantly being reminded of the beauty and the burden of conventional cognition. Again and again, they are given the opportunity to lay down their burden. Initially, they may not even recognise this invitation. Later, they might ignore or resist it, clinging to ideas they’ve developed about how things ought to be. 
But, in the long run, at least some practitioners are able to loosen their grip. That said, of the few people who are financially and logistically able to take advantage of extended monastic practice, fewer still are able to follow those forms and schedules completely. Age, fitness, physical incompatibility, disability and other constraints often limit participation in traditional monastic practices. Fortunately, the heart of zazen has nothing to do with where you live, whether you can twist your legs into lotus posture or whether you like getting up at 3:30 am. Simply taking up the posture of zazen in a quiet room has a powerful effect on body and mind. To do it, find a quiet place and, in that place, find a posture that will allow you to keep physically still for 30 minutes or so. If that is difficult or you can’t sit comfortably for 30 minutes, standing or lying down is also an option. Zazen is essentially a yogic practice, which invites a particular kind of continuous engagement, especially with the body and breath but also with the mind and senses. The posture, both inner and outer, should feel simultaneously relaxed and energised. A helpful principle is ‘always be sitting’. This doesn’t mean one must literally sit – those standing or lying down are free to construe ‘sitting’ metaphorically. ‘Always be sitting’ means that whenever one is engaged in zazen, one should, as much of the time as possible, be bodily engaged. This means forming the sitting posture as if it were happening for the very first time, feeling the actual rate and depth of the breath, bringing attention to where discomfort arises and, perhaps, moving gently and deliberately to relieve it. This kind of attention isn’t always easy. Sometimes we’re able to be present and sometimes we’re not. Such unpredictability is often a source of struggle for Zen students because they think they’re supposed to be ‘quieting the mind’ and they see the moments when they’re not as a failure. But this is fundamentally incorrect and unhelpful. The real invitation in zazen is not to suppress thinking, but to participate fully in, and become intimately aware of, our own version of the attentional cycle. Is it short or long? What does it feel like to be present? What does it feel like to be ‘non-present’? This last question is important. In the early stages of practice, when the mind is fully engaged in conventional cognition – especially when emotionally charged thoughts and stories are present in the mind – the broader view afforded by big mind is obscured. And the transition between these states, which can happen repeatedly during a single sitting, is extraordinarily subtle. One moment you can be sitting in awareness, and the next you’re simply thinking about your day – remembering a conversation or anxiously anticipating something – without knowing how you got there. This can get complicated. Consider the story of a monk I practised with at Zenshin-ji, who told me she once walked in the garden and was utterly transfixed by the sight of a blooming flower. Its beauty, perfection and aliveness stopped her in her tracks and, as she paused there to take it in, she was moved to tears. 
But then, some seconds later, a thought arose: ‘And… she was moved to tears!’ She heard this phrase in her mind, and the tone of the thought had such a sneering, mocking intonation that she immediately reacted angrily, raging at herself for spoiling the immediacy of the moment by commenting on and distancing herself from the experience. In the end, the relationship between big mind and conventional cognition is a tangled weave: as our attentiveness is erased, we can go from unconditioned appreciation to self-condemnation in a flash. On the other hand, the transition back into big mind doesn’t involve this erasure. When we return, we are still intimately (or at least retrospectively) aware of the thoughts, emotions and texture of whatever episode we’re right in the middle of. And in this return lies the heart of zazen: the possibility of making a subtle, almost imperceptible effort to broaden and settle our attention, to fully inhabit the moment, and to be completely present for what comes next. Once a routine is established, a regular dip into more intensive practice can be tremendously helpful. Though daily practice can feel like walking in a fog and slowly being ‘dampened’ by the effects, more intensive practice offers something different. Even devoting a single day can allow the mind to settle more deeply, freed temporarily from our regular preoccupations. Such moments can offer surprising clarity. Perhaps the corresponding metaphor is more like a long swim in a mountain lake or, if you prefer, a quiet forest pool. Finally, it’s crucial to have companions on the Way. Practising zazen in the company of others provides powerful support and shared commitments. Comparing experiences and testing insights with others can be an antidote to the blindspots and subtle delusions that plague us all. Sustained collective practice also helps cultivate samadhi – the deep meditative absorption that clarifies perception and steadies the mind. That’s why, in Buddhism, the ‘three treasures’ are Buddha (the historical figure and the symbol of the potential for awakening), Dharma (the teachings), and Sangha (the community). Dōgen Zenji, the remarkable person who really established the Zen school in Japan during the early 13th century, wrote a manifesto called Fukanzazengi, a title that may be translated (very loosely) as ‘Everyone should be sitting zazen like this.’ The last few lines, in the translation used by SFZC, read: Gain accord with the enlightenment of the Buddhas; succeed to the legitimate lineage of the ancestors’ samadhi. Constantly perform in such a manner and you are assured of being a person such as they. Your treasure-store will open of itself, and you will use it at will. In the 1960s, after Zen had travelled from Japan to San Francisco, many Americans embraced this imported spiritual practice with a hopeful, if often misguided, belief: enlightenment – the ultimate fix for the human condition – demanded monastic discipline and withdrawal from the clatter of modern life. To find a cure for human folly, one had to step outside the world entirely. I carried the same conviction. When I look back at the person who began practising Zen in earnest more than 30 years ago, I can see that my initial ideas about Zen were misguided. As those ideas have slowly unravelled, the texture and quality of my life has been transformed. It has become richer, more vivid, and more deeply alive than I could ever have imagined. 
I’ve come to understand that the Way isn’t a destination located far from the world, reached only by force of will or sudden insight. It unfolds through steady, daily practice. What Zen offers is quiet, strange and radical: a form of engagement that begins almost imperceptibly but can grow into something truly transformative. As Suzuki put it, if you walk through fog long enough, you’ll eventually be soaked. Slowly, you begin to inhabit the texture of your own life more completely. Eventually, you stop trying to be elsewhere. You begin to realise the Way was never hidden up a mountain. It’s right here, buried far beneath your own ideas about who you should be. Source of the article

GOATReads: Sociology

D.A.R.E. Is More Than Just Antidrug Education—It Is Police Propaganda

Almost all Americans of a certain age have a DARE story. Usually, it’s millennials with the most vivid memories of the program—which stands for “Drug Abuse Resistance Education”—who can recount not only their DARE experiences from elementary school but also the name of their DARE officer. Looking back on DARE, many recall it as an ineffective program that did little to prevent drug use, which is why they are often surprised that the program still exists. In fact, DARE celebrated its 40th anniversary last year. Schools continue to graduate DARE classes, albeit at a far slower pace than during the program’s heyday in the 1980s and 1990s. While DARE gained widespread support and resources on the presumption that it was an alternative to the supply-side approaches to the drug war that relied on arrest and incarceration, my research shows that DARE was less an alternative to policing and more a complementary piece of law enforcement’s larger War on Drugs. As police fought and continue to fight a drug war primarily through violent criminalization, arrest, and incarceration, their presence in schools presents law enforcement with a way to advance the police mission of defending the “law-abiding” from the “criminal element” of society by another means. In the process, DARE offers reliably positive public relations when reactionary police activities garner unwanted political critique or public protest, providing a kind of built-in legitimacy that shields against more radical efforts to dismantle police power. DARE America, the nonprofit organization that coordinates the program, suggests that DARE has evolved into a “comprehensive, yet flexible, program of prevention education curricula.” But the program remains largely faithful to its original carceral approach and goal of legitimizing police authority through drug education and prevention. The revised curriculum still ultimately skews toward an abstinence-only, zero-tolerance approach that criminalizes drugs and drug users. It fails to embrace harm reduction approaches, such as sharing information on how students can minimize the health risks if they do choose to use drugs, even as research increasingly demonstrates the effectiveness of such methods and as knowledge about the harmful effects of hyperpunitive, abstinence-only drug education becomes more mainstream. DARE’s reluctance to change—especially change that diminishes the police’s authority to administer drug education—should not come as a surprise. My new book, DARE to Say No: Policing and the War on Drugs in Schools, offers the first in-depth historical exploration of the once-ubiquitous and most popular drug education program in the US, charting its origins, growth and development, cultural and political significance, and the controversy that led to its fall from grace. Although DARE lost its once hegemonic influence over drug education, it had long-lasting effects on American policing, politics, and culture. As I suggest in DARE to Say No, after the establishment of DARE and the deployment of the DARE officer as the solution to youth drug use, there was almost no approach to preventing drug use that did not involve police. In doing so, DARE ensures that drug use and prevention, what many experts consider a public health issue, continues to fall under the purview of law enforcement. 
It is another example of the way the police have claimed authority over all aspects of social life in the United States even as evidence of the deadly consequences of this expansion of police power has come to public attention in recent years with police killings in response to mental health and other service calls. Viewed in this light, DARE administrators continue to see the program as a reliable salve for the police amid ongoing police brutality, violence, and abuse. Revisiting this history of the preventive side of America’s long-running drug war offers vital lessons for drug education today, cautioning us to be wary of drug prevention initiatives that ultimately reinforce police power and proliferate state violence in our schools and communities. DARE was, in fact, born out of police failure. The brainchild of the Los Angeles Police Department’s (LAPD) chief of police Daryl Gates and other brass, the drug education program got its start in Los Angeles, where LAPD’s efforts to stem youth drug use had repeatedly failed. The LAPD had actually tried placing undercover officers in schools as early as 1974 to root out drug dealers, but drug use among young Angelenos only increased in the intervening years, making a mockery of the police’s antidrug enforcement in schools. Recognizing this failure, Gates looked for an alternative to supply reduction efforts, which relied on vigorous law enforcement operations. He began talking about the need to reduce the demand for drugs, especially by kids and teenagers. In January 1983, he approached the Los Angeles Unified School District (LAUSD) with an idea: schools needed a new type of drug education and prevention program. Working with LAUSD officials, LAPD brass developed a proposal for the use of police officers to teach a new form of substance abuse education in Los Angeles schools. The program that emerged from that work was Project DARE. The joint LAPD and LAUSD venture launched a pilot program in the fall of 1983. Project DARE came at a moment when the LAPD waged a violent and racist drug war on city streets. If Gates promoted DARE as an alternative, he was certainly no slouch when it came to combatting drugs. A longtime LAPD officer who had helped create the nation’s first SWAT team in Los Angeles following the 1965 Watts uprising, Gates believed in asserting aggressive police power to wage what he described as a literal war to control the streets, especially in the city’s Black and Latinx neighborhoods. Gates rose to chief of police in 1978 and oversaw a vigorous and violent war on drugs and gangs, relying on a destructive mix of antidrug raids and gang sweeps that targeted Black and Latinx youth. Perhaps Gates’s most notorious statement about his attitude toward the treatment of drug users came when he quipped to a congressional committee, “The casual user ought to be taken out and shot.” Gates’s militarized and flagrantly racist approach to drug and crime enforcement provoked growing scrutiny from antipolice activists who called out the LAPD for its racism and abuse in the years prior to the 1991 beating of Rodney King and the 1992 Los Angeles rebellion. Against this backdrop, DARE’s focus on prevention and education in schools offered the LAPD a means to counteract this tough, violent image of the warrior cop, not to mention Gates’s own punitive rhetoric. 
While publicly framed as an alternative to tough antidrug policing, DARE also offered the police a means to enhance their legitimacy and bolster their institutional authority at the very same time their aggressive urban policing practices were alienating predominantly Black and Latinx working-class communities and prompting growing charges of racism and brutality within LAPD’s ranks. In its first iteration, DARE consisted of a 15-week (later expanded to 17-week) curriculum delivered in 50 classrooms. Deploying veteran police officers to the classroom beat was a calculated move. Program designers, along with many educators, believed that the youth drug crisis was so advanced, and students as young as fifth graders so savvy about drugs and drug culture, that teachers were out of their depth when it came to teaching about drugs. By contrast, the thinking went, because police had experience with the negative consequences of drug use, they had much more credibility for this generation of supposed young drug savants. But it was not only that police officers had experience with drugs that lent them credibility when compared to classroom teachers. For many law enforcement officials, DARE became a shining example of how the police could wage the drug war in the nation’s schools through prevention rather than enforcement of drug laws. Focusing on prevention and education would “soften” the aggressive image of the police that routinely appeared in exposés on crack and gang violence on the nightly news and in national newsmagazines such as Newsweek and Time. As teachers, DARE officers would promote a more responsible and professional police image. Early returns pointed to an effective and successful program. Studies conducted in the mid-1980s by Glenn Nyre of the Evaluation and Training Institute (ETI), an organization hired by the LAPD to evaluate the program in its early years, found positive results when it came to student attitudes about drug use, knowledge of how to say no, and respect for the police. School administrators and classroom teachers also responded to the program with gusto, reporting better student behavior and discipline in the classroom. Students also seemed to like the program, though most of the evidence of student reactions came from DARE essays written in class or from DARE’s public relations material. As one DARE graduate recalled when the program ended, “I’m sad, because we can’t see our officer again and happy because we know we don’t have to take drugs.” That LAPD handpicked ETI to conduct this assessment suggests it was hardly an independent evaluation, a fact that some observers noted at the time. Nevertheless, such initial positive results gave LAPD and LAUSD officials a gloss of authority and primed them to make good on their promise of bringing the program to every student in the country. And they very nearly did. Within a decade of its founding, DARE became the largest and most visible drug prevention program in the United States. At its height, police officers taught DARE to fifth- and sixth-grade students in more than 75 percent of American school districts as well as in dozens of countries around the world. Officers came to Los Angeles to be trained in the delivery of the DARE curriculum. The demand for DARE led to the creation of a network of training centers across the country, which vastly expanded the ranks of trained DARE officers. 
DARE leaders also created a DARE Parent Program to teach parents how to recognize the signs of youth drug use and the best approach to dealing with their kids who used drugs. DARE, in short, created a wide network that linked police, schools, and parents in the common cause of stopping youth drug use. Everyone seemed to love DARE. Especially politicians. Congressmembers from both parties fawned over it. In congressional hearings and on the floor of Congress, they lauded the program and allocated funds for drug education and prevention programming in the Drug-Free Schools and Communities Act (DFSCA) provisions of the 1986 Anti-Drug Abuse Act. Amendments to the DFSCA in 1989 referenced the use of law enforcement officers as teachers of drug education and, more directly, a 1990 amendment mentioned the DARE program by name. President Reagan was the first president to announce National DARE Day, a tradition that continued every year through the Obama presidency. Bill Clinton also singled out the program in his 1996 State of the Union address, stating, “I challenge Congress not to cut our support for drug-free schools. People like the D.A.R.E. officers are making a real impression on grade-school children that will give them the strength to say no when the time comes.” Rehabilitating the police image and sustaining police authority by supporting DARE was very much a bipartisan effort. Political support for DARE reflected the program’s widespread popularity among several constituencies. Law enforcement officials hoped it would be a way to develop relationships with kids at the very moment they waged an aggressive and violent war on drugs on the nation’s streets. Educators liked it because it absolved them from teaching about drugs and meant teachers got a class period off from teaching. Parents, many of whom felt they did not know how to talk to their kids about drugs, also saw value in DARE. As nominal educators, DARE officers became part of schools’ daily operation. Even as they wore their uniforms, they were unarmed and explicitly trained not to act in a law enforcement role while on campus. DARE officers would not enforce drug laws in schools but rather teach kids self-esteem, resistance to peer pressure, and how to say no to drugs. In the minds of the program’s supporters, turning police into teachers tempered the drug war by helping kids learn to avoid drugs rather than targeting them for arrest. Officers did much more than just teach DARE classes. DARE officers embedded themselves into their communities, engaging in a wide variety of extracurricular activities. For instance, one officer coached a DARE Track Club. Another coached a football team. Some even played Santa Claus and Rudolph during the holidays. To bolster their authority on a national scale, DARE administrators constructed a public relations campaign enlisting athletes and celebrities to promote the program and facilitate trust between children and the police. DARE was more than just a feel-good program for the police and youth, however: law enforcement needed it, and not just for the purported goal of fighting drugs. DARE offered a means to burnish the public image of policing after years of aggressive and militarized policing associated with the drug war and high-profile episodes of police violence and profiling, such as the beating of Rodney King in Los Angeles or the discriminatory targeting of the Central Park Five in New York. 
By using cops as teachers, DARE administrators and proponents hoped to humanize the police by transforming them into friends and mentors of the nation’s youth instead of a uniformed enemy. DARE’s proponents insisted that kids took the police message to heart. As DARE America director Glenn Levant made clear, DARE’s success was evident during the 1992 Los Angeles rebellion, when, instead of protesting, “we saw kids in DARE shirts walking the streets with their parents, hand-in-hand, as if to say, ‘I’m a good citizen, I’m not going to participate in the looting.’” The underlying goal was to transform the image of the police in the minds of kids and to develop rapport with students so that they no longer viewed the police as threatening or the enforcers of drug laws. But DARE’s message about zero tolerance for drug use—and the legitimacy of police authority—sometimes led to dire consequences that ultimately revealed law enforcement’s quite broad power to punish. The most high-profile instances occurred when students told their DARE officers about their parents’ drug use, which occasionally led to the arrest of the child’s family members. Students who took the DARE message to heart unwittingly became snitches, serving as the eyes and ears of the police and giving law enforcement additional avenues for surveilling and criminalizing community drug use. DARE was not a benign program aimed only at preventing youth drug use. It was a police legitimacy project disguised as a wholesome civic education effort. Relying on the police to teach zero tolerance for drugs and respect for law and order accomplished political-cultural work for both policy makers and law enforcement, who needed to retain public investment in law and order even amid credible allegations of police misconduct and terror. Similarly, DARE diverted attention from the violent reality of the drug war that threatened to undermine trust in the police and alienate constituencies who faced the brunt of such policing. Through softening and rehabilitating the image of police for impressionable youth and their families, DARE ultimately enabled the police to continue their aggressive tactics of mass arrest, punishment, and surveillance, especially for Black and Latinx youth. Far from an alternative to the violent and death-dealing war on drugs, DARE ensured that its punitive operations could continue apace. But all “good” things come to an end. By the mid-1990s, DARE came under scrutiny for its failure to prevent youth drug use. Despite initial reports of programmatic success, social scientists evaluating the program completed dozens of studies pointing to DARE’s ineffectiveness, which led to public controversy and revisions to the program’s curriculum. Initially, criticism from social science researchers did little to dent the program’s popularity. But as more evidence came out that DARE did not work to reduce youth drug use, some cities began to drop the program. Federal officials also put pressure on DARE by requiring that programs be verified as effective by researchers to receive federal funds. By the late 1990s, DARE was on the defensive and risked losing much of its cultural cachet. In response, DARE adapted. It revised its curriculum and worked with researchers at the University of Akron to evaluate the new curriculum in the early 2000s. 
Subsequent revisions to the DARE curriculum, developed in close partnership with experts and evaluators, led to the introduction of a new version of the curriculum in 2007 called “keepin’ it REAL” (kiR). The kiR model decentered the antidrug message of the original curriculum and emphasized life skills and decision-making in its place. For all the criticism and revision, however, few observers ever questioned, or studied for that matter, the efficacy of using police officers as teachers. Despite the focus on life skills and healthy lifestyles, DARE remains a law enforcement–oriented program with a zero-tolerance spirit, aiming to help kids, in the words of DARE’s longtime motto, “To Resist Drugs and Violence.” While DARE remains alive and well, its future is increasingly uncertain. The dramatic rise in teen overdose deaths from fentanyl has renewed demands for drug education and prevention programs in schools. Rather than following DARE’s zero-tolerance playbook, some school districts have explored adopting new forms of drug education programming focused on honesty and transparency about drug use and its effects, a model known as harm reduction. The Drug Policy Alliance’s (DPA) Safety First drug education curriculum, for instance, is based on such principles. Rather than pushing punitive, abstinence-only lessons, Safety First emphasizes scientifically accurate and honest lessons about drugs and encourages students to reduce the risks of drug use if they choose to experiment with drugs. Most notably, it neither requires nor encourages the use of police officers to administer its programming. The implementation of Safety First marks the beginning of what could promise to be a vastly different approach to drug education and prevention programs. It is a welcome alternative to drug education programs of the past. As the history of DARE demonstrates, police-led, zero-tolerance drug education not only fails to reduce drug abuse among youth, but also serves as a massive public relations campaign for law enforcement, helping to obscure racist police violence and repression. It is high time Americans refuse to take the bait. Source of the article

Life happened fast

It’s time to rethink how we study life’s origins. It emerged far earlier, and far quicker, than we once thought possible. Here’s a story you might have read before in a popular science book or seen in a documentary. It’s the one about early Earth as a lifeless, volcanic hellscape. When our planet was newly formed, the story goes, the surface was a barren wasteland of sharp rocks, strewn with lava flows from erupting volcanoes. The air was an unbreathable fume of gases. There was little or no liquid water. Just as things were starting to settle down, a barrage of meteorites tens of kilometres across came pummelling down from space, obliterating entire landscapes and sending vast plumes of debris high into the sky. This barren world persisted for hundreds of millions of years. Eventually, the environment settled down enough that oceans could form, and the conditions were finally right for microscopic life to emerge. That’s the story palaeontologists and geologists told for many decades. But a raft of evidence suggests it is completely wrong. The young Earth was not hellish, or at least not for long (in geological terms). And, crucially, life formed quickly after the planet solidified – perhaps astonishingly quickly. It may be that the first life emerged within just millions of years of the planet’s origin. With hindsight, it is strange that the idea of hellscape Earth ever became as established as it did. There was never any direct evidence of such lethal conditions. However, that lack of evidence may be the explanation. Humans are very prone to theorise wildly when there’s no evidence, and then to become extremely attached to their speculations. That same tendency – becoming over-attached to ideas that have only tenuous support – has also bedevilled research into the origins of life. Every journalist who has written about the origins of life has a few horror stories about bad-tempered researchers unwilling to tolerate dissent from their treasured ideas. Now that the idea of hellscape Earth has so comprehensively collapsed, we need to discard some lingering preconceptions about how life began, and embrace a more open-minded approach to this most challenging of problems. Whereas many researchers once assumed it took a chance event within a very long timescale for Earth’s biosphere to emerge, that increasingly looks untenable. Life happened fast – and any theory that seeks to explain its origins now needs to explain why. One of the greatest scientific achievements of the previous century was to extend the fossil record much further back in time. When Charles Darwin published On the Origin of Species (1859), the oldest known fossils were from the Cambrian period. Older rock layers appeared to be barren. This was a problem for Darwin’s theory of evolution, one he acknowledged: ‘To the question why we do not find records of these vast primordial periods, I can give no satisfactory answer.’ The problem got worse in the early 20th century, when geologists began to use radiometric dating to firm up the ages of rocks, and ultimately of Earth itself. The crucial Cambrian period, with those ancient fossils, began 538.8 million years ago. Yet radiometric dating revealed that Earth is a little over 4.5 billion years old – the current best estimate is 4.54 billion. This means the entire fossil record from the Cambrian to the present comprises less than one-eighth of our planet’s history. 
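As a rough arithmetic check of that ‘one-eighth’ figure, using only the numbers quoted above (the Cambrian began 538.8 million years ago; Earth is about 4.54 billion years old):

\[
\frac{538.8\ \text{Myr}}{4540\ \text{Myr}} \approx 0.119 \;<\; \frac{1}{8} = 0.125
\]

So the familiar, fossil-rich part of the record does indeed cover a little under one-eighth of the planet’s history.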
However, in the mid-20th century, palaeontologists finally started finding older, ‘Pre-Cambrian’ fossils. In 1948, the geologist Reg Sprigg described fossilised impressions of what seemed to be jellyfish in rocks from the Ediacara Hills in South Australia. At the time, he described them as ‘basal Cambrian’, but they turned out to be older. A decade later, Trevor Ford wrote about frond-like remains found by schoolchildren in Charnwood Forest in England; he called them ‘Pre-Cambrian fossils’. The fossil record was inching back into the past.

By 1980, the fossil record stretched back over a truly epic span of time. On 3 April that year, a pair of papers was published in Nature, describing yet more fossils from Australia. They were stromatolites: mounds with alternating layers of microorganisms and sediments. In life, microbes like bacteria often grow in mats. These become covered in sediments like sand, and a new layer of cells grows on top, over and over. Stromatolites were well known, but these, from the Pilbara region in Western Australia, were astonishingly old. One set was 3.4 billion years old; the other looked like it might be even older, as much as 3.5 billion years old.

Over the past 45 years, palaeontologists have meticulously re-analysed the Pilbara remains to confirm that they are real. It’s not a trivial problem: with rocks that ancient, strange distortions can form that look like fossilised microbes but are actually just deformed rock. To resolve this, researchers have deployed an array of techniques, including searching for traces of organic matter. At this point, we are as confident as we can be that the Pilbara fossils are real. That means life has existed for at least 3.5 billion years. When I wrote The Genesis Quest back in 2020, I said this gave us a billion-year window after the formation of Earth in which life could have formed. Since then, the evidence for life has been pushed even further back in time.

Until relatively recently, many researchers would have said the window was distinctly narrower than that, because there were reasons to think that Earth was entirely uninhabitable for hundreds of millions of years after it formed. The first obstacle to life’s emergence was the Moon’s formation. This seems to have happened very soon after Earth coalesced, and in the most dramatic way imaginable: another planetary body, about the size of Mars, collided with Earth. The impact released so much energy that it vaporised the surface of the planet, blasting a huge volume of rock and dust into orbit. For a little while, Earth had a ring, until all that material gradually fused to form the Moon. This explosive scenario is the only one anyone has so far thought of that can explain why Moon rocks and Earth rocks have such similar isotopic compositions. It seems clear that, if there was any nascent life on the young Earth, it was obliterated in the searing heat of the impact.

Still, this happened around 4.5 billion years ago. What about the billion years between the Moon-forming impact and the Pilbara fossils? We can divide this vast span of time into two aeons, separated by one simple factor: the existence of a rock record. The oldest known rocks are 4.031 billion years old. The half-billion years before that is called the Hadean; the time after it is called the Archean.
As its ominous name suggests, the Hadean was assumed to have been hellish. In the immediate aftermath of the Moon-forming impact, the surface was an ocean of magma that slowly cooled and solidified. Artist’s impressions of this aeon often feature volcanoes, lava flows and meteorite impacts. The early Archean, if anything, seemed to be worse – thanks to a little thing called the Late Heavy Bombardment. Between around 3.8 and 4 billion years ago, waves of meteoroids swept through the solar system. Earth took a battering, and any life would have been obliterated. Only when the bombardment eased, 3.8 billion years ago, could life begin. In which case, life began in the 300 million years between the end of the Late Heavy Bombardment and the Pilbara fossils.

This was a compelling narrative for many years, repeated uncritically in many books about the origins and history of life. Yet there were always nagging issues. In particular, palaeontologists kept finding apparent traces of life from older strata – life that was, on the face of it, too old to be real.

As early as 1996, the geologist Stephen Mojzsis, then at the University of California, San Diego, and his colleagues were reporting that life was older than 3.8 billion years. They studied crystals of apatite from 3.8-billion-year-old rocks of the Isua supracrustal belt in West Greenland. Within the crystals are traces of carbon, which proved to be rich in one isotope, carbon-12, and poor in the heavier carbon-13. This is characteristic of living matter, because living organisms prefer to use carbon-12.

Nearly two decades later, the record was extended even further back in time by Elizabeth Bell at the University of California, Los Angeles and her colleagues. They studied thousands of tiny zircon crystals from the Jack Hills of Western Australia. Some of these crystals are Hadean in age: since there are no rocks from the Hadean, these minuscule shards are almost all we have to go on. One zircon proved to be about 4.1 billion years old. Trapped within it was a tiny amount of carbon, with the same telltale isotope mixture that suggested it was biogenic.
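The ‘telltale isotope mixture’ in these studies is usually reported as a δ13C value: the per-mil deviation of a sample’s carbon-13 to carbon-12 ratio from a reference standard. The short Python sketch below shows the calculation; the roughly −20 to −30 per-mil range for biologically processed carbon is standard geochemistry, but the specific ratios plugged in are purely illustrative.

    # A small sketch of the delta-13C notation geochemists use when they describe
    # ancient carbon as 'rich in carbon-12'. Values here are illustrative only.
    VPDB_RATIO = 0.0112  # approximate carbon-13/carbon-12 ratio of the reference standard

    def delta_13c(sample_ratio: float, standard_ratio: float = VPDB_RATIO) -> float:
        """Per-mil deviation of a sample's 13C/12C ratio from the standard."""
        return (sample_ratio / standard_ratio - 1) * 1000

    # Hypothetical measurement: carbon that has passed through living organisms is
    # typically depleted in carbon-13, giving values of roughly -20 to -30 per mil,
    # whereas carbon from inorganic carbonates sits near zero.
    print(round(delta_13c(0.0109), 1))  # about -26.8 per mil: a 'biological' signature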
Perhaps most dramatically, in 2017, Dominic Papineau at University College London and his colleagues described tubes and filaments, resembling colonies of bacteria, in rocks from the Nuvvuagittuq belt in Quebec, Canada. The age of these rocks is disputed: they are at least 3.77 billion years old, and a study published this June found that some of them are 4.16 to 4.20 billion years old. This would mean that life formed within a few hundred million years of Earth’s formation, deep in the Hadean.

There are many more such studies. None of them is wholly convincing on its own: they often rely on a single crystal, or on a rock formation that has been heated and crushed, and thus distorted. Each study has come in for strong criticism. This makes it difficult to assess the evidence, because there are multiple arguments in play. A believer in an early origin of life would highlight the sheer number of studies, from different parts of the world and using different forms of evidence. A sceptic would counter that we should accept a fossil only if it is supported by multiple lines of evidence, as happened in the Pilbara. To which a believer would reply that the rock record from the early Archean is very sparse, and there are no rocks from the Hadean at all: it is simply not possible to obtain multiple lines of evidence from such limited material, so we must make a judgment based on what we have. The sceptic would then say: in that case, we don’t and can’t know the answer.

For many years, the sceptics carried the argument, but more recently the tide has turned. This is partly because the fossil evidence of early life has accumulated – but it is also because the evidence for the Late Heavy Bombardment sterilising the planet has collapsed. An early crack in the façade appeared when Mojzsis and Oleg Abramov at the University of Colorado simulated the Late Heavy Bombardment and concluded that it was not intense enough to sterilise Earth. Surface life might have been obliterated, but microbes could have survived underground in huge numbers. The bigger issue, however, is that the Late Heavy Bombardment may not have happened at all. The evidence rested on argon isotopes in Moon rocks collected by the Apollo missions in the 1960s and ’70s. A re-analysis found that the argon data were prone to a specific kind of artefact, one that creates the illusion of a sharp burst of impacts 3.9 billion years ago. What’s more, the Apollo missions all went to the same region of the Moon, so the astronauts may have mostly collected rocks from the same big impact – which would all naturally be the same age. Meanwhile, rocks on Earth preserve evidence of past impacts, and they show a long, slow decline until 3 billion years ago, or later. Likewise, giant impacts on Mars appear to have tailed off by 4.48 billion years ago, and there is no sign of a Late Heavy Bombardment on the asteroid Vesta.

If the Late Heavy Bombardment really didn’t happen, then it is reasonable to imagine that life began much earlier – perhaps even in the Hadean. The problem is how to demonstrate it when the fossil evidence is so impossibly scant. This is where genetics comes in – specifically phylogenetics, which means building family trees of different organisms that show how they are related and when the various splits occurred. For example, phylogenetics tells us that humans, chimpanzees and bonobos are descended from a shared ancestor that lived about 7 million years ago. By constructing family trees of the oldest and most divergent forms of life, phylogeneticists have tried to push back to the last universal common ancestor (LUCA). This is the most recent population of organisms from which every living thing today is descended: the great-great-etc grandmother of all of us, from bacteria to mosses to scarlet macaws.

Estimating the date of LUCA is fraught with uncertainties, but in the past decade phylogeneticists have started to narrow it down. One such attempt was published by a team led by Davide Pisani at the University of Bristol in the UK. They created a family tree of 102 species, focusing on microorganisms, as those are the oldest forms of life, and calibrated the tree using 11 dates known from the fossil record. The headline finding was that LUCA was at least 3.9 billion years old. In 2024, many of the same researchers returned with a more detailed analysis of LUCA based on more than 3,500 modern genomes. This suggested that LUCA lived between 4.09 and 4.33 billion years ago, with a best estimate of around 4.2 billion. What’s more, their reconstruction of LUCA’s genome suggested it was pretty complex, encoding around 2,600 proteins.
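To make the molecular-clock logic behind these LUCA dates concrete, here is a deliberately oversimplified Python sketch: a divergence whose age is fixed by the fossil record sets the rate at which genetic distance accumulates, and that rate is then applied to deeper, uncalibrated splits. The real studies use Bayesian relaxed-clock models with many calibrations and explicit uncertainty; every number below is hypothetical.

    # A toy illustration of calibrated molecular-clock dating, not the method used
    # in the studies above. Genetic distance is assumed to accumulate at a roughly
    # constant rate, so one node of known fossil age converts distance into time.

    def clock_rate(distance: float, calibration_age_byr: float) -> float:
        """Substitutions per site per billion years, from one calibrated divergence."""
        return distance / calibration_age_byr

    def node_age(distance: float, rate: float) -> float:
        """Estimated age (in billions of years) of a divergence, given the clock rate."""
        return distance / rate

    # Hypothetical numbers: calibrate on a split fixed at 2.45 billion years by the
    # fossil record, then apply the inferred rate to a much deeper split at the root.
    rate = clock_rate(distance=0.80, calibration_age_byr=2.45)
    print(f"{node_age(distance=1.35, rate=rate):.2f} billion years")  # about 4.13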
LUCA also seems to have lived in a complex ecosystem. In particular, it appears to have had a primitive immune system, which implies it had to defend itself from some of its microbial neighbours. These details highlight a point that is not always obvious: LUCA does not represent the origin of life. It is just the most recent ancestor shared by all modern organisms. It’s possible that life had existed long before LUCA – beginning early in the Hadean.

This fits with gathering evidence that the Hadean was not so hellish after all. It’s true that the entire planetary surface was molten at the very start of the Hadean, but it seems to have solidified by 4.4 billion years ago. Evidence from zircons suggests there was abundant liquid water at least 4.3 billion years ago, and possibly 4.4 billion. By 4.2 billion years ago, there seem to have been oceans. These primordial seas may have been considerably deeper than they are today, because Earth’s interior was hotter and could not hold as much water, so for a time there may have been no exposed land – or at least, only small islands.

These strands of evidence amount to a complete rewriting of the early history of life on Earth. Instead of life beginning shortly after the Late Heavy Bombardment 3.8 billion years ago, it may have arisen within 100 million years of the planet’s formation. If so, what does that tell us about how it happened? The most immediate implication is that our ideas cannot rely on the power of chance at all. There have been a great many hypotheses about the origins of life that relied on a coincidence: say, a one-in-a-billion collision between two biological molecules in the primordial soup. But if life really formed within 0.1 billion years of the planet’s birth, ideas like this are absolutely untenable. There just wasn’t time.

Take the RNA World, one of the leading hypotheses of life’s origins since the 1980s. The idea is that the first life did not contain the smorgasbord of organic chemicals that modern cells do. Instead, life was based entirely on RNA: a close cousin of the more familiar DNA, of which our genomes are made. RNA is appealing because it can carry genetic information, like DNA, but it can also control the rates of chemical reactions – something that is more usually done by protein-based enzymes. This versatility, the argument goes, makes RNA the ideal molecule to kickstart life.

However, a close examination of the RNA World scenario reveals gaping holes. An RNA molecule is essentially a chain, and there are huge numbers of possible RNAs, depending on the sequence of links in the chain. Only a tiny fraction of those possible RNAs do anything useful, such as catalysing reactions. It’s not obvious how those ‘good’ RNAs are supposed to have formed: why didn’t conditions on the young Earth just create a random mix of RNAs? And, remember, we can’t rely on the power of chance and large numbers: it all happened too quickly.
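To get a feel for what ‘huge numbers of possible RNAs’ means, here is a back-of-the-envelope calculation, not taken from the article: each position in an RNA chain can hold any of four bases, so a chain of n bases has 4^n possible sequences.

    # Size of RNA sequence space: four possible bases at each position, so 4**n
    # variants for a chain of n bases. Even modest chain lengths give astronomically
    # many possibilities.
    for n in (20, 50, 100):
        print(n, f"{4 ** n:.2e}")
    # 20 bases -> ~1.10e+12 sequences, 50 -> ~1.27e+30, 100 -> ~1.61e+60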
Instead, researchers now largely agree that they must find processes that work quickly and efficiently to generate complexity and life-like systems. But what does that mean in practice? There are various ideas. One prominent school of thought is that life formed in alkaline vents on the sea floor, where the flow of hot water and chemicals created a cradle that incubated life. Others have highlighted the potential of volcanic vents, meteorite impact craters, geothermal ponds and tidal zones: anywhere that has a flow of energy and chemicals.

The reality is that we are dealing with a huge number of intersecting questions. What was the environment in which the first life emerged? What was that first life made of, and how did it work? Was the first life a simplified version of something we can observe today, or was it something radically different – either in composition or mechanism, or both – that was then supplanted by more familiar systems?

I believe the most promising development in origins-of-life research in recent years has been a growing willingness to accept uncertainty and reject dogma. Origins research is barely a century old: the first widely discussed hypotheses were set out by Alexander Oparin and J B S Haldane in the 1920s, and the Miller-Urey experiment that kickstarted practical research in the field was published in 1953. For those first few decades, origins research was on the fringes of science, with only a handful of researchers actively working on it. Just as there was no direct evidence that the Hadean was a hellscape, there has been very little hard evidence for any of the competing scenarios for life’s origins. Researchers devised elaborate stories with multiple steps, found experimental evidence that supported one or two of those steps, and declared the problem solved.

A small group of people, a lack of hard evidence and a great many intersecting questions: that’s a recipe for dogmatic ideas and angry disagreements. And that’s what origins research was like for decades. I’ve been reporting on the field since the 2000s, and on multiple occasions I’ve seen researchers – including heads of labs – use language that resembled the worst kind of internet trolling. There was a time when I thought this abrasiveness was funny; now I just think it’s ugly and pointless. What origins research needs is open-mindedness and a willingness to disagree constructively.

That culture shift is being driven by a generation of younger researchers, who have organised themselves through the Origin of Life Early-career Network (OoLEN). In 2020, a large group of OoLEN members and other researchers set out what they see as the future of the field. They complained of ‘distressing divisions in OoL research’: for instance, supporters of the RNA World have tended to contemptuously dismiss those who argue that life began with metabolic processes, and vice versa. The OoLEN team argued that these ‘classical approaches’ to the problem should not be seen as ‘mutually exclusive’: instead, ‘they can and should feed integrating approaches.’

This is exactly what is happening. Instead of focusing exclusively on RNA, many teams are now exploring what happens when RNA – or its constituent parts – is combined with other biological molecules, such as lipids and peptides. They are deploying artificial intelligence to make sense of the huge numbers of molecules involved. And they are holding back from strong statements in favour of their own pet hypotheses, and against other people’s. This isn’t just a healthier way to work – though it absolutely is that. I believe it will also lead to faster and deeper progress. In the coming years, I expect many more insights into what happened on our planet when it was young and what the first life might have looked like.
Earlier, I presented the hellscape-Earth scenario as a kind of just-so story. Of course, because the data is so limited, we cannot escape telling stories about our planet’s infancy. But maybe soon we’ll be able to tell some better ones.

Source of the article