GOATReads: Politics

Civility and/or Social Change?

In March 2023, a mass shooting occurred at Nashville’s Covenant School. To protest the subsequent lack of action on gun control, Democratic state legislators Justin Jones, Justin J. Pearson, and Gloria Johnson chanted “No action, no peace” on the floor of the Tennessee House of Representatives. In response, their Republican colleagues issued a set of resolutions accusing the trio of “disorderly and disruptive conduct.” The House subsequently voted to expel Jones and Pearson, who are Black; Johnson, who is white, was allowed to maintain her position. Here, ideals of decorum and politeness—of civility—were used to silence the protests of Black Americans. The politeness-driven civility used to drive Jones and Pearson from the legislature has a long history, dating at least to before the US Civil War. Then, slavery’s practitioners, advocates, and apologists would regularly invoke manners in their attempts to rebut the claims of antislavery activists. In turn, abolitionists like Frederick Douglass condemned those weaponizations of politeness. Slaveholders are “models of taste,” Douglass scathingly observes in his 1857 speech “West India Emancipation.” When debating questions of paramount importance like slavery and freedom, he bitterly continues, “With them, propriety is everything; honesty nothing.” From the years before the Civil War up to today, then, civility advocates have insisted on politeness. But, as Douglass shows, that distracts from urgent problems that “honesty” would demand addressing directly. In fact, as the episode from the Tennessee state house reveals (and as Douglass saw clearly), invocations of civility can actually oppose equity and social change. But even the dangerous discourse of civility decried by Douglass—and weaponized against Jones and Pearson—is presently undergoing a transformation, and not for the better. Demands for mannerliness and propriety are increasingly being replaced by demands that those in conflict just tolerate their disagreements. The civility of politeness, in other words, is being supplanted by what I term an “agree-to-disagree civility.” This emergent civility, I argue, legitimizes reactionary stances and valorizes the status quo. Such agree-to-disagree civility is on clear display in the seemingly cordial statement issued by all the US Presidential Foundations and Centers in anticipation of the acrimony of the recent election cycle. The leaders of the Obama Foundation, the George W. Bush Presidential Center, and 11 other similar organizations admit that they hold “a wide range of views across a breadth of issues.” Still, they insist that “these views can exist peaceably side by side.” The statement goes on to affirm that “debate and disagreement are central features in a healthy democracy,” and that “civility and respect in political discourse, whether in an election year or otherwise, are essential.” One might be moved, perhaps, to see politicians of different parties standing together against the violent bigotry of the Trump campaign. But look closer. To be sure, the civility of this statement is more productive than repressive, more interested in prompting speech than foreclosing it. Yet while the centers valorize the clash of countervailing perspectives, they’re also conspicuously silent regarding how such disagreements might be resolved. Basically, the Centers’ statement suggests, we should all just agree to disagree. 
It’s easy to see the appeal of this agree-to-disagree civility, because the tolerance it calls for is often taken as a transcendentally good ideal. As the political theorist Wendy Brown reminds us, though, there’s reason to look upon tolerance talk with a more ambivalent eye. For Brown, toleration is less a political ideal than a “practice of governmentality,” a body of commentary and rhetoric that sets the terms for political discussions, and not always in salutary ways. “There are,” she notes, “mobilizations of tolerance that do not simply alleviate but rather circulate racism, homophobia, and ethnic hatreds.” Agree-to-disagree civility, I argue, circulates and sustains such malign paradigms by neutralizing critique and forestalling social change. This civility robs us of our ability to say “x is wrong”: Its principles make racism, homophobia, misogyny, and the like perspectives to be respected, not paradigms to be defeated. Endlessly tolerating divergent outlooks on social inequities is categorically different than working to discern and pursue the most ethical and efficacious modes of redress. In short, this civility allows nothing to happen. As I’ll explain, agree-to-disagree civility is promoted by a wide range of civic organizations beyond the Presidential Centers, and it is expressed in two recent books: Robert Danisch and William Keith’s Radically Civil: Saving Our Democracy One Conversation at a Time, and Alexandra Hudson’s The Soul of Civility: Timeless Principles to Heal Our Country and Ourselves. But while agree-to-disagree civility is increasingly prominent, it’s not entirely new; it was likewise at work in Douglass’s time, alongside the civility of politeness, and an object of analysis in his oratory, which David Blight has recently collected in Speeches & Writings. In these speeches, Douglass emerges as an especially savvy theorist of civility’s limits—and its alternatives. In order to challenge this new form of civility (“agree to disagree”) we need to understand what distinguishes it from its predecessor (the civility of “politeness”). The history of such politeness civility is recounted in Alex Zamalin’s Against Civility: The Hidden Racism in Our Obsession with Civility. “The idea of civility,” Zamalin demonstrates, “has been a tool for silencing dissent, repressing political participation, enforcing economic inequality, and justifying violence upon people of color.” But demands for politeness, manners, and adherence to existing norms and values have long been challenged by “civic radicals,” activist intellectuals like Douglass who dispense with civility in the pursuit of justice. That civic radicalism is particularly necessary, Zamalin reminds us, because white Americans have consistently made orderliness and decorum prerequisites for political participation in ways that stifle the voices of nonwhite people. His argument is very well illustrated by the legislative expulsion of Jones and Pearson. But if the politeness civility that Zamalin analyzes continues to play a role in society, a different and particularly insidious form of civility is becoming increasingly prominent in American culture and politics. Insightful as Zamalin’s book is, it doesn’t quite register that civility discourse is currently changing, and that in particular, civility advocates are turning from demanding politeness to calling for toleration and for agreeing to disagree. 
One reason agree-to-disagree civility is on the rise is that over the last decade many organizations dedicated to promoting versions of it have appeared. These civility centers are overseen variously by academics (at, for example, Duke, Virginia Tech, and Oakland University, where I teach), by community leaders (as in The Oshkosh Civility Project), by journalists (as at The Great Lakes Civility Project), and by combinations thereof (like in the Ohio Civility Project). They share a concern with respecting difference and tolerating disagreement. One center indeed distills this civility’s ethos in its aspiration to teach people to “Agree to disagree.” Perhaps the most prominent of these groups is Braver Angels, which draws admiring discussions in both Danisch and Keith’s Radically Civil and Hudson’s The Soul of Civility. Founded in 2016 in response to the political partisanship and animosity occasioned by the campaign and election of Donald Trump, Braver Angels aims “not to change people’s views of issues, but to change their views of each other.” They pursue that end through programming like workshops, debates, and podcasts that encourage “understand[ing] the other side’s point of view, even if we do not agree with it.” This emphasis on understanding over persuasion is applauded by Danisch and Keith, as well as Hudson, who is quite effusive in her praise. She lauds Braver Angels’ promotion of “viewpoint diversity” and holds that “civic associations such as Braver Angels are the lifeblood of American society.” This admiration is—importantly—mutual, for Braver Angels has on multiple occasions featured Hudson as a speaker, and its website offers links to her essays as well as an admiring review of her book. One sees here a symbiotic relation between civility organizations and civility literature. Hudson’s praise makes her readers more likely to buy memberships in Braver Angels, and the organization’s support makes its members more likely to buy her book. This mutually beneficial relationship suggests, if not a fully fledged civility-industrial complex, at least a substantively integrated civility media ecosystem. In this media ecosystem, the problem is not impoliteness, as it was for earlier civility advocates. Both Radically Civil and The Soul of Civility critique the weaponized impositions of politeness that were also the primary target of Zamalin, and before him, Douglass. Rather, the central problem advocates of agree-to-disagree civility set out to solve is something like acrimony—people just not getting along, especially regarding politics. Danisch and Keith are worried about partisan polarization, and they draw on communications studies to develop an “antidote,” their notion of “radical civility”: “engaging respectfully with others in ways that can create meaningful connections across difference.” For Hudson, the issue is an inescapable human tendency toward “self-love” that “divides us,” often politically. She finds a remedy affirmed across a range of literary and historical texts: civility, which she understands as a regard for the dignity and value of others and sees as yielding the “cultural toleration of diverse views.” If the problem is indeed people not getting along, then “agreeing to disagree” makes sense. But is that really America’s problem? 
Describing our difficulties in primly nonpartisan terms like polarization and selfishness obscures how the Trump-era Right has played an outsize role in generating political conflict, through, for instance, legislative obstruction, election denialism, and insurrection. Agree-to-disagree civility suffers from a certain content agnosticism; it tends to ask for the toleration of disagreement without considering what the disagreement is about, or the relative merits of the arguments. As Danisch and Keith put it, “A commitment to civility teaches us that rightness and wrongness are less important than how we treat others.” Yet this relative indifference to the substance of debates carries liabilities. For instance, Braver Angels offers the content-agnostic assertion that in their programs, “neither side is teaching the other or giving feedback on how to think or say things differently.” But is it right, say, to ask a Black American not to be “giving feedback” on how a white supremacist should think differently? Agreeing to disagree, in such a case, would require tolerating the intolerable. Such invocations of agree-to-disagree civility raise the question of when to set aside toleration and embrace persuasion, protest, and praxis. Radically Civil and The Soul of Civility each briefly grapple with that question, in ways that suggest how agreeing to disagree conflicts with social change. Danisch and Keith acknowledge that persuasion can have a role in disagreement, but their radical civility entails far more toleration and waiting than persuasion and action. Especially in “hard cases” involving racism and misogyny, they explain, it’s best to focus on relationship building in an “outcome-indifferent way,” even though doing so is to “take a risk” that will pay off slowly, if at all. Hudson’s approach might seem more open to setting aside toleration for protest; her book features a chapter on “Civil Disobedience,” which focuses on abolitionism, Gandhian satyagraha, and US civil rights struggles and argues that inequality and discrimination demand protest. But Hudson is more interested in spelling out a “litmus test” for acceptable forms of protest than in explaining what should prompt a move from toleration to praxis in the first place. The chapter ends up most concerned with containing, not fostering, change-producing protests—making sure they’re sufficiently civil and respectful. To underscore her point, Hudson invokes Frederick Douglass, asserting that if he and other abolitionists “could be civil while criticizing slaveholders—people who owned other persons—we can be civil in disagreements in our modern political realm.” This is not a particularly convincing characterization of Douglass, whose 1845 autobiography recounts physically fighting the slaveholder Edward Covey and who in an 1848 editorial called on abolitionists to speak “words of burning truth.” But if Douglass was happy to cast aside civil decorum, civility was nonetheless often on his mind, as an object of critical analysis. His speeches offer a still-timely interrogation of agree-to-disagree civility. In the decades following the Civil War, there arose a version of agree-to-disagree civility similar to that promulgated today by the bipartisan coalition of Presidential Centers and organizations like Braver Angels. Then, many white Americans prioritized national reconciliation and sought to accommodate the divergent attitudes toward slavery that animated the conflict.
Douglass responded in an 1878 speech given in New York City reflecting on the Civil War’s legacy. He warns, “We must not be asked to put no difference between those who fought for the Union and those who fought against it.” He insists—in a line that gives the speech its title in the Blight collection—“There was a right side and a wrong side in the late war.” For Douglass, agreeing to disagree was untenable because such civility would legitimate the ideas of those partisans of the South who wanted to continue the Confederate project by other means. For us, Douglass’s insistence on moral and intellectual clarity regarding slavery is a reminder that some issues require no ongoing debate. It should be no more possible to agree to disagree about, say, the need to fight climate change or the importance of sexual and racial equality than about slavery. In Douglass’s oratory, though, we find not just critiques of various forms of civility but also a deeper analysis of the ideal of an insistently tolerant political culture. Douglass recognizes that an investment in civility amounts to what we’d now call, following the cultural critic Lauren Berlant, a case of cruel optimism. “A relationship of cruel optimism exists,” Berlant writes, “when something you desire is actually an obstacle to your flourishing.” Their touchstone example is the fantasy of the good life: of upward mobility, political fairness, and domestic fulfillment, all increasingly hard to come by in recent decades. Such a fantasy is cruelly optimistic, for Berlant, because it is a yearning for a future unlikely to arrive, and because an overriding focus on such a future prevents people from imagining other ways to thrive. Douglass anticipates some of Berlant’s thinking about this self-defeating kind of desire. He suggests that the civility many hold up as a remedy to a contentious public sphere is in fact an obstacle to an improved state of affairs. Douglass makes this point in the “West India Emancipation” speech, when he excoriates not affability-seeking slaveholders but rather a cohort of ostensible allies of abolitionism, who see civility as necessary to the pursuit of freedom. “Those who profess to favor freedom and yet deprecate agitation,” he explains, “are men who want crops without plowing up the ground.” Douglass here likens the civility advocate wary of “agitation” to a farmer who, for some reason, is unwilling to plow his field. Either way, the wariness of agitation calls to mind the devotee of political politeness as much as the advocate of agreeing to disagree. The suggestion is that their desire to suppress agitation stands as an obstacle to the flourishing of the freedom they profess to desire, much as a farmer’s desire to not plow a field would prevent a bountiful harvest. In this way, Douglass’s unusual comparison indicates how proponents of civility are stuck in a relation of cruel optimism. Seeing civility as a form of cruel optimism allows us to better grasp its appeal. Some speakers invoke the idea of civility cynically, solely to silence opponents or to make space for otherwise untenable positions. But others might well be drawn to notions of civil discourse out of a sincere, if misguided, belief that it offers a path to a better future. In a contentious world, it can feel good to stand for civility. Therefore, any effort to displace civility as an ideal needs to offer a vision of change that is no less affectively satisfying. 
Fortunately, by reading across Douglass’s oratory, we can find an alternative to civility that does so. One might expect Douglass’s critical analysis of civility to yield a defense of incivility, as it has for some contemporary writers. But, as the philosopher Olúfẹ́mi Táíwò has argued, there are good reasons we should not simply embrace incivility as a counterideal. He observes that incivility might not be politically effective and that, moreover, “opposing civility because elites have abused it betrays the sort of politics that cannot find any orientation to the world (or even to itself) except in relation to today’s oppressor of choice.” These comments show how embracing incivility ends up conceding too much to the advocates of civility: Such a focus keeps attention on the form of discussion rather than its context, content, and outcomes. Douglass flips the script, letting the topic, occasion, and goal of a speech determine his mode of address. Throughout his vast body of speeches, his savvy oratorical adaptability is unmistakable. His was a practice of rhetorical pragmatism: the objects were the freedom and equality of racial justice, and he would adopt whatever manner of speaking, civil or uncivil, would most likely persuade his audience to pursue those ends. It’s this orientation that emotionally energizes Douglass, and his audience, in ways that lend his rhetorical pragmatism enough affective appeal to be a viable and preferable alternative to cruelly optimistic civilities. Douglass invites us to feel passionate about a possible-but-not-assured future of freedom and equality, not the means of getting there. There is an optimism at the heart of his rhetorical pragmatism, but it’s unlikely to turn cruel—to calcify into an obstacle—because Douglass’s practice is so variable, constantly adjusting to most effectively pursue flourishing. When necessary, Douglass rejected civility, like in his 1852 address “What to the Slave Is the Fourth of July?” He gave that speech at a meeting of the Rochester Ladies Anti-Slavery Society, but, as Blight explains, he saw himself as also addressing a broader national audience, which would take in his speech later, in print. Douglass aimed to use the occasion of a former slave reflecting on the nation’s anniversary to underscore the prevailing gap between American ideals of freedom and the realities of slavery and to snap his audience out of its complacent hypocrisy. To achieve that end, he eschewed civil norms. “At a time like this, scorching irony, not convincing argument, is needed,” he declared. “O! had I the ability, and could reach the nation’s ear, I would, today, pour out a fiery stream of biting ridicule, blasting reproach, withering sarcasm, and stern rebuke.” Such fiery rhetoric is purposeful, Douglass explains, for only through such language can “the feeling of the nation […] be quickened” in a way that would silence slavery’s apologists, bring people to the antislavery cause, and lead the cause’s partisans to intensify their activism. But Douglass could strike a civil tone if the situation required it. When in 1855 he addressed fellow abolitionists on “The Anti-Slavery Movement,” he announced, “I wish to speak of that movement, to-night, more as the calm observer, than as the ardent and personally interested advocate.” This approach reflected his immediate goals. 
At a moment when there were many “sects and parties” within abolitionism, Douglass wanted to persuade his audience to favor the Liberty Party, which insisted on “no slavery for man under the whole heavens,” over other, less radical organizations, which he saw as merely trying to contain slavery. Faced with the task of persuading the already engaged, he opted for a measured style of address. This rhetorical pragmatism offers a thoroughgoing rejection of civility discourse. For advocates of civility, attaining civil speech—polite talk, agreeing to disagree—is an end in itself. By contrast, Douglass refuses such civility for civility’s sake. His objective is not the achievement of some mode of address; his goal is justice. Douglass embodies the sort of “constructive political culture” Táíwò has recently called for. Táíwò holds that such a culture, focused “on outcome over process,” will mount the most effective challenge to racism, capitalism, and global inequality. Whereas civility would end social change, social change is the end toward which Douglass would push us.

The problem of mindfulness

Mindfulness promotes itself as value-neutral but it is loaded with (troubling) assumptions about the self and the cosmos

Three years ago, when I was studying for a Masters in Philosophy at the University of Cambridge, mindfulness was very much in the air. The Department of Psychiatry had launched a large-scale study on the effects of mindfulness in collaboration with the university’s counselling service. Everyone I knew seemed to be involved in some way: either they were attending regular mindfulness classes and dutifully filling out surveys or, like me, they were part of a control group who didn’t attend classes, but found themselves caught up in the craze even so. We gathered in strangers’ houses to meditate at odd hours, and avidly discussed our meditative experiences. It was a strange time. Raised as a Buddhist in New Zealand and Sri Lanka, I have a long history with meditation – although, like many ‘cultural Catholics’, my involvement was often superficial. I was crushingly bored whenever my parents dragged me to the temple as a child. At university, however, I turned to psychotherapy to cope with the stress of the academic environment. Unsurprisingly, I found myself drawn to schools or approaches marked by the influence of Buddhist philosophy and meditation, one of which was mindfulness. Over the years, before and during the Cambridge trial, therapists have taught me an arsenal of mindfulness techniques. I have been instructed to observe my breath, to scan my body and note the range of its sensations, and to observe the play of thoughts and emotions in my mind. This last exercise often involves visual imagery, where a person is asked to consider thoughts and feelings in terms of clouds in the sky or leaves drifting in a river. A popular activity (though I’ve never tried it myself) even involves eating a raisin mindfully, where you carefully observe the sensory experience from start to finish, including changes in texture and the different tastes and smells. At the end of the Cambridge study, I found myself to be calmer, more relaxed and better able to step away from any overwhelming feelings. My experience was mirrored in the research findings, which concluded that regular mindfulness meditation reduces stress levels and builds resilience. Yet I’d also become troubled by a cluster of feelings that I couldn’t quite identify. It was as if I could no longer make sense of my emotions and thoughts. Did I think the essay I’d just written was bad because the argument didn’t quite work, or was I simply anxious about the looming deadline? Why did I feel so inadequate? Was it imposter syndrome, depression or was I just not a good fit for this kind of research? I couldn’t tell whether I had particular thoughts and feelings simply because I was stressed and inclined to give in to melodramatic thoughts, or because there was a good reason to think and feel those things. Something about the mindfulness practice I’d cultivated, and the way it encouraged me to engage with my emotions, made me feel increasingly estranged from myself and my life. In the intervening years, I’ve obsessed over this experience – to the point that I left a PhD in an entirely different area of philosophy and put myself through the gruelling process of reapplying for graduate programmes, just so I could understand what had happened. I began following a thread from ancient Buddhist texts to more recent books on meditation to see how ideas have migrated to the contemporary mindfulness movement.
What I’ve uncovered has disturbing implications for how mindfulness encourages us to relate to our thoughts, emotions and very sense of self. Where once Europeans and North Americans might have turned to religion or philosophy to understand themselves, increasingly they are embracing psychotherapy and its cousins. The mindfulness movement is a prominent example of this shift in cultural habits of self-reflection and interrogation. Instead of engaging in deliberation about oneself, what the arts of mindfulness have in common is a certain mode of attending to present events – often described as a ‘nonjudgmental awareness of the present moment’. Practitioners are discouraged from engaging with their experiences in a critical or evaluative manner, and often they’re explicitly instructed to disregard the content of their own thoughts. When eating the raisin, for example, the focus is on the process of consuming it, rather than reflecting on whether you like raisins or recalling the little red boxes of them you had in your school lunches, and so on. Similarly, when focusing on your breath or scanning your body, you should concentrate on the activity, rather than following the train of your thoughts or giving in to feelings of boredom and frustration. The goal is not to end up thinking or feeling nothing, but rather to note whatever arises, and to let it pass with the same lightness. One reason that mindfulness finds such an eager audience is that it garbs itself in a mantle of value-neutrality. In his book Wherever You Go, There You Are (1994), Jon Kabat-Zinn, a founding father of the contemporary mindfulness movement, claims that mindfulness ‘will not conflict with any beliefs … – religious or for that matter scientific – nor is it trying to sell you anything, especially not a belief system or ideology’. As well as relieving stress, Kabat-Zinn and his followers claim that mindfulness practices can help alleviate physical pain, treat mental illness, boost productivity and creativity, and help us understand our ‘true’ selves. Mindfulness has become something of a one-size-fits-all response for a host of modern ills – something ideologically innocent that fits easily into anyone’s life, regardless of background, beliefs or values.

Yet mindfulness is not without its critics. The way in which it relates to Buddhism, particularly its meditation practices, is an ongoing area of controversy. Buddhist scholars have accused the contemporary mindfulness movement of everything from misrepresenting Buddhism to cultural appropriation. Kabat-Zinn has muddied the waters further by claiming that mindfulness demonstrates the truth of key Buddhist doctrines. But critics say that the nonjudgmental aspects of mindfulness are in fact at odds with Buddhist meditation, in which individuals are instructed to actively evaluate and engage with their experiences in light of Buddhist doctrine. Others point out that the goals of psychotherapy and mindfulness do not match up with core Buddhist tenets: while psychotherapy might attempt to reduce suffering, for example, Buddhism takes it to be so deeply entrenched that one should aim to escape the miserable cycle of rebirth altogether. A third line of attack can be summed up in the epithet ‘McMindfulness’.
Critics such as the author David Forbes and the management professor Ronald Purser argue that, as mindfulness has moved from therapy to the mainstream, commodification and marketing have produced watered-down, corrupted versions – available via apps such as Headspace and Calm, and taught as courses in schools, universities and offices. My own gripes with mindfulness are of a different, though related, order. In claiming to offer a multipurpose, multi-user remedy for all occasions, mindfulness oversimplifies the difficult business of understanding oneself. It fits oh-so-neatly into a culture of techno-fixes, easy answers and self-hacks, where we can all just tinker with the contents of our heads to solve problems, instead of probing why we’re so dissatisfied with our lives in the first place. As I found with my own experience, though, it’s not enough to simply watch one’s thoughts and feelings. To understand why mindfulness is uniquely unsuited for the project of real self-understanding, we need to probe the suppressed assumptions about the self that are embedded in its foundations. Contrary to Kabat-Zinn’s loftier claims to universalism, mindfulness is in fact ‘metaphysically loaded’: it relies on its practitioners signing up to positions they might not readily accept. In particular, mindfulness is grounded in the Buddhist doctrine of anattā, or the ‘no-self’. Anattā is a metaphysical denial of the self, defending the idea that there is nothing like a soul, spirit or any ongoing individual basis for identity. This view denies that each of us is an underlying subject of our own experience. By contrast, Western metaphysics typically holds that – in addition to the existence of any thoughts, emotions and physical sensations – there is some entity to whom all these experiences are happening, and that it makes sense to refer to this entity as ‘I’ or ‘me’. However, according to Buddhist philosophy, there is no ‘self’ or ‘me’ to which such phenomena belong. It’s striking how much shared terrain there is among the strategies that Buddhists use to reveal the ‘truth’ of anattā, and the exercises of mindfulness practitioners. One technique in Buddhism, for example, involves examining thoughts, feelings and physical sensations, and noting that they are impermanent, both individually and collectively. Our thoughts and emotions change rapidly, and physical sensations come and go in response to stimuli. As such (the thinking goes), they cannot be the entity that persists throughout a lifetime – and, whatever the self is, it cannot be as ephemeral and short-lived as these phenomena. Nor can the self be these phenomena collectively as they are all equally impermanent. But then, the Buddhists point out, there is also nothing besides these phenomena that could be the self. Consequently, there is no self. From the realisation of impermanence, you gain the additional insight that these phenomena are impersonal; if there is no such thing as ‘me’, to whom transitory phenomena such as thoughts can be said to belong, then there’s no sense in which these thoughts are ‘mine’. Like their Buddhist predecessors, contemporary mindfulness practitioners stress these qualities of impermanence and impersonality. Exercises repeatedly draw attention to the transitory nature of what is being observed in the present moment. 
Explicit directions (‘see how thoughts seem to simply arise and cease’) and visual imagery (‘think of your thoughts like clouds drifting away in the sky’) reinforce ideas of transience, and encourage us to detach ourselves from getting too caught up in our own experience (‘You are not your thoughts; you are not your pain’ are common mantras).

I put my earlier sense of self-estrangement and disorientation down to mindfulness’s close relationship with anattā. With the no-self doctrine, we relinquish not only more familiar understandings of the self, but also the idea that mental phenomena such as thoughts and feelings are our own. In doing so, we make it harder to understand why we think and feel the way we do, and to tell a broader story about ourselves and our lives. The desire for self-understanding tends to be tied up with the belief that there is something to be understood – not necessarily in terms of some metaphysical substrate, but a more commonplace, persisting entity, such as one’s character or personality. We don’t tend to think that thoughts and feelings are disconnected, transitory events that just happen to occur in our minds. Rather, we see them as belonging to us because they are reflective of us in some way. People who worry that they are neurotic, for example, will probably do so based on their repeated feelings of insecurity and anxiety, and their tendency towards nitpicking. They will recognise these feelings as flowing from the fact that they might have a particular personality or character trait. Of course, it’s often pragmatically useful to step away from your own fraught ruminations and emotions. Seeing them as drifting leaves can help us gain a certain distance from the heat of our feelings, so as to discern patterns and identify triggers. But after a certain point, mindfulness doesn’t allow you to take responsibility for and analyse such feelings. It’s not much help in sifting through competing explanations for why you might be thinking or feeling a certain way. Nor can it clarify what these thoughts and feelings might reveal about your character. Mindfulness, grounded in anattā, can offer only the platitude: ‘I am not my feelings.’ Its conceptual toolbox doesn’t allow for more confronting statements, such as ‘I am feeling insecure,’ ‘These are my anxious feelings,’ or even ‘I might be a neurotic person.’ Without some ownership of one’s feelings and thoughts, it is difficult to take responsibility for them. The relationship between individuals and their mental phenomena is a weighty one, encompassing questions of personal responsibility and history. These matters shouldn’t be shunted so easily to one side. As well as severing the relationship between you and your thoughts and feelings, mindfulness makes self-understanding difficult in another way. By relinquishing the self, we divorce it from its environment and therefore its particular explanatory context. As I write this, I’ve spent the past month being fairly miserable. If I were being mindful, I would note that there were emotions of sadness and helplessness as well as anxious thoughts. While mindfulness might indirectly help me glean something about the recurring content of my thoughts, without some idea of a self, separate from but embedded in a social context, I couldn’t gain much further insight.
Trails of thought and feeling, on their own, give us no way of telling whether we’re reacting disproportionately to some small event in our lives, or, as I was, responding appropriately to recent tragic events. To look for richer explanations about why you think and feel the way you do, you need to see yourself as a distinct individual, operating within a certain context. You need to have some account of the self, as this demarcates what is a response to your context, and what flows from yourself. I know I have a propensity towards neurotic worrying and overthinking. Thinking of myself as an individual in a particular context is what allows me to identify whether the source of these worries stems from my internal character traits or if I am simply responding to an external situation. Often the answer is a mixture of both, but even this ambiguity requires careful scrutiny, not only of thoughts and feelings but also of the specific context in which they arose.

The contrasting tendency in mindfulness to bracket context not only cramps self-understanding. It also renders our mental challenges dangerously apolitical. In spite of a growing literature probing the root causes of mental-health issues, policymakers tend to rely on low-cost, supposedly all-encompassing solutions for a broad base of clients. The focus tends to be solely on the contents of an individual’s mind and the alleviation of their distress, rather than on interrogating the deeper socioeconomic and political conditions that give rise to the distress in the first place. Older people tend to suffer high rates of depression, for example, but that’s usually addressed via pharmaceutical or therapeutic means – instead of considering, say, social isolation or financial pressures. Mindfulness follows the trend for simplicity and individuation. Its embedded assumptions about the self make it particularly prone to neglecting broader considerations, since they allow for no notion of individuals as enmeshed in and affected by society at large. I don’t mean to suggest that everyone who does mindfulness will feel estranged from their thoughts the way I did, nor that it will inevitably restrict their capacity to understand themselves. It can be a useful tool in helping us gain some distance from the tumult of our inner experience. The problem is the current tendency to present mindfulness as a wholesale remedy, a panacea for all manner of modern ills. I still dabble in mindfulness, but these days I tend to draw on it sparingly. I might do a mindfulness meditation when I’ve had a difficult day at work, or if I’m having trouble sleeping, rather than keeping up a regular practice. With its promises of assisting everyone with anything and everything, the mistake of the mindfulness movement is to present its impersonal mode of awareness as a superior or universally useful one. Its roots in the Buddhist doctrine of anattā mean that it sidelines a certain kind of deep, deliberative reflection that’s required for unpicking which of our thoughts and emotions are reflective of ourselves, which are responses to the environment, and – the most difficult question of all – what we should be doing about it.

‘You are constantly told you are evil’: inside the lives of diagnosed narcissists

Few psychiatric conditions are as stigmatised or as misunderstood as narcissistic personality disorder. Here’s how it can damage careers and relationships – even before prejudice takes its toll

There are times when Jay Spring believes he is “the greatest person on planet Earth”. The 22-year-old from Los Angeles is a diagnosed narcissist, and in his most grandiose moments, “it can get really delusional”, he says. “You are on cloud nine and you’re like, ‘Everyone’s going to know that I’m better than them … I’ll do great things for the world’.” For Spring, these periods of self-aggrandisement are generally followed by a “crash”, when he feels emotional and embarrassed by his behaviour, and is particularly vulnerable to criticism from others. He came to suspect that he may have narcissistic personality disorder (NPD) after researching his symptoms online – and was eventually diagnosed by a professional. But he doesn’t think he would have accepted the diagnosis had he not already come to the conclusion on his own. “If you try to tell somebody that they have this disorder, they’ll probably deny it,” he says – especially if they experience feelings of superiority, as he does. “They’re in a delusional world that they made for themselves. And that world is like, I’m the greatest and nobody can question me.” Though people have been labelled as narcissists for more than a century, it’s not always clear what is meant by the term. “Everyone calls everybody a narcissist,” says W Keith Campbell, psychology professor at the University of Georgia and a narcissism expert. The word is “used more than it should be” – but when it comes to a formal diagnosis, he believes many people hide it, as there is so much stigma around the disorder. A narcissist will tend to have “an inflated view of oneself”, “a lack of empathy”, and “a strategy of using people to bolster one’s self-esteem or social status through things like seeking admiration, displaying material goods, seeking power,” says Campbell. Those with NPD may be “extremely narcissistic”, to the point that “they’re not able to hold down stable relationships, it damages their jobs”, and they have a “distorted view of reality”, he says. Though up to 75% of people diagnosed with narcissistic personality disorder are men, research from the University of London published last year suggests this figure does not mean there are fewer narcissistic women, but that female narcissism is more often presented in the covert form (also defined as vulnerable narcissism), which is less commonly diagnosed. “Men’s narcissism tends to be a bit more accepted, just kind of like everything in society,” says Atlanta-based Kaelah Oberdorf, 23, who posts about her NPD and borderline personality disorder (BPD) diagnoses on TikTok. It is not uncommon to see the two disorders co-occur. “I really struggle with handling criticism and rejection,” says Oberdorf, “because if I hear that the problem is me, I either go into defence mode or I completely shut down.” Despite having this response – which is sometimes referred to as “narcissistic injury” – she has been trying to overcome it and take advice from her loved ones, as she doesn’t want to slip into the harmful behaviour of her past. “I was very emotionally abusive to my partners as a teenager,” she says.
Through dialectical behavioural therapy, she has been able to mitigate her NPD symptoms, and she says she and her current boyfriend “have a dynamic where I told him, ‘If I say something messed up, if I say something manipulative, call it out right then and there’.” Oberdorf grew up primarily in the care of her father and says she lacked positive role models as a child. “I’ve been learning all this time what is and is not appropriate to say during a fight because I never had that growing up,” she says. “Nothing was off-limits when my family members were insulting me when I was growing up.” Personality disorders tend to be associated with difficulties as a child. “There is a genetic component,” says Tennyson Lee, an NHS consultant psychiatrist who works at the DeanCross personality disorder service in London. But, when someone develops narcissistic traits, it is often “linked to that individual’s particular early environment”. Those traits were “their strategy in some ways to survive at a very early age”, he adds, when they may have been neglected, or only shown love that was conditional on meeting certain expectations. They then “continue to use those same mechanisms as adults”. Like several of the NPD-diagnosed people I speak to, John (not his real name) thinks his parents “may be narcissists themselves”. The 38-year-old from Leeds says when he was a child, “everything was all about them and their work and their social life. So it was like, stay out of our way.” When their focus was on him, it came in the form of “a great amount of pressure” to achieve good grades and career success, he says, which made him feel that if he didn’t meet their standards, he wasn’t “good enough”. When he became an adult, none of his relationships ever worked out. “I’ve never cared about anyone really,” he says. “So I’ve never taken relationships seriously.” He didn’t think he was capable of loving someone, until he met his current partner of three years, who is diagnosed with BPD, so, like him, struggles with emotional regulation. She is “really understanding of the stuff that goes on in my head”, he says – it was actually she who first suspected he might have NPD. After a visit to his GP, John was referred to a clinical psychologist for an assessment and was told his diagnosis. He has been referred for talking therapy on the NHS (a long period of therapy is the only treatment that has been shown to help NPD patients, says Lee), but has been on the waiting list for a year and a half: “They said it is probably going to be maybe February or March next year.” John has only told a handful of people about his NPD diagnosis, because “there’s a big stigma that all narcissists are abusers”, but, privately, he has accepted it. “It helps me to understand myself better, which is always a good thing,” he says. All of the people I speak to have accepted their narcissism and are seeking help for it – hence being willing to talk about it – which is probably not representative of all people with the disorder. But the existence of NPD content creators such as Oberdorf and Lee Hammock, and the growth of online support communities, suggest that more narcissists are openly acknowledging the issues they face – and the ones they may be causing for others. “Seeing that you’re not alone in what you’re struggling with, being able to talk to other people who relate to you and maybe hearing coping mechanisms” are reasons why reddit user Phteven_j (who would like to remain anonymous) started joining in conversations about NPD online. 
Now a moderator of the r/NPD subreddit, the 37-year-old software engineer thinks he and his co-moderators are “pretty good about not encouraging disordered behaviour” and ensuring “it’s not a breeding ground for any sort of negative or disorder behaviour and more of a place where you can try to improve”. Although, in volunteering as a moderator, “I’d be lying if I said that I wasn’t seeking out some kind of position of authority” – which arguably stems from NPD symptoms – Phteven_j believes the subreddit is largely a force for good. However, the slew of reddit users wanting to complain about narcissists (and sometimes even the existence of a subreddit that acts as a support group for them) “is constant” he says. Across the internet, narcissists are often “painted as almost like supervillains” and the stories shared are often from the perspective of those who have been abused by someone they believe to be a narcissist. “The advice is, typically, the same: run away, you’ve got to leave them, don’t ever talk to them again,” the moderator says. Oberdorf is also critical of the way narcissism is discussed online. Social media users have accused her of “bragging” about her personality disorders because she lists them on her profiles and discusses them in her content. “I’m not bragging about the fact that I have debilitating mental illness,” she says. “I am proud of the fact that I have survived with mental illnesses that statistically could have taken my life.” She is keen to open up more conversations about NPD – “stigma is the number one worst thing for any illness ever”. In this age of selfies and thirst traps, it can feel that narcissism must be on the rise. But just because there are now more outlets for narcissistic behaviour, prevalence of the clinical condition doesn’t seem to be increasing, says Lee. It’s worth noting, Campbell adds, that “social media is making people feel worse about themselves”, and, for most people, “it doesn’t make them feel positive about themselves or think they’re awesome”. The way NPD diagnoses are made is “suboptimal”, however, according to Lee. Most of the research on NPD has been done in the US, where a paper published by the American Psychiatric Association estimates the disorder is found in 1%–2% of the population. “If you make the diagnosis, then it’s made on the [American Psychiatric Association’s] Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) guidelines, where it only captures an aspect of narcissism, which is the more overt, sort of aggressive type of narcissism, but it doesn’t capture the more covert or sensitive form,” says Lee. There are two most commonly talked about types of narcissism. The first is the “grandiose” or “overt” form, which manifests in stereotypically narcissistic behaviours such as aggression and attention-seeking. The second is a “vulnerable” or “covert” narcissist, which is “the kind of individual the clinician might miss, because they often come across as far more contained, even self-effacing at times”, Lee says. Grandiose and vulnerable narcissism “are different sides of the same coin”, he says. Both types will have an inflated sense of their own importance, but for a covert narcissist that may mean a hypersensitivity to criticism or a victim mentality rather than a desire to put themselves in the spotlight. 
Campbell points out that there is a risk of narcissists “using social media to maintain their narcissism”, as it can be a tool “to get favourable attention or positive feedback”, but does see the benefit of positive role models and support for people with NPD. When a celebrity, such as the American comedian Nick Cannon in 2024, “comes out with NPD and says it’s causing me problems, that’s a great message,” says Campbell. Lee, too, is wary of social media being used to educate or as a support system for people with NPD, “because there’s so much misinformation”. But he believes “more structured” information is missing, particularly in the NHS. “The service for narcissistic individuals is very uneven throughout the UK” and “many clinicians don’t make the diagnosis of narcissism”, Lee says, partly because they aren’t primed to notice it, and partly out of reluctance to make a diagnosis that is perceived so negatively. The symptoms of NPD also mean that “if a narcissist is successfully leading their life, even though they might have quite a strong level of narcissism, they’re not going to seek treatment”. When a patient with NPD does seek help, it is often because they have suffered negative consequences of their narcissistic behaviour, or a partner or family member has encouraged them. Spring wishes people would reframe the way they think about narcissists. “A narcissist is attempting to believe that they are the best because that is the coping mechanism for feeling like: ‘I am the worst,’” he says. “There’s something missing with me and I need to be in this fantasy world where I’m the hero because maybe in my childhood I was the villain and now I need to overcompensate for that.” NPD is clearly a condition that requires psychological help, but Oberdorf can understand why narcissists don’t seek it: “If you have a problem, and you are constantly being told that people with your type of specific problem are unworthy, or they’re evil, or they’re horrible people because of this problem, why would you want to admit that you have that problem?”

GOATReads: Psychology

Brain man

How can you have a picture of the world when your brain is locked up in your skull? Neuroscientist Dale Purves has clues

Picture someone washing their hands. The water running down the drain is a deep red. How you interpret this scene depends on its setting, and your history. If the person is in a gas station bathroom, and you just saw the latest true-crime series, these are the ablutions of a serial killer. If the person is at a kitchen sink, then perhaps they cut themselves while preparing a meal. If the person is in an art studio, you might find resonance with the struggle to get paint off your hands. If you are naive to crime story tropes, cooking or painting, you would have a different interpretation. If you are present, watching someone wash deep red off their hands into a sink, your response depends on even more variables. How we act in the world is also specific to our species; we all live in an ‘umwelt’, or self-centred world, in the words of the philosopher-biologist Jakob von Uexküll (1864-1944). It’s not as simple as just taking in all the sensory information and then making a decision. First, our particular eyes, ears, nose, tongue and skin already filter what we can see, hear, smell, taste and feel. We don’t take in everything. We don’t see ultraviolet light like a bird, we don’t hear infrasound like elephants and baleen whales do. Second, the size and shape of our bodies determine what possible actions we can take. Parkour athletes – those who run, vault, climb and jump in complex urban environments – are remarkable in their skills and daring, but sustain injuries that a cat doing the exact same thing would not. Every animal comes with a unique bag of tricks to exploit their environment; these tricks are also limitations under different conditions. Third, the world, our environment, changes. Seasons change, what animals can eat therefore also changes. If it’s the rainy season, grass will be abundant. The amount of grass determines who is around to eat it and therefore who is around to eat the grass-eaters. Ultimately, the challenge for each of us animals is how to act in this unstable world that we do not fully apprehend with our senses and our body’s limited degrees of freedom. There is a fourth constraint, one that isn’t typically recognised. Most of the time, our intuition tells us that what we are seeing (or hearing or feeling) is an accurate representation of what is out there, and that anyone else would see (or hear or feel) it the same way. But we all know that’s not true and yet are continually surprised by it. It is even more fundamental than that: you know that seemingly basic sensory information that we are able to take in with our eyes and ears? It’s inaccurate. How we perceive elementary colours, ‘red’ for example, always depends on the amount of light, surrounding colours and other factors. In low lighting, the deep red washing down the sink might appear black. A yellow sink will make it look more orange; a blue sink may make it look violet. If, instead of through human eyeballs, we measured the wavelengths of light coming off the scene with a device called a spectrophotometer, then the wavelength of the light reflected off that ‘blood’ would be the same, no matter the surrounding colours. But our eyes don’t see the world as it really is because our eyes don’t measure wavelengths like a spectrophotometer.
Dale Purves, the George B Geller Professor of Neurobiology (Emeritus) at Duke University in North Carolina, thinks that, because we can never really see the world accurately, the brain’s primary purpose is to help us make associations to guide our behaviour in a way that, literally, makes sense. Purves sees ‘making sense’ as an active process where the brain produces inferences, of which we are not consciously aware, based on past experiences, to interpret and construct a coherent picture of our surroundings. Our brains use learned patterns and expectations to compensate for our imperfect senses and finite experiences, to give us the best understanding of the world it can. Purves is the scientist’s scientist. He pursues the questions he’s genuinely interested in and does so with original approaches and ideas. Over the years, he’s changed the subject of his research multiple times, all with the intent of understanding how the brain works, tackling subjects that were new and unfamiliar to him, as opposed to chasing trends and techniques or sticking to a tried-and-true research path. His career is an instance of the claim Viktor Frankl makes in Man’s Search for Meaning (1946): ‘For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side-effect of one’s personal dedication to a cause greater than oneself …’ If success is measured by accolades, then it did indeed follow Purves’s pursuits. Among a laundry list of awards and honours, he’s one of the few scientists elected both to the National Academy of Sciences (1989) and to what is now the National Academy of Medicine (1996). Election to either is considered to be among the highest honours that can be bestowed on a scientist in the United States. Nevertheless, if the name ‘Dale Purves’ sounds familiar to you, it is likely because you took a neuroscience course in college, for which the textbook was Neuroscience by Purves et al (one of the most popular and now in its 7th edition). Indeed, this is the text I used when I taught an introductory neuroscience course at Princeton. Oddly enough, Purves’s passion for neuroscience took time and experience to materialise. As an undergraduate at Yale, he struggled initially but found a major – philosophy – intending to pursue medicine. Purves developed an interest in science but didn’t know anything about being a scientist, and medicine seemed close enough. In 1960, he entered Harvard Medical School thinking he would become a psychiatrist. In his first year, he took a course on the nervous system, taught by young hotshot neuroscientists, some of whom would go on to be among the greats of the 20th century (and whose work is now textbook material): David Potter, Ed Furshpan, David Hubel and Torsten Wiesel (the latter two won the Nobel Prize in 1981, along with Roger Sperry). Purves finished his medical degree in 1964, but he had grown disillusioned with psychiatry. He’d tried switching to general surgery, but realised he lacked the intensity of interest in surgery required to excel. It was 1965, and the Vietnam War gave him time to think. Purves was drafted but, as a physician, could serve his time as a Peace Corps employee, which he did in Venezuela. There, he said, he came across a book, The Machinery of the Brain (1963) by Dean Wooldridge, that synthesised what he learned years ago in that first-year course on the nervous system. The book was written for the lay reader and shared the current knowledge about the brains of humans and other animals. 
Its particular angle was to compare the brain to the computer technology of the time. The book reignited Purves’s interest in the brain, which was first piqued in his medical school course. When he returned to the US, Purves would begin again as a researcher in neuroscience. I was an undergraduate at the University of Idaho when I first read Purves’s work. It was the early 1990s and I was a philosophy major like Purves but was really interested in neuroscience. So I sought hands-on research experience in a professor’s lab in the biology department. We were studying the immune-system markers of motor neurons in the rat’s spinal cord, the ones that connect to leg muscles causing them to contract. Mark DeSantis, my advisor, suggested a book by Purves. At the time, there were few courses or serious books on the brain. Purves’s Body and Brain: A Trophic Theory of Neural Connections (1988) was perfect, just what I needed. Its central thesis is that the survival of neurons making connections, and the number of these connections, is regulated by the targets of those neuronal connections. In essence, he was telling us that, unlike the static circuit board of a computer, which is carefully designed and built according to a plan, the circuits of the nervous system are constructed on the fly in accordance with signals they receive from the targets of their connections. Those targets could be other neurons, organs or muscles in the body. As a result, as the body changes over the course of development in an individual or in the evolution of species, the neural circuits will adjust accordingly. Why is this important? Purves showed us that the brain is more than just a controller of the body: it is also an organ system that is embedded in a dynamic relationship with the rest of the body, affected by body size, shape and activity. One of Purves’s favourite examples to illustrate what sparked this theory is from the work of one of his scientific heroes, Viktor Hamburger. Hamburger had been studying central nervous system development in the 1930s, first at the University of Chicago, then as a faculty member at Washington University in St Louis. Using chicken embryos and those spinal cord motor neurons, Hamburger showed that there were more neurons in the developing embryo than there would be in the adult chicken. How could that be? Why were there more neurons in an embryo than actually needed, and why did some die off? Hamburger’s idea was that the muscles targeted by those neurons supplied a limited amount of trophic, or nutrient, factors. In essence, the target muscles were producing a ‘food’ (we now call it ‘nerve growth factor’) that kept the neuron alive. The size of the target determined how much food was available and therefore how many neuron lives it could sustain. Exploiting the ease with which chick embryos could be manipulated, Hamburger showed this by first amputating one of the wing buds (ie, the nascent wing). When he did so, the final number of motor neurons on the amputated side was lower than typical – fewer than on the ‘control’ side of the spinal cord. So, the limb bud was important for the survival of neurons. If that was true, then more of this ‘target tissue’ should save more neurons. Was it possible to ‘rescue’ those extra neurons that would normally die off?
To answer this question, Hamburger surgically attached an extra limb bud on one side of the embryo, thereby artificially creating more target tissue. The result: more motor neurons survived on that side of the spinal cord. In both experiments, the size of the connection target – the body, those limb buds specifically – determined the number of neurons that survived. Purves ran with this idea of a dialogic relationship between body and brain. Around four decades later, in the 1970s and ’80s, Purves was a young faculty colleague of Hamburger, his elder statesman at Washington University. Here, he took Hamburger’s theory about neuron-to-muscle connections and applied it to neuron-to-neuron connections. While he looked at neuronal cell survival as Hamburger had, Purves also investigated the elimination and elaboration of individual connections that neurons make, their synapses. This was a big leap because Purves was now testing whether or not Hamburger’s findings about death and survival of neurons in the chick embryo were peculiar to that species’ neuron-to-muscle relationship. Was the same process apparent in other developing circuits in other animals? And, if so, when a neuron survives, is the number of connections it makes (those synapses) also subject to competition for trophic factors? Neurons come in all shapes and sizes, with different degrees of complexity. If you’ve seen a typical picture of a single neuron then you know it looks like a tree or bush, with a set of root-like branches on one end and a single, long limb-like branch on the other. The latter can branch as well, depending on circumstances. One of those sets of branches – the dendrites – receives inputs from other neurons, and the other – the axon – sends outputs to other neurons. Together with his graduate student Jeff Lichtman, Purves wanted to know how synapse numbers change with development and across different species of animals. Purves and Lichtman started with a simple neuron-to-neuron connection, where the receiving neuron had zero dendrites and the sending neuron’s axons made synapses directly on the receiving neuron’s cell body. To see this, they would surgically remove a tight group of functionally similar neurons, known as a ‘ganglion’, from different animals. They would then carefully fill a few individual neurons with a special enzyme. When this enzyme is then given a chemical to react with, it produces a colour. This colouring allows the neuron to be visualised under a microscope in its full glory – all its branches can be seen and counted. (Imagine a microscopic glass-blown tentacled creature being filled with ink and then counting its appendages.) The end of each branch represents a synaptic connection. Comparing connections in developing rats versus adults, they found that neurons initially received a few synapses from a number of different neurons. In a sense, the circuits were tangled up in young rats. By the time the rats were adults, each neuron had many synapses but only from one neuron – the circuit was untangled. How did this happen? Akin to the process of eliminating extra neurons based on target size, there was a process of elimination of superfluous connections from some neurons. Then there was an additional process of multiplication (or elaboration) of connections coming from the ‘correct’ (so to speak) neuron.
In essence, once neurons could find the right partners by getting rid of the less desirable ones, their relationship could blossom in the form of additional synapses. Purves and Lichtman then replicated this basic finding with increasingly complex sets of neurons and in other species. Before we get lost in the weeds, here’s the bottom line: trophic interactions between neurons match the number of neurons to the target size, and these interactions also regulate how many synapses they make. The grander theory is this: each class of cells in a neural pathway is supporting and regulating the connections it receives by trophic interactions with the cells it’s connected to down the line. Thus, a coordinated chain of connectivity extends from neuronal connections with the body’s muscles and organs to connections among neurons within the brain itself. The brain and body constitute a single, coherent and dynamic network; there is no way to separate them. They depend on each other at every level. Some artists go through distinct periods in their careers, while others stick to similar themes and approaches in their work for decades. Scientists are the same. Most stick to trying to answer one particular question, getting deeper and deeper, as they figure out more and more details about their subject. Others find that, at some point, they are satisfied with the answers at hand and move on, finding a new question or challenge. Purves is the latter type of scientist, making multiple radical shifts in his scientific research. His important work supporting trophic theory had an obvious direction for continued investigation: using molecular tools to find new ways to visualise synaptic development. Purves was not interested. His research programme up to this time exploited easily manipulated and unambiguously visualised neural circuits in the peripheral nervous system. The brain itself, where all the important action supposedly is, is a different story; its complexity and density make it impossible to address the same questions – which connections are disappearing or multiplying – with the same degree of clarity and specificity. Neuroscience was changing as well. By the time the late 1980s and ’90s rolled around, the most attention-getting work focused on the brain, particularly the neocortex – the part that has disproportionately increased in size in primates like us. Many who were interested in how the brain developed were inspired by those Nobel Prize-winners, Hubel and Wiesel, who elegantly demonstrated that the visual part of the neocortex had a critical period of development. At this point, Purves had reached middle age, and he was at an impasse. The answer to the question ‘What to do next?’ was not self-evident. As an academic scientist, one can pretty much study whatever one wants, but it has to be interesting, potentially consequential, and, for Purves especially, it has to be tractable: you should be able to formulate a clear hypothesis that could lead to an unambiguous finding. The answer came in the form of a new collaborator, Anthony-Samuel LaMantia, who joined his lab in 1988 as a postdoctoral fellow after completing his PhD on the development of the neocortex. Together, Purves and LaMantia decided to tackle the question ‘How does the brain grow?’ There are many different kinds of brains, as many as there are animals.
There is a beauty in all of them, perhaps because they adhere quite nicely to the form-follows-function principle of design. The designer in each case is natural selection’s influence on how a species develops, and thus the form its body and brain take in response to environmental challenges. Brain scientists study the anatomy of these solutions when we use any number of techniques like tracers, stains, imaging, etc. Each technique is suited to looking at the brain at a particular spatial scale. Consistently, what they reveal is that the brain is beautiful, sometimes stunning. At one of those scales, you can see repeated patterns of neural circuitry, or modules, that look exactly like the spots and stripes we see on the skin of so many animals. For example, depending on what dyes you use to stain it, the visual cortex of primates has a pattern of stripes, with each stripe seemingly dedicated to the visual signals coming from one of our two eyes. Stain it another way, and you’ll see an array of ‘blobs’ that are claimed by some to be dedicated to colour processing. Other animals have different patterns: rats have an array of barrel-shaped modules in their somatosensory (touch) cortex which corresponds to their array of facial whiskers. Dolphins have blobs in their auditory cortex; we don’t know what their function is. Purves wanted to know how these iterated patterns of neural circuitry developed. He started by first looking just outside the neocortex, in the olfactory bulb of the mouse. In this structure, mice have a number of modules known as ‘glomeruli’. The olfactory bulbs of mice jut out from the main part of the brain and so are more accessible for experiments. Purves and LaMantia developed a method for exposing the bulbs in a live animal and staining the glomeruli with a dye that would not hurt the mice. They could then see that mice were not born with their full set of glomeruli; over the course of development, new ones were added. This was exciting and surprising because many popular theories at the time argued that brain development is mainly the result of selecting useful circuits from a larger repertoire of possible circuits. Here, they were showing that useful circuits were actually being constructed, not selected. Moreover, if circuits were constructed in this way after the animal is born, then the circuits might be influenced by experience. Were other modules in other species and brain areas added in the same way? In the macaque monkey visual cortex (ie, the experimental animal most closely related to humans and the brain area that is among the most studied) they couldn’t look at module development like they did in the mouse (looking at the same brain structure in the same animal repeatedly over time), but they were able to count the number of blobs in young monkeys versus adult monkeys. Unlike the glomeruli in mice, however, the number of blobs remained constant over time. To Purves, this was not super exciting. He had hoped to find more traction on perhaps a new process of neocortex development in primates, one that he could elaborate into a novel research programme. Nevertheless, he did come to one important conclusion. It seemed that most scientists – indeed, many luminaries of neuroscience – wanted to see brain modules as fundamental features of the neocortex, each serving a particular behavioural or perceptual purpose. For example, one ‘barrel’ in the rat’s touch cortex is there to process the inputs of one whisker on its face. 
Purves pointed out that iterated patterns of modules may be found in one species’ brain but absent in a closely related species. Moreover, he noted that they don’t seem to be obligatorily linked to function. ‘Blobs’ are there in the human and the monkey visual cortex and are linked to colour-vision processing, but nocturnal primates with poor colour vision still have blobs in their visual cortex. So, the blobs do not seem to enable colour vision. Similarly, chinchillas have a barrel cortex like rats but don’t have the whisker movements of rats. Cats and dogs have whiskers but no related modules in their touch cortex. Thus, it seems that, while the iterated patterns of the brain are beautiful, they are unlike modern architecture in that their beauty is not linked to function. So why then do they form at all? Here, Purves suggested that iterated patterns are the result of synaptic connections finding and relying on each other and then making more of those connections in pathways that are most active. In other words, the iterated patterns of the brain are epiphenomenal, the byproducts of the rules of neural connections and competing patterns of neural activity. Those activity patterns are generated by sensory inputs coming from the sensory organs – the eyes, ears, nose and skin. So seeing beautiful-looking patterns in the brain does not necessarily mean they were constructed for a particular purpose. I first met Purves in 1993 when I was interviewing for graduate school after he had moved to Duke University. I had already read a lot of his work and was in awe of his contrarian instincts and pursuit of work that is out of the mainstream yet important. When I entered his office for my interview, I was extremely nervous but managed to ask about the portraits on his office walls. They were of scientists. One was John Newport Langley, a 19th-century British physiologist who made important discoveries about neurotransmitters; he inspired the problems Purves tackled as a new professor. The aforementioned Viktor Hamburger was also there. He was a major figure in 20th-century embryology and also a good friend of Purves, despite the difference in their ages and experience. Another photo was of Stephen Kuffler, perhaps the most beloved figure in neuroscience at the time, who made key discoveries in vision. Kuffler had organised the neuroscience team that taught Purves when he was in medical school, and Purves considers him a mentor who exemplified what to pursue (and what not to pursue) in neuroscience. The final photo was of Bernard Katz, a Nobel laureate who figured out how neurons communicate with muscles. Purves collaborated with Katz in the 1970s and considers him a paragon of scientific excellence. I was admitted to Duke and, a year later, moved to Durham, North Carolina, hoping to study with Purves or LaMantia, who was there too as a new professor. When I arrived at Duke, Purves was about to make a major change: moving away entirely from studying the brain itself. This seemed kind of crazy after so much success with discoveries about the developing nervous system, building an enviable career and becoming a sought-after leader in the field. But Purves’s restless instinct arose again and he switched his focus, this time to study perception. He had a hunch that the great advances wowing people about brain anatomy and the function of circuits therein were not going to be enough to make it clear how the brain works in actually guiding human behaviours.
The origin of the hunch was in philosophy, which Purves had majored in as an undergraduate. The philosopher George Berkeley (1685-1753) had noticed that three-dimensional objects of radically different sizes and distances can project images of exactly the same size onto the retina (the sensory wall in the back of the eye), and only in two dimensions – a puzzle now known as the inverse optics problem. This is why framing a distant human’s whole body between your two fingers, seemingly able to crush them, is amusing. It uses forced perspective to imply an impossibility. The implication of the inverse problem is profound. It means that the information about the object (the source) coming into our brain is uncertain, incomplete, partial. As a solution to the inverse problem, the scientist Hermann von Helmholtz (1821-94) proposed that perception relied on learning from experience. We learn about objects through trial and error, and make inferences about any ambiguous image. Thus, since we have no experiences with Lilliputian human beings, we can infer that the tiny human in the forced-perspective example is actually far away. Purves took the seed of Helmholtz’s idea – that our perception depends on experience – and built an entire research programme around it. Since the mid-1990s, he and his collaborators have systematically analysed a variety of visual illusions in brightness, contrast, motion, and geometry. They have shown that our perceptions are experience-based constructions, not accurate reflections of their sources in the real world. The example of ‘red’ from the beginning of this essay is based on his colour work. Purves and his collaborator Beau Lotto would generate two identically coloured ‘target’ squares on a computer screen but give them backgrounds of different colours. The backgrounds would make the two squares look like they were different colours (even though they were actually identical, as measured by a spectrophotometer). Then, participants were asked to adjust the hue, saturation and brightness (the same controls on your phone’s camera app) of the target squares until they looked identical. Each participant’s adjustments were quantified and used as a difference measure between perception and reality. Ultimately, Purves’s research led to the conclusion that the brain functions on a wholly empirical basis. We construct our perception of the world through our past experiences in that world. This is a radical departure from the long-standing prevailing orthodoxy that the brain extracts features from objects and other sensory sources only to recombine them to guide our behaviour. Instead of extracting features and combining them in the brain (red+round = apple), Purves argues that it is our learned associations among an event or feature of the world, the context in which it appeared and the consequences of our subsequent actions that build our umwelt, our self-centred world. The research of Purves and his collaborators showed that our ability to perceive accurately is largely based on past experiences and learned associations. This means that we must learn about the space around us, the objects in it and other aspects of perception; these are not innate but developed through interaction with the environment. This all seems very reasonable. The environment is always in flux, with different challenges at different times for any animal.
You wouldn’t want your brain fine-tuned to an environment in which you no longer live. Equally, it also wouldn’t make sense for a species to build each individual’s brain as a tabula rasa, to start from scratch with every generation. Purves’s findings and interpretation lead to a more philosophical puzzle. To what extent is the ‘environment’ that the brain is trying to make sense of actually ‘out there’ beyond our heads? Is there a real reality? Purves has shown that, even if there is a real reality, we don’t perceive much of it… or at least we don’t have a universal way of perceiving it. For example, not all humans see the same colours the same way. There are two reasons for this. One is that colour and its interpretation depend highly on environmental factors. The other is that perception also depends on experience. Experiences depend on your interaction with specific environments. Do you live in the sea, on land, in burrows, in a nest or in a climate-controlled house? Do you have vision, or are you blind? What do your physiology and anatomy allow you to perceive and interact with? What have you seen before? Your perception and interpretation of the world, and indeed those of other animals, depend on the answers to these types of questions. In her memoir Pilgrim at Tinker Creek (1974), Annie Dillard writes about coming across a book on vision, which shows what happens when blind humans of all ages are suddenly given the ability to see. Does experience really determine how we see and act in the world? That book was Space and Sight (1960). In it, Marius von Senden describes how patients who were blind at birth because of cataracts saw the world when those cataracts were removed. Were they able to see the world as we see it, those of us with vision since birth? No. Most patients did not. In one example from the book, Dillard recounts: Before the operation a doctor would give a blind patient a cube and a sphere; the patient would tongue it or feel it with his hands, and name it correctly. After the operation the doctor would show the same objects to the patient without letting him touch them; now he had no clue whatsoever what he was seeing. There is even an example of a patient finally ‘seeing’ her mother but at a distance. Because of a lack of experience, she failed to understand the relationship between size and distance (forced perspective) that we learn from experience with sight. When asked how big her mother was, she set her two fingers a few inches apart. These types of experiments (which have been replicated in various ways) show just how important experience and learned associations are to making sense of the world. Today, Purves has enough studies to show an operating principle of the nervous system or – more cautiously, he would say – ‘how it seems to work’. The function of the nervous system is to make, maintain and modify neural associations to guide adaptive behaviours – those that lead to survival and reproduction – in a world that sensory systems cannot accurately capture. It is not a stretch to link Purves’s work, all the way back, on trophic theory to the current ideas. A biological agent must assemble, without any blueprint or instructions, a nervous system that matches the shape and size of a changing body. This nervous system, paired with sensory organs that filter the world in peculiar ways, must somehow process the physical states of the world in order to guide behaviour. 
Similar principles – neural activity, changing synaptic connections – that guided development also guide our ongoing perceptions of an ever-changing world. We use our individual experiences to do this guiding. If we happen to perceive or interpret events like other human beings, it is because we have similar bodies and shared similar experiences at some point. In my experience, Purves is a paragon of scientific excellence. Recently, I asked Purves how he thought of the arc of his career, and his view was very different from my perception of it. From my perspective, it seems that Purves’s question is always ‘How is a nervous system built?’ Addressing this question took him to increasingly large scales: from trophic theory and neural connections in the peripheral nervous system to neural construction of the brain (iterated patterns; growth of the neocortex) to the relationships between brain-area size and perceptual acuity to the construction of ‘reality’ via experience. I asked him what he thought about this narrative, and he responded: That is one way of framing it, but I don’t really see a narrative arc. As you know, one’s work is often driven by venal/trivial considerations such as what research direction has a better chance of winning grant support or addressing the popular issues of the day. The theme you mention (how nervous systems are built) was not much in my thinking, although in retrospect that narrative seems to fit. Neither one of us is wrong. In fact, we are interpreting the body of work according to our own experiences and how it best suits our needs, just as his research would suggest. Purves’s remarkable research insights are the product of his distinctive approach to science. A popular approach in neuroscience is to identify one problem early in a career and to just keep plugging away, learning more and more details about it. Then, maybe acquire an exciting new technique – like fMRI in the 1990s or optogenetics in the 2000s – to investigate the same problem in a different way. Or adopt the new technique and then search for a new question that could be answered by that method. Another approach would be to just apply the method, collect some data and only then ask what story can be told with those data. None of these approaches is ‘wrong’, but Purves’s scientific approach is in stark contrast to them. He first identifies a big, interesting question, one that could possibly have a ‘yes’ or ‘no’ answer, and then finds whatever means are necessary to find the answer. To put it another way, there is a lot of thought behind his work and approach, and there is a lot of thought about what any findings may mean in the big picture of brain science. Purves is always engaged. Very few scientists produce original, influential work in multiple domains of their field. Dale Purves has achieved major advances in our understanding of brain development, from small circuits to big ones, and from bodily experience to a new way of thinking about how the brain works. Source of the article

GOATReads: History

7 Ways the Printing Press Changed the World

In the 15th century, an innovation enabled people to share knowledge more quickly and widely. Civilization never looked back. Knowledge is power, as the saying goes, and the invention of the mechanical movable type printing press helped disseminate knowledge wider and faster than ever before. German goldsmith Johannes Gutenberg is credited with inventing the printing press around 1436, although he was far from the first to automate the book-printing process. Woodblock printing in China dates back to the 9th century and Korean bookmakers were printing with moveable metal type a century before Gutenberg. But most historians believe Gutenberg’s adaptation, which employed a screw-type wine press to squeeze down evenly on the inked metal type, was the key to unlocking the modern age. With the newfound ability to inexpensively mass-produce books on every imaginable topic, revolutionary ideas and priceless ancient knowledge were placed in the hands of every literate European, whose numbers doubled every century. Here are just some of the ways the printing press helped pull Europe out of the Middle Ages and accelerate human progress. 1. A Global News Network Was Launched Gutenberg didn’t live to see the immense impact of his invention. His greatest accomplishment was the first print run of the Bible in Latin, which took three years to print around 200 copies, a miraculously speedy achievement in the day of hand-copied manuscripts. But as historian Ada Palmer explains, Gutenberg’s invention wasn’t profitable until there was a distribution network for books. Palmer, a professor of early modern European history at the University of Chicago, compares early printed books like the Gutenberg Bible to how e-books struggled to find a market before Amazon introduced the Kindle. “Congratulations, you’ve printed 200 copies of the Bible; there are about three people in your town who can read the Bible in Latin,” says Palmer. “What are you going to do with the other 197 copies?” Gutenberg died penniless, his presses impounded by his creditors. Other German printers fled for greener pastures, eventually arriving in Venice, which was the central shipping hub of the Mediterranean in the late 15th century. “If you printed 200 copies of a book in Venice, you could sell five to the captain of each ship leaving port,” says Palmer, which created the first mass-distribution mechanism for printed books. The ships left Venice carrying religious texts and literature, but also breaking news from across the known world. Printers in Venice sold four-page news pamphlets to sailors, and when their ships arrived in distant ports, local printers would copy the pamphlets and hand them off to riders who would race them off to dozens of towns. Since literacy rates were still very low in the 1490s, locals would gather at the pub to hear a paid reader recite the latest news, which was everything from bawdy scandals to war reports. “This radically changed the consumption of news,” says Palmer. “It made it normal to go check the news every day.” 2. The Renaissance Kicked Into High Gear The Italian Renaissance began nearly a century before Gutenberg invented his printing press when 14th-century political leaders in Italian city-states like Rome and Florence set out to revive the Ancient Roman educational system that had produced giants like Caesar, Cicero and Seneca. One of the chief projects of the early Renaissance was to find long-lost works by figures like Plato and Aristotle and republish them. 
Wealthy patrons funded expensive expeditions across the Alps in search of isolated monasteries. Italian emissaries spent years in the Ottoman Empire learning enough Ancient Greek and Arabic to translate and copy rare texts into Latin. The operation to retrieve classic texts was in action long before the printing press, but publishing the texts had been arduously slow and prohibitively expensive for anyone other than the richest of the rich. Palmer says that one hand-copied book in the 14th century cost as much as a house, and libraries cost a small fortune. The largest European library in 1300 was the university library of Paris, which had 300 total manuscripts. By the 1490s, when Venice was the book-printing capital of Europe, a printed copy of a great work by Cicero only cost a month’s salary for a schoolteacher. The printing press didn’t launch the Renaissance, but it vastly accelerated the rediscovery and sharing of knowledge. “Suddenly, what had been a project to educate only the few wealthiest elite in this society could now become a project to put a library in every medium-sized town, and a library in the house of every reasonably wealthy merchant family,” says Palmer. 3. Martin Luther Becomes the First Best-Selling Author There’s a famous quote attributed to German religious reformer Martin Luther that sums up the role of the printing press in the Protestant Reformation: “Printing is the ultimate gift of God and the greatest one.” Luther wasn’t the first theologian to question the Church, but he was the first to widely publish his message. Other “heretics” saw their movements quickly quashed by Church authorities and the few copies of their writings easily destroyed. But the timing of Luther’s crusade against the selling of indulgences coincided with an explosion of printing presses across Europe. As the legend goes, Luther nailed his “95 Theses” to the church door in Wittenberg on October 31, 1517. Palmer says that broadsheet copies of Luther’s document were being printed in London as quickly as 17 days later. Thanks to the printing press and the timely power of his message, Luther became the world’s first best-selling author. Luther’s translation of the New Testament into German sold 5,000 copies in just two weeks. From 1518 to 1525, Luther’s writings accounted for a third of all books sold in Germany and his German Bible went through more than 430 editions. 4. Printing Powers the Scientific Revolution The English philosopher Francis Bacon, who’s credited with developing the scientific method, wrote in 1620 that the three inventions that forever changed the world were gunpowder, the nautical compass and the printing press. For millennia, science was a largely solitary pursuit. Great mathematicians and natural philosophers were separated by geography, language and the sloth-like pace of hand-written publishing. Not only were handwritten copies of scientific data expensive and hard to come by, they were also prone to human error. With the newfound ability to publish and share scientific findings and experimental data with a wide audience, science took great leaps forward in the 16th and 17th centuries. When developing his sun-centric model of the solar system in the early 1500s, for example, Polish astronomer Nicolaus Copernicus relied not only on his own heavenly observations, but on printed astronomical tables of planetary movements.
When historian Elizabeth Eisenstein wrote her 1980 book about the impact of the printing press, she said that its biggest gift to science wasn’t necessarily the speed at which ideas could spread with printed books, but the accuracy with which the original data were copied. With printed formulas and mathematical tables in hand, scientists could trust the fidelity of existing data and devote more energy to breaking new ground. 5. Fringe Voices Get a Platform “Whenever a new information technology comes along, and this includes the printing press, among the very first groups to be ‘loud’ in it are the people who were silenced in the earlier system, which means radical voices,” says Palmer. It takes effort to adopt a new information technology, whether it’s the ham radio, an internet bulletin board, or Instagram. The people most willing to take risks and make the effort to be early adopters are those who had no voice before that technology existed. “In the print revolution, that meant radical heresies, radical Christian splinter groups, radical egalitarian groups, critics of the government,” says Palmer. “The Protestant Reformation is only one of many symptoms of print enabling these voices to be heard.” As critical and alternative opinions entered the public discourse, those in power tried to censor them. Before the printing press, censorship was easy. All it required was killing the “heretic” and burning his or her handful of notebooks. But after the printing press, Palmer says it became nearly impossible to destroy all copies of a dangerous idea. And the more dangerous a book was claimed to be, the more people wanted to read it. Every time the Church published a list of banned books, the booksellers knew exactly what they should print next. 6. From Public Opinion to Popular Revolution During the Enlightenment era, philosophers like John Locke, Voltaire and Jean-Jacques Rousseau were widely read among an increasingly literate populace. Their elevation of critical reasoning above custom and tradition encouraged people to question religious authority and prize personal liberty. Increasing democratization of knowledge in the Enlightenment era led to the development of public opinion and its power to topple the ruling elite. Writing in pre-Revolution France, Louis-Sébastien Mercier declared: “A great and momentous revolution in our ideas has taken place within the last thirty years. Public opinion has now become a preponderant power in Europe, one that cannot be resisted… one may hope that enlightened ideas will bring about the greatest good on Earth and that tyrants of all kinds will tremble before the universal cry that echoes everywhere, awakening Europe from its slumbers.” “[Printing] is the most beautiful gift from heaven,” continues Mercier. “It soon will change the countenance of the universe… Printing was only born a short while ago, and already everything is heading toward perfection… Tremble, therefore, tyrants of the world! Tremble before the virtuous writer!” Even the illiterate couldn’t resist the attraction of revolutionary Enlightenment authors, Palmer says. When Thomas Paine published “Common Sense” in 1776, the literacy rate in the American colonies was around 15 percent, yet there were more copies printed and sold of the revolutionary tract than the entire population of the colonies.
7. Machines ‘Steal Jobs’ From Workers The Industrial Revolution didn’t get into full swing in Europe until the mid-18th century, but you can make the argument that the printing press introduced the world to the idea of machines “stealing jobs” from workers. Before Gutenberg’s paradigm-shifting invention, scribes were in high demand. Bookmakers would employ dozens of trained artisans to painstakingly hand-copy and illuminate manuscripts. But by the late 15th century, the printing press had rendered their unique skillset all but obsolete. On the flip side, the huge demand for printed material spawned the creation of an entirely new industry of printers, brick-and-mortar booksellers and enterprising street peddlers. Among those who got their start as a printer's apprentice was future Founding Father Benjamin Franklin. Source of the article

GOATReads:Politics

María Corina Machado’s peace prize follows Nobel tradition of awarding recipients for complex reasons

Few can doubt the courage María Corina Machado has shown in fighting for a return to democracy in Venezuela. The 58-year-old politician and activist is the undisputed leader of the opposition to Nicolás Maduro – a man widely seen as a dictator who has taken Venezuela down the path of repression, human rights violations and increasing poverty since becoming president in 2013. Maduro is widely believed to have lost the 2024 presidential election to rival Edmundo González, a candidate standing in for Machado, yet still claimed victory. Machado has been in hiding since the fraudulent vote. And her courage in having participated in an unfair contest and in exposing Maduro’s fraud by publishing the true vote tallies on the internet surely made Machado stand out to the Nobel committee. Indeed, in making Machado the 2025 recipient of the Nobel Peace Prize, organizers stated they were recognizing her “tireless work promoting democratic rights for the people of Venezuela and for her struggle to achieve a just and peaceful transition from dictatorship to democracy.” But as a scholar of Venezuela’s political process, I know that is only part of the story. Machado is in many ways a controversial pick, less a peace activist than a political operator willing to use some of the trade’s dark arts for the greater democratic good. Joining a controversial list of laureates Of course, many Nobel Peace Prize awards generate controversy. The prize has often been bestowed on great politicians over activists. And sometimes the prize’s winners can have complex pasts and very non-peaceful resumes. Past recipients include Henry Kissinger, who as U.S. secretary of state and Richard Nixon’s national security adviser was responsible for the illegal bombing of Cambodia, supporting Indonesia’s brutal invasion of East Timor and propping up dictators in Latin America, among many other morally dubious actions. Similarly, former Palestine Liberation Organization leader Yasser Arafat and Israeli Prime Minister Menachem Begin were both awarded the prize, in 1994 and 1978 respectively, despite their past association with violent activities in the Middle East. The Nobel Committee often seems to use these awards not to celebrate past achievements but to affect the future course of events. The nods to Begin and Arafat were, in this way, used to encourage the Middle East peace process. In fact, sometimes, the peace prize is seemingly bestowed as a sign of approval for a break from the past. Barack Obama won his in 2009 despite only being nine months into his presidency. It was taken by many as a rejection of the previous presidency of George W. Bush, rather than recognition of Obama’s limited achievements at that time. In 2016, Colombian President Juan Manuel Santos was awarded the Nobel Peace Prize just days after his peace plan was rejected in a referendum. In that instance, the committee seemed to want to give his efforts a push just after a major setback. Democratic path or dark arts? So what should be made of the Nobel Peace Prize committee’s decision to recognize opposition to Maduro now? Certainly Machado’s profile is ascendant. In her political career, she has participated in elections – winning a seat in the National Assembly in 2010 – but boycotted many more. She also boycotted negotiation processes, suggesting instead that foreign intervention was the only way to remove Maduro.
In 2023 she returned to the electoral path and steadfastly mobilized the Venezuelan population for opposition primaries and presidential elections, even after her candidacy was disqualified by the government-controlled electoral authority, and innumerable other obstacles were put in her path. The campaign included spearheading caravans and events across the country at significant personal risk. However, much of her fight since then has been via less-democratic means. Machado has shunned local and regional elections, suggesting there was no sense in participating until the government honored the results of the 2024 presidential election. She has also again sought international intervention to remove Maduro. Over the past year she has aggressively promoted the discredited theory that Maduro is in control of the Tren de Aragua gang and is using it to invade the United States – a narrative gladly accepted and repurposed by U.S. President Donald Trump. In addition to being the expressed motivation for a U.S. military buildup off the coasts of Venezuela, this theory has also been the central justification cited by the Trump administration for using the Alien Enemies Act to deport, without due process, 238 Venezuelan men to a horrific prison in El Salvador. Relations with Trump The Nobel Peace Prize could help unify the Venezuelan opposition movement, which over the past year has begun to fray over differences in strategy, especially with respect to Machado’s return to electoral boycotts. And it will certainly draw more international attention to Venezuelans’ struggle for democracy and could galvanize international stakeholders to push for change. What it will mean in terms of Trump’s relationship to Machado and Venezuela is yet to be seen. Her main connection with the administration is through Secretary of State Marco Rubio, who has aggressively represented her views and is pushing for U.S. military intervention to depose Maduro. Awarding Machado the prize could strengthen Trump’s resolve to seek regime change in Venezuela. Or, if he feels snubbed by the Nobel committee after very vocally lobbying to be awarded the peace prize himself, it could be a wedge between the U.S. president and Machado. Machado seems to understand this. After not mentioning him in her first statement after the award was announced, she has since mentioned him multiple times, even dedicating the prize to both the Venezuelan people and Trump. Trump has subsequently called to congratulate her. A game changer? Perhaps not To the degree that the Nobel Peace Prize is not just a model of change but a model for change, the decision to award it to Machado could conceivably affect the nature of Venezuela’s struggle against authoritarianism, leading her to continue to seek the restoration of democracy with a greater focus on reconciliation and coexistence among all Venezuelans, including the still politically relevant followers of the late Hugo Chávez. Whatever the impact, it probably will not be game-changing. As we have seen with other winners, the initial glow of public recognition is quickly consumed by political conflict. And in Venezuela, there is no easy way to translate this prize into real democratic progress. While Machado and other Venezuelan democrats may have more support than ever among global democrats, Nicolás Maduro controls all of Venezuela’s institutions, including the armed forces and the state oil company, which, even when sanctioned, provides substantial resources.
Maduro also has forged strategic alliances with China, Russia and Iran. The only way one can imagine the restoration of democracy in Venezuela, with or without military action, is through an extensive process of negotiation, reconciliation, disarmament and justice that could lay the groundwork for coexistence. This Nobel Peace Prize could position Machado for this task. Source of the article

GOATReads: Psychology

The major schools of thought in psychological counseling

As a professional practice, psychological counseling has evolved through contributions from legends who enriched the domain with their research and practice, and in the process, created the schools of thought that are followed by practitioners even today. Here are the most prominent ones among them: The Albert Ellis Way: Albert Ellis’s approach, REBT – Rational Emotive Behavior Therapy – postulates that people’s emotional & behavioral problems are caused by erroneous beliefs about situations they are involved in. To lead a fulfilling life, these disturbance-causing beliefs have to be challenged and changed. The Sigmund Freud Way: Sigmund Freud’s ‘psychoanalytic’ approach says that cognition & behavior are determined by early experiences and instinctual drives that are rooted in a person’s unconscious but are denied entry into consciousness by psychological defense mechanisms. This conflict between the conscious and the unconscious is at the heart of emotional disturbances & conflicts. The cure is to bring the ‘content of the unconscious’ into consciousness through therapeutic intervention. The Carl Rogers Way: Carl Rogers developed ‘person-centered therapy’, whose basic premise is that a suffering person already possesses all that is needed to overcome that suffering and flourish. The therapist’s job is to facilitate the person’s natural self-actualizing tendency through three factors – ‘unconditional positive regard’, genuineness and empathetic understanding. The Fritz Perls Way: Perls’s approach is popularly known as ‘Gestalt Therapy’. Rather than focusing on blocks and unfinished business of the past, it encourages the client to be ‘here and now’ and utilize the resultant ‘mindfulness’ to become aware of what he or she is doing. This is what triggers the willingness and ability to change for the better. This approach banks on the proposition that when the client is fully & creatively alive, they are better positioned to make changes and face challenges. The Aaron Beck Way: Aaron Beck’s approach, CBT – Cognitive Behavioral Therapy – is ‘problem-focused & action-oriented’ and suggests that psychological problems are caused by maladaptive patterns of interaction between the trio of ‘feelings, thoughts and behavior’. CBT addresses these patterns by teaching new information-processing skills and coping mechanisms to the person. The Erik Erikson Way: Erikson’s theory of ‘psychosocial development’ identifies a series of eight stages that an individual passes through in life. Each stage has a positive and a negative outcome. While the positive outcome aids the healthy development of the individual, the negative outcome creates incongruence & deficits, which then lead to emotional & behavioral problems. The counselor helps the client not only understand ‘what went wrong at which stage’ but also make choices that compensate for what is lost. The counselor also facilitates the alignment of client behavior for positive outcomes in present and future stages. The William Glasser Way: Inspired by the ideas of the great Alfred Adler, William Glasser developed ‘Reality Therapy’, which is based on 3 R’s – realism, responsibility and right-and-wrong. It is a present-day non-symptom-focused approach that first makes the client realize how their perceptions & imaginations distract them from the choices they control in life, and then helps the client make choices that align with the vision they have for their lives. It also emphasizes building meaningful relationships to create a conducive psychosocial ecosystem.
The Michael White Way: Michael White, along with David Epston, developed ‘Narrative Therapy’, which postulates that a person’s problems are strongly shaped by the story they tell about themselves, and the counselor’s job is to help the person create stories about themselves that are constructive and helpful. Through this process of ‘re-authoring identity’, the counselor helps the client identify which values are important to them and how they can utilize their knowledge & skills to live these values. And then there is the ‘integrative’ or ‘holistic’ way, in which the counselor or therapist doesn’t limit themselves to any one school of thought but meticulously blends elements from various approaches to suit each client’s specific and individual needs. Source of the article

The Origin of Love and Nightmares

In an unnamed, imaginary city, the government comes up with an idea to salvage its failing economy and address its latent social crisis: literally sewing living human beings together. Through the “Conjoinment Act,” the authorities match individuals by “height, weight, skin color, age, and metabolic rate” and surgically join them at the shoulder or chest, purporting to free them for life from fear and futility. “Only by being with another person,” explains a psychologist, “can we experience the cycles of joy, heartbreak, harmony, and conflict necessary to arrive at true fulfillment.” Mending transcends metaphor in Hon Lai Chu’s Mending Bodies: to mend is the 縫 of the novel’s original Chinese title《縫身》. 縫 invokes the reparation of sewing, a needle and thread binding disparate bodies—身—together. Yet consider the affects of conjoinment: the collapse of your individuality, the complete annihilation of your personal spacetime, and the physical toll of the wound crawling down your torso, leaving you “too tired to care about matters of society.” In Hon’s city, conjoined people take a new name after their operation: “It’s both of us or neither of us,” says the first-person narrator’s partner, formerly named Nok. Another character voices the suspicion that conjoinment is “an elaborate political ploy to make citizens forget about their long campaign for the city’s independence.” All this appears in Hon’s Mending Bodies, published for the first time in English in 2025 in a supple, measured translation by Jacqueline Leung. Both Hon and Leung are from Hong Kong: born in 1978, Hon is one of the territory’s most prolific and admired contemporary authors, and Leung is one of its most talented translators. How could Hon’s “long campaign,” then, not jolt your memory of headlines from Hong Kong in 2014 and 2019? And yet《縫身》was first published in 2010, before Hong Kong’s struggle for democracy made international news or even boiled to fever pitch in the city. Is it possible not to read Mending Bodies as an allegory for Hong Kong, prescient of the city’s present political precarity? Can we separate the literature from the context, the conjoined from each other? Detached from its accompanying sociopolitical structures, “conjoinment” is a union between two individuals, like marriage or cohabitation. “Apparently, everyone knows the story of the two halves,” writes Roland Barthes, in a fragment titled “Union” in A Lover’s Discourse, “trying to join themselves back together.”1 He means Aristophanes’s fourth-century BC account of the origins of love, first related in Plato’s Symposium. In this story, the earliest humans—who possess two-headed, eight-limbed bodies as a sign of their wholeness—are bisected by the gods, as punishment for trying to topple the divinities. The two halves now spend the rest of their lives attempting to reunite; hence, “love.” This original human, praises Barthes, is a “figure of that ‘ancient unity.’” Yet in trying to draw that remarkable figure, all Barthes’s speaker can muster is a “monstrous, grotesque, improbable body. Out of dreams,” he concludes, “emerges a farce figure.”2 This “farce figure” appears in the very first line of Mending Bodies, when it rings the doorbell. Hon spares no time plunging us into her dreams, whose pathetic fallacy Leung renders in a haunting, minorly surreal key: “It was an afternoon in plum rain season, lush mold blooming all around.” Even though the protagonist herself has been conjoined, she—like Barthes—fixates on the grotesque.
On seeing her university friend, May, materialize conjoined from behind the front door, “I couldn’t help but feel shocked [by] the way their bodies clung together … chests drilled and sewn together until their skin, muscles, cartilages, and tissues were connected as if by a small, short bridge.” Thus, Mending Bodies inverts the arc of Aristophanes’s mythos of desire. Here, the one-headed, four-limbed humans—symbols of our lost freedom—are now transformed into two-headed, six-limbed creatures by the government. Or could we say that Mending Bodies perpetuates the myth: a transposed, contemporary sequel to Aristophanes’s tale? What Mending Bodies charts could be the reunion of these once-bisected humans: morally redeemed by their successful return to “ancient unity” and yet “monstrous, grotesque, improbable,” even in Hon’s fictional city where conjoinment is the norm. Yet conjoinment is a union story that, in Westernized imaginations of democratic freedom, monstrously forsakes the citizen’s freedom to choose. In Hon’s city-building—standout for its cogent, poised materialization of social structures in a mode akin to social science fiction—“[conjoinment] was not [the narrator’s] decision, nor was it any individual’s decision.” Instead, conjoinment figures as a “collective responsibility” to be borne without protest or complaint; it is the duty of an individual active in the national framework, just like “being born and becoming a person, a woman, a man.” Under the auspices of mending—the ethical weight of its value judgment—conjoinment espouses the public ideal of the one-stop happy ending: the static beatification of those who physically unite. But as Anne Carson asks in her hermeneutics of desire, Eros the Bittersweet: “Was it the case that the round beings of [Aristophanes’s] fantasy remained perfectly content rolling about the world in prelapsarian oneness? No. They got big ideas and started rolling toward Olympus to make an attempt on the gods. “They began reaching for something else. So much for oneness.”3 Is it the case that the conjoined beings of Hon’s fantasy remain perfectly content in postlapsarian oneness, their bodies “mended”? No. In fact, we know this Carson-esque rejection from the outset. Our speaker is nostalgic for the days when she was just a nonconjoined “wisp of soul,” to the extent that she asks her conjoined other, Nok, to take sleeping pills so they can “dream different dreams.” In her pivot from Hon’s passive, first-person singular in the original Chinese, Leung translates the dissent from this lack of choice into the collective plural of the narrator’s fellow citizens: “Only a long time later, when I stepped into adulthood, did I learn that a certain ambivalence toward policies we had no power over was the last effective resort for protecting our remaining freedoms. If we obeyed as if it didn’t really matter, or carefully looked for the loopholes, we could secretly map the limits of their control.” Not only are the citizens of Hon’s world compelled to conjoin their bodies, they also cannot choose the body to whom that invasive surgery will irrevocably fasten them. Beyond the strictures of the law, this compulsion is enacted more persuasively through social normification and peer pressure. 
Mother, doctor, “the staff at the matching center … over the phone,” friends: all applaud the narrator’s conjoinment; and she “recognize[s] that they weren’t cheering for us but for themselves: us becoming part of the conjoinment population was an encouragement that strengthened their beliefs and validated their own choices.” These minor epiphanies and moments of recognition are a recurring motif in Mending Bodies, their echoes amplified by the somatic stakes of Hon’s ‘social science’ structure-building and her narrator’s acute observations. In their first year at university—before conjoinment—the narrator and May strip naked in their dorm room to conduct their regular activities, “read[ing] a book, [drinking] tea”: “That was the moment when we realized that people weren’t bound by the gaze and criticism of others, but the habits we had normalized ourselves.” Later, as our narrator leans into her fearful fascination with conjoinment, their friendship faces unmendable schisms. “It felt like [May] wasn’t taking me seriously, but a long time later, I understood that maybe she just lacked the courage to face things as they were.” By things, we understand the protagonist to mean conjoinment’s choicelessness, its faceless impositions, orchestrated by the matching center’s deceptive “chance occurrence. … Yet change”—concrete, irreversible—“always starts from small, unidentified moments—by the time I realized just how critical this was, I was already a poor swimmer flailing in a maelstrom.” Carson’s genius in Eros the Bittersweet is to visualize desire as a tripointed constellation: lover, beloved, and intransigent third party. These three are simultaneously magnetized to and repelled from each other by lack, the active vector that necessarily enables desire. From Carson’s poetic exegesis, we learn how to read the triangular dynamics of an erotic or psychic union: “Conjoined they are held apart. … The third component plays a paradoxical role for it both connects and separates.”4 Here, in Mending Bodies, that triangle is embodied not only in the conjoinment between our speaker and Nok, but in the fusional, bitter friendship between her and May. Between Nok, May, and the narrator, who is the third party? The answer is that no matter which side of the triangle you ponder, one point will always stand out. It is May who first veers toward conjoinment, lured by its promise of financial stability, even though she had denounced it, early in the novel’s chronology, as “groom[ing] to become part of a dominant population that pushes others to the periphery of our society.” Meanwhile, our narrator, now in her final year of university, begins to research her dissertation under the supervision of a slippery, legless professor nicknamed Foot, who counsels her: “Make the fear that paralyzes you in your sleep real. Stand before it—don’t turn away—and look.
Record it if you can.” And so the friction between her minor metanoias, the pressure to conform, and her paralyzing fear ushers our narrator toward the heart of her obsession, her dissertation topic: “Third Identities and Hidden Selves—the Faces of Conjoinment.” In it, she takes an anthropological slant to the “origins of conjoinment,” to explore “the shadow that still exists between two bodies when they are connected, how it flourishes and expands, and whether it is capable of birthing new selves.” Hon’s speaker recounts the psychological experiments of fear and self-actualization conducted by fictional researchers (one is called David Lynch), detailed in fictional monographs: “Therefore, the only thing that can grow between two bodies where distance is impossible is hatred.” She analyzes the cases of fictional Siamese twins: “According to Erich Fromm’s psychoanalyst theory, Evelyn had a masochistic character, meaning she would rather surrender her freedom than face the fear of taking control over her solitude. … But the question is, how is freedom defined?” (She does not mention Aristophanes’s myth but references fictional legends from non-European cultures.) Hon’s formal boldness sutures the supposed extracts of her narrator’s dissertation to her first-person, past-tense narrative, alternating from chapter to chapter. This metatextual conjoinment performs an extraordinary feat of mapping: that of the limits of narrative freedom. Hon’s metatextual narrative technique emphasizes the dissonant voices of distinct forms, impossible to seamlessly conjoin yet conditioned here by her choice of structure. Enmeshed in the psychic social architecture of repression and lack, the protagonist nevertheless decides to retell the story of conjoinment, and to do so in a way that contradicts the top-down, essentialist narrative pitched by the authorities. This discordancy juts out, particularly where active ambivalence is the norm of dissent. Teasing out the narrator’s sliver of putative freedoms, pitting her logic of epiphany against the politics of her city, Hon’s consummate world-building commends the speaker’s voltas of recognition to our lived reality. Aren’t we all the “poor swimmer flailing in a maelstrom”? Another meta-layer in Hon’s exploration of unmelding bodies is the novel’s take on insomnia, its evocation of the paradox between sleeplessness and dreams. In《縫身》’s afterword—untranslated into English—Hon briefly describes the chronic insomnia she experienced as she wrote the novel: how this sleeplessness quite literally, absurdly, seeped into her surface dreams; how she “transformed [her] dreams into the material of [her] novel,” bending her disorder into lucid prose, permeating the language and mood of another, unreal city.5 Similarly, dreams circulate through the nervous system of the novel as stories, memories, and immediate experience; as they fuse with the novel’s textual fabric, its interpenetrating structures of dream and reality crumble. This can be seen in Bak, a character in our narrator’s dreams. Bak mutates into something closer to a signifier of pervasion and transgression; indeed, his name in Hon’s Chinese original means “white,” as though he were a void of light. As our narrator’s insomnia consumes her nights, May schedules an appointment for her with a sleep therapist. Their first meeting is one of those “small, unidentified moments” that spiral into—by Hon’s deft design of growing ambiguity—a gripping, gloomy alarm, a total sense of abjection.
“I grew to understand,” translates Leung in the deceptively sedate pitch of another minor epiphany, “that in certain states of existence, to live was really just another form of death.” In Leung’s brilliant, punning transliteration of her and Hon’s Cantonese, the sleep therapist is named Lok—「樂」. Polysemous, pronounced Lok6 or Nok6 depending on context, “Lok” tends to be translated as “happiness”: “not … ‘Nok’ as in music, the sounds people use to numb their emotions.” Double-edged, contradictory, the contranym of 樂 conjoins corrosive meanings; or, personified in the sleep therapist’s ungraspable identity, the contrast between Lok and Nok conjures a slip into dreams, even nightmares. “The round beings” of Barthes’s farcical improbability also “began reaching for something else” in Carson’s retelling of Aristophanes’s “fantasy.”6 After her conjoinment with Nok, the societal aspiration of wholeness realized, what does our narrator begin to reach for? Reading Leung’s version of Mending Bodies in 2025—15 years after《縫身》’s publication—what do we begin to dream of? Our narrator, for her part, remains mired in the idea that “every sleep is, in fact, a temporary death.” But, in these 15 years, the limits of freedom on Hong Kong’s map have significantly contracted, and “sleep” and “dreaming” have come to mean something else since the pro-democracy movement of 2019. They are metaphors for distinct states of resistance against imposed origin narratives, against the real, encroaching superstructures of censorship and control. Thus, there is something terribly resonant in Leung’s post-2019 translation of Mending Bodies—especially considering Leung’s Americentric Anglophone audience, who might be most familiar with Hong Kong for its recent political predicaments. Yet the city of Leung’s translation has no name at all—not in Hon’s original, either. As such, it could, ostensibly, be any city on Earth, fictional or not. (Certainly, Hong Kong is not the only body politic to encounter imposed silences and revisions of its history.) Ultimately, Mending Bodies’s only links with Hong Kong are that both Hon and Leung are from there, and that some of its characters carry Cantonese names. That said, Hong Kong has become an apt prism through which to probe the skin tissue between state violence and victimization, and the widening wounds to personal freedom. In another book, 《黑日》 (Darkness under the Sun), published in Taipei in 2020 and as yet untranslated, Hon writes: “Solitude isn’t what saddens you. A lot of the time, solitude is a kind of freedom. What really sets you at a loss is that you have no way to be truly alone. There’s a part of you that’s always connected to a collective flesh-and-blood.”7 Published 10 years after Mending Bodies, Darkness under the Sun is a staggeringly sensitive nonfiction account of Hon’s experiences during the 2014 Umbrella Revolution and the 2019 pro-democracy movement. And, indeed, the book appears to treat a lot of the same material that Mending Bodies engages with: questions of social belonging, the right to choose, and human relationships in times of collective loss, mediated by the capitalistic, cutthroat conditions of working and writing in Hong Kong. As Mending Bodies’s narrator observes (on the website of an underground charity she encounters during her research): “SEPARATION IS A BASIC HUMAN RIGHT.” Yet there is a part of her that will always remain connected, that cannot be excised. The exceptional clarity of this lived paradox is what we begin reaching for. 
So much for conjoinment.

Roaming rocks

As geological sites go, this one is easy to miss. It’s just a low rise of exposed rock along a back road in northern Wisconsin, outside a town whose one claim to fame is a tavern that the gangster John Dillinger used as a hideout in the 1930s. Even though I’ve been to this outcrop many times before, I drive right past it on this autumn day and need to look for a place where it’s possible for three university vans to turn around. We manage to make the manoeuvre, come back from the other direction, and park on the shoulder. Students spill out of the vehicles, clearly underwhelmed, puzzled at why we’ve bothered to stop here. They don’t yet realise that this is a secret portal into Earth’s interior. I urge them to look closely, to get down on hands and knees for a few minutes, use their magnifying glasses, and then tell me what they see. There’s a little moaning – they’re getting hungry, and we’ve already seen a lot of rocks today – but everybody complies. Within a minute, I start to hear exclamations that make me smile: Hey, look at all that biotite! And tonnes of tiny red garnets! What are those bluish crystals – kyanite? The students’ initial disinterest turns to respect; they understand that the rocks have taken a journey no human ever could. We’ve talked in class about rocks like these: mica schists, and their improbable biographies; how they are emissaries bearing news from alien realms. But it’s different to see them up close, in their current resting state. By geological classification, these rocks are ‘metamorphic’, meaning that they have been transformed under punishing heat and pressure beneath the surface – and then, astonishingly, come back up. Unlike an igneous basalt crystallised from lava, or a sedimentary sandstone laid down by water, metamorphic rocks form in one environment, then go on journeys deep in the crust. This makes them the itinerant ‘travel writers’ of the rock world, returning to tell us about the restless, animate, hidden nature of the solid Earth. At each stage of their pilgrimages, they preserve a record of their experiences, and through them we can gain a glimpse of inaccessible subsurface worlds – places that we humans may never encounter directly. The metamorphic schists my students and I are analysing in Wisconsin would have had humble beginnings as mucky, muddy sediments – the weathered residuum of still older rocks – on an ancient seabed. As more sediment blanketed and buried them, these muds lost contact with the hectic surface world, the commotion of wind and waves. Under the growing weight of the overlying deposits, the mud was compelled to let go of its natal seawater, becoming ever denser and more compact, and finally solidifying into mudstone, or shale. Millions of years pass. The geography of the world changes with the neverending dance of tectonic plates. One day, the shale finds itself in the vice-like squeeze of colliding continents, folded deep into the interior of a mountain belt. The pressure at such depths is extreme. The fine clay minerals in the shale, now far from the shallow marine waters in which they formed, can no longer hold their shape. Their chemical bonds weaken, their grain boundaries become diffuse, and a remarkable transfiguration begins. The elements within, previously part of a rigid crystal scaffolding, are now free to wander. 
Atoms of aluminium, silicon, magnesium and iron, surprised at their unaccustomed mobility, form new alliances and reconfigure themselves as minerals comfortable at these depths and temperatures: shiny black biotite, wine-dark garnet, and sky-blue kyanite. Intellectually, I understand the cause of metamorphism: it’s the thermodynamic imperative for crystals to reconfigure themselves into forms that are stable under new temperature and pressure conditions, in the same way that powdery snow transforms with burial to glacial ice. Still, the process strikes me as deeply mysterious, a kind of natural alchemy. Metamorphic literally means ‘after-formed’, an apt description of these shape-shifting rocks. Prosaic mud reinvents itself as resplendent mica schist, dull limestones transubstantiate into milky marbles, sandstones are reincarnated as luminous quartzites – even though in their subterranean world there is no light to reveal their beautiful new guises. The mountains that harboured our schists eventually attained their maximum heights, and the tectonic action moved elsewhere. Erosion, intent on enforcing topographic egalitarianism, set to work dismantling them, focusing with special ferocity on the summits in the heart of the range. For the schists, this was the start of a slow pilgrimage back to the surface. Eventually they would feel the wind and sunlight again, testifying to anyone listening about the lofty peaks that once loomed here. I ask my students to recall an earlier stop about 50 km to the north, where we saw some dark-grey mudstones. When I tell them those rocks were the progenitors of these garnet-biotite-kyanite schists – part of the same stratum, but never buried to such depths – they are stunned, and even strangely moved; it’s like meeting the same person as a child and as an adult on one day. Knowing what these rocks once looked like makes their journey feel more real and remarkable. Most people would identify space as the ‘final frontier’, the last unexplored territory. But consider this asymmetry: high-altitude aircraft routinely fly 15 km above the surface of Earth, and two dozen humans have made the 384,000 km voyage to the Moon. Our satellites now clutter the outer edge of the atmosphere, and we’ve sent rovers to Mars. The Voyager 1 spacecraft left our solar system in 2012 and continues to hurtle into interstellar space. Yet no person has ever been much more than 4 km into the subsurface (the reach of the deepest mine, a gold operation in South Africa). Even our mechanical proxies haven’t gone much further. In a ridiculous Cold War competition, the USSR and NATO countries tried to best each other in deep drilling. The Soviets won that event, with a hole drilled into granites on Russia’s Kola Peninsula that managed to reach 12 km before the bit became ineffective, softened by geothermal heat. That’s little more than a long walk – about the distance from the north end of Central Park to the ferry terminals at Manhattan’s southern tip. We do have ways of making indirect inferences about the composition of Earth’s interior – most importantly, by observing how seismic waves propagate through rocks far below the surface. But the only physical samples we have of the lower crust and mantle are metamorphic rocks, like our Wisconsin schists, that have spent time at inaccessible depths and then helpfully made the long trek back up to share with us surface-dwellers what they witnessed.
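To put rough numbers on those depths, here is a minimal back-of-envelope sketch in Python of lithostatic pressure, P = ρgh, assuming a typical continental-crust density of about 2,800 kg/m³; the density and the 30 km ‘deep burial’ figure are illustrative round numbers, not values taken from the essay.

    # Rough lithostatic pressure, P = rho * g * depth.
    # Density is a commonly quoted average for continental crust, used only for illustration.
    rho = 2800.0   # kg/m^3
    g = 9.81       # m/s^2
    for depth_km in (4, 12, 30):   # deepest mine, deepest borehole, an illustrative deep-burial depth
        pressure_gpa = rho * g * (depth_km * 1000) / 1e9
        print(f"{depth_km:>2} km down -> roughly {pressure_gpa:.1f} GPa ({pressure_gpa * 10:.0f} kbar)")

On this crude estimate, even the record-setting Kola borehole reached only a few kilobars – a fraction of the pressures that kyanite-bearing schists are generally taken to record.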
In my teaching, I sometimes struggle to convey the weirdness of metamorphism. The three main categories of rock – igneous, sedimentary and metamorphic – are akin to different literary genres. Igneous rocks are like action-packed thrillers, telling dramatic tales of volcanism and roiling magma chambers. Sedimentary rocks are serious historical tomes, admirably thorough but sometimes dull. Metamorphic rocks, in comparison, have a much greater range of narrative arcs and defy easy categorisation. Their early chapters could be igneous or sedimentary, but then the action veers in a completely different direction, with the mineral protagonists finding themselves in alien territory and having to find ways to adjust. It’s as if, midway through a scholarly biography of Lord Nelson, the scene shifts to the seafloor and he becomes Captain Nemo. We’ve observed metamorphism – technically ‘solid-state recrystallisation’ of rocks – since at least the mid-1800s. However, our modern understanding is usually traced to work done in the 1910s and ’20s. Back then, geologists understood deep time, but they were only beginning to grapple with the idea that an internal engine fuelled by radioactive heat animates our planet. It was decades before the theory of plate tectonics would emerge and provide the explanatory framework for how mountains form and how rocks travel to new environments. In the Scottish Highlands, geologists were mapping a complexly deformed sequence of sedimentary rocks called the Dalradian Series. We now know that these rocks record the growth of the hemisphere-scale Caledonian-Appalachian mountain chain during the assembly of Pangaea, but at that time, when continents were thought to be rooted in place, the global context of the Highland rocks was not understood. During their mapping of the Grampian Region, geologists carefully recorded the minerals in a metamorphosed mudstone. These included biotite, garnet and kyanite, the same colourful characters my students recognised in the Wisconsin schist. At the time, geologists thought that metamorphism was caused solely by heat. So it was a surprise when they found no kyanite in similarly heated Dalradian mudstones just to the northeast, in Scotland’s Buchan district. Instead, the Buchan rocks were dotted with a cream-coloured mineral called andalusite, which sometimes occurs in the shape of a stubby cross. Why the difference? This seemed akin to putting two cakes made with the same batter in the oven at the same time and having them emerge with completely different flavours. One clue was that andalusite and kyanite have the same chemical composition and formula – Al2SiO5 – but different crystal structures. They are ‘polymorphs’ of each other, in the same way that graphite and diamond are both crystalline varieties of carbon but with distinct molecular forms. A second important observation was that kyanite is significantly denser than andalusite, indicating that its crystal structure is more compact – and formed under greater pressure. This was the solution to the Dalradian mystery: metamorphic reactions, and the metamorphic minerals that record them, are functions of both temperature and pressure. Around the same time in the 1910s-20s, working independently, the Finnish geologist Pentti Eskola had come to the same conclusion as his British counterparts.
Based on his deep knowledge of the Baltic Shield’s geology, Eskola plotted the relative positions of various metamorphic rocks on a graph of pressure vs temperature, creating the first, rough metamorphic phase diagram. He anticipated a time when the stability fields – the native habitats or ‘comfort zones’ – of metamorphic minerals could be determined quantitatively, and the obscure dialects of metamorphic rocks could be translated. A century later, Eskola’s vision has been achieved. It’s now possible to simulate the high pressure and temperature conditions of metamorphism in laboratory experiments and coax tiny amounts of minerals like garnet, kyanite and andalusite to form. We now have detailed metamorphic phase diagrams for rocks of all starting compositions, and their travel itineraries can be reconstructed in detail. Some metamorphic reactions – like the conversion of graphite to diamond – are primarily sensitive to pressure, with temperature playing only a secondary role. These reactions are known as ‘geobarometers’ – because the presence of one mineral or the other tightly constrains the pressures that the rocks endured. Conversely, temperature-sensitive but pressure-independent reactions are ‘geothermometers’, reliable indicators of past temperatures. Like explorers carefully keeping a log of their latitude and longitude, metamorphic rocks thus record their ‘coordinates’ in pressure-temperature space. This information, in combination with methods for dating minerals using natural radioactivity, makes it possible to track metamorphic rocks in space and time. Eskola would have been pleased and astonished. Under the microscope, metamorphic rocks are like exquisite illuminated manuscripts, with interlacing minerals chronicling their hidden, subterranean experiences. Crystals that grew in distinct episodes over time will develop concentric bands like tree rings, and early formed minerals may be engulfed or rimmed by later ones. On the way to becoming schist, a mudstone might first have been a slate, and garnets that overgrew the slate’s aligned minerals can preserve a record of that stage even when the slaty texture has been erased in the rest of the rock – like a tree that grew around the wires of an old fence that has long since been removed. Microscopic observations, together with laboratory experiments, have also shown that fluids in the crust – mainly water, but also carbon dioxide, and various elements dissolved in these phases – are as important as temperature and pressure in governing metamorphic reactions. Water’s presence dictates the temperatures and rates at which reactions occur, and on a watery planet this is the rule rather than the exception. Indeed, the one important scientific result of the absurd and expensive Cold War race to drill the deepest hole was the discovery that water (in a supercritical form, neither liquid nor gas) occurred even at the greatest depths reached. Some of my own work in western Norway, another part of the great Caledonian-Appalachian mountain chain, has shown that, in the complete absence of fluids, metamorphism may not happen at all. Just north of Bergen, rocks that were once deep in the heart of that great mountain range are exposed on the ice-scoured and windswept island of Holsnøy. Tiny Holsnøy is disproportionately famous among geologists because it provides a kind of ‘counterfactual’ glimpse of an Earth in which crustal rocks are devoid of water. 
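To make the idea of ‘coordinates’ in pressure-temperature space concrete, here is a deliberately simplified Python sketch of how a geobarometer and a geothermometer together pin down a single point; both boundaries are idealised as straight lines, and the coefficients are invented for illustration rather than real calibrations.

    # Each reaction boundary is idealised as a line P = intercept + slope * T
    # (P in kbar, T in degrees C). Coefficients are invented, for illustration only.
    def intersection(boundary_a, boundary_b):
        """Return the (T, P) point where two linear boundaries cross."""
        (a1, m1), (a2, m2) = boundary_a, boundary_b
        T = (a2 - a1) / (m1 - m2)   # solve a1 + m1*T = a2 + m2*T
        P = a1 + m1 * T
        return T, P

    barometer = (6.0, 0.001)      # nearly flat in T: a pressure-sensitive reaction
    thermometer = (-110.0, 0.2)   # very steep in P-T space: a temperature-sensitive reaction

    T, P = intersection(barometer, thermometer)
    print(f"Recorded conditions: about {T:.0f} C at about {P:.1f} kbar")

Real studies use thermodynamically calibrated reactions and full phase diagrams rather than straight lines, but the geometry is the same: two constraints with different slopes cross at one point, and that crossing is the rock’s logbook entry.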
The rocks on Holsnøy have the composition of basalt, the rock that makes up the ocean crust. They experienced two distinct mountain-building events, one about 900 million years ago and another around 400 million years ago – and in both cases were subjected to extreme metamorphic conditions. During the first event, the rocks experienced moderate pressures, at 25 km depth, but very high temperatures – close to their melting point – so hot that all water in hydrous minerals was ‘baked’ out, leaving an exceptionally dry rock mass. Five hundred million years later, these parched rocks found themselves even deeper in the crust – more than 45 km down – but at less extreme temperatures. At such depth, they should have completely converted, via metamorphic recrystallisation, to a dense, improbably beautiful rock called eclogite, with raspberry garnets set in a field of grass-green pyroxene (and a particular favourite of Eskola’s). But, oddly, eclogite on Holsnøy occurs only along narrow zones and bands constituting perhaps 30 per cent of the rock mass. The eclogite-forming metamorphic reactions were for some reason frozen in action, leaving converted and unconverted rock in juxtaposition and providing a rare, close-up, ‘real-time’ view of the metamorphic processes that happen too deep in the crust for us ever to witness. How could the rocks on Holsnøy have ignored the thermodynamic edict, under very literal pressure, to recrystallise entirely to eclogite? The answer seems to be the extraordinary dryness of the rocks at the time they were at eclogite-forming depths. Without water, atoms can move through rocks only by the arduous process of diffusion, which is tortuously slow, like driving a car in urban gridlock. Metamorphic reactions in the complete absence of water would be so sluggish that they would not happen even on geologic timescales. In contrast, when water is present, atoms can slip into solution and hitch a ride, akin to hopping on the subway and moving swiftly through the city. Transported quickly to new sites, the elements in rocks can easily reorganise themselves, and build minerals that are stable under the new conditions. We think that, on Holsnøy, eclogite-forming reactions were suppressed by the lack of water until several large earthquakes shattered the unusually dry rocks and allowed limited amounts of external fluids to enter along fractures, forming the localised lenses of eclogite that are so odd and remarkable. Equally remarkable is the fact that these rocks ever made it all the way back to the surface, via tectonic uplift and tenacious erosion, to be marvelled at by human pilgrims. This might seem an arcane story about weird rocks from a remote place, but it illuminates some fundamental truths about Earth’s plate tectonic system, and in particular its signature process of subduction. Along subduction zones, like the one sitting off Japan’s east coast, basaltic ocean crust plunges into the mantle. Subduction of the ocean floor is Earth’s brilliant recycling and rejuvenation system, and without it our planet would be utterly different. However, slabs of ocean crust would be too buoyant to sink very far into the mantle if they did not convert to their much denser metamorphic form: eclogite. To me, one of the strangest things about Earth is that basalt – which is derived from the mantle – can be recrystallised at high pressure into eclogite, which is denser than the mantle.
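The buoyancy argument can be made concrete with round-number densities – approximate, commonly cited figures, not ones given in the essay – in a few lines of Python:

    # Whether a slab keeps sinking depends on its density relative to the mantle.
    # Densities (g/cm^3) are rough averages, for illustration only.
    densities = {"basaltic ocean crust": 3.0, "eclogite": 3.5, "upper-mantle peridotite": 3.3}
    mantle = densities["upper-mantle peridotite"]
    for rock in ("basaltic ocean crust", "eclogite"):
        contrast = densities[rock] - mantle
        verdict = "dense enough to keep sinking" if contrast > 0 else "too buoyant to sink far"
        print(f"{rock}: {contrast:+.1f} g/cm^3 relative to the mantle -> {verdict}")

The sign of that small density contrast flips when basalt recrystallises to eclogite, and on that flip the whole subduction system turns.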
Our work on Holsnøy suggests that, in the absence of water, the basalt-eclogite conversion would not happen, deep subduction would be impossible, and the planet’s tectonic system would be utterly different. Water-mediated metamorphism, in other words, is the key to plate tectonics, which rejuvenates topography, replenishes the atmosphere, and keeps the planet in a constant state of renewal. More generally, given the crucial role of water in their formation, metamorphic rocks of any type (except those created by brute force in meteorite impacts) may be unique in the solar system. And in the absence of plate tectonics, which is also unique to Earth, opportunities for rocks to travel to new environments, and thereby be transformed, would be limited anyway. Moon rocks collected in the Apollo missions, and rare meteorites that have come to us from Mars, attest to our harrowing bombardment by space debris but have no stories to tell about adventures on their own worlds. Despite their long trips to Earth, these rocks are strangely naive, inexperienced. Only Earth rocks have the chance to study abroad, wander the globe, dive into the crust and recreate themselves. In fact, metamorphic rocks are the dominant type on Earth, because most rocks, if they are around long enough, will find themselves in new environments. It is our good fortune to have such an abundance of metamorphic rocks in our midst, with insights to share not only about their origins but everything they’ve experienced since. It’s not too much of a stretch to say that they are responsive, even sentient, in that they ‘perceive’ their surroundings and change in response. Their stories are genuinely epic: the journey of a rock like our Wisconsin schist from the surface to the centre of a mountain belt and back echoes the narrative arc of katabasis and anabasis in Greek myth: the protagonist’s descent into the Underworld, the tribulations experienced there, and the eventual return, with hard-won wisdom, to the land of the living. Metamorphic rocks embody the animate, resilient, creative nature of the solid Earth. For us mortal Earthlings, they speak of the possibility of reinvention, the beauty of transformation – and the ephemerality of any particular version of the world. It’s getting late, almost time to leave. In the slanting rays of the late afternoon sunlight, a sense of serenity fills me as I watch students crawling across this easily missed outcrop of mica schist on their hands and knees, like supplicants approaching a holy shrine.

GOATReads: History

How a Sentimental Yiddish Song Became a Worldwide Hit—and a Nazi Target

Sophie Tucker was best known for her sexy songs—crowd-pleasers that showed off her curves, her sass, and her frank love of men and money. But when the singer took to the stage in 1925, something else was on her mind: her mother. That night, Tucker debuted a new song. Instead of dating or success, its subject was a successful person mourning her departed Jewish mother—an angelic “yiddishe momme” who had suffered in life, but was now dead. Performed in both English and Yiddish, the song was a hit. When Tucker finished, there wasn’t a dry eye in the house. And though she felt a deep personal connection to the song, she had no idea she had just performed an anthem. The 1928 record for Columbia sold over a million copies, and “My Yiddishe Momme” took the world by storm during the 1920s and 1930s, giving voice to many immigrants’ complicated feelings about assimilation and the sorrow of losing a mother. But the song was more than a tearjerker, or an American phenomenon. “My Yiddishe Momme” would go on to play an unexpected role in Nazi Germany and even the Holocaust. The song hit a nerve with Jewish and non-Jewish audiences alike, writes biographer Lauren Rebecca Sklaroff. “The singer was steadfast in her explanation that the song was meant for all listeners,” she notes. But it expressed a bittersweet emotion that would have rung true to audiences of immigrant and second-generation Jews who were far from home and whose mothers had sacrificed to make their lives better.
My yiddishe momme I need her more than ever now
My yiddishe momme I’d like to kiss that wrinkled brow
I long to hold her hands once more as in days gone by
And ask her to forgive me for things I did that made her cry
The song was written by lyricist Jack Yellen and composer Lew Pollack. Yellen is best known for writing upbeat hits like “Ain’t She Sweet” and “Happy Days Are Here Again.” He had something in common with Sophie Tucker: Both were Jews who emigrated to the United States as children in the late 19th century, and both were drawn to New York’s burgeoning Yiddish theater scene. At the time, Jewish immigrants were flooding to the United States, driven from their homes by pogroms, institutional discrimination, and anti-Semitism that made life in Eastern Europe intolerable. Between 1881 and 1924, about 2.5 million Jewish people came to the United States from Eastern Europe in search of opportunity and religious freedom. They brought the Yiddish language with them, and soon Yiddish papers, books and theatrical productions boomed in New York. The city was “the undisputed world capital of the Yiddish stage,” writes Yiddish scholar Edna Nahshon, and it attracted star talent like Tucker. She had begun her career wearing blackface and performing in minstrel shows, but eventually made her name by defying her producers, revealing her Jewish identity and refusing to perform her act in blackface. By the time she met Yellen, Tucker was a bona fide star, renowned for an act that included references to her plus-sized figure and a comic, partially spoken singing style. “My Yiddishe Momme” was a departure—a sorrowful song that acknowledged Tucker’s Jewish roots. When she debuted the song, her own mother was ailing. Her mother died soon after it became a bestseller. In 1930, the song inspired a film, Mayne Yiddishe Mame, the first Yiddish musical on film and one of the first times Yiddish was used in a talking picture. In 1931 Tucker took her show to Europe. But not everyone loved the song.
During a performance in France, she sang the song to a mixed group of Jewish and gentile theatergoers. As she sang, anti-Semitic tensions in the crowd boiled over as gentiles booed and Jews shouted at them. The shouting match “threatened to turn into a riot,” writes biographer Armond Fields, and Tucker quickly switched songs. It was a taste of things to come. When Hitler came to power in 1933, “My Yiddishe Momme” was one of the songs banned and destroyed by the Nazis. “I was hopping mad,” wrote Tucker blithely. “I sat right down and wrote a letter to Herr Hitler which was a masterpiece. To date, I have never had an answer.” Later, the song she popularized was sung in concentration camps by Jewish victims of the Holocaust. After the war, reports The New York Times, Yiddish song archivist Chana Mlotek heard from a former concentration camp inmate who remembered a German guard being so affected by the song that he arranged for the Jewish prisoners to be given more to eat. The story was corroborated by others who witnessed the incident. During World War II, Tucker performed for American troops. Meanwhile, her plaintive song became part of a soldier’s tragic story at the war’s end. After the war, reports the BBC World Service, Tucker received a letter from Robert Knowles, an Army soldier who had heard a Jewish comrade talk about his longing to hear Tucker’s song played on the streets of Berlin. “We did reach Berlin, four days after the war was over,” he told Tucker. By then his friend, Al, was dead. So Knowles and his fellow soldiers devised a tribute: They rigged up a record player on a truck and drove around the city playing “My Yiddishe Momme” at full volume. “Might I say that you gave a wonderful performance,” Knowles wrote. “You sang…for over three hours, and didn’t even get hoarse. I was proud of you that day, and I think that Al was too, for I am sure that he knew about it….The record was old and believe me very scratched, but you were in voice my friend, you were in voice.” Today, “My Yiddishe Momme” is seen by scholars as an expression of the guilt and nostalgia of Jews like Tucker and Yellen who felt the pressures of assimilation and accomplishment that came with the Jewish immigrant experience. For Jewish immigrants to the United States and Holocaust survivors alike, the song spoke to the importance of family and the resilience and sacrifice of Jewish women:
How few were her pleasures, she never cared for fashion’s styles
Her jewels and treasures she found them in her baby’s smiles
Oh I know that I owe what I am today
To that dear little lady so old and gray
To that wonderful yiddishe momme of mine
The song’s popularity survived into Tucker’s old age, but Yiddish vaudeville and theater didn’t. Over the years, the use of Yiddish declined among American Jews, and Yiddish theater slowly gave way to Broadway. But Tucker remained proudly, openly Jewish and devoted time, energy and money to helping other Jewish performers and raising funds for Jews displaced by the Holocaust. She would perform the mournful “mama” song for the rest of her life. It was later recorded by Billie Holiday, Tom Jones, and Ray Charles. Even today, the nostalgic lyrics of “My Yiddishe Momme” evoke sorrow and love in listeners—but its most enduring legacy may be the one it left to people who drew on the song for hope and comfort during the darkest hours of the Holocaust.