
GOATReads: Sociology

Safety is fatal

Humans need closeness and belonging, but any society that closes its gates is doomed to atrophy. How do we stay open?

Many of us will recall Petri dishes from our first biology class – those shallow glass vessels containing a nutrient gel into which a microbe sample is injected. In this sea of nutrients, the cells grow and multiply, allowing the colony to flourish, its cells dividing again and again. But just as interesting is how these cells die. Cell death in a colony occurs in two ways, essentially. One is through an active process of programmed elimination; in this so-called ‘apoptotic’ death, cells die across the colony, ‘sacrificing’ themselves in an apparent attempt to keep the colony going. Though the mechanisms underlying apoptotic death are not well understood, it’s clear that some cells benefit from the local nutrient deposits of dying cells in their midst, while others seek nutrition at the colony’s edges. The other kind of colony cell death is the result of nutrient depletion – a death induced by the impact of decreased resources on the structure of the waning colony.

Both kinds of cell death have social parallels in the human world, but the second type is less often studied, because any colony’s focus is on sustainable development; and because a colony is disarmed in a crisis by suddenly having to focus on hoarding resources. At such times, the cells in a colony huddle together at the centre to preserve energy (they even develop protective spores to conserve heat). While individual cells at the centre slow down, become less mobile and eventually die – not from any outside threat, but from their own dynamic decline – life at the edges of such colonies remains, by contrast, dynamic. Are such peripheral cells seeking nourishment, or perhaps, in desperation, an alternative means to live?

But how far can we really push this metaphor: are human societies the same? As they age under confinement, do they become less resilient? 
Do they slow down as resources dwindle, and develop their own kinds of protective ‘spores’? And do these patterns of dying occur because we’ve built our social networks – like cells growing together with sufficient nutrients – on the naive notion that resources are guaranteed and infinite? Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they scale up their ‘inclusion criteria’. They value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice. Viewed this way, the comparison seems compelling. In crisis, the colony introverts; collapsing inwards as inequalities escalate and there’s not enough to go around. In a crisis, as we’ve seen during the COVID-19 pandemic, people define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’. Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists. In recent years, there’s been growing attention to what drives group size – and what the implications are for how we build alliances, how we see ourselves and others, and who ‘belongs’ and who doesn’t. Of course, with the advent of social media, our understanding of what a group is has fundamentally changed. 
The British anthropologist Robin Dunbar popularised the question of group size in his book How Many Friends Does One Person Need? (2010). In that study, he took on the challenge of relating the question of group size to our understanding of social relationships. His interest was based on his early studies of group behaviour in nonhuman primates, and his comparison of group sizes among tribal clans. Dunbar realised that, in groups of more than 150 people, clans tend to split. Averaging sizes of some 20 clan groups, he arrived at 153 members as their generalised limit.

However, as we all know, ‘sympathy groups’ (those built on meaningful relationships and emotional connections) are much smaller. Studies of grieving, for example, show that our number of deep relationships (as measured by extended grieving following the death of a sympathy group member) reaches its upward limit at around 15 people, though others see that number as even smaller at 10, while others, still, focus on close support groups that average around five people.

For Dunbar, 150 is the optimal size of a personal network (even if Facebook thinks we have more like 500 ‘friends’), while management specialists think that this number represents the higher limits of cooperation. In tribal contexts, where agrarian or hunting skills might be distributed across a small population, the limiting number is taken to indicate the point after which hierarchy and specialisation emerge. Indeed, military units, small egalitarian companies and innovative think-tanks seem to top out somewhere between 150 and 200 people, depending on the strength of shared conventional understandings. Though it’s tempting to think that 150 represents both the limits of what our brains can accommodate in assuring common purpose, and the place where complexity emerges, the truth is different; for the actual size of a group successfully working together is, it turns out, less important than our being aware of what those around us are doing. 
In other words, 150 might be an artefact of social agreement and trust, rather than a biologically determined structural management goal, as Dunbar and so many others think. We know this because it’s the limit after which hierarchy develops in already well-ordered contexts. But we also know this because of the way that group size shrinks radically in the absence of social trust. When people aren’t confident about what proximate others are mutually engaged in, the relevant question quickly turns from numbers of people in a functioning network to numbers of potential relationships in a group. So, while 153 people might constitute a maximum ideal clan size, based on brain capacity, 153 relationships exist in a much smaller group – in fact, 153 relationships exist exactly among only 18 people.

Dunbar’s number should actually be 18, since, under stress, the quality of your relationships matters much more than the number of people in your network. The real question is not how many friends a person can have, but how many people with unknown ideas can be put together and manage themselves in creating a common purpose, bolstered by social rules or cultures of practice (such as the need to live or work together). Once considered this way, anyone can understand why certain small elite groups devoted to creative thinking are sized so similarly. Take small North American colleges. Increasingly, they vie with big-name universities such as Harvard and Stanford not only because they’re considered safer environments by worried parents, but because their smaller size facilitates growing trust among strangers, making for better educational experiences. Their smaller size matters. 
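The arithmetic behind that claim is simple combinatorics: a group of n people contains n(n-1)/2 distinct pairwise relationships, and 18 × 17 / 2 = 153, the same figure Dunbar derived for maximum clan membership. A minimal sketch in Python:

```python
from math import comb

def pairwise_relationships(n: int) -> int:
    """Number of distinct two-person relationships in a group of n people: n*(n-1)/2."""
    return comb(n, 2)

print(pairwise_relationships(18))   # 153 relationships among just 18 people
print(pairwise_relationships(150))  # 11175 potential pairings in a 150-person network
```

The contrast is the essay’s point: a network of 150 people implies more than 11,000 potential pairings, while 153 pairings, the clan-sized limit, fit inside a group of only 18.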
Plus, it’s no accident that the best of these colleges on average have about 150 teaching staff (Dunbar’s number) and that (as any teacher will know) a seminar in which you expect everyone to talk tops out at around 18 people. But what do we learn from these facts? Well, we can learn quite a bit. While charismatic speakers can wow a crowd, even the most gifted seminar leader will tell you that his or her ability to involve everyone starts to come undone as you approach 20 people. And if any of those people require special attention (or can’t tolerate ideological uncertainty) that number will quickly shrink. In the end, therefore, what matters much more than group size is social integration and social trust. As for Facebook’s or Dunbar’s question of how many ‘friends’ we can manage, the real question ought to be: how healthy is the Petri dish? To determine this, we need to assess not how strong are the dish’s bastions (an indicator of what it fears) but its ability, as with the small North American college, to engage productively and creatively in extroverted risk. And that’s a question that some other cultures have embraced much better than even North American colleges. On the Indonesian island of Bali, a village isn’t a community unless it has three temples: one for the dead ancestors and things past (pura dalem); a community temple that manages social life (pura desa); and a temple of origin (pura puseh). This last temple is what literally ties an individual self to a particular place. For the word puseh means ‘navel’. To this last temple every Balinese is connected by a spiritual umbilicus, and every 210 days (that is, every Balinese year) a person thus tied is obliged to return physically to honour that connectedness, becoming again a metaphorical stem cell: returning to their place of origin, examining their patterns of growth, and using their ‘stem’ in the interests of restructuring a healthier future. 
The stem cell, of course, is the recursive place where embryologists gather cells to regrow us more healthily; and, in Bali, extroversion is health-enhancing only once we bring back what we learn to where we began. Neglecting this originary connection can cause grave harm, and being far removed, or abroad for an extended period, risks snapping that cord if stretched too far, severing the very lifeline to one’s own past, present and future. But why stretch your umbilicus at all if potential outcomes might be dire? Because boundary exploration helps us define who we are; because the unfamiliar makes us conscious of what’s central; because we need to approach things that are unusual if we’re to diversify and grow. It’s the idea behind the avant-garde (literally, the advance guard) – the original French term referred to a small group of soldiers dispatched to explore the terrain ahead so as to test the enemy. You could stay put and remain ignorant, or go too far and get killed. Alternatively, you might go just far enough to learn something and come back to describe what you’d witnessed. It’s a simple idea, part of every vision quest, and filled with deep uncertainty. Indeed, the very uncertainty of exploration is critical to adaptation and growth. Our shared values (the ‘cultures’ we think we know at the centre of the Petri dish) are always explicitly defined at the peripheries, where we become more aware of our assumptions. And if there’s no wall or Petri dish to contain us, we need to have that umbilicus: because we need a device to measure how far is too far. This being the case, it follows that curiosity is critical to rethinking what we take for granted. It can make us better informed, but it can also get us into trouble. When will the umbilicus snap? How far is too far? These are good questions that once again might be illuminated by a biological example. The human immune system is the best one I know. 
For a long time, science told us that immunity was about defending ourselves from foreign invaders. This model explains the way we resist becoming host to lots of foreign things that could destroy us – it’s how the body resists becoming a toxic dump site. It also animates the way we teach schoolchildren about washing hands and, today, donning masks and remaining socially distant.

Setting aside its inherent xenophobia (keep out all things foreign), the defence model works well enough. But there’s a big problem with this simple idea: we need knowledge of the foreign landscape and its inhabitants in order to adapt. Indeed, we build immunity on the back of dendritic (presentation) cells that, like the military advance guard, bring back to our bodies specific information that we assess and respond to. While it’s true that, in this sense, we’re reacting ‘defensively’ when we adapt, that’s pretty much where the utility of the military metaphor ends – and where modern immunity begins to challenge what immunologists have defined for decades as the ‘recognition and elimination of nonself’.

The metaphor fails because viruses are not living invaders. They are just information that can sit around like books in our genetic library until someone reads them, revising what they mean through some editorial updating, and then bringing the information they offer to life once again, in a new form. Moreover, like books in a lending library, some viruses remain unread, while others are widely used. Some are dusty, some dog-eared. That’s because viruses proliferate only when people congregate in reading groups and animate them; where what those groups attend to is socially, not biologically, driven. Like those books, viruses are just bits of data that our bodies interpret and share with others, for better or worse. 
This is a process that happens every day, and mostly for the better, especially when viral intelligence helps us to adapt, and prevents us (like isolated tribes) from dying of the common cold every time cruise ships or truckers from abroad show up at our ferries and ports.

But there’s another reason that invasive images fail to explain the science. In 1994, the immunologist Polly Matzinger introduced an immune system model in which our antibodies don’t respond solely as a matter of defence. They respond, in her view, because antigen-presenting (dendritic) cells stimulate immunologic responses. Although the immune system remains defensive in this view, Matzinger’s argument shifted the debate ever so slightly from levels of self-preservation to information-presentation – from excluding outsiders to understanding them.

The idea was radical in immunologic science, but mundane in anthropology. Countless anthropological arguments saying much the same thing about self and awareness of ‘the other’ had been around for more than a century (and obvious to other cultures for millennia), but the assault on self-preservation through extroverted risk finally entered bench science with Matzinger, appearing not only as ‘new’, but in a form familiar enough to bench scientists to sound plausible.

Now, if belatedly, immunology was poised to question both Darwinian preservation and selfishness in one go, as well as its own otherwise unexamined assumptions about the social and biological exclusion of ‘nonself’. Matzinger’s idea got traction, its shift from defence to curiosity calling attention to the immune system’s role in assessing the unknown (as opposed to shunning the outside). Still, the argument would in any case be revised by three key realities. 
The first, which didn’t take root among theoretical immunologists until regenerative medicine emerged at the end of the 1990s, is that viruses are less invaders than informants. I’d picked up this idea from the Balinese whom I worked with during the AIDS crisis in the 1980s. But it wasn’t limited to them. Other, less ‘Cartesian’ Indigenous groups, such as the Navajo, share this understanding. The second truth, which came from the same cross-cultural experience, was that immunology was stuck in self-interest: it couldn’t fathom why a self would reach out in an extroverted and potentially dangerous manner instead of only selfishly defending its identity. Scientists were slowly awakening to a fact well known in many non-Darwinian settings: namely, that externality (extroversion) matters. So does reciprocity – as anthropologists well know. External information has to resonate with ‘self’ – in this case, with cells that your body already makes – in order to bind, transcribe and replicate. That’s the key function of our immune cells, which are made mostly in the thymus (T cells) and bone marrow (B cells). Our bodies make millions of novel cells in these mutation factories, so many in fact that we can’t even count them. Like experimental radio beams sent into outer space, these cells send out signals, functioning as much as search engines as systems of defence. The point here is that thinking of the immune system only as a defensive fortress-builder seriously misses what it’s actually doing. Because the immune system is also, and quite literally, your biological intelligence. It needs the ‘infection’ of foreign bodies to help you develop and survive. This same need also explains how vaccines protect us from biological meltdown. Extroversion is therefore not only needed as a defence strategy, as Matzinger would have it, but as a means of engaging with and also creating environmental adaptations, even if these encounters prove life-threatening for some. 
We see this need manifest itself graphically in the present COVID-19 crisis – less by what is happening scientifically, than by what is happening socially. A recent report on wellbeing and mental health by the Brookings Institution attempts to deconstruct the apparent paradox of reported feelings of hope among otherwise disadvantaged and openly disenfranchised populations in the United States during the pandemic. ‘Predominantly Black counties have COVID-19 infection rates that are nearly three times higher than that of predominantly white counties,’ the report says, ‘and are 3.5 times more likely to die from the disease compared to white populations.’ Yet those same communities also express much higher levels of optimism and hope.

The authors list various potential explanations for these higher rates of infection and death: ‘overrepresentation in “essential” jobs in the health sector and in transportation sectors where social distancing is impossible’; ‘underrepresentation in access to good health care, and their higher probability of being poor’; ‘longer-term systemic barriers in housing, opportunity, and other realms’; and being ‘more likely to have pre-existing health conditions [risk factors] such as asthma, diabetes, and cardiovascular diseases’. Given such disadvantage, and the inability to practise social distancing, the authors understandably presume that these socially disadvantaged groups should ‘demonstrate the highest losses in terms of mental health and other dimensions of wellbeing’. However, what they discovered is the exact opposite. Not only do African Americans remain the most optimistic of all the cohorts studied, when data is controlled for race and income, they also report ‘better mental health than whites, with the most significant differences between low-income Blacks and whites’. 
Indeed, low-income African Americans are 50 per cent less likely to report experiencing stress than low-income whites, and (along with Hispanics) are far less likely to involve themselves in deaths born of despair than whites. There are, of course, many complex reasons involved, including such things as community resilience and extended family ties, a belief in the merits of higher education, and a history of overcoming social inequality – some of which (like the merits of education) have declined among low-income whites. According to the authors of the Brookings Institution report, ‘the same traits that drive minority resilience in general are also protective of wellbeing and mental health in the context of the pandemic’.

Now, these factors fit well with the literature on so-called ‘post-traumatic growth’ (where overcoming threatening hurdles can be strengthening). They also conform with what has been written about ‘resilient kids’ – those children who make good on challenging backgrounds to become considerate and sometimes successful human beings. Such findings, though, can be dangerous if the only take-home message is that adversity produces resilience. Consider Herbert Spencer, the 19th-century father of Social Darwinism, who believed that stress was strengthening, and that charity only delayed what biology, in eliminating the weak, would take care of on its own. For Spencer, stress defined resilience.

And that’s the problem. Because the simple act of translating a biological story into a social one exposes a critical fallacy in the biology itself – this being that our otherwise inert genes possess the animated capacity for ‘selfishness’, even though they’re just bits of inert information to which our cells clearly bring life. 
Here, the supposedly scientific argument about determinism emerges as animated fantasy – a tendentious doctrine bordering on religious fundamentalism; or a moral lesson, as E O Wilson thought of sociobiology, in which stress emerges as morally and allegorically conditional. The only problem is, well, that’s just not what’s happening. Stress, to be clear, is neither good nor bad. It is amoral – or rather, its moral content is something we make together – socially, not biologically. For social engagement is itself a form of extroversion – an act of accommodation, a belief in the value of difference – in short, an anti-fundamentalist, anti-determinist view of the merits of navigating uncertainty together.

But resilience can look Darwinian – both because the disadvantaged African Americans who respond to Brookings Institution surveys have already transcended significant challenges; and because the uneven playing field on which they’ve lived has long since silenced, ruined or completely destroyed those lacking survival networks. Such a story might even be corroborated by the unhappy fact that African Americans (and men in particular) have shorter lifespans than their counterparts in other groups; and, when they do live longer, they’re more likely to spend time in prison if what stress teaches them is antisocial.

Research on minority resilience must, therefore, be read differently. For it is social exchange – our very sociality, the ‘moral economy’ – that produces hope. Here, everything depends on social context. So, those who engage and exchange socially (by choice with families, or by default or of necessity in healthcare and service jobs) are better equipped to deal with the uncertainty of COVID-19 – and remain hopeful. It’s the engagement part – by choice or necessity – that nourishes hope. Every time we look one another in the eye and nod affirmatively in a social setting, we create an informal contract with another person. 
Dozens, sometimes hundreds, of times a day, we affirm our trust in others by this simple act, masked or not. We do this as an act of extroversion, hoping that we can survive and grow through creative engagement with what we learn on the edges of our community, and, if not, that our resilience can be nourished by those with whom we share common purpose. Black people in America might be more than three times as likely as whites to die in the pandemic, but they’re also less socially isolated via their higher representation in public-facing jobs in which they have to engage with others. Like the military advance guard, or those cells at the edge of the Petri dish colony, they’re more likely to learn more from extroverted risk, and to adjust their expectations accordingly, emerging as more resilient in themselves and less vulnerable to mistrusting others. That’s not only why deaths born of despair are less common among them, but why isolation itself is a major driver of COVID-19 fatigue for all of us. It’s the engagement that matters.

The so-called ‘healthy migrant effect’ offers a clear example. Migrant struggles are well documented, but migrants who enter new communities often arrive in health as good as, or better than, that of native populations. Thus, second-generation Asian-American migrants are more likely to excel in secondary school, have much higher test scores, attend elite colleges and receive high-income professional degrees (eg, business, medicine, etc). The point is that it’s not only the extroverted risk of migrating that matters: it’s whether that risk results in a sense of meaningful exchange within a social context. It’s exchange itself, it turns out, that’s important. What’s more, the more moral its content, the better the odds that such exchange will enhance resilience. Most of the time, risks don’t work out as expected. 
And when they don’t work out, we all need a parent’s couch to sleep on and a shared meal to increase our sense of belonging and hope. It’s what the French sociologist Marcel Mauss observed almost a century ago about the value of reciprocity in his essay The Gift (1925): that the giver gives a part of him or herself, and that the thing given implies a return. Which is to say that it’s the exchange relationship that makes an economy ‘moral’ in the first place. By contrast, being alone undermines wellbeing. We know this from studying the impact of social isolation on mortality and morbidity. There’s lots of evidence here, and not just from studies of suicide: experiencing social isolation is a key reason why children who are wards of state, for example, often elect to return to families that are dangerous for them. In fact, being socially engaged even trumps being equal to others when it comes to what we all need. Again, evidence falls readily to hand. Some recent work on isolation and healthcare in China, carried out by members of the Cities Changing Diabetes global academic network that I lead, shows just how much of a risk factor social isolation is. Asked if equality of access to healthcare contributed directly to an inability to manage disease, about one-third of the several hundred people we interviewed said ‘yes, equality matters’. Asked how much the absence of family networks (a proxy for social isolation) impacted illness experience, and the percentage who said it did rose to almost everyone (93 per cent). And that’s in a country known to provide next-to-no care, let alone equal care, for economic migrants who must go home to be treated. This finding is startling, because equality is the gold standard for engagement in any democracy. Yet even it fades in importance when the moral economy is measured. For hope to proliferate, we need much more than endurance in the heroic, Darwinian sense The same holds true of refugees from violence. 
In another project (one in which I’ve been personally involved), funded by the University of Applied Sciences in Bochum, Germany, we systematically documented the health vulnerabilities of recent migrants. Asked whether they were receiving good healthcare, Syrian refugees resettled in communities often answered that they were receiving excellent care, even though German-born citizens publicly stated that those migrants were getting less. That’s not just because welfare in Germany looks pretty good when compared with Aleppo. It’s because extroverted hope, when paired with the altruism it generates socially, mediates a person’s ability to believe in the future – even if that hoped-for future is still somewhere far in the distance. There’s an important conclusion here: equality is only a first step towards alleviating human suffering and promoting feeling well within a moral economy. The bigger part concerns how people learn to hope about more than getting through the day. To put it another way, being hopeful requires a belief in the future, a long-term view. But being hopeful also requires more than that. It requires a sense of deep time and an enduring willingness – a desire – to engage. For hope to proliferate, we need much more than endurance in the heroic, Darwinian sense. We need a willingness to accept the natural place of everyday uncertainty, and we need diversity – even redundancy – to make that possible. The idea isn’t hard to grasp. The American inventor Thomas Edison once said that, in order to create, inventors need ‘a good imagination and a pile of junk’. The implication is that the hope required to convert junk into something useful sustains your extended contemplation of a pile of rubbish (what looks irrelevant now) over the deep time required to reshape it. 
But there’s another lesson: if you eliminate (recycle) what in the moment seems redundant or useless, without giving it a fair chance at invention, you also eliminate the possibility of making something new. Growth depends on merging two unlike things in the interest of making something greater. Redundancy and diversity form the basis of every moral economy, which is why neoliberal economies – those that take what look like redundancies and eliminate them in the interest of ‘efficiency’ – fail miserably in assisting population wellbeing. I have yet to see, for example, how profit manages itself in places where state welfare is almost entirely absent (eg, Nigeria). Neoliberalism succeeds only when it emerges within otherwise generous societies that have welfare stockpiles that can be selfishly mined. On that point, Ayn-Rand-style economics fails, and will forever fail, by favouring self-interest and efficiency over diversity, generosity and altruism. Observe what short-term self-interest has done to challenged economies, and a picture of what my fellow anthropologist Jonathan Benthall in 1991 called ‘market fundamentalism’ is easily painted. The social parallels here almost need no stating: what seems irrelevant to any one of us today, including the peculiar views of others, might in the end provide the very thing necessary to make us resilient to a future challenge – just as hope in the future mediates the uncertainties of COVID-19 through social engagement. Source of the article

GOATReads: Psychology

4 Ways You Self-Sabotage Your Joy In Daily Life, By A Psychologist

Most of us, at some point in our lives, have stood in the way of our own growth. We make progress on a project, start to feel hopeful about a relationship or finally get on track with a goal, and then we do something that undermines it. We fall into a procrastination spiral, pick a fight or simply quit; in doing so, we talk ourselves out of something that could potentially bring us happiness. There’s a name for this kind of behavior: self-sabotage. It looks like standing in your own way, but beneath the surface, there are deep cognitive and emotional dynamics at work. Here are four well-studied reasons why people sabotage good things, based on research in psychology.

1. You Self-Sabotage By Avoiding Blame

One of the most consistently researched patterns in self-sabotage comes from what psychologists call self-handicapping. This is a behavior in which people create obstacles to their own success so that if they fail, they can blame external factors instead of internal ability. A prime example comes from classic research where researchers observed students who procrastinated studying for an important test. The ones who failed mostly attributed it to lack of preparation rather than lack of ability. For the ones who succeeded, it felt like a greater personal triumph because they succeeded despite the handicap. Self-handicapping is not simply laziness or whimsy. Rather, it is a strategy people use to protect their self-worth in situations where they might perform “poorly” or be perceived as inadequate. It goes without saying that this strategy is counterproductive in the long run. The abovementioned study notes that habitual self-handicapping is linked with lower achievement, reduced effort and increasing avoidance over time. People end up sabotaging their outcomes to protect their ego in the moment.

2. You Self-Sabotage Because Of A Fear Of Failure Or Success (Or Both)

People often think of the fear of failure as the main emotional driver behind self-sabotage. But research points to the fear of success as an equal, yet less talked-about engine of the phenomenon. Both fears can push people to undermine opportunities that are actually aligned with their long-term goals. Fear of failure motivates avoidance as it can protect people from harsh self-judgment if things go poorly. People who worry that failure will confirm their negative self-beliefs are more likely to adopt defensive avoidance tactics, like procrastination or quitting early. Fear of success, though less widely discussed, operates in a similar fashion. What motivates this fear is the anxiety that comes with the consequences of success. These could be higher expectations (from yourself or from others), increased visibility or a sense that you will no longer fit into familiar social roles. Psychologists like Abraham Maslow called this the Jonah complex: the fear of one’s own potential when success creates new demands and threats to identity. So, self-sabotaging success can be a way to stay within a comfort zone where expectations are familiar, even if that zone is unsatisfying.

3. You Self-Sabotage Because Of Negative Self-Beliefs

Self-sabotage is tightly intertwined with how people view themselves. When someone doubts their worth, their ability or their right to be happy, they may unconsciously act in ways that confirm those negative self-views. Psychological theories like self-discrepancy theory help explain this. It proposes that people experience emotional discomfort when their actual self does not match their ideal self. This mismatch can lead to negative emotions such as shame, anxiety or depression. To reduce that discomfort, some may unconsciously avoid situations where those discrepancies could be highlighted, even if those situations are positive in nature. For example, someone who believes deep down that they “don’t deserve” success may avoid opportunities where success is possible, because accepting that success would trigger uncomfortable self-judgments. Their behavior is not illogical when viewed through the lens of protecting a fragile identity, even though the outcome – self-sabotage – is counterproductive.

4. You Self-Sabotage Because You’re Coping With Stress And Anxiety

Self-sabotage often emerges in moments of high stress or emotional threat. When people feel overwhelmed, anxious or stretched thin, their nervous systems shift into protective modes. Instead of moving forward, they retreat, avoid or defensively withdraw. Threat or uncertainty can reduce cognitive regulation and increase avoidance behaviors. In situations of perceived threat, even if the threat is potential success or evaluation, people can default to behaviors that feel safer, even if they undermine long-term goals. In practical terms, this means someone under chronic stress may procrastinate, ruminate or choose short-term relief over long-term gain, effectively sabotaging progress to manage anxiety in the moment.

What All These Patterns Of Self-Sabotage Have In Common

These psychological processes may feel very different on the surface, manifesting as procrastination, quitting, relationship withdrawal, distraction or negative self-talk, but they share common underlying themes:

- A desire to protect self-esteem by avoiding situations where perceived personal flaws might be exposed
- A fear of consequences, either of failure or of success, that feels threatening to identity or emotional stability
- Internal negative beliefs about self-worth and competence that are at odds with conscious goals
- Short-term emotional regulation strategies that favor comfort over long-term achievement

What looks like “standing in your own way” is often a defensive strategy your mind developed to manage risk, whether emotional, social or identity-related. That explains why self-sabotage can feel automatic and unconscious, rather than deliberate. Understanding why self-sabotage occurs is the first step toward changing it. But real progress comes from shifting underlying beliefs and responses, not just behaviors:

- Reframe failure as feedback. When failure is seen as a source of information rather than a judgment on worth, the fear that drives sabotage weakens.
- Build self-compassion. Self-compassion has been linked with lower tendencies to self-handicap. Treating yourself kindly in the face of setbacks makes it easier to stay engaged rather than withdraw.
- Challenge self-worth beliefs. Work on internal narratives that tell you you’re unworthy of success. Aligning self-concept with realistic goals reduces conflict and avoidance.
- Develop adaptive stress responses. Reducing chronic stress and improving emotional regulation, through mindfulness, social support or therapy, can help prevent threat-driven avoidance.

These strategies don’t magically eliminate self-sabotage, but they weaken its psychological roots. The goal isn’t to eradicate fear or doubt, but to stop letting them dictate your actions.

Earthly delights

Noticing first one then many parrots, peacocks, owls and more birds in Old Master paintings taught me to truly see the world. I am an accidental birder. While I never used to pay much attention to the birds outside my window, even being a bit afraid of them when I was a child, I have always loved making lists. Ranking operas and opera houses, categorising favourite books and beautiful libraries – not to mention decades of creating ‘Top Ten’ lists of hikes, drives, national parks, hotels, and bottles of wine. My birding hobby grew out of this predilection. Specifically, out of my penchant for writing down the birds I found in the paintings by the Old Masters. Hieronymus Bosch, for starters. Bringing my opera glasses to the Museo del Prado in Madrid, I delighted in sitting across the room and counting the birds in Bosch’s painting, today called Garden of Earthly Delights (1490-1510). The triptych, which visualises the fate of humanity in three large panels, is exploding with birds. So far, my list of Bosch birds includes spiralling flocks of starlings amid posing peacocks and pheasants. Closer to the water are storks, egrets and two kinds of herons. A jackdaw and a jay can be identified near a giant ‘strawberry tree’, below which are two spoonbills. And lurking in the trees are three kinds of owls, serving as signs of heresy. In his book A Dark Premonition: Journeys to Hieronymus Bosch (2016), the Dutch poet and novelist Cees Nooteboom describes seeing Bosch’s work when he was a young man of 21 – and then seeing it again when he was 82. He asks of one picture: How has the painting changed? How has the viewer changed? Am I even the same man now? These are the questions I ask myself while standing in front of a certain picture by Raphael in the Uffizi. The first time I saw the Madonna del Cardellino (c1505-06) was more than 30 years ago. I was 19. My college boyfriend and I had stopped in Europe on the way back from two magical months in India.
It was my first time in Italy. And Florence was so damn pretty. I vividly recall what a warm day it was, and how overwhelmed I felt by the grand museum. Walking past picture after picture, I turned back to look for my boyfriend, who was trailing behind. And there he was, utterly gobsmacked in front of a painting. So I walked back to look at it too. It was a Madonna by Raphael. A beautiful blonde Madonna, in a rich red dress with her cloak of ultramarine draped over her shoulders, and seated with two babes at her feet. One was holding a goldfinch. Being young Americans, we couldn’t understand any of it. Why were there two baby boys? If the second was John the Baptist, where was the child’s mother? And were those violets and chamomile under their feet? Serious birders sometimes talk about their first bird memory. My own earliest bird-in-a-painting memory was that goldfinch in the painting by Raphael in the Uffizi. Its composition is much like Raphael’s Madonna del Prato (1506), in Vienna – but at the Uffizi, instead of a cross, the children play with a tiny bird. Thirty years later, standing in front of the same painting, I now know the bird symbolises the Christ Child and the Passion. In Catalonia in Spain, there is a wonderful legend that suggests that the jagged and holy mountains of Montserrat rose from the earth at the precise moment that Christ was crucified in Jerusalem – as if the earth itself rose in anger. There was a similar story from the Middle Ages about how the goldfinch received its red spot. Flying down over Christ on the Cross, the bird tried to help Him by picking out a thorn from the Crown – and in this way was forever after splashed with the drop of His blood. In an enchanted world, everything seems to be telling a story. Second marriages are notoriously difficult. My new husband had been wiped out financially and emotionally by his previous marriages (yes, there was more than one). 
By the time I met Chris, he was barely hanging on to the house, his kids showing varying degrees of alienation. It was impressive that he wanted to try again – and so soon? Not six months after our first date and whirlwind romance, we had done it! I sometimes think we were like survivors of a shipwreck; his life was a wreck, but mine was worse. Of course, we underwent couples therapy and laughed off the obligatory (but serious) warnings about our dim hopes of survival. We were just happy to have found each other; happy to be still breathing; for, as Voltaire said in 1761: ‘[E]verything is a shipwreck; save yourself who can! … Let us then cultivate our garden …’ My first marriage had been to a Japanese man. Having spent my adult life in his country, where I spoke, thought, and dreamt in Japanese, I hoped marrying an American would be easier. After all, we shared a language and a culture. But it wasn’t easier. Marriage is tough in any language. And so, I have tried much harder this time to cultivate shared values and interests – which is challenging when you are married to an astrophysicist! I do love watching Chris look at art. He becomes intensely attentive, as if every nerve-ending in his body is switched on. It’s not like he’s trying to figure out the nature of galaxy evolution or doing the complicated mathematics that he does when he’s working. He just stands there before the picture, fully present. Most of the time, I have a hard time understanding what he’s thinking about. I know he can build things that go into space. And that he teaches quantum mechanics at Caltech and can perform multivariable calculus. He can even make a cat die and not die at the same time. This is mainly lost on me, which is why I love looking at art together with him. It’s something we can share, something over which we can linger, in each other’s company. That was how my husband and I started going on what we call our ‘art pilgrimages’. 
From the very beginning of our marriage, we spent enormous amounts of time standing side by side silently looking at Old Masters. Sometimes we might talk a bit, hold hands, and exchange a knowing smile, but mainly we stood there silently soaking it all in. Shortly after getting married, I took Chris to the Getty Museum, in Los Angeles. I was excited to share my favourite picture in the collection, Vittore Carpaccio’s Hunting on the Lagoon (c1490-95). The museum acquired the painting in 1979, from the collection of the Metropolitan Opera basso Luben Vichey and his wife. Hunting on the Lagoon shimmers with atmospheric effects. It is painted in azurite, yellow ochre and lead white, with touches of costly ultramarine for the sky and mountains, and vermilion on the servant’s jacket. Hunting on the Lagoon depicts a group of aristocratic gentlemen hunting from a small boat on the water. ‘Hunting birds with a bow and arrow?’ Chris wondered. Looking carefully, you can see they are shooting clay balls at what appear to be grebes. I tell him that it was apparently the custom to hunt birds in this way so as not to damage their pelts. ‘But what about those dark birds with the serpentine necks sitting one to a boat?’ he asked. I watched his eyes move to the same birds posing on pylons in the water. ‘Unmistakably cormorants.’ And the theory is, I tell him, that the birds were used for hunting fish. In Japan, you can still see this traditional way of fishing, called ukai. I am always so excited to share something of my life in Japan with Chris, even though it was in the days before we met. I tell him how I watched this kind of fishing years ago. ‘It was at night by lamplight on boats that ply the Kiso River, in Aichi Prefecture.’ The birds, held by what seem to be spruce fibre leashes, were trained to dive for ayu sweetfish and deliver them back to the fishermen on the boats, I say, wishing I could show him.
‘Do you think the custom came to Europe from Japan?’ he wonders. I think it arrived from China, though that story might be made up. In 17th-century England, King James I was known to have kept a large – and very costly – stock of cormorants in London, which he took hunting. Looking at the painting, however, I thought the practice I’d seen in Japan had been altered almost beyond recognition. During the Renaissance, the lagoon in the painting must have been jam-packed with fish and mussels and clams and birds. A perfect place to spend an afternoon. But those men, with their colourful hose, with their bows and clay balls, are clearly no fishermen. It was then that Chris noticed the strange, oversized lilies protruding from the water in the foreground of the painting. It took him long enough to notice, I thought. Those flowers have driven art historians crazy for generations. ‘Don’t tell me,’ he said, ‘There must be another picture? One with a missing vase, right?’ Right he was! There is an even better-known painting by Vittore Carpaccio, Two Venetian Ladies (c1490-1510), hanging in the Museo Correr in Venice. We went to see it a few years later. And, sure enough, there is a pretty majolica vase sitting on the wall of the balcony, which seems ready and waiting for those lilies. The two works (painted on wooden panels) fit together, one on top of the other. Before this was figured out, art historians believed the two bored-looking ladies to be courtesans. One of the reasons for thinking this was the two doves sitting on the balustrade, which are ancient symbols of Venus and romantic love. But the ladies are also shown sitting next to a large peahen, symbols of marriage and fidelity. Looking bored, with their tall wooden clogs tossed to the side, they were declared by art historians to be courtesans. Definitely courtesans. 
Like pieces of a puzzle, the matched set of paintings has now convinced art historians that these ‘ladies’ are in fact wives of the ‘fishermen’, who are themselves no longer believed to be fishermen but, rather, aristocratic Venetians out hunting waterfowl for sport on the lagoon. A great painter of dogs, Carpaccio was even better at birds. Beyond his doves, grebes and cormorants, he is perhaps best known for his colourful red parrots. According to Jan Morris writing in 2014, the Victorian art critic John Ruskin was much taken with Carpaccio’s menagerie. At the Ashmolean Museum in Oxford, there is a small watercolour drawing that is a copy of Carpaccio’s red parrot, made by Ruskin in 1875. Calling it a scarlet parrot, Ruskin wondered if it wasn’t an unknown species, and so decided to draw a picture of it in order to ‘immortalise Carpaccio’s name and mine’. It might be classified as Epops carpaccii, he suggested – Carpaccio’s Hoopoe. Chris and I were delirious to have found each other. Grateful for this chance to have our spirits reborn, we celebrated by taking multiple honeymoons that first year. And without a doubt, the most romantic was the trip we took to Venice – on the hunt to find Carpaccio’s red parrot, which, happily, one can see in the place for which it was originally commissioned: in the Scuola di San Giorgio degli Schiavoni. Today, when introducing foreign visitors to Venice’s scuole, tour guides will sometimes compare the medieval confraternities to modern-day business associations that carry out philanthropic activities, like the Rotary Club. That is probably not far off the mark. Carpaccio’s great narrative cycles were created to adorn the walls of these scuole. The pictures were there not merely to decorate, but to tell stories relevant to the confraternity. Perhaps the best known of these are two of the paintings commissioned by the Scuola di San Giorgio degli Schiavoni.
The red parrot that Ruskin adored is still there in one of the paintings, the Baptism of the Selenites (1502). Chris and I barely made it in time before the small scuola closed for the day. It was hot and the air heavy in the dark interior. When the author Henry James visited the Schiavoni in 1882, he complained that ‘the pictures are out of sight and ill-lighted, the custodian is rapacious, the visitors are mutually intolerable …’ However, then he magnanimously added: ‘but the shabby little chapel is a palace of art.’ Eventually locating the parrot, we marvelled at how often such exotic birds can be counted in religious paintings from the Renaissance. We assumed they must have been prized like the tulips of Amsterdam during the Dutch Golden Age of painting, coveted and displayed for their rarity. I learned only later that it was also because they were a symbol of the Virgin birth. Art historians suggest that this is due to an ancient belief that conception occurred through the ear (and parrots can speak…?) Another, more interesting, explanation is found in the Latin writings of Macrobius, who said that when it was announced in Rome that Caesar’s adopted nephew Octavian was triumphant at the Battle of Actium in 31 BCE, at least one parrot congratulated him with: ‘Ave Caesar.’ This was seen as prefiguring the Annunciation and Ave Maria. In another painting in the scuola, Saint Jerome and the Lion (1509), Carpaccio has drawn what looked to us like an entire bestiary – including a beautiful peacock that seems to be trying to get as far away from the lion as it can. Peacocks always remind me of Flannery O’Connor, who lived on a farm in Georgia with ‘forty beaks to feed’. She loved her peacocks, calling them the ‘king of the birds’. No matter how her family complained, she remained firm in her devotion.
Recently re-reading her essays in the posthumous collection Mystery and Manners (1969), I learned that the Anglo tradition is very different from the Indian one, when it comes to peacocks. In India, they are viewed as symbols of love and beauty, while Europeans typically associate peacocks with vanity and pride. This notion stretches all the way back to Aristotle, who remarked that some animals are jealous and vain, like a peacock. That is why you find them aplenty in Bosch’s paintings. A warning against the pride of vanity. O’Connor knew that the peacock was a Christian symbol of resurrection and eternal life. Others concurred. The ancient Romans held that the flesh of the peacock stayed fresh forever. Augustine of Hippo tested this with peacock flesh in Carthage, noting that: ‘A year later, it was still the same, except that it was a little more shrivelled, and drier.’ Thus, the peacock came to populate Christian art from mosaics in the Basilica di San Marco to paintings by Fra Angelico in the Renaissance. Perhaps this is one of the reasons I came to love peacocks so much; for, after all, I was experiencing my own kind of resurrection of the spirit with Chris. The late German art historian Hans Belting wrote about the exotic creatures found in Bosch’s triptych. Belting’s interpretation is interesting, as he views the middle panel – the eponymous Garden of Earthly Delights – as being a version of utopia. By Bosch’s day, the New World had been ‘discovered’ by Europeans – and, indeed, the painting can be dated because of the New World pineapples seen in the central panel. When Christopher Columbus set sail to the Indies, he believed, like many of the theologians of his time, that an earthly paradise existed in the waters antipodal to Jerusalem, just as Dante Alighieri described. But what is Bosch trying to say? I don’t think anyone really understands.
What we do know is that the triptych was never installed in a church – but was instead shown along with exotic items in the Wunderkammer of his patrons. Albrecht Dürer, my beloved painter of owls and rhinos, visited Brussels three years after the completion of Bosch’s painting but said not one word about it in his copious journals. Was he disappointed? Scandalised? Belting thinks his silence speaks volumes, and he describes Dürer’s astonishment when visiting the castle and seeing the wild animals and all manner of exotic things from the Americas and beyond. There was a reason why the Europeans of the time called the Americas the New World, instead of just the ‘new continent’. For this was a revelation, not just of new land, but of sought-after minerals, like gold and silver. It was a new world of tastes. From potatoes to tomatoes and chocolate to corn, the dinner tables of Europe would be transformed in the wake of Columbus’s trip. There were animals never seen in Europe, like the turkey and the American bison. And hummingbirds. How wide-eyed those Europeans must have been. In 1923, Marcel Proust wrote that: ‘The only true voyage of discovery … would be not to visit strange lands but to possess other eyes.’ And this was how I felt coming back to California after two decades in Japan. It was also how I felt during the early days of the COVID-19 pandemic, when time took on a stretched-out quality. To feel oneself slowing down was also to discover new eyes – to begin to savour the seasons changing, the birdsong, or the peaceful sound of the rustling leaves in the palm trees. To listen to the loud rustle of the grapefruit tree just before a huge, round fruit falls smack onto the ground was like a revelation the first time I heard it. And how did I reach 50 years old and never once hear baby birds chirping to be fed – like crickets! The lockdowns became a time for me to see the world with new eyes. And it continues, wave after wave. 
It was during that time that our ‘birdwatching in oil paintings’ obsession, mine and Chris’s, was transformed into real-life birding. The pandemic, and lockdown, changed everything. When restrictions lifted, rather than taking off to museums in Europe, we travelled to Alaska, where we spent weeks traipsing across the tundra in Denali National Park. So often looking down at my feet, I’d marvel at the wondrous tangle of green and yellow lichen; of moss and red berries; and at a variety of dwarf willow and rhododendron, none more than an inch tall. It created a beautiful pattern, like a Persian carpet. Enchanted, I wanted to take off my shoes and feel the spongy earth between my toes. When was the last time I had walked anywhere barefoot? Even at the beach, I usually keep my shoes on. And not only that, but I had never in my life walked off-trail, much less traipsed across tundra. When I was young, I once camped along the Indus River, in India, but that was so long ago. How had I become so alienated from wild things? Life is, after all, constantly shuffling the deck, with each moment precious and unique. All those heightened moments we experienced in our favourite paintings are precisely what the great artists were celebrating. The perfect unfolding of now. And what was true in the paintings was also true out in the world. Birding alone and then later in groups, we have savoured those moments when a bird is spotted, and we all grow instantly quiet. Frantically training our binoculars on the object, it seems we are all frozen in a great hush. With laser focus, we attune ourselves to the bird, always a hair’s breadth from losing it, aware of the tiniest flitter, flutter and peep. It is enchantment. And through this, I have felt a little of how birds must have exerted power over the Renaissance imagination too.
I continue to marvel at these free creatures of the air, symbolising hope and rebirth, messengers from distant lands, inhabitants of a canvas of beauty and life in this great garden of earthly delights.

Quantum dialectics

When quantum mechanics posed a threat to the Marxist doctrine of materialism, communist physicists sought to reconcile the two. The quantum revolution in physics played out over a period of 22 years, from 1905 to 1927. When it was done, the new theory of quantum mechanics had completely undermined the basis for our understanding of the material world. The familiar and intuitively appealing description of an atom as a tiny solar system, with electrons orbiting the atomic nucleus, was no longer satisfactory. The electron had instead become a phantom. Physicists discovered that in one kind of experiment, electrons behave like regular particles – as small, concentrated bits of matter. In another kind of experiment, electrons behave like waves. No experiment can be devised to show both types of behaviour at the same time. Quantum mechanics is unable to tell us what an electron is. More unpalatable consequences ensued. The uncertainty principle placed fundamental limits on what we can hope to discover about the properties of quantum ‘wave-particles’. Quantum mechanics also broke the sacred link between cause and effect, wreaking havoc on determinism, reducing scientific prediction to a matter of probability – to a roll of the dice. We could no longer say: when we do this, that will definitely happen. We could say only: when we do this, that will happen with a certain probability. As the founders of the theory argued about what it meant, the views of the Danish physicist Niels Bohr began to dominate. He concluded that we have no choice but to describe our experiments and their results using seemingly contradictory, but nevertheless complementary, concepts of waves and particles borrowed from classical (pre-quantum) physics. This is Bohr’s principle of ‘complementarity’. He argued that there is no contradiction because, in the context of the quantum world, our use of these concepts is purely symbolic.
We reach for whichever description – waves or particles – best serves the situation at hand, and we should not take the theory too literally. It has no meaning beyond its ability to connect our experiences of the quantum world as they are projected to us by the classical instruments we use to study it. Bohr emphasised that complementarity did not deny the existence of an objective quantum reality lying beneath the phenomena. But it did deny that we can discover anything meaningful about this. Alas, despite his strenuous efforts to exercise care in his use of language, Bohr could be notoriously vague and more than occasionally incomprehensible. Pronouncements were delivered in tortured ‘Bohrish’. It is said of his last recorded lecture that it took a team of linguists a week to discover the language he was speaking. And physicists of Bohr’s school, most notably the German theorist Werner Heisenberg, were guilty of using language that, though less tortured, was frequently less cautious. It was all too easy to interpret some of Heisenberg’s pronouncements as a return to radical subjectivism, to the notion that our knowledge of the world is conjured only in the mind without reference to a real external world. It did not help that Bohr and physicists of Bohr’s school sought to shoehorn complementarity into other domains of enquiry, such as biology and psychology, and attempted to use it to resolve age-old conundrums concerning free will and the nature of life. Such efforts garnered little support from the wider scientific community and attracted plenty of opprobrium. Albert Einstein famously pushed back, declaring that, unlike quantum mechanics, God does not play dice. He argued that, while quantum mechanics was undoubtedly powerful, it was in some measure incomplete. In 1927, Bohr and Einstein commenced a lively debate. 
Einstein was joined in dissent by the Austrian physicist Erwin Schrödinger, who devised the conundrum of ‘Schrödinger’s cat’ to highlight the seemingly absurd implications of quantum mechanics. But although both Einstein and Schrödinger remained strident critics, they offered no counter-interpretation of their own. Despite their misgivings, there was simply no consensus on a viable alternative to complementarity. Complementarity also fell foul of the principal political ideologies that, in different ways, dominated human affairs from the early 1930s, through the Second World War, to the Cold War that followed. Both Bohr and Einstein were of Jewish descent and, to Nazi ideologues, complementarity and relativity theory were poisonous Jewish abstractions, at odds with the nationalistic programme of Deutsche Physik, or ‘Aryan physics’. But the proponents of Deutsche Physik failed to secure the backing of the Nazi leadership, and any threat to complementarity from Nazi ideology disappeared with the war’s ending. Much more enduring were the objections of Soviet communist philosophers who argued that complementarity was at odds with the official Marxist doctrine of ‘dialectical materialism’. Vladimir Lenin, who had led the Bolshevik Party in the October Revolution of 1917, was a dogmatic advocate of the materialist worldview expounded by the German philosophers Karl Marx and Friedrich Engels, authors of The Communist Manifesto, first published in 1848. The world according to Marxism consists of objectively existing matter in constant motion, bound by laws. Such laws govern different levels of existence that we attempt to describe through different scientific disciplines that are not necessarily reducible one to another. For example, sociology – regarded as an empirical science – is not reducible to physics and is therefore bound by its own laws of human social and economic behaviour. 
Marx and Engels observed that such behaviour breeds functional contradictions within an organised society. To survive, people submit to exploitative relationships with the means of economic production and those who own them. Distinct classes emerge: masters and their slaves, lords and their serfs, business owners (the bourgeoisie) and their low-wage workers (the proletariat). These functional contradictions are ultimately resolved through inevitable class struggle resulting in irreversible changes in social organisation and the means of production. The classical antiquity of Greece and Rome had given way to feudalism. Feudalism had given way to capitalism. And capitalism was destined to give way to socialism and communism, to the utopia of a classless society. But the necessary changes in social organisation would not happen by themselves. The path led first through socialism and the ‘dictatorship of the proletariat’, supported by an autocratic state that would eventually no longer be needed when the communist utopia was realised. For Lenin, the ends justified the means, which included the violent repression of bourgeois capitalist and counter-revolutionary forces. In Marxist philosophy, the method of studying and apprehending both social and physical phenomena is dialectical, and the interpretation of natural phenomena is firmly materialistic. It was not enough just to interpret the world, Marx claimed. Philosophers must also seek to change it, and this could not be done in a world built only from perceptions and ideas. Any philosophy that sought to disconnect us from material reality, by reducing the world to mere sensation and experience, posed a threat to Marxism.
In Materialism and Empirio-Criticism (1909), Lenin had berated the physicist Ernst Mach and his Russian followers, and the German philosopher Richard Avenarius, who had formulated the positivist doctrine of empirio-criticism. The philosophy of positivism was anathema, as it sought to reduce knowledge of the world to sensory experience. Lenin argued that such thinking led only to a subjective idealism, or even solipsism. To him, this was just so much ‘gibberish’. Complementarity looked just like the kind of positivist gibberish that Lenin had sought to annihilate. A reality accessible only in the form of quantum probabilities did not suit the needs of the official philosophy of Soviet communists. It appeared to undermine orthodox materialism. Nevertheless, an influential group of Soviet physicists, including Vladimir Fock, Lev Landau, Igor Tamm and Matvei Bronstein, promoted Bohr’s views and for a time represented the ‘Russian branch’ of Bohr’s school. This was not without some risk. Communist Party philosophers sought their dismissal, to no avail, largely because they could not agree on the issues among themselves. The situation in the Soviet Union changed dramatically a few years later. As his health declined, Lenin had tried to remove the Communist Party’s general secretary, Joseph Stalin, whom he deemed unfit for the role. But Stalin had been quietly consolidating his position and had placed loyalists in key administrative posts. After a brief power struggle following Lenin’s death in 1924, Stalin became supreme leader. In 1937-38, he tightened his grip by unleashing a reign of terror, known as the Great Purge, in which many of the old Bolsheviks who had fought alongside Lenin in 1917 were executed. Although the total death toll is difficult to determine, a figure of 1 million is not unreasonable. Physicists were not exempt. Bronstein was arrested, accused of terrorism offences, and executed in February 1938. 
Stalin put his own stamp on the political ideology of Soviet communists in his short text titled Dialectical and Historical Materialism (1938), a formulation of Marxist philosophy that would be adopted as the official Communist Party line. Those intellectuals who resisted the official doctrine now faced real risks of losing more than just their jobs. The distractions of the Second World War meant that little changed for physicists until Andrei Zhdanov, the Party’s philosopher and propagandist-in-chief, who was thought by many to be Stalin’s successor-in-waiting, specifically targeted the interpretation of quantum mechanics in a speech delivered in June 1947. ‘The Kantian vagaries of modern bourgeois atomic physicists,’ he proclaimed, ‘lead them to inferences about the electron’s possessing “free will”, to attempts to describe matter as only a certain conjunction of waves, and to other devilish tricks.’ This was the beginning, writes the historian Loren Graham, ‘of the most intense ideological campaign in the history of Soviet scholarship’. An outspoken commitment to complementarity became positively dangerous. Soviet physicists scrambled to defensible positions. Fock retreated from complementarity as an objective law of nature, and criticised Bohr for his vagueness. Others sought ways to ‘materialise’ quantum mechanics. Dmitry Blokhintsev, a student of Tamm’s, favoured a statistical interpretation based on the collective properties of an ‘ensemble’ of real particles. In such an interpretation we are obliged to deal with probabilities simply because we are ignorant of the properties and behaviours of the individual material particles that make up the ensemble. Einstein had used this conception in the opening salvo of his debate with Bohr in 1927. 
Yakov Terletsky, who, like Tamm, had studied under the Soviet physicist Leonid Mandelstam, favoured a ‘pilot-wave’ interpretation of the kind that had initially been promoted by the French physicist Louis de Broglie before it was shot down by Bohr’s school in 1927. In this interpretation, a real wave field guides real particles, and probabilities again arise because we are ignorant of the details. As the 1930s progressed towards world war, many Western intellectuals had embraced communism as the only perceived alternative to the looming threat of Nazism. Numbered among the small group of Jewish communist physicists gathered around J Robert Oppenheimer at the University of California, Berkeley, was David Bohm. As Oppenheimer began to recruit a team of theorists to work on the physics of the atomic bomb at the newly established Los Alamos National Laboratory in early 1943, Bohm was high on his list. But Bohm’s communist affiliations led the director of the Manhattan Project, Leslie Groves, to deny him the security clearance necessary to join the project. Bohm was left behind at Berkeley and joined with his fellow communist and close friend Joseph Weinberg in teaching the absent Oppenheimer’s course on quantum mechanics. His long discussions with Weinberg, who argued that complementarity was itself a form of dialectic and so not in conflict with Marxist philosophy, encouraged him to accept Bohr’s arguments, although he was not free of doubt. In his textbook Quantum Theory (1951), derived in part from his experiences teaching Oppenheimer’s course, Bohm broadly adhered to Bohr’s views. Bohm had by this time moved to Princeton University in New Jersey. Einstein, who in 1933 had fled from Nazi Germany to Princeton’s Institute for Advanced Study, asked to meet with him sometime in the spring of 1951. The meeting re-awakened the Marxist materialist in Bohm. As Einstein explained the basis for his own misgivings, Bohm’s doubts returned. 
‘This encounter with Einstein had a strong effect on the direction of my research,’ he later wrote, ‘because I then became seriously interested in whether a deterministic extension of the quantum theory could be found.’ Was there, after all, a more materialistic alternative to complementarity? ‘My discussions with Einstein … encouraged me to look again.’ Although there is no documented evidence to support it, Bohm later claimed he had also been influenced ‘probably by Blokhintsev or some other Russian theorist like Terletsky’. But Bohm’s relationship with Weinberg had by now returned to haunt him. In March 1943, Weinberg had been caught betraying atomic secrets by an illegal FBI bug planted in the home of Steve Nelson, a key figure in the Communist Party apparatus in the San Francisco Bay Area. This evidence was inadmissible in court. In an attempt to expose Weinberg’s betrayal, in May 1949 Bohm had been called to testify to the House Un-American Activities Committee, set up by the House of Representatives to investigate communist subversion in the US. He pleaded the Fifth Amendment, a standard means of avoiding self-incrimination, which only raised more suspicion. Bohm was arrested, then brought to trial in May 1951. He was acquitted (as was Weinberg a couple of years later). Now caught in the anti-communist hysteria whipped up by Joseph McCarthy, Bohm lost his position at Princeton. Only Einstein tried to help, offering to bring him to the Institute. But its new director – Oppenheimer, now lauded as the ‘father of the atomic bomb’ and increasingly haunted by the FBI’s interest in his own Leftist past – vetoed Bohm’s appointment. Bohm left the US for exile in Brazil, from where he published two papers setting out what was, in effect, a re-discovery of de Broglie’s pilot-wave theory. The theory sought to restore causality and determinism to the quantum world and was firmly materialist. 
Oppenheimer rejected Bohm’s efforts as ‘juvenile deviationism’. Einstein, who had once toyed with a similar approach and might have been expected to be sympathetic, declared it ‘too cheap’. Under a barrage of criticism, Bohm gained support from the French physicist Jean-Pierre Vigier, then assistant to de Broglie in Paris. He was just what Bohm needed: a resourceful theorist, a man of action, a hero of the French Resistance during the war, and a friend of the president of the Democratic Republic of Vietnam, Ho Chi Minh. Though Vigier was invited to join Einstein in Princeton, his communist associations led the Department of State to forbid his entry into the US. He worked with Bohm on another variation of the pilot-wave theory and persuaded de Broglie to rekindle his interest in it, sounding alarm bells among the Bohr faithful: ‘Catholics and communists in France are uniting against complementarity!’ But Bohm’s mission to restore materiality to quantum mechanics amounted to more than demonstrating the possibility of a deterministic alternative. In 1935, working with his Princeton colleagues Boris Podolsky and Nathan Rosen, Einstein had set up a stubborn challenge, a last throw of the dice in his debate with Bohr. In the Einstein-Podolsky-Rosen (EPR) thought experiment, a pair of quantum particles interact and move apart, to the left and right, their properties correlated by some physical law. Schrödinger invented the term ‘entanglement’ to describe their situation. For simplicity, we assume that the particles can have properties ‘up’ and ‘down’, each with a 50 per cent probability. We have no way of knowing in advance what results we’re going to get for each particle. But if the particle on the left is found to be ‘up’, the correlated particle on the right must be ‘down’, and vice versa. Now, according to quantum mechanics, the entangled particles are mysteriously bound together no matter how far apart they get, and the correlation persists. 
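The statistics just described can be sketched in a few lines of code. This is only an illustrative toy, not anything from the original experiments (the function name and sample size are arbitrary): a classical simulation in which each pair is assigned opposite values in advance reproduces both the 50/50 marginals and the perfect anti-correlation.

```python
import random

def measure_entangled_pair(rng):
    # Each particle reads 'up' or 'down' with 50 per cent probability,
    # and the pair is assigned perfectly opposite values in advance.
    left = rng.choice(["up", "down"])
    right = "down" if left == "up" else "up"
    return left, right

rng = random.Random(0)
pairs = [measure_entangled_pair(rng) for _ in range(100_000)]

# Marginal statistics: each side alone looks like a fair coin.
frac_left_up = sum(left == "up" for left, _ in pairs) / len(pairs)
# Joint statistics: the two results always disagree.
always_opposite = all(left != right for left, right in pairs)

print(f"fraction of 'up' results on the left: {frac_left_up:.3f}")
print(f"every pair anti-correlated: {always_opposite}")
```

Note what the toy shows and what it hides: pre-assigned opposite values suffice for this single-axis case, which is why EPR seemed to favour hidden local properties; Bell's theorem later showed that no such pre-assignment recipe can reproduce the quantum correlations once measurements along different axes are compared.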
Suppose the particles move so far apart that any message or influence sent from one cannot get to the other even if it travels at the speed of light. How then does the particle on the right ‘know’ what result we obtained for the particle on the left, so that it can correlate itself? We could assume that when they are sufficiently far apart the particles can be considered separate and distinct, or ‘locally real’. But this conflicts with Einstein’s special theory of relativity, which forbids messages or influences from travelling faster than light, as Einstein himself explained: ‘One can escape from this conclusion only by either assuming that the measurement of [the particle on the left] (telepathically) changes the real situation of [the particle on the right] or by denying independent real situations as such to things which are spatially separated from each other. Both alternatives appear to me entirely unacceptable.’ (Emphasis added.) Particles that do not exist independently of each other are said to be ‘nonlocal’. Einstein was known for his pacifist and Leftist inclinations. Podolsky was Russian-born, and Rosen was a first-generation descendant of Russian émigrés. Both of Einstein’s assistants were sympathetic to the Soviet cause. Six months after the publication of the EPR paper, Rosen asked Einstein to recommend him for a job in the Soviet Union. Einstein wrote to the chairman of the Council of People’s Commissars, Vyacheslav Molotov, praising Rosen for his talents as a physicist. Rosen was at first delighted with his new home, and soon he had a son. ‘I hope,’ Einstein wrote in congratulation, ‘that he too can help in furthering the great cultural mission that the new Russia has undertaken with such energy.’ But by October 1938 Rosen was back in the US, having discovered that his research did not prosper in the people’s paradise. 
Podolsky had earned his PhD at the California Institute of Technology and had returned to the Soviet Union in 1931 to work with Fock and Landau (and the visiting English theorist Paul Dirac) at the Ukrainian Institute of Physics and Technology in Kharkiv. From there, he joined Einstein at the Institute in Princeton in 1933. Ten years later, a prospective atomic spy assigned the codename ‘Quantum’ by Soviet intelligence attended a meeting at the Soviet embassy in Washington, DC and spoke with a high-ranking diplomat. Quantum was seeking an opportunity to join the Soviet effort to build an atomic bomb, and offered information on a technique for separating quantities of the fissile isotope uranium-235. He was paid $300 for his trouble. In Russian Foreign Intelligence Service (SVR) files made public in 2009, Quantum was revealed to be Podolsky. Bohm examined the EPR experiment in considerable detail. He developed an alternative that offered the prospect of translation from a thought experiment into a real one. With the Israeli physicist Yakir Aharonov, in 1957 he sought to demonstrate that real experiments had in fact already been done (in 1950), concluding that they did indeed deny independent real situations to the separated particles, such that these cannot be considered locally real. This was far from the end of the matter. Befuddled in his turn by Bohrian vagueness and inspired by Bohm, the Irish physicist John Bell also pushed back against complementarity and in 1964 built on Bohm’s version of EPR to develop his theorem and inequality. The experiments of 1950 had not gone far enough. Further experiments to test Bell’s inequality in 1972 and in 1981-82 demonstrated entanglement and nonlocality with few grounds for doubt. 
It began to dawn on the wider scientific community that entanglement and nonlocality were real phenomena, leading to speculations on the possibility of building a quantum computer, and on the use of entangled particles in a system of quantum cryptography. The 2022 Nobel Prize in Physics was awarded to the three experimentalists who had done most to expose the reality of entanglement and its promise of ‘a new kind of quantum technology’. The projected value of the quantum computing industry is estimated to be somewhere between $9 billion and $93 billion by 2040. I doubt there is any other example in history of such a high-value industry constructed on a physical principle that nobody understands. Marxism powered many objections to Bohr’s complementarity, and so helped to shape the development of postwar quantum mechanics. Soviet physicist-philosophers lent their support by finding positivist tendencies in Bohr’s teaching in conflict with dialectical materialism. Some sought an alternative materialistic interpretation. Podolsky and Rosen both admired the Soviet Union and in different ways sought to contribute to its mission. Bohm laboured at a time when there was little appetite for what many physicists judged to be philosophical, and therefore irrelevant, foundational questions. It says much about Bohm’s commitment that he resisted the temptation to leave such questions to play out in the theatre of the mind. The Marxist in Bohm sought not only to show that a materialistic alternative was possible, but also to find a way to bring the arguments into the real world of the laboratory. It was not enough just to interpret the world. Bohm also sought to change it.

GOATReads: Psychology

People are nicer than you think

We consistently underestimate how much other people like us, and it may be hurting our social lives. It’s probably happened to you: A stranger starts talking to you at a party. In this moment, you’re not nearly as clever or charming as you hoped you’d be, and you struggle to volley with the anecdotes, opinions, and witticisms lobbed your way. At the end of it, you come away thinking, “They totally thought I was a complete idiot.” But research shows they probably didn’t. In a phenomenon dubbed the “liking gap,” people consistently tend to like you better than you think they do. All sorts of other “gaps” — or “social prediction errors,” as experts would call them — govern our social lives. We consistently underestimate everything from people’s empathy toward us to how willing they are to help us. These patterns are strongest when we interact with strangers or acquaintances but can persist for many months into a friendship. They permeate relationships with all kinds of people, from classmates to roommates and coworkers. This pessimism about other people’s attitudes toward us also has consequences, like undercutting our own willingness to connect with others. One particularly stark example of this misjudgement is how likely people think it is that a random stranger would return your dropped wallet to you. This question is often used in surveys as a measure of social trust, says Lara Aknin, a professor of psychology at Simon Fraser University who studies social relationships and happiness. When you take people’s responses and compare them to the results of real-world “wallet drop” studies, where researchers drop or leave wallets in public spaces and observe the rate of return, Aknin says, “Wallets are returned way more than people expect.” In one of the most well-known wallet drop studies from 2019, researchers followed more than 17,000 “lost” wallets containing various sums of money in 355 cities across 40 countries. 
They found that “in virtually all countries, citizens were more likely to return wallets that contained more money” — a result virtually no one predicted. We misjudge not only other people’s altruism or empathy, but also how they’ll react to our overtures. Other research shows that people consistently underestimate how happy someone will feel after we show them a random act of kindness, pay them a compliment, or shoot a message just to get in touch. This all starts at a pretty young age, too. One 2021 paper found that the liking gap begins appearing in children as young as five, and research from 2023 showed that children as young as four underestimate how much another person will appreciate an act of kindness. To some, these may feel like pretty minor points — who cares if people enjoy our compliments more than we think they do? But experts say that these misperceptions of others can be a big obstacle to forming connections, especially in our purported loneliness epidemic. What we lose when we underestimate others We doubt others at our own cost, according to Gillian Sandstrom, an associate professor of psychology at the University of Sussex. If we don’t think someone will appreciate a compliment, then we won’t give it. If we don’t think a friend will be happy to hear from us, we won’t reach out. “We get nervous, and then we turn inwards,” Sandstrom says, “and so we’re less happy and more fearful.” We behave as if others don’t like us, possibly shutting them out, hurting our chances of connection, and curtailing any possibility for building new friendships. “If you don’t trust someone will be tender with you, you won’t get vulnerable with them, and you’ll just stay at surface level.” “It becomes a self-fulfilling prophecy,” she adds — if you don’t think someone will help you, you’ll behave in a way that signals that you don’t expect kindness from them, and then they really won’t help. 
The cycle reinforces our doubts, and over time it “undercuts our willingness to reach out and engage with other people,” says Aknin. After all, people generally try to hew to norms and behave according to how they think most people behave. It doesn’t help when so many of us are inundated with bad news, reading and hearing stories that highlight people’s bad qualities. That “reduces our expectations of other people’s kindness” and makes the world feel like a riskier place, she says, one where you maybe don’t want to ask for help or extend a hand. And so, “we’ll miss social opportunities,” she says, “which we know by and large to have a pretty direct impact on our happiness.” To hammer the point home, Aknin points to the World Happiness Report, which she helps to produce every year. For the 2025 report, researchers assessed how various factors — including unemployment, doubling your income, or believing that it’s “very likely” that your lost wallet will be returned to you — impact self-reported life satisfaction. More than any of the variables they looked at, believing that others will return your wallet to you was most strongly linked with greater well-being, an effect that was almost eight times larger than for doubling your income. The message is clear: Trust in other people and happiness go hand in hand. One theory behind these persistent underestimations is that people are “naturally super driven to stay connected to the group, and super vigilant for signs of rejection,” says Vanessa Bohns, a professor of organizational behavior at Cornell University. “We get super cautious about putting ourselves out there because we don’t want to take social risks,” she says. “But we forget that other people are also driven by those same concerns.” Insecurity — or at least self-consciousness — about our competence, charisma, or likability plays a big role in how we misjudge our interactions. 
Research shows that we tend to assess our role in conversations by how competent we were, whereas other people tend to focus on our warmth or how nice we seemed. In the case of giving and receiving compliments, we can all probably think back to a time when someone said something nice out of the blue, and how warm and happy we felt, Bohns says. But in times when we’re about to give a compliment, “we lose all perspective about what it feels like to be in the other role — we’re so focused on how awkwardly we’re going to deliver that compliment, the fact that maybe we’re interrupting them, or that maybe they don’t want to be approached by us right now.” How to recalibrate So how do we beat back the pessimism and stop underestimating others? The research so far says there’s no easy answer, says Sandstrom. You can tell people about the data and teach them that people enjoy interactions with you more than you’d predict, but that doesn’t tangibly change people’s attitudes or behaviors. “The only thing that’s really worked is just making people do the scary thing,” Sandstrom says. When people regularly exercise the muscles of talking with strangers, paying compliments, or reaching out to old friends, and see that they go well and are received kindly, then their outlooks start to change. But without regular practice, it’s easy to forget. “You don’t need to drop your wallet and see if it’s returned,” says Aknin, “but give yourself opportunities to be proven right or wrong about people.” Because “if the data are right, people will be kinder than we expect.” All close relationships start somewhere. And that process requires you to open up, be vulnerable, ask for help, and offer it, says Sandstrom. And if you can muster up the bravery to go ahead and trust that the person you’re talking to will be kinder and more open than you instinctively feel, that can open up a lot more opportunities for connection. After all, she says, “somebody has to go first.”

GOATReads: Sociology

Freedom over death

Death is a certainty. But choosing how and when we depart is a modest opportunity for freedom – and dignity Assisted dying is now lawful under some circumstances, in jurisdictions affecting at least 300 million people, a remarkable shift given that it was unlawful virtually everywhere in the world only a generation ago. Lively legislative debates about assisted dying are taking place in many societies, including France, Italy, Germany, Ireland and the United Kingdom. Typically, the question at hand for these legislatures is whether to allow medical professionals to help individuals to die, and, if so, under what conditions. The laws under debate remove legal or professional penalties for those medical professionals who help individuals to die. Having conducted research into the ethics of death and dying for more than a quarter of a century, I am rarely surprised by how the debates unfold. On one side, advocates for legalised assisted dying invoke patients’ rights to make their own medical choices. Making it possible for doctors to assist their patients to die, they propose, allows us to avoid pointless suffering and to die ‘with dignity’. While assisted dying represents a departure from recent medical practice, it accords with values that the medical community holds dear, including compassion and beneficence. On the other side, much of the opposition to assisted dying has historically been motivated by religion (though support for it among religious groups appears to be growing), but today’s opponents rarely reference religious claims. Instead, they argue that assisted dying crosses a moral Rubicon, whether it takes the form of doctors prescribing lethal medications that patients administer to themselves (which we might classify as assisted suicide) or their administering those medications to patients (usually designated ‘active euthanasia’). Doctors, they say, may not knowingly and intentionally contribute to patients’ deaths. 
Increasingly, assisted dying opponents also express worries about the effects of legalisation on ‘vulnerable populations’ such as the disabled, the poor or those without access to adequate end-of-life palliative care. The question today is about how to make progress in a debate where both sides are both deeply dug in and all too predictable. We must take a different approach, one that spotlights the central values at stake. To my eye, freedom is the neglected value in these debates. Freedom is a notoriously complex and contested philosophical notion, and I won’t pretend to settle any of the big controversies it raises. But I believe that a type of freedom we can call freedom over death – that is, a freedom in which we shape the timing and circumstances of how we die – should be central to this conversation. Developments both technological and sociocultural have afforded us far greater freedom over death than we had in the past, and though we are still adapting ourselves to that freedom, we now appreciate its moral importance. Legalising assisted dying is but a further step in realising this freedom over death. I have sometimes heard arguments that assisted dying should be discouraged because it amounts to ‘choosing death’. That is inaccurate. We human beings have made remarkable progress in extending our lives, but we remain mortal creatures, fated to die. Death has, in a sense, already chosen us. Some enthusiasts believe that we are on the verge of conquering death and achieving immortality. I’m sceptical. For now, it’s clear that we are not free from death. But dying itself has undergone dramatic changes in the past century or so, changes that have given us increasing freedom over death. Today, most people die not of injuries or fast-acting infections but of chronic illnesses such as heart disease and cancer. These chronic illnesses typically bring about a long pre-mortem decline in health. 
Alongside the availability of new medical interventions and treatments – everything from artificial ventilation to antibiotics to chemotherapy – the comparative slowness of modern death now means we have many more opportunities to shape the timing and circumstances of our deaths. Our freedom over death will always be imperfect. Nevertheless, the timing and circumstances of our deaths increasingly reflect choices made by patients, their families and their caregivers. They can include the following: which treatments to receive for our medical conditions and which not (the cancer patient deciding between surgery and chemotherapy); whether to continue to seek cures or extend life, versus opting for palliation or comfort care; whether to receive interventions at all (for instance, the heart attack victim with a ‘do not resuscitate’ order); and where and with whom death will take place (in a hospital, a hospice, an individual’s home, etc). In each of these choices, we see attempts to shape death, to delay it or hasten it, to decide when, where, how or in whose presence it will take place. Crucially, these are not choices about whether to die – that’s not within our ambit. They are choices reflecting our growing freedom over death: that is, over its timing and circumstances. Death is, of course, ‘natural’. Medically and biologically, we die because our bodies and brains can no longer sustain the functions requisite for life. We all die of ‘natural’ causes in that sense. But in an era where we have such extensive freedom over death – when it occurs at the end of a prolonged and often highly medicalised process punctuated by choices about when and how dying will occur – it is no longer credible to depict dying as cordoned off from human freedom. A comparison: we now appreciate that ‘natural disaster’ is a misnomer. 
Natural disasters are unavoidable insofar as they result from the operations of physical systems that we largely cannot control, but exactly how and when they occur (the particular ways in which they prove ‘disastrous’) can be shaped by where, when and how human activities are organised. And just as it is foolhardy to fail to prepare for or to mitigate natural disasters, so too is it foolhardy not to prepare for or mitigate the harms of dying. Fortunately, we now enjoy unprecedented ability to exercise freedom over death to reduce its harms. Collectively, we are still adapting to this newfound freedom. One sign of our lingering discomfort with this freedom is the belief that assisted dying represents a kind of hubris, a misguided attempt to control or manage death. Some hold that, rather than doctors providing assistance in dying to patients facing particularly gruelling conditions, we should instead let nature (or God, or a person’s illness) ‘take its course’, merely doing our best to ensure that the individual dies without pain and with dignity. As Leon Kass put it: ‘We must care for the dying, not make them dead.’ Assisted dying, from this perspective, foolishly tries to place death itself under human authority. The trouble with this worry is that we already have a surprisingly large freedom over death, a freedom almost no one opposes on grounds of hubris. The course of dying belongs less and less to nature or God than to us, a fact that all but religious objectors, such as Christian Scientists, welcome. There is no social momentum in favour of denying individuals choices regarding life-extending treatments, palliation and the like. If assisted suicide represents a hubristic attempt to usurp nature and replace it with human judgment, then why is it not equally hubristic to try to delay death through medical means, or to hasten it by choosing hospice care, rather than further treatments aimed at extending life? 
Assisted dying’s opponents draw arbitrary lines concerning which exercises of freedom over death should be permitted. Therefore, assisted dying cannot be rejected because it amounts to an ‘unnatural’ intervention in human mortality. Rather, it is merely the latest major incarnation of a freedom over death that we have rightfully embraced. We no longer need stand aside and let nature ‘take its course’, and thank goodness for that. Still, opponents may grant my claim that assisted dying enables us to exercise further freedom over death but wonder whether it is a bridge too far. Do we really need to be legally entitled to medical assistance in dying in order to enjoy sufficient freedom over death? Many people evidently believe so. Support for the legalisation of assisted dying has been steady for several decades in many nations throughout the world, with about two-thirds of those polled supporting its legalisation. No jurisdiction that has legalised assisted dying has subsequently ended the practice, and public support for the practice tends to grow once legalised. In addition, when assisted dying is not available, many will seek it out at considerable expense or inconvenience to themselves. The Swiss organisation Dignitas has assisted in several thousand deaths for individuals willing to pay significant fees and travel expenses (currently estimated at $20,000), as well as to risk possible legal ramifications in their home countries. There is a high demand for enjoying freedom over one’s death. Given the lengths to which individuals will go to seek assisted dying, it’s unlikely that any legal or medical regime will succeed in preventing the practice altogether. Realistically, the question we face is not whether assisted dying will take place. The demand for it ensures that it will, and it is likely that we are overlooking the frequency with which assisted dying occurs clandestinely or through the equivalent of a ‘black market’. 
Many people thus endorse, through their opinions or their choices, our freedom over death encompassing a right to medical assistance in hastening our deaths. Yet it is not obvious why or how being able to opt for assisted dying is a valuable form of freedom over death. In my estimation, its value becomes apparent when we reflect on the distinctive role that dying plays in human lives. Our freedom over death should include a legal right to assisted dying not only because being able to die earlier rather than later allows us to avoid suffering, but also because of the special role that dying plays in our biographies. At the risk of stating the obvious, dying is the last thing we do, and endings matter to us. To see this, contrast two lives – or more precisely, two experiences of dying: The Athenian philosopher Socrates was sentenced to die on charges of corrupting the youth and teaching falsehoods about the gods. Though Socrates was given the opportunity to avoid death by going into exile, he nevertheless chose to ingest the fatal hemlock. He went to his death not long after a lengthy philosophical conversation with his friends and students, in which he articulated his beliefs that the soul is immortal and that a virtuous person cannot be harmed by death. As Drew Gilpin Faust illustrates in her book This Republic of Suffering (2008), the Civil War confronted Americans with death on an unprecedented scale. Not only were the sheer numbers of soldiers killed in battle staggering, these soldiers died, nearly to the last, in ways at odds with their (and their culture’s) understanding of a ‘good death’. These soldiers typically died frightened and alone, either on the battlefield or in a makeshift military hospital far from their loved ones and, in many cases, with no opportunity to perform the Christian rites of atonement. Some died knowing that they could not expect a proper Christian burial. 
Many died fighting a war in which they participated involuntarily, which they did not support, or whose causes or significance they could not understand. Socrates, I submit, had a good death, while many Civil War soldiers did not. The difference consists mainly in how Socrates’ death strongly reflected his identity and values, whereas the soldiers’ deaths largely did not. In Being Mortal: Medicine and What Matters in the End (2014), the surgeon Atul Gawande vividly captures the challenge that dying presents to our integrity: Over the course of our lives, we may encounter unimaginable difficulties. Our concerns and desires may shift. But whatever happens, we want to retain the freedom to shape our lives in ways consistent with our character and loyalties … The battle of being mortal is the battle to maintain the integrity of one’s life – to avoid becoming so diminished or dissipated or subjugated that who you are becomes disconnected from who you were or who you want to be. Dying is an event in life but, as the final event in life, it has an outsize importance in the integrity of our lives. Dying often represents a monumental challenge to our integrity: how can we make dying something we do, reflecting our values and outlook, as opposed to something that merely happens to us, over which we have little agency? We hope our deaths will reflect us (or the best of us) – the values that define our lives as a whole. When they do not, our deaths end up being alien impositions, jarring final chapters instead of fitting conclusions. Many of those who opt for assisted dying are, in my estimation, seeking to die with integrity. Survey research finds that the relief of pain or physical suffering often plays a fairly marginal role in their decision. Far more prominent are worries about being unable to participate in worthwhile activities or losing autonomy or dignity.
The unifying thread in these worries is integrity, the desire to have one’s final days amount to a chapter in a life that one can recognise as one’s own. Freedom over death makes it possible for dying to more fully reflect our selves. And a shorter life sometimes is a better reflection of what we care about, and thereby has greater integrity, than a longer life. Being helped to die is sometimes essential to achieving such a life. I do not mean to convey the impression that the decision to end one’s life prematurely, whether to maintain one’s integrity or for some other reason, is a simple one. It is hard to envision a decision more harrowing than this. But the case for legalised assisted dying does not require that such decisions are simple. In many cases, we should expect some ambivalence regarding the choice to seek assisted dying. We should not expect those who opt for assisted dying to approach death with the serenity of Socrates. The case for legalised assisted dying merely requires that individuals be able to make such choices in the same informed and thoughtful way that they are able to make other life-shaping choices where their integrity is at stake, such as choices regarding marriage, procreation or other medical matters. After all, it is the dying individual’s integrity that is at stake, and there is every reason to think that they are best situated to judge how to die so as to honour their own values or concerns. Indeed, the freedom to die with integrity is one that many care about even if they decide not to exercise it. As many studies have shown, in jurisdictions where assisted dying occurs by means of self-administration, many individuals who receive prescriptions for the lethal drugs end up not using them to end their lives. Simply having these drugs available can offer peace of mind to those who hope to be able to die with integrity.
At this point, my opponents may concede that choosing the circumstances or timing of one’s death is a valuable instance of freedom over death but question whether we ought to be able to enlist others’ help in acting on such choices, especially when their help involves ‘active’ measures such as providing us with a lethal medication or even injecting that medication into us. Moreover, the topic at hand is medically assisted dying, and doubts may be raised about whether assisted dying is compatible with the values of medicine. Patients do not, after all, have the right to receive from their doctors whatever interventions or procedures they want (patients cannot demand medically unproven treatments, for instance, or have medical resources directed toward them in ways that treat others unjustly). Opponents of assisted dying may argue that doctors have a clearly defined role – to treat or cure illness or injury – but assisted dying neither treats nor cures the patient’s condition. Furthermore, medicine’s aims do not encompass enabling us to live with integrity. Might there then be a right to choose the circumstances or timing of one’s death but no right that doctors assist one in realising those circumstances or timing? This line of thought takes a naive view of medicine. Medicine is invariably a value-driven enterprise and, in some cases, doctors forgo treating an illness that poses little risk to patients (many cases of prostate cancer, for example) or agree to provide an intervention that benefits a patient despite its not treating an illness or injury (voluntary sterilisation, for example). The treatment of illness or injury thus does not delimit the boundaries of legitimate medical practice, and so the fact that assisted dying does not treat or cure is not a reason to oppose it.
Allowing a person’s life to conclude with integrity does fall within the central mission of medicine: to address conditions of the body so as to allow a person to live in ways they see fit. Moreover, individuals concerned about dying with integrity have few if any other options besides the medical profession to provide them with the dying experience they seek. For better or worse, healthcare systems monopolise access to the options we have for exercising freedom over death, including a monopoly on the easeful, non-traumatic, non-violent forms of death most of us want. Medicine’s monopoly on access to these options can be justified on the grounds that lethal medications need to be safeguarded, but this monopoly cannot justify a blanket prohibition on patients having access to such medications when they stand to benefit from them (when, for instance, being assisted to die allows them to die with greater integrity). That doctors should not knowingly and intentionally contribute to their patients’ deaths is perhaps the oldest argument against assisted dying. But this appeal to ‘do not kill’ faces a problem: doctors are already permitted, legally and morally, to intentionally contribute to their patients’ deaths in ways that arguably amount to killing those patients. For example, if anyone besides a doctor removes a person from life-sustaining artificial ventilation, this is standardly classified as killing rather than ‘letting the patient die’. So too, then, when doctors act with patient consent to remove life-sustaining measures: they kill their patients, albeit justifiably. But if opponents of assisted dying agree (a) that doctors may honour patient requests to accelerate the timing of their death by the removal of such measures and (b) that, in so doing, doctors are acting to kill their patients, then the argument that assisted dying involves doctors wrongfully ‘killing’ patients falls apart.
If doctors may permissibly contribute to killing patients by the removal of life-sustaining measures, then there seems no reason to conclude that they may not contribute to killing their patients (when they competently consent to it and meet other conditions) by providing them with, or administering to them, a lethal medication. There are of course many other objections to the legalisation of assisted dying. Many of these rest on empirical predictions that are not supported by the evidence that we now have from the jurisdictions where it has been legalised, evidence that now dates back a quarter-century. The availability of assisted dying hasn’t made it harder to access quality palliative care or caused a decline in its quality. Assisted dying hasn’t undermined healthcare for those with disabilities, and those with disabilities generally agree that opposing assisted dying in the name of protecting them is itself discriminatory and disrespectful. As to worries about ‘abuse’ or ‘coercion’, it is often unclear how to interpret what opponents intend by these terms, but all evidence suggests that abuse or coercion in connection with assisted dying is extremely rare. Moreover, for the majority of patients, legalising assisted dying doesn’t erode trust in their doctors and can in fact make possible wider and more candid conversations among patients, doctors and their families about choices at the end of life. Whenever assisted dying is legalised, its opponents attempt to discredit the law. The most recent example of this is Canada, which passed its assisted dying law in 2016. Advocacy groups depict the law as a catastrophic ‘slippery slope’, but their criticisms do not withstand factual scrutiny: Canadians are not being provided assisted dying due to poverty or homelessness, nor are they turning to assisted dying because they are receiving substandard palliative care or inadequate support for their disabilities.
(Indeed, it is striking how ‘unmarginalised’, even privileged, beneficiaries of assisted dying tend to be.) Contrary to assisted dying’s opponents, assisted dying laws work as intended, and it is one of the beauties of democratic societies that they can continue to refine their assisted dying laws and practices to ensure fairness and transparency. So little of our lives is up to us: who our parents are, where we are brought up, how we are educated, even whether we exist at all. To be able to die with integrity offers us a modest opportunity for freedom in an existence whose defining features we are largely not free to choose. Granted, societies may justifiably impose constraints on how we exercise this freedom, constraints aimed at ensuring that we exercise it after due consideration and when it is reasonable for us to want to exercise it. But those in the medical profession ought not suffer adverse consequences when they assist us in exercising this freedom. They ought not be subject to professional sanctions, nor to imprisonment or other legal penalties. We ought to legalise medically assisted dying and free doctors from such risks.

GOATReads:Politics

Is Macaulay a Villain or a Champion of Social Justice?

The real question is not whether Macaulay failed India, but whether India’s own elites failed to fulfil even the limited emancipatory possibilities that colonial modernity, however imperfectly, made available. In recent years, Thomas Babington Macaulay has been recast as a principal villain in contemporary Hindutva discourse. His misdeeds are said to lie in the educational system he inaugurated – an arrangement portrayed as having crippled Hindu civilisation until its supposed recent “liberation.” It is doubtful that most of Macaulay’s detractors have read his writings; even among those who have, selective quotation is the norm. At the other end of the spectrum stands another constituency that elevates Macaulay to the status of a pioneer of social justice. Both these narratives obscure more than they illuminate. A historically grounded assessment requires a closer look at the pre-colonial educational landscape and at what Macaulay actually argued.

The pre-colonial educational context

Before indicting Macaulay, it is essential to understand the state of indigenous education in early nineteenth-century India. The observations of the collector of Bellary, recorded in the 1820s and often cited approvingly by scholars such as Dharampal, are instructive. He noted that Telugu and Kannada instruction depended heavily on literary forms of the languages that bore little resemblance to the vernaculars actually spoken: “The natives therefore read these (to them unintelligible) books to acquire the power of reading letters… but the poetical is quite different from the prose dialect… Few teachers can explain, and still fewer scholars understand… Every schoolboy can repeat verbatim a vast number of verses of the meaning of which he knows no more than the parrot which has been taught to utter certain words.” In short, comprehension was minimal; rote memorisation was paramount. Many teachers themselves lacked understanding of the texts they taught.
The subject matter was similarly circumscribed. Campbell has recorded that students from “manufacturing castes” studied works aligned with their sectarian traditions, while Lingayat students studied texts considered sacred. Beyond religious material, instruction included rudimentary accounting and memorised lists – astronomical categories, festival names, and the like. The renowned Amarakosha was used largely for its catalogues of synonyms, including names of deities, plants, animals, and geographical divisions.

Caste and educational access

The sociological profile of teachers and students further reveals the exclusivity of the system. Frykenberg, in his seminal work on education in South India, noted that Brahmins dominated the teaching profession in the Telugu region, while Vellalas did so in Tamil areas. Students overwhelmingly came from upper castes. Frykenberg explains that hereditary occupations, the sacralised exclusivity of high-caste learning, and the financial burden of even modest fees made education virtually inaccessible to the majority. A fee of three annas a month was beyond the reach of many “clean caste” families, let alone the “unclean” – Paraiyar, Pallar, Chakriyar, Mala, Madiga and other communities who constituted close to half the population. Even within the classroom, caste segregation was strictly maintained. If this was the state of affairs in the comparatively less feudal, ryotwari regions of South India, the situation in North India – dominated by zamindari and entrenched feudal relations – can only be imagined. The conclusion is unavoidable: before the advent of British rule, formal education was functionally restricted to privileged groups.

Re-reading Macaulay’s minute

Against this backdrop, Macaulay’s 1835 minute must be understood. His rhetoric was undoubtedly steeped in imperial arrogance, and he dismissed Indian literary and scientific traditions with unwarranted disdain.
Yet the substantive debates of the era did not concern the desirability of mother-tongue instruction; that idea had virtually no advocates at the time. The controversy revolved around whether Sanskrit, Arabic, or English should serve as the medium for higher education. Macaulay argued: “All parties seem to be agreed… that the dialects commonly spoken… contain neither literary nor scientific information… until they are enriched from some other quarter.” He noted further that despite the state’s investment in printing Sanskrit and Arabic works, these books remained unsold, while English books were in high demand. Thousands of folios filled warehouses, unsought and unused. Meanwhile, the School Book Society sold English texts in large numbers and even made a profit. His infamous proposal to create “a class of persons Indian in blood and colour, but English in tastes, in opinions, in morals and in intellect” must be read in conjunction with his expectation that this class would subsequently transmit modern knowledge into the vernaculars, rendering them, over time, suitable for mass education. Whether this expectation was realistic or sincerely held is debatable, but the stated logic is unambiguous: English was intended as a bridge for elite modernisation, not the permanent medium of education for India’s masses. The greater tragedy lies not in Macaulay’s intention but in the fact that, 190 years later, vernacular languages have still not been fully equipped to serve as robust vehicles of modern scientific knowledge.

Elite demand for English education

It is also historically erroneous to claim that English education was imposed against the wishes of the populace. In the Madras Presidency particularly, demand for English education was strong. The 1839 petition signed by seventy thousand individuals, including Gazulu Lakshminarasu Chetty, Narayanaswami Naidu, and Srinivasa Pillai, explicitly requested that English education be introduced without delay.
Their petition asserted: “If diffusion of Education be among the highest benefits and duties of a Government, we, the people, petition for our share… We ask advancement through those means which will best enable us… to promote the general interests of our native land.” Similarly, the Wood’s Despatch of 1854 – the so-called Magna Carta of Indian education – stated unequivocally that education should be available irrespective of caste or creed and reiterated the expectation that Indians themselves would carry modern knowledge to the masses through vernacular languages.

Practice: Liberal principles, exclusionary outcomes

Despite the ostensibly universal language of the 1839 petition, the actual practice in Madras was exclusionary. The standards set for admission to higher education ensured that only the highest castes could qualify. The rhetoric of liberalism facilitated an elite project: by advocating “higher branches of knowledge,” the curriculum implicitly excluded those without prior linguistic and cultural capital. Thus, liberalism provided a vocabulary for political demands while simultaneously enabling the marginalisation of the very groups whose support had made those demands politically effective. Dalit entry into schools frequently required direct resistance to entrenched social norms. The well-documented case of Father Anderson, who was pressured to expel two Dalit boys yet refused to do so, illustrates the uphill struggle faced by marginalised communities across the region.

Did Macaulay promote social justice?

The British educational system, however limited in intent, did expand opportunities for groups previously excluded from formal learning. The evidence is overwhelming: literacy and access to education grew significantly during colonial rule, whereas pre-colonial systems were highly restricted. But this expansion was an unintended byproduct of administrative rationalisation and economic modernisation – not a deliberate project of social justice.
Macaulay himself was no egalitarian. His speeches against the Chartists in Britain reveal his deep opposition to universal suffrage. He famously declared: “The essence of the Charter is universal suffrage… If you grant that, the country is lost.” He compared extending rights to working-class Britons to opening granaries during a food shortage – an act he described as turning “scarcity into famine.” His analogy to starving Indian peasants begging for grain, whom he would refuse even “a draught of water,” reveals a worldview firmly rooted in class privilege and imperial paternalism.

Conclusion

Macaulay was, unquestionably, an imperialist dedicated to advancing British interests and the interests of his own class. His project sought to cultivate an Indian elite that would perpetuate colonial governance and ideology. That elite did emerge, and it is this class – not Macaulay – that bears responsibility for failing to democratise education and modern knowledge. To depict Macaulay either as the destroyer of an egalitarian indigenous utopia or as a hero of social justice is historically unsustainable. He was neither. He was an articulate functionary of empire whose policies interacted with existing social hierarchies in complex ways – sometimes reinforcing them, sometimes inadvertently weakening them. The real question is not whether Macaulay failed India, but whether India’s own elites failed to fulfil even the limited emancipatory possibilities that colonial modernity, however imperfectly, made available.

Why Your Company Needs a Chief Data, Analytics, and AI Officer

How is AI and data leadership at large organizations being transformed by the accelerating pace of AI adoption? Do these leaders’ mandates need to change? And should overseeing AI and data be viewed as a business or a technology role? Boards, business leaders, and technology leaders are asking these questions with increasing urgency as they face mounting pressure to transform almost all business processes and practices with AI. Unfortunately, they’re not easy questions to answer. In a survey that we published earlier this year, 89% of respondents said that AI is likely to be the most transformative technology in a generation. But experience shows that companies are still struggling to create value with it. Figuring out how to lead in this new era is essential. We have had a front row seat over the past three decades to how data, analytics, and now, AI, can transform businesses. Between us, we have served as a Chief Data and Analytics Officer with AI responsibility for two Fortune 150 companies, authored groundbreaking books on competing with analytics and AI in business, and advised Fortune 1000 companies on data, analytics, and AI leadership; we regularly counsel leading organizations on how they must structure their executive leadership to achieve the maximum business benefit possible from these tools. So, based on our collective first-hand experience, our research and survey data, and our advisory roles with these organizations, we can state with confidence that it almost always makes the most sense to have a single leader responsible for data, analytics, and AI. While many organizations currently have several C-level tech executives, we believe that a proliferation of roles is unnecessary and ultimately unproductive. Our view is that a combined role—what we call the CDAIO (Chief Data, Analytics, and AI Officer)—will best prepare organizations as they plan for AI going forward. Here is how the CDAIO role will succeed.
CDAIOs Must be Evangelists and Realists

Before the 2008–09 financial crisis, data and analytics were widely seen as back-office functions, often relegated to the sidelines of corporate decision making. The crisis was a wakeup call to the absolute need for reliable data, the lack of which was seen by many as a precipitating factor of the financial crisis. In its wake, data and analytics became a C-suite function. Initially formed as a defensive function focused on risk and compliance, the Chief Data Officer (CDO) role has evolved in the years since its establishment, as a growing number of firms repositioned these roles as Chief Data and Analytics Officers (CDAOs). Organizations that expanded the CDAO mandate saw an opportunity to move beyond traditional risk and compliance safeguards to focus on offense-related activities intending to use data and analytics as a tool for business growth. Once again, the role seems to be undergoing rapid change, according to forthcoming data from an annual survey that one of us (Bean) has conducted since 2012. With the rapid proliferation of AI, 53% of companies report that they have appointed a Chief AI Officer (or equivalent), believe that one is needed, or are expanding the CDO/CDAO mandate to include AI. AI is also leading to a greater focus on and investment in data, according to 93% of respondents. These periods of evolution can be confusing to both CDAIOs and their broader organizations. Responsibilities, reporting relationships, priorities, and demands can change rapidly—as can the skills needed to do the job right. In this particular case, the massive surge in interest in AI has driven organizations to invest heavily in piloting various AI concepts. (Perhaps too frequently.) These AI initiatives have grown rapidly—and often without coordination—and leaders have been asked to orchestrate AI strategy, training data, governance, and execution across the enterprise.
To address the challenges of this particular era, we believe that companies should think of the CDAIO as both evangelist and realist—a visionary storyteller who inspires the organization, a disciplined operator who focuses on projects that create value for the company while terminating those that do not deliver a return, and a strategist who deeply understands the AI technology landscape. At the core of these efforts—and essential to success in a CDAIO position—is ensuring that investments in AI and data deliver measurable business value. This has been a common stumbling block of this kind of role. The failure of data initiatives to create commensurate business value has likely contributed to the short tenures of data leaders (to say nothing of doubts about the future of the role entirely). CDAIOs need to focus on business value from day one.

To Make Sure AI and Data Investments Pay Off, CDAIOs Need a Clear Mandate

In most mid-to-large enterprises, data and AI touch revenue, cost, product differentiation, and risk. If trends continue, the coming decade will see systematic embedding of AI into products, processes, and customer interactions. The role of the CDAIO is to act as orchestrator of enterprise value while managing emerging risks. A single leader with a clear business mandate and close relationships with key stakeholders is essential to lead this transformation. Based on successful AI transformations we’ve observed, organizations today must entrust their CDAIOs with a mandate that includes the following:

· Owning the AI strategy. To bring about any AI-enabled transformation, a single organizational leader must define the company’s “AI thesis”—how AI creates value—along with the corresponding roadmap and ROI hypothesis. The strategy needs to be sold to and endorsed by the senior executive team and the board.

· Preparing for a new class of risks. AI introduces safety, privacy, IP, and regulatory risks that require unified governance beyond traditional policies.
CDAIOs should normally partner with Chief Compliance or Legal Officers to manage this mandate.

· Developing the AI technology stack for the company. Fragmentation and inconsistent management of tools and technology can add expense and reduce the likelihood of successful use case development. CDAIOs need the power to follow through on their vision for the adoption and development of tools and technologies that are right for the organization, providing secure “AI platforms as products” that teams can use with minimal friction.

· Ensuring the company’s data is ready for AI. This is particularly critical for generative AI, which primarily uses unstructured data such as text and images. Most companies have focused only on structured numerical data in the recent past. The data quality approaches for unstructured data are both critical to success with generative AI and quite different from those for structured data.

· Creating an AI-ready culture. Companies with the best AI tech might not be the long-term winners; the race will be won by those with a culture of AI adoption and effective use that maximizes value creation. CDAIOs should in most cases partner with CHROs to accomplish this objective.

· Developing internal talent and external partner ecosystems. It’s essential to develop a strong talent pipeline by recruiting externally as well as upskilling internal talent. This requires building strategic alliances with technology partners and academic institutions to accelerate innovation and implementation.

· Generating significant ROI for the company. At the end of the day, CDAIOs need to drive measurable business outcomes—such as revenue growth, operational efficiency, and innovation velocity—by prioritizing AI initiatives tied to clear financial and strategic KPIs. They serve as the bridge between experimentation and enterprise-scale value creation.
Positioning CDAIOs for Organizational Success

As important as what CDAIOs are being empowered to do is how they’re positioned in an organization to do it. Companies are adopting different models for where the CDAIO reports within the organization. While some CDAIOs report into the IT organization, others report directly to the CEO or to business area leaders. At its core, the primary role of the CDAIO is to drive business value through data, analytics, and AI, owning responsibility for business outcomes such as revenue lift and cost reduction. While AI technology enablement is a key part of the role, it is only one component of the CDAIO’s broader mandate of value creation. Given the emphasis on business value creation, we believe that in most cases CDAIOs should be positioned closer to business functions than to technology operations. Early evidence suggests that only a small fraction of organizations report positive P&L impact from gen AI, a fact that underscores the need for business-first AI leadership. While we have seen successful examples of CDAIOs reporting into a technology function, this is only when the leader of that function (typically a “supertech” Chief Information Officer) is focused on technology-enabled business transformation. Today, we are witnessing a sustained trend of AI and data leadership roles reporting into business leaders. According to forthcoming survey data from this year, 42% of leading organizations report that their AI and data leadership reports to business or transformation leadership, with 33% reporting to the company’s president or Chief Operating Officer. Data, analytics, and AI are no longer back-office functions. Leading organizations like JPMorgan have made the CDAIO function part of the company’s 14-member operating committee. We see this as a direction for other organizations to follow. Whatever the reporting relationship for CDAIOs, their bosses often don’t fully understand this relatively new role and what to expect of it.
To ensure the success of the CDAIO role, executives to whom a CDAIO reports should maintain a checklist of the organization’s AI ambitions and the CDAIO mandate. Key questions include:

· Do I have a single accountable leader for AI value, technology, data, risk and talent?

· Are AI and data roadmaps funded sufficiently against business outcomes?

· Are our AI risk and ethics guardrails strong enough to move ahead quickly?

· Are we measuring AI KPIs quarterly at minimum and pivoting as needed?

· Are we creating measurable and sustainable value and competitive advantage with AI?

The Future of AI and Data Leadership Is Here

Surveys about the early CDO role reveal a consistent challenge—expectations were often unclear, and ROI was hard to demonstrate with a mission focused solely on foundational data investments. Data and AI are complementary resources. AI provides a powerful channel to show the value of data investments, but success with AI requires strong data foundations—structured data for analytical AI, and unstructured data for generative AI. Attaching data programs to AI initiatives allows demonstration of value for both, and structurally this favors a CDAIO role. The data charter (governance, platform, quality, architecture, privacy) becomes a data and platforms component within the CDAIO’s remit. Benefits include fewer hand-offs, faster decision cycles and clearer accountability. To turn AI from experiment to enterprise muscle, organizations must establish a CDAIO role with business, cultural, and technology transformation mandates. We believe strongly that the CDAIO will not be a transitional role. CEOs and other senior executives must ensure that CDAIOs are positioned for success, with resources and organizational design that support the business, cultural, and technology mandate of the CDAIO. Strong AI and data leadership will be essential if firms expect to compete successfully in an AI future that is arriving sooner than anyone anticipated.

Medical Students Are Learning Anatomy From Digital Cadavers. Can Technology Ever Replace Real Human Bodies?

From interactive diagrams to A.I. assistants, virtual tools are beginning to supplant physical dissections in some classrooms

A human chest as large as a room fills your entire field of view. With a few commands, you shrink it down until it’s a mere speck. Then, you return it to life-size and lay it prone, where you proceed to strip off the skin and layers of viscera and muscle. Helpful text hovers in the air, explaining what you see, projected across your field of vision by a headset. This futuristic experience is becoming more commonplace in medical schools across the country, as instructors adopt virtual reality tools and other digital technologies to teach human anatomy. On dissection tables in some classrooms, where students might have once gathered around human cadavers, digitized reconstructions of the human body appear on screens, allowing students to parse the layers of bones and tendons, watch muscles contract, and navigate to specific anatomical features. Sandra Brown, a professor of occupational therapy at Jacksonville University in Florida, teaches her introductory anatomy class with exclusively digital cadavers. “In a way, the dissection is brought to life,” she says. “It’s a very visual way for [students] to learn. And they love it.” The dissection of real human cadavers has long been a cornerstone of medical education. Dissection reveals not only the form of the organs, but how the structures of the human body work together as a whole system. The best way to understand the human body, researchers have argued, is to get up close and personal with one. But human dissection has also been controversial for hundreds of years, with a history burdened by grave robbers and unscrupulous physicians. Now, with interactive diagrams, artificial intelligence assistants and virtual reality experiences, new technology might provide an effective alternative for students—no bodies necessary.
Still, the shift toward these tools raises questions around what might be lost when real bodies leave the classroom—and whether dissecting a human body carries lessons that no digital substitute can teach. “Is it helpful to be exposed to death, and is there something beyond just the functional learning of dissecting a cadaver?” says Ezra Feder, a second-year medical student at the Icahn School of Medicine at Mount Sinai in New York. “I don’t really have a great answer for that.”

“A new dimension of interaction”

Among the most popular new additions to anatomy classrooms are digital cadaver “tables.” These giant, iPad-like screens can be wheeled into the classroom or the lab. Anatomage, a California-based company that produces one such table, has seen its product adopted by more than 4,000 health care and education institutions. The company uses real human cadavers that have been frozen and imaged in thousands of thin sheets, then reconstructs them digitally, so students can repeatedly practice differentiating layers and systems of the body. Digital cadavers are not new, but they’re getting better, more realistic and more interactive. They’re so good that some schools have phased out real human cadavers entirely. Brown, who uses the Anatomage table, says digital dissection meets the educational styles preferred by her students. “They’ve had smartphones in their hands since they were born, practically. So, the fact that we have this massive virtual technology that they can use, and they can actually start to incorporate all the skills they have into learning—it was just a no-brainer for me,” she says. “It’s really fun.” Brown’s students can rotate, move and manipulate the digital cadavers in ways that would be impossible with a real body. “They literally have the brain upside down, and they’re looking at it from underneath. You can’t really do a lot of that when you have a cadaver in front of you, because they’re so fragile,” she says.
“It’s an errorless way for [students] to explore, because if they make a mistake, or they can’t find something, they can reset it, and they can undo it.” Other companies, like Surglasses, which developed the Asclepius AI Table, are taking the digital cadaver model one step further. This table features A.I. assistants with human avatars that can listen and respond to voice commands from students and educators. The assistants can pull up relevant images on the table and quiz students on what they’ve learned. Recent research has shown that A.I. assistants can effectively support student learning and that those with avatars are particularly promising. “Students really respond well to technology that’s accessible to them,” says Saeed Juggan, a graduate teaching assistant at Yale Medical School, which has its own suite of digital anatomy tools, including a 3D model of a body that students can access from their own devices. Still, Juggan is a bit wary of A.I. tools because of potential limitations with the data they’re trained on. “Suppose students ask a question that’s not answered by those resources. What do you do in that case? And what do you tell the bot to do in that case?” he says. With virtual and augmented reality (VR/AR) anatomy programs, human dissection has become even more futuristic. Companies like Toltech have created VR headsets that transport students into an immersive digital cadaver lab, where they manipulate a detailed, annotated body standing in a gray void. While learning remotely during the Covid-19 pandemic, students at Case Western Reserve University donned bug-like visors to interact with holographic bodies that appeared to be floating in the students’ apartments. Still, VR comes with complications. Some students experience motion sickness from the headsets, explains Kristen Ramirez, a research instructor and content director for anatomy lab at New York University’s Grossman School of Medicine. 
Her approach, and that of her team at NYU, is to tailor the technology to fit the type and content of instruction. Ramirez and a colleague have created an in-house VR program that allows students to stand inside a human heart. Students can see “everything that the red blood cells would have seen if they had eyes and cognition,” she says. For certain parts of the body, an immersive experience is the best way to understand them, Ramirez adds. The pterygopalatine fossa, for example, is a small space deep inside the face, roughly between the cheek and nose—and the only way to see it, until now, has been by sawing through a donor’s skull. Even then, the fragile structures are inevitably damaged. With VR, students can view that cavity as though they are standing inside it—the “Grand Central Station of the head and neck,” as Ramirez calls it—and access “literally a new dimension of interaction.”

The body on the table

Even as digital tools land in more medical school classrooms, some say that learning from an actual body is irreplaceable. William Stewart, an associate professor of surgery at Yale University who instructs gross anatomy, sees the embodied experience of dissection as vital. “There’s a view of learning called ‘gestalt,’ which is that learning is the sum of all of the senses as the experience occurs,” he explains. “There’s seeing, there’s touching, there’s camaraderie around the table. There’s—I know this sounds silly, but it’s true—there’s the smell,” Stewart says. “All of those contribute, in one way or other, to the knowledge, and the more and more of those senses you take away, in my view, the less and less you learn.” When it comes to preparing for surgery or getting tactile experience with a body, surveys suggest students generally favor cadaver dissection, with some citing better retention of concepts. Working solely with digitized, color-coded models that can respond to voice commands does students a disservice, Stewart argues.
“It’s not seeing it, it’s finding it that makes the knowledge.” The donation of one’s body to scientific learning and research is not taken lightly. It’s common practice for medical students to participate in a memorial service to honor the people who they will dissect as part of their studies. Jai Khurana, a first-year medical student at the Harvard-MIT Health Sciences Technology Program currently taking introductory anatomy, describes a respect and care for the human body that he and his fellow students learn through human dissection. “We regularly stay many hours past our anatomy lab if we don’t think we’re going to finish,” he says. “You still want to finish what you’re doing, do it in a respectful way and learn everything that you can learn.” Still, ethical violations have long plagued human dissection. In the 18th and 19th centuries, medical students dissected corpses stolen from graves and even the bodies of people who had been murdered and sold by unscrupulous profiteers. At the time, dissection was imposed as an extra punishment for executed criminals in Britain to deprive them of a Christian burial; the boon to medical research was a bonus. Today, some countries around the world and many U.S. states still permit the dissection of “unclaimed remains,” or the bodies of those who die without family to properly bury them, raising concerns about consent. A recent investigation revealed that unclaimed corpses sent to the University of Southern California’s anatomy program were sold to the U.S. Navy, where they were used to train military operatives in the Israel Defense Forces. And despite the fact that most medical school cadavers in the U.S. are willingly donated, the donors and families are sometimes underinformed about what may happen to their remains. No federal agency monitors what happens to bodies donated for research and education.
In an extreme case from this year, the former manager of the Harvard morgue pleaded guilty to stealing donated human remains and selling them to retailers for profit. Digital cadavers, VR/AR and A.I.-enhanced anatomy technology could offer a way to skirt these issues by reducing the number of human bodies needed for education—and the cutting-edge tech might actually be less costly than human cadavers for some medical programs. Whatever way you swing it, bodies are expensive, even if they are donated. Supporting a cadaver lab requires administrative staff to coordinate body donations, a wet lab space equipped for dissections and infrastructure for disposing of human remains. Because of this, students typically work in small groups to use fewer bodies. Each human cadaver can be used only once for each procedure, but digital ones can be reset repeatedly. For Brown, who teaches with the Anatomage table, the ideal lab would be a mix of synthetic, digital and real human dissection, where she could supplement a largely digital cadaver-based education with different tools to demonstrate various elements of anatomy. But given the financial constraints at Jacksonville University, Brown does what she can with the Anatomage table, having her students rotate the body’s shoulders, color code structures and create videos of their work to reference later. Her occupational therapy students are not preparing to be surgeons, so they would not have to practice cutting into flesh, she adds. Learning from a human cadaver has long been considered a rite of passage for medical students, who typically must dissect a body during their first-year anatomy class. But the emotional weight of human dissection can sometimes hinder, not enhance, the experience. “Cadavers can be scary, like a dead body laying in front of you that you have to look at,” says Brown. 
“And I think that [digital dissection] is just a safe way for [students] to explore.” She explains that some students enter the program expecting cadaver dissection and feel more comfortable when they encounter the digital models instead. Perhaps there’s value to sitting with that discomfort. Ramirez says that students who were initially apprehensive about human dissection might never overcome their squeamishness when offered a virtual alternative. “Because they are getting such small moments of interaction with the cadavers, I definitely will still see students even a couple weeks in, you know, disappointed, if you will, that they’re at a cadaver station, hesitant about going and interacting with it,” she says. For Feder, the student at Mount Sinai, dissecting a real human body elicited mixed emotions. In the lab, some students seemed to become desensitized over time to the cadaver’s humanity and treated it inappropriately, he says. “For some people, it became so ordinary and routine that maybe they lost some respect for the body,” Feder adds. “Maybe these are coping mechanisms.” Regarding respect for the dead, he says, “I think I’d feel a lot more comfortable learning from a technology-based cadaver than a real human bone-and-flesh cadaver.” Educationally, the physical cadaver dissection “was really invaluable,” Feder adds, and “a little bit hard to replace.” But he notes that the value of being exposed to death might vary between students, depending on what exactly they’re training for. “Overall, most doctors are in the business of keeping people alive.”

The future of learning from cadavers

While anatomy classes are increasingly using digitized bodies, cadaver dissection writ large is not likely to disappear anytime soon. It’s still a common way for surgeons to gain tactile experience with manipulating human flesh.
Juggan, the Yale graduate student, explains that a neurosurgeon recently practiced a hemispherectomy on cadavers at the university before operating on a living patient. The procedure, which entails surgically separating different parts of the brain, is difficult, with a potential for catastrophic failure. Practicing this surgery on a cadaver is “not necessarily looking at the tissue,” Juggan says. “It’s getting the muscle memory. It’s getting the tactile feel for … this anatomical structure.” It goes without saying: No living patient wants to be the beta test for brain surgery. But not all cadavers are used to prep for such dramatic, high-stakes operations. As Mary Roach observes in her book Stiff: The Curious Lives of Human Cadavers, even plastic surgeons practice nose jobs on disembodied heads from donors. “Perhaps there ought to be a box for people to check or not check on their body donor form: Okay to use me for cosmetic purposes,” she writes in the book. Though the tactile training of surgeons remains important, surgery itself is getting more technologically advanced. With more robot-assisted surgeries, it’s not hard to imagine that technology and anatomy teaching will become more deeply integrated. Take, for instance, laparoscopic surgery, meant to look inside the pelvis or stomach by inserting a tiny camera into the abdomen. The surgery feels almost like science fiction: The surgeon makes minute adjustments from a distance, while the patient’s organs appear on a glowing screen. “If you’re doing laparoscopic surgery, you’re putting three tiny holes in the abdomen, and you’re playing a video game,” Ramirez says. There’s a conceivable future where the majority of medical students do not dissect an actual human body. The problem of tactile experience might also soon be solved by innovation, as synthetic cadavers—made of thermoplastic and organosilicate—mimic the physicality of the human body without limitations of ethics or decomposition. 
The anatomy classroom may soon be filled with digital dead people and synthetic approximations of flesh, rather than a decaying memento mori. Death’s banishment is, after all, in service of keeping more people alive longer.

GOATReads:Politics

After “Abortion”: A 1966 Book and the World That It Made

“We were all considered slightly cracked, if not outright fanatics, that first year.” —Larry Lader, Abortion II

“Abortion is the dread secret of our society.”1 So began journalist Larry Lader’s controversial book, Abortion, published in 1966 after years of rejection from publishers. If you had told Lader or the mere handful of activists then dedicated to legalizing abortion that a Supreme Court case would overturn anti-abortion laws across the US seven years later—in a January 1973 case named Roe v. Wade—they probably would have laughed. In fact, in the early 1960s when Lader began researching, it was harder to get an abortion in the US than it had been in the early decades of the twentieth century. In 1966, American doctors—who were overwhelmingly white men—tightly controlled women’s reproductive options. And women of color, primarily Black and Latina women, had even fewer choices if they found themselves accidentally pregnant. Nearly 80 percent of all illegal abortion fatalities were women of color—primarily Black and Puerto Rican.2 And, worst of all, as Lader documented, deaths from illegal abortions had doubled in the preceding decade. Before Lader’s book, no one, it seemed, wanted to talk about abortion publicly. But something changed with the 1966 Abortion. For starters, Reader’s Digest—one of the bestselling magazines in the US at the time, with a circulation of millions—excerpted eight pages. This thrust Lader into the limelight, turning him from a journalist into an abortion activist almost overnight. He began receiving hundreds of letters and phone calls to his home from people asking for contacts for abortion providers who would perform the procedure safely. Lader’s wife, Joan Summers Lader, remembers receiving the calls at all hours and worrying that their phone might be tapped. She advised women to write to their home address with their request, because if necessary, letters could be burned.
They worked tirelessly to make sure that no woman was turned away and that every woman received a safe and affordable abortion, if possible. Because of Abortion, Lader found himself at the center of a burgeoning radical abortion rights movement. Activists, lawyers, religious leaders, and health practitioners from across the country who supported the repeal of abortion laws reached out to him, applauding his book and asking what they could do next to advance their cause. In 1969, with Dr. Lonny Myers and Reverend Canon Don Shaw in Chicago, Lader organized the first national conference dedicated to overturning abortion laws. He insisted that the conference’s efforts should result in a national organization that could continue centralizing the efforts of local repeal groups. On the last day, he chaired a meeting with hundreds of people, trying to bridge differences and prevent shouting matches, mediating between those who wanted to forge ahead and those who were more cautious. At the end, NARAL was founded. In those early days its name stood for the National Association for the Repeal of Abortion Laws, and its ultimate goal was precisely that: to repeal all laws restricting abortion. Six decades ago, Lader’s book launched a movement. In the days before the internet connected people, Abortion served as a link joining activists, doctors, lawyers, clergy people, and others impacted by restrictive abortion laws, who felt deeply that those laws needed to change. Doctors saw firsthand how anti-abortion laws killed, maimed, and emotionally destroyed women who couldn’t have safe and legal abortions; progressive clergy people understood the trauma inflicted by these laws and how they alienated people from religion; and lawyers saw an opportunity to extend new privacy protections afforded by the 1965 Supreme Court case Griswold v. Connecticut, which finally legalized birth control for married couples.
And of course, women, their partners, and their families understood firsthand how anti-abortion laws curtailed their lives and limited their freedom. Still, bringing people together to start a national movement on a controversial subject that rarely received sympathetic attention from the media or politicians was challenging. Abortion, however, proclaimed loudly that all abortion laws should be repealed, that there was no shame in seeking an abortion, and that without legal abortion women would never be free. It was the needed spark to bring together a movement, and Lader embraced his role as its convener. For nearly a century, abortion had been outlawed in every American state. Some of the earliest anti-abortion laws were passed as antipoison measures to protect women who were sold toxic chemicals claimed to be abortifacients. However, by the 1860s, it was clear that most anti-abortion laws intended to control women’s reproduction to keep them in the domestic sphere. Nineteenth-century anti-abortion rhetoric framed abortion as unnatural or as interfering with (white) women’s moral duty to reproduce for the state. Despite these restrictions, however, abortion did not go away. When, in 1962, Lader decided to write about abortion’s history and present-day consequences, he received little support at first. He was a seasoned magazine writer with two published books. Even so, he was surprised that—even with all his connections, even as a white man—it was impossibly difficult to find an editor willing to publish a pro-legal abortion article in a mainstream magazine.3 In letter after letter, he pitched a long-form article that would present his research on the impact of anti-abortion laws to show the harm they caused. No editor was willing. One wrote back that he had become “a little squeamish on the subject,” and he recommended Lader “find an editor with more guts.”4 Rather than give up on the project, Lader decided to dive deeper, and he wrote a book proposal. 
Abortion would be the first book to explore the history of the procedure in the US and make a case for repealing all abortion laws. It was a radical decision, and Lader knew it would open him up to attack. He also worried that it might kill his career as a magazine writer and ensure he was never again offered an assignment. Still, he decided to forge ahead. More than 12 publishers rejected the proposal before he found an editor willing to give him a book contract with the Bobbs-Merrill Company. In 1964—when Lader was knee-deep in abortion research—there were an estimated 1.1 million abortions a year in the US.5 Only 8,000 of those were legal “therapeutic” abortions, performed in hospitals and approved by a panel of doctors. In 1955, therapeutic abortions for psychiatric reasons made up 50 percent of all hospital abortions; by 1965, only 19 percent of abortions were given for psychiatric reasons.6 That means that almost all women sought the procedure illegally. Of those 8,000 legal abortions in hospitals, virtually none was given to Black and Puerto Rican women.7 As Lader wrote, there was one abortion law for the rich and one for the poor. In other words, only women with financial means could buy their way to a hospital abortion given under safe and legal conditions. In Abortion, he argued that legal abortions should be available outside the hospital setting—in freestanding abortion clinics—because hospitals too often treated patients as “pawns” and cared more about preserving their reputations than about preserving the health of their patients. He also saw how some hospitals still sterilized patients against their will as the price for agreeing to offer an abortion.8 Abortion not only presented a history of abortion in the US but also functioned as a handbook for people looking to connect with an abortion provider.
Lader was careful not to publish names, but he included a chart of how many skilled abortion providers he could locate in 30 states, including Washington, DC. He also explained the practicalities of traveling to Puerto Rico, Mexico, or Japan for an abortion. In another chapter, titled “The Underworld of Abortion,” he explored what happens when abortion is illegal and unregulated. He noted that the victims of illegal abortions are people who can’t afford to travel and so resort to getting abortions from unqualified people—sometimes doctors who had lost their license for alcoholism or other substance abuse issues. Because these abortions were underground, safety standards weren’t always followed; if unsterilized equipment was used, it increased the chance of infection. Some desperate women resorted to self-induced abortions, and although determining the numbers was difficult, one Kinsey study of Black women and of white and Black incarcerated women estimated that 30 percent of abortions were self-induced. These abortions were especially dangerous, often leading to lethal infections or infertility.9 For all these reasons, he argued for the complete repeal of all abortion laws across the US. He believed abortion should be no more regulated than any other routine medical procedure. When Lader became interested in abortion politics, there was a tepid abortion reform movement in New York City, led in part by the gynecologist/obstetrician Alan Guttmacher.10 Guttmacher participated in the first conference organized about abortion by Planned Parenthood’s medical director, Mary Calderone, in 1955. However, following Planned Parenthood’s stance on abortion in the 1950s, the conference focused on abortion to highlight the need for better contraception, and most of the participants supported anti-abortion laws.11 When proceedings from the conference were published in 1958, only Guttmacher’s contribution emphasized the need to liberalize those laws. 
However, even Guttmacher only supported abortions performed in hospitals, approved by a board of doctors, and limited to cases that merited it because of the woman’s mental or physical health.12 As head of obstetrics and gynecology at Mount Sinai Hospital in New York City, he created a panel of doctors to approve abortions, and the number of approved abortions grew modestly.13 In 1959, Guttmacher—with the help of his twin brother, Manfred—joined with the influential American Law Institute (ALI) to draft the first abortion reform law, which would allow for abortion in cases where continuing the pregnancy would affect the woman’s physical and mental health or if the pregnancy resulted from rape or incest. The law also mandated that two doctors had to approve the abortion. The law lowered the bar and created clearer guidelines for obtaining an abortion. Still, it maintained that abortions should only be performed in hospitals, and women had to submit their request for an abortion to a panel of doctors for approval. In his unpublished memoirs, Lader recounts considering what position to take on abortion. He knew he stood against the laws that imposed severe restrictions, making it virtually impossible for most American women to obtain a legal abortion. But as he set out to write on the topic, he consulted with his wife about how far to go in his argument for legalization. Was setting a limit after 20 weeks of pregnancy reasonable? Should abortions only be performed in hospitals? What about limiting the reasons under which an abortion is permissible? Should it be allowed under all circumstances or for no stated reason at all? Lader knew that arguing for a complete repeal of all abortion laws was radical, given the barely existent conversation about changing the laws at the time. 
After talking with his wife and remembering how he helped his ex-girlfriend’s friend obtain an illegal abortion, driving her to Pennsylvania, he decided that the terms of legal abortion should never be circumscribed by law.14 Lader subtitled the last chapter of Abortion “The Final Freedom” because he believed that “The ultimate freedom remains the right of every woman to legalized abortion.” He cites Margaret Sanger, the subject of his first book, who argued for birth control under similar terms: a woman cannot call herself free until she can control her own body. Sanger never argued for legal abortion because she naively believed that with accessible and effective birth control the need for it would be obviated. Lader understood that the natural extension of Sanger’s argument is not only legal but also affordable abortion with no strings attached. He presciently recognized—decades in advance—that if an abortion pill could be invented, it would radically transform abortion access. Lader was not a religious man, but he sought the help of clergy—from the Reverend Howard Moody to Rabbi Israel Margolies—who supported the repeal of all abortion laws. Recognizing the power of religious leaders to sway public opinion, he ended his book with the rabbi’s words: “Let us help build a world in which no human being enters life unwanted and unloved.”