

Forwards, not back

Medicine aims to return bodies to the state they were in before illness. But there’s a better way of thinking about health.

The ancient Greek physician Hippocrates defined health as a body in balance. The human body was a system of four coordinated humours (blood, black bile, yellow bile, phlegm), each of which had its own balance of qualities (hot or cold, wet or dry) connected to the four elements (fire, air, earth, water). The balance of these qualities and humours was inextricably linked to environmental conditions. Phlegm, for instance, was connected to the element of water, and had the qualities of cold and wet. Hippocrates recognised that when this bodily system was exposed to conditions that perturbed or exacerbated its humours, such as the cold and wet of a snowstorm, the system became unbalanced, in this case producing too much phlegm. His treatments for an abundance of phlegm focused on removing the excess humour and increasing the yellow bile (hot and dry) to bring the body back into balance. Today, we might identify such a condition as a respiratory disease. Hippocrates would not have identified the condition so locally. He would have considered it a condition of the whole body: ‘dis-ease’ was literally a system out of ease or balance.

Today we think less about the whole body as a complex system and more about its parts and sub-systems. An X-ray of the lungs shows congestion, so we assume the problem is in the lungs and treat for pneumonia, not worrying about other parts of the body unless they show their own symptoms. If a culture in the pathology lab shows a particular microbe at work, we prescribe the relevant antibiotics to combat that microbe rather than attempt to rebalance the system as a whole (despite the current trend to add ‘probiotics’ to the antibiotics approach). This move toward localised diagnoses began in the 18th century with the rise of morbid anatomy and the localisation of symptoms to parts.
The French physician René Laënnec’s invention of the stethoscope in 1816 allowed him to hear various abnormal sounds in the chest of a patient suffering respiratory distress. He could then carry out an autopsy after the patient died and correlate the visibly abnormal parts with the symptoms that he had detected, which led to powerful diagnostic tools. The result was the localisation of abnormalities to particular parts of the body, which led to the concept of localised diseases and the need for specialised treatments. Modern medicine celebrates the ability to diagnose problems based on localisation of symptoms. The systemic imbalance or ‘dis-ease’ of Hippocrates thus became a growing set of separate diseases associated with specific causes, dissociated from the notion of the body as a system. Following the localisation of diseases came the specialisation of treatments. These treatments typically rely on identifying the problem within a particular part or subsystem and fixing it, with the goal of getting the patient back to ‘normal’. If a physical part is damaged, we want to repair it so that it works again as closely as possible to the way it was working before. Or, if it cannot be fixed, perhaps because a limb was cut off or a kidney failed, then the tradition has been to replace it with something as close as possible that will work the same way: prosthetics for lost limbs, dialysis machines for failing kidneys. Despite vast differences in understandings of disease and treatments, models of health from Hippocrates to modern medicine have focused on reestablishing the same state as occurred before illness. Health is a concept imbued with a sense of stability; it is constituted by a body returning to a state that existed before disease or injury and the maintenance of this state. Stability is therefore understood as the maintenance or regeneration of a single, ‘healthy’ state of the body. 
Since this approach has saved vast numbers of lives, medicine applauds the invention of new analytical tools, procedures and treatments that advance an understanding of health as a return to a previous state. The sense of stability implicit in thinking about health leads to a picture of health as an outcome of regeneration: a body damaged by injury or disease is brought back to, or regenerates, a previous, ‘healthy’ state. But what if health isn’t simply a return to a previous state? If we think about health as part of a larger framework of considering organisms as complex systems, there is no ‘return’. Complex systems shift in response to environmental challenges; they adapt to their conditions in order to survive – and adaptation breeds change. Framing health in terms of regeneration, and then asking what it means to regenerate, allows us to prod our assumptions about health as a singular, predetermined outcome and rethink our values in sustaining complex systems in light of damage. That raises the question: what does regeneration mean? In 2017, we formed the McDonnell Initiative at the Marine Biological Laboratory in Woods Hole, Massachusetts, which brought together historians, philosophers and scientists to investigate what regeneration means across multiple scales of life. Regeneration is typically thought of as renewal, revitalisation, rejuvenation, repair, recovery and a lot of other re- words. The ‘re-ness’ suggests return to prior conditions. Together, the authors of this essay explored the concept in the history of biology, asking what regeneration has meant when applied to individual organisms, especially through the 20th century. Others in the McDonnell Initiative have been looking at what regeneration means for microbial communities, germline, nervous systems and ecosystems. In our book, What Is Regeneration? (2022), we laid out ideas of regeneration arising in the 18th century and looked closely at the contributions of early 20th-century biologists. 
Led by René-Antoine Ferchault de Réaumur and Abraham Trembley, 18th-century naturalists made meticulous and detailed observations of organisms responding to injury. In the first decades of the 18th century, Réaumur watched as crayfish limbs slowly regenerated, documenting different stages of regrowth that led to a complete replacement of the lost limb. A few decades later, Trembley cut hydra into various pieces, and discovered that they regrew all of their lost parts. Writing in 1989, Howard and Sylvia Lenhoff convincingly argued that Trembley saw the organisms he worked with as living systems – he looked at the interacting whole, its structure, function and behaviour. Réaumur and Trembley, and others at the time, saw regeneration as a process of repair and replacement, bringing organisms back to their previous states, akin to the notion of stability we see in medicine.

In 1901, the American biologist Thomas Hunt Morgan summarised what was known about regenerating organisms, from both his own studies and preceding explorations. Morgan experimented on a broad swath of organisms, building up evidence for his argument that scientific claims should be grounded in experimentation and devoid of theoretical speculation. Perhaps his most stunning insights came from planarian worms. After he cut them into various bits, he found that they could regenerate, producing a new tail, middle or even a fresh head, if needed. Some experiments even showed planarians regenerating a second head on the posterior of the old head after the body had been removed, a marked departure from the planarian’s previous state. What about hydra? Morgan turned them inside out, cut-and-pasted various parts together, and they kept growing and remained alive.
These were most definitely not in the same condition as they were before his experiments – their bodies, through the regenerative process, had transformed into new and curious systems. Or, as Morgan put it, ‘a change in one part takes place in relation to all other parts, and it is this interconnection of the parts that is one of the chief peculiarities of the organism.’ Morgan joined other researchers at the Marine Biological Laboratory in marvelling at the variety of responses these organisms could make when responding to changing contexts and conditions. One of those biologists was Jacques Loeb, who saw the potential to go further than just experimenting and describing what the organisms did. He imagined being able to engineer the results, to use the scientific knowledge about what organisms do during regeneration in order to control the results of this process and improve on what happened normally. The result would be a stable system that might in some senses be even more healthy than what was there before. The works of Morgan and Loeb offer definitive shifts from previous views of regeneration, and they show us two important things. First, they brought to fruition Trembley’s early views that regeneration is a process that occurs within a living system. A system is a group of parts that interact in a coordinated fashion. The resulting whole follows rules and principles, which allow some kind of communication and integration of the parts so that the entire system is responsive and regulated as well as stable. Stability, then, meant not a return to a previous state, but the ability of a system to maintain its coordination and communication among parts. Morgan and Loeb understood that an intervention on one part would, in some way, affect the entire organism, much in the same way that Hippocrates understood that disease affected the entire body. 
Second, Morgan and Loeb show us that regeneration does not simply mean a return to ‘normal’ like the 18th-century naturalists believed. Rather, they saw regeneration as an adaptive process. These men were not evolutionary biologists and did not think of adaptation in evolutionary terms of species adapting to a changing environment over geological periods of time. Instead, they emphasised adaptive responses of individual systems to stimuli affecting the individuals, a more proximal sense of adaptation. They did not think in terms of replacing particular localised cells or tissues or organs, but rather in terms of stimulating the organism to respond by repairing damage for the whole. They saw the individual organism as a complex system, and one that adapts to its changing environment by initiating repair. Skipping forward toward the 21st century, we find many life scientists embracing this notion of regeneration as a process in adaptive systems. Research on gastrointestinal microbial communities has shown that when the community is perturbed by antibiotics, it will reform, but the microbes will be different, and so will their community interactions. Experiments on lampreys indicate that, when spinal synapses regenerate following injury, the regenerated synapse is morphologically distinct from its pre-regenerative state, even if it performs much the same function. All of this research indicates that regeneration is, broadly conceived, the adaptive process by which a living system responds to a stimulus and maintains its stability. Adaptive, in this context, means that systems change through the process of regeneration in response to both internal and external conditions. Stability means that the system remains able to coordinate its parts. A system before regeneration will never be identical to a system after regeneration; there will always be changes in the parts or the relationships between them, even if they look or function the ‘same’. 
What appears to shape the results of regeneration is context. Whether that context is the microenvironment of a cell, a tissue, the entire body, the body within its environment or all of these, context is an essential part of regeneration within the complex system because it provides the fodder for the adaptive response. As the biologist Michael Levin and the philosopher Daniel Dennett pointed out in their Aeon essay: ‘The great progress has been mainly on drilling down to the molecular level, but the higher levels are actually not well-off … We know how to specify individual cell fates from stem cells, but we’re still far from being able to make complex organs on demand.’ Despite decades of enthusiasm for molecular biology and genetics, we can’t rely on knowledge of the molecular alone – we also need to be able to understand how molecules, cells, tissues and the body within an environment interact in order to move towards repair of individual organisms. And it is not just the higher levels of the individuals’ structure and function that shape adaptive responses to environmental changes, but also the context in which they exist. Because living systems regenerate by changing and adapting to both internal and external stimuli, understanding health as simply about stability in the sense of being fixed, as recovering a system to its condition before a stimulus, doesn’t quite make sense. The concept of regeneration embodies Heraclitus’ famous saying: ‘You cannot step in the same river twice.’ If we consider health as an outcome of regeneration, and regeneration is an adaptive process, we should lean into regeneration and adaptation when considering health. Research at the intersection of complex systems and health outcomes has shown that poor health states are closely linked with loss of adaptive responses within complex systems. We want to focus on the bigger conceptual picture and not get swept away in the details of thinking about the dynamics of complex systems. 
Extrapolating from lessons in regeneration biology to conceptions of health has many consequences, but we want to explore two, intertwined changes: how this adaptive view of regeneration alters our thinking about goals and outcomes of health, and what this means for our values related to health. If we’re not tied to the position that systems must return to a previous state, then we can explore alternatives. After all, if we don’t take the previous state of the system to act as our target outcome, we have to select what we want the outcome – or the range of possible acceptable outcomes – to be. If we want to intervene in a system, for instance chopping off the salamander’s limb, severing the lamprey’s synapses or manipulating planarian genes, our expectation cannot be fidelity to a pre-diseased or pre-injured state. Instead of focusing on replacement and repair towards reestablishing the state before illness, we can focus on what would be the ideal state for that goal. Is it necessary that the salamander regenerate a limb of the same size, or might it work just as well to have a smaller limb that takes less energy to regrow? Perhaps the new synapses that bypass the damaged site in the lamprey can be more efficient because they ‘know’ what path to follow? Or knocking out genes in planarians may allow other regulatory pathways to work and reveal connections among the genes, cells and resulting organism.

Now let’s consider this focus on choosing outcomes within human health. Despite more than 3,000 years of fashioning prostheses to replace missing parts, today around 20 per cent of people outfitted with a limb prosthesis end up abandoning their device, often because it doesn’t feel or work ‘right’. There has been a great deal of research in improving functionality of prosthetic devices, less in improving comfort, and far less in understanding psychosocial factors that contribute to device abandonment.
From our new perspective of health as an outcome of adaptive regeneration, we can begin to ask questions about the goals we want to achieve: do prosthetic devices need to mimic the form of the lost structure? If functionality is the primary concern of most prosthesis wearers, are there other ways to achieve functionality? And are prostheses always the best solution? While this is not the place to address the vast and richly informative literature on ableism and embracing different abilities, it is worth noting that thinking about regeneration as promoting a range of different approaches to the health of a complex adaptive system offers parallel challenges to the assumption that all organisms need to be the same to be considered whole and healthy. This brings us to the second shift in perspective about health: values. The question of ‘what constitutes an ideal state’ is one of values. After all, if health is possible within an array of outcomes, we have to select what we want the outcome to be. Whatever we select will be based on what we deem most important – that is, what we value. Hippocrates realised this when he emphasised health as a balance of elements within the whole system. If asked ‘rebalance for what?’ he would have said ‘health’. We can ask an updated version of his question concerning regeneration: regeneration for what? When intervening in a system to promote health, should we privilege enhanced function? If so, which functions do we want to optimise? Picturing health (and regeneration) through this lens of rebalancing the health of the whole has a bearing on the transhumanism movement, which asks similar questions. In both cases, we also need to recognise who is asking and answering the question: the patient? The clinician? The biomedical researcher? Each of these stakeholders may have a different sense of what health is. 
Being clear about our goals is a starting point, and reflecting on the terms we use is another major step toward opening our minds to different – and perhaps better – ways to think about health, the health of organisms including humans. We do not have the answers, but we have offered suggestions to think in terms of complex systems that are adaptive to change and therefore themselves change over time. Plus, we invite robust discussion about what our goals of health should be if we embrace the idea that it is not always best simply to go back to what we had before. Health is more complicated than that.

GOATReads: Philosophy

Science + religion

The science-versus-religion opposition is a barrier to thought. Each one is a gift, rather than a threat, to the other.

To riff on the opening lines of Steven Shapin’s book The Scientific Revolution (1996), there is no such thing as a science-religion conflict, and this is an essay about it. It is not, however, another rebuttal of the ‘conflict narrative’ – there is already an abundance of good, recent writing in that vein from historians, sociologists and philosophers as well as scientists themselves. Readers still under the misapprehension that the history of science can be accurately characterised by a continuous struggle to escape from the shackles of religious oppression into a sunny secular upland of free thought (loudly expressed by a few scientists but no historians) can consult Peter Harrison’s masterly book The Territories of Science and Religion (2015), or dip into Ronald Numbers’s delightful edited volume Galileo Goes to Jail and Other Myths about Science and Religion (2009). Likewise, assumptions that theological and scientific methodologies and truth-claims are necessarily in philosophical or rational conflict might be challenged by Alister McGrath’s book The Territories of Human Reason (2019) or Andrew Torrance and Thomas McCall’s edited Knowing Creation (2018). The late-Victorian origin of the ‘alternative history’ of unavoidable conflict is fascinating in its own right, but also damaging in that it has multiplied through so much public and educational discourse in the 20th century in both secular and religious communities. That is the topic of a new and fascinating study by the historian James Ungureanu, Science, Religion, and the Protestant Tradition (2019). Finally, the concomitant assumption that scientists must, by logical force, adopt non-theistic worldviews is roundly rebutted by recent and global social science, such as Elaine Ecklund’s major survey, also published in a new book, Secularity and Science (2019).
All well and good – so the history, philosophy and sociology of science and religion are richer and more interesting than the media-tales and high-school stories of opposition we were all brought up on. It seems a good time to ask the ‘so what?’ questions, however, especially since there has been less work in that direction. If Islamic, Jewish and Christian theologies were demonstrably central in the construction of our current scientific methodologies, for example, then what might such a reassessment imply for fruitful development of the role that science plays in our modern world? In what ways might religious communities support science especially under the shadow of a ‘post-truth’ political order? What implications and resources might a rethink of science and religion offer for the anguished science-educational discussion on both sides of the Atlantic, and for the emerging international discussions on ‘science-literacy’? I want to explore here directions in which we could take those consequential questions. Three perspectives will suggest lines of new resources for thinking: the critical tools offered by the discipline of theology itself (even in an entirely secular context), a reappraisal of ancient and premodern texts, and a new way of looking at the unanswered questions and predicament of some postmodern philosophy and sociology. I’ll finish by suggesting how these in turn suggest new configurations of religious communities in regard to science and technology. The humble conjunction ‘and’ does much more work in framing discussions of ‘theology and science’ than at first apparent. It tacitly assumes that its referents belong to the same category (‘red’ and ‘blue’), implying a limited overlap between them (‘north’ and ‘south’), and it might already bias the discussion into oppositional mode (‘liberal’ and ‘conservative’). Yet both science and theology resist boundaries – each has something to say about everything. 
Other conjunctions are possible that do much greater justice to the history and philosophy of science, and also to the cultural narratives of theology. A strong candidate is ‘of’, when the appropriate question now becomes: ‘What is a theology of science?’ and its complement, ‘What is a science of theology?’ A ‘theology of…’ delivers a narrative of teleology, a story of purpose. A ‘theology of science’ will describe, within the religious narrative of one or more traditions, what the work of science is for. There have been examples of the ‘theology of…’ genre addressing, for example, music – see Jeremy Begbie’s Theology, Music and Time (2000) – and art – see Nicholas Wolterstorff’s Art in Action (1997). Note that working through a teleology of a cultural art by calling on theological resources does not imply a personal commitment to that theology – it might simply respond to a need for academic thinking about purpose. For example, Begbie explores the role that music plays in accommodating human experience to time, while Wolterstorff discovers a responsibility toward the visual aesthetics of public spaces. In both cases, we find that theology has retained a set of critical tools that address the essential human experience of purpose, value and ethics in regard to a capacity or endeavour. Intriguingly, it appears that some of the social frustrations that science now experiences result from missing, inadequate or even damaging cultural narratives of science. Absence of a narrative that delineates what science is for leaves it open to hijacking by personal or corporate sectarian interests alone, such as the purely economic framings of much government policy. It also muddies educational waters, resulting in an over-instrumental approach to science formation.
I have elsewhere attempted to tease out a longer argument for what a ‘theology of science’ might look like, but even a summary must begin with examples of the fresh (though ancient) sources that a late-modern theological project of this kind requires. The cue for a first wellspring of raw material comes from the neo-Kantian Berlin philosopher Susan Neiman. In a remarkable essay, she urges that Western philosophy acknowledge, for a number of reasons, a second foundational source alongside Plato – that of the Biblical Book of Job. The ancient Semitic text offers a matchless starting point for a narratology of the human relationship of the mind, and the experience of human suffering, with the material world. Long recognised as a masterpiece of ancient literature, Job has attracted and perplexed scholars in equal measures for centuries, and is still a vibrant field of study. David Clines, a leading and lifelong scholar of the text, calls Job ‘the most intense book theologically and intellectually of the Old Testament’. Inspiring commentators across vistas of centuries and philosophies, from Basil the Great to Emmanuel Levinas, its relevance to a theology of science is immediately apparent from the poetic ‘Lord’s Answer’ to Job’s complaints late in the book:

Where were you when I founded the earth?
Tell me, if you have insight.
Who fixed its dimensions? Surely you know! …
Have you entered the storehouses of the snow?
Or have you seen the arsenals of the hail?

The writer develops material from the core creation narrative in Hebrew wisdom poetry – as found in Psalms, Proverbs and Prophets – that speaks of creation through ‘ordering’, as well as bounding and setting foundations.
The questing survey next sweeps over the animal kingdom, then finishes with a celebrated ‘de-centralising’ text that places humans at the periphery of the world, looking on in wonder and terror at the ‘other’ – the great beasts Behemoth and Leviathan. The text is an ancient recognition of the unpredictable aspects of the world: the whirlwind, the earthquake, the flood, unknown great beasts. In today’s terms, we have in the Lord’s Answer to Job a foundational framing for the primary questions of the fields we now call cosmology, geology, meteorology, astronomy, zoology… We recognise an ancient and questioning view into nature unsurpassed in its astute attention to detail and sensibility towards the tensions of humanity in confrontation with materiality. The call to a questioning relationship of the mind from this ancient and enigmatic source feeds questions of purpose in the human engagement with nature from a cultural depth that a restriction to contemporary discourse does not touch. Drawing on historical sources is helpful in another way. The philosophy of every age contains its tacit assumptions, taken as evident so not critically examined. A project on the human purpose for science that draws on theological thinking might, in this light, draw on writing from periods when this was an academically developed topic, such as the scientific renaissances of the 13th and 17th centuries. Both saw considerable scientific progress (such as, respectively, the development of geometric optics to explain the rainbow phenomenon, and the establishment of heliocentricity). Furthermore, both periods, while perfectly distinguishing ‘natural philosophy’ from theology, worked in an intellectual atmosphere that encouraged a fluidity of thought between them. An instructive and insightful thinker from the first is the polymath Robert Grosseteste. 
Master to the Oxford Franciscans in the 1220s, and Bishop of Lincoln from 1235 to his death in 1253, Grosseteste wrote in highly mathematical ways about light, colour, sound and the heavens. He drew on the earlier Arab transmission of and commentaries on Aristotle, yet developed many topics well beyond the legacy of the ancient philosopher (he was the first, for example, to identify the phenomenon of refraction to be responsible for rainbows). He also brought a developed Christian philosophy to bear upon the reawakening of natural philosophy in Europe, whose programmes of astronomy, mechanics and above all optics would lead to early modern science. In his Commentary on the Posterior Analytics (Aristotle’s most detailed exposition of his scientific method), Grosseteste places a sophisticated theological philosophy of science within an overarching Christian narrative of Creation, Fall and Redemption. Employing an ancient metaphor for the effect of the Fall on the higher intellectual powers as a ‘lulling to sleep’, he maintains that the lower faculties, including critically the senses, are less affected by fallen human nature than the higher. So, re-illumination must start there:

Since sense perception, the weakest of all human powers, apprehending only corruptible individual things, survives, imagination stands, memory stands, and finally understanding, which is the noblest of human powers capable of apprehending the incorruptible, universal, first essences, stands!

Human re-engagement with the external world through the senses, recovering a potential knowledge of it, becomes a participation in the theological project of healing. Furthermore, the reason that this is possible is because this relationship with the created world is also the nexus at which human seeking is met by divine illumination.
The old idea that there is something incomplete, damaged or ‘out of joint’ in the human relationship with materiality (itself drawing on traditions such as Job), and that the human ability to engage a question-based and rational investigation of the physical world constitutes a step towards a reversal of it, represents a strand of continuity between medieval and early modern thinking. Francis Bacon’s theologically motivated framing of the new ‘experimental philosophy’ in the 17th century takes (though not explicitly) Grosseteste’s framing as its starting point. As framed in his Novum Organum, the Biblical and medieval tradition that sense data are more reliable than those from reason or imagination constitutes his foundation for the ‘experimental method’. The rise of experimentation in science as we now know it is itself a counterintuitive turn, in spite of the hindsight-fuelled criticism of ancient, renaissance and medieval natural philosophers for their failure to adopt it. Yet the notion that one could learn anything general about the workings of nature by acts as specific and as artificial as those constituting an experiment was not at all evident, even after the foundation of the Royal Society. The 17th-century philosopher Margaret Cavendish was among the clearest of critics in her Observations upon Experimental Philosophy (1668):

For as much as a natural man differs from an artificial statue or picture of a man, so much differs a natural effect from an artificial…

Paradoxically perhaps, it was the theologically informed imagination of the medieval and early modern teleology of science that motivated the counterintuitive step that won against Cavendish’s critique.
Much of ‘postmodern’ philosophical thinking and its antecedents through the 20th century appear at best to have no contact with science at all, and at worst to strike at the very root-assumptions on which natural science is built, such as the existence of a real world, and the human ability to speak representationally of it. The occasional explicit skirmishes in the 1990s ‘science wars’ between philosophers and scientists (such as the ‘Sokal-affair’ and the subsequent public acrimony between the physicist Alan Sokal and the philosopher Jacques Derrida) have suggested an irreconcilable conflict. A superficial evaluation might conclude that the charges of ‘intellectual imposture’ and ‘uncritical naivety’ levied from either side are simply the millennial manifestation of the earlier ‘two cultures’ conflict of F R Leavis and C P Snow, between the late-modern divided intellectual world of the sciences and the humanities. Yet in light of the long and theologically informed perspective on the story that we have sketched, the relationship of science to the major postmodern philosophical themes looks rather different. Søren Kierkegaard and Albert Camus wrote of the ‘absurd’ – a gulf between the human quest for meaning and its absence in the world. Levinas and Jean-Paul Sartre wrote of the ‘nausea’ that arises from a human confrontation with sheer, basic existence. Derrida and Ferdinand de Saussure framed the human predicament of desire to represent the unrepresentable as différance. Hannah Arendt introduces The Human Condition (1958) with a meditation on the iconic value of human spaceflight, and concludes that the history of modernism has been a turning away from the world that has increased its inhospitality, so that we are suffering from ‘world alienation’. 
The first modern articulation of what these thinkers have in common, an irreconcilable aspect of the human condition in respect of the world, comes from Immanuel Kant’s Critique of Judgment (1790): ‘Between the realm of the natural concept, as the sensible, and the realm of the concept of freedom, as the supersensible, there is a great gulf fixed, so that it is not possible to pass from the former to the latter by means of the theoretical employment of reason.’ Kant’s recognition that more than reason alone is required for human re-engagement with the world is echoed by George Steiner. Real Presences (1989), his short but plangent lament over late-modern literary disengagement with reference and meaning, looks from predicament to possible solution: ‘Only art can go some way towards making accessible, towards waking into some measure of communicability, the sheer inhuman otherness of matter…’ Steiner’s relational language is full of religious resonance – for re-ligio is simply at source the re-connection of the broken. Yet, once we are prepared to situate science within the same relationship to the humanities as enjoyed by the arts, then it also fits rather snugly into a framing of ‘making accessible the sheer inhuman otherness of matter’. What else, on reflection, does science do? Although both theology and philosophy suffer frequent accusations of irrelevance, on this point of brokenness and confusion in the relationship of humans to the world, current public debate on crucial science and technology indicates that both strands of thought are on the mark. Climate change, vaccination, artificial intelligence – these and other topics are marked in the quality of public and political discourse by anything but enlightenment values.
The philosopher Jean-Pierre Dupuy, commenting in 2010 on a Europe-wide project using narrative analysis of public debates around nanotechnology, shows that they draw instead on both ancient and modern ‘narratives of despair’, creating an undertow to any discussion of ‘troubled technologies’ that, if unrecognised, renders effective public consultation impossible. The research team labelled the narratives: (1) Be careful what you wish for – the narrative of desire; (2) Pandora’s Box – the narrative of evil and hope; (3) Messing with nature – the narrative of the sacred; (4) Kept in the dark – the narrative of alienation; and (5) The rich get richer and the poor get poorer – the narrative of exploitation. These dark and alienated stories turn up again and again below the surface of public framings of science, yet they drive opinion and policy. The continuously complex case of genetically modified organisms is another example. None of these underlying and framing stories draws on the theological resources within the history of science itself, but all illustrate the absurd, the alienation and the irreconcilability of postmodern thinking. Small wonder, perhaps, that Bruno Latour, writing in 2007 on environmentalism, revisits the narrative of Pandora’s Box, showing that the modernist hope of controlling nature through technology is dashed on the rocks of the same increasingly deep and problematic entanglement with the world that prevents our withdrawal from it. But Latour then makes a surprising move: he calls for a re-examination of the connection between mastery, technology and theology as a route out of the environmental impasse. What forms might an answer to Latour’s call take? One is simply the strong yet gentle repeating of truth to power that a confessional voice for science, and evidence-based thinking, can have when it rests on deep foundations of a theology that understands science as a gift rather than a threat.
One reason that Katharine Hayhoe, the Texan climate scientist, is such a powerful advocate in the United States for taking climate change seriously is that she is able to work explicitly through a theological argument for environmental care with those who resonate with it, but whose ideological commitments are impervious to secular voices. There are also grassroots-level examples that demonstrate how religious communities can support a healthy lay engagement with science. Local movements can dissolve some of the alienation and fear that characterise science for many people. In 2010, a group of local churches in Leeds in the UK decided to hold a community science festival that encouraged people to share their own and their families’ stories, together with the objects that went with them (from an ancient telescope to a circuit board from an early colour TV set constructed by a resident’s grandfather). A diverse movement under the general title ‘Equipping Christian Leadership in an Age of Science’ in the UK has discovered a natural empathy for science as a creative gift, rather than a threat to belief, within local churches. At a national level, the past five years have seen a remarkable project engaging senior church leaders in the UK with current scientific issues and their researchers. In a country with an established Church, it is essential that its voices in the national political process are scientifically informed and connected. Workshop participants, including scientists with no religious background or practice, have found the combination of science, theology and community leadership to be uniquely powerful in resourcing discussions of ethical ways forward, on issues from fracking to artificial intelligence.
A relational narrative for science that speaks to the need to reconcile the human with the material, and that draws on ancient wisdom, contributes to the construction of new pathways to a healthier public discourse, and to an interdisciplinary educational project that is faithful to the story of human engagement with the apparently chaotic, inhuman materiality of nature, yet one whose future must be negotiated alongside our own. Without new thinking on ‘science and religion’, we risk forfeiting an essential source of wisdom today.

In the glow of the candle

In a dark library lit by a single lamp, four men, a young woman and three children crowd around a circular dais. They are staring at a clockwork contraption called an orrery, housed within giant bands of metal that suggest a celestial sphere. Below, tiny planets rotate around the Sun, orbited by pearl moons. Concentric plates allow the planets to move according to their relative speeds. The lecturer in his striking red gown is pointing to Jupiter’s moons, while a younger man in a purple coat and gold striped waistcoat assiduously takes notes. His notetaking implies the event isn’t run-of-the-mill, but something special and worth recording. A small lamp has taken the place of the central sun in the orrery and throws light upon everyone’s faces. We can only see it as a reflection below the elbow of the silhouetted youth in the foreground – a wick burning in a jar of oil. The lamplight adds an eager gleam to the eyes of the inquisitive young children and illuminates the contemplative gaze of the young man on the right. It highlights the edges of the young woman’s frilled bonnet and the cheekbones of the adolescent who leans over the edge of the orrery in front of us. It is his shoulder that we can look over without feeling as if we are intruding. The lamp illuminates all our faces as our minds are enlightened by the science we observe in action. Today, A Philosopher Giving that Lecture on the Orrery, in which a Lamp is Put in the Place of the Sun (1766) by Joseph Wright of Derby (1734-97) is rightly considered a masterpiece. When it was first exhibited at the annual exhibition of the Society of Artists in Spring Gardens, London, in 1766, reviewers singled it out for particular praise, saying it was ‘exceeding fine’. 
It attracted more attention than any other work on display, inspiring one reviewer to break out in rhyming couplets: ‘Without a rival let this “Wright” be known,/For this amazing province is his own.’ Wright’s Orrery was a huge statement from a young and ambitious artist. So why, then, was he overlooked when the Royal Academy of Arts was founded by George III just two years later? When Wright had been lauded as a ‘genius’ at that same year’s Society of Artists exhibition? Why was he not a founder member of the new august institution? The Royal Academy was founded in December 1768 by George III at the behest of a small number of artists and architects. Membership was limited to 40, meaning more than 160 members of the Society of Artists did not make the cut. The Society had formed only eight years earlier, in 1760, as a way for Britain’s leading artists to meet, converse, study and exhibit together in London. It’s hard to imagine but there were no regular public exhibitions of art before this time. Art was shown to discerning patrons in artists’ studios and viewed in private collections in aristocratic homes. The Society changed this and, for a shilling (around £5 today), anyone could scrutinise the finest paintings of the year. At its peak, it had more than 200 members, including the country’s leading landscape painter Richard Wilson, the ‘grand manner’ portraitist Joshua Reynolds and the architect William Chambers, who would later design the Great Room of the Royal Academy. It also included younger artists such as the American painter Benjamin West and the history painter Nathaniel Dance, as well as Wright himself and his friends John Hamilton Mortimer and Ozias Humphry. These young men were finding their feet by exhibiting in this new annual mixed exhibition. 
Wright was 30 when he was elected a member of the Society in May 1765, the year his first ‘candlelight’ painting, Three Persons Viewing the Gladiator by Candlelight (1765), was exhibited to favourable reviews. While he was known as a portraitist in his home town of Derby, where he lived and worked, it was his ambitious candlelight paintings that significantly raised his profile in the capital. Wright’s friends in London were young and opinionated artists, who were members of the Howdalian Society, but his circle in Derby was older and more scientific. John Whitehurst, a horologist and geologist, lived at 22 Iron Gate, a few doors away from Wright at number 28, while Peter Perez Burdett, a cartographer and son of an instrument maker, lived in Full Street. Burdett was so familiar with Wright that he often borrowed money from him (and never seemed to pay him back). Many of Whitehurst’s friends – including the doctor Erasmus Darwin (grandfather of Charles Darwin) and the potter Josiah Wedgwood – were associated with the Lunar Society, a group of industrialists and natural philosophers who met regularly in Birmingham. The Lunar Society gathered on the Monday closest to the full moon. Their choice of date is often interpreted as travel related, the full moon offering the best light for journeys home after lengthy meetings. But these inquisitive men (and they were all men) chose the name of their society with care. Most were members of the Royal Society, the prestigious scientific organisation founded in London in 1660, and were fascinated by astronomy, engineering and the latest developments in chemistry and physics. The Moon was linked to Earth’s tides, and the race was on to solve the problem of measuring longitude at sea (not least because there was a £20,000 prize on offer for the first person to do this accurately). Meanwhile, the orbits of Jupiter’s moons were already used to measure longitude on land. 
Wright would allude to his allegiance to the Lunar Society from 1768 onwards by painting a full moon visible through windows and open doors in many of his paintings, including An Experiment on a Bird in the Air Pump (1768), The Blacksmith’s Shop (1771) and The Alchymist (1771). Wright drew upon the Lunar Society’s investigations for his candlelight paintings of scientific lectures and experiments. The Orrery is the most masterful of these (matched only by An Experiment on a Bird in the Air Pump). It is a resolutely modern painting, more than 2 metres wide, on a scale that competed with classical history paintings; one that drew upon the current fascination for scientific learning, and predated West’s revolutionary modern history paintings. Although it was rooted in Wright’s portrait practice, the Orrery was much more besides, because it was simultaneously a manifestation of the sublime in nature, of the insignificance of Man when contemplating the Universe. Lighting his painting from within also showed a deft understanding of art-historical antecedents. Caravaggio’s theatrical compositions often highlighted faces with a shaft of light, leaving other figures in darkness. His international followers were known as the Caravaggisti, and it is the work of the 17th-century Dutch artist Gerard van Honthorst that inspired Wright to use a centralised light source such as a lamp or candle. Nothing quite like the Orrery had been seen before in England. Wright’s training in London and subsequent practice as a portrait painter in Derby ensured the faces of those clustered around the planetary machine were credible and nuanced, and he used friends and collectors as models. The painting quickly sold to Washington Shirley, 5th Earl Ferrers, for the impressive sum of £210 (£22,000 today). This was up to eight times the amount he received for his portraits, a price that reflected the painting’s ambitious narrative content and large scale. 
Wright’s friend Burdett appears in the Orrery, his tricorne hat under his arm as he ardently takes notes. His own interests were, in fact, not astronomical but terrestrial – he won a £100 prize for his accurate survey of Derbyshire. The lecturer discussing the planets was possibly based on Whitehurst, who was researching the formation of Earth at the time, while Ferrers, Burdett’s patron and the purchaser of the Orrery, stands on the far right, admiring his protégé’s diligent notetaking. Ferrers may, in fact, have commissioned this painting – his nephew, Lawrence Rowland Ferrers, is the young boy contemplating Saturn. Ferrers owned an orrery, but Wright may have first seen one in action at public lectures given in Derby by James Ferguson (a friend of Whitehurst’s) in 1762. Ferguson’s orrery was designed as an astronomical teaching tool for naval academies, with a brass ball representing the Sun in the centre. In some models, this brass ball could be replaced by a lamp so that the fundamentals of eclipses could be demonstrated. Ferrers, a former naval officer, had been elected to the Royal Society in 1761 following his observations of the transit of Venus, and the ability of Wright’s orrery to show eclipses (of great interest to Ferrers) suggests that Wright chose both the equipment and his sitters with great care. We can see that Wright was well connected in London and Derby, and reviews show that he was well respected, both for his portraiture and for his new candlelight paintings. So why was he overlooked by the founders of the Royal Academy? Was it his depiction of science that didn’t chime with their aspirations, or were there other factors at play? Many of the artists who were founder members of the Royal Academy had formerly been directors of the Society of Artists, including West, Dance and Wilson. They had resigned en masse after a group of young members, including Wright’s friend Mortimer, demanded reform.
These young members were sick of the monopoly the old guard held on the director posts, and voted to change the election procedure, wanting those who ran the Society to be the artists who were deemed best on merit, not on seniority or the number of artists they could cajole or even bribe to vote for them. So, from its very inception, the Royal Academy was a partisan place, founded in retaliation for the changes forced through at the Society. Those directors who resigned were appointed to executive posts at the new, rival Academy, and a line in the sand was drawn – to be an Academician, you had to withdraw from all other British art societies. There were grand appointments of overseas artists, including the Swiss painter Angelica Kauffman (one of only two women on the initial roster), and Reynolds was persuaded to become the first president. But the founder member places were largely taken up by second-rate painters and jobbing artists – Reynolds’s loyal drapery painter Peter Toms, for example. Samuel Wale, another member, was described by a contemporary as ‘not one of the first artists of the age’. By contrast, none of the agitators for change at the Society of Artists were offered membership, not even Wright. Wright had never stood against the old guard, but his outspoken friends in the Howdalian Society ensured that the entire group and its associates were barred from joining the Academy. If the reason for Wright’s exclusion was largely political, there may have been other factors at play too. The Academy was fashioned on academies on the Continent, in Florence, Milan, Paris and Rome. They followed a classical model, championing ancient history paintings, and religious and mythological scenes over portraits, landscapes and still lifes. The latter were deemed lower down the pecking order when it came to being assigned the best spots in annual exhibitions.
There was no provision at all for artists who painted contemporary scenes that aspired to rise above genre painting and domestic conversation pieces. West cannily waited until he became an Academician before he switched from ancient to contemporary historic scenes and, even then, he was seen as a radical. Wright, by comparison, was painting fashionable science lectures that grappled with giant subjects such as our place in the Universe, with life and death from a scientific perspective. There was no pigeonhole for him; no clear place in the hierarchy. His focus on science might well have seemed threatening to traditional religious painting, where only God could play dice. Wright had made mechanical contraptions as a boy and, in later years, created his own version of the camera obscura to study light and shadow. He painted solar systems and chemical reactions, and experimented with unusual light sources. Two fellow artists who similarly pursued scientific routes were also excluded from the Academy: the equestrian painter George Stubbs and the botanical collagist Mary Delany. Stubbs had spent 18 months holed up in a barn in Lincolnshire, in the mid-1750s, dissecting an embalmed horse to better understand its anatomical structure. (His painting of the racehorse Whistlejacket is now one of the most iconic works in the National Gallery in London.) Delany painstakingly reproduced the botanical likenesses of nearly a thousand flowers, creating intricate collages that the naturalist Joseph Banks claimed were the only ones he would trust to ‘describe botanically any plant without the least fear of committing an error’. In contrast to the classical idealisation of nature advocated by the Academy, Wright, Stubbs and Delany studied nature for themselves, using the latest scientific discoveries to further their work. Wright’s choice of subject matter was not only contemporary, but bordered on the heretical. 
In his candlelight paintings of the orrery, the air pump and the alchemist at work, he not only employed dramatic lighting and plunging shadows to heighten the drama, but the scenes themselves dealt in mortality and the insignificance of man in relation to the natural world, as well as suggesting that the scientist was now usurping the divine creator. In the Orrery, a mixed assembly of people, representing the various ages of man, contemplate the solar system from above. Earth is a tiny sphere, barely visible on the right side of the clockwork machine, orbited by one desultory moon. When considered next to the larger spheres of Jupiter and Saturn, with their generous dusting of orbiting bodies, it looks small and insignificant. Man is inconsequential from this perspective, invisible from space. Immanuel Kant, in his Universal Natural History and Theory of the Heavens (1755), compared the ‘infinite multitude of worlds and systems which fill the extension of the Milky Way’ to ‘the Earth, as a grain of sand, [that] is scarcely perceived’. The Anglo-Irish philosopher Edmund Burke concluded that the starry heaven ‘never fails to excite an idea of grandeur’. Burke explored the grandeur of the Universe in his treatise A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful (1757) and, long before Caspar David Friedrich and J M W Turner popularised the sublime in art, Wright explored its visual power. The sublime is the sensation we feel when looking at a thundering waterfall or sheer cliff face, or when we contemplate the Universe – a fear, an awe so overwhelming that it affects us in a bodily way. For Burke, not only could natural phenomena on Earth be sublime, but so could the properties of light and dark. He wrote: Mere light is too common a thing to make a strong impression on the mind, and without a strong impression nothing can be sublime.
But such a light as that of the sun, immediately exerted on the eye, as it overpowers the sense, is a very great idea. Burke’s theory may have informed viewers who contemplated the rotating brass orreries, which taught navigation by day and the sublime contemplation of Man’s insignificance by night. Wright intensifies the viewing experience in his painting, using a small central lamp to transform the library into a sea of shadows, a dark nothingness in which the brass wheels turn and planets rotate. The scale of the painting and the silhouetted figure over whose shoulder we can peer both contribute to the sensation that we are experiencing this phenomenon for ourselves. In the Orrery, Wright paints a manmade model of the solar system. Increasingly, in his candlelight paintings, he shows the mastery of mankind over nature. In the Air Pump, the lecturer places a live cockatoo – an exotic pet – in a glass sphere, and removes the air to show that nothing can live in a vacuum. Travelling lecturers in fact used something called a ‘lungs-glass’ that replicated this without the need for animal sacrifice. That would be too cruel, ‘too shocking’, acknowledged Ferguson. Wright goes for the dramatic approach, forcing the young girls to hug each other for support as the older one turns away in horror. This isn’t the natural world, but man’s domination of it – nature is imprisoned in the glass and banished beyond the windowpane, where a young boy fiddles with the blind, about to shut out the natural light source of the Moon (full, of course) to maximise the drama. The cockatoo resembles the white dove of the Holy Spirit, and the lecturer, by extension, appears to play God. In 1773, Wright journeyed to Rome, where he stayed for two years. He bathed in its warm, diffused light, and sketched classical ruins akin to those that had formed the setting for The Blacksmith’s Shop. 
He never returned to his candlelight scenes, to scientific or industrial subjects, preferring to paint figures in Italianate landscapes and portraits in his studios in Bath and Derby. Wright’s early patron, Ferrers, died in 1778, and the 6th earl (Ferrers’s younger brother) did not care for the Orrery, despite his own son Lawrence Rowland featuring in it. The Orrery was offered for sale at Christie and Ansell, but didn’t attract enough interest to sell, and was bought in by the auction house for £84, a fraction of the original sale price. Subsequent sales saw it changing hands for just £50 before it was bought by public subscription in 1884 for the newly opened Derby Museum and Art Gallery. This was to be a turning point in the fortunes of the Orrery. Today it is greatly admired (alongside the Air Pump), and is Wright’s most celebrated painting. While the paintings of the founder Academicians Toms and Wale have been lost to history, Wright has a suite of galleries dedicated to his work at the Derby Museum and Art Gallery, and is regularly the subject of major exhibitions. So, while the Orrery may not have led to early success at the Royal Academy, it ensured that Wright is now seen as one of the 18th century’s most assured and celebrated British artists.

GOATReads: Sociology

The Art We Do Together: “Art Worlds” 40th Anniversary

One way to think about intellectual life is as a musical composition where each new book adds to the chorus by bringing in the rhythms, tonalities, and hooks that give shape to the overall melody. Every now and then, however, a book comes that changes the tune altogether. Howard Becker’s Art Worlds, which now celebrates its 40th birthday, is one of those books. It is hardly an overstatement to say that the publication of Art Worlds in 1982 changed forever how sociologists study art. Art Worlds created a seismic change. It demonstrated that the sociological study of art need not be engulfed in trying to solve highfalutin aesthetic questions (e.g., What is art? How do we distinguish it from non-art? What is an author?) and could instead focus on studying the collective practices through which artworks are realized. Art Worlds offered a sharp contrast to the scholarship that had dominated the study of art up to that point. The two decades before its publication had been characterized by an all-out assault on the central ideas of modern aesthetics. In France, for example, the idea of the author had been demolished by poststructuralist authors like Roland Barthes and Michel Foucault (there’s an irony somewhere there). Meanwhile, in Italy, Umberto Eco celebrated the iconoclastic emergence of popular culture and how it upended the old hierarchies that thinkers of an earlier generation, most notably Theodor Adorno, had sought to defend. In England, Stuart Hall and Raymond Williams, along with the rest of the Birmingham School, laid bare the inner workings of cultural hegemony, while John Berger introduced British public television’s mass audiences to a new way of seeing art as an apparatus of symbolic domination. 
Even the heavily fortified citadels of art history and aesthetics were not immune; schools of thought such as the institutional theory of art, feminist art history, and Marxist aesthetics mounted internal rebellions against long-held ideals about the purity and universality of art. Meanwhile, art itself was going through similar convulsions, with movements like pop art, land art, performance art, Fluxus, feminist art, and institutional critique, not to mention the myriad of art collectives in Latin America and Eastern Europe, defying the modern canon and institutions that had defined art up to that point. Sociology, Becker’s intellectual home, was “all in” on the mutiny against modern aesthetics. The 1960s and ’70s were a time in which trenchant critique, bordering on philistinism, dominated the sociology of art. Text after text adopted what we might call a Scooby-Doo research model: taking a seemingly good character (art) and proceeding to unmask it as the bad guy (ideology). Thus, while authors such as Arnold Hauser and Pierre Francastel sought to expose seemingly inert, formal elements of artworks as projections propelled by “real social forces,” others like Raymonde Moulin revealed that art was not a pure and autonomous field of activity but an activity guided by market forces. Still others, like Pierre Bourdieu, revealed that love for art was little more than a bourgeois conceit for social reproduction. In a time in which doing a sociology of art seemed to require deploying sociology against art, the genius of Art Worlds was radically simple: it just studied art as something that people do together. In so doing, Becker took a potentially controversial idea—that art is a form of collective action—and presented it in a disarmingly common-sense way. If we study how art is produced, Becker argued, we soon realize that this process is rarely, if ever, an individual one.
Artists always depend on others to obtain materials to produce their works, as well as to exhibit, play, publish, and distribute them. Art, it follows, is a process that requires collaboration and coordination among different people. In this sense, it is no different from any other social activity, which means that we need to study it as we do any other type of social process: by focusing on what people do. This meant studying not only artists but also critics, curators, editors, art materials suppliers, administrators, and audiences, to name just a few, along with the standards, conventions, and technologies that allowed them to coordinate their actions and produce an artwork. Becker opened an entirely different empirical research program in the sociological study of art, one that moved the attention from ontological and epistemological questions that had dominated traditional aesthetics, such as “What is art?” and “How do we know and experience art?”, to the pragmatic question “How is art done?” Thus, facing a painting like, say, Picasso’s Guernica, Becker invited us not simply to focus on decoding its symbolism and formal composition or on trying to decipher Picasso’s artistic intent and reveal its underlying meaning, but to ask ourselves how such an artwork could be done. This approach employed a new arsenal of empirical queries, such as: What were the networks of collaboration and cooperation that helped Picasso paint this work? What materials did he use, where did he get them, who provided them? What conventions did he follow (or break)? What institutions supported him? In short, what kind of “world” and collective effort had to be in place so that Picasso could create and display this masterpiece? As I was preparing this short essay, I was curious to see what contemporary readers make of this now 40-year-old book. So I indulged in that most peculiar ritual of our age: reading online reviews. 
The overwhelming majority were positive, effusively praising the clarity and richness of Becker’s descriptions. The negative were almost unanimous in their criticism: Art Worlds lacks any “real” theory and is filled with trivially obvious observations about how the art world works. Such criticisms did not surprise me, as they mirror those I have heard leveled in graduate seminars over the years. Students in search of a “theory fix” typically fault Art Worlds for being a perfect example of why sociology has a bad press as a “science of the obvious.” These types of critiques forget that the obvious is often what is most easily missed—and dismissed. If Art Worlds was and remains important, it is precisely because it reminds us of the obvious: that art is a collective practice. Somehow, this seemingly platitudinous observation had been missing from most art analysis, thereby reducing art history to a narrative about individuals and their heroic feats of creativity. By inviting us to remember that art is always a form of collective action, Art Worlds widened our attention to include all those agents, practices, and technologies that had typically remained invisible and barely made it into the hegemonic narratives of art, but without which art would be simply impossible. In so doing, Art Worlds reminded those studying art to be humble in their descriptions and pay attention to the perfectly banal, yet crucial facts that compose the social worlds we inhabit. Photography needs film, digital files, and cameras; consequently, you really cannot understand the transformation of photography without understanding the corporations that produce photographic material that shapes what artists can and cannot do. 
Sometimes the absence of an artist’s work in a museum is not necessarily for ideological or aesthetic reasons, but simply because the artworks were “too big to go through the museum’s doors and too heavy for its floors to support.” But unlike the mad king in Borges’s “On Exactitude in Science” story, who sought to create a map that was a perfect representation of his empire, Becker offered in Art Worlds an incomplete map. This inconclusiveness was not a bug but a carefully designed feature of the sociological tradition he had inherited from his mentor, Everett Hughes. The tradition was firmly anchored in the belief that any attempt at providing a definitive account, let alone a conclusive theoretical model, of any social world is fated to fail because these worlds are continually changing. This is why Art Worlds, to the desperation of some, does not offer any theoretical model. It is also why Art Worlds contains no pretensions to having provided a final account of what art worlds are or how they work. Instead, the book is constructed around carefully curated, open-ended, and inconclusive lists. Paragraph after paragraph, we are told that “sometimes” artists do this, while “other times” they do that, and yet “other times” they do something else. The result is a book that reads not as a closed treatise or model but as a compendium of researchable empirical questions that invite the reader to continue exploring them. If there is something that defines Art Worlds, it is this dogmatic antidogmatism—a complete refusal to have the last word. This antidogmatism and open-endedness are precisely what make Art Worlds a fresh and necessary read even today. Unlike classics now sunk by the weight of their theoretical models, Art Worlds still reads as an object lesson for anyone writing in academesque for a living. 
At a moment in which oppositional and antagonistic writing seems to dominate the conversation, Art Worlds never tries to convince, demonstrate, or conclude; it simply invites us into a conversation. The book does not require anything from the reader, such as prior knowledge of controversies in some subfield or being well-versed in concepts, theories, or debates. Art Worlds offers a level playing field on which the author never imposes himself upon the reader, because there is no battle to be won, just a conversation to be had. This is, for me, the indelible value of this book as a perennial reminder that writing can take the form of an open-ended invitation to think together. Source of the article

GOATReads: Psychology

The Hybrid Tipping Zone

4 steps to widen the closing window of our cognitive independence. Imagine explaining to someone raised with smartphones what it felt like to be genuinely lost—that disorientation before GPS when you navigated using landmarks and intuition. That uncertainty, followed by the satisfaction of finding your way, represents something we're rapidly losing: the full spectrum of human cognitive experience.

The Psychology of Convergent Change

We're navigating the Hybrid Tipping Zone: As the last generation with lived experience of pre-AI decision-making, we possess episodic memory of unmediated thinking. These aren't nostalgic recollections but crucial psychological data points on human capability that younger generations may never acquire. This loss puts humanity at risk of agency decay—part of an "ABCD" of AI issues encompassing agency decay, bond erosion, climate conundrum, and division of society.

The ABCD Framework: Mental Dynamics

We've moved from AI experimentation to integration, opening the door to reliance—the last phase before addiction, when the absence of digital crutches leads to paralysis.

Agency Atrophy: Research on automation bias shows humans systematically over-rely on algorithmic recommendations, even when our own judgment would be superior. This isn't laziness; it's a predictable cognitive bias that intensifies with exposure. Frequent GPS users show measurable atrophy in hippocampal activity, the brain region responsible for spatial memory. We're literally watching our brains reduce internal capacity when external support is available.

Social Scaffolding Dissolution: AI increasingly mediates human interaction, reducing opportunities for authentic social learning. Our capacity for empathy, emotional regulation, and conflict resolution develops through direct human contact. When algorithms curate our feeds and suggest responses, we lose essential social development opportunities. 
Environmental Disconnection: One Gemini query uses nearly 10 times the energy of a Google search, contributing to data center consumption that may double by 2026. Yet we don't "see" these consequences, creating psychological distance between our actions and planetary impact and making moral disengagement easy.

Identity Fragmentation: When AI generates content indistinguishable from human creation, our sense of unique human value becomes confused. What happens to an artist's identity when ChatGPT generates in seconds what would take them weeks of study and effort—especially when the result is superior?

The Focusing Illusion of AI Benefits

The focusing illusion is our tendency to overweight factors capturing our attention while underestimating less visible effects. Our AI relationship exhibits this perfectly: Immediate convenience gains attention while gradual cognitive changes remain invisible. We notice that AI helps us write emails faster but miss that we're losing our language-construction faculties. We appreciate AI research assistance but don't recognize our declining capacity to synthesize information. We enjoy AI-generated entertainment but overlook our reduced tolerance for the ambiguity that characterizes authentic human creativity.

Values-Driven Choices

The path forward requires deliberate choices conditioned by intentional self-regulation—making choices aligned with long-term values rather than immediate impulses. Applied to AI, this means asking not just "What can this technology do?" but "What kind of person do I want to become through using this technology?" Beyond "What can AI do?" we must ask "What should I use AI for?" If we desire autonomous choice and human flourishing with planetary dignity, we must make present choices that set us on that trajectory.

Enacting ProSocial AI: 4 Steps to Take Today

ProSocial AI refers to systems tailored, trained, tested, and targeted to bring out the best in people and planet. 
At the personal level, four practical steps emerge:

Think: Clarifying Your Aspirations. Begin with honest self-reflection about core values. What cognitive capacities do you want to maintain? Which aspects of thinking feel most authentically "you"? Create cognitive baselines by engaging in activities without AI assistance—writing longhand, navigating without GPS, solving problems through conversation rather than search. And practice metacognition: When using AI tools, observe your cognitive state before and after. Are you learning or outsourcing? Growing or atrophying?

Talk: Authentic Human Connection. Engage in conversations about AI's psychological impact, especially with young people who may never have experienced pre-AI cognition. Share your experiences of cognitive change candidly. When has AI enhanced your thinking? When has it made you feel less capable? Practice discussions that deliberately avoid AI assistance—no fact-checking via search, no AI-suggested responses, just human-to-human dialogue with all its uncertainty and discovery.

Teach: Modeling Hybrid Intelligence. Share pre-AI cognitive strategies with others, especially youth. Teach them to use physical maps, engage in sustained thinking without external input, and tolerate not immediately knowing answers. Model AI integration in ways that enhance rather than replace human creativity. Show how to use AI as a thinking partner rather than a substitute, maintaining agency while leveraging artificial capability.

Transform: Making a Meaningful Difference. Apply insights within your sphere of influence. If you're a parent, create AI-free spaces for children's cognitive development. If you're an educator, design learning experiences that strengthen human capacities alongside AI literacy. In professional contexts, advocate for AI implementation that preserves human agency rather than simply maximizing efficiency. 
Choose personal AI usage patterns aligned with your values about human development—perhaps using AI for routine tasks while preserving human effort for activities that develop valued capabilities.

Our Closing Window

We occupy a unique historical position: We have cognitive leftovers from unmediated human intelligence. Our children will grow up in an AI-mediated world. Whether they develop robust human capabilities alongside artificial assistance or become dependent on technological thinking substitutes depends largely on choices we make now. The Hybrid Tipping Zone concerns the future of human consciousness. The question isn't whether AI will change how we think, but whether we'll be conscious participants in that change. We can still shape AI's role in human development—if we choose growth over convenience, agency over automation, and long-term flourishing over short-term efficiency. The cognitive patterns we establish now will become either the foundation for future human potential or the boundaries constraining it. Source of the article

Money for Nothing: Finance and the End of Culture

If you want to know about contemporary popular art (television, music, movies, perhaps a few books), you’ve got to know the names. I don’t mean the names of artists, writers, musicians, actors, or directors. I don’t mean the names of the big studios, publishers, and record companies either. I mean names behind the names: the names of equity, the money faucets, the money pools, the money men. It’s bleak fun to list them; money has so many names now. You might be familiar with a few of these: take Bain Capital, which gave us Mitt Romney in 2012 and the takeover of Warner Music in 2004. Or there’s the Carlyle Group, a nest for figures like George H. W. Bush. And you know BlackRock’s in the house. But there are many others, and often they have appellations from a Pynchon novel: Pershing Square, Archegos Capital Management, State Street, Vanguard, KKR, Colony Capital, Apollo Global Management, Silver Lake, AlpInvest, Saban Capital Group, Providence Equity Partners, Pluribus Capital, Red Zone Capital Management, RatPac-Dune Entertainment (that’s Steven Mnuchin, the Trump 1.0 cabinet secretary), the ominously named “Disney Accelerator.” This profusion might make it sound like the media and culture industries are heterogeneous, varied, and diverse, pumping out more innovative content every day—who can ever keep up with new shows?—but they aren’t. Instead, there’s a lot of capital collecting in a very few hands. That’s because the past several decades have seen rampant consolidation via mergers and acquisitions across creative fields, all of it backed by rivers of Wall Street equity. In visual media, for example, there are just five major players (Comcast, Disney, Sony, Paramount, and Warner Bros). The music industry, meanwhile, has the big three labels (Universal, Sony, Warner). 
Silicon Valley, of course, is up to its neck in media production and distribution via platforms like Hulu and Netflix, and has just five prime stakeholders (Meta, Alphabet/Google, Amazon, Apple, and Microsoft). The story of contemporary popular art, then, is centralization. That is, it’s the same pattern we have seen everywhere during what Robert Brenner calls “the long downturn”: the ongoing post-1973 period of declining industrial productivity, mushrooming debt, and—most important for our story—rising financialization of the economy, the turn toward the trading floors of London and Manhattan, with all the cultural inequality and decay this entails. The same class of people stripping journalism and higher education for parts, then, are also fully in control of the apparatus used to produce narrative and visual art. And, as Andrew deWaard, an assistant professor of media and popular culture at UC San Diego, shows in his shocking new study Derivative Media: How Wall Street Devours Culture, this same class of people are constructing “a more poisonous, extractive media system,” where “production cultures are increasingly constrained by extraction cultures.” The first half of Derivative Media historicizes and theorizes the rise of high-finance art, then the latter chapters turn to specific media properties. One of the most exciting elements of the book is how deWaard swings between a granular critique of texts and “distant,” quantitative, macro-scale research into vast corpuses of textual information collected in databases like Genius (for song lyrics) and IMDb (film and television). 
Using cataloging software, he examines vast swathes of lyric composition and intertextual reference for patterns in artistic production, in everything from Jay-Z tracks to 30 Rock episodes, “on a scale that would not be possible without computation” or “the database form” or rigorous “data visualization.” If you want charts of industry mergers, market trends, and the rate at which various types of cars get name-checked by rappers, deWaard has you covered, as he plots the connections between what a Marxist might call the economic base and the cultural superstructure. If you’d like to know how individual linguistic and aesthetic signifiers (Mercedes, Patrón, Star Wars, Apple, GE, McDonald’s) get deployed in specific texts, deWaard does criticism on that scale too. This is a scholar who personally catalogued all the references in 30 Rock, perhaps the most referential show ever made, then visualized his findings in dense, colorful graphics, all while listening to a lot of Billboard rap. DeWaard’s work provoked a lot of uneasy encounters between art I like and the realities of culture capitalism. Your favorite work might be stained with blood. It probably smells like cash. The result? On this and much else, deWaard is direct: “Power has been concentrated within financial institutions and is expressed using financial instruments and financial engineering strategies,” and “it is obscured behind byzantine shell corporations, complex mathematics, and an army of mostly men in expensive suits. It’s a convoluted story, but it can be told simply: the money pools in one location.” DeWaard doesn’t put it quite this way, but I would: What emerges from his book is a host of (yet more) reasons everyone should be a Marxist. Thus, Derivative Media is not about a lost golden age in Hollywood or music or whatever. DeWaard is clear about this: Under capitalism, these have always been for-profit ventures with industrial conditions. 
And whether you’re talking Nina Simone or The Matrix or all those Renaissance paintings the Medici family had done, art and commerce have always existed in what we might charitably call an uneasy tension. Indeed, for theorist Max Haiven, money and art form one of modernity’s core dialectics. They are not “mutually opposed mythical forces” but “mutually encrypting structures”: “Art,” Haiven claims in Art After Money, Money After Art, “cannot be corrupted by capitalism because it has always already been derivative of capitalism.” Yet even he concedes that “art is never completely or fully incorporated: what allows it to generate its saleable contemporaneity is precisely the small latitude of incomplete freedom, obstreperousness, antagonism, and radicality it is afforded.” The key word here being “saleable”—even radical work can be ingested by capital and used to comfort or enrich “people who,” deWaard observes, “treat culture as just another input in their cash-flow-extraction strategies.” And, as he tells it, capital has figured out new ways to penetrate artistic production and to metastasize through the circuits of distribution and consumption of artworks: fracturing once-coherent human systems in order to wring out new profits. Everyone’s brain gets melted. Critique dies. Numbed consumption wins. We pay good money for this. The culture industries have been thoroughly colonized by complex financial players (mainly hedge funds and private-equity groups) and their warped logics of sociality. Wall Street has the copyrights, the masters, the deeds. What might seem like a firehose of new art—fresh indie artists appear on Spotify all the time, A24 distributes a lot of movies, the New York Times still has book lists—is, in reality, “a Russian nesting doll of conglomeration and investment.” It gets worse. 
Beyond the sectoral infestation of artistic fields by capital, the takeover of film production and song catalogs and radio stations, Wall Street has found ways to financialize art at the constitutive level of the text itself. It’s not just the industries that are financialized; works qua works are themselves securitized investment vehicles, too. In a song or show, for example, every nod to alcohol or cars can lead to further monetization opportunities. Any memorable words, ironic references, witty allusions, striking images, and catchy songs can all be broken up, “securitized,” and redistributed, across platforms owned by a few people. Indeed, for the masters of the universe, there is no art, only content, only intellectual property. DeWaard argues that “financialized texts become sites of capital formation” alone, and that “culture has a subservient role in the financial system, which sees it as merely another numerical value to trade. The stock exchange,” he continues, “has been embedded within the media text.” The result? Mostly “flooding the zone with shit,” with “lots of content, but little creativity or criticism.” Thus, both the culture industries and the artworks they disgorge are “derivative,” in the double sense. On the one hand, they are literally financialized; on the other, they are increasingly boring, toothless, and antiradical. There is no such thing as a “naturally occurring media economy,” deWaard contends. “There is only political economy, a system of social relations constituted through law and institutional behaviors, one that is currently arranged hierarchically and could just as easily be arranged differently. The one we have is driven by power, not exchange of goods and services.” In other words, to understand art, you must talk power, which means talking money. It is tempting to narrate this as just another branch in the gruesome story of neoliberalism, the post-’70s marketization and privatization of the world. 
And to be fair, deregulatory monsters like Reagan and Clinton are part of this story. But by drawing on a tradition of Marxist historical scholarship, deWaard asks us to think more widely of capitalism’s “longue durée” over the past 600 years, where, if we look right, we see the cyclical reemergence of finance capital. Again and again, capitalist systems mature, wither, and morph into new structures and nodes of power. It happened to Renaissance Italy, to the Dutch trading empire (think of those Old Masters), to imperial Britain, and now to the United States, with its loosening grip on hegemony. DeWaard concurs with historian Samuel Chambers, quoting the latter’s contention that “there is no such thing as ‘the economy’” in the sense of a naturally occurring free market, only “an overlapping, uneven, discontinuous, and non-bounded domain” that is simultaneously political, financial, social, and cultural.1 This extends to the media economy, which deWaard argues is not a balanced system of supply and demand where artists satisfy the existential needs of consumer-customers who “decide what is popular.” Thanks to deregulation of the finance industry in the 1980s and ’90s, hedge funds and private-equity groups and investment banks were suddenly off the leash and eager to direct “the economy” to where they could accumulate the most.2 Finance, deWaard says, is a machine controlled by the powerful; it changes and directs economic trends and “is not a picture or representation of some external phenomenon we call the marketplace; rather, finance has become the powerful engine that drives the marketplace in certain directions. 
The destination is power, wealth, and inequality.” Thus, rather than a seemingly organic creation and exchange of goods, what we see everywhere is simply the flexing of elite power: when corporations offer shareholder dividends and stock buybacks; when CEO compensation balloons; when mergers permit “cartel-like behavior” by too-big-to-fail conglomerates; when musicians get paid pennies for streams, while consumers are trapped within a “rentier logic” that privileges access over ownership; when even our best art house films are funded by capricious billionaires; when hedge funds proliferate (11,000 operating today) with “no incentive to produce value, only extract it,” capturing Rolling Stone and Artforum along the way; when private-equity vultures leverage debt to buy companies and then strip the copper from their walls, leaving “bankruptcies, layoffs, and unpaid bills.” All of these instances of elite power deWaard labels “vehicles for upward redistribution,” and he goes on to quote Henry Hill’s Mafia credo from Goodfellas (1990): “Fuck you, pay me.” Artists, musicians, actors, writers, and other media creatives, along with their audiences, get fucked. Financiers get paid. Wall Street is versed in literary theory, even if this “cultural cartel enacting mass theft of creativity” doesn’t know it, or care. To begin: Financialization entails abstracting away from the attributes of an existing thing: as deWaard has it, “derivatives are an instrument to hedge or speculate on risk, basically a wager on the fluctuation of the cost of money, currencies, assets, or the relationships among them … Their value is derived from the performance of an underlying entity” (original italics) that is not itself traded. 
Further, this “logic of fluid conversion,” whereby every coherent asset can be abstracted and “unbundled” into fungible instruments, is “a natural fit for transnational media conglomerates with holdings in film, television, music, the popular press, video games, online media, theme parks, and other cultural properties.” Artworks are not cohesive objects but rather collections of glittering fragments that can be packaged, traded, resold. In 1980, around the same time the US economy was getting financialized, the literary critic Julia Kristeva theorized “intertextuality,” the concept that no artistic text is legible as a singular bounded item, but is rather always “a mosaic of quotations” from other texts, contexts, and viewers/readers. In deWaard’s telling there is now “a concrete bankability to the once-radical concept.” When art becomes IP, each privately owned and “radically open text offers vast intertextual and intermedial opportunities for potential profit,” and his subsequent description of Kristeva’s nightmare is worth quoting at length: Derivative media operationalizes intertextuality. On one end of the spectrum, figurative devices such as allusion, parody, satire, and homage create constellations of textual reference and influence; on the other, commercial devices such as product placement, brand integration, branded entertainment, and native advertising deliver consumer influence. The latter typically involves a direct transfer of money, while the former often enacts an indirect exchange of cultural capital. The key to this exchange is the interplay between these two forms of “derivation,” the textual and the financial. 
In this “interconnected referential economy” every text has been “internally financialized,” and there is no outside we can escape to: “From the content of the securitized cultural text, to the fragmented audience that engages with it, to the precarious labor that produces it, to the overpaid management that organizes it, to the networks that circulate it, to the indebted corporations that catalog it, to the systems of accumulation that facilitate it—financial capital now fuels the pop-music hit machine and the Hollywood dream factory.” Therefore, in this new gilded age, the same apparatus flooding the market with AI slop and Marvel sequels and Frasier reboots is simultaneously impoverishing artists. Jay-Z put it memorably: “I’m not a businessman, / I’m a business, man.” Much as Jay-Z prides himself on artistry, all that lyrical firepower on display in albums like Reasonable Doubt (1996) and The Blueprint (2001) was for money, not laurels. “Hip-hop,” argues the critic Greg Tate, “is the perverse logic of capitalism pursued by an artform.” DeWaard concurs, and Derivative Media maps out a “lyrical marketplace” that “merges the formal and the financial,” because hip-hop was born in the 1970s, just like this round of financialization. As such, “its form, style, and structure have come to explicitly exhibit properties of its economic context … hip hop is not just subject to business processes, it is itself consciously a business process.” MCs are “musician-speculators” whose every flow and word are embedded in a system where “lyrics are rendered fungible assets and securitized into a speculative instrument.” Many rappers like to boast about expensive vehicles and top-shelf booze, and deWaard uses computational analysis of Genius’s lyric archive to plot patterns. For example, Cristal fell off after the 1990s, while Patrón had a big moment in the recession aughts; Mercedes, meanwhile, has been a constant since the Clinton era. 
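DeWaard’s actual tooling and data are not reproduced here, but the kind of brand-mention frequency analysis described above—counting how often a brand is name-checked in a lyric corpus, year by year—can be sketched in a few lines of Python. The corpus, brand list, and counts below are hypothetical stand-ins for illustration, not data drawn from Genius or from Derivative Media:

```python
from collections import Counter, defaultdict
import re

# Hypothetical toy corpus of (year, lyric-snippet) pairs -- invented
# stand-ins, not real song lyrics or real Genius data.
CORPUS = [
    (1998, "pop the Cristal, Cristal on ice"),
    (1999, "Cristal in the club tonight"),
    (2007, "shots of Patron all night long"),
    (2008, "Patron and a Mercedes outside"),
    (2008, "Mercedes Benz, nothing less"),
]

BRANDS = ["cristal", "patron", "mercedes"]

def brand_mentions_by_year(corpus, brands):
    """Count case-insensitive whole-word brand mentions per year."""
    counts = defaultdict(Counter)
    for year, lyric in corpus:
        # Tokenize on letters/apostrophes so punctuation doesn't
        # glue a brand name to a comma or period.
        tokens = re.findall(r"[a-z']+", lyric.lower())
        for brand in brands:
            counts[year][brand] += tokens.count(brand)
    return counts

counts = brand_mentions_by_year(CORPUS, BRANDS)
for year in sorted(counts):
    print(year, dict(counts[year]))
```

A real study would need the full archive, normalization by the number of songs released each year, and fuzzy matching for spelling variants ("Benz", "Patrón"), but the core operation—reducing lyrics to a time series of brand frequencies that can then be plotted—is this simple.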
DeWaard notes economic connections like Courvoisier sales growing after Busta Rhymes’s 2002 single “Pass the Courvoisier Part II.” But he really zeroes in on Jay-Z’s relationship to Armand de Brignac champagne, which the mogul rechristened “Ace of Spades” in the late 2000s, shortly after investing in and just before becoming a majority owner of the brand. Seemingly organic references (“I used to drink Cristal, the muh’fucker’s racist / So I switched gold bottles on to that Spade shit”) are actually market moves. In derivative modernity every text is a package of assets. The poet as petit-bourgeois business owner or, if she dreams big, a mogul. But you can pack even more intertextuality into an episode of TV, and deWaard turns to 30 Rock: the pinnacle of the self-aware, allusive style of postmodern comedy that you also see in The Simpsons, South Park, and Family Guy. In what deWaard calls “securitized sitcoms”—where every joke about a real or fake brand is a chance to monetize and extend value—“the full extent of the fiscal exchange is concealed, but with added formal mechanisms such as parody, satire, and irony used as camouflage.” On financialized TV, “referential jokes as a form are rendered a potential asset class”: far less crude than midcentury product placement, far slyer and more sophisticated, congratulating the viewer on getting the joke, which is itself “a comedic shroud for the constant onslaught of brands and corporate texts.” This is particularly true of Tina Fey’s creation, the in-house jester of NBCUniversal, which besides being brilliantly funny is an extended rumination on corporatized media. DeWaard calls it “industrial self-theorizing both ironic and lucrative.” In other words, the rapid-fire, poly-referential writing on 30 Rock is tongue-in-cheek humor that also pays well, that both “satisfies and subverts a corporate mandate” while congratulating the viewer’s intelligence. Make a satirical reference to Bed Bath & Beyond or Star Wars or Siri? 
That’s still intertextual placement, woven into the cultural object at the level of plot. Mock NBC, the company that employs everyone on 30 Rock? No problem. It all pays dividends for NBCUniversal’s parent company, Comcast. Even the sharpest jokes on the show are ultimately situated within, not against, the status quo. Like The Simpsons, The Office, or Peep Show, 30 Rock is fundamentally conservative, in the sense of suggesting very little that would threaten the accumulation of capital or the current hierarchies of American society. Liz Lemon is awful, but she’s relatable, and nothing in the dense texture of incredibly good jokes on the show threatens anyone’s ability to make money in the real world. When she finds the perfect pair of jeans at a hip Brooklyn spot that regrettably turns out to be owned by a Halliburton slave-labor scheme, the joke lands. We get it: Halliburton, which helped loot and destroy Iraq in the 2000s (this episode aired in 2010), is evil. But nothing further. Congratulations. We are doing the standard bourgeois move of bearing witness to something terrible without a hint of how to change the conditions that produce it, only through the prism of genius joke writing. It becomes a little sickening after a while, because “the winking is constant and the nudging becomes a sharp elbow.” No show told jokes faster or packed more references into its diegetic universe, and after a certain point it’s like being with a very witty person, which is to say exhausting. Indeed, the show’s universe is so parodic, so reflexive, so referential with respect to the detritus of capitalism, that narrative-aesthetic interpretation isn’t sufficient—again, we must think like Wall Street. “Textual analysis typically involves asking questions about a text’s form, composition, and style,” deWaard points out. But “increasingly, that means asking: Was this formal component for sale? Might it be for sale in the future? 
What are the market relations and pricing mechanisms among these components?” There is no mise-en-scène, only what he calls “mise-en-synergy” (synergy being a favorite term of cynical managers like Liz’s boss, Jack): “the multi-platform relationship between audiovisual style, meaning, and economics.” Don’t worry if you feel queasy—you still get the joke, like the other discerning consumers. So if even a celebrated satire is domesticated and toothless—just a high-grade marketing ploy with jokes—where can one turn for radical art? You might think highbrow cinema would be the place to go, the stuff that isn’t Marvel Extended Universe sludge: the work of the Martin Scorseses and Greta Gerwigs and Bong Joon-hos and Alfonso Cuaróns of the world. Yet finance rules here, too. The overwhelming majority of art house production and distribution companies are what deWaard calls “billionaire boutiques” funded by the largesse of a “plutocratic patron,” often the child of someone who made their money screwing over others. There’s Big Beach (Little Miss Sunshine, Away We Go, Our Idiot Brother, The Farewell, and others), founded with a fortune Marc Turtletaub’s father, Alan, made giving out subprime mortgages prior to 2008. There’s Oracle magnate Larry Ellison’s child Megan, who runs Annapurna Pictures (Zero Dark Thirty, Phantom Thread, Her, If Beale Street Could Talk). And A24, distributing glossy, sensitive films (Ex Machina, Lady Bird, Hereditary, Uncut Gems, The Lobster)? Unfortunately, the company is a direct outgrowth of the Guggenheim mining fortune, built in the pits and quarries of the global South, and it is now run through the asset-management firm Guggenheim Partners, where A24’s cofounder Daniel Katz was once the head of film finance. Most of the time, art house film is just reputation laundering or “corrupt philanthropy” or both. For the ownership class, nothing is fundamentally about art. Good luck getting radical art past these gatekeepers. 
Challenges to the artistic, cultural, or political status quo are defanged in financialized Hollywood, since that status quo benefits Hollywood’s own elite. Even at the boutique houses, “social justice ideals are compromised, neutralized, and suppressed within the framework of plutocratic patronage,” and even the edgier films offer only “low-level, technocratic fixes” or appeals to individual nobility. Your film had better not suggest systemic change or critique capitalism. Thus we get beautiful, mournful, liberal (at best) art—“a sort of calculating complicity”—and “the overall picture is one of mountains of wealth casting a shadow on arthouse theaters playing esoteric indie films.” And if one of those little movies does well financially, maybe you have just found your next Marvel director: Indies serve as “the research and development wing of Hollywood, as many of these directors are subsumed into blockbuster film and television.” Art continues to get made—that’s what human beings do—but capital devours it. For me, the grimmest part of Derivative Media is the fact that deWaard isn’t just writing about trash. It would be one thing to take aim at reality TV, which everyone knows sucks; but 30 Rock is one of the best sitcoms ever made. There’s plenty of chart garbage in modern music; but at his peak Jay-Z was one of the greatest MCs who ever lived. A24 distributes a lot of incredible films that are the opposite of superhero schlock; it’s just all backed by the toil of workers in the global South. The point of a blockbuster film isn’t cinema per se, profitable as these movies are. 
It’s to branch into other monetizable media, especially video-game voids that “attempt to build unique universes in which a broad range of [a company’s] intellectual property is not just exploited strategically, but offered in a more immersive manner.” I certainly experienced a dark night of the soul reading the list of Disney game properties: You’ve got Kingdom Hearts, the epochal multi-character hit spread over 13 editions, but also Disney Infinity, Disney Princess, Disney Magical World, Disney Dreamlight Valley, Disney Friends, Disney Ultimate, Disney Art Academy, Disney Learning, Epic Mickey, Disney Sing It, Dance Dance Revolution Disney Mix, Disney Twisted-Wonderland, Disney Magic Kingdoms, Disney Emoji Blitz, Disney Heroes: Battle Mode, Disney Fantasy Online, and the Disney Mirrorverse. It’s all right there on the internet; your students and children are probably playing these, each gaming universe a money and data hole, seductive as any drug from science fiction. In Immediacy, or, The Style of Too Late Capitalism (2024), Marxist critic Anna Kornbluh identifies “a master category for making sense of twenty-first-century cultural production” that is narrative or quasi-narrative, everything from fiction to theory to film to television: She calls it “immediacy.” Under a planetary regime premised on speed, instantaneity, urgent circulation, and fluid logistics (a “petrodepression hellscape”), art comes to resemble the economy: intense, immersive, fast, liquid, a “pulsing effulgence [that] purveys itself as spontaneous and free, pure vibe.” Art has a harder time with representation, intersubjectivity, duration, and critical thought (and audiences come to desire less of these). 
Kornbluh writes, “Fluid, smooth, fast circulation, whether of oil or information, fuels contemporary capitalism’s extremization of its eternal systemic pretext: things are produced for the purpose of being exchanged; money supersedes its role as mediator of exchange to become its end point.” And like deWaard, Kornbluh emphasizes the decay of narrative art forms, their general turn toward a slop of sequels, prequels, reboots, franchises. “What matters in [such] a universe is its endless replication,” she argues, “the distention without innovation of new ideas, new characters, new universes. And by ‘matters,’ one means ‘sells’: the twenty highest-grossing Hollywood films since 2010 were all sequels, eighteen of them issued by Disney” (original italics). And this makes sense, for though deWaard targets good art, he certainly doesn’t ignore trash altogether. After all, the ultimate finance pipe dream is what he calls the “brandscape blockbuster,” the IP or “metaverse” movie, “derivative media at the scale of the world.” All this starts in the late Reagan era with a text I love, Who Framed Roger Rabbit (1988), a film that, thanks to Steven Spielberg’s many phone calls, blends 70 references from works belonging to multiple companies, including Disney, Turner, Universal, and Warner Bros. Mickey Mouse and Bugs Bunny even appear together for the only time in big-screen history. Imagine the possibilities! In fact, plenty of people did. Sharks noticed that Roger Rabbit made over $350,000,000 at the box office. 
Subsequent, fully articulated IP franchises—which bleed from screen to screen, platform to platform, device to device—are the ultimate in the capitalist enclosure of media, which duplicates the real-world destruction of “anything resembling public or communal space that isn’t monetizable.” “With strategic licensing agreements and merchandising deals,” deWaard writes, “these brandscape blockbusters seek to develop a fantasyland made in the image of the financialized marketplace, reflecting our dystopian reality back to us as a playful fantasy.” Texts like Wreck-It Ralph, Avengers, and The LEGO Movie appeal to kids, of course, but there’s also “a drip feed of dopamine for older viewers playing spot-the-reference.” It’s democratic: Everyone’s brain gets melted. Critique dies. Numbed consumption wins. We pay good money for this. Reading Derivative Media’s account of the popular-art industries, now subject to finance instead of Fordism, I kept thinking about T. S. Eliot’s “The Waste Land,” which had its centenary three years ago. Part of this pertains to Eliot’s subject matter: Published in 1922 amid endless wars and just after a brutal pandemic, his modernist assemblage of allusions and texts imagines the West as a hellish necropolis, where civilization lies in fragments: “Unreal city, / Under the brown fog of a winter dawn, / A crowd flowed over London Bridge, so many, / I had not thought death had undone so many.” No need to get into Eliot’s royalist / Anglo-Catholic politics or to unpack all the allusions in just this selection (Baudelaire and Dante, basically), let alone the whole 434-line poem. What matters, here, is that his vision of the world resonates a century later, as we look down the barrel of hot-planet fascism, while flowing crowds stare at their phones. Content aside, the form of “The Waste Land” matters too. 
For Eliot and other modernists, narrative as well as visual and lyric form could be fractured, rearranged, requoted, rendered multilateral and nonchronological. A million monographs have been written about this, and people go on arguing about whether it even makes sense to separate ironic modernist intertextuality from ironic postmodern collage. Eliot and David Foster Wallace both used endnotes. Anyway, Webster’s defines modernism as … There’s a rotten irony here, though: The reference farmers at Roc-A-Fella Records and 30 Rockefeller Plaza and “song management” firms like Hipgnosis are extractive parodists of Eliot (who also had a day job as a banker). For his part, though, the poet hoped to save something meaningful from the wreckage of war and illness: to shore some fragments against his ruin. Today, it’s about capital, real money: the stuff that you can spend or hoard or both. Eliot saw shards and pulled some together, because he felt spiritually compelled to. Financiers see fragments and yearn to mix them into new markets, pricing them ever more dearly. Thus, we live inside Kristeva’s and Eliot’s black mirror, this extractive network of knowing allusions and winking irony, culture articulated not by melancholy artists but by Wall Street ghouls. The landscape of social experience remains as atomized and alienating in 2025 as it was in 1922, with some new aesthetics and technologies, but with the same monster at its back (imperial capitalism). If anything, the post-1970s reemergence of high finance puts Eliot’s grim view of the West—we’d now say the global North—on steroids. Now, the broken landscape is securitized; all the pieces have tradeable value, at least for a few people. We endure the same existential problems, plus the specter of biospheric collapse, plus capital colonizing more of interior and expressive life.
In Civilization and Capitalism, Fernand Braudel calls the financialized stage of capitalism “a sign of autumn.” Maybe Eliot just got the date early. Then again, seven years after he published “The Waste Land,” the world economy evaporated. Mixing aesthetic and political-economic critique, deWaard’s work is a mind-bending contribution to whatever is left of public-humanities criticism. He emphasizes that “one of the fundamental questions of this project is to ask what autumnal culture looks like,” and concludes that “it looks a lot like hip hop, reflexive comedy, and branded blockbusters: texts that are entrepreneurial, speculative, and, above all, derivative.” More ominously, his conclusions are relevant to fields of cultural production, distribution, and consumption that he doesn’t have space for: Why is short, anti-intellectual Insta poetry most marketable now? Don’t ask critics or poets—ask Wall Street and what Kornbluh calls “algorithmic culture.” Kay Ryan is still alive, but most new readers prefer Rupi Kaur. “In the extremity of too late capitalism,” Kornbluh observes, “distance evaporates, thought ebbs, intensity gulps. Whatever. Like the meme says: get in, loser.” Both the culture industries and the artworks they disgorge are “derivative,” in the double sense. On the one hand, they are literally financialized; on the other, they are increasingly boring, toothless, and antiradical. Landlords are in control, breaking everything to bits and renting back to us at higher prices. And it is all worth a tremendous amount of money, at least to a few people, who are willing to kill the rest of us—and our cultures—to keep it. Derivative Media concludes with a bold-faced set of pragmatic, social-democratic ways to break the grip of finance. 
We could, for example, tax billionaires more, or fight like hell for unionization, or close the “carried interest” loophole that only benefits hedge-fund and private-equity managers, or actually enforce antitrust legislation that is already on the books. (Indeed, under the Biden administration, Lina Khan was doing that at the Federal Trade Commission.) We could, deWaard writes, have “a less capitalist, more democratic organization of society [that] could be modeled in how we collectively allocate culture, in both how we access media and the labor that goes into making it.” Of course, if we had the ability—as a politically functional society—to enact such reforms, we probably wouldn’t need them in the first place, and with Donald Trump resuming control of the White House, people like Lina Khan and possibilities like progressive tax reform are gone. What deWaard tentatively envisions will not happen. Things are probably going to get worse in art and media, because the python grip of capital is only getting stronger. We have what Kornbluh calls the “recycling of sopping content” to look forward to, with a few brilliant works financed by billionaires in the mix for the awards cycles. Fuck you, pay me. So, what’s on tonight?

GOATReads: History

The World Trade Center, by the Numbers

From the foundation to the elevators, everything about the Twin Towers was supersized. When the World Trade Center’s Twin Towers opened to the public in 1973, they were the tallest buildings in the world. Even before they became iconic features of the New York City skyline, they reflected America’s soaring ambition, innovation and technological prowess. The towers' eye-popping statistics amply illustrate that ambition: They rose a quarter-mile in the sky. They contained 15 miles of elevator shafts and nearly 44,000 windows—which took 20 days to wash. From the South Tower observation deck on a clear day, visitors could see 45 miles. The Trade Center complex was so big, it had its own zip code. But some of the same impressive architectural elements may have also helped worsen the tragedy on the fateful morning of September 11, 2001. Calling the project “the architecture of power,” Ada Louise Huxtable, an architecture critic for The New York Times, offered a prescient warning when the towers were going up in 1966: “The trade-center towers could be the start of a new skyscraper age or the biggest tombstones in the world,” she wrote. These facts and figures offer some perspective on the engineering and architectural feats that made the Twin Towers possible. Time to build: 14 years (from formal proposal to finish) David Rockefeller, grandson of the first billionaire in the U.S., had the idea to build a World Trade Center in the port district in Lower Manhattan in the 1950s. By 1960, city, state and business leaders came on board. The Port Authority of New York and New Jersey presented a formal proposal to the two states’ governors in 1961, then hired an architect and cleared 14 blocks of the city’s historic grid. They broke ground in 1966. Two or three stories went up weekly. The towers used 200,000 tons of steel and, according to the 9/11 Memorial & Museum, enough concrete to run a sidewalk between New York City and Washington, D.C. 
The ambitious project overcame community opposition, design and construction setbacks, attempted sabotage by New York real estate rivals and major engineering challenges to open its doors in April 1973 while still under construction. The towers were completed in 1975. Number of architectural design drafts: 105 After creating more than 100 design ideas with various combinations of buildings, architect Minoru Yamasaki’s team settled on a seven-building complex with a centerpiece of two identical 110-story towers. The towers' design featured a distinctive steel-cage exterior consisting of 59 narrowly spaced, slender steel columns per side. Cost to build: more than $1 billion According to The New York Times, the cost of building the towers ballooned to more than $1 billion, far beyond the original budget of $280 million. Project managers faced cost overruns as safety, wind and fire tests were conducted. And engineers embraced or created innovative construction techniques and new technologies to make the towers lighter and taller. Rentable floor space: about one acre per floor The Twin Towers’ innovative design, which placed structural load on the outside columns rather than inside pillars, facilitated the owners’ desire for a maximum amount of rentable space. With 10 million square feet of office space—more than Houston, Detroit or downtown Los Angeles had at the time, according to The New York Times—the World Trade Center came to be dubbed “a city within a city.” Depth of the Twin Towers’ foundation: 70 feet To build such tall towers on landfill that had piled up onto Lower Manhattan for centuries, the towers needed exceedingly strong foundations. So engineers dug a huge rectangular hole seven stories down into the soft soil to reach bedrock. 
Using a technique developed by Italian builders in the 1940s, the towers’ builders used slurry, a mud-type material lighter than soil, to dig a 70-foot-deep trench and keep the surrounding soil from collapsing as they poured in concrete to form three-foot-thick walls, like a waterproof “bathtub.” But it worked like a bathtub in reverse. It didn’t keep water in, but rather kept water from the Hudson River out—and away from the Trade Center complex. On 9/11, the crashing debris damaged the walls, but they mostly held up. If they hadn’t, engineers feared, the Hudson River would have flooded the city’s subway system and drowned thousands of commuters. Extra land created by building the WTC: 23 acres The 1.2 million cubic yards of soil dug up to build the “bathtub” were used to add 23 acres to Lower Manhattan—about a quarter of the area of a planned community of parks, apartment buildings, stores and restaurants nearby called Battery Park City that lines the Hudson River. Twin Towers' elevator speed: 1,600 feet per minute The Twin Towers had 198 elevators operating inside 15 miles of elevator shafts, and when they were installed, their motors were the largest in the world. The towers’ innovative elevator design mimicked the New York City subway, with express and local conveyances. That innovation lessened the amount of space the elevators took, leaving more rentable floor space. On 9/11, the towers’ elevator shafts became an efficient conduit for airplane fuel—and deadly fire. Wind speed the towers could sustain: 80 m.p.h. Engineers concluded in wind tunnel tests in 1964 that the towers could sustain a thrashing of 80-m.p.h. winds, the equivalent of a Category 1 hurricane. With this study, one of the first of its kind for a skyscraper, engineers tested how the towers’ innovative tubular structural design, lighter than the traditional masonry construction, would handle strong winds. 
But they also realized that in the winds coming off the harbor, the towers could sway as much as 10 feet, making office space potentially tough to rent. So the chief engineers developed viscoelastic dampers as part of the towers’ structural design. Some 11,000 of these shock absorbers were installed in each tower, diminishing the sway to about 12 inches side to side on windy days, according to the 9/11 Memorial & Museum. Number of sprinklers in the towers: 3,700 Two months after the release of the blockbuster movie The Towering Inferno, a three-alarm blaze in the North Tower in 1975 raised concerns that the Twin Towers had no sprinklers. That was common for skyscrapers at the time, and the Port Authority of New York and New Jersey, which owned the buildings, was exempt from the city’s fire safety codes. But facing pressure from state lawmakers and employees in the Center, Port Authority officials spent $45 million to install some 3,700 sprinklers in the two buildings during the 1980s. But the sprinklers failed when they were needed the most. On 9/11, the attacking planes snapped the water intake system upon impact, so they didn’t work. Height of the tightrope walk between the towers: 1,350 feet On the morning of August 7, 1974, French acrobat Philippe Petit walked the more than 130 feet between the Twin Towers on a high wire approximately one-quarter mile up in the air. Thousands of commuters stared up, gasping in amazement. Exuding confidence in his 45-minute show, the tightrope artist lay down on the wire, knelt on one knee, talked to seagulls and teased police officers waiting to arrest him. Using his 50-pound, 26-foot-long balancing pole, he crossed between the tallest buildings in the world eight times before stopping when it started to rain. Initially critiqued as a “white elephant,” the new towers had difficulty attracting tenants in the early years. 
Petit’s show, followed by a skydiver jumping off the North Tower and a toymaker climbing up the wall of the South Tower, began to turn that around, making the towers seem more human in scale and more accessible to New Yorkers and tourists. Force of tremor when the towers fell: akin to magnitude 2.1 and 2.3 earthquakes On September 11, 2001, seismologists at 13 stations in five states—including the farthest in Lisbon, New Hampshire, 266 miles away—found that the collapse of the South Tower at 9:59 a.m. generated a tremor comparable to that of a small earthquake registering 2.1 on the Richter scale. Measurements for the North Tower collapse half an hour later: 2.3 on the Richter scale.

The Big Bang’s big gaps

The current theory for the origin of the Universe is remarkably successful yet full of explanatory holes. Expect surprises Did the Universe have a beginning? Will it eventually come to an end? How did the Universe evolve into what we can see today: a ‘cosmic web’ of stars, galaxies, planets and, at least on one pale blue planet, what sometimes passes for intelligent life? Not so very long ago, these kinds of existential questions were judged to be without scientific answers. Yet scientists have found some answers, through more than a century of astronomical observations and theoretical developments that have been woven together to give us the Big Bang theory of cosmology. This extraordinary theory is supported by a wide range of astronomical evidence, is broadly accepted by the scientific community, and has (at least by name) become embedded in popular culture. We shouldn’t get too comfortable. Although it tells an altogether remarkable story, the current Big Bang theory leaves us with many unsatisfyingly unanswered questions, and recent astronomical observations threaten to undermine it completely. The Big Bang theory may very soon be in crisis. To understand why, it helps to appreciate that there is much more to the theory than the Big Bang itself. That the Universe must have had a historical beginning was an inevitable consequence of concluding that the space in it is expanding. In 1929, observations of distant galaxies by the American astronomer Edwin Hubble and his assistant Milton Humason had produced a remarkable result. The overwhelming majority of the galaxies they had studied are moving away from us, at speeds directly proportional to their distances. To get some sense of these speeds, imagine planet Earth making its annual pilgrimage around the Sun at a sedate orbital speed of about 30 kilometres per second. 
Hubble and Humason found galaxies moving away at tens of thousands of kilometres per second, representing significant fractions of the speed of light. Hubble’s speed-distance relation had been anticipated by the Belgian theorist Georges Lemaître a few years before and is today known as the Hubble-Lemaître law. The constant of proportionality between speed and distance is the Hubble constant, a measure of the rate at which the Universe is expanding. In truth, the galaxies are not actually moving away at such high speeds, and Earth occupies no special place at the centre of the Universe. The galaxies are being carried away by the expansion of the space that lies between us, much as two points drawn on a deflated balloon will move apart as the balloon is inflated. In a universe in which space is expanding, everything is being carried away from everything else. To get a handle on the distances of these galaxies, astronomers made use of so-called Cepheid variable stars as ‘standard candles’, cosmic lighthouses flashing on and off in the darkness that can tell us how far away they are. But in the late 1920s, these touchstone stars were poorly understood and the distances derived from them were greatly underestimated, leading scientists to overestimate the Hubble constant and the rate of expansion. It took astronomers 70 years to sort this out. But such problems were irrelevant to the principal conclusion. If space in the Universe is expanding, then extrapolation backwards in time using known physical laws and principles suggests there must have been a moment when the Universe was compressed to a point of extraordinarily high density and temperature, representing the fiery origin of everything: space, time, matter, and radiation. As far as we can tell, this occurred nearly 14 billion years ago. 
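The speed-distance relation at the heart of this story can be written as a one-line formula. The numerical value of the Hubble constant used below is a representative modern round figure, not one quoted in the article:

```latex
% Hubble-Lemaitre law: recession speed is proportional to distance
v = H_0 \, d
% Taking an assumed round value H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}},
% a galaxy 100 megaparsecs away recedes at roughly
v \approx 70 \times 100 = 7000\ \mathrm{km\,s^{-1}}
```

On this relation, the tens of thousands of kilometres per second that Hubble and Humason measured correspond to distances of hundreds of megaparsecs; their own Cepheid calibration, since it underestimated the distances, implied a Hubble constant several times larger than today’s value.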
In a BBC radio programme broadcast in 1949, the maverick British astronomer Fred Hoyle called this the ‘Big Bang’ theory. The name stuck. Of course, it’s not enough that the theory simply tells us when things got started. We demand more. We also expect the Big Bang theory to tell the story of our universe, to describe how the Universe evolved from its beginning, and how it came to grow into the cosmic web of stars and galaxies we see today. The theorists reduced this to a simple existential question: Why do stars and galaxies exist? To give a proper account, the Big Bang theory has itself evolved from its not-so-humble beginnings, picking up much-needed additional ingredients along the way, in a story almost as fascinating as the story of the Universe itself. The Big Bang theory is a theory of physical cosmology, constructed on foundations derived from solutions of Albert Einstein’s equations of general relativity – in essence, Einstein’s theory of gravity – applied to the whole universe. Einstein himself had set this ball rolling in 1917. At that time, he chose to fudge his own equations to obtain a solution describing an intellectually satisfying static, eternal universe. Ten years later, Lemaître rediscovered an alternative solution describing an expanding universe. Although Einstein rejected this as ‘quite abominable’, when confronted by the evidence presented by Hubble and Humason, he eventually recanted. Working together with the Dutch theorist Willem de Sitter, in 1932 Einstein presented a new formulation of his theory. In the Einstein-de Sitter universe, space is expanding, and the Universe is assumed to contain just enough matter to apply a gentle gravitational brake, ensuring that the expansion slows and eventually ceases after an infinite amount of time, or so far into the future as to be of no concern to us now. 
This ‘critical’ density of matter also ensures that space is ‘flat’ or Euclidean, which means that our familiar schoolroom geometry prevails: parallel lines never cross and the angles of a triangle add up to 180 degrees. Think of this another way. The critical density delivers a ‘Goldilocks’ universe, one that will eventually be just right for human habitation. By definition, the Einstein-de Sitter expanding universe is an early version of the Big Bang theory. It formed the basis for cosmological research for many decades. But problems begin as soon as we try to use the Einstein-de Sitter version to tell the story of our own universe. It just doesn’t work. Applying Einstein’s equations requires making a few assumptions. One of these, called the cosmological principle, assumes that on a large scale the Universe is homogeneous (the same everywhere) and isotropic (uniform in all directions). But if this were true of our universe in its very earliest moments following the Big Bang, matter would have been spread uniformly in all directions. This is a problem because if gravity pulled equally on all matter in all directions, then nothing would move and so no stars or galaxies could form. What the early universe needed was a little anisotropy, a sprinkling of regions of excess matter that would serve as cosmic ‘seeds’ for the formation of stars and galaxies. Such anisotropy could not be found in the Einstein-de Sitter universe. So where had it come from? Matters quickly got worse. Theorists realised that getting to the Universe we see from the Big Bang of the Einstein-de Sitter version demanded an extraordinary fine-tuning. If the immediate, post-Big Bang universe had expanded just a fraction faster or slower, then stars and galaxies would have never had a chance to form. This fine-tuning was traced to the critical or ‘Goldilocks’ density of matter. 
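The ‘Goldilocks’ critical density the story keeps returning to has a standard closed form in the equations of an expanding universe. The formula and the rough number below are textbook values, not figures taken from the article:

```latex
% Critical density for a spatially flat (Euclidean) universe
\rho_c = \frac{3 H_0^2}{8 \pi G}
% With an assumed H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}
% and G = 6.674 \times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}:
\rho_c \approx 9 \times 10^{-27}\ \mathrm{kg\,m^{-3}}
```

That works out to the mass of only about five hydrogen atoms per cubic metre of space, which makes the later bookkeeping vivid: the visible stars and galaxies supply only around 5 per cent of this total.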
Deviations from the critical density of just one part in 100 trillion – higher or lower – would have delivered universes very different from our own, in which there would be no intelligent life to bear witness. It got worse: theoretical studies of the formation of spiral galaxies and observational studies of the rotational motions of their stars led to another distinctly uncomfortable conclusion. Neither could be explained by taking account of all the matter that we can see. Calculations based only on the visible matter of stars suggested that, even if conditions allowing their formation could be met, spiral galaxies should still be physically impossible, and the patterns of rotation of the stars within them should look very different. To add insult to injury, when astronomers added up all the matter that could be identified in all the visible stars and galaxies, they found only about 5 per cent of the matter required for the critical density. Where was the rest of the Universe? There was clearly more to our universe than could be found in the Einstein-de Sitter version of the Big Bang theory. The solutions to some of these problems could be found only by looking back to the very beginning of the story of the Universe and, as this is not a moment that is accessible to astronomers, it fell once more to the theorists to figure out what might have happened. In the early 1980s, a group of theorists concluded that, in its very earliest instants, the post-Big Bang universe would have been small enough to be subject to random quantum fluctuations – temporary changes in the amount of energy present at specific locations in space, governed by Werner Heisenberg’s uncertainty principle. These fluctuations created tiny concentrations of excess matter in some places, leaving voids in others. These anisotropies would have then been imprinted on the larger universe by an insane burst of exponential expansion called cosmic inflation. 
In this way, the tiny concentrations of matter would grow to act as seeds from which stars and galaxies would later spring. To a certain extent, cosmic inflation also fixed some aspects of the fine-tuning problem. It was like a blunt instrument: no matter what conditions might have prevailed at the very beginning, cosmic inflation would have hammered the Universe into the desired shape. The theorists also reasoned that the hot, young universe would have behaved like a ball of electrically charged plasma, more fluid than gas. It would have contained matter stripped right back to its elementary constituents, assembling atomic nuclei and electrons only when temperatures had cooled sufficiently as a result of further expansion. They understood that there would have been a singular moment, just a few hundred thousand years after the Big Bang, when the temperature had dropped low enough to allow positively charged atomic nuclei (protons and helium nuclei) and negatively charged electrons to combine to form neutral hydrogen and helium atoms. This moment is called recombination. The light that would have danced back and forth between the charged particles in the ball of plasma was released, in all directions through space, and the Universe became transparent: literally, a ‘let there be light’ moment. Some of this light would have been visible, though there was obviously nobody around to see it. This is the oldest light in the Universe, known as the cosmic background radiation. This radiation would have cooled further as the Universe continued to expand: estimates in 1949 suggested it would possess today a temperature of about 5 degrees above absolute zero (or −268°C), corresponding to microwave and infrared radiation. This estimate was largely forgotten, only to be rediscovered in 1964. 
A year later, as physicists scrambled to build an apparatus to search for it, the American radio astronomers Arno Penzias and Robert Wilson found it by accident. This discovery changed everything. The cosmic background, witness to events that had occurred when the Universe was in its infancy, was getting ready to testify. The tiny concentrations of matter produced by quantum fluctuations and imprinted on the larger universe by cosmic inflation would have made the cosmic background very slightly hotter in some places compared with others. This left a pattern of temperature variations in the cosmic background across the sky, much like a bloody thumbprint at a cosmic crime scene. These small temperature variations were detected by an instrument aboard NASA’s Cosmic Background Explorer satellite, and were reported in 1992. George Smoot, who had led the project to detect them, struggled to find superlatives to convey the importance of the discovery. ‘If you’re religious,’ he said, ‘it’s like seeing God.’ The evidence was in. We owe our very existence to anisotropies in the distribution of matter created by quantum fluctuations in the early, post-Big Bang universe, impressed on the larger universe by cosmic inflation. But cosmic inflation could not fix the problems posed by the physics of galaxy formation and the rotations of stars, and it could not solve the problem posed by the missing density. Curiously, part of the solution had already been suggested by the irascible Swiss astronomer Fritz Zwicky in 1933. His efforts had been forgotten, only to be rediscovered in the 1970s. Galaxies are much larger than they appear, suggesting that there must exist a form of invisible matter that interacts only through its gravity. Zwicky had called it dunkle Materie: dark matter. Each spiral galaxy, including our own Milky Way, is shrouded in a halo of dark matter that was essential for its formation, and explains why stars in these galaxies rotate the way they do. 
This was an important step in the right direction, but it was not enough. Even with dark matter estimated to be five times more abundant in the Universe than ordinary visible matter, about 70 per cent of the Universe was still missing. Astronomers now had pieces of evidence from the very earliest moment in the history of the Universe, and from objects much later in this history. The cosmic background radiation is about 13.8 billion years old. But the light reaching us from nearby galaxies, whose distances can be measured using Cepheid variable stars, is much younger. We can get some sense of this by acknowledging that light does not travel from one place to another instantaneously. It takes time. It takes light eight minutes to reach us from the surface of the Sun, so we see the Sun as it appeared eight minutes ago, called a ‘look-back’ time. But the Cepheids are individual stars, so their use as standard candles is limited to nearby galaxies with short look-back times of hundreds of millions of years. To reconstruct the story of the Universe, astronomers somehow had to find a way to bridge the huge gulf between these points in its history. It is possible to study more distant galaxies but only by observing the totality of the light from all the stars contained within them. Astronomers realised that when an individual star explodes in a spectacular supernova it can light up an entire galaxy for a brief period, showing us where the galaxy is and how fast it is being carried away by expansion. Look-back times could be extended from hundreds to thousands of millions of years. A certain class of supernova offered itself as a standard candle, and the distances to their host galaxies could be calibrated by studying supernovae in nearby galaxies that possessed one or more Cepheid variable stars. The expectation was that, following the Big Bang, the rate of expansion of the Universe would have slowed over time, reaching the rate as we measure it today using the Hubble-Lemaître law. 
According to the Einstein-de Sitter version, it would continue to decelerate into the future, eventually coming to a halt. But when astronomers started using supernovae as standard candles in the late 1990s, what they discovered was truly astonishing. The rate of expansion is actually accelerating. Further data suggested that the post-Big Bang universe had indeed decelerated, but about 5 billion years ago this had flipped over to acceleration. In a hugely ironic twist, the fudge that Einstein had introduced in his equations in 1917 only to abandon in 1932 now had to be put back. Einstein had added an extra ‘cosmological term’ to his equations, governed by a ‘cosmological constant’, which imbues empty space with a mysterious energy. The only way to explain an accelerating expansion was to restore Einstein’s cosmological term to the Big Bang theory. The mysterious energy of empty space was called dark energy. In 1905, Einstein had demonstrated the equivalence of mass (m) and energy (E) through his equation E = mc², where c is the speed of light. It might come as no surprise to learn that when the critical density of matter is expressed instead as a critical density of mass-energy, dark energy accounts for the missing 70 per cent of the Universe. It may also seal the Universe’s ultimate fate. As the Universe continues to expand, more and more of it will disappear from view. And, as the Universe grows colder, the matter that remains in reach may be led inexorably to a ‘heat death’. How do we know? More answers could be found in the cosmic background radiation. The theorists had further reasoned that competition between gravity and the enormous pressure of radiation in the post-Big Bang ball of plasma would have triggered acoustic oscillations – sound waves – wherever there was an excess of matter. 
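To make the ‘missing 70 per cent’ concrete, the budget can be written as fractions Ω of the critical mass-energy density. The figures below are the widely quoted approximate values from the 2013-era modelling, supplied here as an illustration rather than taken from the article:

```latex
\Omega_{\Lambda} \approx 0.68 \ (\text{dark energy}), \quad
\Omega_{\text{dm}} \approx 0.27 \ (\text{dark matter}), \quad
\Omega_{\text{b}} \approx 0.05 \ (\text{visible matter}),
\qquad
\Omega_{\Lambda} + \Omega_{\text{dm}} + \Omega_{\text{b}} \approx 1
```

A total of one corresponds to a spatially flat universe, with visible matter only a few per cent of the whole.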
These would have been sound waves propagating at speeds of more than half the speed of light, so even if there had been someone around who could listen, these were sounds that could not have been heard. Nevertheless, I still like to think of this as a period when the Universe was singing. The acoustic oscillations left tell-tale imprints in the temperature of the cosmic background, and in the large-scale distribution of galaxies across the Universe. These imprints cannot be modelled without first assuming a specific cosmology, in this case the Big Bang theory including dark matter and dark energy. Modelling results reported in 2013 tell us what kind of universe we live in – its total density of matter and energy, the shape of space, the nature and density of dark matter, the value of Einstein’s cosmological constant (and hence the density of dark energy), the density of visible matter, and the rate of expansion today (the Hubble constant). This is how we know. But the story is not over yet. Astronomers continued to sharpen their understanding of the history of the Universe through further studies of Cepheids and supernovae using the Hubble Space Telescope. Because these are studies based on the use of standard candles to measure speeds and distances, they provide measurements of the Hubble constant and the rate of expansion later in the Universe’s history that do not require the presumption of a specific cosmology. The Hubble constant and rate of expansion deduced from analysis of the acoustic oscillations are necessarily model-dependent predictions, as they are derived from events much earlier in the Universe’s history. For a time, prediction and measurement were in good agreement, and the Big Bang theory looked robust. Then, from about 2010, things started to go wrong again. As the precision of the observations improved, the predictions and the measurements went separate ways. The difference is small but appears to be significant. It is called the Hubble tension. 
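Put roughly in numbers (approximate published values, added here for illustration; the article itself quotes none): the expansion rate follows the Hubble-Lemaître law, and the two routes to the constant disagree:

```latex
v = H_0 \, d, \qquad
H_0^{\text{early}} \approx 67\ \text{km/s/Mpc (CMB-based prediction)}, \qquad
H_0^{\text{late}} \approx 73\ \text{km/s/Mpc (Cepheids and supernovae)}
```

A gap of a few km/s/Mpc sounds small, but it exceeds the stated uncertainties of both methods, which is why it is described as a tension rather than mere noise.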
The Universe appears to be expanding a little faster than we would predict by modelling the acoustic oscillations it experienced in infancy. Imagine constructing a bridge spanning the age of the Universe, begun simultaneously on both ‘early’ and ‘late’ sides of the divide. Foundations, piers and bridge supports have been completed, but the engineers have now discovered to their dismay that the two sides do not quite meet in the middle. Matters have been complicated by the development of different kinds of standard candle that are a little more straightforward to analyse than the Cepheids, and rival teams of astronomers are currently debating the details. We should know in another couple of years if the tension is real. And if it is real, then one way to fix it is to tweak the Big Bang theory yet again by supposing that dark energy has weakened over time, equivalent to supposing that Einstein’s cosmological constant is not, in fact, constant. Some tentative evidence for this was published in March this year. And there is yet more trouble ahead. The James Webb Space Telescope, launched on Christmas Day in 2021, can see galaxies with look-back times of more than 13 billion years, reaching back to a time just a few hundred million years after the Big Bang. Our understanding of the physics based on the current theory suggests that, at these look-back times, we might expect to see the first stars and galaxies in the process of forming. But the telescope is instead seeing already fully formed galaxies and clusters of galaxies. It is too soon to tell if this is a crisis, but there are grounds for considerable uneasiness. Some cosmologists have had enough. The Big Bang theory relies heavily on several concepts for which, despite much effort over the past 20 to 30 years, we have secured no additional empirical evidence beyond the basic need for them. The theory is remarkably successful yet full of explanatory holes. 
Cosmic inflation, dark matter and dark energy are all needed, but all come with serious caveats and doubts. Imagine trying to explain the (human) history of the 20th century in terms of the societal forces of fascism and communism, without being able to explain what these terms mean: without really knowing what they are, fundamentally. In an open letter published in New Scientist magazine in 2004, a group of renegade cosmologists declared: ‘In no other field of physics would this continual recourse to new hypothetical objects be accepted as a way of bridging the gap between theory and observation. It would, at the least, raise serious questions about the validity of the underlying theory.’ This is simply the scientific enterprise at work. Answers to some of our deepest questions about the Universe and our place in it can sometimes appear frustratingly incomplete. There is no denying that, for all its faults, the present Big Bang theory continues to dominate the science of cosmology, for good reasons. But the lessons from history warn against becoming too comfortable. There is undoubtedly more to discover about the story of our Universe. There will be more surprises. The challenges are, as always, to retain a sense of humility in the face of an inscrutable universe, and to keep an open mind. As Einstein once put it: ‘The truth of a theory can never be proven, for one never knows if future experience will contradict its conclusions.’

GOATReads: Literature

Mute Compulsion

Alex needs a place to stay, just for a few days. After that, she plans to appear at a party held at the seaside mansion of her former love, Simon. What she wants is to move back into the mansion; to that end, she hopes that Simon will reunite with her. Alex’s interest in Simon is obsessive, even desperate. And yet, there is nothing in Emma Cline’s 2023 novel The Guest to suggest that Alex’s fixation is any deeper than needing a place to stay, for as long as she can manage. Nor is it ever clarified if she considers herself a sex worker. There is a lot about her that we don’t know. Reviews of the novel on sites like Goodreads and Reddit will mention Alex’s flat characterization—her seeming lack of depth or backstory, her absence of introspection, her surface-level thinking. This opacity, however, is a deliberate strategy of Cline’s. After all, it doesn’t quite matter how Alex thinks about herself. Instead, the novel focuses on how sheer material compulsion means that she is forced to subsume her desires to Simon’s: to try to please him, to look how he wants her to look, to act how he wants her to act. She needs to do everything right, at risk of having nothing. In crafting this opacity, Cline is resisting the “trauma plot”: a form of expression of character—as Parul Sehgal has described it—that has become increasingly common in contemporary fiction. The commercial success of stories of personal suffering, Sehgal argues, has “elevated trauma from a sign of moral defect to a source of moral authority, even a kind of expertise.” By contrast, Cline’s Alex is a figure utterly devoid of authority. Her only expertise is in the hard work of figuring out how to survive, despite having no money and no place to live. Now let’s go backwards in the timeline of the novel’s author, Emma Cline. It’s 2017. And Cline’s ex-boyfriend is suing her. 
In the lawsuit, Chaz Reetz-Laiolo alleges that Cline plagiarized from his unpublished writing and used the material in her first novel, The Girls. Narrated by a woman who had, in her youth, been caught up with the Manson Family, The Girls was a splashy book when it was released in 2016. In a three-book deal, Random House paid Cline a $2 million advance for it. In his suit against Cline, Reetz-Laiolo was represented by the law firm Boies Schiller Flexner. This was the same law firm that represented Harvey Weinstein, the notorious Hollywood producer, as he was fighting sexual harassment and assault allegations. Weinstein also hired private investigators to construct “dossiers” about the women whom he thought would expose him. These dossiers were meant to shame them: with details of their alleged sexual histories, for example, or pictures and messages showing how they continued to be friendly with Weinstein after he abused them. His lawyer, David Boies, knew about these dossiers and went along with the plan. In 2017, Boies sent Cline a draft of Reetz-Laiolo’s complaint, saying that he planned to file it in court if she didn’t agree to a settlement. This draft included the same kind of dossier that Weinstein was even then employing. Titled “Cline’s History of Manipulating Older Men,” it featured details of her ostensible sexual past, including her private text messages and photos. This was to be used as evidence to corroborate Reetz-Laiolo’s claim that Cline was not, as the document read, “the innocent and inexperienced naïf she portrayed herself to be.” She was instead often prone to manipulating men to her benefit, extracting gifts and money. They were, basically, threatening to use this information to discredit Cline for the jury. The “Cline’s History” section was later removed from the filing. 
The New York Times and The New Yorker had just published articles about the allegations against Harvey Weinstein, and another piece about Weinstein’s hiring of private investigators was about to appear. Moreover, Cline’s lawyers included Carrie Goldberg, who has represented many victims of harassment, sexual shaming online, and revenge porn. Cline claimed that her ex had been abusive throughout their relationship; the New York Times reported that he was violently jealous, and that when Cline sold The Girls to Random House, Reetz-Laiolo threatened her again. Given Cline’s new high status, he warned, people “might be interested in naked photos of her,” or maybe they would want to read a “tell-all article about their relationship.” In her wanderings through high-priced real estate, Alex is often aligned in the novel with the other household employees she encounters, although they are busy with their respective areas of work while she is mainly loafing about and observing them and other house guests. She notes the way people will try to dig into the staff’s backstories, to demonstrate “how comfortable they were fraternizing” with them. This fraternizing becomes another service that is expected of people who are already working, like when Stevens in Kazuo Ishiguro’s The Remains of the Day (1989) worries over how to “banter” with his new American boss because he wants to please him. “She had experienced her own version of it,” we read of Alex, reflecting on the guests who demanded to fraternize with the staff: “the men who asked her endless questions about herself, faces composed in self-conscious empathy. Waiting with badly suppressed titillation for her to offer up some buried trauma.” The revelation of one’s inner life, in The Guest, is simply one more bit of compliance that people might extract from her. 
Tell us your story: Make it traumatic, so we can feel good about your employment here, your service to us, the little bit of money you are making doing what we ask of you in this gorgeous home. Her story would be a form of value added, amplifying their enjoyment, elevating their transactions with her by enabling them to believe that if Alex is “bad” in some way, she is nevertheless wounded, and so deserving of their interest—taking her out for dinner, giving her a place to stay for the night. Having sex with her would then be more than just self-serving: it would be an act of rightful charity on their part. And yet, this revelation is additional work that Alex blankly refuses. She provides them nothing with which to cover over the basic fact of their power over her: power that they pay for with money. The economic relations are left simply to stand. In 2018, a judge dismissed the plagiarism case against Cline, the one brought by her ex-boyfriend and his lawyers who were simultaneously defending Harvey Weinstein. Two years later, in 2020, Cline published a short story in the New Yorker, called “White Noise.” The point of view is Weinstein’s. We find Weinstein at home preparing to appear in court the following day, and his wandering thoughts delve into his own cunning deployment of the trauma plot. His lawyers counsel him to dress raggedly and use a walker for court performance, so as to extract sympathy. This advice brings to his mind other things Weinstein has said when trying to elicit a woman’s submissiveness: “my mother died today, he said, watching the girl’s face change.” Meanwhile, next door, a new neighbor has moved in: American writer Don DeLillo. It occurs to Weinstein that he should produce an adaptation of the novel White Noise. This will restore him to his rightful social status, he believes. 
He doesn’t know DeLillo’s work at all well: he mistakes the first line of Thomas Pynchon’s Gravity’s Rainbow, “A screaming comes across the sky,” for the opening of White Noise. The line appeals to him because it is about what he thinks of as a “rending of the known world,” and this is how he understands the case against him, and the moment it expresses: a screaming across the sky, a rending. Weinstein hopes his moment of crisis may be repaired through the lionization of another male creative—through the patrilineage connecting one great man to the next—fortified by his production of the White Noise film. He even imagines that, despite the charges against him, his granddaughter will love him, because she will be in his debt when she gets to intern on the film set, and inevitably DeLillo becomes a friend and writes her a college recommendation letter. Thinking about all this future promise, Weinstein texts a friend, “we as a nation are hungry 4 meaning.” Surely his own trial is the best evidence of that. Let’s return then to The Guest. We have left Alex refusing to tell any sad stories and finding herself dependent on the whims of wealthy, powerful men for life’s basic necessities. Alex is “a reluctant reader of her own self,” according to Jane Hu. But, by contrast, I think that nothing like a reading of her “self” is relevant to the struggle Alex faces in making it through each single day. Having a legible self is just another responsibility to others that she can’t afford. Hu argues also that “The Guest largely remains at the level of mere forms, rarely venturing to probe what might be troubling the waters beneath such glistening stillness.” Yet the novel is, rather, full of images of things emerging from beneath the surface. And this is more and more true as it progresses. As Alex runs out of resources, she gets more desperate and less put together. 
Pools get muddier, people’s faces get more wary in her presence, worries start to trickle up, and she is always waiting for a man she stole money from to find her and hurt her. This gradual oozing up metaphorizes the whole reality beneath the novel’s apparent surface: first, the fact of the tremendous wealth of everyone around Alex; second, the way she is controlled by and kept out of that wealth’s orbit, even as she passes through posh homes and luxury vehicles. Cline’s writing of the novel was inspired, in part, by John Cheever’s short story, “The Swimmer.” Here, a man sets out on a swimming tour of the neighborhood, going from party to party, toward a home where it turns out that he is not wanted. He is passing through a landscape of wealth from which he is excluded ultimately: “Oh, how bonny and lush were the banks of the Lucinda River!,” he thinks. “Prosperous men and women gathered by the sapphire colored waters while caterer’s men in white coats passed them cold gin.” This is precisely Simon’s world. When Alex shows up at his party in the novel’s climactic scene, she becomes a figure of the oozing return of the repressed herself. Far from manifesting a “glistening stillness,” she is its very interruption, destroying the illusion by her sheer presence: messy and tired, seeking a reconciliation with Simon that we know is not coming. We are waiting for some description of the horrified look on his face. Waiting for his reaction to seeing her on his property, uninvited and unwelcome, having broken out of her expected role as a woman subservient to his whims and oriented only by his needs. The quintessential character of today’s trauma novel, according to Sehgal, is “withholding, giving off a fragrance of unspecified damage,” at least at first. 
She is “Stalled, confusing to others, prone to sudden silences and jumpy responsiveness.” We sense constantly that something “gnaws at her, keeps her solitary and opaque, until there’s a sudden rip in her composure and her history comes spilling out.” This withholding, stalled figure in The Guest is Alex. And, indeed, the novel plays with the tension of us waiting for a moment of dramatic revelation. Here she is in the novel’s final scene, smiling in Simon’s direction, wishfully thinking that “Everything had turned out fine.” But he doesn’t come over to her. Alex thinks, instead, “this was all wrong”—“his eyes seem to look at something beyond her.” The novel concludes with them in this frozen diorama. No sudden rip. No spilling out. No revelations and reconciliations. He is looking right past her. He couldn’t care less about her trauma plot. The Guest is an instance, then, of what Christina Fogarasi has described as the “anti-trauma trauma novel,” in which trauma as a form of narrative “prosthesis” is refused, precisely because it “abstains from mentioning the systemic forces undergirding” anyone’s suffering. Who is Alex? How did she come to be here? The novel’s refusal to answer these questions is a way, too, of refusing the authority of the Weinstein-style “dossier,” which can exculpate or shame, excuse or condemn. Instead, all that Cline leaves on display is the sheer fact of economic domination: the “mute compulsion of economic relations,” as Marx famously put it, which “seals the domination of the capitalist over the worker.” Cline pinpoints the stark truth of this domination within a contemporary landscape of unspecified, informal sex work, on the fringes of a society of spectacularly wealthy asset holders. 
In other words, Cline pinpoints a landscape not unlike the creative industries: where women often find themselves doing what they can to attract and sustain the attention of people like Weinstein, who have the power to make careers for them or let them sink into oblivion. The trauma plot and the slut-shaming dossier are actually parallel formations, reveals The Guest. They are both formations that deliberately look away from material reality—the determining force of the law of capital in shaping what a woman is willing to do for a man—and, instead, locate particular compulsions and proclivities in a woman’s traumatic back story, compromised morality, and history of intimate entanglements. What Weinstein’s case made so clear—as did Jian Ghomeshi’s in Canada—is the weaponization of the personal story (including the plunge into traumatic interiority) in the busy activity of figuring out how a woman really felt about a man after he did what he did, not just what affect she performed but how she really felt, in her heart of hearts. This is all deployed to disguise and excuse the actual domination that compels people to do horrible things, like maintain relationships with evil men, and that compels people even to feel shitty feelings, like gratitude toward these demons, or sympathy, or—dare I say—love.

Society needs hope

Youths around the world are in a profound crisis of despair. Adults must help them to believe that the future will be better

Young people around the world are experiencing an unprecedented crisis of unhappiness and poor mental health. Many observers blame the expansion of social media that began in 2012-13, as well as the long-term negative effects of the COVID-19 pandemic on the social lives of the young, and no doubt those things have exacerbated the decline in mental health. But the causes of the current crisis run deeper. They have to do with the increasingly uncertain futures that the young face due to the changing nature of jobs and the more complex skill sets required to succeed in them; extreme political polarisation and misinformation; an erosion of global norms of peace and cooperation; the uncertainties posed by climate change; and the decline in traditional civil society organisations – such as labour unions and church groups. Meanwhile, families play a bigger role in providing financial and social support in poor and middle-income countries than in rich ones, serving as a buffer in the face of this perfect storm of trends. There are many ways in which this crisis of unhappiness expresses itself. One is the recent disappearance of a long-established U-shaped curve in the relationship between age and happiness. Until recently, the nadir or low point was in the mid-life years, and both the young and the old had higher levels of happiness and other dimensions of wellbeing. This relationship held in most countries around the world, except for those that are extremely poor, have high levels of political violence, or both. Yet, since 2020, the relationship has become a linear upward trend in many countries in North America and Europe – and several in Latin America and Africa as well as Australia. This means that the least-happy group in these countries is now the young (those aged 18-34) and the happiest are those over the age of 55. 
A more extreme manifestation has been the increase in suicides, rise in reported anxiety and depression, and ‘epidemic’ levels of loneliness among the young, particularly in the United States and the United Kingdom. The US already has a crisis of ‘deaths of despair’; first identified as a problem of middle age by the economists Anne Case and Angus Deaton in 2015, such premature deaths due to suicide, drug overdoses and alcohol and other poisonings are now being seen in greater numbers in the young, especially those Americans between the ages of 18-25. Youth unhappiness trends are particularly extreme in the US, in part due to its much more limited social support system for those who fall behind, the exorbitant costs of higher education and healthcare, and very high levels of gun violence – including in schools. As a result, there is a large and growing mortality gap between Americans with and without college degrees. Those with degrees live eight more years, on average, than those without. These are potentially overwhelming challenges for young people to navigate. This crisis matters because of the human costs, such as reduced longevity and significant gaps in quality of life, because those with mental illness are much less likely to complete higher education, and more likely to be in poor health and experience homelessness and other kinds of deprivation. They are also less likely to be in stable jobs and/or long-term relationships. Yet it also has deeper and more far-reaching implications as it reflects a lack of hope for the future among an entire generation in many countries, suggestive of a broad systemic failure that we do not fully understand. My research – and that of some others – shows that hope is a key factor in health and longevity, productivity, educational attainment and stable social relationships, among other things. 
The reason hope is so important, even more than current levels of wellbeing, is that integral to hope (as opposed to optimism) is having agency and potential pathways to a better future. Psychiatrists, for example, while they don’t provide examples of how to restore hope, note that it is the critical first step to recovering from mental illness. Individuals with hope are more likely to believe in their futures and to invest in them, as well as to avoid risky behaviours that jeopardise them. In contrast, people in despair have reached a point where they literally do not care whether they live or die. Several studies find direct or indirect linkages between despair and other mental health disorders and misinformation and radicalisation, although there are also some studies that dispute the claim. While these manifestations are particularly extreme in the US, many other countries – especially in Europe – are also experiencing them. While I have worked on wellbeing for many years now, I began my career as a development economist. I was born in Peru, and from an early age was exposed to the long reach of poverty because of the research my father (a paediatrician at Johns Hopkins in Baltimore) did on infant malnutrition. He found that, with the right diagnoses and treatments, like addressing inadequate levels of key minerals such as copper and zinc, severely malnourished infants could recover and have healthy lives without cognitive and other kinds of impairment. At the time, neither knowledge of nor treatment for these deficits was widely available. My early exposure to those issues made me committed to better understanding the effects that poverty and inequality have on people’s lives and health. Development economics seemed to be the best tool kit for doing so. But I also became increasingly curious about the psychology involved, such as why and how people living in extreme deprivation could be so optimistic, generous and resourceful, as they are in Peru. 
I did my PhD dissertation on the coping strategies of the poor in the context of hyper-inflation and Shining Path terrorism in late-1980s Peru. What stood out was how resilient the poor population was but also how sophisticated in navigating incredibly difficult economic circumstances – such as exchanging their local wages into dollars overnight to prevent them from being quickly eroded by inflation. They were also learning what foods they could afford that were also healthy for their children. I still remember thinking that it was unlikely that consumers in the US, who have never faced such challenges, would have been able to navigate them with the same dexterity and optimism. And even now, more than 30 years later, my most recent surveys among low-income adolescents in both Peru and in Missouri find a sharp contrast between the high hopes and aspirations of Peruvian adolescents for education advancement and the very low hope of low-income American adolescents, especially white ones. Low-income minority communities in the US are much more hopeful in general, and value education as a pathway, while the latter has eroded among low-income white groups. Remarkably, the parents of my white respondents did not support them attending college, while the minority parents and other members of their communities strongly supported their young people seeking higher education. The visionary school superintendent I worked with to launch the Missouri project, Art McCoy, was a living example of how community support – in this case, in schools – could dramatically change the life trajectories of adolescents from minority communities (I will return to McCoy later). I was incredibly lucky, as a young scholar at the Brookings Institution in Washington, DC right after my PhD, to be in an environment where the economists were interested in my explorations into the psychology of poverty. 
The results of the survey research I was doing among poor populations in Peru directly challenged traditional economic theory. I found that the most upwardly mobile respondents in my sample were the most pessimistic in their assessments of their past economic progress, while poorer respondents were more positive in their assessments. We had objective data on how these respondents had fared in terms of income gains and losses, and we were able to confirm that there was no sampling error or bias in our survey questions. Thus, the question remained, why? Was it rising expectations? Loss aversion given the context of rapid but unstable growth? Newly acquired knowledge of how much more income the wealthy still had, despite their own upward progress? Character traits? It turns out it was a combination of all these things. Again, I was incredibly lucky to be in an environment where prestigious economists such as Henry Aaron, George Akerlof and Alice Rivlin – and Daniel Kahneman, the first psychologist to receive the Nobel Prize in economics – encouraged me to explore more, including using tools from other disciplines beyond economics. As a result, I got to know a small but incredibly talented group of economists, such as Richard Easterlin, Andrew Oswald and Angus Deaton, and top psychologists, such as Kahneman and Ed Diener, who were collaborating in combining the qualitative survey approach of psychologists with the econometric, maths-based techniques of economists. There was a great deal of scepticism early on from economists – indeed, many of them thought we were nuts! Yet the many puzzles we uncovered and the increasing use of the approach by a range of scholars ultimately resulted in it developing into a new science of wellbeing measurement. The new metrics added a great deal to our understanding by incorporating human emotions, aspirations and character traits into how we think about, analyse and model economic behaviours. 
By now, it has become almost mainstream to do so. Indeed, many governments around the world, led by the UK’s early effort in 2010 (in which I was lucky to play a small role) have begun to incorporate wellbeing metrics in their official statistics as a complement to traditional income-based data and as a tool in policy design and assessment, such as in cost-benefit analysis or health policy and environmental policy innovations. The OECD now has guidelines for best practices for national statistics offices around the world that want to utilise the metrics. (Unfortunately, despite having similar recommendations for how to do this in the US from a National Academy of Sciences panel – on which I participated – the US did not follow suit.) Most recently, the United Nations formed a commission to develop and recommend indicators of progress to complement GDP that can be adopted by countries around the world. The field has evolved from analysing the determinants of happiness and other dimensions of wellbeing (income matters but other things such as health, friendships, and meaning and purpose in life matter even more) to exploring what impact increases in wellbeing have on individuals and society as a whole. We consistently find that higher levels of wellbeing result in longer lives, better health, more productivity in the labour market, and more stable long-term relationships. Hope, meanwhile, is more important in determining those outcomes than is current happiness or life satisfaction. Hope is not just the belief that things will get better – that is optimism – but the conviction that individuals have the agency to make their lives better. During this intellectual journey, I increasingly began observing large contrasts between the hope and optimism of the poor in Latin America and the deep despair among the poor and near-poor in the US. My empirical surveys, based on Gallup data, confirmed those gaps. 
At the same time, Case and Deaton released their first paper on deaths of despair in the US. As I explored across race and income cohorts in the US, I found that it was low-income white people who had the lowest levels of hope and were also the cohort most represented in the deaths. I realised that the metrics we had developed could serve as warning indicators of populations at risk. While people with hope, meaning and agency have better outcomes, those in despair lack these emotions and traits and have lost their life narratives. Because of that, they are vulnerable to risky behaviours that jeopardise their futures; they typically do not respond to incentives; and they are more vulnerable to misinformation and related conspiracy theories that can fill the vacuum. I hope that we can now use the metrics as one of the tools we need to solve the mental health crisis among our young people. As I noted earlier, the causes of the youth mental health crisis – in the US and beyond – are deep and far-reaching, and defy simple solutions. Yet if we highlight one of the key drivers of the crisis – the deep uncertainty the youth of today have about their futures and their ability to get jobs that will enable them to support families and have a reasonable quality of life – there is little doubt that education is a critical part of the solution. And central to that are the innovations that make education more accessible for those with limited means and resources, and train them for the new and complex skills required by the labour markets of tomorrow. To make education more accessible and relevant, we need to rethink both how it is delivered, so that students are supported and mentored, and how they can acquire skills that are not typically developed in secondary-school curricula. There is no one tried-and-true recipe for success, not least because education has a context-specific element that must be tailored to the populations and communities in which it is delivered.
As such, it is critical to involve key stakeholders and the consumers of education (eg, parents, students and communities) when implementing innovations, as changes in the way children are educated rarely succeed without their support. A related and even more important lesson is the value of mentorship, particularly in less-privileged contexts, where students may be the first in their families to attend post-high-school education or where parents are absent for a variety of reasons, such as familial trauma or overstretched work schedules. My research finds that mentors – whether in schools, families or communities – are critical to guiding students as they make difficult decisions about investing in their education with limited information and, perhaps even more important, to sustaining their aspirations and goals in the face of negative shocks or other challenges. Seeing the important role of hope and aspirations in driving the efforts of the poor in Peru to pursue better lives is what inspired me to study emotions and wellbeing in economics and to focus now on the youth mental health crisis through that lens. Research also shows the important role of skills that are not usually part of high-school curricula but which expose students to things that can help them succeed in the labour market, such as financial literacy, self-esteem and communication skills. The Youthful Savings programme, based in New York and Santa Monica, teaches high-schoolers financial literacy, equitable and ethical business practices, and the importance of paying attention to mental wellbeing. Mentors are an important part of the programme’s successful record. An example of its success in helping young people is Jose Santana, a high-school student from the Bronx and the son of Dominican immigrants.
He was not planning on college but was able to go with support from Youthful Savings. He credits the skills the programme gave him but, more importantly, the mentorship he received from Somya Munjal, a young entrepreneur of Indian origin and the founder of Youthful Savings, which inspired him to continue his education to increase his chances of becoming a successful entrepreneur himself. Munjal, too, had to struggle to finance her own education, but became increasingly motivated to succeed as she found meaning and purpose in helping others along the way. Santana is only one of the many young people Munjal has inspired. Another example of exposing youth to skills that they can use in the labour market is debating clubs in high schools. Debating requires good communication skills, the ability to listen to opponents, and the capacity to ground one’s arguments in fact-based reasoning through calm and civil discourse. These are precisely the skills being eroded in today’s polarised and acrimonious environment. The renaissance of debating clubs in several Chicago public high schools has improved academic achievement and increased the hope, agency and engagement of the participating students, according to Robert Litan’s book Resolved (2020). Meanwhile, research by Rebecca Winthrop and Jenny Anderson for their book The Disengaged Teen (2025) confirms the importance of supporting students in developing skills beyond the usual academic curricula, such as creativity, exploration and agency. They categorise students as ‘resisters’, ‘passengers’, ‘achievers’ and ‘explorers’ – the last being the most engaged of all the groups. In the UK, the #BeeWell programme in Greater Manchester conducts annual surveys that measure key wellbeing indicators and makes recommendations to communities based on its findings.
Evaluations three years after the programme was implemented in 2019 across 192 schools throughout the district showed that significant action was taken in communities thanks to the findings of the programme, which are released privately to schools and publicly by neighbourhood. Meanwhile, community colleges around the US are playing an increasingly important role in helping low-income students attend and complete college. A leading-edge example is Macomb Community College (MCC) in Detroit, which in addition to its own faculty and curriculum provides a hub where local colleges and universities, including Michigan State University, offer courses so that students can go on to complete four-year degrees – an option that is invaluable for low-income students who often work and have families in the county, and cannot afford to move and pay room and board elsewhere. Approximately 63 per cent of the students who transfer from MCC go on to complete a four-year degree. MCC’s curricular innovations are paired with dedicated mentorship for each student who attends the college, as well as a programme that encourages civic discourse and attendance at lectures by outside speakers. This is unusual in a county with as divided a population as Macomb’s, which combines autoworkers, newly arrived immigrants and a long-standing but historically discriminated-against African-American population. The college’s president emeritus Jim Jacobs told me that one objective is to show MCC students that they can thrive in the workforce without moving out of their county. Another programme aimed at inspiring low-income young adults is the micro-grants programme founded by Julie Rusk, the co-founder of Civic Wellbeing Partners in California. The programme provides small grants to low-income people, usually of minority origin, to initiate new entrepreneurial activities in Santa Monica.
Most of the grantees are young, and the activities – which range from the arts to culinary initiatives to bike-repair shops – benefit low-income communities in the city and have positive effects on the hopes and aspirations of the grant recipients. One reason for this success is the dedication of Rusk and her team to ensuring that their participants have the support they need to succeed. I close with the profile of Art McCoy, the amazing former school superintendent of the primarily African-American Jennings school district in Saint Louis, Missouri. When McCoy took over the superintendency, Jennings had one of the worst completion records of Missouri’s high schools. It then achieved an impressive 100 per cent graduation, career and college placement rate, significantly higher than the rates of other public high schools in the area. While McCoy’s success is no doubt due to many complex factors, it is evident that a critical component was his ability to inspire students to invest in and succeed in their educational aspirations, with seemingly limitless energy and hope. McCoy has an unparalleled ability to communicate with a wide range of audiences, including but not only students, with genuine empathy, compassion and humour, while encouraging effort, initiative and persistence. Working with McCoy also showed me that dedicated mentorship can transform a failing school district into a cohesive community that provides support and inspiration to take on daunting challenges. I have never ended a conversation with him (and there have been many) without a restored sense of hope, even during our divided and difficult times. He is now active in supporting young adults in deprived parts of Saint Louis to embrace entrepreneurship and independence as they begin their post-education careers.
At a time when young people are suffering from doubts and anxiety about what their futures hold, it is people like Art McCoy, Somya Munjal and Julie Rusk who help them believe in themselves and in their ability to overcome the challenges they face. While these challenges are complex and daunting, and we do not have solutions for many of them, we do know that young people crippled by despair and anxiety are far more likely to withdraw from society. Restoring hope is not a guaranteed solution, but it is a critical first step. That is a lesson I learned early in my career from the poor people of Peru, and it still holds today in a very different context and time.